class: center, middle

# Comparing Many Means: ANOVA and the Linear Model

![:scale 75%](./images/19/comparison-of-means-anova-you-must-do.jpg)

---
class: center, middle

# Etherpad
<br><br>
<center><h3>https://etherpad.wikimedia.org/p/607-anova-2020</h3></center>

---
# ANOVA Adventure!

1. What are models with categorical predictors?
2. Assumptions of models with categorical predictors
3. Evaluating fit models
4. How predictive is a model?
5. How different are groups?

---
# Categorical Predictors: Gene Expression and Mental Disorders

.pull-left[
![](./images/19/neuron_label.jpeg)
]

.pull-right[
![](./images/19/myelin.jpeg)
]

---
# The data

<img src="anova_1_files/figure-html/boxplot-1.png" style="display: block; margin: auto;" />

---
# Traditional Way to Think About Categories

<img src="anova_1_files/figure-html/meansplot-1.png" style="display: block; margin: auto;" />

What is the variance between groups v. within groups?

---
# But What is the Underlying Model?

<img src="anova_1_files/figure-html/brainGene_points-1.png" style="display: block; margin: auto;" />

---
# But What is the Underlying Model?

<img src="anova_1_files/figure-html/brainGene_points_fit-1.png" style="display: block; margin: auto;" />

--
Underlying linear model with control = intercept, dummy variable for bipolar

---
# But What is the Underlying Model?

<img src="anova_1_files/figure-html/brainGene_points_fit1-1.png" style="display: block; margin: auto;" />

Underlying linear model with control = intercept, dummy variable for bipolar

---
# But What is the Underlying Model?

<img src="anova_1_files/figure-html/brainGene_points_fit_2-1.png" style="display: block; margin: auto;" />

Underlying linear model with control = intercept, dummy variable for schizo

---
# But What is the Underlying Model?
<img src="anova_1_files/figure-html/ctl_schizo-1.png" style="display: block; margin: auto;" />

Underlying linear model with control = intercept, dummy variable for schizo

---
# Different Ways to Write a Categorical Model Express Different Questions (but are the same)

1) `\(y_{ij} = \bar{y} + (\bar{y}_{j} - \bar{y}) + ({y}_{ij} - \bar{y}_{j})\)`
<br><br>

--
2) `\(y_{ij} = \alpha_{j} + \epsilon_{ij}\)`

`\(\epsilon_{ij} \sim N(0, \sigma^{2} )\)`
<br><br>

--
3) `\(y_{ij} = \beta_{0} + \sum \beta_{j}x_{j} + \epsilon_{ij}\)`

`\(x_{j} = 0,1\)`

---
# Partitioning Model: ANOVA Reified

`$$\large y_{ij} = \bar{y} + (\bar{y}_{j} - \bar{y}) + ({y}_{ij} - \bar{y}_{j})$$`

- i = replicate, j = group
- Shows partitioning of variation
- Corresponds to F test
- Between group v. within group variation
- Consider `\(\bar{y}\)` an intercept, deviations from intercept by treatment, and residuals

---
# Means Model

`$$\large y_{ij} = \alpha_{j} + \epsilon_{ij}$$`

`$$\epsilon_{ij} \sim N(0, \sigma^{2} )$$`

- i = replicate, j = group
- Different mean for each group
- Focus is on the mean of each level of the categorical predictor

---
# Linear Dummy Variable Model

`$$\large y_{ij} = \beta_{0} + \sum \beta_{j}x_{j} + \epsilon_{ij}, \qquad x_{j} = 0,1$$`

`$$\epsilon_{ij} \sim N(0, \sigma^{2})$$`

- i = replicate, j = group
- `\(x_{j}\)` indicates presence/absence (1/0) of group j
- This coding is called a **dummy variable**
- Note similarities to a linear regression
- One category set to `\(\beta_{0}\)` for ease of fitting, and the other `\(\beta\)`s are differences from it
- Or `\(\beta_{0}\)` = 0

---
# Let's Fit that Model

**Least Squares**

```r
brain_lm <- lm(PLP1.expression ~ group, data = brainGene)
```

**Likelihood**

```r
brain_glm <- glm(PLP1.expression ~ group,
                 data = brainGene,
                 family = gaussian(link = "identity"))
```

**Bayes**

```r
library(brms)

brain_brm <- brm(PLP1.expression ~ group,
                 data = brainGene,
                 family = gaussian(link = "identity"),
                 chains = 2)
```

---
# ANOVA Adventure!

1.
What are models with categorical predictors?
2. .red[Assumptions of models with categorical predictors]
3. Evaluating fit models
4. How predictive is a model?
5. How different are groups?

---
# Assumptions of Linear Models with Categorical Variables

- Same as Linear Regression!
- Independence of data points
- No relationship between fitted and residual values
- Homoscedasticity (homogeneity of variance) of groups
    - This is just an extension of `\(\epsilon_i \sim N(0, \sigma)\)` where `\(\sigma\)` is constant across all groups
    - Can be tested with Levene's Test
- Normality within groups (of residuals)
- No excess leverage, etc....

---
# Fitted v. Residuals

<img src="anova_1_files/figure-html/unnamed-chunk-4-1.png" style="display: block; margin: auto;" />

Are **residual** variances consistent across groups?

---
# Levene's Test of Homogeneity of Variance

<table class="table" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> </th> <th style="text-align:right;"> Df </th> <th style="text-align:right;"> F value </th> <th style="text-align:right;"> Pr(>F) </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> group </td> <td style="text-align:right;"> 2 </td> <td style="text-align:right;"> 1.007 </td> <td style="text-align:right;"> 0.374 </td> </tr> <tr> <td style="text-align:left;"> Residuals </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> </td> <td style="text-align:right;"> </td> </tr> </tbody> </table>

We are all good!

---
# Residuals!

<img src="anova_1_files/figure-html/unnamed-chunk-5-1.png" style="display: block; margin: auto;" />

```
	Shapiro-Wilk normality test

data:  residuals(brain_lm)
W = 0.94026, p-value = 0.02199
```

Huh. Perhaps. Remember, this is a sensitive test, and our results are pretty robust.

---
# Leverage

<img src="anova_1_files/figure-html/unnamed-chunk-6-1.png" style="display: block; margin: auto;" />

---
# What do I do if I Violate Assumptions?
- Nonparametric Kruskal-Wallis test (uses ranks)
- log(x+1) or otherwise transform
- GLM with non-gaussian error (two weeks!)
- Model the variance

---
# Questions to Ask of Your Fit Model

1. Does your model explain variation in the data?
2. Which groups are different from one another?
3. How predictive is your model?
4. What are the distributions of group means?

---
# ANOVA Adventure!

1. What are models with categorical predictors?
2. Assumptions of models with categorical predictors
3. .red[Evaluating fit models]
4. How predictive is a model?
5. How different are groups?

---
# ANOVA: Comparing Between Group Error and Within Group Error with NHST

`\(H_{0}\)` = The model predicts no variation in the data.

`\(H_{A}\)` = The model predicts variation in the data.

---
# ANOVA: Comparing Between Group Error and Within Group Error with NHST

<br><br>
.center[
Central Question: **Is the variation in the data explained by the data generating process greater than that explained by the error generating process?**
]

--
Test: Is the ratio of variability from the data generating process v. the error generating process large?

--
A ratio of two variances follows an F Distribution

--
Yes, this is exactly what we did with regression! Because this IS regression.
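To see the regression under the hood, here is a minimal sketch with *simulated* data (the group names and means below are made up to echo the example; this is not the brainGene data):

```r
# Simulate three groups, then fit the categorical model as a regression
set.seed(607)
dat <- data.frame(
  group = factor(rep(c("control", "schizo", "bipolar"), each = 15))
)
dat$y <- rnorm(45, mean = rep(c(0, -0.2, -0.26), each = 15), sd = 0.2)

mod <- lm(y ~ group, data = dat)

head(model.matrix(mod))  # the 0/1 dummy variables R builds behind the scenes
anova(mod)               # F ratio of between-group to within-group mean squares
```

`model.matrix()` shows that `lm()` has silently expanded the factor into dummy variables, so the "ANOVA" is literally a multiple regression.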
---
# Linking Your Model to Your Question

Data Generating Process:

`$$\beta_{0} + \sum \beta_{j}x_{j}$$`

VERSUS

Error Generating Process:

`$$\epsilon_i \sim N(0,\sigma)$$`

---
# Variability due to DGP (Between) versus EGP (Within)

<img src="anova_1_files/figure-html/brain_anova_viz_1-1.png" style="display: block; margin: auto;" />

---
# Variability due to DGP (Between Groups)

<img src="anova_1_files/figure-html/brain_anova_viz_2-1.png" style="display: block; margin: auto;" />

---
# Variability due to Error (Within Groups)

<img src="anova_1_files/figure-html/brain_anova_viz_3-1.png" style="display: block; margin: auto;" />

---
# F-Test to Compare

<br><br>
`\(SS_{Total} = SS_{Between} + SS_{Within}\)`

--
(Regression: `\(SS_{Total} = SS_{Model} + SS_{Error}\)`)

--
Yes, these are the same!

---
# F-Test to Compare

`\(SS_{Between} = \sum_{i}\sum_{j}(\bar{Y}_{j} - \bar{Y})^{2}\)`, df = k-1

`\(SS_{Within} = \sum_{i}\sum_{j}(Y_{ij} - \bar{Y}_{j})^2\)`, df = n-k

To compare them, we need to correct for different DF. This is the Mean Square.
MS = SS/DF, e.g., `\(MS_{W} = \frac{SS_{W}}{n-k}\)`

Their ratio, `\(F = \frac{MS_{B}}{MS_{W}}\)`, is compared to an F distribution with k-1 and n-k DF.

---
# ANOVA

<table> <thead> <tr> <th style="text-align:left;"> </th> <th style="text-align:right;"> Df </th> <th style="text-align:right;"> Sum Sq </th> <th style="text-align:right;"> Mean Sq </th> <th style="text-align:right;"> F value </th> <th style="text-align:right;"> Pr(>F) </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> group </td> <td style="text-align:right;"> 2 </td> <td style="text-align:right;"> 0.5402533 </td> <td style="text-align:right;"> 0.2701267 </td> <td style="text-align:right;"> 7.823136 </td> <td style="text-align:right;"> 0.0012943 </td> </tr> <tr> <td style="text-align:left;"> Residuals </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 1.4502267 </td> <td style="text-align:right;"> 0.0345292 </td> <td style="text-align:right;"> </td> <td style="text-align:right;"> </td> </tr> </tbody> </table>

---
# ANODEV: Is our model different from one without groups?

- Uses the likelihood fit
- Compares to a model with no groups included

<table class="table" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> term </th> <th style="text-align:right;"> statistic </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> p.value </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> group </td> <td style="text-align:right;"> 15.64627 </td> <td style="text-align:right;"> 2 </td> <td style="text-align:right;"> 0.0004004 </td> </tr> </tbody> </table>

---
# BANOVA: Compare SD due to Groups versus SD due to Residual

<img src="anova_1_files/figure-html/banova-1.png" style="display: block; margin: auto;" />

---
# ANOVA Adventure!

1. What are models with categorical predictors?
2. Assumptions of models with categorical predictors
3. Evaluating fit models
4. .red[How predictive is a model?]
5. How different are groups?

---
# How Well Do Groups Explain Variation in Response Data?
We can look at fit to the data - even with categorical predictors!

The `\(R^2\)` from the least squares fit:

```
[1] 0.2714186
```

And the Bayesian `\(R^2\)` from the brms fit:

```
   Estimate  Est.Error       Q2.5     Q97.5
R2 0.2767985 0.0923906 0.09023014 0.4414747
```

--
But, remember, this is based on the sample at hand.

---
# ANOVA Cross-Validation

```r
library(loo)

nobrain_brm <- brm(PLP1.expression ~ 1,
                   data = brainGene,
                   family = gaussian(link = "identity"),
                   chains = 2)

lc <- loo_compare(loo(brain_brm),
                  loo(nobrain_brm)) %>%
  as.data.frame()

lc[,1:2] %>% table_out  # table_out: a table-formatting helper, not base R
```

---
# ANOVA Adventure!

1. What are models with categorical predictors?
2. Assumptions of models with categorical predictors
3. Evaluating fit models
4. How predictive is a model?
5. .red[How different are groups?]

---
# R Fits with Treatment Contrasts

`$$y_{ij} = \beta_{0} + \sum \beta_{j}x_{j} + \epsilon_{ij}$$`

```
# A tibble: 3 x 5
  term         estimate std.error statistic  p.value
  <chr>           <dbl>     <dbl>     <dbl>    <dbl>
1 (Intercept)  -0.00400    0.0480   -0.0834 0.934   
2 groupschizo  -0.191      0.0679   -2.82   0.00730 
3 groupbipolar -0.259      0.0679   -3.81   0.000444
```

--
What does this mean?
--
- Intercept = the average value associated with being in the control group
- Others = the average difference between control and each other group
- Note: Order is alphabetical

---
# Group Effects: Deviations from the Grand Mean

<table class="table" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> contrast </th> <th style="text-align:right;"> estimate </th> <th style="text-align:right;"> SE </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> t.ratio </th> <th style="text-align:right;"> p.value </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> control effect </td> <td style="text-align:right;"> 0.1500000 </td> <td style="text-align:right;"> 0.0391744 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 3.829034 </td> <td style="text-align:right;"> 0.0012669 </td> </tr> <tr> <td style="text-align:left;"> schizo effect </td> <td style="text-align:right;"> -0.0413333 </td> <td style="text-align:right;"> 0.0391744 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> -1.055112 </td> <td style="text-align:right;"> 0.2974062 </td> </tr> <tr> <td style="text-align:left;"> bipolar effect </td> <td style="text-align:right;"> -0.1086667 </td> <td style="text-align:right;"> 0.0391744 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> -2.773922 </td> <td style="text-align:right;"> 0.0123422 </td> </tr> </tbody> </table>

--
What does this mean?

--
Each effect is the difference between a group's mean and the grand mean.

---
# But which groups are different from each other?

<img src="anova_1_files/figure-html/meansplot-1.png" style="display: block; margin: auto;" />

--
Many t-tests... multiple comparisons!

---
# Post-Hoc Means Comparisons: Which groups are different from one another?
- Each group has a mean and SE
- We can calculate a comparison for each
- For NHST, we can compare with a t-test
- BUT - in NHST, with an alpha of 0.05, 1 in 20 will likely be wrong
    - This is called the Family-wise Error Rate
    - Many down-adjustments of alpha, e.g., Tukey, etc.
- Or, we can look at credible intervals, posteriors of differences, etc.

---
# The Problem of Multiple Comparisons

<img src="./multcomp.gif">

---
# Solutions to Multiple Comparisons and Family-wise Error Rate?

1. Ignore it - a test is a test
    + *a priori contrasts*
    + Least Significant Difference test

--
2. Lower your `\(\alpha\)` given m = # of comparisons
    + Bonferroni `\(\alpha/m\)`
    + False Discovery Rate `\(k\alpha/m\)` where k is the rank of the test

--
3. Other multiple comparison corrections
    + Tukey's Honestly Significant Difference
    + Dunnett's Test to Compare to Control

---
# No Correction: Least Significant Differences

<table class="table table-striped" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> contrast </th> <th style="text-align:right;"> estimate </th> <th style="text-align:right;"> SE </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> t.ratio </th> <th style="text-align:right;"> p.value </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> control - schizo </td> <td style="text-align:right;"> 0.1913333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 2.8198628 </td> <td style="text-align:right;"> 0.0073015 </td> </tr> <tr> <td style="text-align:left;"> control - bipolar </td> <td style="text-align:right;"> 0.2586667 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 3.8122186 </td> <td style="text-align:right;"> 0.0004442 </td> </tr> <tr> <td style="text-align:left;"> schizo - bipolar </td> <td style="text-align:right;"> 0.0673333 </td> <td style="text-align:right;"> 0.067852
</td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 0.9923559 </td> <td style="text-align:right;"> 0.3267070 </td> </tr> </tbody> </table> --- # Bonferroni Corrections <table class="table table-striped" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> contrast </th> <th style="text-align:right;"> estimate </th> <th style="text-align:right;"> SE </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> t.ratio </th> <th style="text-align:right;"> p.value </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> control - schizo </td> <td style="text-align:right;"> 0.1913333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 2.8198628 </td> <td style="text-align:right;"> 0.0219044 </td> </tr> <tr> <td style="text-align:left;"> control - bipolar </td> <td style="text-align:right;"> 0.2586667 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 3.8122186 </td> <td style="text-align:right;"> 0.0013326 </td> </tr> <tr> <td style="text-align:left;"> schizo - bipolar </td> <td style="text-align:right;"> 0.0673333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 0.9923559 </td> <td style="text-align:right;"> 0.9801209 </td> </tr> </tbody> </table> p = m * p --- # FDR <table class="table table-striped" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> contrast </th> <th style="text-align:right;"> estimate </th> <th style="text-align:right;"> SE </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> t.ratio </th> <th style="text-align:right;"> p.value </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> control - schizo </td> <td style="text-align:right;"> 0.1913333 </td> <td 
style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 2.8198628 </td> <td style="text-align:right;"> 0.0109522 </td> </tr> <tr> <td style="text-align:left;"> control - bipolar </td> <td style="text-align:right;"> 0.2586667 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 3.8122186 </td> <td style="text-align:right;"> 0.0013326 </td> </tr> <tr> <td style="text-align:left;"> schizo - bipolar </td> <td style="text-align:right;"> 0.0673333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 0.9923559 </td> <td style="text-align:right;"> 0.3267070 </td> </tr> </tbody> </table> p = `\(\frac{m}{k}\)` * p --- # Tukey's Honestly Significant Difference <table class="table table-striped" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> contrast </th> <th style="text-align:right;"> estimate </th> <th style="text-align:right;"> SE </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> t.ratio </th> <th style="text-align:right;"> p.value </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> control - schizo </td> <td style="text-align:right;"> 0.1913333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 2.8198628 </td> <td style="text-align:right;"> 0.0195775 </td> </tr> <tr> <td style="text-align:left;"> control - bipolar </td> <td style="text-align:right;"> 0.2586667 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> 3.8122186 </td> <td style="text-align:right;"> 0.0012670 </td> </tr> <tr> <td style="text-align:left;"> schizo - bipolar </td> <td style="text-align:right;"> 0.0673333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 
42 </td> <td style="text-align:right;"> 0.9923559 </td> <td style="text-align:right;"> 0.5857148 </td> </tr> </tbody> </table> --- # Visualizing Comparisons (Tukey) <img src="anova_1_files/figure-html/tukey-viz-1.png" style="display: block; margin: auto;" /> --- # Dunnett's Comparison to Controls <table class="table table-striped" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> contrast </th> <th style="text-align:right;"> estimate </th> <th style="text-align:right;"> SE </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> t.ratio </th> <th style="text-align:right;"> p.value </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> schizo - control </td> <td style="text-align:right;"> -0.1913333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> -2.819863 </td> <td style="text-align:right;"> 0.0141195 </td> </tr> <tr> <td style="text-align:left;"> bipolar - control </td> <td style="text-align:right;"> -0.2586667 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> 42 </td> <td style="text-align:right;"> -3.812219 </td> <td style="text-align:right;"> 0.0008756 </td> </tr> </tbody> </table> <img src="anova_1_files/figure-html/dunnett-1.png" style="display: block; margin: auto;" /> --- # Likelihood and Posthocs <table class="table" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> group </th> <th style="text-align:right;"> emmean </th> <th style="text-align:right;"> SE </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> asymp.LCL </th> <th style="text-align:right;"> asymp.UCL </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> control </td> <td style="text-align:right;"> -0.0040000 </td> <td style="text-align:right;"> 0.0479786 </td> <td style="text-align:right;"> Inf </td> <td style="text-align:right;"> -0.0980363 
</td> <td style="text-align:right;"> 0.0900363 </td> </tr> <tr> <td style="text-align:left;"> schizo </td> <td style="text-align:right;"> -0.1953333 </td> <td style="text-align:right;"> 0.0479786 </td> <td style="text-align:right;"> Inf </td> <td style="text-align:right;"> -0.2893697 </td> <td style="text-align:right;"> -0.1012970 </td> </tr> <tr> <td style="text-align:left;"> bipolar </td> <td style="text-align:right;"> -0.2626667 </td> <td style="text-align:right;"> 0.0479786 </td> <td style="text-align:right;"> Inf </td> <td style="text-align:right;"> -0.3567030 </td> <td style="text-align:right;"> -0.1686303 </td> </tr> </tbody> </table> --- # Posthocs Use Z-tests <table class="table" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> contrast </th> <th style="text-align:right;"> estimate </th> <th style="text-align:right;"> SE </th> <th style="text-align:right;"> df </th> <th style="text-align:right;"> z.ratio </th> <th style="text-align:right;"> p.value </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> control - schizo </td> <td style="text-align:right;"> 0.1913333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> Inf </td> <td style="text-align:right;"> 2.8198628 </td> <td style="text-align:right;"> 0.0133286 </td> </tr> <tr> <td style="text-align:left;"> control - bipolar </td> <td style="text-align:right;"> 0.2586667 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> Inf </td> <td style="text-align:right;"> 3.8122186 </td> <td style="text-align:right;"> 0.0004048 </td> </tr> <tr> <td style="text-align:left;"> schizo - bipolar </td> <td style="text-align:right;"> 0.0673333 </td> <td style="text-align:right;"> 0.067852 </td> <td style="text-align:right;"> Inf </td> <td style="text-align:right;"> 0.9923559 </td> <td style="text-align:right;"> 0.5816908 </td> </tr> </tbody> </table> --- # Bayes-Style! 
<table class="table table-striped" style="margin-left: auto; margin-right: auto;"> <thead> <tr> <th style="text-align:left;"> group </th> <th style="text-align:right;"> emmean </th> <th style="text-align:right;"> lower.HPD </th> <th style="text-align:right;"> upper.HPD </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;"> control </td> <td style="text-align:right;"> -0.0036712 </td> <td style="text-align:right;"> -0.1094673 </td> <td style="text-align:right;"> 0.0940614 </td> </tr> <tr> <td style="text-align:left;"> schizo </td> <td style="text-align:right;"> -0.1975024 </td> <td style="text-align:right;"> -0.3016901 </td> <td style="text-align:right;"> -0.1037627 </td> </tr> <tr> <td style="text-align:left;"> bipolar </td> <td style="text-align:right;"> -0.2641988 </td> <td style="text-align:right;"> -0.3588393 </td> <td style="text-align:right;"> -0.1650330 </td> </tr> </tbody> </table> --- # Or, Just Look at the Full Implications of the Posterior <img src="anova_1_files/figure-html/bayes_tukey-1.png" style="display: block; margin: auto;" />
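To show the mechanics of working with the full posterior, here is a minimal sketch of a pairwise comparison from posterior draws. The draws below are *simulated* from the estimates and standard errors in the treatment-contrast table earlier (so the numbers are illustrative, not the real joint posterior); with the fitted `brain_brm`, the draws would instead come from `as.data.frame(brain_brm)`:

```r
# Illustrative posterior draws for the two treatment contrasts, built from
# the estimates/SEs shown earlier (NOT the true joint posterior)
set.seed(607)
draws <- data.frame(
  b_groupschizo  = rnorm(4000, mean = -0.191, sd = 0.0679),
  b_groupbipolar = rnorm(4000, mean = -0.259, sd = 0.0679)
)

# Posterior of the schizo - bipolar difference
diff_sb <- draws$b_groupschizo - draws$b_groupbipolar

quantile(diff_sb, c(0.025, 0.5, 0.975))  # ~95% credible interval
mean(diff_sb > 0)                        # posterior P(schizo > bipolar)
```

No family-wise correction needed: any contrast of interest is just a summary of the same posterior.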