Analysis of Variance
Glossary
The dependent variable is the measurement obtained from each of the subjects in the experiment. For example, in a 1-way ANOVA with independent groups, the dependent variable may be something such as systolic blood pressure, strength, or heart rate.
The independent variables, or factors, are the categorical variables that describe the treatments or categories associated with an ANOVA. A 1-way ANOVA will have 1 factor, a 2-way ANOVA will have 2 factors, and so forth. Factors can have several treatment levels. For example, a 1-way ANOVA that tests the effects of dietary supplements on strength can have a factor called Supplement with 3 treatment levels: 1) creatine, 2) amino acids, and 3) placebo.
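As a minimal sketch of how such a design is usually laid out (all names and scores below are hypothetical), the data have one row per subject, one column for the Supplement factor, and one column for the dependent variable:

    import pandas as pd

    # One row per subject: the factor (Supplement) has 3 treatment levels,
    # and the dependent variable (a strength score) is the measurement.
    data = pd.DataFrame({
        "supplement": ["creatine"] * 4 + ["amino acids"] * 4 + ["placebo"] * 4,
        "strength":   [105, 110, 108, 112, 98, 101, 99, 103, 95, 97, 96, 100],
    })

    # Each treatment level is described by its own mean.
    print(data.groupby("supplement")["strength"].mean())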
The between group variance estimate, or mean squares between (MSb), is a measure of the variability of the population estimated from the group means. As the difference between group means (a mean describes each treatment level) increases, so does the between group variance estimate.
The within group variance estimate, or mean squares within (MSw), is a measure of the average variability within each of the treatment levels. In an independent group design, the within group variance estimate is also referred to as the error term. Theoretically, the within group variance is unaffected by changes in the treatment level means.
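A minimal sketch of how MSb and MSw can be computed by hand for a 1-way independent groups design, reusing the hypothetical strength scores from the sketch above:

    import numpy as np

    groups = [np.array([105, 110, 108, 112]),   # creatine (hypothetical)
              np.array([98, 101, 99, 103]),     # amino acids
              np.array([95, 97, 96, 100])]      # placebo

    k = len(groups)                              # number of treatment levels
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()

    # Between group variance estimate (MSb): variability of the group means
    # around the grand mean, weighted by group size.
    ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)

    # Within group variance estimate (MSw, the error term for independent
    # groups): pooled variability of scores around their own group mean.
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - k)

    print(ms_between, ms_within)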
The interaction mean squares (MSsxt) provides a measure of variability that is sensitive to the strength of the relationship between two or more factors in a 2-way (or 3-way) ANOVA.
The F-ratio (the F honors R. A. Fisher) is the between group variance estimate divided by the error term. In a 1-way ANOVA with independent groups, it is the between group variance estimate divided by the within group variance estimate.
F-ratios can be used to test for differences between treatment levels within each factor, as well as interactions between factors. F-ratios are said to measure effects rather than factors, since a 2-way or 3-way ANOVA produces more F-ratios than it has factors (a 2-way ANOVA, for example, yields three: two main effects and one interaction).
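For the same hypothetical groups, the 1-way independent groups F-ratio (MSb divided by MSw, as defined above) can be obtained in one call with scipy.stats.f_oneway; this is only a cross-check sketch:

    from scipy import stats

    creatine = [105, 110, 108, 112]   # hypothetical strength scores
    amino    = [98, 101, 99, 103]
    placebo  = [95, 97, 96, 100]

    # f_oneway returns the F-ratio (MSb / MSw for this design) and its p-value.
    f_ratio, p_value = stats.f_oneway(creatine, amino, placebo)
    print(f_ratio, p_value)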
Each effect is associated with a specific number of degrees of freedom. For main effects, the number of degrees of freedom is the number of levels minus 1. For interaction effects, it is the number of rows minus 1 times the number of columns minus 1, or (r – 1)(c – 1).
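As an illustration, a hypothetical 3 × 4 factorial design has 3 - 1 = 2 degrees of freedom for the row main effect, 4 - 1 = 3 for the column main effect, and (3 - 1)(4 - 1) = 6 for the interaction.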
The error term is the denominator used in calculating the F-ratio. The error terms associated with common designs are listed in the table below; a code sketch of the first 2-way case follows the table:
Design                    | Effect            | Error Term
--------------------------|-------------------|-----------
1-way independent groups  | main              | MSw
1-way repeated measures   | main              | MSsxt
2-way independent groups  | row               | MSw
                          | column            | MSw
                          | interaction       | MSw
2-way repeated measures   | row               | MSsxr
                          | column            | MSsxc
                          | interaction       | MSsxrxc
2-way mixed               | row (ind group)   | MSs/r
                          | col (repeat meas) | MSsxc/r
                          | interaction       | MSsxc/r
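As a sketch of the first 2-way case in the table (the factors, levels, and scores below are hypothetical), a 2-way independent groups ANOVA can be fit with statsmodels; every F-ratio in the resulting ANOVA table is the effect's mean square divided by the residual mean square, which is MSw:

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical 2 x 3 independent groups design:
    # rows = sex (2 levels), columns = supplement (3 levels), DV = strength.
    data = pd.DataFrame({
        "sex":        ["M"] * 6 + ["F"] * 6,
        "supplement": ["creatine", "amino", "placebo"] * 4,
        "strength":   [110, 102, 98, 112, 104, 97, 95, 90, 88, 97, 91, 86],
    })

    # C() marks categorical factors; '*' expands to both main effects
    # plus the row x column interaction.
    model = smf.ols("strength ~ C(sex) * C(supplement)", data=data).fit()

    # Each F in this table is MS(effect) / MS(residual), i.e. the error
    # term MSw from the table above.
    print(anova_lm(model, typ=2))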
Assumptions of ANOVA
Post-hoc tests are used to test the differences between levels or cells when a significant F-ratio is found. The purpose of a post-hoc test is to hold the experimentwise error at the selected alpha level. There are several kinds of post-hoc tests, and they vary in the degree of stringency imposed on the test. The most stringent test is the Scheffe test, while the least stringent is the Least Significant Difference (LSD) test.
Typically, two approaches for controlling experimentwise error are available in SPSS: Bonferroni and Sidak. The Bonferroni correction is the most widely used and is slightly more stringent than the Sidak approach. Both work by adjusting the experimentwise alpha (usually 0.05) for the number of comparisons that can be made. Bonferroni creates a new critical alpha by dividing the experimentwise alpha by the number of comparisons: Critical alpha = (experimentwise alpha) / k, where k is the number of possible comparisons. Sidak uses the following calculation: Critical alpha = 1 - (1 - experimentwise alpha)^(1/k).
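A minimal sketch of both calculations in plain Python; the 3-level design (and hence k = 3 possible pairwise comparisons) is hypothetical:

    # Hypothetical 1-way design with 3 treatment levels, so
    # k = 3 * (3 - 1) / 2 = 3 possible pairwise comparisons.
    experimentwise_alpha = 0.05
    levels = 3
    k = levels * (levels - 1) // 2

    # Bonferroni: divide the experimentwise alpha by the number of comparisons.
    bonferroni_alpha = experimentwise_alpha / k               # 0.05 / 3 ~= 0.0167

    # Sidak: 1 - (1 - experimentwise alpha) ** (1 / k).
    sidak_alpha = 1 - (1 - experimentwise_alpha) ** (1 / k)   # ~= 0.0170

    # Bonferroni yields the slightly smaller (more stringent) critical alpha.
    print(bonferroni_alpha, sidak_alpha)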