Pages

Saturday, October 20, 2007

The Infamous type III SS

Before I started using R, I was under the impression that there was only one type of sums of squares (SS) that should be used: type III SS. It was actually a bit confusing when some people chose to use other types of SS. I also didn't know that SAS invented the term "type III SS". Once I started using R and came across my first unbalanced factorial design... reality hit me.

First of all, I couldn't get R to deploy my coveted type III SS. I could clearly see that any summary coming from lm() or aov() used type I, or sequential, sums of squares. After some reading, I found I could get type II SS by comparing different models, but the process seemed a bit too complicated. It had to be simpler!

I played with the idea of posting on R-help, but I was (again) scared away by the harsh comments from some of the gurus. After searching for endless hours on Google, I came across a bunch of interesting information. I read comments and posts about how bad type III SS are, mainly because they allow testing for main effects even in the presence of an interaction. Honestly, I don't think that's all that terrible, but I understand it shouldn't be done. I actually teach it that way to my students: "If an interaction is significant, the main effects are rarely interpretable".

In the other camp, I read proponents suggesting that type III SS are the only ones that allow hypothesis testing, given that they are order independent. Also very true.

I made up my mind when someone compared SAS to Micro$oft. From then on, I chose not to use type III SS if possible. Therefore, this entry teaches how to deal with unbalanced factorials and how to get the appropriate SS and F values from R. And if you're so inclined, I will show you how to get the blasted type III SS's. Most of this information is scattered all over the Internet, and I'm not to be credited for any of it. I just collected it for my personal use, and since this blog serves as my basket case for R methods I easily forget, it ended up here.

Let's begin with an example. Let's suppose we have a two-way unbalanced ANOVA to study the effect of forest type and species on herbivory. The biological hypothesis suggests that species vary in their susceptibility to herbivory, and that the percentage of leaf area damaged by herbivory is independent of forest type. We sampled three different forest types: riparian, transition and mature forests. In each forest type, we collected samples from four different species and determined the percent of area removed from a randomly selected leaf. Given that the number of samples per species is not constant in each forest type, the design is unbalanced. The data are presented in the following table:

Herbivory

         Riparian              Transition           Mature
Sp1      42 44 36 13 19 22     33 26 33 21          31  3 25 25 24
Sp2      28 23 34 42 13        34 33 31 36           3 26 28 32  4 16
Sp3       1 29 19              11  9  7  1  6       21  1  9  3
Sp4      24  9 22  2 15        27 12 12  5 16 15    22  7 25  5 12
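If you want to follow along, here is one way the herb data frame could be entered by hand. Take it as a sketch: the assignment of values to forest-type columns is my reading of the table (it matches the replication counts shown below), and the Spanish level names (bosque = forest, especie = species) match the output of the later commands.

```r
# Herbivory values entered column by column (ripario, transicion, maduro),
# species sp1..sp4 within each forest type
herb <- data.frame(
  herbiv = c(42,44,36,13,19,22, 28,23,34,42,13, 1,29,19, 24,9,22,2,15,   # ripario
             33,26,33,21, 34,33,31,36, 11,9,7,1,6, 27,12,12,5,16,15,    # transicion
             31,3,25,25,24, 3,26,28,32,4,16, 21,1,9,3, 22,7,25,5,12),   # maduro
  bosque = factor(rep(c("ripario", "transicion", "maduro"), c(19, 19, 20))),
  especie = factor(c(rep(paste0("sp", 1:4), c(6, 5, 3, 5)),
                     rep(paste0("sp", 1:4), c(4, 4, 5, 6)),
                     rep(paste0("sp", 1:4), c(5, 6, 4, 5)))))
```

With herb built this way, the replications() call below should reproduce the per-cell counts.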

We read the data and included it in R as a data frame named herb. Now we can check the number of replicates using the replications() command:

> replications(herbiv~bosque*especie, data=herb)
$bosque
bosque
   maduro    ripario transicion
       20         19         19

$especie
especie
sp1 sp2 sp3 sp4
15  15  12  16

$"bosque:especie"
           especie
bosque       sp1 sp2 sp3 sp4
 maduro       5   6   4   5
 ripario      6   5   3   5
 transicion   4   4   5   6

As we can clearly see, the model is highly unbalanced. We shall now perform the analysis of variance in several ways. First we will fit a saturated model, and then vary the order of the factors in the model:

> mod.sat <- aov(herbiv~bosque*especie, data=herb)
> mod.sat.2 <- aov(herbiv~especie*bosque, data=herb)

The summary tables for both models show that SS are calculated sequentially.

> summary(mod.sat)

               Df Sum Sq Mean Sq F value    Pr(>F) 
bosque          2  464.0   232.0  2.4766   0.09516 .
especie         3 2810.8   936.9 10.0017 3.434e-05 ***
bosque:especie  6  530.4    88.4  0.9436   0.47348 
Residuals      46 4309.1    93.7                   

> summary(mod.sat.2)
              Df Sum Sq Mean Sq F value    Pr(>F) 
especie         3 2834.8   944.9 10.0871 3.186e-05 ***
bosque          2  440.0   220.0  2.3486    0.1069 
especie:bosque  6  530.4    88.4  0.9436    0.4735 
Residuals      46 4309.1    93.7                   

Although very close, the SS for the species factor (i.e. especie) differ depending on the order of the factors. If species comes first, then SS = 2834.8, but if species is the second term in the model, then SS = 2810.8. These are all sequential SS's, known in SAS as type I SS. In the SAS world, we would now use proc GLM to get type III sums of squares, and test hypotheses regarding the interaction and the main effects. Nonetheless, this approach may lead us down a dangerous path, since it allows us to test for main effects even in the presence of a significant interaction.

The R way

The advisable method in R is to search for the most parsimonious model, and then obtain SS by comparing models that differ in the number of parameters (i.e. factors) included. This procedure is automagically done by the drop1() command. This command, as its name implies, drops terms from the model and compares the original model to the reduced one. It generally begins by removing non-significant interactions. The results given by the drop1() command are order independent, so the following examples will only be performed on mod.sat:

> drop1(mod.sat, test="F")
Single term deletions

Model:
herbiv ~ bosque * especie
              Df Sum of Sq    RSS    AIC F value  Pr(F)
<none>                      4309.1  273.9            
bosque:especie  6     530.4 4839.5  268.6  0.9436 0.4735

The drop1() command used in the previous code required two arguments: the name of the fitted aov() model, and the type of test to perform. We asked R to drop terms from mod.sat, our saturated model. We also requested F statistics, which compare the original model with the new model lacking the term. AIC, the Akaike Information Criterion, measures the goodness of fit of a statistical model while taking into account the number of parameters included. The AIC statistic allows us to find the model that best fits the data with the minimum number of parameters.

One should choose the model with the lowest AIC value. In the previous output, removing the interaction term bosque:especie produces a better model, with a lower AIC than the saturated formula (268.6 vs. 273.9). Therefore, we conclude that removing the interaction term benefits our model. The F statistic provided is based on SS calculated by comparing the models with and without the interaction term, and hence can be used to report the non-significance of the interaction.
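As an aside, the same reduced model can be obtained without retyping the formula, by letting update() drop the term from the fitted object. A sketch of the idiom, equivalent to the explicit aov() call below:

```r
# Drop the interaction from the saturated model; '.' stands for "what was there"
mod1 <- update(mod.sat, . ~ . - bosque:especie)
```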

We now fit a linear model without the interaction term, and request the appropriate sums of squares with drop1():

> mod1 <- aov(herbiv~bosque+especie, data=herb)

> drop1(mod1, test="F")

Single term deletions

Model:
herbiv ~ bosque + especie
       Df Sum of Sq    RSS    AIC F value     Pr(F) 
<none>               4839.5  268.6                   
bosque   2     440.0 5279.5  269.6  2.3639    0.1041 
especie  3    2810.8 7650.2  289.2 10.0672 2.461e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The output from the drop1() command shows that removing any of the natural components (i.e. 'bosque' or 'especie') would lower the goodness of fit of the model. This can be seen by inspecting the AIC statistics. We see that by removing <none> of the factors, we get an AIC of 268.6. However, if we remove 'bosque' or 'especie', we get a higher AIC value, thus suggesting that we have the most parsimonious model.

F statistics are computed by comparing the original model with one where the factor is removed. We can see that 'bosque' is not significant, while 'especie' is highly significant (F=10.07; df=3,46; p<0.001). As I perceive it, these are marginal SS (what SAS would call type II for a model like this), but I could be wrong. We can clearly see that they are order independent by fitting a model where 'especie' comes first:

> drop1(aov(herbiv~especie+bosque, herb), test="F")
Single term deletions

Model:
herbiv ~ especie + bosque
       Df Sum of Sq    RSS    AIC F value     Pr(F) 
<none>               4839.5  268.6                   
especie  3    2810.8 7650.2  289.2 10.0672 2.461e-05 ***
bosque   2     440.0 5279.5  269.6  2.3639    0.1041 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The results do not deviate from the previous analysis, confirming that this type of analysis is order independent. My only grievance with this approach is that the final ANOVA table has to be assembled by hand. I still haven't figured out whether this can be done automatically.
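For the record, each F test can at least be reproduced as an explicit nested-model comparison with anova(), which is what I do when assembling the table by hand (mod1 is the additive model fitted above; this is a sketch, and it should reproduce the drop1() F values):

```r
# F test for 'bosque' adjusted for 'especie': smaller model vs. additive model
anova(aov(herbiv ~ especie, data = herb), mod1)

# F test for 'especie' adjusted for 'bosque'
anova(aov(herbiv ~ bosque, data = herb), mod1)
```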

Type III SS from R

If you want to get type III sums of squares anyway (maybe due to extraordinary circumstances: your boss demands them!), you can easily get them from R. As Dr. Venables says, you just have to know where to look for them.

I will not go into detail on how this works, mainly because I don't know how it works. To get type III SS, you have to do the following:

1. Change the default contrast matrix used by R to one that produces orthogonal contrasts, such as contr.helmert or contr.sum:

> options(contrasts=c("contr.helmert", "contr.poly"))

2. Fit a new (saturated) ANOVA model, which will use the contrast matrix selected in step 1:

> mod.III <- aov(herbiv~bosque*especie, data=herb)

3. Then use the drop1() command, requesting the removal of every term with the formula shorthand notation .~. :

> drop1(mod.III, .~., test="F")
Single term deletions

Model:
herbiv ~ bosque * especie

               Df Sum of Sq    RSS    AIC F value     Pr(F)    
<none>                      4309.1  273.9                      
bosque          2     436.3 4745.4  275.5  2.3289    0.1088    
especie         3    2753.8 7062.8  296.5  9.7989 4.108e-05 ***
bosque:especie  6     530.4 4839.5  268.6  0.9436    0.4735    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

And voilà! You get type III sums of squares, which should make any SAS user happy. The car package has built-in functions to get type III SS, but I've read they sometimes produce funny output. I've never tested this, since I don't like installing many packages to do things I can get from the base stats module.
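One caveat: the options() call in step 1 changes the contrasts globally for the rest of the session. To avoid surprises in later analyses, restore R's defaults when you are done (treatment contrasts are the default for unordered factors):

```r
# Back to R's default contrast settings
options(contrasts = c("contr.treatment", "contr.poly"))
```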

Conclusion

In my opinion, the type III sums-of-squares issue is a bit technical for the average user. Although some suggest that the average user shouldn't be doing statistics, it is commonplace for many students and some researchers to deal with ANOVA on their own, with very little statistics background. I think R does the right thing by not supplying the infamous SS's easily, forcing users to think about what they really want. Nonetheless, this discussion should be part of R's help files, and the correct procedure should be easier to find. Finally, the drop1() method should provide a properly formatted ANOVA table, but again, I may just not know how to do it... and the correct procedure is hidden somewhere within R's mysterious innards.

Saturday, October 06, 2007

Contrasts in R

One of the most neglected topics in biostatistics is the calculation of contrasts in ANOVA. Most researchers are content with simply calculating an ANOVA table and stating that differences among groups are statistically significant. Some even present box-plots or other graphics that show where the differences lie. However, few researchers actually test for the specific differences they are looking for.

An ANOVA is designed to test a statistical hypothesis. It looks at the means of each treatment or level, and determines if all the means are equal or not. Nevertheless, the biological hypotheses introduced by the researchers need to be specifically addressed with contrasts. One can generalize and state that while ANOVA analyzes the statistical hypothesis, contrasts look into the biological hypothesis.

The following entry will show how to create a contrast in R. I will not go into detail regarding the theory behind contrasts. I suggest you read a good book on Experimental Design.

Let’s begin by using a data set included in R: the InsectSprays data frame. This experiment records the number of insects counted (variable count) in plots treated with six different insecticides (A:F). The data is loaded by using:

> data(InsectSprays)

Once the data has been loaded, we can start working with it. The first thing we want to do is determine whether the design is balanced. We can use the summary() command to get some basic information on the data frame:

> summary(InsectSprays)

     count        spray 
 Min.   : 0.00   A:12  
 1st Qu.: 3.00   B:12  
 Median : 7.00   C:12  
 Mean   : 9.50   D:12  
 3rd Qu.:14.25   E:12  
 Max.   :26.00   F:12  

The previous output shows that we have a balanced design, with 12 replicates per group. Since this is a data frame, the alphanumeric variable spray is automatically recognized by R as a factor.

Let’s imagine there are three main questions. First, we want to determine if there are any differences among the spray treatments. Second, the researcher wants to know whether the first three sprays differ from the last three. Third, there is a strong indication that sprays A, B, and F should have a stronger effect than the remaining treatments. Since contrasts are pre-planned or a priori comparisons, we should start by creating the contrast matrix. We need to create a vector of coefficients for each comparison, and then bind them into a matrix:

> c1 <- c(1,1,1,-1,-1,-1)

> c2 <- c(1,1,-1,-1,-1,1)

> mat <- cbind(c1,c2)
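Before assigning the matrix, it doesn't hurt to check that the coefficients of each contrast sum to zero, which is required for a valid contrast:

```r
# Each column of the contrast matrix should sum to zero
colSums(mat)
# c1 c2
#  0  0
```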

Next, we need to specify that the contrast matrix mat should be used to compute SS during the ANOVA calculations. We achieve this by assigning the contrast matrix to the factor spray using the contrasts() command:

> contrasts(InsectSprays$spray) <- mat

It should be noted that once this assignment is made, the aov() command will calculate SS for the contrasts established in each column of mat. If you wish to change the contrasts, a new contrast matrix should be created and assigned to the factor.

We now perform the analysis of variance and request a summary table. The ANOVA table is split to include the contrasts:

> model1 <- aov(count ~ spray, data = InsectSprays)

> summary(model1, split=list(spray=list("First 3 vs other"=1, "ABF vs CDE"=2)))

                          Df  Sum Sq Mean Sq  F value    Pr(>F)    
spray                      5 2668.83  533.77   34.702   < 2e-16 ***
  spray: First 3 vs other  1   93.39   93.39   6.0716   0.01635 *  
  spray: ABF vs CDE        1 2558.67 2558.67 166.3495   < 2e-16 ***
Residuals                 66 1015.17   15.38                       
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

Let’s analyze the previous code. First, we conduct an analysis of variance with the aov() command and store the results in the model1 object. Immediately after, we request a summary table with the summary() command, which includes the option split. The split option takes a list of factors whose SS should be partitioned into contrasts. Within each factor, we provide a named list matching each contrast's label to its column number in the contrast matrix (i.e. mat).

The ANOVA table shows a significant effect of the spray varieties on insect mortality. The contrasts suggest significant differences between the first three and the last three spray types. Finally, we see that most of the treatment SS is explained by the comparison of the ABF and CDE groups. The differences among groups are clearly observed using a box-plot:

> boxplot(count ~ spray, data = InsectSprays,
+   xlab = "Type of spray", ylab = "Insect count",
+   main = "InsectSprays data", col = "skyblue")

Saturday, September 15, 2007

Neighbor Joining Tree with Ape

Today I used R to create a neighbor-joining phenogram, using the ape library. The phenogram will be used to visualize a species' genetic differences among six different locations in Costa Rica. Genetic distances were calculated from microsatellite data using GenAlEx, because I haven't figured out how to calculate Nei's genetic dissimilarities with R. This is the procedure I used:

First I imported the genetic distance matrix from Exc..l. The matrix used was symmetric, meaning that the same distances were mirrored above and below the diagonal. I copied the matrix to the clipboard from Exc..l using Ctrl-C. The distance matrix included row and column names: the six different locations. The matrix was imported into R by assigning it to an object named m:

> m <- as.matrix(read.table("clipboard", head=T, row.names=1))

The previous command imports the data stored in the clipboard and transforms the resulting data frame into a matrix. The options of the read.table() command, head=T and row.names=1, tell R that the first line of the matrix is a header and that row names are stored in the first column, respectively. After checking that the matrix was imported correctly, I proceeded to load the ape library.
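(If you don't have data handy in Exc..l, a small made-up symmetric matrix behaves the same way. The distances and the fourth location below are hypothetical, purely for illustration:)

```r
# Hypothetical Nei distances among four locations, mirrored about the diagonal
locs <- c("Guapiles", "Guatuso", "Puriscal", "Turrialba")
m <- matrix(c(0.00, 0.10, 0.35, 0.40,
              0.10, 0.00, 0.32, 0.38,
              0.35, 0.32, 0.00, 0.15,
              0.40, 0.38, 0.15, 0.00),
            nrow = 4, dimnames = list(locs, locs))
```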

> library(ape)

The ape (Analyses of Phylogenetics and Evolution) library contains various methods for the analysis of genetic and evolutionary data. Ape also provides commands designed to calculate distances from DNA sequences; I will tackle those in another post. Now back to my tree. After importing the distance matrix and loading the ape library, I proceeded to create the phylogenetic tree with the nj() command and plot it:

> arbol <- nj(as.dist(m))
> plot(arbol, "unrooted")

The "unrooted" option passed to the plot() command produces, as expected, an unrooted tree drawing (nj() itself always returns an unrooted topology). The resulting tree was stored in the 'arbol' object. The plot command produced:

We can observe how 'Puriscal' is located between two well-demarcated groups, one including 'Guapiles' and 'Guatuso', and the other containing the remaining three populations.
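In case you want to keep the tree for later, ape can export the topology in Newick format with write.tree() (the file name here is arbitrary):

```r
# Save the tree to a Newick file and read it back in a later session
write.tree(arbol, file = "arbol.nwk")
arbol2 <- read.tree("arbol.nwk")
```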