by Tonja
Last Updated July 01, 2017 17:19 PM

I have two samples A and B. In order to check if they are the same I want to apply a t-test and see the difference in means. One of the assumptions of the t-test is that the distribution of A and B is normal.

A colleague says:

The central limit theorem says that the distribution of the sample means is approximately normal. With this in mind, you can apply the t-test.

Why does this argument work? I thought that the underlying distributions of A and B (when plotted) should be normal, not the values (in this case, the means) to which the t-test is actually applied. What does the t-test assume, then: that the data are normally distributed, or the means?

Okay, a few things:

1) A two-sample t-test does not assume the distributions of groups A and B are the same under the null hypothesis, even if the underlying distributions are both normal. That could only hold if you also assumed the standard deviations are equal, which is a hefty assumption to make. The two-sample t-test tests whether, under the null, the means of the two groups are the same. But yes, the classical two-sample t-test assumes the underlying data are normally distributed. This is because you need not only the numerator to be normally distributed, but also the variance estimate in the denominator to be (a scaled version of) a $\chi^2$. That said, the t-test is fairly robust to violations of the normality assumption. See here.
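To make the unequal-variances point concrete, here is a minimal sketch using SciPy on simulated data (the groups, sample sizes, and seed are all made up for illustration). Welch's version of the two-sample t-test drops the equal-standard-deviations assumption while still assuming normality within each group:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=5.0, scale=1.0, size=40)  # group A
b = rng.normal(loc=5.0, scale=2.0, size=35)  # group B: same mean, larger spread

# Welch's t-test (equal_var=False): tests equality of means without
# assuming the two groups share a common variance
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(t_stat, p_value)
```

With `equal_var=True` you would get the classical pooled-variance t-test instead, which is the one that needs the equal-spread assumption.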

2) It is true that, with a large enough sample, the distribution of the mean of each group is going to be approximately normal. How good that approximation is depends on the underlying distribution of each group.

The general idea is this. If $X$ and $Y$ are independent, with $X$ having mean $\mu_X$ and standard deviation $\sigma_X$ and $Y$ having mean $\mu_Y$ and standard deviation $\sigma_Y$, and the respective samples $X_1,\dots,X_n$ and $Y_1,\dots,Y_m$ are large, then you can conclude that $$ \frac{\bar{X}-\bar{Y}-(\mu_X-\mu_Y)}{\sqrt{\frac{\sigma^2_X}{n}+\frac{\sigma^2_Y}{m}}}$$

is approximately normal with mean 0 and standard deviation 1. So critical values $z_{\alpha/2}$ can be used for testing. Also, $t_{\alpha/2,\nu}$ will be close to $z_{\alpha/2}$ when $\nu$ is large (which occurs when the sample sizes are large). So for large enough sample sizes, a t-test can be used.
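A small sketch of that statistic, using made-up exponential (i.e. clearly non-normal) data and plugging in sample variances for $\sigma^2_X$ and $\sigma^2_Y$. It also shows how close the t and z critical values are once the degrees of freedom are large:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Non-normal underlying data, but large samples, so the CLT applies
x = rng.exponential(scale=2.0, size=200)
y = rng.exponential(scale=2.0, size=250)

# The standardized difference of means; under the null mu_X - mu_Y = 0
se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
z = (x.mean() - y.mean()) / se

# For large nu, the t critical value nearly coincides with the z one
alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)
t_crit = stats.t.ppf(1 - alpha / 2, df=len(x) + len(y) - 2)
print(z, z_crit, t_crit)
```

With $n + m - 2 = 448$ degrees of freedom, the two critical values agree to about two decimal places, which is the sense in which the t-test and z-test become interchangeable for large samples.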

There are ways to check this. (The standard rule of thumb is that each group should have a sample size of 30 or larger, but I am generally against such rules because there are plenty of cases where they fail.) One way to check it (sort of) is to create a bootstrap distribution of the mean and inspect it.
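A minimal sketch of that bootstrap check, using only NumPy on a hypothetical skewed sample. If the resampled means look roughly bell-shaped, the normal approximation for the mean is plausible at that sample size:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.exponential(scale=3.0, size=50)  # a skewed, made-up sample

# Bootstrap distribution of the sample mean: resample with replacement
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(5000)
])

# A quick numeric stand-in for eyeballing a histogram: sample skewness
# near 0 suggests the distribution of the mean is roughly symmetric
skew = ((boot_means - boot_means.mean()) ** 3).mean() / boot_means.std() ** 3
print(boot_means.mean(), boot_means.std(), skew)
```

In practice you would plot a histogram of `boot_means`; the skewness number here is just a one-line proxy for that visual check.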

3) You can do better than approximate tests, though. When you test whether the means differ, your real question is whether the locations differ. A test that will be valid (almost) all the time is the Mann-Whitney U test. It does not test whether the means differ, but rather whether the medians differ; in other words, it again tests whether one location differs from another. It may be a better option, and it has pretty high power overall.
