Friday, October 4, 2013

Standard Error vs Standard Deviation, and Some Other Practical Statistics Stuff You Want to Know

Introduction

In most professional settings, and especially in the sciences, it is important to know a bit of statistics.  I say that this is particularly true for scientists because our jobs are centered around discovering and describing natural phenomena, and we rely on statistics to help us understand them.  Using inappropriate statistical methods, or interpreting statistics incorrectly, can result in missing interesting trends in the data or in drawing unjustified conclusions.  Because this is such an important topic, I want to highlight some major statistical points that all scientists (and professionals in general) should be aware of.  I will try to be brief here, but these topics can get pretty involved, so I will also point to more comprehensive literature for further reading in my Works Cited.  My goal here is only to hit some high points of commonly used statistics.



Standard Error vs Standard Deviation

One of the most important, yet sometimes misunderstood, statistical distinctions is the difference between standard error and standard deviation, especially when using them as error bars in graphs.  The main practical distinction between the two is that the standard deviation describes how the data are spread within an experimental data set (these are used in descriptive error bars), while the standard error estimates how variable the sample mean would be if the experiment were repeated many times (these are used in inferential error bars) [1].  Put more concretely, standard deviation should be used to visualize the distribution of the data within a single experiment (and can be used for comparing single data points to the spread of the experimental data), while standard error should be used when you want to compare the means of experimental groups, such as treatment vs placebo.  Including standard error bars in figures is helpful because it gives the viewer a rough sense of statistical significance for the differences between means (a lack of overlap in standard error bars between groups suggests statistical significance).  Finally, no matter which bars are included in a graph, they should always be identified in the figure legend so as to properly inform the viewer.  See reference [1] for a more comprehensive review.

A table of commonly used error bar calculations, from reference [1].
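To make the distinction concrete, here is a minimal sketch in Python (assuming numpy is available; the measurement values are made up purely for illustration) showing that the standard error of the mean is just the sample standard deviation divided by the square root of the sample size:

```python
# Minimal sketch: sample SD vs standard error of the mean.
# Assumes numpy; the measurements below are invented for illustration only.
import numpy as np

measurements = np.array([4.2, 5.1, 4.8, 5.5, 4.9, 5.0, 4.6, 5.3])

n = len(measurements)
sd = measurements.std(ddof=1)   # sample standard deviation: spread of the data
se = sd / np.sqrt(n)            # standard error of the mean: SD / sqrt(n)

print(f"mean = {measurements.mean():.2f}, SD = {sd:.2f}, SE = {se:.2f}")
```

Note that the SE shrinks as the sample size grows, which is exactly why it speaks to how well the mean is estimated rather than to how spread out the raw data are.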

Normally Distributed Data: t-test vs Wilcoxon-Mann-Whitney

Another important aspect to consider when performing statistical tests is whether or not the data set is normally distributed.  This matters because many of the common elementary statistical tests, such as the t-test, assume a normal distribution and will yield inappropriate results if used on a data set that is not normally distributed.  Because they make assumptions about the shape of the data's distribution, these tests are considered parametric statistical tests.  In order to justify the use of such tests, I first test the normality of my data set.  There are formal tests for determining whether a set of data is normally distributed, one common choice being the Shapiro-Wilk normality test.  Histograms can also be used to visualize the distribution of the data set and get a sense of whether or not it is normal (the data are often considered "normal" if they follow a bell-shaped curve).
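If you want to try this yourself, here is a small sketch of a Shapiro-Wilk check (assuming scipy and numpy are installed; the "experimental" sample here is simulated, so treat it only as an illustration of the mechanics):

```python
# Minimal sketch of a normality check with the Shapiro-Wilk test.
# Assumes scipy and numpy; the sample is simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10, scale=2, size=30)   # stand-in for experimental data

w, p = stats.shapiro(sample)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")
# A small p-value (e.g. p < 0.05) is evidence against normality;
# a large p-value means there is no evidence that the data are non-normal.
```

Pairing this with a quick histogram of the same sample is usually enough to decide whether a parametric test is defensible.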

While there are multiple ways to deal with non-normal data sets, I am going to briefly mention one.  It is common, when dealing with non-normal data, to use non-parametric statistical methods, which do not assume normally distributed data (briefly, non-parametric methods rank the values so that the order of the values is used instead of the values themselves).  In most cases, elementary parametric statistics have non-parametric equivalents.  An example of this is the Wilcoxon-Mann-Whitney test, which can be thought of as a non-parametric t-test.  Each of these tests makes its own assumptions and has advantages over the other; for example, the t-test is more sensitive to subtle differences when its normality assumption is met, while the Wilcoxon test does not require a normally distributed data set [2].  Overall, it is important to pay attention to the assumptions, limitations, and benefits of the statistical tests you are using.  I also want to reiterate that these topics are more involved than what I am discussing here, so please refer to my Works Cited, and especially reference [2] for this section, for further reading.
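For a concrete feel of how the two tests are called side by side, here is a minimal sketch (again assuming scipy and numpy; the treatment and placebo values are invented for illustration, not real data):

```python
# Minimal sketch: parametric t-test vs its non-parametric counterpart.
# Assumes scipy and numpy; the two groups below are made up for illustration.
import numpy as np
from scipy import stats

placebo   = np.array([5.1, 4.8, 5.6, 4.9, 5.3, 5.0, 4.7, 5.2])
treatment = np.array([5.9, 6.3, 5.7, 6.1, 5.8, 6.4, 6.0, 5.6])

# Parametric: assumes the groups are (approximately) normally distributed
t_stat, t_p = stats.ttest_ind(treatment, placebo)

# Non-parametric equivalent: works on the ranks of the values instead
u_stat, u_p = stats.mannwhitneyu(treatment, placebo, alternative="two-sided")

print(f"t-test:                t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Wilcoxon-Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")
```

Running both on the same data is a useful sanity check: if the parametric and non-parametric results point in very different directions, that alone is a hint to look more closely at the distribution of your data.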

Normally Distributed Data: Data Visualization

An annotated example of a notched box plot.
Determining the normality of your data set is not just important for the statistical tests you choose to use; it also matters for deciding how to graphically present the data.  We often default to presenting our data as bar plots showing the mean and standard error, but this is only appropriate for normally distributed data.  When working with a data set that is not normally distributed, the mean becomes less useful than the median and the usual rules for interpreting standard error bars no longer apply, which makes a box plot the more appropriate choice.  Box plots show where the median is located and how the data are distributed around it, making them more informative than a bar graph [3].  Additionally, notches can be added to box plots to show approximate 95% confidence intervals around the medians, allowing for intuitive comparisons of medians in a way conceptually similar to comparing means with standard error bars.  See the annotated example above for a visual of a notched box plot.
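If you want to produce a plot like that yourself, here is a minimal sketch (assuming matplotlib and numpy; the two groups are simulated skewed data, chosen just to show a case where a box plot beats a bar graph):

```python
# Minimal sketch of a notched box plot for non-normal (skewed) data.
# Assumes matplotlib and numpy; both groups are simulated for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=1.0, sigma=0.4, size=50)   # skewed "control" data
group_b = rng.lognormal(mean=1.3, sigma=0.4, size=50)   # skewed "treatment" data

fig, ax = plt.subplots()
ax.boxplot([group_a, group_b], notch=True, labels=["Control", "Treatment"])
ax.set_ylabel("Measured value")
ax.set_title("Notched box plots: non-overlapping notches suggest differing medians")
plt.show()
```

The notch=True option is what adds the approximate 95% confidence interval around each median, so non-overlapping notches play roughly the same visual role that non-overlapping standard error bars play for means.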

Conclusions

Statistics can be a confusing field, with even some of the most common tests being complicated and involved.  Especially in fields like the sciences, which rely heavily on statistics, it is important to understand the assumptions different tests make and the limitations they have.  Hopefully this post provided you with some added insight into common statistics that you can now apply.  If you want to call me out on any mistakes or typos, or if you have any questions or comments, please feel free to leave a comment below.




Works Cited





1.   Cumming G, Fidler F, Vaux DL (2007). Error bars in experimental biology. J Cell Biol. DOI: 10.1083/jcb.200611141

2.   Fay MP, Proschan MA (2010). Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. Stat Surv. DOI: 10.1214/09-SS051

3.   Olsen CH (2003). Review of the use of statistics in Infection and Immunity. Infection and Immunity, 71(12), 6689-92. PMID: 14638751

