- Last Updated: 25 November 2015


### A note on statistical relationships and p-values

A p-value is a number used to determine if a result is statistically significant and so is unlikely to be simply the result of chance. A p-value of less than .05 (less than 5% chance the result is simply from randomness) is considered to be statistically significant. A p-value of less than .01 (less than 1% chance the result is simply from randomness) is considered to be highly statistically significant.
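To make the "chance the result is simply from randomness" idea concrete, here is a minimal sketch (not from the article) of a one-sided p-value for a coin-flip experiment: the probability of seeing a result at least as extreme as the one observed if only chance were at work.

```python
from math import comb

def p_value_heads(k, n):
    # One-sided p-value: probability of getting k or more heads
    # in n flips of a fair coin, if chance alone is operating.
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Seeing 9 or more heads in 10 flips:
p = p_value_heads(9, 10)
print(p)  # 11/1024, about 0.011: below .05, so "statistically significant"
```

Since 0.011 is below the .05 cutoff, we would call 9 heads in 10 flips statistically significant evidence that the coin is not fair.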

An r-value is called Pearson’s r or a correlation coefficient. It is a measurement used to determine whether two variables change in a proportional (linear) relationship in the same direction (positive r-value) or in opposite directions (negative r-value).

So a positive r-value means that both variables have a tendency to increase or decrease together in a proportional manner.

A negative r-value means if one increases, the other decreases, and vice versa.

Correlation r-values always lie between negative one and positive one.

A positive r-value “close to” positive one means that if you graphed the numerical values of the two variables (using one as an x-value and the other as a y-value), the data points would be almost a straight line with positive slope. (/)

A negative r-value “close to” negative one means the graph of data points would be almost a straight line with negative slope. (\)

An r-value that is “close to” zero means that the relationship between the two variables is not like a straight line at all.
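As a concrete illustration, Pearson's r can be computed directly from its definition (the covariance of the two variables divided by the product of their standard deviations). This is a minimal sketch with made-up numbers, not data from the article:

```python
import math

def pearson_r(xs, ys):
    # Pearson's r: covariance of x and y divided by the
    # product of their standard deviations.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly proportional relationship gives r close to +1:
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
# Reversing the direction gives r close to -1:
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))
```

Real data points rarely fall exactly on a line, so in practice r lands somewhere strictly between these extremes.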

A p-value is determined in order to tell whether your r-value should be considered “close to” either positive or negative one.

The p-value used with a correlation coefficient reflects that the standard of “close to” is much more forgiving if you are looking at 10,000 data points rather than 10 data points.
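The dependence on sample size can be seen in the standard significance test for a correlation, which converts r into a t-statistic using t = r·√(n−2)/√(1−r²); the same r produces a much larger t (and hence a much smaller p-value) when n is large. A small sketch, assuming this standard test:

```python
import math

def t_stat(r, n):
    # t-statistic used to test whether a correlation r computed
    # from n data points is significantly different from zero.
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# The same modest correlation, r = 0.3:
print(t_stat(0.3, 10))      # under 1: not significant with 10 points
print(t_stat(0.3, 10_000))  # over 30: highly significant with 10,000 points
```

So a modest r that means nothing in a 10-point study can be highly significant in a 10,000-point study, which is exactly the "more forgiving" standard described above.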

*Warning:* The correlation of two variables does not imply that one result “causes” the other one. It means they appear to be related, but there could be some hidden variables you don’t know about that are actually causing the apparent relationship.

### What is a statistical meta-analysis?

This term refers to the statistical analysis of a *group* of *individual* studies.

The authors of a meta-analysis re-analyze the results of many smaller studies, pooled as if a single larger study had been done with all of the patients in the smaller studies, using only the limited information available in each published report rather than the original data from each study.

Thus a meta-analysis will take results from 10 studies of 10 patients each and claim to have valid results from 100 patients, but without having access to the original data for any of the 100 patients.

Thus a meta-analysis, in itself, is not an actual scientific study, but a type of averaging of averages from many different studies.
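The "averaging of averages" can be made concrete. One common fixed-effect approach weights each study's reported treatment effect by the inverse of its variance, so larger or more precise studies count for more. This is only an illustrative sketch; the numbers are invented and are not from any real study:

```python
def pooled_effect(effects, variances):
    # Fixed-effect meta-analysis: weight each study's reported
    # effect by 1/variance, so more precise studies dominate.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

# Three hypothetical small studies of the same treatment:
effect, var = pooled_effect([0.2, 0.5, 0.1], [0.04, 0.09, 0.02])
print(effect, var)
```

Notice that the analyst only ever touches three summary numbers per study, never the individual patients, which is the point being made above.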

Doing a meta-analysis has become very popular in medicine, much to the dismay of mathematicians.

It has been frequently done to determine the "effectiveness" of a treatment compared to a placebo, by mixing the information from many studies that have different requirements for choosing subjects for the studies, different lengths of times of treatment, different drop-out rates, different numbers of participants (usually small numbers), etc.

Sometimes such an analysis is done by comparing studies that compare a single treatment to a placebo, but the studies used different treatments. (Sort of like mixing studies that compare the use of aspirin to a placebo with those that compare ibuprofen to a placebo. Then one claims to be able to deduce whether aspirin or ibuprofen was more effective, when none of the studies compared the two drugs directly.)

Therefore, from a mathematical and also a medical standpoint, trying to get "effectiveness" data comparing drugs from varying studies that only compared a single drug to placebo is not really valid. There are too many factors that can throw one's conclusions off such as variations in study length, severity of the condition in patients of different studies, numbers of dropouts in a study due to severity of side-effects, etc.

So a meta-analysis done in this way is similar to comparing apples to oranges (placebo), persimmons to oranges and redwood trees to oranges, and then coming up with conclusions about how apples, persimmons and redwood trees compare to each other.

To put things even more into perspective, some time ago a meta-analysis of treatments for CFS found that Cognitive Behavioral Therapy (CBT) was the "best" treatment for CFS.

This conclusion ignored the facts that most of the studies analyzed used the Oxford definition of CFS and compared CBT to placebo, and that only a few of the analyzed studies looked at other treatments or at patients identified by the 1988 or 1994 definitions of CFS.

Thus, the conclusion of a meta-analysis can be heavily influenced when the analyzed studies of one treatment greatly outnumber those of another treatment. Some studies may have been conducted poorly as well, and in the case of ME/CFS, the patient populations under examination could be very different.

So the take-away message is to be very dubious about the results of a meta-analysis.