Meta-analyses

Meta-analyses have been around for a long time. By some accounts, the first meta-analyses trace back to 17th-century astronomy research. They are also prevalent in medical research, with papers dating back to the early 1900s. Since 1993, the Cochrane Reviews have published comparative health findings. More recently, the growth of open data initiatives has made it easier to perform data integration and meta-analyses in your favorite languages.

Rationale

Meta-analyses, when done properly, have both conceptual and statistical benefits. Conceptually, meta-analyses can detect the presence of publication bias. Statistically, results from individual studies can be combined to make conclusions more generalizable, and estimates can have improved precision and accuracy.

Publication bias

Since meta-analyses rely on published data, publication bias is a concern. PRISMA, a set of criteria for minimum levels of evidence, is used by both authors and reviewers to identify and reduce publication bias.

General steps of meta-analysis workflow

  1. Exploratory analysis to identify data anomalies

  2. Normalize the studies’ reported metrics (e.g. relative risk, odds ratios)

  3. Choose and fit the appropriate model(s) (e.g. fixed-, random-, or mixed-effects; see the sketch after this list)

  4. Validate the results
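
To make these steps concrete, below is a minimal sketch in plain Python (numpy and scipy only, rather than one of the dedicated packages listed later). It converts hypothetical 2x2 study counts into log odds ratios (step 2), pools them with a fixed-effect inverse-variance model (step 3), and checks heterogeneity with Cochran's Q and I² (step 4). The study counts and variable names are invented for illustration.

```python
# Hedged sketch of steps 2-4 with made-up data; not a substitute for a
# dedicated meta-analysis package.
import numpy as np
from scipy import stats

# Step 2: convert each (hypothetical) study's 2x2 counts to a common metric,
# the log odds ratio, adding 0.5 to every cell as a simple continuity correction.
studies = [  # (events_treatment, n_treatment, events_control, n_control)
    (12, 100, 20, 100),
    (8, 80, 15, 85),
    (30, 250, 45, 240),
]
yi, vi = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    yi.append(np.log((a * d) / (b * c)))      # log odds ratio
    vi.append(1 / a + 1 / b + 1 / c + 1 / d)  # its sampling variance
yi, vi = np.array(yi), np.array(vi)

# Step 3: fixed-effect model via inverse-variance weighting.
w = 1 / vi
pooled = np.sum(w * yi) / np.sum(w)
se = np.sqrt(1 / np.sum(w))

# Step 4: basic validation -- Cochran's Q and I^2 for between-study heterogeneity.
Q = np.sum(w * (yi - pooled) ** 2)
df = len(yi) - 1
I2 = max(0.0, (Q - df) / Q) * 100
p_het = stats.chi2.sf(Q, df)

print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f})")
print(f"Q = {Q:.2f} (p = {p_het:.3f}), I^2 = {I2:.0f}%")
```

A random-effects model would additionally estimate the between-study variance (for example with the DerSimonian-Laird method) and add it to each study’s variance before weighting; the packages listed under Implementations handle this for you.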

Visualization

For exploratory data analysis, a Q-Q plot is helpful in assessing the datasets’ distributions for anomalies. For comparing metrics across studies, a forest plot (sometimes called a caterpillar plot) or funnel plot can also be employed.
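
As an illustration, here is a minimal forest plot in matplotlib. The per-study log odds ratios, variances, pooled estimate, and standard error are hard-coded to roughly the values the sketch above produces; in practice you would pass in your own yi, vi, pooled, and se.

```python
# Hedged forest-plot sketch: one row per study (square marker, 95% CI),
# with the pooled estimate (diamond) on the bottom row.
import numpy as np
import matplotlib.pyplot as plt

yi = np.array([-0.59, -0.63, -0.52])   # hypothetical log odds ratios
vi = np.array([0.15, 0.21, 0.06])      # their sampling variances
pooled, se = -0.56, 0.19               # pooled log OR and its standard error

or_i = np.exp(yi)
lo = np.exp(yi - 1.96 * np.sqrt(vi))
hi = np.exp(yi + 1.96 * np.sqrt(vi))

fig, ax = plt.subplots(figsize=(6, 3))
ypos = np.arange(len(yi), 0, -1)       # studies top to bottom, pooled row at y=0
ax.errorbar(or_i, ypos, xerr=[or_i - lo, hi - or_i], fmt="s", color="black")
ax.errorbar(np.exp(pooled), 0,
            xerr=[[np.exp(pooled) - np.exp(pooled - 1.96 * se)],
                  [np.exp(pooled + 1.96 * se) - np.exp(pooled)]],
            fmt="D", color="firebrick")
ax.axvline(1.0, linestyle="--", color="grey")  # line of no effect (OR = 1)
ax.set_xscale("log")
ax.set_yticks(list(ypos) + [0])
ax.set_yticklabels([f"Study {i + 1}" for i in range(len(yi))] + ["Pooled"])
ax.set_xlabel("Odds ratio (log scale)")
plt.tight_layout()
plt.show()
```

A funnel plot uses the same ingredients, plotting each study’s effect against its standard error (or precision) so that asymmetry suggestive of publication bias is easier to spot.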

Implementations

There are a number of R packages to perform meta-analyses, including meta, rmeta, and metafor. The documentation for metafor has a handy chart comparing the three packages.

PyMeta is an online tool; its pip-installable counterpart is called PythonMeta.

References

A basic overview of meta-analyses, including PRISMA, can be found in Haidich 2010. There are links to further references at this meta_analysis_books repo.

Written on April 4, 2020