Controlling the risk of spurious findings from meta‐regression
Abstract
Meta-regression has become a commonly used tool for investigating whether study characteristics may explain heterogeneity of results among studies in a systematic review. However, such explorations of heterogeneity are prone to misleading false-positive results. It is unclear how many covariates can reliably be investigated, and how this might depend on the number of studies, the extent of the heterogeneity and the relative weights awarded to the different studies. Our objectives in this paper are two-fold. First, we use simulation to investigate the type I error rate of meta-regression in various situations. Second, we propose a permutation test approach for assessing the true statistical significance of an observed meta-regression finding. Standard meta-regression methods suffer from substantially inflated false-positive rates when heterogeneity is present, when there are few studies and when there are many covariates. These are typical of situations in which meta-regressions are routinely employed. We demonstrate in particular that fixed effect meta-regression is likely to produce seriously misleading results in the presence of heterogeneity. The permutation test appropriately tempers the statistical significance of meta-regression findings. We recommend its use before a statistically significant relationship is claimed from a standard meta-regression analysis.
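The permutation test described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a fixed-effect (inverse-variance weighted) meta-regression with a single covariate, and it follows the general idea of shuffling covariate values across studies to build a null distribution for the slope's test statistic. Function names (`meta_regression_z`, `permutation_p`) are invented for this sketch.

```python
import numpy as np

def meta_regression_z(y, se, x):
    """z-statistic for the slope in a fixed-effect (inverse-variance
    weighted) meta-regression of effect estimates y on covariate x,
    with known within-study standard errors se."""
    w = 1.0 / se**2
    X = np.column_stack([np.ones_like(x), x])   # intercept + covariate
    WX = X * w[:, None]
    XtWX = X.T @ WX
    beta = np.linalg.solve(XtWX, WX.T @ y)      # weighted least squares
    cov = np.linalg.inv(XtWX)                   # variances taken as known
    return beta[1] / np.sqrt(cov[1, 1])

def permutation_p(y, se, x, n_perm=5000, seed=0):
    """Monte Carlo permutation p-value: reallocate the covariate values
    across studies at random and compare |z| to the observed statistic."""
    rng = np.random.default_rng(seed)
    obs = abs(meta_regression_z(y, se, x))
    count = sum(
        abs(meta_regression_z(y, se, rng.permutation(x))) >= obs
        for _ in range(n_perm)
    )
    # +1 in numerator and denominator so the p-value is never exactly 0
    return (count + 1) / (n_perm + 1)
```

With several covariates under consideration, the same permutation scheme can be applied to the most extreme of the observed statistics, which is one way to temper the multiplicity problem the abstract highlights.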