Stata Technical Bulletin, STB-57
Duval and Tweedie use the symmetry argument in a somewhat roundabout way, choosing first to trim extreme positive
studies until the remaining studies meet the symmetry requirements. This makes sense when the studies are subject only to publication
bias, since trimming should preferentially discard the low-weight but extreme studies. However, if other biases affect the data,
in particular if one study is high-weight and extremely positive relative to the remainder of the studies, then the method
could fail to function properly. The user must remain alert to such possibilities.
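The trim step can be sketched in code. The following is a minimal illustration of the trim-and-fill idea, not Steichen's metatrim implementation: it uses Duval and Tweedie's L0 estimator of the number of asymmetric studies with fixed-effect (inverse-variance) pooling, and it simplifies their iteration and tie handling. All study data shown are made up.

```python
import numpy as np

def fixed_effect(theta, v):
    """Inverse-variance weighted pooled estimate and its variance."""
    w = 1.0 / np.asarray(v, float)
    est = np.sum(w * np.asarray(theta, float)) / np.sum(w)
    return est, 1.0 / np.sum(w)

def trim_and_fill(theta, v, max_iter=50):
    """Simplified trim-and-fill using the L0 estimator of the number of
    asymmetric studies. Assumes bias suppresses negative results, so the
    extreme *positive* studies are the ones trimmed."""
    theta, v = np.asarray(theta, float), np.asarray(v, float)
    n, k0 = len(theta), 0
    order = np.argsort(theta)
    for _ in range(max_iter):
        keep = order[: n - k0]                      # drop the k0 most positive studies
        est, _ = fixed_effect(theta[keep], v[keep])
        centered = theta - est
        ranks = np.argsort(np.argsort(np.abs(centered))) + 1
        t_n = ranks[centered > 0].sum()             # rank sum of positive deviations
        k0_new = max(0, int(round((4 * t_n - n * (n + 1)) / (2 * n - 1))))
        if k0_new == k0:
            break
        k0 = k0_new
    trimmed = order[n - k0:]                        # studies to mirror about est
    filled_theta = np.concatenate([theta, 2 * est - theta[trimmed]])
    filled_v = np.concatenate([v, v[trimmed]])
    return est, k0, filled_theta, filled_v
```

On a symmetric funnel (e.g. effects -1, -0.5, 0, 0.5, 1 with equal variances) the sketch trims nothing; on a set with an asymmetric positive cluster it trims one or more studies and appends their reflections about the trimmed estimate.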
Duval and Tweedie’s final step—filling in imputed reflections of the trimmed studies—has no effect on the final trimmed
point estimate in a fixed-effects analysis, but it does cause the confidence interval of that estimate to be smaller than the
interval from either the trimmed or the original data. One could question whether this “increased” confidence is warranted.
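The narrowing is mechanical: the fixed-effect pooled variance is the reciprocal of the total inverse-variance weight, so appending imputed studies can only increase the weight total and shrink the variance. A toy illustration (the study variances are hypothetical):

```python
# Fixed-effect pooled variance is 1 / sum(1/v_i). Filling appends
# imputed mirror studies, so the weight total grows and the pooled
# variance (hence the CI width) can only shrink.
v_orig = [0.04, 0.09, 0.16]               # hypothetical study variances
var_orig = 1 / sum(1 / v for v in v_orig)
v_filled = v_orig + [0.16]                # one imputed mirror study
var_filled = 1 / sum(1 / v for v in v_filled)
assert var_filled < var_orig
```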
The random-effects situation is more complex, as both the trimmed point estimate and confidence interval width are affected
by filling, with a tendency for the filled data to yield a point estimate between the values from the original and trimmed data.
When the random-effects model is used, the confidence interval of the filled data is typically smaller than that of either the
trimmed or original data.
Experimentation suggests that the Duval and Tweedie method trims more studies than might be expected, but because of
the increase in precision induced by the imputation of studies during filling, changes in the “significance” of the results occur
less often than expected. The two operations thus tend to counter each other: trimming reduces the point estimate, while
filling increases the precision.
Another phenomenon noted is a tendency for the heterogeneity of the filled data to be greater than that of the original data.
This suggests that the most likely studies to be trimmed and filled are those that are most responsible for heterogeneity. The
generality of this phenomenon and its impact on the analysis have not been investigated.
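This tendency can be illustrated with Cochran’s Q, the usual heterogeneity statistic. The example below is a contrived sketch, not a general proof: it reflects one extreme study about the simple mean (real trim-and-fill reflects about the trimmed estimate), and the effect sizes and variances are made up.

```python
import numpy as np

def cochran_q(theta, v):
    """Cochran's Q: weighted squared deviations about the pooled estimate."""
    theta, w = np.asarray(theta, float), 1 / np.asarray(v, float)
    est = np.sum(w * theta) / np.sum(w)
    return np.sum(w * (theta - est) ** 2)

theta = [0.1, 0.2, 0.3, 1.2]              # made-up effects, one extreme
v = [0.04] * 4                            # equal variances for simplicity
mirror = 2 * np.mean(theta) - 1.2         # reflect the extreme study
q_filled = cochran_q(theta + [mirror], v + [0.04])
assert q_filled > cochran_q(theta, v)     # filling inflates Q in this example
```

The mirrored study lands far from the pooled center on the opposite side, so its weighted squared deviation adds to Q rather than offsetting anything.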
Duval and Tweedie provide a reasonable development based on accepted statistics; nonetheless, the number and the
magnitude of the assumptions required by the method are substantial. If the underlying assumptions hold in a given dataset,
then, as with many methods, it will tend to under- rather than over-correct. This is an acceptable situation in my view (whereas
“over-correction” of publication bias would be a critical flaw).
This author presents the program as an experimental tool only. Users must assess for themselves both the amount of
correction provided and the reasonableness of that correction. Other tools for assessing publication bias should be used in
tandem; metatrim should be treated as just one of an arsenal of methods needed to fully assess a meta-analysis.
Saved Results
metatrim does not save values in the system S_# macros, nor does it return results in r().
Note
The command meta (Sharp and Sterne 1997, 1998) should be installed before running metatrim.
References
Begg, C. B. and M. Mazumdar. 1994. Operating characteristics of a rank correlation test for publication bias. Biometrics 50: 1088-1101.
Bradburn, M. J., J. J. Deeks, and D. G. Altman. 1998. sbe24: metan—an alternative meta-analysis command. Stata Technical Bulletin 44: 4-15.
Reprinted in The Stata Technical Bulletin Reprints vol. 8, pp. 86-100.
Cottingham, J. and D. Hunter. 1992. Chlamydia trachomatis and oral contraceptive use: A quantitative review. Genitourinary Medicine 68: 209-216.
Duval, S. and R. Tweedie. 2000. A nonparametric “trim and fill” method of accounting for publication bias in meta-analysis. Journal of the American
Statistical Association 95: 89-98.
Egger, M., G. D. Smith, M. Schneider, and C. Minder. 1997. Bias in meta-analysis detected by a simple, graphical test. British Medical Journal 315:
629-634.
Light, R. J. and D. B. Pillemer. 1984. Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
Sharp, S. and J. Sterne. 1997. sbe16: Meta-analysis. Stata Technical Bulletin 38: 9-14. Reprinted in The Stata Technical Bulletin Reprints vol. 7,
pp. 100-106.
——. 1998. sbe16.1: New syntax and output for the meta-analysis command. Stata Technical Bulletin 42: 6-8. Reprinted in The Stata Technical
Bulletin Reprints vol. 7, pp. 106-108.
Steichen, T. J. 1998. sbe19: Tests for publication bias in meta-analysis. Stata Technical Bulletin 41: 9-15. Reprinted in The Stata Technical Bulletin
Reprints vol. 7, pp. 125-133.
Steichen, T. J., M. Egger, and J. Sterne. 1998. sbe19.1: Tests for publication bias in meta-analysis. Stata Technical Bulletin 44: 3-4. Reprinted in The
Stata Technical Bulletin Reprints vol. 8, pp. 84-85.