**Introduction**

A meta-analysis pools the results of previous studies that report on some effect in order to produce a single, more precise estimate of that effect. In this blog, we’ll show you how to conduct a meta-analysis in Stata.

**Sample Dataset and Motivation**

Let’s load a Stata dataset in which previous studies track the effect of teacher expectancy (that is, teacher beliefs about how well students will do) on student IQ.

use https://www.stata-press.com/data/r17/pupiliq

describe

We have 19 studies tracking this effect.
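Before declaring the data for meta-analysis, it can help to glance at the key variables. As a quick sketch (the variable names below are those in the pupiliq dataset loaded above):

list studylbl stdmdiff se in 1/5

This lists the study label, effect size, and standard error for the first five studies.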

**Prepare Data for Meta-Analysis**

In Stata, the syntax for preparing these data for meta-analysis is as follows:

meta set stdmdiff se, studylabel(studylbl) eslabel(Std. Mean Diff.)

Let’s walk you through this code. Here, meta set is the command that tells Stata you are about to declare data for meta-analysis. Next comes stdmdiff, which, in this dataset, holds the effect size (the standardized difference in means). Then comes se, the variable capturing the standard error of that effect size.

Next, after the comma, comes studylabel(studylbl), which tells Stata that the study labels are to be found in the variable studylbl.

Finally, eslabel(Std. Mean Diff.) tells Stata what to call the effect size in your meta-analysis.

**Choosing Fixed versus Random Effects Models**

Here’s what the code above generated in Stata:

Note that the output reports a random-effects model; the main alternative is a fixed-effects model. A good explanation of the difference is as follows:

“The fixed-effect model assumes 1 true effect size underlies all the studies in the meta-analysis, thus the term ‘fixed effect.’ Any differences in observed effects are due to sampling error. Investigators use the singular (effect) since there is only 1 true effect. The random-effects model assumes that the true effect could vary from study to study due to the differences (heterogeneity) among studies….If it were possible to perform an infinite number of studies, the effect estimates of all the studies would follow a normal distribution. The pooled estimate would be the mean or average effect. The effect sizes in the studies that are performed are assumed to represent a random sample of all possible effect sizes, hence the term ‘random effects.’ Investigators use the plural (effects) since there is a range of true effects.” (Dettori et al., 2022, p. 1624).

Some methodologists suggest that fixed-effects approaches to meta-analysis be reserved for situations with very few studies (roughly four or fewer), where between-study variance cannot be estimated reliably. The dataset we are working with contains 19 studies, so the random-effects model is appropriate.

If you wanted to run a fixed effects model instead, you could use:

meta set stdmdiff se, studylabel(studylbl) eslabel(Std. Mean Diff.) fixed

However, as noted above, we will use the random-effects model, given the number of studies (*k* = 19) in the data we are meta-analyzing.

**Run and Interpret the Analysis**

Now let’s run the meta-analysis.

meta summarize

Here’s what you get:

Before discussing the effect size, let’s consider some characteristics of the model. Note that there is a *p* value for the *Q* statistic, along with estimates of tau² (*τ²*) and *I²*.

Borenstein, Hedges, Higgins, and Rothstein (2009, pp. 107, 217) stated that a significant *Q* statistic indicates sufficient heterogeneity in effect sizes (subsequently quantifiable by *I²* and *T²*) to justify a random-effects model. As *Q* is significant here (*p* = .0074), we were justified in our use of random effects. Had *Q* not been statistically significant, we might have considered a fixed-effects model instead.
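If you want to check how sensitive the pooled estimate is to the model choice, Stata’s meta commands let you override the declared model for a single run (this assumes the data have already been declared with meta set, as above):

meta summarize, fixed

This reruns the summary under a fixed-effects model without changing the underlying meta set declaration.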

Note that *I²* is fairly high. According to Borenstein et al., “The statistics *T²* (and *T*) reflect the *amount* of true heterogeneity (the variance or the standard deviation) while *I²* reflects the proportion of observed dispersion that is due to this heterogeneity” (Borenstein et al., 2009, p. 120). Clearly, there is a great deal of heterogeneity in this dataset.

Now let’s look at the pooled effect size, theta, with some visual accompaniment:

meta forestplot

The pooled effect size, theta = 0.084, is small, and 0 falls within its 95% confidence interval (CI) of [-0.018, 0.185]. The *p* value for theta is .1052. Therefore, we can conclude that there is no significant effect of teacher expectancy on pupil IQ. This conclusion seems theoretically plausible, in that IQ is an intrinsic cognitive processing capacity that is unlikely to be influenced by expectancy.
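If you would like to save the forest plot for a report, you can follow it with Stata’s standard graph export command (the filename here is arbitrary):

graph export forestplot.png, replace

This writes the most recently drawn graph to a PNG file in the working directory.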

BridgeText can help you with all of your **statistical analysis needs**.

**References**

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). *Introduction to meta-analysis*. Wiley.

Dettori, J. R., Norvell, D. C., & Chapman, J. R. (2022). Fixed-effect vs random-effects models for meta-analysis: 3 points to consider. *Global Spine Journal*, *12*(7), 1624–1626. https://doi.org/10.1177/21925682221110527