Author: Whiston, Susan C
Date published: July 1, 2011
In response to the theme of this special section on getting published in counseling journals, the purpose of this article is to provide an overview of steps in conducting a meta-analysis. Meta-analysis has been increasingly recognized as a methodologically sound approach to synthesizing research (Cooper, 2010; Cooper, Robinson, & Dorr, 2006). The purpose of meta-analysis is to quantitatively aggregate the results of numerous empirical studies on a topic of interest (Erford, Savin-Murphy, & Butler, 2010). It can be a particularly attractive endeavor for researchers and practitioners who have been working within a specific area for some time or for doctoral students who have completed very comprehensive literature reviews. The term meta-analysis was first coined by Gene V Glass in 1976. Meta is a Greek word that means "behind" or "in back of," but Glass (2000) emphasized that meta-analysis "is not the grand theory of research; it is simply a way of speaking of the statistical analysis of statistical analyses." He defined meta-analysis as "the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings" (Glass, 1976, p. 3).
Counseling researchers may want to consider meta-analytic reviews because their efforts could result in major contributions to the field. As a type of literature review, a meta-analysis aggregates research studies that usually are published in diverse journals (e.g., education, social work, psychology). Therefore, counseling practitioners, who often do not have time to read a wide variety of journals, appreciate meta-analytic studies in which researchers systematically analyze research findings from diverse journals. Furthermore, meta-analytic researchers are increasingly including unpublished studies such as dissertations and theses in their quantitative reviews, which can further expand the comprehensive nature of a meta-analytic review and increase the degree to which the results are useful. Meta-analyses, moreover, can supplement traditional qualitative literature reviews because the process produces effect sizes, which are quantitative indices of the practical significance of the effect. For example, meta-analytic procedures can determine the magnitude of results of a counseling treatment over no treatment or the level of association between attending counseling sessions and well-being. With the current focus on the empirical support for treatment approaches (e.g., evidence-based practices), meta-analysis can provide useful information regarding the degree of empirical support for a type of treatment. This, however, should not be taken to mean that meta-analytic studies are the only source of evidence-based practices.
Durlak (1995) contended that conducting a meta-analysis is analogous to conducting a single scientific experiment in the social or behavioral sciences. As compared with collecting data from participants, in a meta-analysis, the data are collected from individual studies. Hence, rather than doing statistical analyses on the data gathered from multiple participants, the statistical analyses in meta-analyses are conducted on data gathered from multiple studies. Later in this article, we will discuss steps in conducting a meta-analysis, which will include formulating a research question and then finding studies that are specifically related to that question. Similar to other types of quantitative studies, with meta-analyses there are independent and dependent variables. In counseling research, many of the meta-analyses have focused on examining the effectiveness of different interventions, programs, or therapeutic modalities (Erford et al., 2010). In these studies, the dependent variable is some measure of effectiveness (e.g., well-being, level of depression, symptomatology) and the independent variables are study variables (e.g., client age, counselor training, types of treatment). As an example, Allumbaugh and Hoyt (1999) examined the effectiveness of grief therapy. Other meta-analyses have compared different schools of psychotherapy (e.g., Ahn & Wampold, 2001; Miller, Wampold, & Varhely, 2008; Wampold, Minami, Baskin, & Tierney, 2002). As will be discussed in Step 2 (see below), however, meta-analysis can be used to synthesize other types of research, such as whether the personality characteristic of conscientiousness is associated with longevity (Kern & Friedman, 2008). Meta-analysis also can be used to combine information from different studies that were conducted regarding an assessment instrument, such as Helms's (1999) meta-analysis of Cronbach alphas of the White Racial Identity Attitude Scale.
The following discussion of the steps involved in conducting a meta-analysis is only a primer designed to provide an overview of meta-analytic techniques for counseling researchers who are interested in quantitatively summarizing counseling research studies. This article is not intended as a comprehensive guide, and counseling researchers who decide to conduct a meta-analysis are directed to other sources, such as Borenstein, Hedges, Higgins, and Rothstein (2009), Cooper (2010), Hunter and Schmidt (2004), and Lipsey and Wilson (2001). In particular, Cooper, Hedges, and Valentine's (2009) The Handbook of Research Synthesis and Meta-Analysis is an excellent resource to accompany this overview of meta-analysis.
* Step 1: Formulate Research Question(s)
In considering conducting a meta-analysis, the first step involves formulating the research question(s). This first step is critical because it influences all future decisions, such as determining whether a study should be included or whether it should be excluded. In our view, the formulation of meta-analytic research question(s) should start with a counseling researcher's interests. At this point in the process, Cooper (2010) suggested that individuals ask themselves, "What are the constructs that I would like to study?" Unless an individual is interested in the research topic, he or she may not complete the meta-analysis because the technique requires hours of reading and coding a substantial number of studies. Counseling researchers frequently are curious about topics that are well suited to meta-analytic techniques, such as, Does treatment X help clients with depression? What factors are associated with resiliency? What counselor factors are associated with better client outcomes? Is group counseling more effective than individual counseling for clients with posttraumatic stress disorder?
After a research area of interest has been identified, the counseling researcher should then move to operationally defining the constructs of interest. One of the basic premises of meta-analysis is that it analyzes a series of studies that address an identical conceptual hypothesis (Cooper, 2010). Identical conceptual hypothesis means the studies are not measures of similar concepts (e.g., self-esteem and self-efficacy) but measures of conceptually identical topics of interest. Therefore, the counseling researcher must develop clear operational definition(s) of the central construct(s) to ensure that the results from the studies to be combined are conceptually alike even though researchers may be measuring the construct(s) using different instruments. For many researchers, operational definitions require an initial examination of the literature to see how other researchers have operationally defined the construct of interest. An example of an operational definition for career counseling interventions comes from Spokane and Oliver (1983), who defined career interventions as "any treatment or effort intended to enhance an individual's career development or to enable the person to make better career-related decisions" (p. 100). This operational definition can help a counseling researcher select studies for the meta-analysis that involve a treatment or effort, and the outcomes of these treatments or efforts must involve career development or making a career-related decision.
In meta-analysis, there is frequently a main effect question, such as, Are career interventions effective? However, with advances in meta-analysis, this is rarely the sole research question. Often there are questions that include moderator variables to address questions such as, What types of career interventions (e.g., individual or group) are most effective with which types of clients (e.g., adolescents or adults)? Again, the moderator variables need to be operationally defined. For example, Does a series of career exercises provided by a school counselor with ninth graders in English classes meet the operational definition of a group intervention? For counseling researchers who are interested in meta-analytic questions, spending time clearly articulating the variables of interest before starting to gather data from studies will often result in a better meta-analytic study.
* Step 2: Determine Meta-Analytic Approach That Best Fits
After the counseling researcher has clearly articulated the research question and operationally defined constructs and variables, then the next step is to decide which method of quantitatively summarizing studies can best address the research question(s) (Lipsey, 2009). This process involves determining the key metric in meta-analysis, namely, the index of effect size. Effect size serves as a standardized quantitative metric that researchers are able to calculate from the results of individual studies. For most counseling-related research questions, a counseling researcher can consider using one of the following major approaches to categorize the index of effect: the d index, r index, or odds ratios. It is now common for counseling researchers to report effect sizes in all research studies, and researchers are directed to Trusty, Thompson, and Petrocelli (2004) for a more detailed discussion on reporting effect sizes.
The d index (also referred to as g or ES) or standardized mean difference is commonly applied when the research question is related to group differences. For example, if a counseling researcher is interested in whether Treatment A is better than Treatment B, he or she would use the d index. The counseling researcher would need to find studies in which Treatment A is compared with Treatment B, and the effect size would be calculated as the difference between the group means (the mean of Treatment A minus the mean of Treatment B) divided by the pooled standard deviation on the outcome measure. The effect size for each study comparing Treatment A with Treatment B then becomes a scale-free measure (Cooper et al., 2006) because dependent measures that vary in terms of means and standard deviations are converted to a common metric. If the counseling researcher's question involved using one specific dependent measure, then the simple difference between group means can be used as the index of effect size (Durlak & Lipsey, 1991). However, in most cases, researchers in counseling use different dependent measures (e.g., there are multiple measures of career development), and thus, the counseling researcher would use the d index. An example of a meta-analysis that used the d index is Whiston, Sexton, and Lasoff (1998), who examined the mean difference between individuals receiving career interventions and those receiving no career interventions.
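To make the arithmetic concrete, the d index described above can be sketched in a few lines of Python. All of the means, standard deviations, and sample sizes below are hypothetical, invented purely for illustration:

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Standardized mean difference: group mean difference divided by
    the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Hypothetical study: Treatment A (M = 24.0, SD = 5.0, n = 30)
# versus Treatment B (M = 20.0, SD = 5.0, n = 30)
d = cohens_d(24.0, 20.0, 5.0, 5.0, 30, 30)
print(round(d, 2))  # 0.8
```

Because the mean difference is expressed in pooled standard deviation units, a study using a 100-point depression inventory and one using a 30-point inventory yield effect sizes on the same scale-free metric.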
When the counseling researcher's question involves a relationship (e.g., Is there a correlation between the number of sessions a group member attends and a measure of group effectiveness or counseling outcome?), then the counseling researcher would select the r index or product-moment correlation index of effect (Durlak, Meerson, & Foster, 2003). To calculate this type of effect size, both the independent and dependent variables of interest need to be continuous variables. An example of a meta-analysis using an r index was one conducted by Martin, Garske, and Davis (2000), who reported a mean correlation of .22 between the therapeutic alliance and positive outcomes. When studies use both continuous and dichotomous dependent measures, biserial and point-biserial correlations (or the phi coefficient) can be used as variations of the r index to express effect size (Durlak & Lipsey, 1991). There are also a few researchers, such as Sheu et al. (2010), who are using path analysis to conduct meta-analyses.
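One common way to combine r-index effect sizes is to convert each correlation to Fisher's z, average the z values weighted by sample size, and transform the result back to the r metric. The sketch below illustrates this approach with invented correlations and sample sizes (not taken from any actual study):

```python
import math

def mean_correlation(rs, ns):
    """Average correlations via Fisher's z transform, weighting
    each study by n - 3 (the inverse of z's sampling variance)."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    weights = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    # Back-transform the weighted mean z to the r metric
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)

# Hypothetical alliance-outcome correlations from three studies
print(round(mean_correlation([0.20, 0.25, 0.22], [50, 80, 120]), 3))  # 0.226
```

Weighting by n - 3 gives larger studies, whose correlations are estimated more precisely, proportionally more influence on the average.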
The odds ratio is another common index of effect when dichotomous (or binary, categorical) outcome measurements are involved in the analysis. It is considered a less intuitive measure of effect size compared with the d index or the r index (Fleiss & Berlin, 2009). Basically, the odds ratio describes the strength of association between two dichotomous variables. An odds ratio of 1 indicates no effect, and the magnitude of the odds ratio describes the degree of effect against no effect, that is, how many times a typical person in a group is more likely to fall into a positive binary outcome category (Durlak et al., 2003). An example of an odds-ratio meta-analysis is Bauer, Tharmanathan, Volz, Moeller, and Freemantle (2009), who compared the response rate of venlafaxine with other antidepressant medications. Odds ratios are often used when only one variable (typically the dependent variable) is dichotomous. In this case, an odds ratio characterizes the change in the odds on the dichotomous dependent variable for a unit change in the independent variable (Trusty et al., 2004). Besides the odds ratio, a few other effect size estimates have been commonly used to handle dichotomous variables, including the difference between two probabilities, the ratio of two probabilities, and the phi coefficient. Fleiss and Berlin (2009) provided detailed statistical descriptions and computation procedures for all four of these measures.
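From a 2 x 2 table of counts, the odds ratio is simply the odds of a positive outcome in one group divided by the odds in the other. The following sketch uses invented counts (loosely echoing the treatment-response example above, not the actual Bauer et al. data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2 x 2 table of counts:
                 responded   did not respond
    treatment        a              b
    comparison       c              d
    """
    return (a / b) / (c / d)

# Hypothetical trial: 40 of 60 responded to the treatment,
# while 25 of 75 responded to the comparison condition
print(odds_ratio(40, 20, 25, 50))  # (40/20) / (25/50) = 4.0
```

Here the odds of responding are 2 to 1 in the treatment group and 1 to 2 in the comparison group, so the treatment group's odds of a positive outcome are four times as large.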
The three approaches described above are quite general, and the actual calculation of effect sizes is considerably more complex. In a meta-analysis, there are methods for combining the d index, r index, and odds ratio, and these three indices can also be converted from one to another (see Borenstein et al., 2009; Fleiss & Berlin, 2009; Thompson, 2002). However, for beginning meta-analytic researchers, we suggest considering which of the three general classes of effect sizes matches their research questions in order to start the process of identifying studies that can be used in their meta-analyses.
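Commonly cited conversion formulas (of the kind discussed by Borenstein et al., 2009) can be sketched as follows; the d-to-r formula shown assumes roughly equal group sizes, and the odds-ratio conversion uses the logit method:

```python
import math

def d_to_r(d):
    """Convert a standardized mean difference to a correlation,
    assuming approximately equal group sizes."""
    return d / math.sqrt(d**2 + 4)

def r_to_d(r):
    """Convert a correlation back to a standardized mean difference."""
    return 2 * r / math.sqrt(1 - r**2)

def odds_ratio_to_d(odds_ratio):
    """Convert an odds ratio to d via the logit method."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

print(round(d_to_r(0.8), 3))  # 0.371
print(round(r_to_d(0.371), 2))  # 0.8
```

Such conversions allow a meta-analyst to place studies reporting different effect size metrics on a single common scale before combining them, although each conversion rests on assumptions that should be checked against the cited sources.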
* Step 3: Search Literature and Identify Possible Studies
The process of searching for studies that correspond to the research question and that meet the operational definitions of the constructs defined earlier is typically a time-consuming task. The purpose of a meta-analysis is to aggregate all of the research studies in a defined area to make conclusions based on the compendium of research. Incorporating the results of all studies may be difficult for many reasons (e.g., a school counselor just presented the results to the local school board, or the results of a study are not statistically significant and thus are not published). Valentine, Pigott, and Rothstein (2010) contended that the terms systematic review and meta-analysis are often used interchangeably, so a meta-analytic study should include a description of the systematic nature of the search for pertinent studies. In this way, individuals can have more confidence in the results of meta-analytic studies when it appears the researchers made concerted efforts to find a good sampling of studies in the area of interest.
Advances in technology have helped meta-analytic researchers identify potential studies. Counseling researchers can use a number of search engines and databases such as PsycINFO and ERIC through the libraries of many universities. With these databases, researchers generally use multiple terms or descriptors (e.g., career counseling, career development, interventions, occupational guidance, and occupational choice). Meta-analytic researchers also frequently use the reference lists from pertinent studies and possibly existing reviews of research in the area. One of the decisions a meta-analytic researcher needs to make is whether to include both published and unpublished studies. In the past, many meta-analyses were conducted with only published studies because of the difficulties with identifying and getting copies of unpublished studies. Some university libraries, however, have databases of dissertations and theses (e.g., ProQuest) that make the retrieval of unpublished studies a little easier. The inclusion of unpublished studies strengthens the findings of a meta-analysis because it generally means that the authors have a more representative sample of all of the possible studies that might exist.
Sometimes researchers will wonder about sample size and how many studies they need to conduct a meta-analysis. In most cases, counseling researchers should attempt to find as many studies as possible. Those interested in power analyses to estimate a sufficient sample size for a specific meta-analytic study are directed to Valentine et al. (2010) and Hedges and Pigott (2001). In determining power, meta-analytic researchers will need to determine whether they are conducting a fixed- or random-effects meta-analysis, which we discuss later. Another technique is the fail-safe N, which is calculated after the average effect size has been computed. The fail-safe N is the number of unfound studies with null results that would be needed to reduce the average effect size to the point of nonsignificance (Lipsey & Wilson, 2001).
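One simple variant of this idea is Orwin's fail-safe N, which asks how many unretrieved null-result studies would be needed to dilute the mean effect size down to a chosen criterion level (rather than to exact nonsignificance). A minimal sketch with hypothetical numbers:

```python
def orwin_fail_safe_n(k, mean_es, criterion_es):
    """Orwin's fail-safe N: the number of unretrieved studies with an
    effect size of zero needed to pull the mean effect size of k
    located studies down to criterion_es."""
    return k * (mean_es - criterion_es) / criterion_es

# Hypothetical: 30 located studies with a mean d of 0.50; how many
# null-result studies would reduce the mean to a trivial d of 0.10?
print(round(orwin_fail_safe_n(30, 0.50, 0.10)))  # 120
```

A large fail-safe N relative to the number of located studies suggests the overall conclusion is unlikely to be overturned by studies sitting unpublished in file drawers.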
* Step 4: Determine Inclusion Criteria and Develop the Coding Manual
As indicated previously, the determination of whether a study can be used in a meta-analysis is based primarily on the formulation of the research questions. One criticism of meta-analysis is that researchers combine studies of diverse constructs, resulting in an analysis of "fruits" when it may be more relevant to know about "apples" and "oranges" separately. Thus, the first step in determining whether a study can be included is whether that research study involved the independent and dependent measures that were operationally defined in the first step of the meta-analytic process. Some meta-analytic researchers only select studies in which the dependent measure is assessed with specified instruments for which the meta-analytic researchers have investigated the reliability and validity evidence. Other selection criteria for studies to be included in a meta-analysis may involve whether the clinicians used a treatment manual, the type of control group, whether there was random assignment to treatment groups, minimal sample sizes, a minimum number of counseling sessions provided, whether the treatment was provided in a school setting, or other inclusion criteria that relate to the research question. A final decision of whether a study can be included often involves whether the study contains sufficient information (e.g., means and standard deviations or correlations) to calculate the type of effect size the counseling researcher selected in earlier steps.
Before a counseling researcher can begin recording study information to be used in calculating an average effect size, the counseling researcher should develop a coding manual that will guide the systematic coding of pertinent information from each study that will be used to calculate results. In the process of developing the coding manual, counseling researchers need to consider how they will address issues of independence of effect sizes per study. One of the assumptions of meta-analysis is that the effect sizes are independent (Hedges, 2009). It is not unusual for researchers to use multiple dependent measures (e.g., career maturity and career decidedness), or a researcher could evaluate an intervention at the conclusion of treatment (e.g., level of substance use) and then again 6 months later to see if the positive effects of treatment continued. The problem of nonindependence could also occur if the researcher used the same sample in two different studies. For example, there would be a problem with dependency if meta-analytic researchers included one study published by researchers regarding the relationship between self-efficacy and academic achievement and another study on the relationship between self-efficacy and degree of planning for college with the same sample. There are multiple approaches to addressing issues of dependency, such as selecting the smallest effect size. A common approach to issues of dependency is to average the effect size from all of the different dependent measures, but this approach may blur findings if diverse dependent measures are used. Another approach is to select the most relevant or psychometrically sound dependent measures. The issue of dependency needs to be resolved before a researcher starts extracting information from studies and, therefore, is critical in the development of the coding manual.
The primary purpose of the coding manual for the meta-analysis is to guide the counseling researcher's extraction of study information that will provide him or her with interesting data to analyze. As Wilson (2009) indicated, replication is the bedrock of the scientific method, and the coding manual ensures that other coders will record the same data from the studies. The description of a coding manual in a meta-analytic study documents to the reader that a potential replication of the meta-analysis would result in the same findings. A coding manual usually starts with a process for evaluating whether a potential study meets the criteria for inclusion. Furthermore, the coding manual provides instructions for recording information from each study that will be used to calculate effect sizes and the analyses of moderator variables; hence, recording instructions for all potential moderators should be included in the coding manual or protocol. For example, if a counseling researcher had not thought about how effect sizes may vary among men and women and has already extracted information from 50 out of 100 studies, he or she may be disinclined to recode the 50 completed studies in order to examine gender differences. Not only do meta-analytic researchers need to identify interesting moderator variables in developing a coding manual, but they also need to consider study quality variables (see Valentine, 2009). One of the criticisms of meta-analysis is that studies of poor quality are given the same weight as more methodologically rigorous studies in the calculation of effect size (Chow, 1987); therefore, study characteristics are often recorded so that analyses can be conducted regarding characteristics such as random assignment, reliability and validity of measures, and participant attrition rate.
The coding manual is primarily developed to ensure systematic coding, and the reliability of the coding will be better when the manual is precise, detailed, and comprehensive. A meticulous coding manual reduces the chances of variables being miscoded and, therefore, improves the legitimacy of the results. Usually, the study coders will conduct a trial run on the coding process with a couple of studies and make revisions to the coding manual based on these experiences. These studies will be officially coded later in the process; the intent of this initial coding is only to improve the coding manual.
* Step 5: Extract and Code Study Information
To extract the necessary information from each study to conduct a meta-analysis, researchers will need a copy of the entire published or unpublished study. Coders of research studies for a meta-analysis have the advantage of becoming quite knowledgeable about research in the area of the meta-analysis because they read and record detailed information from each of the studies they code.
Reliability and validity of dependent and independent variables are at issue in any experimental study; therefore, the reliability and validity of data gathered by the coder or coders are also of concern in a meta-analysis. A common problem in meta-analysis is coder drift (Wilson, 2009), wherein subtle changes in the coding process evolve during the laborious coding process and the coding procedure at the end is somewhat different from the initial coding process. In most meta-analytic studies, a portion or all of the studies are double-coded and indicators of reliability are calculated to give the readers evidence of whether drift occurred. Researchers typically select procedures such as kappa or intraclass correlations to calculate interrater reliability in meta-analyses (Wilson, 2009). A comprehensive and detailed coding manual and training of coders can often reduce differences in coding and result in good interrater reliability estimates.
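For categorical coding decisions, Cohen's kappa adjusts the observed agreement between two coders for the agreement expected by chance. The sketch below computes kappa for two hypothetical coders classifying 10 invented studies:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two coders'
    categorical ratings of the same set of studies."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each coder's marginal proportions
    p_chance = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Two coders classifying 10 studies by type of group assignment
coder1 = ['random'] * 6 + ['nonrandom'] * 4
coder2 = ['random'] * 5 + ['nonrandom'] * 5
print(round(cohens_kappa(coder1, coder2), 2))  # 0.8
```

Here the coders agree on 9 of 10 studies (90% raw agreement), but because half that agreement could occur by chance, kappa is a more conservative .80.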
As a part of designing the coding process, counseling researchers should also consider the next step, data analyses, and how to structure the database to facilitate data analyses. Some meta-analytic researchers have developed methods by which their coding is recorded directly into a database. There are various software options to conduct meta-analysis, and counseling researchers need to determine their methods for analyzing data early on so that the coding process easily leads to data entry and data analyses. For example, if the counseling researchers are using the SPSS macros written by Lipsey and Wilson (2001), then the data files must correspond to those macro files.
* Step 6: Data Analyses
The data analysis in a meta-analysis involves a process by which the information from individual studies is synthesized using various calculations, and, based on the results of these calculations, statistical inferences are made about the population from which the study samples are drawn (Cooper, 2010). The statistical analyses in a meta-analysis focus on the variation and distribution of effect sizes and the relationships between effect sizes and moderators of interest. In meta-analyses, effect size estimates are typically treated as the dependent variable, whereas moderator variables are considered independent variables (Durlak & Lipsey, 1991). In meta-analytic data analyses, the researcher at a minimum produces results concerning (a) the average effect size, (b) confidence intervals for the effect size, (c) tests of significance, and (d) homogeneity analyses. The average effect size is an indicator of the average magnitude of the effect, whether it concerns correlations or group differences. The next step is to calculate confidence intervals around the average effect size. This step is crucial because confidence intervals typically are used to examine the significance of the effect size and, later, to begin the process of examining whether there are differences in effect sizes across studies. There are multiple methods for determining both the statistical and practical significance of an average effect size, but a common method is examining the 95% confidence interval and determining whether it includes zero (Lipsey & Wilson, 2001).
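The core outputs listed above can be sketched for a fixed-effect analysis: each study's effect size is weighted by the inverse of its sampling variance, and the weighted mean, its 95% confidence interval, and the homogeneity statistic Q follow directly. The effect sizes and variances below are hypothetical:

```python
import math

def fixed_effect_summary(effects, variances, z_crit=1.96):
    """Inverse-variance weighted mean effect size, its 95% confidence
    interval, and the homogeneity statistic Q (fixed-effect model)."""
    weights = [1 / v for v in variances]
    mean_es = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci = (mean_es - z_crit * se, mean_es + z_crit * se)
    # Q: weighted squared deviations of study effects from the mean
    q = sum(w * (e - mean_es) ** 2 for w, e in zip(weights, effects))
    return mean_es, ci, q

# Hypothetical d values and their sampling variances from four studies
mean_es, ci, q = fixed_effect_summary([0.40, 0.55, 0.30, 0.65],
                                      [0.04, 0.05, 0.03, 0.06])
print(round(mean_es, 3), [round(x, 3) for x in ci], round(q, 2))
```

In this invented example the 95% confidence interval excludes zero, so the average effect would be judged statistically significant; Q would then be compared with a chi-square distribution with k - 1 degrees of freedom to test homogeneity.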
Consistent with other statistical procedures, meta-analysis involves sampling variance and probability. In meta-analysis, however, instead of examining the results from a sample of people and whether those results occurred by chance, the counseling researcher is attempting to determine whether the variation among studies is a result of study differences or is what would be expected based on the probability of sampling error. In meta-analysis, this is expressed in tests of homogeneity (Hedges & Olkin, 1985) or comparisons of observed and expected variance (Hunter & Schmidt, 2004). These analyses inform the counseling researcher whether the variance is substantial enough to proceed with investigations of whether there is systematic variation related to moderator variables.
Initial Data Analysis Considerations
Before we discuss data analysis related to average effect size, confidence intervals, and tests of homogeneity, it is important to discuss some preliminary considerations related to conducting a meta-analysis. There are a series of decisions that will guide the process of combining effect size estimates from individual studies and additional data analyses. These different sets of decisions influence the extent to which it is possible to explain variation in effect size measures and how confident the counseling researcher can be in determining whether moderator variables influence effect sizes.
Underlying assumptions of meta-analysis. The first decision counseling researchers need to make before proceeding with data analysis is whether their data meet the following underlying assumptions of meta-analysis: (a) All individual findings are related to the same group differences or the correlations examine the same constructs, (b) comparisons or tests performed in the meta-analysis are independent of each other, and (c) the results from the primary studies are accurate and valid. We have previously discussed, in brief, the importance of these three assumptions, but to summarize, first, a meta-analysis is a synthesis of research in a specific area; therefore, if a researcher includes studies outside that area, then the calculation of effect sizes is "contaminated." Second, the statistical procedures used in meta-analysis assume independence. For example, if some studies were from the same sample and this sample produced unusually high or low effect sizes, then the overall effect size could be skewed if these dependent measures were used. The third assumption concerns the accuracy of the reported results from each study and the belief that the primary researchers made valid assumptions when computing their results (Cooper, 2010). Any deviation or violation of these assumptions may introduce bias or distortion to the results of the meta-analysis, which must be considered, and adjustments are typically made to correct or minimize the influence of not meeting an assumption. For example, several statistical techniques are discussed in Gleser and Olkin (2009) to adjust for issues of interdependence.
Unit of analysis. The second question a counseling researcher needs to consider is what unit of analysis will be used for each study in calculating the d index or standardized mean difference (i.e., group differences), the r index or correlation coefficient (i.e., relationship measure), or odds-ratio effect sizes. Related to decisions about unit of analysis are issues of dependence, addressed earlier in this article, when multiple dependent or outcome measures are used in one study (Durlak & Lipsey, 1991). One approach is to use every outcome or dependent measure in each study as the unit of analysis regardless of the number of measures in a study. This approach often leads to disproportional weighting for studies that use numerous measures and potential interdependencies among effect size measures. Alternatively, some meta-analytic researchers use the average of the effect sizes from multiple outcome measures (see Whiston et al., 1998), but this approach may result in a combination of unrelated measures, and the average effect size could be based on both sound and unsound measures. The third approach is to treat each construct domain as the unit of analysis and keep each dependent or outcome measure separate in the tests of the influences of moderators (see Anderson & Whiston, 2005). Although this approach provides specificity, in some cases there may be too few studies that used those specific dependent or outcome measures to conduct tests of the effects of moderator variables. Lipsey and Wilson (2001) provided a detailed discussion and comparison of these approaches.
Fixed-effect and random-effect models. Another decision a counseling researcher needs to make when conducting a meta-analysis is whether a fixed-effect (FE) or a random-effect (RE) model is more appropriate because this influences the calculations within the meta-analysis and the inferences that can be drawn from the findings. This decision centers on how the counseling researcher conceptualizes the meta-analysis and the degree to which the researcher can generalize the results (Hedges, 2009). If the counseling researcher only wishes to make inferences about the effect size parameters from the specific set of studies included in the meta-analysis or sets of studies identical or similar enough to it, then the FE model is appropriate. However, if a counseling researcher wishes to make inferences and generalizations beyond the observed studies and about the hypothetical population from which the studies are drawn, then the RE model should be used. Currently, there are still controversies surrounding this issue (Hedges, 2009). Nevertheless, Hunter and Schmidt (2000) recommended that individuals routinely select the RE model over the FE approach because of the larger biases that the FE model may introduce.
In Chapter 7 of their classic book on meta-analysis, Hedges and Olkin (1985) described the procedures for the FE approach and for analyzing moderator variables using categorical models, general linear models, and multiple regression. RE models have two major variations, the classical approach and the Bayesian approach, which differ in how they conceptualize the random variation considered in RE models (Raudenbush, 2009). The classical approach is more similar to the FE model; however, the counseling researcher must incorporate a good estimate of the random-effects variance, which is used in calculating the mean effect size, confidence intervals, tests of significance, and tests of homogeneity of effect sizes. Konstantopoulos and Hedges (2009) and Raudenbush (2009) provided detailed descriptions and discussions of the FE and RE models, respectively.
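To illustrate the computational difference between the two models, the following Python sketch contrasts a fixed-effect inverse-variance average with the DerSimonian-Laird estimator, one common classical RE approach; the effect sizes and variances are hypothetical:

```python
import math

def fixed_effect(es, var):
    """FE model: weight each effect size by the inverse of its variance."""
    w = [1.0 / v for v in var]
    mean = sum(wi * ei for wi, ei in zip(w, es)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return mean, se

def random_effects_dl(es, var):
    """Classical RE model (DerSimonian-Laird): add an estimate of the
    between-study variance (tau^2) to each study's sampling variance."""
    w = [1.0 / v for v in var]
    sw = sum(w)
    fe_mean = sum(wi * ei for wi, ei in zip(w, es)) / sw
    q = sum(wi * (ei - fe_mean) ** 2 for wi, ei in zip(w, es))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(es) - 1)) / c)  # between-study variance
    w_star = [1.0 / (v + tau2) for v in var]
    mean = sum(wi * ei for wi, ei in zip(w_star, es)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return mean, se, tau2

# Hypothetical effect sizes and their sampling variances
es, var = [0.2, 0.5, 0.8], [0.04, 0.05, 0.04]
fe_mean, fe_se = fixed_effect(es, var)
re_mean, re_se, tau2 = random_effects_dl(es, var)
# The RE standard error is larger, reflecting between-study variance,
# which widens confidence intervals and supports broader generalization.
```

This also makes Hunter and Schmidt's (2000) point tangible: when between-study variance is present, the FE model understates uncertainty.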
Data Analyses and Adjustments for Biases
According to Erford et al. (2010), average effect sizes are either biased or unbiased: biased estimates of effect size do not account for sample size, whereas unbiased estimates do. There are a number of methods for adjusting effect sizes to reduce sources of possible bias.
Calculation of effect size. One of the goals in many meta-analyses is to calculate a main effect or overall effect size; as indicated in Step 2, there are three primary types of effect sizes (i.e., d index, r index, and odds ratio). When calculating effect sizes, one needs to consider sources of bias such as differences in sample size and study quality across individual studies. In meta-analysis, counseling researchers need to estimate potential biases and apply weights to adjust for them. For example, statistical theory posits that studies with larger samples provide more accurate estimates of population parameters and allow for more reliable inferences from the findings, whereas studies with small sample sizes (e.g., less than 20) often lead to an overestimation of the population effect (Cooper, 2010). Although sample size is an obvious concern in weighting effect sizes, there are various methods for adjusting overall effect sizes. For example, many researchers using the d index or group difference effect sizes follow the procedures described by Hedges and Olkin (1985), whereas for the r index or correlational effect sizes, researchers would be advised to consult Hunter and Schmidt (2004). In addition, counseling researchers need to consider whether they are using the FE model (see Konstantopoulos & Hedges, 2009) or the RE model (see Raudenbush, 2009). Moreover, counseling researchers should examine the effect sizes from all of the studies, identify any that are unusually high or low, and determine the influence of these outliers on the calculation of the overall effect size.
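The small-sample overestimation noted above is the reason many researchers report a corrected d index (often called Hedges' g) rather than the raw standardized mean difference. A Python sketch with made-up group statistics:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """d index: standardized mean difference using the pooled SD."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

def hedges_g(d, n_t, n_c):
    """Small-sample correction: d from small studies overestimates the
    population effect, so it is shrunk slightly toward zero."""
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)
    return correction * d

# Hypothetical treatment vs. control group statistics (n = 15 each)
d = cohens_d(24.0, 20.0, 8.0, 8.0, 15, 15)  # d = 0.5
g = hedges_g(d, 15, 15)                      # about 0.49, slightly smaller
```

With 15 participants per group the correction is modest; with very small samples the shrinkage becomes more substantial, which is exactly the bias weighting schemes are meant to address.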
Studies included in a meta-analysis also vary in quality in terms of research design, reliability and validity of measurement, and other research factors. For example, studies using random assignment theoretically provide sounder results than those using quasi-experimental designs (Shadish & Haddock, 2009), and the difference between designs can be examined after effect sizes are combined. Also, different studies may use diverse instruments to measure the same construct, and, therefore, the reliability and validity of these measures may vary. Measurement theory has shown that r index and d index effect size estimates based on unreliable measures tend to be smaller than those resulting from more reliable measures (Shadish & Haddock, 2009). For methods of adjusting effect sizes for study characteristics or artifacts, interested readers are directed to Schmidt, Le, and Oh (2009).
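The attenuation caused by unreliable measures can be corrected by dividing the observed correlation by the square root of the product of the two measures' reliabilities, the classic Hunter-Schmidt artifact correction. A brief Python sketch with hypothetical values:

```python
import math

def correct_for_unreliability(r_observed, rxx, ryy):
    """Disattenuate an observed correlation using the reliability
    coefficients of the two measures (Hunter-Schmidt correction)."""
    return r_observed / math.sqrt(rxx * ryy)

# Hypothetical: an observed r of .30 measured with instruments
# whose reliabilities are .80 and .75
r_corrected = correct_for_unreliability(0.30, rxx=0.80, ryy=0.75)
# The corrected correlation (about .39) is larger than the observed
# one, since unreliability attenuates observed effect sizes.
```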
Confidence intervals. Confidence intervals (CIs) estimate the range in which one would expect to find the "true" effect size. In meta-analysis, CIs are typically calculated so that they would contain the true effect size 95% of the time (Quintana & Minami, 2006). The calculation of a CI also varies depending on the type of meta-analysis the counseling researcher is conducting and whether an RE or FE model is being used. Not only do CIs provide an indication of anticipated effect size ranges, but they are also used to test significance. In most meta-analytic situations, if the CI does not include zero (e.g., -.50 to -.25 or .33 to .67), then the average effect size is considered significant.
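In the simplest case (a single weighted mean effect size under a normal approximation), the 95% CI is the mean plus or minus 1.96 standard errors. A minimal Python sketch with hypothetical numbers:

```python
def confidence_interval(mean_es, se, z=1.96):
    """95% CI for a mean effect size under a normal approximation."""
    return mean_es - z * se, mean_es + z * se

# Hypothetical: mean effect size of .50 with a standard error of .08
lower, upper = confidence_interval(0.50, 0.08)
# The interval (roughly .34 to .66) excludes zero, so the average
# effect size would be considered significant.
significant = not (lower <= 0.0 <= upper)
```

Note that the standard error fed into this calculation depends on the model chosen: under an RE model it incorporates the between-study variance and is therefore larger, widening the interval.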
Test of homogeneity of effect sizes. One commonly used test for examining the sources of variance in the data is homogeneity analysis. It compares the observed variance in the effect sizes with the variance expected if sampling error alone were the source of variance, that is, if all of the effect sizes came from the same population. The null hypothesis of homogeneity analysis is that the observed variance in effect sizes is not statistically different from what is expected based solely on sampling error. If the null hypothesis is rejected, the variance in the results cannot be explained by sampling error alone, and the effect sizes estimate different population values (Cooper, 2010). In this case, the researcher can then explore sources that may explain the variance, such as study characteristics (e.g., types of treatment, number of sessions, different outcome measures). The analyses of moderator variables can involve tests of homogeneity and other statistical techniques, and interested readers are encouraged to see Cooper et al. (2009), Hedges and Olkin (1985), and Hunter and Schmidt (2004).
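The homogeneity test is usually carried out with the Q statistic, which under the null hypothesis is approximately chi-square distributed with k - 1 degrees of freedom; the related I-squared index expresses the percentage of variability beyond sampling error. A Python sketch with hypothetical effect sizes:

```python
def homogeneity_q(es, var):
    """Q compares observed variability in effect sizes with the
    variability expected from sampling error alone; under the null
    hypothesis it is approximately chi-square with k - 1 df."""
    w = [1.0 / v for v in var]
    mean = sum(wi * ei for wi, ei in zip(w, es)) / sum(w)
    q = sum(wi * (ei - mean) ** 2 for wi, ei in zip(w, es))
    df = len(es) - 1
    # I-squared: percentage of total variability attributable to true
    # heterogeneity rather than sampling error.
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, df, i2

# Hypothetical effect sizes and sampling variances from three studies
q, df, i2 = homogeneity_q([0.2, 0.5, 0.8], [0.04, 0.05, 0.04])
# q of about 4.5 with df = 2; if q exceeded the chi-square critical
# value, one would reject homogeneity and examine moderators.
```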
Step 7: Writing Meta-Analytic Manuscripts
The Publication Manual of the American Psychological Association (6th ed.; American Psychological Association, 2010) includes substantial information about manuscript development and provides specific information that needs to be included in a meta-analytic study. Familiarity with this information before beginning to write the manuscript will assist counseling researchers in developing a manuscript that is more likely to be published in a counseling journal. Another good resource to consult in writing a meta-analysis for publication is Quintana and Minami's (2006) article. We also suggest that counseling researchers model their manuscripts after other well-cited meta-analyses.
Our goal for this article was to provide a brief introduction to meta-analytic techniques and to encourage counseling researchers to consider meta-analysis as a method of quantitatively reviewing the literature. The use of meta-analyses has been growing in diverse disciplines (Cooper, 2010), and we suggest that counseling researchers could also benefit from more meta-analyses being published in counseling journals. Meta-analyses provide unique information by quantitatively aggregating the results of numerous studies and can assist counselors in understanding the magnitude of an effect. Not only do meta-analyses assist counselors in understanding an area of research, but the results can also be used to document the effectiveness of counseling interventions and services to governmental officials and legislators, community or school board members, administrators, parents, and even clients. We have purposely avoided statistical formulas in this overview; however, knowledge and understanding of the mathematical foundations of meta-analysis are necessary. For individuals unfamiliar with meta-analysis, we recommend that they begin the process with Lipsey and Wilson's (2001) practical text.
References

Ahn, H., & Wampold, B. E. (2001). Where oh where are the specific ingredients? A meta-analysis of component studies in counseling and psychotherapy. Journal of Counseling Psychology, 48, 251-257.
Allumbaugh, D. L., & Hoyt, W. T. (1999). Effectiveness of grief therapy: A meta-analysis. Journal of Counseling Psychology, 46, 370-380.
American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.
Anderson, L. A., & Whiston, S. C. (2005). Sexual assault education programs: A meta-analytic examination of their effectiveness. Psychology of Women Quarterly, 29, 374-388. doi:10.1111/j.1471-6402.2005.00237.x
Bauer, M., Tharmanathan, P., Volz, H., Moeller, H., & Freemantle, N. (2009). The effect of venlafaxine compared with other antidepressants and placebo in the treatment of major depression: A meta-analysis. European Archives of Psychiatry and Clinical Neuroscience, 259, 172-185. doi:10.1007/s00406-008-0849-0
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. New York, NY: Wiley.
Chow, S. L. (1987). Meta-analysis of pragmatic and theoretical research: A critique. Journal of Psychology: Interdisciplinary and Applied, 121, 259-271.
Cooper, H. (2010). Research synthesis and meta-analysis (4th ed.). Thousand Oaks, CA: Sage.
Cooper, H., Hedges, L. V., & Valentine, J. C. (2009). The handbook of research synthesis and meta-analysis (2nd ed.). New York, NY: Russell Sage Foundation.
Cooper, H., Robinson, J. C., & Dorr, N. (2006). Conducting a meta-analysis. In F. T. L. Leong & J. T. Austin (Eds.), The psychology research handbook (2nd ed., pp. 315-325). Thousand Oaks, CA: Sage.
Durlak, J. A. (1995). Understanding meta-analysis. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 319-352). Washington, DC: American Psychological Association.
Durlak, J. A., & Lipsey, M. W. (1991). A practitioner's guide to meta-analysis. American Journal of Community Psychology, 19, 291-332.
Durlak, J. A., Meerson, L., & Foster, C. J. E. (2003). Meta-analysis. In J. C. Thomas & M. Hersen (Eds.), Understanding research in clinical and counseling psychology (pp. 243-267). Mahwah, NJ: Erlbaum.
Erford, B. T., Savin-Murphy, J. A., & Butler, C. (2010). Conducting a meta-analysis of counseling outcome research: Twelve steps and practical procedures. Counseling Outcome Research and Evaluation, 1, 19-42. doi:10.1177/2150137809356682
Fleiss, J. L., & Berlin, J. A. (2009). Effect sizes for dichotomous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 237-253). New York, NY: Russell Sage Foundation.
Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 3-8.
Glass, G. V. (2000). Meta-analysis at 25. Retrieved from http://www.gvglass.info/papers/meta25.html
Gleser, L. J., & Olkin, I. (2009). Stochastically dependent effect sizes. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 357-376). New York, NY: Russell Sage Foundation.
Hedges, L. V. (2009). Statistical considerations. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 37-47). New York, NY: Russell Sage Foundation.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. New York, NY: Academic Press.
Hedges, L. V, & Pigott, T. D. (2001). The power of statistical tests in meta-analysis. Psychological Methods, 6, 203-217. doi:10.1037/1082-989X.6.3.203
Helms, J. E. (1999). Another meta-analysis of the White Racial Identity Attitude Scale's Cronbach alphas: Implications for validity. Measurement and Evaluation in Counseling and Development, 32, 122-137.
Hunter, J. E., & Schmidt, F. L. (2000). Fixed effects vs. random effects meta-analysis models: Implications for cumulative research knowledge. International Journal of Selection and Assessment, 8, 275-292. doi:10.1111/1468-2389.00156
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.
Kern, M. L., & Friedman, H. S. (2008). Do conscientious individuals live longer? A quantitative review. Health Psychology, 27, 505-512.
Konstantopoulos, S., & Hedges, L. V. (2009). Analyzing effect sizes: Fixed-effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 257-293). New York, NY: Russell Sage Foundation.
Lipsey, M. W. (2009). Identifying interesting variables and analysis opportunities. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 147-158). New York, NY: Russell Sage Foundation.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Martin, D. J., Garske, J. P., & Davis, M. K. (2000). Relation of the therapeutic alliance with outcome and other variables: A metaanalytic review. Journal of Consulting and Clinical Psychology, 68, 438-450. doi:10.1037/0022-006X.68.3.438
Miller, S., Wampold, B., & Varhely, K. (2008). Direct comparisons of treatment modalities for youth disorders: A meta-analysis. Psychotherapy Research, 18, 5-14. doi:10.1080/10503300701472131
Quintana, S. M., & Minami, T. (2006). Guidelines for meta-analyses of counseling psychology research. The Counseling Psychologist, 34, 839-877. doi:10.1177/0011000006286991
Raudenbush, S. W. (2009). Analyzing effect sizes: Random-effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 279-315). New York, NY: Russell Sage Foundation.
Schmidt, F. L., Le, H., & Oh, I.-S. (2009). Correcting for the distorting effects of study artifacts in meta-analysis. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 317-333). New York, NY: Russell Sage Foundation.
Shadish, W. R., & Haddock, C. K. (2009). Combining estimates of effect size. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 257-277). New York, NY: Russell Sage Foundation.
Sheu, H., Lent, R. W., Brown, S. D., Miller, M. J., Hennessy, K. D., & Duffy, R. D. (2010). Testing the choice model of social cognitive career theory across Holland themes: A meta-analytic path analysis. Journal of Vocational Behavior, 76, 252-264. doi:10.1016/j.jvb.2009.10.015
Spokane, A. R., & Oliver, L. W. (1983). Outcomes of vocational intervention. In S. H. Osipow & W. B. Walsh (Eds.), Handbook of vocational psychology (pp. 99-136). Hillsdale, NJ: Erlbaum.
Thompson, B. (2002). "Statistical," "practical," and "clinical": How many kinds of significance do counselors need to consider? Journal of Counseling & Development, 80, 64-71.
Trusty, J., Thompson, B., & Petrocelli, J. V. (2004). Practical guide for reporting effect size in quantitative research in the Journal of Counseling & Development. Journal of Counseling & Development, 82, 107-110.
Valentine, J. C. (2009). Judging the quality of primary research. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 129-146). New York, NY: Russell Sage Foundation.
Valentine, J. C., Pigott, T. D., & Rothstein, H. R. (2010). How many studies do you need? A primer on statistical power for meta-analysis. Journal of Educational and Behavioral Statistics, 35, 215-247. doi:10.3102/1076998609346961
Wampold, B. E., Minami, T., Baskin, T. W., & Tierney, S. C. (2002). A meta-(re)analysis of the effects of cognitive therapy versus "other therapies" for depression. Journal of Affective Disorders, 68, 159-165. doi:10.1016/S0165-0327(00)00287-1
Whiston, S. C., Sexton, T. L., & Lasoff, D. L. (1998). Career-intervention outcome: A replication and extension of Oliver and Spokane (1988). Journal of Counseling Psychology, 45, 150-165.
Wilson, D. B. (2009). Systematic coding. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 159-176). New York, NY: Russell Sage Foundation.
Susan C. Whiston and Peiwei Ll, Department of Counseling and Educational Psychology, Indiana University, Bloomington. Correspondence concerning this article should be addressed to Susan C. Whiston, Department of Counseling and Educational Psychology, Indiana University, 201 North Rose Avenue, Bloomington, IN 47405-1006 (e-mail: firstname.lastname@example.org).
© 2011 by the American Counseling Association. All rights reserved.