Papers by Joshua R Polanin
Pigott, Terri D., and Joshua R. Polanin, High-Quality Meta-Analysis in a Systematic Review, Review of Educational Research, 90(1), 2020, 24-46
Reviews best practices for conducting meta-analyses in systematic reviews of quantitative research, with an emphasis on transparency and accountability.
Locating Unreported Outcome Data for Use in a Meta-Analysis: Results From a Synthesis of Intervention Studies Reducing Cyberbullying
Proceedings of the 2019 AERA Annual Meeting
The Consequences of School Violence: A Systematic Review and Meta-Analysis

Journal of Clinical Epidemiology, 2019
Background and Objectives: Sharing individual participant data (IPD) among researchers, on request, is an ethical and responsible practice. Despite numerous calls for this practice to become standard, however, research indicates that primary study authors are often unwilling to share IPD, even for use in a meta-analysis. This study sought to examine researchers' reservations about data sharing and to evaluate the impact of sending a data-sharing agreement on researchers' attitudes toward sharing IPD. Methods: To investigate these questions, we conducted a randomized controlled trial in conjunction with a Web-based survey. We searched for and invited primary study authors of studies included in recent meta-analyses. We emailed more than 1,200 individuals, and 247 participated. The survey asked individuals about their transparent research practices, general concerns about sharing data, attitudes toward sharing data for inclusion in a meta-analysis, and concerns about sharing data in the context of a meta-analysis. We hypothesized that participants who were randomly assigned to receive a data-sharing agreement would be more willing to share their primary study's IPD. Results: Participants who received a data-sharing agreement were more willing to share their data set than control participants, even after controlling for demographics and pretest values (d = 0.65, 95% CI [0.39, 0.90]). A member of the control group would be 24 percent more likely to share her data set had she received the data-sharing agreement. Conclusions: These findings shed light on data-sharing practices, attitudes, and concerns and can be used to inform future meta-analysis projects seeking to collect IPD, as well as the field at large.
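As a rough illustration of the effect size reported above (d = 0.65, 95% CI [0.39, 0.90]), here is a minimal Python sketch of Cohen's d with an approximate 95% confidence interval for two independent groups. The group means, standard deviations, and sample sizes are hypothetical and are not taken from the study.

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups with an approximate 95% CI."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Large-sample variance of d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var_d)
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical posttest willingness-to-share scores: agreement group vs. control group
d, ci = cohens_d(m1=4.1, sd1=1.0, n1=124, m2=3.5, sd2=1.0, n2=123)
print(f"d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```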
Evidence Gap Maps in Education Research
Journal of Research on Educational Effectiveness
Postpartum Worry Scale--Revised
PsycTESTS Dataset, 2014
Campbell Systematic Reviews, 2016
Guides on how to implement specific systematic review methods. Campbell Collaboration Methods Discussion Papers are published to promote discussion of new and innovative methods in systematic reviews, making these approaches available to a broad audience. Papers are published as submitted by the authors. They are not subject to review or editing by the Campbell Collaboration. The views expressed are those of the authors and may not be attributed to the Campbell Collaboration. Campbell Collaboration Methods Discussion Papers do not represent Campbell policy.
The Consequences of School Violence: A Systematic Review and Meta-Analysis, Global, 1990-2016
This project seeks to provide clear and comprehensive answers to the questions that plague researchers about how school violence impacts future student outcomes. To that end, the principal investigators plan to review, organize, and synthesize extant research on the consequences of school violence and aggression for perpetrators and victims by conducting a systematic review and meta-analysis of longitudinal studies of school violence and outcomes. The primary goal of the current study is to conduct a systematic review and meta-analysis of the extant longitudinal research literature on the consequences of school violence.
Supplemental material, Polanin_Supplemental_Material for Transparency and Reproducibility of Meta-Analyses in Psychology: A Meta-Review by Joshua R. Polanin, Emily A. Hennessy and Sho Tsuji in Perspectives on Psychological Science

Estimating the Difference Between Published and Unpublished Effect Sizes: A Meta-Review
Practitioners and policymakers rely on meta-analyses to inform decision making around the allocation of resources to individuals and organizations. It is therefore paramount to consider the validity of these results. A well-documented threat to the validity of research synthesis results is the presence of publication bias, a phenomenon where studies with large and/or statistically significant effects, relative to studies with small or null effects, are more likely to be published. We investigated this phenomenon empirically by reviewing meta-analyses published in top-tier journals between 1986 and 2013 that quantified the difference between effect sizes from published and unpublished research. We reviewed 383 meta-analyses, of which 81 had sufficient information to calculate an effect size. Results indicated that published studies yielded larger effect sizes than those from unpublished studies (d = 0.18, 95% confidence interval [0.10, 0.25]). Moderator analyses revealed that...

Prevention Science, 2021
Evidence suggests that cyberbullying among school-age children is related to problem behaviors and other adverse school performance constructs. As a result, numerous school-based programs have been developed and implemented to decrease cyberbullying perpetration and victimization. Given the extensive literature and variation in program effectiveness, we conducted a comprehensive systematic review and meta-analysis of programs to decrease cyberbullying perpetration and victimization. Our review included published and unpublished literature, utilized modern, transparent, and reproducible methods, and examined confirmatory and exploratory moderating factors. A total of 50 studies and 320 effect sizes spanning 45,371 participants met the review protocol criteria. Results indicated that programs significantly reduced cyberbullying perpetration (g = −0.18, SE = 0.05, 95% CI [−0.28, −0.09]) and victimization (g = −0.13, SE = 0.04, 95% CI [−0.21, −0.05]). Moderator analyses, however, yielded...
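The program effects above are reported as Hedges' g with a standard error and 95% CI. Below is a minimal Python sketch of that computation, including the small-sample bias correction; the group summaries are invented for illustration and are not the review's data.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g (bias-corrected standardized mean difference), its SE, and 95% CI."""
    df = n1 + n2 - 2
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)  # pooled SD
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * df - 1)          # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    se = math.sqrt(var_g)
    return g, se, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical posttest cyberbullying-perpetration scores (lower = better)
g, se, ci = hedges_g(m1=1.80, sd1=0.9, n1=210, m2=1.97, sd2=0.9, n2=205)
print(f"g = {g:.2f}, SE = {se:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```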
Meta-analysis and reproducibility

A Meta-Analysis of School-Based Bullying Prevention Programs' Effects on Bystander Intervention Behavior
School Psychology Review, 2012
This meta-analysis synthesized bullying prevention programs' effectiveness at increasing bystander intervention in bullying situations. Evidence from 12 school-based programs, involving 12,874 students, indicated that overall the programs were successful (Hedges's g = .20, 95% confidence interval [CI] = .11 to .29, p < .001), with larger effects for high school (HS) samples compared to kindergarten through eighth-grade (K-8) student samples (HS effect size [ES] = 0.43, K-8 ES = 0.14; p < .05). A secondary synthesis from eight of the studies that reported empathy for the victim revealed treatment effectiveness that was positive but not significantly different from zero (g = .05, 95% CI = −.07 to .17, p = .45). Nevertheless, this meta-analysis indicated that programs increased bystander intervention both on a practical and statistically significant level. These results suggest that researchers and school administrators should consider implementing programs that focus on bystander intervention behavior supplementary to bullying prevention programs.
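The HS versus K-8 contrast reported above is a moderator (subgroup) comparison. One common way to make such a comparison is sketched below: pool effect sizes within each subgroup by inverse-variance weighting, then test whether the two pooled means differ. All effect sizes and variances are hypothetical, and whether the meta-analysis used this exact procedure is not stated here.

```python
import math

def pooled(effects, variances):
    """Fixed-effect inverse-variance pooled mean and its variance."""
    w = [1 / v for v in variances]
    mean = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    return mean, 1 / sum(w)

hs = ([0.50, 0.38, 0.41], [0.020, 0.030, 0.025])                 # hypothetical high-school studies
k8 = ([0.12, 0.18, 0.10, 0.15], [0.010, 0.015, 0.020, 0.012])    # hypothetical K-8 studies

m_hs, v_hs = pooled(*hs)
m_k8, v_k8 = pooled(*k8)
z = (m_hs - m_k8) / math.sqrt(v_hs + v_k8)   # z test for the subgroup difference
print(f"HS = {m_hs:.2f}, K-8 = {m_k8:.2f}, z = {z:.2f}")
```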
Campbell Systematic Reviews, 2013

Estimating the Difference Between Published and Unpublished Effect Sizes
Review of Educational Research, 2016
Practitioners and policymakers rely on meta-analyses to inform decision making around the allocation of resources to individuals and organizations. It is therefore paramount to consider the validity of these results. A well-documented threat to the validity of research synthesis results is the presence of publication bias, a phenomenon where studies with large and/or statistically significant effects, relative to studies with small or null effects, are more likely to be published. We investigated this phenomenon empirically by reviewing meta-analyses published in top-tier journals between 1986 and 2013 that quantified the difference between effect sizes from published and unpublished research. We reviewed 383 meta-analyses, of which 81 had sufficient information to calculate an effect size. Results indicated that published studies yielded larger effect sizes than those from unpublished studies (d = 0.18, 95% confidence interval [0.10, 0.25]). Moderator analyses revealed...
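The pooled difference reported above (d = 0.18, 95% CI [0.10, 0.25]) is a meta-analytic average of per-review contrasts between published and unpublished effect sizes. Below is a minimal sketch of a standard DerSimonian-Laird random-effects pooling step; whether the review used this exact estimator is not stated here, and the input values are illustrative only.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled mean (DerSimonian-Laird tau^2) with a 95% CI."""
    w = [1 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))   # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                   # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    mean = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical published-minus-unpublished differences from a handful of meta-analyses
diffs = [0.25, 0.10, 0.30, 0.05, 0.20]
vars_ = [0.010, 0.008, 0.020, 0.012, 0.015]
mean, ci = dersimonian_laird(diffs, vars_)
print(f"pooled difference = {mean:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```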

Background / Context: A renewed effort to ensure that publicly funded and collected data remain accessible to researchers has recently gained governmental and academic interest (Council on Governmental Relations, 2006). Organizations such as the Inter-University Consortium for Political and Social Research (ICPSR) have long archived and collected large data sets, and the National Institutes of Health (NIH) and the National Science Foundation (NSF) both have formal requirements for grantees around plans for sharing and archiving data. These databases have the potential to enable advanced analysis for both policy and practice. While these data archives provide researchers the opportunity to perform secondary analyses, they also engender the opportunity for new methods of meta-analysis. In medicine, where individual patient data is more commonly available than in the social sciences, methodologists have outlined a number of methods for combining individual participant data with the more traditional aggregated data usually collected in a meta-analysis. The purpose of this presentation is to illustrate methods of meta-analysis that combine both individual participant data (IPD) and aggregated data (AD) from traditional meta-analyses. Our example is based on an ongoing project that uses data from Greenwald, Hedges, and Laine's (1996) meta-analysis of 60 primary research studies that synthesized aggregated data on education production functions. At least six of the studies included in this meta-analysis used data from publicly available data sets. The presentation will compare the results from traditional aggregated data meta-analysis with a range of methods that incorporate both aggregated and individual level data. Cooper & Patall (2009) recently outlined the benefits and limitations of IPD meta-analysis for issues in the social sciences. The advantages of incorporating individual participant data include but are not limited to:
• Increased collaboration across researchers: As mentioned earlier, the National Science Foundation and the National Institutes of Health have both developed policies for data sharing. The National Institutes of Health (2003) statement on sharing research data indicates that all applications with direct costs above $500,000 must address data sharing. Curran & Hussong (2009) and Shrout (2009) both provide examples of collaborations that have been developed around pooled data sets.
• Obtaining missing data and checking original analyses: One advantage Cooper & Patall (2009) cite for IPD is the ability to check the original data from the primary studies and to fit models that were not possible with only the data provided in the studies. For example, the primary data set may include outcome measures or characteristics of participants not reported in the original study. The problem of outcome reporting bias has been discussed by Orwin & Cordray (1985) in the social science literature, and is a source of considerable discussion in medicine (Turner, Matthews, Linardatos, Tell, & Rosenthal, 2008; Vedula, Bero, Scherer, & Dickersin, 2009). Missing data are also a problem in aggregated data analysis (Pigott, 2009) when particular moderators of effect size are not reported across all studies or when information to compute an effect size is not present. With the original data, effect sizes can be computed with full information, and analyses of effect size variation can use more detailed background characteristics of the study and participants.
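A minimal sketch of the two-stage strategy described above, assuming a simple standardized-mean-difference outcome: compute an effect size from each study's raw IPD, then pool those estimates together with effect sizes reported by AD-only studies. The raw scores, effect sizes, and variances are all hypothetical, and a fixed-effect pool is used only for brevity.

```python
import math
from statistics import mean, stdev

def smd_from_ipd(treat_scores, control_scores):
    """Standardized mean difference and its variance computed from raw IPD."""
    n1, n2 = len(treat_scores), len(control_scores)
    sp = math.sqrt(((n1 - 1) * stdev(treat_scores) ** 2 +
                    (n2 - 1) * stdev(control_scores) ** 2) / (n1 + n2 - 2))
    d = (mean(treat_scores) - mean(control_scores)) / sp
    var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    return d, var

# Stage 1: effect sizes computed from IPD studies (raw scores are made up)
ipd_effects = [smd_from_ipd([3.1, 3.4, 2.9, 3.8, 3.5], [2.7, 3.0, 2.8, 2.6, 3.1])]

# Effect sizes as reported by AD studies in a traditional meta-analysis: (d, variance)
ad_effects = [(0.22, 0.015), (0.31, 0.020), (0.18, 0.010)]

# Stage 2: pool both sources with inverse-variance weights
all_effects = ipd_effects + ad_effects
w = [1 / v for _, v in all_effects]
pooled = sum(wi * d for wi, (d, _) in zip(w, all_effects)) / sum(w)
print(f"combined IPD + AD estimate: {pooled:.2f}")
```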

The Institute of Education Sciences (IES) publishes practice guides in education to provide educators with the best available evidence and expertise on current challenges in education. The What Works Clearinghouse (WWC) develops practice guides in conjunction with an expert panel, combining the panel's expertise with the findings of existing rigorous research to produce specific recommendations for addressing these challenges. The WWC and the panel rate the strength of the research evidence supporting each of their recommendations. See Appendix A for a full description of practice guides. The goal of this practice guide is to offer specific, evidence-based recommendations for college and university faculty, administrators, and advisors working to improve the success of students academically underprepared for college. Each recommendation includes an overview of the practice, a summary of evidence used in support of the evidence rating, guidance on how to carry out the recommendation, and suggested approaches to overcome potential roadblocks. Each recommendation also includes an implementation checklist as guidance for getting started.
Effects of After-School Programs on Attendance and Externalizing Behaviors with Primary and Secondary School Students: A Systematic Review and Meta-Analysis
Society for Research on Educational Effectiveness, 2015

Meta-analysts rely on the availability of data from previously conducted studies. That is, they rely on primary study authors to register their outcome data, either in a study's text or on publicly available websites, and to report the results of their work, either in a study's text or in publicly accessible data repositories. If a primary study author does not register data collection and similarly does not report the data collection results, the meta-analyst is at risk of failing to include the collected data. The purpose of this study is to attempt to locate one type of meta-analytic data: findings from studies that neither registered nor reported the collected outcome data. To do so, we conducted a large-scale search for potential studies and emailed an author query request to more than 600 primary study authors to ask if they had collected eligible outcome data. We received responses from 75 authors (12.3%), three of whom sent eligible findings. The results of our search confirmed our proof of concept (i.e., that authors collect data but fail to register or report it publicly), and the meta-analytic results indicated that excluding the identified studies would change some of our substantive conclusions. Cost analyses indicated, however, a high price for finding the missing studies. We end by reaffirming our calls for greater adoption of primary study preregistration as well as data archiving in publicly available repositories.
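A minimal sketch of the kind of sensitivity check implied above: compare the pooled estimate computed from the registered and reported studies alone with the estimate after adding the effect sizes recovered through author queries. All effect sizes and variances are hypothetical.

```python
import math

def fe_pool(effects, variances):
    """Fixed-effect inverse-variance pooled mean and its standard error."""
    w = [1 / v for v in variances]
    mean = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    return mean, math.sqrt(1 / sum(w))

# Hypothetical effect sizes (negative = reduction in the outcome) and variances
reported_es, reported_v = [-0.20, -0.15, -0.10, -0.25], [0.010, 0.012, 0.015, 0.020]
recovered_es, recovered_v = [0.02, -0.01], [0.020, 0.025]   # findings recovered via author queries

m_without, se_without = fe_pool(reported_es, reported_v)
m_with, se_with = fe_pool(reported_es + recovered_es, reported_v + recovered_v)
print(f"without recovered findings: {m_without:.2f} (SE {se_without:.2f})")
print(f"with recovered findings:    {m_with:.2f} (SE {se_with:.2f})")
```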