Archived Posts

RMME Faculty Member, Dr. Bianca Montrosse-Moorhead, Named Co-Editor-In-Chief of Leading Evaluation Journal

Congratulations to RMME Faculty Member, Dr. Bianca Montrosse-Moorhead, who was recently named Co-Editor-in-Chief of the exceptional evaluation journal New Directions for Evaluation (see announcement here). The journal is one of two published by the American Evaluation Association, the premier professional organization for evaluators. Dr. Montrosse-Moorhead will serve in this role for the next three years alongside her colleague Dr. Sarah Mason of the University of Mississippi. Check out this UConn Today article for more. And congratulations again to the well-deserving Dr. Bianca Montrosse-Moorhead on this outstanding appointment!

 

Upcoming RMME/STAT Colloquium (10/7): Edsel A. Pena, “Searching for Truth through Data”

RMME/STAT Joint Colloquium

Searching for Truth through Data

Dr. Edsel A. Pena
University of South Carolina

Friday, October 7, at 11:15 AM ET, AUST 108

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m9667e91caf1197b47fc45f50529388b9

This talk concerns the role of statistical thinking in the search for truth using data. This brings us to a discussion of p-values, a much-used tool in scientific research but also a controversial concept that has elicited much, sometimes heated, debate and discussion. In March 2016, the American Statistical Association (ASA) was compelled to release an official statement regarding p-values; a psychology journal has even gone to the extreme of banning the use of p-values in its articles; and in 2019, a special issue of The American Statistician was devoted entirely to this issue. A main concern in the use of p-values is the introduction of a somewhat artificial threshold, usually 0.05, when they are used in decision-making, with implications for the reproducibility and replicability of reported scientific results. Some new perspectives on the use of p-values in the search for truth through data will be discussed, touching in particular on the representation of knowledge and its updating based on observations. Related to the issue of p-values, the following question arises: “When given the p-value, what does it provide in the context of the updated knowledge of the phenomenon under consideration, and what additional information should accompany it?” Also to be addressed is whether it is time to move away from hard thresholds such as 0.05 and whether we are on the verge of, to quote Wasserstein, Schirm, and Lazar (2019), a “World Beyond P < 0.05.”
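
To make the threshold concern concrete, here is a small Python sketch (illustrative only; the effect size, sample size, and seeds are invented and this example is not part of Dr. Pena’s talk). It draws several samples from the same underlying process and applies the conventional 0.05 cutoff to each:

```python
import numpy as np
from scipy import stats

# Draw several samples from the *same* population (true mean 0.3, SD 1.0)
# and apply the conventional 0.05 threshold to each one.
for seed in range(5):
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=0.3, scale=1.0, size=50)         # hypothetical study data
    t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)
    verdict = "reject H0" if p_value < 0.05 else "fail to reject H0"
    print(f"replication {seed}: p = {p_value:.3f} -> {verdict}")

# Identical data-generating processes can land on either side of the cutoff,
# which is one reason hard thresholds complicate reproducibility claims.
```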

 


Upcoming RMME/STAT Colloquium (9/9): Kosuke Imai, “Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment”

RMME/STAT Joint Colloquium

Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment

Dr. Kosuke Imai
Harvard University

Friday, September 9, at 11:00 AM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m486f7b13e6881ba895b350f338b0c90d

Despite an increasing reliance on fully-automated algorithmic decision-making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by algorithms are provided to human decision-makers to guide their decisions. While there exists a fast-growing literature evaluating the bias and fairness of such algorithmic recommendations, an overlooked question is whether they help humans make better decisions. We develop a general statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also show how to examine whether algorithmic recommendations improve the fairness of human decisions and derive the optimal decision rules under various settings. We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial that evaluates the pretrial Public Safety Assessment (PSA) in the criminal justice system. A goal of the PSA is to help judges decide which arrested individuals should be released. On the basis of the preliminary data available, we find that providing the PSA to the judge has little overall impact on the judge’s decisions and subsequent arrestee behavior. Our analysis, however, yields some potentially suggestive evidence that the PSA may help avoid unnecessarily harsh decisions for female arrestees regardless of their risk levels while it encourages the judge to make stricter decisions for male arrestees who are deemed to be risky. In terms of fairness, the PSA appears to increase an existing gender difference while having little effect on any racial differences in judges’ decisions. Finally, we find that the PSA’s recommendations might be unnecessarily severe unless the cost of a new crime is sufficiently high.
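
For readers who want a concrete sense of the experimental comparison described above, here is a minimal Python sketch on synthetic data (the assignment mechanism, rates, and effect size are invented; this is neither the authors’ estimator nor the PSA trial data). Cases are randomized to whether the decision-maker sees the algorithmic recommendation, and the difference in mean outcomes estimates its causal effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
sees_rec = rng.integers(0, 2, size=n)            # randomized: recommendation shown or not
base_rate = 0.40                                 # hypothetical baseline decision rate
true_effect = 0.03                               # hypothetical effect of the recommendation
decision = rng.binomial(1, base_rate + true_effect * sees_rec)

treated = decision[sees_rec == 1]
control = decision[sees_rec == 0]
ate = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
print(f"estimated effect of providing the recommendation: {ate:.3f} (SE {se:.3f})")
```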

 


RMME Community Members Present New Stata Package: mlmeval

Dr. Anthony J. Gambino (RMME alumnus), Dr. Sarah D. Newton (RMME alumna), and Dr. D. Betsy McCoach (current RMME faculty member) unveiled their new Stata package, mlmeval, at the Stata Conference in Washington, DC, this week. Their work pushes the field forward by offering a tool that gives users information about both model fit and model adequacy for multilevel model evaluation.

 

Abstract:

Model evaluation is an unavoidable facet of multilevel modeling (MLM). Current guidance encourages researchers to focus on two overarching model-selection factors: model fit and model adequacy (McCoach et al. 2022). Researchers routinely use information criteria to select from a set of competing models and assess the relative fit of each candidate model to their data. However, researchers must also consider the ability of their models and their various constituent parts to explain variance in the outcomes of interest (i.e., model adequacy). Prior methods for assessing model adequacy in MLM are limited. Therefore, Rights and Sterba (2019) proposed a new framework for decomposing variance in MLM to estimate R2 measures. Yet there is no Stata package that implements this framework. Thus, we propose a new Stata package that computes both (1) a variety of model fit criteria and (2) the model adequacy measures described by Rights and Sterba to facilitate multilevel model selection for Stata users. The goal of this package is to provide researchers with an easy way to utilize a variety of complementary methods to evaluate their multilevel models.
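
As a rough illustration of the two ingredients described above, the following Python/statsmodels sketch fits two multilevel models to simulated data and reports information criteria (fit) alongside a crude proportional reduction in residual variance (adequacy). This is only a stand-in: the data are invented, and the Rights and Sterba (2019) decomposition that mlmeval implements in Stata is considerably richer than the pseudo-R2 shown here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate two-level data: 30 schools with 20 students each (toy data).
rng = np.random.default_rng(42)
n_groups, n_per = 30, 20
school = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0, 1.0, n_groups)[school]                 # random intercepts
x = rng.normal(0, 1.0, n_groups * n_per)
y = 2.0 + 0.5 * x + u + rng.normal(0, 1.5, n_groups * n_per)
df = pd.DataFrame({"y": y, "x": x, "school": school})

def fit_ml(formula):
    # ML (not REML) so log-likelihoods are comparable across fixed-effect specs.
    return smf.mixedlm(formula, df, groups=df["school"]).fit(reml=False)

def aic_bic(res, n_params):
    n = res.nobs
    return -2 * res.llf + 2 * n_params, -2 * res.llf + np.log(n) * n_params

null_fit = fit_ml("y ~ 1")   # parameters: intercept + intercept variance + residual variance
full_fit = fit_ml("y ~ x")   # parameters: 2 fixed effects + intercept variance + residual variance

print("null model AIC/BIC:", aic_bic(null_fit, 3))
print("full model AIC/BIC:", aic_bic(full_fit, 4))

# Crude adequacy measure: proportional reduction in level-1 residual variance.
print("pseudo-R2 (level 1):", 1 - full_fit.scale / null_fit.scale)
```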

RMME Community Members Publish Article: Omitted Response Patterns

Merve Sarac (an RMME alumna) and Dr. Eric Loken (a current RMME faculty member) recently published a new article, “Examining Patterns of Omitted Responses in a Large-scale English Language Proficiency Test,” in the International Journal of Testing. Congratulations to Merve and Eric on this excellent accomplishment!

 

Abstract:

This study is an exploratory analysis of examinee behavior in a large-scale language proficiency test. Despite a number-right scoring system with no penalty for guessing, we found that 16% of examinees omitted at least one answer and that women were more likely than men to omit answers. Item response theory analyses treating the omitted responses as missing rather than wrong showed that examinees had underperformed by skipping the answers, with greater underperformance among more able participants. An analysis of omitted-answer patterns showed that reading passage items were the most likely to be omitted and that native-language translation items were the least likely to be omitted. We hypothesized that, because reading passage items were the most tempting to skip, examinees who did answer every question might have tended to guess on these items. Using cluster analyses, we found that underperformance on the reading items was more likely than underperformance on the non-reading-passage items. In large-scale operational tests, examinees must know the optimal strategy for taking the test, and test developers must understand how examinee behavior might affect the validity of score interpretations.
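
To make the missing-versus-wrong distinction concrete, here is a small Python sketch under a simple Rasch model (the item difficulties, response pattern, and model are invented for illustration and are not the article’s data or analysis). The same examinee is scored twice, once with omitted items counted as wrong and once with them dropped from the likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Ten items of increasing difficulty; the examinee answers 7 (6 correct, 1 wrong)
# and omits the 3 hardest items (NaN marks an omitted response).
b = np.linspace(-2, 2, 10)
resp = np.array([1, 1, 1, 0, 1, 1, 1, np.nan, np.nan, np.nan])

def neg_loglik(theta, scored):
    p = 1 / (1 + np.exp(-(theta - b)))        # Rasch probability of a correct response
    answered = ~np.isnan(scored)
    r = scored[answered]
    return -np.sum(r * np.log(p[answered]) + (1 - r) * np.log(1 - p[answered]))

# Scoring rule 1: omits treated as wrong (number-right style).
as_wrong = np.nan_to_num(resp, nan=0.0)
theta_wrong = minimize_scalar(neg_loglik, args=(as_wrong,), bounds=(-4, 4), method="bounded").x

# Scoring rule 2: omits treated as missing (dropped from the likelihood).
theta_missing = minimize_scalar(neg_loglik, args=(resp,), bounds=(-4, 4), method="bounded").x

print(f"ability estimate, omits scored wrong:    {theta_wrong:.2f}")
print(f"ability estimate, omits treated missing: {theta_missing:.2f}")
```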

New Program Evaluation Student, Emily Acevedo, Completes Dissertation

One of the newest members of RMME’s Graduate Certificate Program in Program Evaluation has reached an academic milestone! Dr. Emily Acevedo, a current kindergarten teacher in New York, recently completed her dissertation, “Teacher’s Implementation of Play-Based Learning Practices and Barriers Encountered in Kindergarten Classrooms,” at Walden University. Congratulations to Dr. Acevedo on this outstanding accomplishment!

RMME Faculty Member, Dr. D. Betsy McCoach, Releases New Books

Congratulations to RMME faculty member, Dr. D. Betsy McCoach, who recently released two new outstanding statistical modeling books. Both works include contributions from RMME Community members. Be sure to check these out today:

Introduction to Modern Modelling Methods – Co-authored by RMME alumnus, Dr. Dakota Cintron, this book introduces readers to multilevel modeling, structural equation modeling, and longitudinal modeling. A fantastic resource for quantitative researchers!

Multilevel Modeling Methods with Introductory and Advanced Applications – Including contributions from current and former RMME faculty (D. Betsy McCoach, Chris Rhoads, H. Jane Rogers, Aarti P. Bellara), as well as RMME alumni (Sarah D. Newton, Anthony J. Gambino, Eva Yujia Li), this text offers readers a comprehensive introduction to multilevel modeling. It is an excellent resource for aspiring and established multilevel modelers, covering foundational skills through cutting-edge, advanced multilevel techniques. A must-have for every multilevel modeler’s bookshelf!

RMME Community Members Publish Article: Mixture Models & Classification

RMME alumnus Dr. Dakota W. Cintron and RMME faculty members Drs. Eric Loken and D. Betsy McCoach recently published a new article entitled “A Cautionary Note about Having the Right Mixture Model but Classifying the Wrong People.” The article will appear in Multivariate Behavioral Research and is currently available online. Congratulations to Dakota, Eric, and Betsy!

 


RMME Instructor, Dr. Leslie Fierro, Serves as Evaluation Panelist

On June 2, 2022, Dr. Leslie Fierro (RMME Instructor and Co-Editor of New Directions for Evaluation) contributed to a panel session entitled, “Issues in Evaluation: Surveying the Evaluation Policy Landscape in 2022”. The Government Accountability Office (GAO) and Data Foundation co-sponsored this webinar in which panelists discussed the state of evaluation policy today. Visit this website and register to watch the recording of this excellent webinar for free! Congratulations on this work, Dr. Fierro!

Drs. D. Betsy McCoach & Sarah D. Newton Offer Spring 2022 Workshops to I-MTSS Research Network Early Career Scholars

RMME Community members, Dr. D. Betsy McCoach and Dr. Sarah D. Newton, collaborated with colleagues this spring to offer several methodological workshops for members of the I-MTSS Research Network’s Early Career Scholars Program. Workshops included:

 

Learning how to “p” (December 2021)
Everyone uses p’s, but very few know how to p. In this session, we will discuss the good, the bad, and the ugly of p-values and provide more nuanced guidance on how to make sense of your research results.
Facilitators: Betsy McCoach and Yaacov Petscher

Hungry for Power (November 2021)
All researchers seek power: statistical power, that is. In this session, we will explore the power game and how to “play” it.
Facilitators: Betsy McCoach and Yaacov Petscher

A Bird’s Eye View of Nesting (January 2022)
Nested data are the norm in educational studies. Some consider nesting a nuisance, but nested data also provide opportunities to ask and answer a wide variety of research questions that are important to educational researchers.
Facilitators: Betsy McCoach and Yaacov Petscher

Data Cleanup in Aisle 2! (Mop and Bucket Not Included) (February 2022)
This workshop will help participants develop a clearer sense of the data cleaning and preparation process: (1) setting up workflows and structures for success; (2) identifying data entry errors; (3) creating, recoding, and naming variables for analysis; (4) conducting preliminary analyses; (5) knowing your software; and (6) understanding your planned analysis and its needs (with special attention given to multilevel modeling). A brief illustration of two of these steps follows this list.
Facilitators: Sarah D. Newton and Kathleen Lynne Lane

What’s Your Logic? Tell Me–What’s Your Logic? (May 2022)
This hands-on workshop focuses on how to use logic models to convey the theory of change (TOC) underlying a program or intervention of interest in research and/or evaluation contexts. Participants will collaborate in groups to build a TOC model for the I-MTSS Research Network project with which they are most familiar and will then share and briefly describe their work for the larger group.
Facilitators: Sarah D. Newton and Nathan Clemens
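
As noted above, here is a brief Python/pandas illustration of steps (2) and (3) from the data-cleaning workshop (the variables, valid ranges, and recodes are invented for illustration):

```python
import numpy as np
import pandas as pd

# Toy raw data with the kinds of problems the workshop targets
# (hypothetical variables; a real project would read these from a file).
df = pd.DataFrame({
    "student_id": [101, 102, 103, 104, 105],
    "score": [87, 92, 105, -4, 78],          # valid scores fall between 0 and 100
    "grade": [" k", "1", "2", "K ", "1"],    # messy free-text grade field
})

# (2) Identify data entry errors: flag and blank out impossible score values.
out_of_range = (df["score"] < 0) | (df["score"] > 100)
print(f"{out_of_range.sum()} out-of-range score(s) found")
df.loc[out_of_range, "score"] = np.nan

# (3) Create, recode, and name variables for analysis: a centered score and a
# numeric grade code that downstream (e.g., multilevel) models can use.
df["score_centered"] = df["score"] - df["score"].mean()
df["grade_num"] = df["grade"].str.strip().str.upper().map({"K": 0, "1": 1, "2": 2})
print(df)
```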

 

Congratulations on your contributions to a successful workshop series!