Archived Posts

Upcoming RMME/STAT Colloquium (9/9): Kosuke Imai, “Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment”

RMME/STAT Joint Colloquium

Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment

Dr. Kosuke Imai
Harvard University

Friday, September 9, at 11:00AM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m486f7b13e6881ba895b350f338b0c90d

Despite an increasing reliance on fully-automated algorithmic decision-making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by algorithms are provided to human decision-makers to guide their decisions. While there exists a fast-growing literature evaluating the bias and fairness of such algorithmic recommendations, an overlooked question is whether they help humans make better decisions. We develop a general statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also show how to examine whether algorithmic recommendations improve the fairness of human decisions and derive the optimal decision rules under various settings. We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial that evaluates the pretrial Public Safety Assessment (PSA) in the criminal justice system. A goal of the PSA is to help judges decide which arrested individuals should be released. On the basis of the preliminary data available, we find that providing the PSA to the judge has little overall impact on the judge’s decisions and subsequent arrestee behavior. Our analysis, however, yields some potentially suggestive evidence that the PSA may help avoid unnecessarily harsh decisions for female arrestees regardless of their risk levels while it encourages the judge to make stricter decisions for male arrestees who are deemed to be risky. In terms of fairness, the PSA appears to increase an existing gender difference while having little effect on any racial differences in judges’ decisions. Finally, we find that the PSA’s recommendations might be unnecessarily severe unless the cost of a new crime is sufficiently high.

 


RMME Community Members Present New Stata Package: mlmeval

Dr. Anthony J. Gambino (RMME alumnus), Dr. Sarah D. Newton (RMME alumna), and Dr. D. Betsy McCoach (current RMME faculty member) unveiled their new Stata package, mlmeval, at the Stata Conference in Washington, DC this week. Their work pushes the field forward by offering a new tool that provides users with information about both model fit and model adequacy for multilevel model evaluation.

 

Abstract:

Model evaluation is an unavoidable facet of multilevel modeling (MLM). Current guidance encourages researchers to focus on two overarching model-selection factors: model fit and model adequacy (McCoach et al. 2022). Researchers routinely use information criteria to select from a set of competing models and assess the relative fit of each candidate model to their data. However, researchers must also consider the ability of their models and their various constituent parts to explain variance in the outcomes of interest (i.e., model adequacy). Prior methods for assessing model adequacy in MLM are limited. Therefore, Rights and Sterba (2019) proposed a new framework for decomposing variance in MLM to estimate R2 measures. Yet there is no Stata package that implements this framework. Thus, we propose a new Stata package that computes both (1) a variety of model fit criteria and (2) the model adequacy measures described by Rights and Sterba to facilitate multilevel model selection for Stata users. The goal of this package is to provide researchers with an easy way to utilize a variety of complementary methods to evaluate their multilevel models.
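For readers new to the fit side of this distinction, comparing candidate models by information criteria is simple arithmetic on each model's log-likelihood and parameter count. The Python sketch below only illustrates that comparison; the log-likelihood values and model names are invented, and it is not part of mlmeval, which is a Stata package:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: smaller is better."""
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    """Bayesian information criterion: smaller is better."""
    return -2 * loglik + k * math.log(n)

# Hypothetical fit results for two nested multilevel models.
models = {
    "random intercept":         {"loglik": -1520.3, "k": 4},
    "random intercept + slope": {"loglik": -1512.8, "k": 6},
}
n = 500  # level-1 sample size (invented)
for name, m in models.items():
    print(f"{name}: AIC={aic(m['loglik'], m['k']):.1f}, "
          f"BIC={bic(m['loglik'], m['k'], n):.1f}")
```

On these invented numbers both criteria favor the richer model; BIC penalizes the two extra parameters more heavily (k·ln n rather than 2k). Model adequacy, the R-squared side covered by Rights and Sterba's framework, asks a different question: how much outcome variance each model and its parts explain.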

RMME Community Members Publish Article: Omitted Response Patterns

Merve Sarac (an RMME alumna) and Dr. Eric Loken (a current RMME faculty member) recently published a new article, “Examining Patterns of Omitted Responses in a Large-scale English Language Proficiency Test,” in the International Journal of Testing. Congratulations to Merve and Eric on this excellent accomplishment!

 

Abstract:

This study is an exploratory analysis of examinee behavior in a large-scale language proficiency test. Despite a number-right scoring system with no penalty for guessing, we found that 16% of examinees omitted at least one answer and that women were more likely than men to omit answers. Item-response theory analyses treating the omitted responses as missing rather than wrong showed that examinees had underperformed by skipping the answers, with a greater underperformance among more able participants. An analysis of omitted answer patterns showed that reading passage items were most likely to be omitted, and that native language-translation items were least likely to be omitted. We hypothesized that since reading passage items were most tempting to skip, then among examinees who did answer every question there might be a tendency to guess at these items. Using cluster analyses, we found that underperformance on the reading items was more likely than underperformance on the non-reading passage items. In large-scale operational tests, examinees must know the optimal strategy for taking the test. Test developers must also understand how examinee behavior might impact the validity of score interpretations.
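The abstract's key scoring contrast, treating an omission as missing rather than as wrong, can be made concrete with a toy item-response example. The Python sketch below is not the study's analysis: the item difficulties and response pattern are invented, and it uses a simple Rasch model with a crude grid-search maximum-likelihood estimate:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_theta(responses, difficulties):
    """Grid-search ML ability estimate; None marks an omitted item,
    which is simply dropped from the likelihood."""
    best_theta, best_ll = 0.0, -math.inf
    for i in range(-400, 401):               # theta grid on [-4, 4]
        theta = i / 100.0
        ll = 0.0
        for x, b in zip(responses, difficulties):
            if x is None:
                continue                     # omitted -> missing, not wrong
            p = rasch_p(theta, b)
            ll += math.log(p) if x == 1 else math.log(1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]  # invented item difficulties
observed = [1, 1, 1, 0, None, None]              # examinee skipped the two hardest items

as_missing = mle_theta(observed, difficulties)
as_wrong = mle_theta([0 if x is None else x for x in observed], difficulties)
print(as_missing, as_wrong)  # scoring omissions as wrong lowers the estimate
```

Dropping the two skipped hard items yields a higher ability estimate than scoring them as wrong, which is the direction of the underperformance-by-skipping effect the article reports.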

New Program Evaluation Student, Emily Acevedo, Completes Dissertation

One of the newest members of RMME’s Graduate Certificate Program in Program Evaluation has reached an academic milestone! Dr. Emily Acevedo, a current kindergarten teacher in New York, recently completed her dissertation, “Teacher’s Implementation of Play-Based Learning Practices and Barriers Encountered in Kindergarten Classrooms,” at Walden University. Congratulations to Dr. Acevedo on this outstanding accomplishment!

RMME Faculty Member, Dr. D. Betsy McCoach, Releases New Books

Congratulations to RMME faculty member, Dr. D. Betsy McCoach, who recently released two new outstanding statistical modeling books. Both works include contributions from RMME Community members. Be sure to check these out today:

Introduction to Modern Modelling Methods – Co-authored by RMME alumnus, Dr. Dakota Cintron, this book introduces readers to multilevel modeling, structural equation modeling, and longitudinal modeling. A fantastic resource for quantitative researchers!

Multilevel Modeling Methods with Introductory and Advanced Applications – Including contributions from current and former RMME faculty (D. Betsy McCoach, Chris Rhoads, H. Jane Rogers, Aarti P. Bellara), as well as RMME alumni (Sarah D. Newton, Anthony J. Gambino, Eva Yujia Li), this text offers readers a comprehensive introduction to multilevel modeling. It is an excellent resource for aspiring and established multilevel modelers, covering foundational skills through cutting-edge, advanced multilevel techniques. A must-have for every multilevel modeler’s bookshelf!

RMME Community Members Publish Article: Mixture Models & Classification

RMME alumnus Dr. Dakota W. Cintron and RMME faculty members Drs. Eric Loken and D. Betsy McCoach recently published a new article, “A Cautionary Note about Having the Right Mixture Model but Classifying the Wrong People.” The article will appear in Multivariate Behavioral Research and is currently available online. Congratulations to Dakota, Eric, and Betsy!

 


RMME Instructor, Dr. Leslie Fierro, Serves as Evaluation Panelist

On June 2, 2022, Dr. Leslie Fierro (RMME Instructor and Co-Editor of New Directions for Evaluation) contributed to a panel session entitled, “Issues in Evaluation: Surveying the Evaluation Policy Landscape in 2022”. The Government Accountability Office (GAO) and Data Foundation co-sponsored this webinar in which panelists discussed the state of evaluation policy today. Visit this website and register to watch the recording of this excellent webinar for free! Congratulations on this work, Dr. Fierro!

Drs. D. Betsy McCoach & Sarah D. Newton Offer Spring 2022 Workshops to I-MTSS Research Network Early Career Scholars

RMME Community members, Dr. D. Betsy McCoach and Dr. Sarah D. Newton, collaborated with colleagues this spring to offer several methodological workshops for members of the I-MTSS Research Network’s Early Career Scholars Program. Workshops included:

 

Learning how to “p” (December 2021). Facilitators: Betsy McCoach and Yaacov Petscher.
Everyone uses p’s, but very few know how to p. In this session, we will discuss the good, the bad, and the ugly of p-values and provide more nuanced guidance on how to make sense of your research results.

Hungry for Power (November 2021). Facilitators: Betsy McCoach and Yaacov Petscher.
All researchers seek power: statistical power, that is. In this session, we will explore the power game and how to “play” it.

A Bird’s Eye View of Nesting (January 2022). Facilitators: Betsy McCoach and Yaacov Petscher.
Nested data are the norm in educational studies. Some consider nesting a nuisance, but nested data also provide opportunities to ask and answer a wide variety of research questions that are important to educational researchers.

Data Cleanup in Aisle 2! (Mop and Bucket Not Included) (February 2022). Facilitators: Sarah D. Newton and Kathleen Lynne Lane.
This workshop will help participants develop a clearer sense of the data cleaning and preparation process: (1) setting up workflows and structures for success; (2) identifying data entry errors; (3) creating, recoding, and naming variables for analysis; (4) conducting preliminary analyses; (5) knowing your software; and (6) understanding your planned analysis and its needs (with special attention given to multilevel modeling).

What’s Your Logic? Tell Me, What’s Your Logic? (May 2022). Facilitators: Sarah D. Newton and Nathan Clemens.
This workshop focuses on using logic models to convey the theory of change (TOC) underlying a program or intervention of interest in research and evaluation contexts. In this hands-on workshop, participants will collaborate in groups to build a TOC model for the I-MTSS Research Network project with which they are most familiar. Participants will then share and briefly describe their work for the larger group.
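As a small taste of the statistical-power topic above, power for a two-group comparison can be approximated in a few lines. The Python sketch below uses a two-sided, two-sample z-approximation with an assumed standardized effect size; all numbers are illustrative and are not drawn from the workshops themselves:

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a
    standardized mean difference d with n_per_group per arm."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality of the z statistic
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

# A medium effect (d = 0.5) needs roughly 64 participants per group
# for power near .80 at alpha = .05.
print(f"{power_two_sample(0.5, 64):.3f}")
```

Solving the same formula for n_per_group is the usual sample-size planning step; note that the z-approximation slightly overstates power relative to an exact t-test in small samples.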

 

Congratulations on your contributions to a successful workshop series!

 

Upcoming RMME/STAT Colloquium (4/29): Luke Keele, “Approximate Balancing Weights for Clustered Observational Study Designs”

RMME/STAT Joint Colloquium

Approximate Balancing Weights for Clustered Observational Study Designs

Dr. Luke Keele
University of Pennsylvania

Friday, April 29, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m35b82d4dc6d3e77536aa48390a02485b

In a clustered observational study, a treatment is assigned to groups and all units within the group are exposed to the treatment. Clustered observational studies are common in education where treatments are given to all students within some schools but withheld from all students in other schools. Clustered observational studies require specialized methods to adjust for observed confounders. Extant work has developed specialized matching methods that take key elements of clustered treatment assignment into account. Here, we develop a new method for statistical adjustment in clustered observational studies using approximate balancing weights. An approach based on approximate balancing weights improves on extant matching methods in several ways. First, our methods highlight the possible need to account for differential selection into clusters. Second, we can automatically balance interactions between unit level and cluster level covariates. Third, we can also balance high moments on key cluster level covariates. We also outline an overlap weights approach for cases where common support across treated and control clusters is poor. We introduce an augmented estimator that accounts for outcome information. We show that our approach has dual representation as an inverse propensity score weighting estimator based on a hierarchical propensity score model. We apply this algorithm to assess a school-based intervention through which students in treated schools were exposed to a new reading program during summer school. Overall, we find that balancing weights tend to produce superior balance relative to extant matching methods. Moreover, an approximate balancing weight approach tends to require less input from the user to achieve high levels of balance.
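The central mechanism of the abstract, reweighting control units so their covariate moments match the treated group, can be illustrated in miniature. The Python sketch below balances the first moment of a single covariate by exponential tilting (a one-covariate entropy-balancing toy); it omits the clustering, interactions, higher moments, and augmented estimation the talk addresses, and the data are invented:

```python
import math

def entropy_balance(x_control, target_mean, lo=-10.0, hi=10.0, tol=1e-10):
    """Weights w_i proportional to exp(lam * x_i) on control units, with lam
    chosen by bisection so the weighted control mean of x hits target_mean."""
    def weighted_mean(lam):
        w = [math.exp(lam * xi) for xi in x_control]
        return sum(wi * xi for wi, xi in zip(w, x_control)) / sum(w)

    while hi - lo > tol:                 # weighted mean is increasing in lam
        mid = (lo + hi) / 2.0
        if weighted_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * xi) for xi in x_control]
    total = sum(w)
    return [wi / total for wi in w]

x_control = [0.2, 0.5, 0.9, 1.4, 2.0]  # invented control-group covariate values
treated_mean = 1.2                      # invented treated-group mean
w = entropy_balance(x_control, treated_mean)
balanced_mean = sum(wi * xi for wi, xi in zip(w, x_control))
print(balanced_mean)  # matches treated_mean to numerical precision
```

In the clustered setting of the talk, the same idea extends to many covariates at once, to interactions between unit-level and cluster-level covariates, and to higher moments, with an explicit tolerance allowing approximate rather than exact balance.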

 


Upcoming RMME Evaluation Colloquium (4/1): Cassandra Myers & Joan Levine, “UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation”

RMME Evaluation Colloquium

UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation

Cassandra Myers, The HRP Consulting Group
Joan Levine, University of Connecticut

Friday, April 1, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=me8fe20df2d511c754f1bd3f3539991b4

The UConn-Storrs Human Research Protection Program (HRPP) is dedicated to the protection of human subjects in research activities conducted under its auspices. The HRPP reviews human subjects research to ensure appropriate safeguards for the ethical, compliant, and safe conduct of research, as well as the protection of the rights and welfare of the human subjects who volunteer to participate. As the regulatory framework for the protections of human subjects is complex and multi-faceted, this session’s goals are to review the regulatory framework and how it applies to research and evaluation, the requirements for consent, when consent can be waived, and how to navigate the IRB process at UConn. This session will also review historical case studies to understand current requirements and how these events still affect populations, policies, and regulations.

 
