RMME Upcoming Events

Upcoming RMME/STAT Colloquium (3/24): Joseph L. Schafer, “Modeling Coarsened Categorical Variables: Techniques and Software”

RMME/STAT Joint Colloquium

Modeling Coarsened Categorical Variables: Techniques and Software

Dr. Joseph L. Schafer
U.S. Census Bureau

Friday, March 24, at 11AM ET

https://tinyurl.com/rmme-Schafer

Coarsened data can express intermediate states of knowledge between fully observed and fully missing. For example, when classifying survey respondents by cigarette smoking behavior as 1=never smoked, 2=former smoker, or 3=current smoker, we may encounter some who reported having smoked in the past but whose current activity is unknown (either 2 or 3, but not 1). Software for categorical data modeling typically provides codes for missing values but lacks convenient ways to convey states of partial knowledge. A new R package, cvam (Coarsened Variable Modeling), extends R’s implementation of categorical variables (factors) and fits log-linear and latent-class models to incomplete datasets containing coarsened and missing values. Methods include maximum-likelihood estimation via an expectation-maximization (EM) algorithm, and approximate Bayesian and fully Bayesian inference via Markov chain Monte Carlo. Functions are also provided for comparing models, predicting missing values, creating multiple imputations, and generating partially or fully synthetic data. In the first major application of this software, data from the U.S. Decennial Census and administrative records were combined to predict citizenship status for 309 million residents of the United States.
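To make the coarsening idea concrete, here is a minimal base-R sketch of the EM computation for the smoking example above. The counts are invented for illustration, and this shows the general approach rather than cvam’s actual interface:

```r
# EM for a 3-level smoking variable with a coarsened state {2,3}:
# 1 = never, 2 = former, 3 = current; some respondents are known
# only to have ever smoked (level 2 or 3). Counts are hypothetical.
n_obs    <- c(never = 550, former = 200, current = 150)  # fully observed
n_coarse <- 100                                          # known to be 2 or 3

p <- rep(1/3, 3)  # starting values for the cell probabilities
for (iter in 1:200) {
  # E-step: split the coarse count between "former" and "current"
  # in proportion to the current probability estimates
  w23 <- p[2:3] / sum(p[2:3])
  expected <- n_obs + c(0, n_coarse * w23)
  # M-step: re-estimate cell probabilities from the expected counts
  p_new <- expected / sum(expected)
  if (max(abs(p_new - p)) < 1e-10) break
  p <- p_new
}
round(p, 4)
```

The cvam package generalizes this idea from a single three-level variable to log-linear and latent-class models over many factors with arbitrary coarsened levels.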

 


Upcoming RMME Evaluation Colloquium (3/10): Laura Peck, “The Health Profession Opportunity Grant (HPOG) Impact Study: A Behind-the-Scenes Look at Experimental Evaluation in Practice”

RMME Evaluation Colloquium

The Health Profession Opportunity Grant (HPOG) Impact Study: A Behind-the-Scenes Look at Experimental Evaluation in Practice

Dr. Laura Peck
Abt Associates

Friday, March 10, at 11AM ET

https://tinyurl.com/eval-Peck

In 2010, the U.S. Department of Health and Human Services’ Administration for Children and Families awarded Health Profession Opportunity Grants (HPOG 1.0) to 32 organizations in 23 states. The purpose of the HPOG Program is to provide education and training to Temporary Assistance for Needy Families (TANF) recipients and other low-income individuals for occupations in the healthcare field that pay well and help meet local healthcare-sector labor shortages. To assess its effectiveness, an experimental evaluation design assigned eligible program applicants at random to a “treatment” group that could access the program or a “control” group that could not. Beyond the impact analysis, the evaluation also probed questions about what drove program impacts, using various strategies. This colloquium will discuss how the HPOG 1.0 impact study was designed and implemented and introduce attendees to the design and analysis choices the investigators, in partnership with the government funder, used to address their research questions. Specific topics will include experimental design, multi-armed experimental design, experimental impact analysis, planned variation, natural variation, endogenous subgroup analysis, and evaluation in practice.

 


Upcoming RMME/STAT Colloquium (2/24): Ben Domingue, “Bookmaking for Binary Outcomes: Prediction, Profits, and the IMV”

RMME/STAT Joint Colloquium

Bookmaking for Binary Outcomes: Prediction, Profits, and the IMV

Dr. Ben Domingue
Stanford University

Friday, February 24, at 11AM ET

https://tinyurl.com/rmme-Domingue

Understanding the “fit” of models designed to predict binary outcomes is a long-standing problem. We propose a flexible, portable, and intuitive metric for such scenarios: the InterModel Vigorish (IMV). The IMV is based on a series of bets involving weighted coins, well-characterized physical systems with tractable probabilities. The IMV has a number of desirable properties including an interpretable and portable scale and an appropriate sensitivity to outcome prevalence. We showcase its flexibility across examples spanning the social, biomedical, and physical sciences. We demonstrate how it can be used to provide straightforward interpretation of logistic regression coefficients and to provide insights about the value of different types of item response theory (IRT) models. The IMV allows for precise answers to questions about changes in model fit in a variety of settings in a manner that will be useful for furthering research with binary outcomes.
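To illustrate, here is a short base-R sketch of the IMV computation as we understand it from the authors’ published description: each model’s mean log-likelihood is converted to the weight w of the coin whose entropy matches it, and the IMV is the relative gain in w from the baseline to the enhanced model. The data and models below are invented, and this is not the authors’ reference implementation:

```r
# "Weighted coin" transform: find w in (0.5, 1) whose entropy matches
# the model's mean log-likelihood on binary outcomes y with predicted
# probabilities p. Assumes the model beats a fair coin (ll > log 0.5).
coin_weight <- function(y, p) {
  ll <- mean(y * log(p) + (1 - y) * log(1 - p))
  uniroot(function(w) w * log(w) + (1 - w) * log(1 - w) - ll,
          interval = c(0.5 + 1e-9, 1 - 1e-9))$root
}

# IMV: relative improvement in the coin weight from baseline to enhanced
imv <- function(y, p_baseline, p_enhanced) {
  w0 <- coin_weight(y, p_baseline)
  w1 <- coin_weight(y, p_enhanced)
  (w1 - w0) / w0
}

# Toy example: intercept-only baseline vs. logistic regression
set.seed(1)
x  <- rnorm(500)
y  <- rbinom(500, 1, plogis(-0.5 + x))
p0 <- rep(mean(y), 500)                       # prevalence only
p1 <- fitted(glm(y ~ x, family = binomial))   # enhanced model
imv(y, p0, p1)
```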

 


SAVE THE DATE! Modern Modeling Methods Returns to UConn!

Mark your calendar! The Modern Modeling Methods (M3) conference returns to UConn after a lengthy pandemic-induced hiatus. From June 26 to 28, 2023, M3 will resume as an in-person conference on the Storrs campus. Keynote speakers and workshop presenters include Bengt Muthén, Tihomir Asparouhov, and Ellen Hamaker. Remember to check the M3 website regularly for more information and updates.

 

Upcoming RMME/STAT Colloquium (11/11): Dylan Small, “Testing an Elaborate Theory of a Causal Hypothesis”

RMME/STAT Joint Colloquium

Testing an Elaborate Theory of a Causal Hypothesis

Dr. Dylan Small
University of Pennsylvania

Friday, November 11, at 11AM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m8da0e35b64c861fc97a21dd36fb29ded

When R.A. Fisher was asked what can be done in observational studies to clarify the step from association to causation, he replied, “Make your theories elaborate” — when constructing a causal hypothesis, envisage as many different consequences of its truth as possible and plan observational studies to discover whether each of these consequences is found to hold. William Cochran called “this multi-phasic attack…one of the most potent weapons in observational studies.” Statistical tests for the various pieces of the elaborate theory help to clarify how much the causal hypothesis is corroborated. In practice, the degree of corroboration of the causal hypothesis has been assessed by verbally describing which of the several tests provides evidence for which of the several predictions. This verbal approach can miss quantitative patterns. So, we developed a quantitative approach to making statistical inference about the amount of the elaborate theory that is supported by evidence. This is joint work with Bikram Karmakar.
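As a toy illustration of the multi-prediction idea (not the Karmakar-Small procedure itself), the number of predicted consequences rejected under family-wise error control gives a simple lower confidence bound on how many parts of the elaborate theory are corroborated. The p-values below are hypothetical:

```r
# Given p-values for K consequences predicted by a causal hypothesis,
# the count of Holm-adjusted rejections is, with probability >= 1 - alpha,
# a lower bound on the number of correct predictions.
p_values <- c(0.001, 0.004, 0.03, 0.20, 0.41)  # hypothetical tests
alpha    <- 0.05
rejected <- p.adjust(p_values, method = "holm") <= alpha
sum(rejected)  # lower confidence bound on corroborated predictions
```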

 


Upcoming RMME/STAT Colloquium (10/7): Edsel A. Pena, “Searching for Truth through Data”

RMME/STAT Joint Colloquium

Searching for Truth through Data

Dr. Edsel A. Pena
University of South Carolina

Friday, October 7, at 11:15AM ET, AUST 108

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m9667e91caf1197b47fc45f50529388b9

This talk concerns the role of statistical thinking in the Search for Truth using data. This will bring us to a discussion of p-values, a much-used tool in scientific research but, at the same time, a controversial concept that has elicited much, sometimes heated, debate and discussion. In March 2016, the American Statistical Association (ASA) was compelled to release an official statement regarding p-values; a psychology journal has even gone to the extreme of banning the use of p-values in its articles; and in 2018, a special issue of The American Statistician was fully devoted to the issue. A main concern in the use of p-values is the introduction of a somewhat artificial threshold, usually 0.05, into decision-making, with implications for the reproducibility and replicability of reported scientific results. Some new perspectives on the use of p-values in the search for truth through data will be discussed; in particular, the talk will touch on the representation of knowledge and its updating based on observations. Related to the issue of p-values, the following question arises: “When given the p-value, what does it provide in the context of the updated knowledge of the phenomenon under consideration, and what additional information should accompany it?” Also to be addressed is whether it is time to move away from hard thresholds such as 0.05 and whether we are on the verge of, to quote Wasserstein, Schirm, and Lazar (2019), a “World Beyond P < 0.05.”

 


Upcoming RMME/STAT Colloquium (9/9): Kosuke Imai, “Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment”

RMME/STAT Joint Colloquium

Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment

Dr. Kosuke Imai
Harvard University

Friday, September 9, at 11:00AM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m486f7b13e6881ba895b350f338b0c90d

Despite an increasing reliance on fully automated algorithmic decision-making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by algorithms are provided to human decision-makers to guide their decisions. While there exists a fast-growing literature evaluating the bias and fairness of such algorithmic recommendations, an overlooked question is whether they help humans make better decisions. We develop a general statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also show how to examine whether algorithmic recommendations improve the fairness of human decisions and derive the optimal decision rules under various settings. We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial evaluating the pretrial Public Safety Assessment (PSA) in the criminal justice system. A goal of the PSA is to help judges decide which arrested individuals should be released. On the basis of the preliminary data available, we find that providing the PSA to the judge has little overall impact on the judge’s decisions and subsequent arrestee behavior. Our analysis, however, yields some potentially suggestive evidence that the PSA may help avoid unnecessarily harsh decisions for female arrestees regardless of their risk levels, while encouraging the judge to make stricter decisions for male arrestees who are deemed to be risky. In terms of fairness, the PSA appears to increase an existing gender difference while having little effect on any racial differences in judges’ decisions. Finally, we find that the PSA’s recommendations might be unnecessarily severe unless the cost of a new crime is sufficiently high.
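For a flavor of the experimental comparison, here is a minimal simulated sketch of the intent-to-treat analysis such a trial supports, in which whether the judge sees the recommendation is randomized. The data-generating process is invented, and this is illustrative only, not the authors’ methodology:

```r
# ITT comparison in a simulated RCT: cases are randomized to
# "judge sees the PSA" (z = 1) or not (z = 0); we compare decision
# rates overall and by a subgroup (here, a hypothetical gender effect).
set.seed(42)
n      <- 2000
z      <- rbinom(n, 1, 0.5)   # PSA provided to the judge?
female <- rbinom(n, 1, 0.3)
# hypothetical outcome: PSA softens decisions for female arrestees
harsh  <- rbinom(n, 1, plogis(-0.5 + 0.1 * z - 0.4 * z * female))

summary(lm(harsh ~ z))           # overall ITT effect
summary(lm(harsh ~ z * female))  # subgroup interaction
```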

 


RMME Instructor, Dr. Leslie Fierro, Serves as Evaluation Panelist

On June 2, 2022, Dr. Leslie Fierro (RMME Instructor and Co-Editor of New Directions for Evaluation) contributed to a panel session entitled “Issues in Evaluation: Surveying the Evaluation Policy Landscape in 2022.” The Government Accountability Office (GAO) and the Data Foundation co-sponsored this webinar, in which panelists discussed the state of evaluation policy today. Visit this website and register to watch the recording of this excellent webinar for free! Congratulations on this work, Dr. Fierro!

Upcoming RMME/STAT Colloquium (4/29): Luke Keele, “Approximate Balancing Weights for Clustered Observational Study Designs”

RMME/STAT Joint Colloquium

Approximate Balancing Weights for Clustered Observational Study Designs

Dr. Luke Keele
University of Pennsylvania

Friday, April 29, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m35b82d4dc6d3e77536aa48390a02485b

In a clustered observational study, a treatment is assigned to groups, and all units within a group are exposed to the treatment. Clustered observational studies are common in education, where treatments are given to all students within some schools but withheld from all students in other schools. Clustered observational studies require specialized methods to adjust for observed confounders. Extant work has developed specialized matching methods that take key elements of clustered treatment assignment into account. Here, we develop a new method for statistical adjustment in clustered observational studies using approximate balancing weights. An approach based on approximate balancing weights improves on extant matching methods in several ways. First, our methods highlight the possible need to account for differential selection into clusters. Second, we can automatically balance interactions between unit-level and cluster-level covariates. Third, we can also balance higher-order moments of key cluster-level covariates. We also outline an overlap-weights approach for cases where common support across treated and control clusters is poor. We introduce an augmented estimator that accounts for outcome information. We show that our approach has a dual representation as an inverse propensity score weighting estimator based on a hierarchical propensity score model. We apply this algorithm to assess a school-based intervention through which students in treated schools were exposed to a new reading program during summer school. Overall, we find that balancing weights tend to produce superior balance relative to extant matching methods. Moreover, the approximate balancing weight approach tends to require less input from the user to achieve high levels of balance.
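As a simple illustration of the balancing-weights idea at the unit level (not the clustered, augmented estimator discussed in the talk), the following sketch uses the quadprog package to find minimum-variance control weights subject to approximate mean balance on simulated covariates:

```r
# Approximate balancing weights, toy version: choose control weights w
# minimizing ||w||^2 subject to sum(w) = 1, w >= 0, and
# |X_c' w - colMeans(X_t)| <= delta on each covariate.
library(quadprog)

set.seed(7)
n_t <- 100; n_c <- 300
X_t <- matrix(rnorm(n_t * 2, mean = 0.5), n_t, 2)  # treated covariates
X_c <- matrix(rnorm(n_c * 2, mean = 0.0), n_c, 2)  # control covariates
target <- colMeans(X_t)
delta  <- 0.05  # allowed imbalance per covariate

Dmat <- diag(n_c)                                  # quadratic penalty on w
dvec <- rep(0, n_c)
Amat <- cbind(rep(1, n_c), X_c, -X_c, diag(n_c))   # constraints as columns
bvec <- c(1, target - delta, -(target + delta), rep(0, n_c))
sol  <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)  # first constraint is equality
w    <- sol$solution

colMeans(X_c)            # raw control means
as.vector(t(X_c) %*% w)  # weighted control means, close to the target
```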

 


Upcoming RMME Evaluation Colloquium (4/1): Cassandra Myers & Joan Levine, “UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation”

RMME Evaluation Colloquium

UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation

Cassandra Myers, The HRP Consulting Group
Joan Levine, University of Connecticut

Friday, April 1, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=me8fe20df2d511c754f1bd3f3539991b4

The UConn-Storrs Human Research Protection Program (HRPP) is dedicated to the protection of human subjects in research activities conducted under its auspices. The HRPP reviews human subjects research to ensure appropriate safeguards for the ethical, compliant, and safe conduct of research, as well as the protection of the rights and welfare of the human subjects who volunteer to participate. Because the regulatory framework for the protection of human subjects is complex and multi-faceted, this session’s goals are to review the regulatory framework and how it applies to research and evaluation, the requirements for consent, when consent can be waived, and how to navigate the IRB process at UConn. This session will also review historical case studies to understand current requirements and how these events still affect populations, policies, and regulations.

 
