RMME Upcoming Events

Upcoming RMME/STAT Colloquium (2/24): Ben Domingue, “Bookmaking for Binary Outcomes: Prediction, Profits, and the IMV”

RMME/STAT Joint Colloquium

Bookmaking for Binary Outcomes: Prediction, Profits, and the IMV

Dr. Ben Domingue
Stanford University

Friday, February 24, at 11AM ET

https://tinyurl.com/rmme-Domingue

Understanding the “fit” of models designed to predict binary outcomes is a long-standing problem. We propose a flexible, portable, and intuitive metric for such scenarios: the InterModel Vigorish (IMV). The IMV is based on a series of bets involving weighted coins, well-characterized physical systems with tractable probabilities. The IMV has a number of desirable properties including an interpretable and portable scale and an appropriate sensitivity to outcome prevalence. We showcase its flexibility across examples spanning the social, biomedical, and physical sciences. We demonstrate how it can be used to provide straightforward interpretation of logistic regression coefficients and to provide insights about the value of different types of item response theory (IRT) models. The IMV allows for precise answers to questions about changes in model fit in a variety of settings in a manner that will be useful for furthering research with binary outcomes.
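The coin-betting construction described in the abstract lends itself to a short computation. The sketch below is our own illustration of the published IMV definition, not code from the talk; function names are hypothetical and `numpy`/`scipy` are assumed. Each model's mean log-likelihood on the binary outcomes is matched to the entropy of a weighted coin, and the IMV is the relative gain in coin weight from baseline to enhanced model.

```python
import numpy as np
from scipy.optimize import brentq

def coin_weight(y, p):
    """Weight of the coin whose entropy matches the model's mean log-likelihood."""
    ll = np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Solve w*log(w) + (1-w)*log(1-w) = ll for w in (0.5, 1);
    # a root exists whenever the model beats a fair coin.
    f = lambda w: w * np.log(w) + (1 - w) * np.log(1 - w) - ll
    return brentq(f, 0.5 + 1e-9, 1 - 1e-9)

def imv(y, p_baseline, p_enhanced):
    """Relative improvement in coin weight from baseline to enhanced predictions."""
    w0 = coin_weight(y, p_baseline)
    w1 = coin_weight(y, p_enhanced)
    return (w1 - w0) / w0
```

A natural baseline is a prevalence-only model (predicting the outcome rate for every case), against which any informative predictor should yield a positive IMV.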

 


SAVE THE DATE! Modern Modeling Methods Returns to UConn!

 

 

Mark your calendar! The Modern Modeling Methods (M3) conference returns to UConn after a lengthy pandemic-induced hiatus. From June 26-28, 2023, M3 will resume as an in-person conference on the Storrs campus. Keynote speakers and workshop presenters include Bengt Muthén, Tihomir Asparouhov, and Ellen Hamaker. Remember to check the M3 website regularly for more information and updates.

 

Upcoming RMME/STAT Colloquium (11/11): Dylan Small, “Testing an Elaborate Theory of a Causal Hypothesis”

RMME/STAT Joint Colloquium

Testing an Elaborate Theory of a Causal Hypothesis

Dr. Dylan Small
University of Pennsylvania

Friday, November 11, at 11AM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m8da0e35b64c861fc97a21dd36fb29ded

When R.A. Fisher was asked what can be done in observational studies to clarify the step from association to causation, he replied, “Make your theories elaborate” — when constructing a causal hypothesis, envisage as many different consequences of its truth as possible and plan observational studies to discover whether each of these consequences is found to hold. William Cochran called “this multi-phasic attack…one of the most potent weapons in observational studies.” Statistical tests for the various pieces of the elaborate theory help to clarify how much the causal hypothesis is corroborated. In practice, the degree of corroboration of the causal hypothesis has been assessed by verbally describing which of the several tests provides evidence for which of the several predictions. This verbal approach can miss quantitative patterns. So, we developed a quantitative approach to making statistical inference about the amount of the elaborate theory that is supported by evidence. This is joint work with Bikram Karmakar.

 


Upcoming RMME/STAT Colloquium (10/7): Edsel A. Pena, “Searching for Truth through Data”

RMME/STAT Joint Colloquium

Searching for Truth through Data

Dr. Edsel A. Pena
University of South Carolina

Friday, October 7, at 11:15AM ET, AUST 108

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m9667e91caf1197b47fc45f50529388b9

This talk concerns the role of statistical thinking in the Search for Truth using data. This will bring us to a discussion of p-values, a much-used tool in scientific research, but at the same time a controversial concept which has elicited much, sometimes heated, debate and discussion. In March 2016, the American Statistical Association (ASA) was compelled to release an official statement regarding p-values; a psychology journal has even gone to the extreme of banning the use of p-values in its articles; and in 2018, a special issue of The American Statistician was fully devoted to this issue. A main concern in the use of p-values is the introduction of a somewhat artificial threshold, usually the value of 0.05, when used in decision-making, with implications on reproducibility and replicability of reported scientific results. Some new perspectives on the use of p-values and in the search for truth through data will be discussed. In particular, this will touch on the representation of knowledge and its updating based on observations. Related to the issue of p-values, the following question arises: “When given the p-value, what does it provide in the context of the updated knowledge of the phenomenon under consideration, and what additional information should accompany it?” To be addressed also is the question of whether it is time to move away from hard thresholds such as 0.05 and whether we are on the verge of — to quote Wasserstein, Schirm and Lazar (2019) — a “World Beyond P < 0.05.”

 


Upcoming RMME/STAT Colloquium (9/9): Kosuke Imai, “Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment”

RMME/STAT Joint Colloquium

Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment

Dr. Kosuke Imai
Harvard University

Friday, September 9, at 11:00AM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m486f7b13e6881ba895b350f338b0c90d

Despite an increasing reliance on fully-automated algorithmic decision-making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by algorithms are provided to human decision-makers to guide their decisions. While there exists a fast-growing literature evaluating the bias and fairness of such algorithmic recommendations, an overlooked question is whether they help humans make better decisions. We develop a general statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also show how to examine whether algorithmic recommendations improve the fairness of human decisions and derive the optimal decision rules under various settings. We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial that evaluates the pretrial Public Safety Assessment (PSA) in the criminal justice system. A goal of the PSA is to help judges decide which arrested individuals should be released. On the basis of the preliminary data available, we find that providing the PSA to the judge has little overall impact on the judge’s decisions and subsequent arrestee behavior. Our analysis, however, yields some potentially suggestive evidence that the PSA may help avoid unnecessarily harsh decisions for female arrestees regardless of their risk levels while it encourages the judge to make stricter decisions for male arrestees who are deemed to be risky. In terms of fairness, the PSA appears to increase an existing gender difference while having little effect on any racial differences in judges’ decisions. Finally, we find that the PSA’s recommendations might be unnecessarily severe unless the cost of a new crime is sufficiently high.

 


RMME Instructor, Dr. Leslie Fierro, Serves as Evaluation Panelist

On June 2, 2022, Dr. Leslie Fierro (RMME Instructor and Co-Editor of New Directions for Evaluation) contributed to a panel session entitled, “Issues in Evaluation: Surveying the Evaluation Policy Landscape in 2022”. The Government Accountability Office (GAO) and Data Foundation co-sponsored this webinar in which panelists discussed the state of evaluation policy today. Visit this website and register to watch the recording of this excellent webinar for free! Congratulations on this work, Dr. Fierro!

Upcoming RMME/STAT Colloquium (4/29): Luke Keele, “Approximate Balancing Weights for Clustered Observational Study Designs”

RMME/STAT Joint Colloquium

Approximate Balancing Weights for Clustered Observational Study Designs

Dr. Luke Keele
University of Pennsylvania

Friday, April 29, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m35b82d4dc6d3e77536aa48390a02485b

In a clustered observational study, a treatment is assigned to groups and all units within the group are exposed to the treatment. Clustered observational studies are common in education where treatments are given to all students within some schools but withheld from all students in other schools. Clustered observational studies require specialized methods to adjust for observed confounders. Extant work has developed specialized matching methods that take key elements of clustered treatment assignment into account. Here, we develop a new method for statistical adjustment in clustered observational studies using approximate balancing weights. An approach based on approximate balancing weights improves on extant matching methods in several ways. First, our methods highlight the possible need to account for differential selection into clusters. Second, we can automatically balance interactions between unit level and cluster level covariates. Third, we can also balance high moments on key cluster level covariates. We also outline an overlap weights approach for cases where common support across treated and control clusters is poor. We introduce an augmented estimator that accounts for outcome information. We show that our approach has dual representation as an inverse propensity score weighting estimator based on a hierarchical propensity score model. We apply this algorithm to assess a school-based intervention through which students in treated schools were exposed to a new reading program during summer school. Overall, we find that balancing weights tend to produce superior balance relative to extant matching methods. Moreover, an approximate balancing weight approach tends to require less input from the user to achieve high levels of balance.
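As a rough illustration of the balancing-weights idea (not Dr. Keele's implementation, and ignoring the clustered structure and approximate-balance relaxation that are the talk's focus), entropy-balancing-style weights can be found by solving a small convex dual problem: weights over control units that exactly match the treated covariate means while staying as close to uniform as possible. All names below are hypothetical, and `scipy` is assumed.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_balance(X_control, target_means):
    """Weights on control units matching target covariate means.

    Solves the dual of: min sum w_i log w_i
    subject to sum_i w_i * X_i = target_means and sum_i w_i = 1.
    """
    def dual(lam):
        scores = X_control @ lam
        return np.log(np.sum(np.exp(scores))) - target_means @ lam

    res = minimize(dual, np.zeros(X_control.shape[1]), method="BFGS")
    w = np.exp(X_control @ res.x)
    return w / w.sum()
```

The exponential-tilting form guarantees strictly positive weights, a property the abstract's inverse-propensity-score representation also requires.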

 


Upcoming RMME Evaluation Colloquium (4/1): Cassandra Myers & Joan Levine, “UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation”

RMME Evaluation Colloquium

UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation

Cassandra Myers, The HRP Consulting Group
Joan Levine, University of Connecticut

Friday, April 1, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=me8fe20df2d511c754f1bd3f3539991b4

The UConn-Storrs Human Research Protection Program (HRPP) is dedicated to the protection of human subjects in research activities conducted under its auspices. The HRPP reviews human subjects research to ensure appropriate safeguards for the ethical, compliant, and safe conduct of research, as well as the protection of the rights and welfare of the human subjects who volunteer to participate. As the regulatory framework for the protections of human subjects is complex and multi-faceted, this session’s goals are to review the regulatory framework and how it applies to research and evaluation, the requirements for consent, when consent can be waived, and how to navigate the IRB process at UConn. This session will also review historical case studies to understand current requirements and how these events still affect populations, policies, and regulations.

 


Upcoming RMME/STAT Colloquium (3/25): Elizabeth Stuart, “Combining Experimental and Population Data to Estimate Population Treatment Effects”

RMME/STAT Joint Colloquium

Combining Experimental and Population Data to Estimate Population Treatment Effects

Dr. Elizabeth Stuart
Johns Hopkins Bloomberg School of Public Health

Friday, March 25, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=mb26cc940795502d8ae9ff7e274d435bb

With increasing attention being paid to the relevance of studies for real-world practice (especially in comparative effectiveness research), there is also growing interest in external validity and assessing whether the results seen in randomized trials would hold in target populations. While randomized trials yield unbiased estimates of the effects of interventions in the sample of individuals in the trial, they do not necessarily inform what the effects would be in some other, potentially somewhat different, population. While there has been increasing discussion of this limitation of traditional trials, relatively little statistical work has been done developing methods to assess or enhance the external validity of randomized trial results. In addition, new “big data” resources offer the opportunity to utilize data on broad target populations. This talk will discuss design and analysis methods for assessing and increasing external validity, as well as general issues that need to be considered when thinking about external validity. The primary analysis approach discussed will be a reweighting approach that equates the sample and target population on a set of observed characteristics. Underlying assumptions and methods to assess robustness to violation of those assumptions will be discussed. Implications for how future studies should be designed in order to enhance the ability to assess generalizability will also be discussed.
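One of the simplest instances of the reweighting approach mentioned in the abstract is post-stratification on a discrete covariate: estimate the treatment effect within each stratum of the trial, then average the stratum effects using the strata's shares in the target population rather than in the trial sample. The sketch below is our own illustration under that simplification (hypothetical names), not the specific estimators discussed in the talk.

```python
import numpy as np

def poststratified_effect(strata, treat, y, pop_shares):
    """Reweight stratum-specific trial effects to target-population shares.

    pop_shares maps each stratum label to its proportion in the target
    population (proportions should sum to 1).
    """
    est = 0.0
    for s, share in pop_shares.items():
        m = strata == s
        effect_s = y[m & (treat == 1)].mean() - y[m & (treat == 0)].mean()
        est += share * effect_s
    return est
```

If treatment effects vary across strata and the trial's stratum mix differs from the population's, this weighted average differs from the unadjusted trial estimate, which is exactly the external-validity gap the talk addresses.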

 


RMME Instructor, Ummugul Bezirhan, Earns 2022 Dissertation Prize!

Congratulations to RMME instructor, Ummugul Bezirhan! She recently earned the Psychometric Society’s 2022 Dissertation Prize for her research entitled, “Conditional dependence between response time and accuracy in cognitive diagnostic models”. She will present this work as a keynote speaker at the upcoming International Meeting of the Psychometric Society (IMPS), which will be held from July 11-15, 2022, at the University of Bologna, in Bologna, Italy. See this Psychometric Society announcement for more information.

We are thrilled to celebrate Dr. Bezirhan's fantastic accomplishment. Congratulations, Gul!