RMME Upcoming Events

RESCHEDULED RMME/STAT Colloquium (3/4): Donald Hedeker, “Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data”

RMME/STAT Joint Colloquium

Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data

Dr. Donald Hedeker
University of Chicago

Friday, March 4, at 3:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6944095dfb2736dba214a9c6f6397805

Intensive longitudinal data are increasingly encountered in many research areas. For example, ecological momentary assessment (EMA) and mobile health (mHealth) methods are often used to study subjective experiences within changing environmental contexts. In these studies, up to 30 or 40 observations are typically obtained for each subject over a period of a week or so, allowing one to characterize a subject’s mean and variance and to specify models for both. In this presentation, we focus on an adolescent smoking study using EMA in which interest centers on characterizing changes in mood variation. We describe how covariates can influence the mood variances, and we extend the statistical model by adding a subject-level random effect to the within-subject variance specification. This permits subjects to influence both the mean (location) and the variability (the square of the scale) of their mood responses. The random effects are then shared in a model of future smoking levels. These mixed-effects location scale models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.
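A mixed-effects location scale model of the kind described can be sketched as follows (the notation is illustrative, not taken from the talk): a location model for subject i at occasion j, a log-linear scale model for the within-subject variance, and a shared-parameter model for the distal smoking outcome.

```latex
% Location (mean) model with a subject-level random location effect
y_{ij} = \mathbf{x}_{ij}'\boldsymbol{\beta} + \upsilon_i + \varepsilon_{ij},
  \qquad \upsilon_i \sim N(0, \sigma_{\upsilon}^{2})

% Scale model: the within-subject (error) variance depends on covariates
% w_{ij} and a subject-level random scale effect
\sigma^{2}_{\varepsilon_{ij}} = \exp\!\bigl(\mathbf{w}_{ij}'\boldsymbol{\tau} + \omega_i\bigr),
  \qquad \omega_i \sim N(0, \sigma_{\omega}^{2})

% Shared-parameter model: both random effects predict future smoking s_i
s_i = \mathbf{z}_i'\boldsymbol{\gamma} + \lambda_{1}\,\upsilon_i + \lambda_{2}\,\omega_i + e_i
```

Here the coefficients on the shared random effects capture how a subject’s mood level and mood variability, respectively, relate to later smoking.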

 


Upcoming RMME/STAT Colloquium (2/25): Donald Hedeker, “Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data”

RMME/STAT Joint Colloquium

Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data

Dr. Donald Hedeker
University of Chicago

Friday, February 25, at 3:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6944095dfb2736dba214a9c6f6397805

Intensive longitudinal data are increasingly encountered in many research areas. For example, ecological momentary assessment (EMA) and mobile health (mHealth) methods are often used to study subjective experiences within changing environmental contexts. In these studies, up to 30 or 40 observations are typically obtained for each subject over a period of a week or so, allowing one to characterize a subject’s mean and variance and to specify models for both. In this presentation, we focus on an adolescent smoking study using EMA in which interest centers on characterizing changes in mood variation. We describe how covariates can influence the mood variances, and we extend the statistical model by adding a subject-level random effect to the within-subject variance specification. This permits subjects to influence both the mean (location) and the variability (the square of the scale) of their mood responses. The random effects are then shared in a model of future smoking levels. These mixed-effects location scale models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.

 


Upcoming RMME/STAT Colloquium (1/28): Andrew Ho, “Test Validation for a Crisis: Five Practical Heuristics for the Best and Worst of Times”

RMME/STAT Joint Colloquium

Test Validation for a Crisis: Five Practical Heuristics for the Best and Worst of Times

Dr. Andrew Ho
Harvard University

Friday, January 28, at 3:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=me0f80ec702d5508cf83ae6a23183fc3d

The COVID-19 pandemic has raised debate about the place of education and testing in a hierarchy of needs. What do tests tell us that other measures do not? Is testing worth the time? Do tests expose or exacerbate inequality? The academic consensus in the open-access AERA/APA/NCME Standards has not seemed to help proponents and critics of tests reach common ground. I propose five heuristics for test validation and demonstrate their usefulness for navigating test policy and test use in a time of crisis: (1) a “four quadrants” framework for the purposes of educational tests; (2) the “Five Cs,” a mnemonic for the five types of validity evidence in the Standards; (3) “RTQ,” a mantra reminding test users to read items; (4) the “3 Ws,” a user-first perspective on testing; and (5) the “Two A’s Tradeoff” between Assets and Accountability. I illustrate the application of these heuristics to the challenge of reporting aggregate-level test scores when populations and testing conditions change, as they have over the pandemic (e.g., An, Ho, & Davis, in press; Ho, 2021). I define and discuss these heuristics in the hope that they increase consensus and improve test use in the best and worst of times.

 


Upcoming RMME/STAT Colloquium (12/10): Jaime Lynn Speiser, “Machine Learning Prediction Modeling for Longitudinal Outcomes in Older Adults”

RMME/STAT Joint Colloquium

Machine Learning Prediction Modeling for Longitudinal Outcomes in Older Adults

Dr. Jaime Lynn Speiser
Wake Forest School of Medicine

Friday, December 10, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=macf5fd1f3af4a057a735eeefe6e40af0

Prediction models aim to help medical providers, individuals, and caretakers make informed, data-driven decisions about the risk of developing poor health outcomes, such as fall injury or mobility limitation in older adults. Most models for outcomes in older adults use cross-sectional data, although leveraging repeated measurements of predictors and outcomes over time may yield higher prediction accuracy. This talk will focus on longitudinal risk prediction models for mobility limitation in older adults, developed from the Health, Aging, and Body Composition dataset with a novel machine learning method called Binary Mixed Model (BiMM) forest. I will give an overview of two common machine learning methods, decision trees and random forests, before introducing the BiMM forest method. I will then apply BiMM forest to develop prediction models for mobility limitation in older adults.
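In model form, the BiMM forest idea can be sketched as follows (notation illustrative, not taken from the talk): a binary longitudinal outcome is modeled with a random-forest fixed part plus a subject-level random intercept,

```latex
% Binary longitudinal outcome: random-forest fixed part plus a
% subject-level random intercept b_i to account for repeated measures
\operatorname{logit} P(y_{ij} = 1 \mid b_i) = f(\mathbf{x}_{ij}) + b_i,
  \qquad b_i \sim N(0, \sigma_b^{2})
```

where f is estimated by a random forest rather than a linear predictor, and fitting alternates between updating the forest given the current random effects and updating the random effects given the forest’s predictions.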

 


Upcoming RMME Evaluation Colloquium (11/19): Holli Bayonas, “Behind the Evaluation: Holli Bayonas”

RMME Evaluation Colloquium

Behind the Evaluation: Holli Bayonas

Dr. Holli Bayonas
iEvaluate, LLC

Friday, November 19, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m83cfb05ec06ab0e6aecf026ad3e414f6

This colloquium gives participants an inside look at one evaluator’s pathway to becoming an evaluation professional. Dr. Bayonas will describe her personal career trajectory, along with the day-to-day responsibilities of her current position at iEvaluate. She will compare and contrast her opportunities to work in industry versus working for herself as an independent evaluation consultant. In addition, Dr. Bayonas will discuss her approach to balancing career and professional goals with the demands of home life, including how she and her partner navigated the prioritization and support of each other’s career aspirations. She will close the talk with career and personal advice for her younger self.

 


Upcoming RMME/STAT Colloquium (11/5): Jerry Reiter, “How Auxiliary Information Can Help Your Missing Data Problem”

RMME/STAT Joint Colloquium

How Auxiliary Information Can Help Your Missing Data Problem

Dr. Jerry Reiter
Duke University

Friday, November 5, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m86ce051dbd968c3317ff09c343d31f40

Many surveys (and other types of databases) suffer from unit and item nonresponse. Typical practice accounts for unit nonresponse by inflating respondents’ survey weights, and accounts for item nonresponse using some form of imputation. Most methods implicitly treat both sources of nonresponse as missing at random. Sometimes, however, one knows information about the marginal distributions of some of the variables subject to missingness. In this talk, I discuss how such information can be leveraged to handle nonignorable missing data, including allowing different mechanisms for unit and item nonresponse (e.g., nonignorable unit nonresponse and ignorable item nonresponse). I illustrate the methods using data on voter turnout from the Current Population Survey.
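As a toy sketch of the idea (the function and numbers below are hypothetical, not from the talk): for a binary item with a known population margin, that auxiliary margin is exactly the extra information needed to identify a nonignorable nonresponse mechanism that depends only on the item's own value.

```python
def nonignorable_item_adjust(margin, resp_rate, resp_mean):
    """Solve a nonignorable item-nonresponse model for a binary item Y,
    where response R depends only on Y, i.e. P(R=1 | Y=y) = r_y.

    margin    -- known population P(Y=1) (the auxiliary information)
    resp_rate -- observed response rate P(R=1)
    resp_mean -- observed mean of Y among responders
    """
    r1 = resp_rate * resp_mean / margin                # P(R=1 | Y=1)
    r0 = resp_rate * (1 - resp_mean) / (1 - margin)    # P(R=1 | Y=0)
    p_miss = margin * (1 - r1) / (1 - resp_rate)       # P(Y=1 | R=0)
    return r1, r0, p_miss

# Known turnout margin 60%, 80% item response, 70% of responders report voting:
r1, r0, p_miss = nonignorable_item_adjust(0.60, 0.80, 0.70)
```

Nonrespondents are imputed with a lower voting probability than respondents report, by exactly the amount needed to reproduce the known margin.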

 


Upcoming RMME/STAT Colloquium (10/1): Fan Li, “Overlap Weighting for Causal Inference”

RMME/STAT Joint Colloquium

Overlap Weighting for Causal Inference

Dr. Fan Li
Duke University

Friday, October 1, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=ma4999c9bf3ac28d40a9686eec33d70ed

Covariate balance is crucial for causal comparisons. Weighting is a common strategy to balance covariates in observational studies. We propose a general class of weights—the balancing weights—that balance the weighted distributions of the covariates between treatment groups. These weights incorporate the propensity score to weight each group toward an analyst-selected target population. The class unifies existing weighting methods and includes commonly used weights, such as inverse-probability weights, as special cases. Within the class, we highlight the overlap weighting method, which has been widely adopted in applied research. The overlap weight of each unit is proportional to the probability of that unit being assigned to the opposite group. The overlap weights are bounded and minimize the asymptotic variance of the weighted average treatment effect among the class of balancing weights; they also possess a desirable exact balance property. Extensions of overlap weighting to multiple treatments, survival outcomes, and subgroup analyses will also be discussed.
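A minimal sketch of the weighting step (assuming propensity scores are already estimated; function names are hypothetical):

```python
def overlap_weights(propensity, treated):
    """Overlap weight: a treated unit gets 1 - e(x), a control gets e(x),
    so each unit is weighted by its probability of being in the *other*
    group. Weights stay bounded in [0, 1] even when e(x) nears 0 or 1."""
    return [(1.0 - e) if t else e for e, t in zip(propensity, treated)]

def _weighted_mean(values, weights):
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def ato_estimate(outcomes, treated, propensity):
    """Difference of overlap-weighted group means: the average treatment
    effect in the overlap population (ATO)."""
    w = overlap_weights(propensity, treated)
    t_idx = [i for i, t in enumerate(treated) if t]
    c_idx = [i for i, t in enumerate(treated) if not t]
    y1 = _weighted_mean([outcomes[i] for i in t_idx], [w[i] for i in t_idx])
    y0 = _weighted_mean([outcomes[i] for i in c_idx], [w[i] for i in c_idx])
    return y1 - y0
```

Contrast this with inverse-probability weights 1/e(x) and 1/(1 - e(x)), which can explode when propensities approach 0 or 1; overlap weights instead downweight exactly those units.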

 


Upcoming RMME/STAT Colloquium (9/10): Susan Murphy, “Assessing Personalization in Digital Health”

RMME/STAT Joint Colloquium

Assessing Personalization in Digital Health

Dr. Susan Murphy
Harvard University

Friday, September 10, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m883b79a16b8b2c21038a80da6301cba3

Reinforcement Learning provides an attractive suite of online learning methods for personalizing interventions in Digital Health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom it appears that the algorithm has indeed learned in which contexts the user is more responsive to a particular intervention. But could this have happened completely by chance? I discuss some first approaches to addressing these questions.
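One simple way to ask "could this have happened completely by chance?" is a permutation test. The sketch below is a hypothetical illustration (not the speaker's method): shuffle the context labels in one user's decision log to build a null distribution for the association between context and chosen action.

```python
import random

def personalization_pvalue(contexts, actions, n_perm=2000, seed=0):
    """Permutation p-value for whether action choice depends on context.
    contexts, actions: parallel 0/1 lists from one user's decision log.
    Statistic: |P(action=1 | context=1) - P(action=1 | context=0)|."""
    def stat(cs):
        a1 = [a for c, a in zip(cs, actions) if c == 1]
        a0 = [a for c, a in zip(cs, actions) if c == 0]
        return abs(sum(a1) / len(a1) - sum(a0) / len(a0))

    observed = stat(contexts)
    rng = random.Random(seed)
    shuffled = list(contexts)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)            # break any context-action link
        if stat(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)     # small p: unlikely to be chance
```

A small p-value says the apparent context-sensitivity of the algorithm's choices for this user would rarely arise from label shuffling alone.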

 


Upcoming RMME/STAT Colloquium (6/18): Jon Krosnick, “The Collapse of Scientific Standards in the World of High Visibility Survey Research”

RMME/STAT Joint Colloquium

The Collapse of Scientific Standards in the World of High Visibility Survey Research

Dr. Jon Krosnick
Stanford University

Friday, June 18, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6b0af866c35360de3b7819e6204bc121

In parallel with the explosion of the replication crisis across the sciences, survey research has experienced its own, very public, crisis of credibility. In election after election in recent years, pre-election polls in the U.S., Britain, Israel, and elsewhere have been widely viewed as inaccurate. After each failure to accurately predict election outcomes, the survey research profession has conducted a self-study to explain its inaccuracies, presumably to learn useful lessons for improving practice. And yet the inaccuracies have continued unabated. This talk will review the evidence of inaccuracy and will propose and test an explanation that has received little attention: that leading survey researchers have all but abandoned well-validated scientific procedures for data collection and analysis, and have misrepresented their procedures as having more scientific integrity than they in fact have. Interestingly, the lessons learned have implications for academic research in the social sciences, in medicine, and in other fields.

 


Upcoming RMME/STAT Colloquium (5/21): David Kaplan, “Developments and Extensions in the Quantification of Model Uncertainty: A Bayesian Perspective”

RMME/STAT Joint Colloquium

Developments and Extensions in the Quantification of Model Uncertainty: A Bayesian Perspective

Dr. David Kaplan
University of Wisconsin-Madison

Friday, May 21, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m9c9a2619f1a5b404889a0fda12b7a6bc

Issues of model selection have dominated the theoretical and applied statistical literature for decades. Model selection methods such as ridge regression, the lasso, and the elastic net have replaced ad hoc approaches such as stepwise regression. In the end, however, these methods lead to a single final model that is often treated as though it had been specified ahead of time, thus ignoring the uncertainty inherent in the search for a final model. One method that has enjoyed a long history of theoretical development and substantive application, and that accounts directly for uncertainty in model selection, is Bayesian model averaging (BMA). BMA addresses the problem of model selection not by selecting a final model, but by averaging over a space of possible models that could have generated the data. The purpose of this talk is to provide a detailed and up-to-date review of BMA, with a focus on its foundations in Bayesian decision theory and Bayesian predictive modeling. We consider the selection of parameter and model priors as well as methods for evaluating predictions based on BMA. We also consider important assumptions underlying BMA and extensions of model averaging methods that relax these assumptions, particularly the method of Bayesian stacking. Extensions to problems of missing data and probabilistic forecasting in large-scale educational assessments are also discussed.
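A minimal sketch of the averaging step (illustrative only; it uses the standard BIC approximation to posterior model probabilities under equal model priors, which is a common shortcut rather than anything specific to this talk):

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    p(M_k | y) is proportional to exp(-BIC_k / 2) under equal priors.
    Subtracting the best BIC first keeps the exponentials stable."""
    best = min(bics)
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_predict(predictions, bics):
    """Model-averaged point prediction: each candidate model's
    prediction weighted by its approximate posterior probability."""
    return sum(w * p for w, p in zip(bma_weights(bics), predictions))
```

Rather than committing to the single lowest-BIC model, every candidate contributes in proportion to its posterior plausibility, which is the uncertainty-propagating behavior the abstract describes.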

 
