RMME student Briana Hennessy successfully defended her doctoral dissertation, entitled “Estimating School-Level Performance on Test Subdimensions.” Congratulations, Dr. Hennessy!
Archived Posts
Upcoming RMME Evaluation Colloquium (11/19): Holli Bayonas, “Behind the Evaluation: Holli Bayonas”
RMME Evaluation Colloquium
Behind the Evaluation: Holli Bayonas
Dr. Holli Bayonas
iEvaluate, LLC
Friday, November 19th, at 12:00PM ET
https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m83cfb05ec06ab0e6aecf026ad3e414f6
This colloquium gives participants an inside look at one evaluator’s pathway into the profession. Dr. Bayonas will describe her personal career trajectory, along with the day-to-day responsibilities of her current position at iEvaluate. She will compare working in industry with working for herself as an independent evaluation consultant. In addition, Dr. Bayonas will discuss her approach to balancing career and professional goals with the demands of home life, including how she and her partner navigated prioritizing and supporting each other’s career aspirations. She will close the talk with career and personal advice for her younger self.
Upcoming RMME/STAT Colloquium (11/5): Jerry Reiter, “How Auxiliary Information Can Help Your Missing Data Problem”
RMME/STAT Joint Colloquium
How Auxiliary Information Can Help Your Missing Data Problem
Dr. Jerry Reiter
Duke University
Friday, November 5th, at 12:00PM ET
https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m86ce051dbd968c3317ff09c343d31f40
Many surveys (and other types of databases) suffer from unit and item nonresponse. Typical practice accounts for unit nonresponse by inflating respondents’ survey weights, and accounts for item nonresponse using some form of imputation. Most methods implicitly treat both sources of nonresponse as missing at random. Sometimes, however, one knows information about the marginal distributions of some of the variables subject to missingness. In this talk, I discuss how such information can be leveraged to handle nonignorable missing data, including allowing different mechanisms for unit and item nonresponse (e.g., nonignorable unit nonresponse and ignorable item nonresponse). I illustrate the methods using data on voter turnout from the Current Population Survey.
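As a loose illustration of the general idea (not Dr. Reiter’s specific method), the sketch below post-stratifies respondent weights to a known marginal distribution, so that the weighted sample reproduces an auxiliary margin such as official voter turnout. All variable names and numbers here are hypothetical.

```python
# Illustrative sketch: rescale respondent weights so the weighted share of
# each category matches a known (auxiliary) marginal distribution.
import numpy as np

def poststratify(categories, base_weights, known_margin):
    """Rescale weights so each category's weight share equals known_margin."""
    categories = np.asarray(categories)
    w = np.asarray(base_weights, dtype=float)
    adjusted = w.copy()
    total = w.sum()
    for cat, target_share in known_margin.items():
        mask = categories == cat
        observed_share = w[mask].sum() / total
        adjusted[mask] *= target_share / observed_share
    return adjusted

# Hypothetical respondents: voted (1) or not (0). Suppose official records
# say 60% of the population voted, but 75% of respondents report voting.
voted = [1, 1, 1, 0, 1, 0, 1, 1]
w = poststratify(voted, np.ones(8), {1: 0.60, 0: 0.40})
share_voted = w[np.array(voted) == 1].sum() / w.sum()  # now matches 0.60
```

After adjustment, the weighted sample reproduces the known turnout margin, which is one simple way auxiliary information can correct for respondents who differ systematically from nonrespondents.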
Dani Yomtov Successfully Defends Doctoral Dissertation
Upcoming RMME/STAT Colloquium (10/1): Fan Li, “Overlap Weighting for Causal Inference”
RMME/STAT Joint Colloquium
Overlap Weighting for Causal Inference
Dr. Fan Li
Duke University
Friday, October 1st, at 12:00PM ET
https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=ma4999c9bf3ac28d40a9686eec33d70ed
Covariate balance is crucial for causal comparisons. Weighting is a common strategy to balance covariates in observational studies. We propose a general class of weights—the balancing weights—that balance the weighted distributions of the covariates between treatment groups. These weights incorporate the propensity score to weight each group to an analyst-selected target population. This class unifies existing weighting methods, including commonly used weights such as inverse-probability weights as special cases. Within the class, we highlight the overlap weighting method, which has been widely adopted in applied research. The overlap weight of each unit is proportional to the probability of that unit being assigned to the opposite group. The overlap weights are bounded and minimize the asymptotic variance of the weighted average treatment effect among the class of balancing weights. The overlap weights also possess a desirable exact balance property. Extension of overlap weighting to multiple treatments, survival outcomes, and subgroup analysis will also be discussed.
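The core mechanics described in the abstract can be sketched in a few lines: the overlap weight is 1 − e(x) for treated units and e(x) for controls, where e(x) is the propensity score, and the weighted treatment effect is the difference of weighted group means. The data, propensity values, and function name below are illustrative, not from the talk.

```python
# Illustrative overlap-weighting sketch: each unit is weighted by its
# probability of assignment to the *opposite* group.
import numpy as np

def overlap_ate(y, z, e):
    """Weighted treatment-effect estimate using overlap weights."""
    y, z, e = map(np.asarray, (y, z, e))
    w = np.where(z == 1, 1 - e, e)                       # overlap weights
    mu1 = np.sum(w * z * y) / np.sum(w * z)              # weighted treated mean
    mu0 = np.sum(w * (1 - z) * y) / np.sum(w * (1 - z))  # weighted control mean
    return mu1 - mu0

y = [3.0, 2.5, 4.0, 1.0, 2.0, 3.5]   # hypothetical outcomes
z = [1, 1, 1, 0, 0, 0]               # treatment indicator
e = [0.8, 0.6, 0.5, 0.4, 0.3, 0.5]   # estimated propensity scores
effect = overlap_ate(y, z, e)
```

Because e(x) and 1 − e(x) both lie in [0, 1], the weights are bounded, which is what avoids the instability that extreme propensity scores cause for inverse-probability weights.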
Xiaowen Liu Successfully Defends Doctoral Dissertation
RMME student Xiaowen Liu successfully defended her doctoral dissertation, entitled “The Impact of Missing Data on Parameter Estimation in Computerized Adaptive Testing.” Congratulations, Dr. Liu!
Anthony J. Gambino Successfully Defends Doctoral Dissertation
Anthony J. Gambino successfully defended his doctoral dissertation, entitled “Evaluating the Performance of Continuous Analysis of Symmetrically Predicted Endogenous Subgroups.” Congratulations, Dr. Gambino!
Upcoming RMME/STAT Colloquium (9/10): Susan Murphy, “Assessing Personalization in Digital Health”
RMME/STAT Joint Colloquium
Assessing Personalization in Digital Health
Dr. Susan Murphy
Harvard University
Friday, September 10th, at 12:00PM ET
https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m883b79a16b8b2c21038a80da6301cba3
Reinforcement Learning provides an attractive suite of online learning methods for personalizing interventions in Digital Health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom it appears that the algorithm has indeed learned in which contexts the user is more responsive to a particular intervention. But could this have happened completely by chance? I discuss some first approaches to addressing these questions.
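One generic way to pose the “could this have happened completely by chance?” question (a simple permutation-test sketch, not Dr. Murphy’s approach) is to shuffle a user’s context labels and ask how often the shuffled data produce an outcome gap as large as the one observed. All names and data below are made up.

```python
# Illustrative permutation test: is a user's response advantage in context "A"
# over context "B" larger than chance relabeling would produce?
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(outcomes, contexts, n_perm=10_000):
    """Share of context relabelings with a gap at least as large as observed."""
    outcomes = np.asarray(outcomes, dtype=float)
    contexts = np.asarray(contexts)
    observed = outcomes[contexts == "A"].mean() - outcomes[contexts == "B"].mean()
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(contexts)
        gap = outcomes[shuffled == "A"].mean() - outcomes[shuffled == "B"].mean()
        if gap >= observed:
            count += 1
    return count / n_perm

# A user who responds much better in context "A" than in context "B":
p = permutation_pvalue([5, 5, 5, 1, 1, 1], ["A", "A", "A", "B", "B", "B"])
```

A small p suggests the apparent context-specific responsiveness is unlikely under random relabeling; assessing this after an algorithm has adapted to the same data is precisely the harder problem the talk addresses.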
RMME Master’s Student, Daniel Doerr, Secures New Position
Congratulations to Daniel Doerr, a part-time Master’s student in the Research Methods, Measurement, & Evaluation program at UConn! Daniel currently serves as the Director of Student Affairs Planning, Assessment, and Evaluation in the Office of the Vice President for Student Affairs at the University of Connecticut. He recently accepted a new position as an Associate Performance Auditor with the State of Connecticut Auditors of Public Accounts Office, and will begin work on July 16th.
Please join the RMME community as we congratulate Daniel on this career-changing milestone!
RMME Faculty & Students Publish New Article: “Evaluator Education Curriculum: Which Competencies Ought to Be Prioritized in Master’s and Doctoral Programs?”
Congratulations to Bianca Montrosse-Moorhead, Anthony J. Gambino, Laura M. Yahn, Mindy Fan, and Anne T. Vo on their recent publication: “Evaluator Education Curriculum: Which Competencies Ought to Be Prioritized in Master’s and Doctoral Programs?” This article appears in the American Journal of Evaluation (https://doi.org/10.1177/10982140211020326).
For more information, visit: https://journals.sagepub.com/doi/10.1177/10982140211020326
A budding area of research is devoted to studying evaluator curriculum, yet to date it has focused exclusively on describing the content and emphasis of topics or competencies in university-based programs. This study expands the foci of research efforts and investigates the extent to which evaluators agree on which competencies should guide the development and implementation of evaluator education. The study used the Delphi method with evaluators (n = 11) and included three rounds of online surveys, with follow-up interviews between rounds. This article discusses the competencies on which evaluators were able to reach consensus. Where consensus was not found, possible reasons are offered; where consensus was found, the necessity of each competency at both the master’s and doctoral levels is described. Findings are situated in ongoing debates about what novice evaluators uniquely need to know and be able to do, and about the purpose of evaluator education.