RMME News & Updates

Dr. Bianca Montrosse-Moorhead To Give Evaluation Café Talk, 2/28

What are RMME faculty up to? Check out Dr. Bianca Montrosse-Moorhead’s Evaluation Café talk on February 28, 2024, at 12pm ET, to find out! Her featured presentation is titled “Modernizing Evaluation’s Cartography, Architectural Blueprint, and Definition.” Register here to reserve your spot in this excellent session: http://tinyurl.com/EvalCafe2024-BMM

ABSTRACT: In this Evaluation Café presentation, Dr. Montrosse-Moorhead will preview the Country of the Mind map, the location of the Evaluation Building on that map, and a close-up of the building itself. None of these have been visualized before in published scholarship. Dr. Montrosse-Moorhead will also share the proposed amended definition of evaluation, why it is necessary, and the implications of adopting it: for evaluation practice; for the instruments, methods, and techniques we use; and for evaluation’s theoretical and metatheoretical scholarship.

Check out Dr. Bianca Montrosse-Moorhead’s upcoming Evaluation Café talk, on February 28, at 12pm ET!

RMME Programs Celebrate Their Fall 2023 Grads!

UConn’s Research Methods, Measurement, & Evaluation (RMME) Programs are excited to celebrate our newest graduates from the RMME Master’s degree program and the RMME Graduate Certificate in Program Evaluation! We cannot wait to see the many ways you will make us proud as new RMME graduates! Congratulations to all, from the Research Methods, Measurement, & Evaluation Community!

Celebrating Our Fall 2023 RMME Programs Graduates! Congratulations to all!

Upcoming RMME/STAT Colloquium (12/1): Irini Moustaki, “Some New Developments on Pairwise Likelihood Estimation & Testing in Latent Variable Models”

RMME/STAT Joint Colloquium

Some New Developments on Pairwise Likelihood Estimation & Testing in Latent Variable Models

Dr. Irini Moustaki
London School of Economics

Friday, December 1, at 11AM ET

https://tinyurl.com/rmme-Moustaki

Pairwise likelihood is a limited-information method for estimating latent variable models, including factor analysis of categorical data. It avoids evaluating high-dimensional integrals and is therefore computationally more efficient than full-information maximum likelihood. This talk will discuss two new developments in the estimation and testing of latent variable models for binary data under the pairwise likelihood framework. The first concerns estimation and limited-information goodness-of-fit test statistics under complex sampling; the performance of the estimator and the proposed test statistics under simple random sampling and unequal-probability sampling is evaluated using simulated data. The second focuses on computational aspects of pairwise likelihood: despite its computational advantages, it can still be demanding for large-scale problems involving many observed variables. We propose an approximation of the pairwise likelihood estimator, derived from an optimization procedure relying on stochastic gradients. The stochastic gradients are constructed by subsampling the pairwise log-likelihood contributions, with the subsampling scheme controlling the per-iteration computational complexity. The stochastic estimator is shown to be asymptotically equivalent to the pairwise likelihood estimator; its finite-sample performance, however, can be improved by compounding the sampling variability of the data with the uncertainty introduced by the subsampling scheme. We demonstrate the performance of the proposed method using simulation studies and two real-data applications.
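
To make the subsampling idea concrete, below is a minimal sketch, not the speaker’s implementation: it assumes a toy one-factor probit model for binary items with zero thresholds, so that each pairwise probability has a closed form via the orthant probability 1/4 + arcsin(ρ)/(2π). Every name, tuning constant, and modeling choice here is an illustrative assumption.

```python
# Illustrative sketch of stochastic-gradient pairwise likelihood for a
# one-factor probit model with binary items and zero thresholds. With
# standardized underlying variables, the tetrachoric correlation of items
# i and j is rho = lam[i] * lam[j], and P(y_i = 1, y_j = 1) =
# P(y_i = 0, y_j = 0) = 1/4 + arcsin(rho) / (2*pi).
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Simulate binary responses from the one-factor probit model.
n, p = 2000, 10
lam_true = rng.uniform(0.4, 0.8, size=p)        # true loadings
eta = rng.standard_normal(n)                    # latent trait
noise = rng.standard_normal((n, p)) * np.sqrt(1.0 - lam_true**2)
y = (eta[:, None] * lam_true + noise > 0).astype(int)

# Precompute, for every item pair, how many respondents agree/disagree.
pairs = list(itertools.combinations(range(p), 2))
counts = {}
for i, j in pairs:
    same = int(np.sum(y[:, i] == y[:, j]))      # (1,1) or (0,0) cells
    counts[(i, j)] = (same, n - same)

def pair_grad(lam, i, j):
    """Gradient of one pairwise log-likelihood contribution w.r.t. lam."""
    rho = np.clip(lam[i] * lam[j], -0.999, 0.999)
    p_same = 0.25 + np.arcsin(rho) / (2.0 * np.pi)    # P(1,1) = P(0,0)
    c_same, c_diff = counts[(i, j)]
    dp = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho**2))  # d p_same / d rho
    dl_drho = dp * (c_same / p_same - c_diff / (0.5 - p_same))
    g = np.zeros_like(lam)
    g[i], g[j] = dl_drho * lam[j], dl_drho * lam[i]   # chain rule on rho
    return g

# Stochastic gradient ascent: each iteration touches only m of the
# p*(p-1)/2 pairs, rescaled so the subsampled gradient is unbiased
# for the full pairwise-likelihood gradient.
lam, m = np.full(p, 0.3), 8
for t in range(1, 3001):
    batch = rng.choice(len(pairs), size=m, replace=False)
    g = (len(pairs) / m) * sum(pair_grad(lam, *pairs[k]) for k in batch)
    # Shrinking step size; clipping keeps loadings in (0, 1) and
    # sidesteps the sign indeterminacy of the factor.
    lam = np.clip(lam + (0.5 / np.sqrt(t)) * g / n, 0.01, 0.99)

print("true:", np.round(lam_true, 2))
print("est :", np.round(lam, 2))
```

Each iteration evaluates only m of the p(p − 1)/2 pairwise contributions, which is exactly the per-iteration cost control the abstract describes; the shrinking step size is the standard stochastic-approximation device for averaging out the extra noise the subsampling scheme introduces.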


RMME Faculty and Students Present at NERA 2023

Dr. D. Betsy McCoach (RMME faculty member and symposium discussant) and RMME PhD students Amanda Sutter, Marcus Harris, Claudia Ventura, Faeze Safari, and Kirsten Reyna present a symposium on “The Future of Educational Measurement” at NERA 2023. Congratulations on this excellent symposium, from the Research Methods, Measurement, & Evaluation Community!

RMME Faculty and Students Present at NERA 2023


Symposium Chair/Discussant: D. Betsy McCoach

Symposium Presenters: Amanda Sutter, Marcus Harris, Claudia Ventura, Faeze Safari, & Kirsten Reyna

Symposium Abstract: Emergent measurement scholars provide their perspectives on current issues and future directions in educational measurement. The four presentations focus on four critical areas: (1) equity and social justice, (2) the context in which we operate, (3) the rise of artificial intelligence, and (4) graduate training in educational measurement.

Symposium Presentations:

1) Context Matters
Amanda Sutter, Marcus Harris

2) Equity and Social Justice Issues and Values in Measurement
Claudia Ventura, Amanda Sutter

3) Exploring the Challenges and Potential of Artificial Intelligence in Educational Measurement
Faeze Safari, Kirsten Reyna

4) Transforming Graduate School Training: Advancing Measurement and Open Science
Marcus Harris, Kirsten Reyna


RMME Instructor, Dr. Brenna Butler, Presents at AEA 2023

Dr. Brenna Butler, RMME instructor, presents at the AEA 2023 Conference. Dr. Butler shared two presentations at AEA 2023: “Getting down to the essentials: How to effectively measure need on a needs assessment survey using a gap analysis approach” and “How can one-on-one client interactions be measured as organization-wide impacts?” Congratulations on these fantastic presentations, from the Research Methods, Measurement, & Evaluation Community!


RMME Instructor, Dr. Brenna Butler, Presents at AEA 2023


Author and Presenter: Brenna Butler

Presentation Title: Getting down to the essentials: How to effectively measure need on a needs assessment survey using a gap analysis approach

Abstract: Measuring need through a needs assessment survey is rarely straightforward, given the many different methods used to define what a “need” is. In this Ignite presentation, participants will learn about one concrete way of measuring participants’ needs through a gap analysis, which measures both participants’ present state and their desired state (Watkins et al., 2012). Participants will be given examples of survey questions that measure these states in a valid manner within a systems-thinking framework, meaning that present and desired states are examined with personal, social, and societal influences in consideration (Arthur & McMahon, 2005). The strengths and weaknesses of this approach relative to other ways of measuring need will be briefly described. Participants will leave this Ignite presentation with a contextual framework to apply to their own needs assessment surveys, along with a template of survey questions that can be structured to measure needs effectively.
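
For readers new to this approach, here is a minimal sketch of gap-analysis scoring, assuming a 1-5 Likert scale; the items and ratings are wholly hypothetical and do not come from the presentation itself:

```python
# Minimal sketch of gap-analysis need scoring (hypothetical data).
# Each item is rated twice: present state ("Where are you now?") and
# desired state ("Where do you want to be?"), both on a 1-5 scale.
import pandas as pd

responses = pd.DataFrame({
    "item": ["data analysis skills", "grant writing", "program planning"],
    "present_state": [2, 3, 4],
    "desired_state": [5, 4, 4],
})

# The need score is the gap: desired minus present. Larger positive
# gaps indicate greater need; zero or negative gaps indicate no need.
responses["need_gap"] = responses["desired_state"] - responses["present_state"]

# Rank items by gap to prioritize which needs to address first.
print(responses.sort_values("need_gap", ascending=False))
```

In practice each respondent’s gaps would be aggregated (e.g., the mean gap per item across respondents) before ranking, but the scoring logic is the same.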


Presenters: Brenna Butler, Michael Hamel, Matthew Spindler, Malinda Suprise

Presentation Title: Hidden stories: How can one-on-one client interactions be measured as organization-wide impacts?

Abstract: In large organizations, such as Cooperative Extension systems, impacts that arise from one-on-one interactions with clientele are often lost as “hidden stories” because the organization lacks effective data-tracking measures. This session will describe the methodology used to create a survey tool that captures the outputs and outcomes of educator-clientele interactions at Penn State Extension, focusing on supporting data utility at multiple levels (i.e., educator, supervisor, and leadership council). The session will also describe how the online survey, through consistent and structured data input and storage processes, built a foundation for data analytics in the organization; those processes will facilitate future data analysis for complex decision-making throughout the organization’s evaluative cycles of activities and programs. The importance of stakeholder involvement in developing this survey, as a form of boundary-making, will be discussed in relation to maximizing the utility of data collection throughout the organization. Data collection is inherently a form of boundary-making that determines which elements of a situation should be included in the informational picture constructed of a context and which should be excluded (Schwandt, 2018). Boundary-making was used in this instrument development process to draw on stakeholder involvement in defining what information should be collected and curated (Archibald, 2020). Participants should leave this demonstration with the knowledge and tools to employ a similar methodology at their own organizations to track individual interactions using a structure that allows for data aggregation at the organizational level.


RMME PhD student, Amanda Sutter, Presents at AEA 2023

RMME PhD student, Amanda Sutter, discusses “The Story Behind an Evaluation Practice Survey: Insights from Cognitive Interviews” at the 2023 AEA Conference. Congratulations on this wonderful presentation, from the Research Methods, Measurement, & Evaluation Community!


RMME PhD student, Amanda Sutter, presents at AEA 2023


Author and Presenter: Amanda Sutter

Title: The Story Behind an Evaluation Practice Survey: Insights from Cognitive Interviews

Abstract: What do evaluators think about when they are asked questions about evaluation practice? What narratives are triggered by what words? This research study used cognitive interviews to understand the thought processes and beliefs of evaluators as they reflect on their practice. Cognitive interviews are a powerful instrument-design strategy that helps reveal the narratives and meanings behind survey responses. Building on last year’s field pilot of a new evaluation practice instrument, this research-on-evaluation study involved cognitive interviews with 20 evaluators, recruited through purposive nonprobability sampling, to gather diverse perspectives on the instrument. The semi-structured interview process followed the Willis Method, offering participants an easy process for exploring their responses in depth. This paper shares findings from the interviews and how the data will be used to improve the instrument, particularly to ensure that the phrasing of questions and subsequent interpretations align with the intended construct definitions and narrative of the instrument. The paper will be of interest to a variety of audiences: practitioners and commissioners will have opportunities to think about their own surveys and how cognitive interviews may be a useful tool for evaluation studies, and other researchers will also benefit from these opportunities while learning about a potential instrument that may be useful in their own scholarly work.