Month: March 2022

Upcoming RMME Evaluation Colloquium (4/1): Cassandra Myers & Joan Levine, “UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation”

RMME Evaluation Colloquium

UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation

Cassandra Myers, The HRP Consulting Group
Joan Levine, University of Connecticut

Friday, April 1, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=me8fe20df2d511c754f1bd3f3539991b4

The UConn-Storrs Human Research Protection Program (HRPP) is dedicated to the protection of human subjects in research activities conducted under its auspices. The HRPP reviews human subjects research to ensure appropriate safeguards for the ethical, compliant, and safe conduct of research, as well as the protection of the rights and welfare of the human subjects who volunteer to participate. Because the regulatory framework for the protection of human subjects is complex and multi-faceted, this session will review that framework and how it applies to research and evaluation, the requirements for consent, when consent can be waived, and how to navigate the IRB process at UConn. This session will also review historical case studies to understand current requirements and how these events still affect populations, policies, and regulations.

 


Upcoming RMME/STAT Colloquium (3/25): Elizabeth Stuart, “Combining Experimental and Population Data to Estimate Population Treatment Effects”

RMME/STAT Joint Colloquium

Combining Experimental and Population Data to Estimate Population Treatment Effects

Dr. Elizabeth Stuart
Johns Hopkins Bloomberg School of Public Health

Friday, March 25, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=mb26cc940795502d8ae9ff7e274d435bb

With increasing attention being paid to the relevance of studies for real-world practice (especially in comparative effectiveness research), there is also growing interest in external validity and assessing whether the results seen in randomized trials would hold in target populations. While randomized trials yield unbiased estimates of the effects of interventions in the sample of individuals in the trial, they do not necessarily inform what the effects would be in some other, potentially somewhat different, population. While there has been increasing discussion of this limitation of traditional trials, relatively little statistical work has been done developing methods to assess or enhance the external validity of randomized trial results. In addition, new “big data” resources offer the opportunity to utilize data on broad target populations. This talk will discuss design and analysis methods for assessing and increasing external validity, as well as general issues that need to be considered when thinking about external validity. The primary analysis approach discussed will be a reweighting approach that equates the sample and target population on a set of observed characteristics. Underlying assumptions and methods to assess robustness to violation of those assumptions will be discussed. Implications for how future studies should be designed in order to enhance the ability to assess generalizability will also be discussed.
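The reweighting approach the abstract describes — equating the trial sample and the target population on observed characteristics — is often implemented as inverse-odds-of-selection weighting. The sketch below is a minimal illustration under assumed simulated data: a membership model estimates each trial unit's probability of being in the trial given a covariate, and the inverse odds become weights that pull the trial sample's covariate distribution toward the population's. The single-covariate setup and the logistic membership model are illustrative choices, not details from the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: the trial over-represents units with a higher covariate value.
n_trial, n_pop = 500, 5000
x_trial = rng.normal(1.0, 1.0, n_trial)  # covariate in the randomized trial
x_pop = rng.normal(0.0, 1.0, n_pop)      # covariate in the target population

# Fit a membership model on the stacked data: P(in trial | x).
x = np.concatenate([x_trial, x_pop]).reshape(-1, 1)
s = np.concatenate([np.ones(n_trial), np.zeros(n_pop)])
membership = LogisticRegression().fit(x, s)
p = membership.predict_proba(x_trial.reshape(-1, 1))[:, 1]

# Inverse-odds weights reweight trial units to resemble the population.
w = (1 - p) / p
w /= w.mean()

unweighted_gap = abs(x_trial.mean() - x_pop.mean())
weighted_gap = abs(np.average(x_trial, weights=w) - x_pop.mean())
print(f"covariate gap before weighting: {unweighted_gap:.3f}")
print(f"covariate gap after weighting:  {weighted_gap:.3f}")
```

After weighting, the trial's weighted covariate mean sits much closer to the population mean; the same weights, applied to trial outcomes, yield a population-targeted treatment-effect estimate under the assumption that selection into the trial depends only on the observed characteristics.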

 
