Frontiers in Biostatistics Seminar

The Frontiers in Biostatistics seminar features speakers whose work in biostatistical methodology has relevance to oncology research. The series aims to highlight topics of broad interest to the Department and focuses on inferential approaches to analysis, clinical trial designs, biomarker evaluation, and other topics in translational biostatistics.

Want to be on the mailing list? Click here to sign up

Coming Up

May 19, 2020
1:00 PM

Marianne Menictas, Ph.D.
Postdoctoral Fellow with Susan Murphy
Department of Statistics
Harvard University

https://dfci.zoom.us/j/95449270059?pwd=ZUZhbVFOcXNoemswc21GVWRCRmtZUT09

Streamlined empirical Bayes estimation for contextual bandits with applications in mobile health

Mobile health (mHealth) technologies are increasingly being employed to deliver interventions to users in their natural environments. With the advent of more sophisticated sensing devices (e.g., GPS) and phone-based ecological momentary assessment (EMA), it is becoming possible to deliver interventions at moments when they can most readily influence a person’s behavior. For example, for someone trying to increase physical activity, moments when the person can be active are critical decision points when a well-timed intervention could make a difference.

The promise of mHealth hinges on the ability to provide interventions at times when users need the support and are receptive to it. Thus, our goal is to learn the optimal time and intervention for a given user and context. A significant challenge to learning is that there are often only a few opportunities per day to provide treatment. Additionally, when there is limited time to engage users, a slow learning rate can pose problems, potentially raising the risk that users will abandon the intervention.

To prevent disengagement, a learning algorithm should learn quickly in spite of noisy measurements. To accelerate learning, information may be pooled across users and time in a dynamic manner, combining a contextual bandit algorithm with a Bayesian random effects model for the reward function. As information accumulates, however, tuning user- and time-specific hyperparameters becomes computationally intractable. In this talk, we focus on solving this computational bottleneck using streamlined empirical Bayes.
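As a rough illustration of the kind of algorithm sketched above, the snippet below shows a minimal Thompson-sampling contextual bandit with a conjugate Bayesian linear model for each action's reward. The two-action setup, feature dimension, prior, and noise variance are illustrative assumptions; this is not the streamlined empirical Bayes procedure or the pooled random effects model from the talk, which additionally share information across users and time.

```python
import numpy as np

# Minimal Thompson sampling for a two-action contextual bandit with a
# conjugate Bayesian linear model per action. Illustrative sketch only;
# all priors and dimensions below are assumptions, not the talk's model.

rng = np.random.default_rng(0)
d = 3            # context dimension (illustrative)
sigma2 = 1.0     # assumed reward noise variance
prior_var = 1.0  # prior variance on reward-model weights

# Posterior for each action's weight vector, initialized at the prior.
A = {a: np.eye(d) / prior_var for a in (0, 1)}   # posterior precision
b = {a: np.zeros(d) for a in (0, 1)}             # precision-weighted sums

def choose_action(context):
    """Sample weights from each action's posterior and pick the best."""
    sampled = []
    for a in (0, 1):
        cov = np.linalg.inv(A[a])   # posterior covariance
        mean = cov @ b[a]           # posterior mean
        w = rng.multivariate_normal(mean, cov)
        sampled.append(context @ w)
    return int(np.argmax(sampled))

def update(action, context, reward):
    """Conjugate Bayesian linear-regression update for the chosen action."""
    A[action] += np.outer(context, context) / sigma2
    b[action] += context * reward / sigma2

# Simulated decision points: draw a context, act, observe a noisy reward.
true_w = {0: np.array([0.2, -0.1, 0.0]), 1: np.array([0.5, 0.3, -0.2])}
for t in range(200):
    x = rng.normal(size=d)
    a = choose_action(x)
    r = x @ true_w[a] + rng.normal(scale=np.sqrt(sigma2))
    update(a, x, r)
```

In the mHealth setting described above, each loop iteration would correspond to a decision point, with the context built from sensor and EMA data and the reward reflecting a proximal outcome such as subsequent activity.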


TBD, 2020
1:00 PM

Peter Thall, Ph.D.
Anise J. Sorrell Professor
Department of Biostatistics
M.D. Anderson Cancer Center

A New Hybrid Phase I-II-III Clinical Trial Paradigm

Conventional evaluation of a new drug, A, is done in three phases. Phase I relies on toxicity to determine a “maximum tolerated dose” (MTD) of A; in phase II it is decided whether A at the MTD is “promising” in terms of response probability; and, if so, a large randomized phase III trial is conducted to compare A to a control treatment, C, based on survival time or progression-free survival time. This paradigm has many flaws. The first two phases may be combined by conducting a phase I-II trial, which chooses an optimal dose based on both efficacy and toxicity, with evaluation of A at the optimal phase I-II dose then done in phase III.

In this talk, I will describe a new paradigm, motivated by the possibility that the optimal phase I-II dose may not maximize mean survival time with A. A hybrid phase I-II-III design is presented that allows the optimal phase I-II dose of A to be re-optimized based on survival time data after the first stage of phase III. The hybrid design relies on a mixture model for the survival time distribution as a function of efficacy, toxicity, and dose. A simulation study is presented to evaluate the design’s properties, including comparison to the more conventional approach that does not re-optimize the dose of A in phase III.
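To make the abstract's key modeling idea concrete, the display below sketches one generic form such a mixture model might take, with binary efficacy and toxicity outcomes; the parameterization is an illustrative assumption on our part, not the specific model presented in the talk.

```latex
% Generic illustration (an assumption, not the talk's model): survival time T
% at dose d is a mixture over binary efficacy E and binary toxicity Y.
\[
  f(t \mid d) \;=\; \sum_{e \in \{0,1\}} \sum_{y \in \{0,1\}}
    \Pr(E = e,\, Y = y \mid d)\; f(t \mid E = e,\, Y = y,\, d),
\]
% so the marginal survival distribution at dose d mixes the conditional
% distributions over the possible efficacy-toxicity outcomes, and
% re-optimizing the dose amounts to choosing d to maximize, for example,
% the implied mean survival time E(T \mid d).
```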


Past Seminars

September 24, 2019
12:00 PM

Nikesh Kotecha, PhD
Vice President, Informatics
Parker Institute for Cancer Immunotherapy

Systems Immunology in IO: A view from the Parker Institute

The introduction of immunotherapies has revolutionized the treatment of cancer and ushered in a corresponding explosion of research into cancer, the immune system, and their interaction. This talk will introduce the Parker Institute for Cancer Immunotherapy and its mission to accelerate the development of breakthrough immune therapies, and will highlight the informatics opportunities and challenges presented in this space and our approaches to addressing them.


October 15, 2019
1:00 PM

Miguel Hernán
Kolokotrones Professor of Biostatistics and Epidemiology
Department of Epidemiology
Department of Biostatistics
Harvard T.H. Chan School of Public Health

Observational studies – How do we learn what works?

Randomized experiments are the preferred method to quantify causal effects. When randomized experiments are not feasible or available, causal effects are often estimated from non-experimental or observational databases. Causal inference from observational databases can then be viewed as an attempt to emulate a hypothetical randomized experiment, the target trial, that would quantify the causal effect of interest. This talk outlines a general algorithm for causal inference using observational databases that makes the target trial explicit. This causal framework channels counterfactual theory for comparing the effects of sustained treatment strategies, organizes analytic approaches, provides a structured process for the criticism of observational analyses, and helps avoid common methodologic pitfalls.



April 28, 2020
1:00 PM

Zoom webinar: https://dfci.zoom.us/j/99223287019?pwd=MlVmbDdheEFoSzlDK1JvZThiSlc3dz09
Password: 642740

Noah Simon, Ph.D.
Associate Professor
Department of Biostatistics
University of Washington

Reframing proportional-hazards modeling for large time-to-event datasets with applications to deep learning

To build inferential or predictive survival models, it is common to assume proportionality of hazards and fit a model by maximizing the partial likelihood. This has been combined with non-parametric and high-dimensional techniques, e.g., spline expansions and penalties, to flexibly build survival models.

New challenges require extension and modification of that approach. In a number of modern applications, there is interest in using complex features such as images to predict survival. In these cases, it is necessary to connect more modern backends to the partial likelihood (such as deep learning infrastructures based on, e.g., convolutional or recurrent neural networks). In such scenarios, large numbers of observations are needed to train the model. However, even when those observations are available, the structure of the partial likelihood makes optimization difficult (if not completely intractable).

In this talk, we show how the partial likelihood can be modified simply to deal easily with large amounts of data. In particular, with this modification, stochastic gradient-based methods, commonly applied in deep learning, are simple to employ. This simplicity holds even in the presence of left truncation, right censoring, and time-varying covariates. The approach can also be applied relatively simply to data stored in a distributed manner.
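As an illustration of why a batch-friendly formulation helps, the sketch below computes a negative log partial likelihood within a mini-batch, treating the sampled batch itself as the risk set so that stochastic gradient methods can be applied. This is a common approximation offered purely as an example under that assumption; the specific modification proposed in the talk may differ.

```python
import numpy as np

# Illustrative sketch: negative log partial likelihood computed on a
# mini-batch, with the batch treated as the risk set (ties handled crudely).
# This is an assumed example, not necessarily the talk's modification.

def batch_neg_log_partial_likelihood(eta, time, event):
    """eta:   (n,) risk scores (linear predictors) for the batch
       time:  (n,) observed times (event or censoring)
       event: (n,) 1 if an event was observed, 0 if censored"""
    order = np.argsort(-time)                 # descending time
    eta, time, event = eta[order], time[order], event[order]
    # Running log-sum-exp of risk scores: position i covers the within-batch
    # risk set {j : time_j >= time_i} because of the descending sort.
    log_risk = np.logaddexp.accumulate(eta)
    # Sum over events of (eta_i - log sum_{j in risk set} exp(eta_j)).
    return -np.sum((eta - log_risk)[event == 1])

# Toy usage on a random batch of 8 subjects.
rng = np.random.default_rng(1)
eta = rng.normal(size=8)
time = rng.exponential(size=8)
event = rng.integers(0, 2, size=8)
print(batch_neg_log_partial_likelihood(eta, time, event))
```

In practice the same loss would be written in an automatic-differentiation framework, so its gradient with respect to risk scores produced by, for example, a convolutional network is obtained directly and optimized with stochastic gradient descent.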


November 19, 2018

Susan Halabi, Ph.D.

Professor of Biostatistics and Bioinformatics, Department of Biostatistics & Bioinformatics, Duke University

Design of stratified biomarker trials with measurement error

 

December 18, 2018

Lee-Jen Wei, PhD

Professor of Biostatistics, Harvard T.H. Chan School of Public Health

Moving beyond the comfort zone in practicing translational statistics

 

January 22, 2018

Steven Piantadosi, MD, PhD

Associate Senior Biostatistician, Surgical Oncology, Brigham and Women's Hospital

Structuring Data in the Electronic Health Record: Implications for Clinical Trials and Other Research Studies

 

April 23, 2019

Zhenzhen Xu, PhD

Mathematical Statistician, Center for Biologics Evaluation and Research, US Food and Drug Administration

Designing cancer immunotherapy trials with random treatment time-lag effect