Pharmaceutical Statistics, volume 24, issue 2

WATCH: A Workflow to Assess Treatment Effect Heterogeneity in Drug Development for Clinical Trial Sponsors

Konstantinos Sechidis 1
Sophie Sun 2
Yao Chen 2
Jiarui Lu 3
Cong Zhang 4
Mark Baillie 1
David Ohlssen 2
Marc Vandemeulebroecke 5
Rob Hemmings 6
Stephen J. Ruberg 7
Björn Bornkamp 1
1 Advanced Methodology and Data Science, Novartis Pharma AG, Basel, Switzerland
2 Advanced Methodology and Data Science, Novartis Pharmaceuticals Corporation, East Hanover, New Jersey, USA
3 Department of Biostatistics, Vertex Pharmaceuticals, Boston, Massachusetts, USA
4 China Novartis Institutes for Bio-Medical Research Co., Shanghai, China
5 Development Analytics and Statistical Innovation, UCB Farchim SA, Bulle, Switzerland
6 Consilium Hemmings (UK) Ltd, Woking, UK
7 Analytix Thinking LLC, Indianapolis, Indiana, USA
Publication type: Journal Article
Publication date: 2024-12-26
Scimago: Q1
SJR: 1.074
CiteScore: 2.7
Impact factor: 1.3
ISSN: 1539-1604, 1539-1612
PubMed ID: 39726118
Abstract

This article proposes a Workflow for Assessing Treatment effeCt Heterogeneity (WATCH) in clinical drug development targeted at clinical trial sponsors. WATCH is designed to address the challenges of investigating treatment effect heterogeneity (TEH) in randomized clinical trials, where sample size and multiplicity limit the reliability of findings. The proposed workflow includes four steps: analysis planning, initial data analysis and analysis dataset creation, TEH exploration, and multidisciplinary assessment. The workflow offers a general overview of how treatment effects vary by baseline covariates in the observed data and guides the interpretation of the observed findings based on external evidence and the best scientific understanding. The workflow is exploratory and not inferential/confirmatory in nature but should be preplanned before database lock and analysis start. It is focused on providing a general overview rather than a single specific finding or subgroup with a differential effect.
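
As a rough illustration of the kind of descriptive overview the TEH exploration step aims to produce, the Python sketch below computes naive within-subgroup treatment effect estimates across cuts of each baseline covariate on simulated data. It is not the WATCH workflow itself; the dataset, covariates, and cutoffs are invented for the example.

```python
# Generic illustration (not the WATCH workflow itself) of a descriptive TEH
# overview: naive treatment-minus-control outcome differences within subgroups
# defined by each baseline covariate. Data, covariates, and cutoffs are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 600
df = pd.DataFrame({
    "trt": rng.binomial(1, 0.5, size=n),          # 1:1 randomized treatment
    "age": rng.normal(60, 10, size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "biomarker": rng.normal(size=n),
})
df["y"] = 1.0 * df.trt + 0.5 * df.trt * (df.biomarker > 0) + rng.normal(size=n)

def subgroup_effects(data, covariate, groups):
    """Difference in mean outcome (treated minus control) within each subgroup."""
    rows = []
    for label, mask in groups.items():
        sub = data[mask]
        effect = sub.loc[sub.trt == 1, "y"].mean() - sub.loc[sub.trt == 0, "y"].mean()
        rows.append({"covariate": covariate, "subgroup": label,
                     "n": int(mask.sum()), "effect": round(effect, 2)})
    return pd.DataFrame(rows)

overview = pd.concat([
    subgroup_effects(df, "age", {"<65": df.age < 65, ">=65": df.age >= 65}),
    subgroup_effects(df, "sex", {"F": df.sex == "F", "M": df.sex == "M"}),
    subgroup_effects(df, "biomarker", {"low": df.biomarker <= 0, "high": df.biomarker > 0}),
])
print(overview.to_string(index=False))
```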

References

Müller M.M., Reeve H.W., Cannings T.I., Samworth R.J. (2024-11-06)
Given a sample of covariate–response pairs, we consider the subgroup selection problem of identifying a subset of the covariate domain where the regression function exceeds a predetermined threshold. We introduce a computationally feasible approach for subgroup selection in the context of multivariate isotonic regression based on martingale tests and multiple testing procedures for logically structured hypotheses. Our proposed procedure satisfies a non-asymptotic, uniform Type I error rate guarantee with power that attains the minimax optimal rate up to poly-logarithmic factors. Extensions cover classification, isotonic quantile regression, and heterogeneous treatment effect settings. Numerical studies on both simulated and real data confirm the practical effectiveness of our proposal, which is implemented in the R package ISS.

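The sketch below illustrates the underlying subgroup selection problem in a deliberately simplified, univariate form: fit an isotonic regression and report the covariate region where the fitted mean exceeds a prespecified threshold. It is not the authors' martingale-based procedure (implemented in the R package ISS) and carries no Type I error guarantee; the simulated data, threshold, and evaluation grid are assumptions.

```python
# Simplified illustration of the subgroup selection problem (not the authors'
# martingale-test procedure or the ISS R package): fit a univariate isotonic
# regression and report where the fitted mean response exceeds a threshold.
# Data, threshold tau, and grid are assumptions; there is no error control here.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n, tau = 500, 0.5                                # sample size, response threshold
x = rng.uniform(0, 1, size=n)                    # single baseline covariate
y = 1 / (1 + np.exp(-6 * (x - 0.6))) + rng.normal(scale=0.3, size=n)  # increasing mean

iso = IsotonicRegression(increasing=True, out_of_bounds="clip").fit(x, y)
grid = np.linspace(0, 1, 201)
above = grid[iso.predict(grid) >= tau]
if above.size:
    print(f"Estimated subgroup: x >= {above.min():.2f}")
else:
    print("No region of the covariate exceeds the threshold")
```
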
Lipkovich I., Svensson D., Ratitch B., Dmitrienko A. Statistics in Medicine (2024-07-25)
In this paper, we review recent advances in statistical methods for the evaluation of the heterogeneity of treatment effects (HTE), including subgroup identification and estimation of individualized treatment regimens, from randomized clinical trials and observational studies. We identify several types of approaches using the features introduced in Lipkovich et al (Stat Med 2017;36:136-196) that distinguish the recommended principled methods from basic methods for HTE evaluation that typically rely on rules of thumb and general guidelines (the methods are often referred to as common practices). We discuss the advantages and disadvantages of various principled methods as well as common measures for evaluating their performance. We use simulated data and a case study based on a historical clinical trial to illustrate several new approaches to HTE evaluation.

Bornkamp B., Zaoli S., Azzarito M., Martin R., Müller C.P., Moloney C., Capestro G., Ohlssen D., Baillie M. Pharmaceutical Statistics (2024-02-07)
We present the motivation, experience, and learnings from a data challenge conducted at a large pharmaceutical corporation on the topic of subgroup identification. The data challenge aimed at exploring approaches to subgroup identification for future clinical trials. To mimic a realistic setting, participants had access to 4 Phase III clinical trials to derive a subgroup and predict its treatment effect on a future study not accessible to challenge participants. A total of 30 teams registered for the challenge with around 100 participants, primarily from the Biostatistics organization. We outline the motivation for running the challenge, the challenge rules, and logistics. Finally, we present the results of the challenge, the participant feedback as well as the learnings. We also present our view on the implications of the results on exploratory analyses related to treatment effect heterogeneity.

Bretz F., Greenhouse J.B. (2023-07-24)

Sun S., Sechidis K., Chen Y., Lu J., Ma C., Mirshani A., Ohlssen D., Vandemeulebroecke M., Bornkamp B. Biometrical Journal (2022-11-27)
The identification and estimation of heterogeneous treatment effects in biomedical clinical trials are challenging, because trials are typically planned to assess the treatment effect in the overall trial population. Nevertheless, the identification of how the treatment effect may vary across subgroups is of major importance for drug development. In this work, we review some existing simulation work and perform a simulation study to evaluate recent methods for identifying and estimating heterogeneous treatment effects using various metrics and scenarios relevant for drug development. Our focus is not only on a comparison of the methods in general, but on how well these methods perform in simulation scenarios that reflect real clinical trials. We provide the R package benchtm that can be used to simulate synthetic biomarker distributions based on real clinical trial data and to create interpretable scenarios to benchmark methods for identification and estimation of treatment effect heterogeneity.

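As a minimal sketch of the kind of benchmark scenario described above (and not the benchtm package itself), the following code simulates a trial in which the treatment effect depends on one biomarker and checks whether a plain linear interaction model flags it; all variable names and effect sizes are assumptions.

```python
# Illustrative sketch only (not the benchtm package): simulate a trial-like scenario
# where the treatment effect depends on one biomarker, then check whether a simple
# linear interaction model flags that biomarker. All names/parameters are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "x1": rng.normal(size=n),                 # predictive biomarker (modifies the effect)
    "x2": rng.normal(size=n),                 # prognostic only
    "x3": rng.binomial(1, 0.3, size=n),       # noise covariate
    "trt": rng.binomial(1, 0.5, size=n),      # 1:1 randomization
})
df["y"] = (0.8 * df.x2                         # prognostic main effect
           + df.trt * (0.3 + 0.7 * df.x1)      # treatment effect grows with x1
           + rng.normal(size=n))

fit = smf.ols("y ~ trt * (x1 + x2 + x3)", data=df).fit()
print(fit.params.filter(like="trt:"))          # interaction estimates, roughly (0.7, 0, 0)
print(fit.pvalues.filter(like="trt:"))
```
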
Riehl J., Fritsch A., Ickstadt K. (2022-11-14)

Baillie M., Moloney C., Mueller C.P., Dorn J., Branson J., Ohlssen D. (2022-05-16)

Baillie M., le Cessie S., Schmidt C.O., Lusa L., Huebner M. PLoS Computational Biology (2022-02-24)

Aas K., Jullum M., Løland A. Artificial Intelligence (2021-09-01)
Explaining complex or seemingly simple machine learning models is an important practical problem. We want to explain individual predictions from such models by learning simple, interpretable explanations. The Shapley value is a game-theoretic concept that can be used for this purpose. The Shapley value framework has a series of desirable theoretical properties, and can in principle handle any predictive model. Kernel SHAP is a computationally efficient approximation to Shapley values in higher dimensions. Like several other existing methods, this approach assumes that the features are independent. Since Shapley values currently suffer from inclusion of unrealistic data instances when features are correlated, the explanations may be very misleading. This is the case even if a simple linear model is used for predictions. In this paper, we extend the Kernel SHAP method to handle dependent features. We provide several examples of linear and non-linear models with various degrees of feature dependence, where our method gives more accurate approximations to the true Shapley values.

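To make the dependence issue concrete, the sketch below computes exact Shapley values for a small linear model with jointly Gaussian features, using the conditional expectation E[f(X) | X_S = x_S] as the value function, which is the quantity that independence-based Kernel SHAP approximates poorly when features are correlated. It is a brute-force illustration under an assumed linear model and covariance matrix, not the authors' extended Kernel SHAP method.

```python
# Minimal sketch (not the authors' extended Kernel SHAP method): exact Shapley
# values for a linear model with jointly Gaussian features, using the conditional
# expectation E[f(X) | X_S = x_S] as the value function. The model coefficients,
# covariance matrix, and instance x are assumptions for illustration.
import numpy as np
from itertools import combinations
from math import factorial

beta = np.array([1.0, -2.0, 0.5])                 # linear model f(x) = beta @ x
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.8, 0.0],
                  [0.8, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])               # strong dependence between x0 and x1
x = np.array([1.0, 1.0, -1.0])                    # instance to explain
p = len(beta)

def value(S):
    """v(S) = E[beta @ X | X_S = x_S] for Gaussian X."""
    S = list(S)
    if not S:
        return beta @ mu
    C = [j for j in range(p) if j not in S]       # features to integrate out
    cond = mu.copy()
    cond[S] = x[S]
    if C:
        cond[C] = mu[C] + Sigma[np.ix_(C, S)] @ np.linalg.solve(Sigma[np.ix_(S, S)],
                                                                x[S] - mu[S])
    return beta @ cond

phi = np.zeros(p)
for i in range(p):
    others = [j for j in range(p) if j != i]
    for size in range(p):
        for S in combinations(others, size):
            w = factorial(size) * factorial(p - size - 1) / factorial(p)
            phi[i] += w * (value(S + (i,)) - value(S))

print("Shapley values:", phi.round(3))
print("Efficiency check:", round(phi.sum() + beta @ mu, 3), "vs f(x) =", round(beta @ x, 3))
```
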
Sechidis K., Kormaksson M., Ohlssen D. Statistics in Medicine (2021-07-30)
One of the key challenges of personalized medicine is to identify which patients will respond positively to a given treatment. The area of subgroup identification focuses on this challenge, that is, identifying groups of patients that experience desirable characteristics, such as an enhanced treatment effect. A crucial first step towards the subgroup identification is to identify the baseline variables (eg, biomarkers) that influence the treatment effect, which are known as predictive variables. Many subgroup discovery algorithms return importance scores that capture the variables' predictive strength. However, a major limitation of these scores is that they do not answer the core question: "Which variables are actually predictive?" With our work we answer this question by using the knockoff framework, which is a general framework for controlling the false discovery rate when performing prognostic variable selection. In contrast, our work is the first that uses knockoffs for predictive variable selection. We introduce two novel knockoff filters: one parametric, building on variable importance scores derived from a penalized linear regression model, and one non-parametric, building on causal forest variable importance scores. We conduct extensive simulations to validate performance of the proposed methodology and we also apply the proposed methods to data from a randomized clinical trial.

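The sketch below is a simplified stand-in for the idea of knockoff-based predictive variable selection: generate equicorrelated second-order Gaussian knockoffs, fit a lasso that includes treatment-by-covariate interactions for both original and knockoff features, and apply the knockoff+ threshold to the interaction coefficients. It is not the parametric or causal-forest filter proposed in the paper; the data-generating model, importance statistic, and FDR level are assumptions.

```python
# Illustrative sketch only: a simplified, equicorrelated second-order Gaussian
# ("model-X") knockoff filter applied to treatment-by-covariate interaction terms.
# This is NOT the parametric or causal-forest filter of the paper; the data,
# lasso statistic, and FDR level q are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p, q = 800, 10, 0.3                        # samples, covariates, target FDR

# Correlated Gaussian covariates, 1:1 randomized treatment, 4 predictive biomarkers
Sigma = 0.4 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
T = rng.binomial(1, 0.5, size=n)
y = 0.5 * X[:, 4] + T * (X[:, 0] + X[:, 1] + 0.8 * X[:, 2] + 0.8 * X[:, 3]) + rng.normal(size=n)

# Equicorrelated Gaussian knockoffs: X_ko | X ~ N(X - X S^-1 D, 2D - D S^-1 D), D = s*I
s = min(1.0, 2 * np.linalg.eigvalsh(Sigma).min()) * 0.999
D = s * np.eye(p)
Sinv = np.linalg.inv(Sigma)
cond_cov = 2 * D - D @ Sinv @ D
evals, evecs = np.linalg.eigh(cond_cov)
root = evecs @ np.diag(np.sqrt(np.clip(evals, 0, None)))
X_ko = X - X @ Sinv @ D + rng.normal(size=(n, p)) @ root.T

# Lasso on main effects plus treatment interactions for originals and knockoffs
Tc = (T - T.mean())[:, None]
Z = np.hstack([X, X_ko, X * Tc, X_ko * Tc])
coef = LassoCV(cv=5).fit(Z, y).coef_

# Knockoff statistics on the interaction block, then the knockoff+ threshold
W = np.abs(coef[2 * p:3 * p]) - np.abs(coef[3 * p:4 * p])
ts = np.sort(np.abs(W[W != 0]))
tau = next((t for t in ts if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q), np.inf)
print("Selected predictive covariates (may be empty):", np.where(W >= tau)[0])
```
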
Amatya A.K., Fiero M.H., Bloomquist E.W., Sinha A.K., Lemery S.J., Singh H., Ibrahim A., Donoghue M., Fashoyin-Aje L.A., de Claro R.A., Gormley N.J., Amiri-Kordestani L., Sridhara R., Theoret M.R., Kluetz P.G., et al. Clinical Cancer Research (2021-06-11)
Subgroup analyses are assessments of treatment effects based on certain patient characteristics out of the total study population and are important for interpretation of pivotal oncology trials. However, appropriate use of subgroup analyses results for regulatory decision-making and product labeling is challenging. Typically, drugs approved by the FDA are indicated for use in the total patient population studied; however, there are examples of restriction to a subgroup of patients despite positive study results in the entire study population and also extension of an indication to the entire study population despite positive results appearing primarily in one or more subgroups. In this article, we summarize key issues related to subgroup analyses in the benefit–risk assessment of cancer drugs and provide case examples to illustrate approaches that the FDA Oncology Center of Excellence has taken when considering the appropriate patient population for cancer drug approval. In general, if a subgroup is of interest, the subgroup analysis should be hypothesis-driven and have adequate sample size to demonstrate evidence of a treatment effect. In addition to statistical efficacy considerations, the decision on what subgroups to include in labeling relies on the pathophysiology of the disease, mechanistic justification, safety data, and external information available. The oncology drug review takes the totality of the data into consideration during the decision-making process to ensure the indication granted and product labeling appropriately reflect the scientific evidence to support the patient population for whom the drug is safe and effective.

Ruberg S.J. Pharmaceutical Statistics (2021-03-03)
Heterogeneity is an enormously complex problem because there are so many dimensions and variables that can be considered when assessing which ones may influence an efficacy or safety outcome for an individual patient. This is difficult in randomized controlled trials and even more so in observational settings. An alternative approach is presented in which the individual patient becomes the subgroup, and similar patients are identified in the clinical trial database or electronic medical record that can be used to predict how that individual patient may respond to treatment.

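A hedged sketch of the "individual patient becomes the subgroup" idea: standardize baseline covariates, find the k trial patients most similar to a new patient, and contrast outcomes between treated and control neighbors. This is a generic nearest-neighbor illustration, not Ruberg's procedure; the choice of k, the covariates, and the distance metric are assumptions.

```python
# Hedged sketch of the "individual patient as the subgroup" idea (not Ruberg's
# actual procedure): find the k most similar trial patients on standardized
# baseline covariates and contrast outcomes between treated and control neighbors.
# k, the covariates, and the distance metric are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n, k = 500, 100
X = rng.normal(size=(n, 3))                          # baseline covariates of trial patients
trt = rng.binomial(1, 0.5, size=n)                   # randomized treatment indicator
y = 0.5 * trt + trt * X[:, 0] + rng.normal(size=n)   # effect grows with covariate 0

scaler = StandardScaler().fit(X)
nn = NearestNeighbors(n_neighbors=k).fit(scaler.transform(X))

new_patient = np.array([[1.5, 0.0, -0.5]])           # covariate profile of the patient of interest
_, idx = nn.kneighbors(scaler.transform(new_patient))
neighbors = idx[0]
effect = (y[neighbors][trt[neighbors] == 1].mean()
          - y[neighbors][trt[neighbors] == 0].mean())
print(f"Treatment effect estimate among the {k} most similar patients: {effect:.2f}")
```
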
Schandelmaier S., Briel M., Varadhan R., Schmid C.H., Devasenapathy N., Hayward R.A., Gagnier J., Borenstein M., van der Heijden G.J., Dahabreh I.J., Sun X., Sauerbrei W., Walsh M., Ioannidis J.P., Thabane L., et al. CMAJ (2020-08-09)
BACKGROUND: Most randomized controlled trials (RCTs) and meta-analyses of RCTs examine effect modification (also called a subgroup effect or interaction), in which the effect of an intervention varies by another variable (e.g., age or disease severity). Assessing the credibility of an apparent effect modification presents challenges; therefore, we developed the Instrument for assessing the Credibility of Effect Modification Analyses (ICEMAN). METHODS: To develop ICEMAN, we established a detailed concept; identified candidate credibility considerations in a systematic survey of the literature; together with experts, performed a consensus study to identify key considerations and develop them into instrument items; and refined the instrument based on feedback from trial investigators, systematic review authors and journal editors, who applied drafts of ICEMAN to published claims of effect modification. RESULTS: The final instrument consists of a set of preliminary considerations, core questions (5 for RCTs, 8 for meta-analyses) with 4 response options, 1 optional item for additional considerations and a rating of credibility on a visual analogue scale ranging from very low to high. An accompanying manual provides rationales, detailed instructions and examples from the literature. Seventeen potential users tested ICEMAN; their suggestions improved the user-friendliness of the instrument. INTERPRETATION: The Instrument for assessing the Credibility of Effect Modification Analyses offers explicit guidance for investigators, systematic reviewers, journal editors and others considering making a claim of effect modification or interpreting a claim made by others.

Gelman A., Hill J., Vehtari A. (2020-07-23)
Most textbooks on regression focus on theory and the simplest of examples. Real statistical problems, however, are complex and subtle. This is not a book about the theory of regression. It is about using regression to solve real problems of comparison, estimation, prediction, and causal inference. Unlike other books, it focuses on practical issues such as sample size and missing data and a wide range of goals and techniques. It jumps right in to methods and computer code you can use immediately. Real examples, real stories from the authors' experience demonstrate what regression can do and its limitations, with practical advice for understanding assumptions and implementing methods for experiments and observational studies. They make a smooth transition to logistic regression and GLM. The emphasis is on computation in R and Stan rather than derivations, with code available online. Graphics and presentation aid understanding of the models and model fitting.
