Interim analysis
ICH E9(R1) Section 4.5
A planned statistical analysis conducted before all participants have completed the study, typically to evaluate accumulating data for evidence of efficacy, futility, or safety concerns that might warrant early termination of the trial.
Interim analyses provide opportunities to evaluate accumulating evidence during a trial, enabling ethical decisions about whether to continue, modify, or terminate the study based on observed data. These analyses serve multiple purposes: detecting overwhelming efficacy that would make continued randomization to placebo unethical, identifying futility when the treatment is unlikely to demonstrate benefit even with complete enrollment, and monitoring safety signals that might require study termination or modification. The timing, number, and methodology of interim analyses must be pre-specified to maintain trial integrity.
The statistical challenges of interim analyses arise from repeated testing of accumulating data. Each look at the data provides an opportunity to declare success, and without adjustment, the probability of a false-positive finding increases with each interim analysis. Statistical methods such as group sequential designs address this multiplicity by adjusting the significance threshold at each analysis to maintain the overall Type I error rate at the desired level. Alpha-spending functions, such as the Lan-DeMets approximations to the O'Brien-Fleming and Pocock boundaries, define how the total alpha is allocated across the planned analyses.
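To make the allocation concrete, the sketch below computes the cumulative alpha spent at each look under both spending functions. It is a minimal illustration in Python; the Lan-DeMets functional forms are standard, but the two-sided alpha of 0.05 and the four equally spaced looks are assumptions chosen for the example.

```python
# A minimal sketch of alpha allocation across looks, assuming a two-sided
# alpha of 0.05, four equally spaced looks, and the Lan-DeMets spending
# functions that approximate O'Brien-Fleming and Pocock boundaries.
from math import e, log, sqrt
from scipy.stats import norm

ALPHA = 0.05
Z = norm.ppf(1 - ALPHA / 2)  # two-sided critical value, about 1.96

def obf_spent(t: float) -> float:
    """O'Brien-Fleming-like spending: almost no alpha is spent early."""
    return 2.0 * (1.0 - norm.cdf(Z / sqrt(t)))

def pocock_spent(t: float) -> float:
    """Pocock-like spending: alpha is spent more evenly across looks."""
    return ALPHA * log(1.0 + (e - 1.0) * t)

fractions = [0.25, 0.50, 0.75, 1.00]  # information fraction at each look
for name, spend in [("O'Brien-Fleming", obf_spent), ("Pocock", pocock_spent)]:
    cumulative = [spend(t) for t in fractions]
    increments = [cumulative[0]] + [
        later - earlier for earlier, later in zip(cumulative, cumulative[1:])
    ]
    print(name)
    for t, cum, inc in zip(fractions, cumulative, increments):
        print(f"  t = {t:.2f}: cumulative alpha = {cum:.5f}, this look = {inc:.5f}")
```

Both functions spend the full 0.05 by t = 1, but the O'Brien-Fleming shape reserves nearly all of it for the final analysis, which is why its early boundaries are so stringent. Converting the alpha spent at each look into actual critical values requires recursive numerical integration over the joint distribution of the sequential test statistics, a task handled by dedicated packages such as gsDesign or rpact in R.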
Data Safety Monitoring Boards (DSMBs) or Data Monitoring Committees (DMCs) typically oversee interim analyses to preserve trial integrity and blinding. These independent committees receive unblinded results and assess whether predefined stopping boundaries have been crossed or whether other findings warrant study modification or termination. The sponsor and investigators remain blinded to interim results unless the DSMB recommends unblinding. This separation ensures that trial conduct is not influenced by accumulating results while enabling appropriate response to emerging evidence.
Efficacy stopping
"At the pre-specified interim analysis conducted when 60% of events had occurred, the observed treatment effect crossed the O'Brien-Fleming efficacy boundary, leading the DSMB to recommend early termination due to overwhelming benefit."
Futility assessment
"The interim analysis demonstrated conditional power below 10%, indicating that even with complete enrollment and follow-up, the trial had minimal probability of achieving statistical significance, leading to recommendation for early termination due to futility."
Confidence interval
A range of values calculated from study data by a procedure that, across repeated samples, contains the true treatment effect with a specified frequency (the confidence level, typically 95%), conveying both the estimated effect size and the precision of that estimate.
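As an illustration of how such an interval is computed, here is a minimal sketch under a normal approximation; the point estimate and standard error are hypothetical numbers chosen for the example.

```python
# A minimal sketch of a two-sided 95% confidence interval for a treatment
# effect under a normal approximation; the estimate and standard error are
# hypothetical illustration values.
from scipy.stats import norm

estimate = 2.4   # hypothetical treatment-effect estimate
se = 1.1         # hypothetical standard error of the estimate
level = 0.95

z = norm.ppf(1 - (1 - level) / 2)  # about 1.96 for 95%
lower, upper = estimate - z * se, estimate + z * se
print(f"{level:.0%} CI: ({lower:.2f}, {upper:.2f})")  # about (0.24, 4.56)
```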
Intention-to-treat analysis
A statistical analysis strategy that includes all randomized participants in the groups to which they were originally assigned, regardless of whether they completed the study treatment or adhered to the protocol.
P-value
The probability of obtaining results at least as extreme as those observed in the study, assuming that the null hypothesis of no treatment effect is true.
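For a test statistic that is approximately standard normal under the null hypothesis, the two-sided p-value follows directly from the definition; the sketch below uses an illustrative z value.

```python
# A minimal sketch of a two-sided p-value from a z-statistic under a normal
# null distribution, p = 2 * (1 - Phi(|z|)); the z value is illustrative.
from scipy.stats import norm

z_observed = 2.1
p_value = 2 * (1 - norm.cdf(abs(z_observed)))
print(f"p = {p_value:.4f}")  # about 0.036
```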
Per-protocol analysis
A statistical analysis that includes only participants who completed the study according to protocol requirements, with no major protocol violations, adequate treatment exposure, and complete outcome assessments.
Primary vs. secondary endpoints
The primary endpoint is the main outcome measure used to evaluate whether the treatment hypothesis is supported and forms the basis for regulatory approval decisions, while secondary endpoints provide supportive evidence and characterize additional treatment effects.