Continuing review timelines and the cost of a regulatory lapse
Quantifies the operational and regulatory consequences of a continuing review lapse and establishes the early-warning infrastructure that prevents it across a multi-study portfolio.
The regulatory emergency that is entirely preventable
An IRB approval expiration is not like a driver's license renewal -- inconvenient if you miss it, but easily remedied with a trip to the DMV. When IRB approval lapses on a clinical trial, the consequences are immediate and cascading. Enrollment stops. Scheduled participant visits cannot proceed. The sponsor receives notification that one of their sites has lost regulatory coverage. The clinical research associate documents the lapse in a monitoring report. And every day the lapse continues, the damage to the site's reputation compounds.
I have watched this scenario unfold at sites that were, in every other respect, competent and well-run. The problem was never that someone did not care about continuing review. The problem was that no system existed to ensure that the continuing review timeline was visible across the portfolio early enough to prevent the crisis. A coordinator tracking one study knows that study's expiration date. But when 15 studies have 15 different expiration dates spread across 12 months, and each requires a submission package that takes three to four weeks to prepare, the margin for error is measured in days -- and the consequences of missing that margin are measured in months of recovery.
This lesson does not teach you what continuing review is. You know that from your CRC training. This lesson teaches you what happens when it goes wrong and how to build the operational infrastructure that ensures it never does.
What you will learn
By the end of this lesson, you will be able to:
1. Analyze the regulatory, operational, and relationship consequences of a continuing review lapse per ICH E6(R3) Annex 1, Section 1.2.4 and U.S. regulations (21 CFR 56.109(f))
2. Design an early-warning system with 90-day, 60-day, and 30-day alert milestones that integrates with your submission tracking infrastructure
3. Evaluate the operational cost of a regulatory lapse across the portfolio, including enrollment disruption, sponsor confidence erosion, and long-term competitive damage
Anatomy of a lapse: what happens when IRB approval expires
Per ICH E6(R3) Annex 1, Section 1.2.4, the IRB/IEC should conduct continuing review of each ongoing trial at intervals appropriate to the degree of risk to participants. Under U.S. regulations (21 CFR 56.109(f)), that review must occur at least once per year. When that review does not occur before the current approval period expires, the regulatory status of the trial changes fundamentally. The site no longer has IRB authorization to conduct the research. This is not a technicality -- it is a regulatory state change with immediate operational consequences.
The cascade is predictable and, in my experience, remarkably consistent across institutions. It unfolds in stages, each compounding the one before it.
Stage 1: Immediate enrollment freeze. No new participants can be enrolled. Screening visits in progress must be paused. Participants who were scheduled for their first study visit may need to be rescheduled or, depending on the protocol's enrollment window, withdrawn. For a trial actively enrolling, every day of the freeze represents lost participants -- participants who may enroll at a competitor site instead.
Stage 2: Participant visit disruption. Already-enrolled participants present a more complicated question. Most institutions require that study visits be suspended during a lapse, except for interventions necessary to protect participant safety. This means that a participant halfway through a chemotherapy protocol may continue receiving treatment, but protocol-required assessments, blood draws, and imaging studies that exist solely for research purposes cannot proceed. The disruption to data integrity is significant. Protocol deviations accumulate. Windows for time-sensitive assessments are missed.
Stage 3: Sponsor notification and escalation. Per ICH E6(R3) Annex 1, Section 2.5.2, the investigator should comply with the protocol, GCP, and applicable regulatory requirements. A lapse in approval is a compliance failure that the sponsor must be informed of. The clinical research associate will document it. The sponsor's regulatory team will assess whether the lapse constitutes a significant deviation requiring reporting to the FDA. The site's name appears in internal reports -- and not in a favorable light.
Stage 4: Institutional and competitive consequences. The site's research office is notified. Depending on institutional policy, a formal corrective action plan may be required. Other investigators at the institution may face heightened scrutiny. And the sponsor's site selection team, when evaluating sites for the next trial, now has a documented regulatory lapse in your site's history. In a competitive enrollment environment where sponsors can choose from dozens of qualified sites, a lapse is the kind of mark that does not wash off easily.
A lapse is not merely a delayed approval
Some sites treat a lapse as if it were equivalent to a late submission that the IRB simply processes on a delayed timeline. It is not. A lapse creates a regulatory gap -- a period during which the site lacked authorization to conduct the trial. Even after IRB approval is restored, the gap itself must be documented, the participant impact assessed, and the corrective actions reported. The time required to manage the aftermath of a lapse typically exceeds the time that would have been required to prevent it by a factor of ten or more.
The portfolio multiplier: why one lapse becomes three
Here is the aspect of continuing review lapses that I find most underappreciated by sites that have not yet experienced one: a lapse does not only affect the study that lapsed. It destabilizes the entire portfolio.
Consider the mechanics. When a continuing review lapse is discovered, the regulatory coordinator must immediately shift from proactive portfolio management to crisis response. The lapsed study demands emergency attention: assembling the overdue package, negotiating expedited IRB review, notifying the sponsor, documenting the lapse, and managing the fallout. Every hour spent on crisis management is an hour not spent on the continuing reviews, amendments, and safety reports for the other 14 studies in the portfolio.
This is how one lapse becomes two. And two becomes three. The RC who is spending 40% of their week managing the consequences of a lapse on Study A does not have the bandwidth to notice that Study B's continuing review is now 45 days out with an incomplete package. By the time the Study A crisis is resolved, Study B is in the same position. The cascade is not hypothetical -- I have seen it happen at sites that were, on paper, adequately staffed and well-organized. The root cause is always the same: the absence of an early-warning system that would have flagged the approaching deadline with enough lead time to prevent the crisis.
Designing the early-warning system: 90, 60, and 30 days
The antidote to a regulatory lapse is not vigilance. Vigilance is a human capacity, and human capacities fail under sustained load. The antidote is infrastructure -- a system that generates alerts at defined intervals regardless of how busy you are, how many other deadlines are competing for your attention, or whether the coordinator who normally tracks that study is on vacation.
The early-warning system I am about to describe is not complex. But it must be implemented with the same rigor you would apply to a safety reporting system, because the consequences of its failure are comparable. It operates on three alert milestones, each triggering a specific set of actions.
Figure 1: The 90/60/30-day early-warning system -- alert milestones mapped against the continuing review preparation and IRB processing timeline
The 90-day alert: begin preparation
At 90 days before the current approval expiration date, the tracking system generates the first alert. This is the preparation trigger. At this milestone, the RC takes three actions.
First, verify the expiration date. This sounds redundant, but I have seen sites operate on incorrect expiration dates pulled from outdated approval letters or transposed incorrectly into the tracking system. The 90-day alert is the moment to confirm the date against the most recent IRB approval letter. If the date in your system does not match the letter, fix it now.
Second, identify the data streams that will feed the continuing review package for this study. Which coordinator maintains the enrollment log? Who compiles the adverse event summary? Is the deviation tracker current, or does it need reconciliation before data can be pulled? The goal is not to assemble the package now -- that comes at the 60-day milestone -- but to identify any data collection gaps that would delay assembly later. Per ICH E6(R3) Annex 1, Sections 2.12.1 and 2.12.2, the investigator should ensure the integrity of trial data and maintain adequate source records that are attributable, legible, contemporaneous, original, accurate, and complete. The 90-day check verifies that those records are being maintained continuously, not cobbled together at the last minute.
Third, check for dependencies. Does this study's continuing review require a PI signature? If so, is the PI available during the assembly and submission window, or do you need to plan around a conference, sabbatical, or clinical rotation? Does the IRB require any institutional sign-offs -- pharmacy committee confirmation, radiation safety committee renewal -- that have their own lead times?
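To make the first and third actions concrete, here is a minimal sketch in Python -- the function name, fields, and example dates are hypothetical, and the shape of a real tracker will differ -- that flags a tracker date that disagrees with the approval letter and lists the dependencies that need their own lead time.

```python
from datetime import date

def ninety_day_check(tracker_expiration: date,
                     approval_letter_expiration: date,
                     dependencies: list[str]) -> list[str]:
    """Return the issues to resolve at the 90-day milestone.

    Hypothetical helper -- the fields of a real tracker will differ.
    """
    issues = []
    # Action 1: the tracker date must match the most recent approval letter.
    if tracker_expiration != approval_letter_expiration:
        issues.append(
            f"Tracker shows {tracker_expiration}, approval letter shows "
            f"{approval_letter_expiration} -- correct the tracker now."
        )
    # Action 3: list dependencies (PI signature, committee sign-offs) that
    # have their own lead times and must be scheduled early.
    for dep in dependencies:
        issues.append(f"Dependency to schedule: {dep}")
    return issues

# Illustrative check for a study whose tracker date was transposed.
print(ninety_day_check(
    tracker_expiration=date(2025, 9, 12),
    approval_letter_expiration=date(2025, 9, 21),
    dependencies=["PI signature", "radiation safety committee renewal"],
))
```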
The 60-day alert: assembly checkpoint
At 60 days, the system generates the second alert. This is the assembly checkpoint. By this milestone, three conditions must be met.
The data streams identified at the 90-day mark are now locked. The coordinator has provided the enrollment summary current through the data-lock date. The adverse event tally is complete. The deviation log is reconciled. These data elements are in the RC's hands, not still sitting in the coordinator's pending tasks.
The IRB-specific template for this continuing review is populated with the study-level data. Common elements -- study title, protocol number, PI name, approval dates -- are filled. IRB-specific sections -- the format they want for the enrollment summary, the level of detail they expect in the deviation narrative -- are drafted.
And the PI review window is scheduled. Not "we will get the PI to look at it sometime in the next two weeks" -- but a specific date, confirmed with the PI or the PI's assistant, during which the PI will review and sign the package. If the PI is unavailable during the planned review window, the 60-day checkpoint is the last moment at which alternative arrangements can be made without creating timeline risk.
The 60-day checkpoint as portfolio management discipline
In practice, the 60-day checkpoint is where portfolio-level management earns its value. When you are managing 15 studies, you may have three or four studies simultaneously at the 60-day mark. The checkpoint forces you to confirm -- for each of them -- that assembly is on track, data is locked, and PI review is scheduled. Without this structured verification, you are relying on memory and ad hoc tracking to keep four parallel preparation streams on schedule. That works until it does not.
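Here is a minimal sketch of that structured verification, assuming one in-memory record per study at the 60-day mark -- the study names and fields are hypothetical. It reports which of the three checkpoint conditions are still open for each study.

```python
from dataclasses import dataclass

@dataclass
class SixtyDayCheckpoint:
    """Hypothetical record of the three 60-day conditions for one study."""
    study: str
    data_locked: bool          # enrollment summary, AE tally, deviation log in hand
    template_populated: bool   # IRB-specific form drafted
    pi_review_scheduled: bool  # a confirmed date, not an intention

def open_items(checkpoints: list[SixtyDayCheckpoint]) -> dict[str, list[str]]:
    """Map each study to the checkpoint conditions still unmet."""
    gaps = {}
    for cp in checkpoints:
        missing = []
        if not cp.data_locked:
            missing.append("data not locked")
        if not cp.template_populated:
            missing.append("template not populated")
        if not cp.pi_review_scheduled:
            missing.append("PI review not scheduled")
        if missing:
            gaps[cp.study] = missing
    return gaps

# Illustrative week in which three studies hit the 60-day mark at once.
print(open_items([
    SixtyDayCheckpoint("Study A", True, True, True),
    SixtyDayCheckpoint("Study B", True, False, True),
    SixtyDayCheckpoint("Study C", False, False, False),
]))
```

The point is not the tooling but the forcing function: every study at the checkpoint gets the same three questions, whether the week is calm or chaotic.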
The 30-day alert: submission deadline
At 30 days before expiration, the package must be submitted to the IRB. Not "nearly ready." Not "waiting for one signature." Submitted.
This milestone is non-negotiable because it accounts for the one variable you cannot control: IRB processing time. Most IRBs process continuing reviews in 15 to 25 business days. Some are faster. Some, particularly local institutional IRBs that meet on a monthly committee cycle, may take longer. But 30 calendar days provides the minimum safe buffer for most IRB types. If you are working with an IRB whose processing time routinely exceeds 20 business days, adjust your 30-day standard to 45 days and cascade the 60- and 90-day alerts accordingly.
What happens if you reach the 30-day mark and the package is not ready? This is an escalation event, not a scheduling adjustment. The RC must immediately assess what is preventing submission, escalate to whoever can resolve the block, and explore expedited options. Some IRBs offer expedited or emergency continuing review processing for packages submitted close to expiration. But these expedited pathways are favors, not entitlements. A site that uses them routinely signals a process problem that the IRB will eventually address through more uncomfortable channels.
Integrating early-warning into the tracking system
The early-warning system is not a separate tool. It is a feature of the submission tracking infrastructure you designed in Module 1, Lesson 3. If your tracking system captures each study's current IRB approval expiration date, then the early-warning system is a calculation: expiration date minus 90, 60, and 30 days, with each threshold generating an alert.
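As a minimal sketch of that calculation -- the study names, dates, and alert labels are illustrative, not a prescribed tool -- the tier for each study falls directly out of its expiration date. The thresholds are parameters, so the 30-day tier can be widened to 45 days for a slow IRB, as discussed above.

```python
from datetime import date

# Milestones in days before expiration; widen the last tier (e.g., to 45)
# for an IRB whose processing routinely exceeds 20 business days.
DEFAULT_THRESHOLDS = (90, 60, 30)

def alert_tier(expiration: date, today: date,
               thresholds: tuple[int, int, int] = DEFAULT_THRESHOLDS) -> str:
    """Classify a study by how close it is to its IRB expiration date."""
    days_left = (expiration - today).days
    prep, assembly, submit = thresholds
    if days_left < 0:
        return "LAPSED"
    if days_left <= submit:
        return "RED: must be submitted"
    if days_left <= assembly:
        return "AMBER: assembly checkpoint"
    if days_left <= prep:
        return "GREEN: begin preparation"
    return "no action yet"

# Illustrative portfolio with hypothetical studies and expiration dates.
today = date(2025, 3, 1)
portfolio = {
    "Study A": date(2025, 3, 20),   # 19 days out  -> RED
    "Study B": date(2025, 4, 25),   # 55 days out  -> AMBER
    "Study C": date(2025, 5, 20),   # 80 days out  -> GREEN
    "Study D": date(2025, 8, 1),    # 153 days out -> no action yet
}
for study, expiration in portfolio.items():
    print(study, "->", alert_tier(expiration, today))
```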
The implementation varies by tracking platform. A spreadsheet-based system uses conditional formatting to highlight studies approaching each threshold -- green at 90 days, amber at 60, red at 30. A CTMS module may offer automated email alerts at configured intervals. A shared calendar system uses recurring reminders set backward from the expiration date.
The specific technology matters less than the discipline. The system must generate alerts automatically, without requiring anyone to remember to check. It must be visible to the RC regardless of their current workload or focus. And it must be updated immediately when a continuing review is approved and a new expiration date is established -- because the cycle begins again the moment the current one ends.
Resetting the clock
One of the most common maintenance failures in early-warning systems is forgetting to update the expiration date after a continuing review is approved. The old date passes, the system stops alerting, and the new expiration date is never entered. Twelve months later, the cycle repeats -- this time as a genuine emergency. Build the date update into the post-approval workflow as a mandatory step, not an afterthought. The approval letter arrives, the new date goes into the tracker within 24 hours, and the next 90/60/30 cycle is automatically triggered.
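A minimal sketch of that post-approval step, assuming the tracker is nothing more than a mapping from study to expiration date -- a hypothetical structure; yours may be a spreadsheet row, a CTMS field, or a calendar entry:

```python
from datetime import date

def record_new_approval(tracker: dict[str, date],
                        study: str,
                        new_expiration: date) -> None:
    """Overwrite the expiration date the day the approval letter arrives.

    Hypothetical structure -- the real tracker may live in a spreadsheet
    or a CTMS; the discipline is the same.
    """
    tracker[study] = new_expiration  # the next 90/60/30 alerts derive from this date

tracker = {"Study A": date(2025, 3, 20)}
# The approval letter arrives with a new one-year approval period.
record_new_approval(tracker, "Study A", date(2026, 3, 19))
print(tracker)
```

Because the alert tiers are computed from whatever date the tracker holds, updating that one field is all it takes to restart the 90/60/30 cycle.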
Case Study
"22 days and counting"
Clinical Research · Intermediate · 10-15 minutes
Scenario
Marcus Williams, a clinical research coordinator at Riverside Medical Center in Columbus, Ohio, has been managing the BEACON-1 study for eight months. The site's previous regulatory coordinator left six weeks ago, and the replacement -- a new RC -- is still onboarding. Marcus has been handling continuing review tracking informally, using a personal calendar with reminders he set when he took over the study.
On a Tuesday morning, Marcus receives an email from the sponsor's CRA asking for confirmation that the BEACON-1 continuing review has been submitted. He checks the IRB approval letter and realizes, with a sinking feeling, that the current approval expires in 22 calendar days. He has not started assembling the package.
He calls Dr. Sarah Chen, the principal investigator, to discuss the timeline. Dr. Chen's assistant informs him that the PI left for a two-week international oncology conference on Saturday and has limited email access. The local IRB's standard processing time is 15 business days -- and they require PI signature on the continuing review application.
The challenge:
Marcus contacts the new RC for guidance. The RC must address both the immediate crisis and the systemic failure that created it.
Analysis
Immediate triage: Determine whether the 22-day window is workable. The IRB's 15 business days of processing consume roughly 21 calendar days, which leaves only about one business day of slack: the package must effectively be submitted this week, within the next one to two business days. This is extremely tight but potentially feasible if the RC and Marcus work together to assemble the package immediately (a quick sketch of this arithmetic follows the analysis below).
PI signature strategy: Contact Dr. Chen directly -- conference or no conference -- and arrange for electronic signature via DocuSign or equivalent. Most PIs, when they understand the regulatory consequences, will make time. If Dr. Chen is genuinely unreachable, determine whether the IRB accepts a designated sub-investigator signature with a PI co-signature to follow, and confirm this with the IRB before proceeding.
IRB communication: Contact the IRB proactively to explain the situation, confirm processing timeline for an expedited submission, and ask whether any accommodation is possible if the submission arrives at the end of this week rather than the standard 30 days prior to expiration.
Sponsor notification: Inform the CRA that the continuing review is being assembled on an expedited timeline. Transparency now is far better than a lapse notification later.
Systemic prevention: After the immediate crisis is managed, the RC builds the 90/60/30-day early-warning system for every study in the portfolio. Marcus's personal calendar reminders are replaced with institutional infrastructure. The RC audits every study's expiration date, identifies any other approaching deadlines, and ensures the tracking system captures all of them.
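To make the triage arithmetic above concrete, here is a rough sketch -- the dates are illustrative, the helper ignores holidays, and none of this is a prescribed tool -- of how much submission time remains once the IRB's 15 business days of processing are carved out of the 22 calendar days.

```python
from datetime import date, timedelta

def business_days(start: date, end: date) -> int:
    """Count weekdays after `start` up to and including `end`; holidays ignored."""
    count, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            count += 1
    return count

# Illustrative dates: the Tuesday Marcus opens the CRA's email, plus 22 calendar days.
today = date(2025, 3, 4)
expiration = today + timedelta(days=22)
irb_processing = 15  # the local IRB's standard, in business days

available = business_days(today, expiration)
print(f"Business days until expiration: {available}")                   # 16 with these dates
print(f"Business days left to submit:   {available - irb_processing}")  # 1
```

With these dates the sketch returns 16 business days until expiration and a single business day of submission slack -- which is why the analysis treats this as an escalation, not a scheduling adjustment.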
Check your understanding
Question 1 of 3
A study's IRB approval expires in 28 days. The continuing review package is assembled but lacks the PI signature. The PI is available but has not reviewed the package. The IRB's standard processing time is 18 business days. What is the RC's most appropriate immediate action?
Key takeaways
A continuing review lapse is not a paperwork delay -- it is a regulatory state change that immediately halts enrollment, disrupts participant visits, triggers sponsor escalation, and damages the site's competitive position. The consequences cascade across the portfolio because crisis management for the lapsed study consumes the bandwidth needed to prevent lapses on other studies.
The 90/60/30-day early-warning system is the operational infrastructure that prevents lapses. At 90 days, verify the expiration date, identify data collection needs, and check for dependencies. At 60 days, confirm that data is locked, the template is populated, and the PI review window is scheduled. At 30 days, the package must be submitted -- no exceptions. This system integrates with the submission tracking infrastructure from Module 1 and generates alerts automatically, without relying on individual memory.
The cost of building and maintaining this early-warning system is measured in hours per month. The cost of a single lapse is measured in months of recovery, lost enrollment, damaged relationships, and institutional corrective action. The math is not close.