Free Lesson Preview
Module 4: Lesson 1

Introduces portfolio thinking as the defining intellectual competency of the RC role — the mental model shift from serial task management to parallel portfolio management
There is a moment in every regulatory coordinator's career -- and I have watched it happen dozens of times across sites in Boston, Houston, and Durham -- when the old way of thinking stops working. Not gradually. Not with warning. It breaks on a Tuesday afternoon when three continuing reviews come due the same week a new study is launching, a sponsor audit is scheduled for Thursday, and the principal investigator is at a conference in Vienna and unreachable until Friday. The RC who has been managing each study as a self-contained unit, holding the details in working memory, processing obligations one at a time, suddenly discovers that the approach that worked beautifully for five studies produces chaos at 18.
This is not a failure of effort. It is a failure of mental model.
Module 3 mapped the relationships that define the RC's professional life -- the principal investigator, the clinical research coordinators, the sponsors and monitors, the IRBs and regulatory authorities. But relationships are necessary and insufficient on their own. You can have excellent stakeholder partnerships and still watch compliance failures cascade across your portfolio if you lack the intellectual framework to manage all of those relationships simultaneously, across all of those studies, without dropping anything that matters. Module 4 introduces that framework. And this first lesson addresses the foundational shift that everything else in this module builds upon: the move from study-level thinking to portfolio-level thinking.
Module 1, Lesson 2 established the distinction between regulatory task execution and regulatory systems management -- the difference between filing one continuing review and operating the system that ensures all continuing reviews are filed on time. That distinction was conceptual. This module makes it operational. Portfolio thinking is what systems management looks like in practice, and it requires a different set of cognitive habits than anything the CRC role demands.
By the end of this lesson, you will be able to:

- Explain why portfolio-scale regulatory management differs qualitatively from study-level management
- Identify the four challenges that emerge at scale: information overload, deadline clusters, resource conflicts, and working memory saturation
- Categorize studies in a portfolio by their regulatory demand profile
- Connect proportionate, risk-stratified portfolio management to the relevant ICH E6(R3) principles
Consider what it means, operationally, to manage the regulatory compliance for a single clinical study. You have one protocol, one set of amendments, one IRB of record, one sponsor, one monitoring team, one delegation log, one set of continuing review deadlines, one investigational product accountability chain. The information fits in your head. You can review the study's status in the time it takes to walk to the filing cabinet and back. When something changes -- a new amendment, a safety report, a deviation -- you know exactly what it affects because you hold the entire context.
Now multiply that by 20. Not the work -- the context. Twenty protocols with different visit schedules, inclusion criteria, and safety reporting requirements. Twenty sets of amendments in various stages of IRB review. Potentially four or five different IRBs, each with its own submission format, meeting schedule, and review timeline. Eight or ten different sponsors, each with different monitoring expectations, deviation reporting thresholds, and communication preferences. Twenty continuing review deadlines scattered across the calendar year, none of them aligned with one another. And every one of these obligations is the RC's responsibility.
The arithmetic is not the hard part. The hard part is that the cognitive demands of portfolio management are not linear. They are, in the language of systems theory, combinatorial. Each study interacts with every other study through shared resources -- your time, the PI's signature capacity, the IRB's meeting schedule, the coordinator pool's bandwidth. A new amendment on Study 7 does not merely add work to Study 7's queue. It competes for PI review time that Study 12's continuing review also needs. It requires IRB submission capacity that is already strained by Studies 3 and 15, which both have amendments pending. It demands coordinator attention that is currently absorbed by Study 19's monitoring visit preparation. The interactions multiply faster than the studies themselves.
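The combinatorial claim can be made concrete. In the simplest model -- an assumption chosen for illustration, in which any two studies can compete for at least one shared resource -- the number of potential cross-study interactions grows as n(n-1)/2, not as n:

```python
def pairwise_interactions(n_studies: int) -> int:
    """Potential cross-study interactions, assuming every pair of studies
    can compete for at least one shared resource (PI time, IRB capacity,
    coordinator bandwidth). Grows quadratically: n * (n - 1) / 2."""
    return n_studies * (n_studies - 1) // 2

# Quadrupling the study count multiplies the interactions sixteenfold-plus.
print(pairwise_interactions(5))   # -> 10
print(pairwise_interactions(20))  # -> 190
```

Five studies produce ten potential interactions, which a skilled RC can track informally. Twenty studies produce 190, which no one can.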
I have found that the challenges at portfolio scale cluster into four categories, and understanding them is the first step toward managing them.
Information overload. A single study generates a manageable volume of regulatory information: one set of IRB correspondence, one monitoring visit report series, one safety report stream, one amendment history. At portfolio scale, the RC receives all of these information streams simultaneously, from all studies, and must distinguish signal from noise across the aggregate. The amendment notification that arrives at 3:00 PM is not inherently more or less urgent than the safety report that arrived at 2:15 PM -- but they apply to different studies with different risk profiles and different deadline pressures, and the RC must make that assessment in real time. This is not a filing problem. It is a triage problem, and it requires a framework that single-study management never demanded.
Competing deadline clusters. Individual study deadlines are manageable in isolation. But IRB continuing review dates are set by initial approval dates, which means they are distributed quasi-randomly across the calendar. In a 20-study portfolio, statistical clustering is inevitable. You will have weeks where four continuing reviews overlap with two amendment submissions and a new study start-up. You will have other weeks where nothing is due. The variation is not a scheduling failure -- it is a mathematical certainty in any portfolio above about 12 studies. And the response cannot be "work harder during the clusters," because the quality of a continuing review package prepared under acute time pressure is measurably lower than one prepared with adequate lead time.
Cross-study resource conflicts. The RC does not operate in isolation. Every regulatory submission requires inputs from other people -- the PI's signature, the coordinator's enrollment figures, the pharmacy's drug accountability records. These shared resources are finite, and every study draws on the same pool. When Study 4 needs the PI's review of an amendment response and Study 11 needs the PI's signature on a continuing review, those two demands compete for the same hour of the same person's day. At five studies, these conflicts are infrequent and easily negotiated. At 20 studies, they are constant, and the RC who does not anticipate them will spend the majority of their time managing crises rather than preventing them.
Working memory saturation. The human brain holds approximately four to seven items in active working memory -- this is one of the most replicated findings in cognitive psychology, and it has direct operational implications. At five studies, a skilled RC can hold each study's current status, pending obligations, and upcoming deadlines in working memory simultaneously. At 20 studies, this is impossible. Not difficult -- impossible. The RC who attempts it will inevitably forget that Study 14's continuing review is due in three weeks, or that Study 8's amendment was submitted but not yet approved, or that Study 20's new coordinator has not yet completed delegation log training. The failures will be invisible until they become compliance findings.
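The deadline-clustering claim from the second challenge above can be checked with a short simulation. The model is a deliberate simplification -- it assumes continuing review dates fall uniformly at random across a 52-week year, which real approval dates only approximate -- but it shows that clustering emerges from the count alone, with no scheduling failure required:

```python
import random

def busiest_week(n_deadlines: int, weeks: int = 52,
                 years: int = 2000, seed: int = 0) -> float:
    """Average deadline count in the busiest week of the year, over many
    simulated years, assuming deadlines land uniformly at random."""
    rng = random.Random(seed)
    total = 0
    for _ in range(years):
        counts = [0] * weeks
        for _ in range(n_deadlines):
            counts[rng.randrange(weeks)] += 1
        total += max(counts)
    return total / years

# With 20 deadlines, the busiest week typically stacks 2-3 continuing
# reviews; with 5 deadlines, it usually holds just one.
```

Running `busiest_week(20)` versus `busiest_week(5)` makes the point numerically: the cluster size grows with portfolio size even though nothing about any individual study changed.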

Figure 1: The portfolio complexity gap -- actual complexity scales non-linearly with study count, diverging sharply above 10-12 concurrent studies due to cross-study interactions
If the scaling problem is the diagnosis, portfolio management is the treatment. And the treatment is not "more effort" or "better organization" in the conventional sense. It is a set of intellectual principles that restructure how the RC thinks about the work -- moving from "what does each study need?" to "how does the portfolio behave as a system, and where does it require my attention?"
Three principles, drawn from portfolio management theory and adapted to the regulatory environment, form the foundation. I want to be direct about their origin: these are not regulatory concepts. You will not find them in ICH E6(R3). But the E6(R3) framework -- particularly Principle 7's proportionate approaches (Sections 7.1 through 7.4) and Principle 6's identification of factors critical to quality (Section 6.2) -- creates both the intellectual space and the operational necessity for this kind of thinking. When the guideline instructs that the nature, size, and complexity of a trial should determine the approach to quality management, it is implicitly endorsing the idea that not every study in your portfolio demands the same level of regulatory attention. Portfolio thinking makes that endorsement explicit and operational.
Principle 1: Categorize by phase and complexity, not by order of arrival. The instinct at the study level is to process work in the order it arrives -- first in, first out. At portfolio scale, this is a recipe for misallocated attention. A Phase I first-in-human study with a complex safety reporting structure requires fundamentally different regulatory management than a Phase III study in its fourth year of enrollment with a stable protocol and well-established processes. The RC who treats both studies' amendments with equal urgency is not being thorough -- they are being inefficient in a way that degrades quality across the portfolio.
Principle 2: Allocate attention by risk, not by volume. Some studies generate more paperwork than others. A multi-site, multi-amendment oncology study with frequent safety reports will produce more IRB submissions, more monitoring visit follow-ups, and more regulatory correspondence than a stable behavioral study with minimal amendments. But volume is not risk. The stable behavioral study may actually carry higher compliance risk in a given quarter if its continuing review deadline is approaching and the coordinator assigned to it has been diverted to a higher-profile trial. Risk-stratified attention means the RC identifies where compliance failure is most likely and allocates disproportionate oversight there -- regardless of which study is generating the most paper.
Principle 3: Standardize what can be standardized, and reserve judgment for what cannot. Not every regulatory obligation requires bespoke handling. Continuing review packages have a standard structure. Amendment submission formats follow predictable patterns. IRB correspondence templates can be reused across studies with minor modifications. The RC who reinvents each submission from scratch wastes cognitive capacity that should be reserved for the tasks that genuinely require judgment: assessing whether a deviation warrants immediate IRB notification, determining whether a new safety signal changes a study's risk profile, deciding which studies in the portfolio need the PI's attention this week versus next week.
These portfolio principles are not imposed on ICH E6(R3) from outside. They are, I would argue, the natural operational consequence of the guideline's own framework.
Principle 7, Section 7.1, establishes that trial processes should be proportionate to the risks inherent in the trial and the importance of the information collected. While this language is directed at sponsors designing monitoring plans, the intellectual framework applies with equal force to the RC managing regulatory operations across a portfolio. If the guideline recognizes that different trials merit different levels of oversight based on their risk profile, then the RC who applies uniform regulatory intensity across all 20 studies is not following the spirit of E6(R3) -- they are contradicting it.
Section 6.2 introduces the concept of factors critical to quality -- those aspects of a trial that, if compromised, would most significantly affect participant safety, the reliability and interpretability of the trial results, or data integrity. At the portfolio level, the RC's job is to identify which studies contain the highest concentration of critical-to-quality regulatory obligations. A study with a vulnerable population, an investigational product with a narrow therapeutic window, or a protocol requiring complex informed consent for ongoing genetic sampling has more critical-to-quality regulatory touchpoints than a stable observational study with minimal intervention. The RC must see this differential and allocate accordingly.
Section 3.10 requires the sponsor to implement an appropriate system to manage quality throughout all stages of the trial process. For the RC, this means that the quality of regulatory operations should be highest where the stakes are highest -- not uniformly excellent across every study regardless of context. Uniformity sounds rigorous. In practice, it means the RC spreads attention too thin across low-risk studies and has too little bandwidth remaining for the high-risk ones that genuinely need close management.
The practical application of these principles begins with classification. The RC must be able to look at the portfolio -- all 20 or 25 or 30 studies -- and rapidly assess each study's regulatory demand profile. This is not an exercise in ranking studies from "easy" to "hard." It is a structured classification that identifies where the portfolio's compliance risks concentrate so the RC can allocate time, attention, and quality controls accordingly.
The framework below uses two dimensions. The first is study phase and status -- where the study sits in its lifecycle, which determines the type and volume of regulatory activity it generates. A study in start-up generates different regulatory demands than a study in active enrollment, which differs again from a study in close-out. The second dimension is regulatory complexity -- the number and difficulty of regulatory touchpoints the study presents, driven by factors like the investigational product's risk profile, the study population, the number of IRBs involved, the sponsor's monitoring intensity, and the frequency of protocol amendments.
| Category | Study phase/status | Regulatory complexity indicators | RC management approach |
|---|---|---|---|
| Category A: High demand | Start-up (pre-enrollment) or active enrollment with frequent amendments | Phase I or first-in-human; vulnerable population; complex consent; multiple IRBs; high-frequency safety reporting; investigational product with narrow therapeutic index | Judgment-driven: active PI engagement, proactive deadline management, heightened submission quality review, real-time safety signal monitoring |
| Category B: Moderate demand | Active enrollment with stable protocol or late-phase enrollment winding down | Phase II/III with established processes; single IRB; moderate amendment frequency; standard safety reporting; well-characterized investigational product | Hybrid: standardized workflows with periodic risk reassessment; flag for escalation if amendment frequency increases or safety signals emerge |
| Category C: Routine demand | Maintenance phase (enrollment complete, follow-up ongoing) | Phase III/IV with minimal amendments; established IRB relationship; low safety signal frequency; stable delegation and training status | Process-driven: template-based submissions, calendar-triggered workflows, checklist-verified compliance; minimum viable PI engagement for signatures |
| Category D: Transitional | Studies moving between phases (e.g., enrollment completing, new site activation, close-out, protocol amendment changing study risk profile) | Complexity is in flux; the study may shift from Category C to Category A if a major amendment changes the risk profile, or from Category A to Category B as start-up activities resolve | Reassessment-driven: review category assignment monthly or at each regulatory milestone; avoid assuming current classification is permanent |
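The table's decision logic can be expressed as a small classifier. The sketch below is illustrative only -- the status labels, complexity flags, and branch order are assumptions chosen for this example, not a prescribed rubric -- and it follows the worked example later in the lesson in treating close-out as transitional:

```python
from dataclasses import dataclass, field

# Complexity indicators drawn from the Category A column; names are illustrative.
HIGH_COMPLEXITY = {
    "first_in_human", "vulnerable_population", "complex_consent",
    "multiple_irbs", "high_freq_safety", "narrow_therapeutic_index",
}

@dataclass
class Study:
    name: str
    status: str                      # "start-up" | "enrolling" | "follow-up" | "close-out"
    amendment_rate: str = "minimal"  # "frequent" | "moderate" | "minimal"
    flags: set = field(default_factory=set)
    in_transition: bool = False      # e.g. a major amendment shifting the risk profile

def categorize(study: Study) -> str:
    """Assign a demand category from the framework's two dimensions:
    lifecycle status and regulatory-complexity indicators."""
    if study.in_transition or study.status == "close-out":
        return "D"  # transitional: reassess at each regulatory milestone
    if (study.status == "start-up"
            or study.amendment_rate == "frequent"
            or study.flags & HIGH_COMPLEXITY):
        return "A"  # judgment-driven management
    if study.status == "enrolling":
        return "B"  # standardized workflows, periodic risk reassessment
    return "C"      # process-driven: templates, calendars, checklists
```

Note the asymmetry the code makes explicit: a single high-complexity indicator is enough to pull a study into Category A, but Category C requires everything to be quiet at once.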
The value of this categorization is not theoretical. It is operational. When the RC sits down on Monday morning and asks, "Where should my attention go this week?" the answer should come from the portfolio's risk distribution, not from whatever email arrived most recently.
Consider a portfolio of 14 studies. Three are in start-up with site activation activities underway -- IRB initial submissions, regulatory package assembly, delegation log creation, protocol-specific training plans. Two are Phase I oncology studies with weekly safety reporting requirements and a history of frequent dose-modification amendments. Five are stable Phase III studies in their second or third year of enrollment, operating under well-established processes with a single central IRB. Two are in the follow-up phase with enrollment complete and no anticipated amendments. And two are in close-out, with final regulatory packages being assembled for archival.
Under the categorization framework, the three start-up studies and two Phase I oncology studies are Category A -- they require judgment-driven management and disproportionate RC attention. The five stable Phase III studies are Category B or C depending on their specific circumstances. The two follow-up studies are solidly Category C. The two close-out studies are Category D -- transitional, because close-out introduces its own regulatory complexity around document reconciliation and final IRB notifications.
But here is the insight that portfolio thinking produces and study-level thinking does not: the compliance risk in this portfolio is not distributed evenly. It concentrates in the five Category A studies, which represent roughly a third of the portfolio by count but likely generate 60% or more of the regulatory workload and contain nearly all of the high-stakes obligations. If the RC allocates time proportionally -- one-fourteenth of their attention to each study -- the Category A studies will be under-managed and the Category C studies will be over-managed. Both outcomes waste resources, but only one creates compliance risk.
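The allocation argument reduces to arithmetic. The weights below are illustrative assumptions -- nothing in the framework prescribes them, and the split of the five Phase III studies into Category B is also assumed -- but they show how sharply a risk-weighted division of the week diverges from an even one-study-one-share split:

```python
def allocate_attention(portfolio, hours=40.0, weights=None):
    """Split weekly RC hours across categories by risk weight rather than
    by study count. Default weights are illustrative assumptions."""
    weights = weights or {"A": 4.0, "B": 2.0, "C": 1.0, "D": 3.0}
    total = sum(weights[cat] * n for cat, n in portfolio.items())
    return {cat: round(hours * weights[cat] * n / total, 1)
            for cat, n in portfolio.items()}

# The 14-study example portfolio, assuming the five stable Phase III
# studies all land in Category B: 5 A, 5 B, 2 C, 2 D.
# Proportional-by-count would give each study ~2.9 hours; risk weighting
# instead concentrates over half the week on the five Category A studies.
print(allocate_attention({"A": 5, "B": 5, "C": 2, "D": 2}))
```

The specific numbers matter less than the shape: Category A receives a majority of the hours with a third of the studies, which is exactly the disproportion the text argues for.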
This lesson introduced the conceptual foundation. You understand why portfolio-scale management differs qualitatively from study-level management. You can identify the four challenges that emerge at scale -- information overload, deadline clusters, resource conflicts, and working memory saturation. You have a framework for categorizing studies by their regulatory demand profile. And you understand the E6(R3) principles that provide the intellectual warrant for proportionate portfolio management.
But a framework without operational tools is philosophy, not practice. The remaining lessons in this module build the operational layer. Lesson 2 addresses timeline architecture -- the concrete methods for mapping regulatory milestones across the portfolio so that deadline collisions become visible weeks in advance rather than on the day they arrive. Lesson 3 tackles resource allocation and bottleneck identification -- what to do when everything is due the same week and the PI has 45 minutes available. And Lesson 4 introduces risk stratification in operational depth -- how to apply the categorization framework from this lesson to real portfolio decisions with real consequences.
The shift from study-level thinking to portfolio-level thinking is the hardest intellectual transition in the move from CRC to RC. It requires you to let go of the reassuring habit of knowing every detail of every study and instead develop comfort with a different kind of control -- the control that comes from knowing which details matter most, where the risks concentrate, and how to allocate finite resources across competing demands. That is not a lesser form of mastery. It is, in my experience, the higher one.
Regulatory Coordinator
Full course · The Regulatory Coordinator: Role, Scope & Professional Identity
Free Lesson Preview
Module 1: Lesson 1

Introduces portfolio thinking as the defining intellectual competency of the RC role — the mental model shift from serial task management to parallel portfolio management
There is a moment in every regulatory coordinator's career -- and I have watched it happen dozens of times across sites in Boston, Houston, and Durham -- when the old way of thinking stops working. Not gradually. Not with warning. It breaks on a Tuesday afternoon when three continuing reviews come due the same week a new study is launching, a sponsor audit is scheduled for Thursday, and the principal investigator is at a conference in Vienna and unreachable until Friday. The RC who has been managing each study as a self-contained unit, holding the details in working memory, processing obligations one at a time, suddenly discovers that the approach that worked beautifully for five studies produces chaos at 18.
This is not a failure of effort. It is a failure of mental model.
Module 3 mapped the relationships that define the RC's professional life -- the principal investigator, the clinical research coordinators, the sponsors and monitors, the IRBs and regulatory authorities. But relationships are necessary and insufficient. You can have excellent stakeholder partnerships and still watch compliance failures cascade across your portfolio if you lack the intellectual framework to manage all of those relationships simultaneously, across all of those studies, without dropping anything that matters. Module 4 introduces that framework. And this first lesson addresses the foundational shift that everything else in this module builds upon: the move from study-level thinking to portfolio-level thinking.
In Module 1, Lesson 2 established the distinction between regulatory task execution and regulatory systems management -- the difference between filing one continuing review and operating the system that ensures all continuing reviews are filed on time. That distinction was conceptual. This module makes it operational. Portfolio thinking is what systems management looks like in practice, and it requires a different set of cognitive habits than anything the CRC role demands.
By the end of this lesson, you will be able to:
Consider what it means, operationally, to manage the regulatory compliance for a single clinical study. You have one protocol, one set of amendments, one IRB of record, one sponsor, one monitoring team, one delegation log, one set of continuing review deadlines, one investigational product accountability chain. The information fits in your head. You can review the study's status in the time it takes to walk to the filing cabinet and back. When something changes -- a new amendment, a safety report, a deviation -- you know exactly what it affects because you hold the entire context.
Now multiply that by 20. Not the work -- the context. Twenty protocols with different visit schedules, inclusion criteria, and safety reporting requirements. Twenty sets of amendments in various stages of IRB review. Potentially four or five different IRBs, each with its own submission format, meeting schedule, and review timeline. Eight or ten different sponsors, each with different monitoring expectations, deviation reporting thresholds, and communication preferences. Twenty continuing review deadlines scattered across the calendar year, none of them aligned with one another. And every one of these obligations is the RC's responsibility.
The arithmetic is not the hard part. The hard part is that the cognitive demands of portfolio management are not linear. They are, in the language of systems theory, combinatorial. Each study interacts with every other study through shared resources -- your time, the PI's signature capacity, the IRB's meeting schedule, the coordinator pool's bandwidth. A new amendment on Study 7 does not merely add work to Study 7's queue. It competes for PI review time that Study 12's continuing review also needs. It requires IRB submission capacity that is already strained by Studies 3 and 15, which both have amendments pending. It demands coordinator attention that is currently absorbed by Study 19's monitoring visit preparation. The interactions multiply faster than the studies themselves.
I have found that the challenges at portfolio scale cluster into four categories, and understanding them is the first step toward managing them.
Information overload. A single study generates a manageable volume of regulatory information: one set of IRB correspondence, one monitoring visit report series, one safety report stream, one amendment history. At portfolio scale, the RC receives all of these information streams simultaneously, from all studies, and must distinguish signal from noise across the aggregate. The amendment notification that arrives at 3:00 PM is not inherently more or less urgent than the safety report that arrived at 2:15 PM -- but they apply to different studies with different risk profiles and different deadline pressures, and the RC must make that assessment in real time. This is not a filing problem. It is a triage problem, and it requires a framework that single-study management never demanded.
Competing deadline clusters. Individual study deadlines are manageable in isolation. But IRB continuing review dates are set by initial approval dates, which means they are distributed quasi-randomly across the calendar. In a 20-study portfolio, statistical clustering is inevitable. You will have weeks where four continuing reviews overlap with two amendment submissions and a new study start-up. You will have other weeks where nothing is due. The variation is not a scheduling failure -- it is a mathematical certainty in any portfolio above about 12 studies. And the response cannot be "work harder during the clusters," because the quality of a continuing review package prepared under acute time pressure is measurably lower than one prepared with adequate lead time.
Cross-study resource conflicts. The RC does not operate in isolation. Every regulatory submission requires inputs from other people -- the PI's signature, the coordinator's enrollment figures, the pharmacy's drug accountability records. These shared resources are finite, and every study draws on the same pool. When Study 4 needs the PI's review of an amendment response and Study 11 needs the PI's signature on a continuing review, those two demands compete for the same hour of the same person's day. At five studies, these conflicts are infrequent and easily negotiated. At 20 studies, they are constant, and the RC who does not anticipate them will spend the majority of their time managing crises rather than preventing them.
Working memory saturation. The human brain holds approximately four to seven items in active working memory -- this is one of the most replicated findings in cognitive psychology, and it has direct operational implications. At five studies, a skilled RC can hold each study's current status, pending obligations, and upcoming deadlines in working memory simultaneously. At 20 studies, this is impossible. Not difficult -- impossible. The RC who attempts it will inevitably forget that Study 14's continuing review is due in three weeks, or that Study 8's amendment was submitted but not yet approved, or that Study 21's new coordinator has not yet completed delegation log training. The failures will be invisible until they become compliance findings.

Figure 1: The portfolio complexity gap -- actual complexity scales non-linearly with study count, diverging sharply above 10-12 concurrent studies due to cross-study interactions
If the scaling problem is the diagnosis, portfolio management is the treatment. And the treatment is not "more effort" or "better organization" in the conventional sense. It is a set of intellectual principles that restructure how the RC thinks about the work -- moving from "what does each study need?" to "how does the portfolio behave as a system, and where does it require my attention?"
Three principles, drawn from portfolio management theory and adapted to the regulatory environment, form the foundation. I want to be direct about their origin: these are not regulatory concepts. You will not find them in ICH E6(R3). But the E6(R3) framework -- particularly Principle 7's proportionate approaches (Sections 7.1 through 7.4) and Principle 6's identification of factors critical to quality (Section 6.2) -- creates both the intellectual space and the operational necessity for this kind of thinking. When the guideline instructs that the nature, size, and complexity of a trial should determine the approach to quality management, it is implicitly endorsing the idea that not every study in your portfolio demands the same level of regulatory attention. Portfolio thinking makes that endorsement explicit and operational.
Principle 1: Categorize by phase and complexity, not by order of arrival. The instinct at study-level is to process work in the order it arrives -- first in, first out. At portfolio scale, this is a recipe for misallocated attention. A Phase I first-in-human study with a complex safety reporting structure requires fundamentally different regulatory management than a Phase III study in its fourth year of enrollment with a stable protocol and well-established processes. The RC who treats both studies' amendments with equal urgency is not being thorough -- they are being inefficient in a way that degrades quality across the portfolio.
Principle 2: Allocate attention by risk, not by volume. Some studies generate more paperwork than others. A multi-site, multi-amendment oncology study with frequent safety reports will produce more IRB submissions, more monitoring visit follow-ups, and more regulatory correspondence than a stable behavioral study with minimal amendments. But volume is not risk. The stable behavioral study may actually carry higher compliance risk in a given quarter if its continuing review deadline is approaching and the coordinator assigned to it has been diverted to a higher-profile trial. Risk-stratified attention means the RC identifies where compliance failure is most likely and allocates disproportionate oversight there -- regardless of which study is generating the most paper.
Principle 3: Standardize what can be standardized, and reserve judgment for what cannot. Not every regulatory obligation requires bespoke handling. Continuing review packages have a standard structure. Amendment submission formats follow predictable patterns. IRB correspondence templates can be reused across studies with minor modifications. The RC who reinvents each submission from scratch wastes cognitive capacity that should be reserved for the tasks that genuinely require judgment: assessing whether a deviation warrants immediate IRB notification, determining whether a new safety signal changes a study's risk profile, deciding which studies in the portfolio need the PI's attention this week versus next week.
These portfolio principles are not imposed on ICH E6(R3) from outside. They are, I would argue, the natural operational consequence of the guideline's own framework.
Principle 7, Section 7.1, establishes that trial processes should be proportionate to the risks inherent in the trial and the importance of the information collected. While this language is directed at sponsors designing monitoring plans, the intellectual framework applies with equal force to the RC managing regulatory operations across a portfolio. If the guideline recognizes that different trials merit different levels of oversight based on their risk profile, then the RC who applies uniform regulatory intensity across all 20 studies is not following the spirit of E6(R3) -- they are contradicting it.
Section 6.2 introduces the concept of factors critical to quality -- those aspects of a trial that, if compromised, would most significantly affect participant safety, the reliability and interpretability of the trial results, or data integrity. At the portfolio level, the RC's job is to identify which studies contain the highest concentration of critical-to-quality regulatory obligations. A study with a vulnerable population, an investigational product with a narrow therapeutic window, or a protocol requiring complex informed consent for ongoing genetic sampling has more critical-to-quality regulatory touchpoints than a stable observational study with minimal intervention. The RC must see this differential and allocate accordingly.
Section 3.10 requires the sponsor to implement an appropriate system to manage quality throughout all stages of the trial process. For the RC, this means that the quality of regulatory operations should be highest where the stakes are highest -- not uniformly excellent across every study regardless of context. Uniformity sounds rigorous. In practice, it means the RC spreads attention too thin across low-risk studies and has too little bandwidth remaining for the high-risk ones that genuinely need close management.
The practical application of these principles begins with classification. The RC must be able to look at the portfolio -- all 20 or 25 or 30 studies -- and rapidly assess each study's regulatory demand profile. This is not an exercise in ranking studies from "easy" to "hard." It is a structured classification that identifies where the portfolio's compliance risks concentrate so the RC can allocate time, attention, and quality controls accordingly.
The framework below uses two dimensions. The first is study phase and status -- where the study sits in its lifecycle, which determines the type and volume of regulatory activity it generates. A study in start-up generates different regulatory demands than a study in active enrollment, which differs again from a study in close-out. The second dimension is regulatory complexity -- the number and difficulty of regulatory touchpoints the study presents, driven by factors like the investigational product's risk profile, the study population, the number of IRBs involved, the sponsor's monitoring intensity, and the frequency of protocol amendments.
| Category | Study phase/status | Regulatory complexity indicators | RC management approach |
|---|---|---|---|
| Category A: High demand | Start-up (pre-enrollment) or active enrollment with frequent amendments | Phase I or first-in-human; vulnerable population; complex consent; multiple IRBs; high-frequency safety reporting; investigational product with narrow therapeutic index | Judgment-driven: active PI engagement, proactive deadline management, heightened submission quality review, real-time safety signal monitoring |
| Category B: Moderate demand | Active enrollment with stable protocol or late-phase enrollment winding down | Phase II/III with established processes; single IRB; moderate amendment frequency; standard safety reporting; well-characterized investigational product | Hybrid: standardized workflows with periodic risk reassessment; flag for escalation if amendment frequency increases or safety signals emerge |
| Category C: Routine demand | Maintenance phase (enrollment complete, follow-up ongoing) or close-out | Phase III/IV with minimal amendments; established IRB relationship; low safety signal frequency; stable delegation and training status | Process-driven: template-based submissions, calendar-triggered workflows, checklist-verified compliance; minimum viable PI engagement for signatures |
| Category D: Transitional | Studies moving between phases (e.g., enrollment completing, new site activation, protocol amendment changing study risk profile) | Complexity is in flux; the study may shift from Category C to Category A if a major amendment changes the risk profile, or from Category A to Category B as start-up activities resolve | Reassessment-driven: review category assignment monthly or at each regulatory milestone; avoid assuming current classification is permanent |
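For RCs who track their portfolio in a spreadsheet or a lightweight script, the two-dimension classification above can be sketched as a simple rule-based function. This is an illustrative sketch only: the field names (`phase_status`, `high_risk_indicators`, `amendment_frequency`) and the thresholds are assumptions chosen for demonstration, not part of the framework itself.

```python
from dataclasses import dataclass

# Illustrative sketch of the two-dimension demand classification.
# Field names and thresholds are assumptions for demonstration only.

@dataclass
class Study:
    name: str
    phase_status: str          # "start-up", "enrolling", "follow-up", "close-out", "transition"
    high_risk_indicators: int  # count of indicators: first-in-human, vulnerable population, etc.
    amendment_frequency: str   # "high", "moderate", "low"

def classify(study: Study) -> str:
    """Assign a demand category (A-D) from phase/status and complexity."""
    if study.phase_status == "transition":
        return "D"  # reassessment-driven: review at each regulatory milestone
    if (study.phase_status == "start-up"
            or study.high_risk_indicators >= 2
            or study.amendment_frequency == "high"):
        return "A"  # judgment-driven management
    if study.phase_status == "enrolling":
        return "B"  # standardized workflows with periodic risk reassessment
    return "C"      # process-driven: templates, calendars, checklists

# Example: a Phase I oncology study with frequent dose-modification amendments
onc = Study("ONC-01", "enrolling", high_risk_indicators=3, amendment_frequency="high")
print(classify(onc))  # → A
```

The point of encoding the rules, even crudely, is the same as the table's: the category should come from explicit criteria that can be reassessed, not from the RC's impression of which studies feel demanding this week.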
The value of this categorization is not theoretical. It is operational. When the RC sits down on Monday morning and asks, "Where should my attention go this week?" the answer should come from the portfolio's risk distribution, not from whatever email arrived most recently.
Consider a portfolio of 14 studies. Three are in start-up with site activation activities underway -- IRB initial submissions, regulatory package assembly, delegation log creation, protocol-specific training plans. Two are Phase I oncology studies with weekly safety reporting requirements and a history of frequent dose-modification amendments. Five are stable Phase III studies in their second or third year of enrollment, operating under well-established processes with a single central IRB. Two are in the follow-up phase with enrollment complete and no anticipated amendments. And two are in close-out, with final regulatory packages being assembled for archival.
Under the categorization framework, the three start-up studies and two Phase I oncology studies are Category A -- they require judgment-driven management and disproportionate RC attention. The five stable Phase III studies are Category B or C depending on their specific circumstances. The two follow-up studies are solidly Category C. The two close-out studies, though close-out appears under Category C in the table, are better treated as Category D -- transitional -- because close-out introduces its own regulatory complexity around document reconciliation and final IRB notifications.
But here is the insight that portfolio thinking produces and study-level thinking does not: the compliance risk in this portfolio is not distributed evenly. It concentrates in the five Category A studies, which represent roughly a third of the portfolio by count but likely generate 60% or more of the regulatory workload and contain nearly all of the high-stakes obligations. If the RC allocates time proportionally -- one-fourteenth of their attention to each study -- the Category A studies will be under-managed and the Category C studies will be over-managed. Both outcomes waste resources, but only one creates compliance risk.
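The arithmetic behind this concentration claim is easy to make concrete. The per-category workload weights below are assumptions for illustration (they are not prescribed anywhere in the framework), chosen only to show how a risk-weighted view of the 14-study example diverges from equal per-study allocation.

```python
# Illustrative workload arithmetic for the 14-study example portfolio.
# The per-category weights are assumptions chosen for demonstration:
# a Category A study is taken to generate ~4x the weekly regulatory
# work of a Category C study, and Category B ~1.5x.
weights = {"A": 4.0, "B": 1.5, "C": 1.0}
portfolio = ["A"] * 5 + ["B"] * 5 + ["C"] * 4  # 5 high-demand, 5 moderate, 4 routine

total = sum(weights[c] for c in portfolio)
a_share = sum(weights[c] for c in portfolio if c == "A") / total
count_share = 5 / len(portfolio)

print(f"Category A workload share: {a_share:.0%}")   # ~63% of the work
print(f"Category A count share:    {count_share:.0%}")  # ~36% of the studies
```

Even under these rough assumptions, five studies carry well over half the workload while representing barely a third of the portfolio -- which is exactly why allocating one-fourteenth of attention to each study under-manages where the compliance risk actually lives.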
This lesson introduced the conceptual foundation. You understand why portfolio-scale management differs qualitatively from study-level management. You can identify the four challenges that emerge at scale -- information overload, deadline clusters, resource conflicts, and working memory saturation. You have a framework for categorizing studies by their regulatory demand profile. And you understand the E6(R3) principles that provide the intellectual warrant for proportionate portfolio management.
But a framework without operational tools is philosophy, not practice. The remaining lessons in this module build the operational layer. Lesson 2 addresses timeline architecture -- the concrete methods for mapping regulatory milestones across the portfolio so that deadline collisions become visible weeks in advance rather than on the day they arrive. Lesson 3 tackles resource allocation and bottleneck identification -- what to do when everything is due the same week and the PI has 45 minutes available. And Lesson 4 introduces risk stratification in operational depth -- how to apply the categorization framework from this lesson to real portfolio decisions with real consequences.
The shift from study-level thinking to portfolio-level thinking is the hardest intellectual transition in the move from CRC to RC. It requires you to let go of the reassuring habit of knowing every detail of every study and instead develop comfort with a different kind of control -- the control that comes from knowing which details matter most, where the risks concentrate, and how to allocate finite resources across competing demands. That is not a lesser form of mastery. It is, in my experience, the higher one.
Regulatory Coordinator
Full course · The Regulatory Coordinator: Role, Scope & Professional Identity