Free Lesson Preview
Module 1: Lesson 1

Defines the operational shift from per-study submission execution to portfolio-level pipeline management, identifying five pipeline-only capabilities and the failure modes that emerge without them.
Picture a site running 15 active studies. A coordinator managing one of those studies -- a Phase III oncology trial with a central IRB -- spends two hours preparing a continuing review package, submits it through the portal, and moves on. The submission is complete. The work is done.
Now picture the person responsible for ensuring that all 15 continuing reviews, across three different IRBs, are submitted correctly and on time over the next 90 days. That person is not doing the same work faster. They are doing fundamentally different work. They are managing a pipeline.
This distinction -- between executing a single submission and managing a submission pipeline -- is the defining operational shift of the regulatory coordinator role. Course 1 introduced the difference between task-level execution and systems-level thinking. This lesson makes that difference concrete in the domain where it matters most: the flow of regulatory submissions through your site.
By the end of this lesson, you will be able to:
- Distinguish per-study submission execution from portfolio-level pipeline management
- Name the five operational capabilities that exist only at the pipeline level
- Recognize the four failure modes that emerge when submissions are managed study-by-study
- Describe the four-component architecture of a submission pipeline: inputs, processing, outputs, and feedback
A clinical research coordinator preparing an IRB submission needs to know the specific requirements for that study's IRB, assemble the correct documents, obtain the necessary signatures, and submit the package by the deadline. This is skilled work that demands attention to detail, familiarity with the protocol, and competence with the IRB's portal or forms. It is not, however, systems work.
The coordinator does not need to know that four other studies at the site have continuing reviews due within the same two-week window. The coordinator does not need to reconcile the fact that the principal investigator whose signature is needed for this submission also has signatures pending for two other studies with earlier deadlines. The coordinator does not need to evaluate whether the IRB that reviews this study processes amendments faster than the IRB reviewing the oncology trial down the hall -- or whether submitting this package on Tuesday rather than Friday would avoid a processing bottleneck that delays three other pending submissions.
These are pipeline-level concerns. They exist only when someone is looking across all studies simultaneously, and they require operational capabilities that no single-study workflow produces.
The difference between per-study execution and pipeline management is not merely one of volume -- doing more of the same thing. It is a difference in kind. Managing a pipeline creates operational capabilities that are structurally impossible when submissions are handled study-by-study, no matter how competent the individual coordinators are.
I find it useful to name these capabilities explicitly, because they define what the RC role adds to the site's regulatory operations that no collection of CRC-level work can replicate.
| Capability | What it means | Why it requires portfolio visibility |
|---|---|---|
| Cross-study deadline awareness | The ability to see every upcoming submission deadline across all active studies in a single view, identifying clusters, conflicts, and resource bottlenecks before they become crises | No individual study file contains information about other studies' deadlines. Only a portfolio-level view reveals that four continuing reviews, two amendments, and an initial submission all converge in the same week. |
| Resource conflict resolution | The ability to sequence submissions based on aggregate demand -- prioritizing the continuing review that prevents a regulatory lapse over the amendment that can wait 10 days without consequence | A coordinator working on Study A cannot know that the PI signature they need is also needed for Studies B and C with earlier deadlines. Only the RC can sequence signature requests across the portfolio. |
| IRB-specific pattern recognition | The ability to detect submission processing patterns across studies -- which IRB is consistently slow on amendments, which portal has recurring technical issues, which committee meeting schedule creates seasonal bottlenecks | Pattern recognition requires repeated observation across multiple submissions to the same IRB. A coordinator submitting to that IRB twice a year sees isolated events. The RC submitting 30 times a year sees patterns. |
| Standardization and quality normalization | The ability to establish consistent submission quality standards across all studies, ensuring that the amendment package for Study A meets the same documentation standard as the continuing review for Study B | Individual coordinators develop individual habits. Without portfolio-level quality standards, the site's submission quality varies by coordinator, by study, and by day. Only the RC can define and enforce a unified standard. |
| Institutional memory and continuity | The ability to maintain regulatory knowledge that transcends individual study timelines -- what the IRB flagged two years ago, how a previous close-out was handled, what documentation standard a monitor expected | Staff turn over. Studies close. New studies open. A coordinator who leaves takes their institutional knowledge with them. A pipeline-level system preserves that knowledge as organizational infrastructure. |
These five capabilities are not aspirational ideals. They are operational necessities at any site managing more than a handful of concurrent studies. And here is the critical insight: they do not emerge organically from individual coordinators doing their jobs well. You can have five excellent coordinators, each managing their own studies flawlessly, and still have a site with no cross-study deadline awareness, no resource conflict resolution, and no standardized quality baseline. The capabilities are properties of the system, not of the individuals within it.
This is why the RC role exists. Not to do what coordinators do, only more of it, but to create and maintain the operational infrastructure that produces these five capabilities.
ICH E6(R3) Annex 1, Section 3.10.1.1 requires the sponsor to identify risks that may have a meaningful impact on critical to quality factors. In the context of regulatory submissions, this means identifying how the absence of pipeline-level management creates systemic risk -- risk that is invisible from within any individual study but devastating when it materializes.
I have seen these failure modes at dozens of sites over the years, and what strikes me most is how predictable they are. They are not exotic disasters. They are the inevitable consequences of managing submissions study-by-study at a site that has outgrown that approach.
When each coordinator tracks only their own studies' deadlines, no one sees the aggregate picture. Four continuing reviews due in the same week is manageable if discovered six weeks in advance -- and catastrophic if discovered three days out. The problem is not that individual coordinators are careless. The problem is that the information needed to prevent the crisis exists across multiple study files that no one is aggregating. Per ICH E6(R3) Annex 1, Section 3.10.1.1, risks to process integrity should be identified proactively. Deadline collision blindness represents a systemic failure to identify a predictable, preventable risk.
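The aggregation step described above is mechanical once the deadlines live in one place. As a minimal sketch (the field names and thresholds here are illustrative assumptions, not a prescribed tool), a portfolio tracker can flag any rolling window that holds too many deadlines, surfacing a collision weeks before it becomes a crisis:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Submission:
    study: str
    kind: str   # e.g. "continuing review", "amendment"
    due: date

def collision_windows(subs, window_days=7, threshold=3):
    """Return clusters of `threshold`+ deadlines falling inside
    any rolling window of `window_days` days."""
    deadlines = sorted(subs, key=lambda s: s.due)
    clusters = []
    for i, anchor in enumerate(deadlines):
        window = [s for s in deadlines[i:]
                  if (s.due - anchor.due).days < window_days]
        if len(window) >= threshold:
            clusters.append(window)
    return clusters

portfolio = [
    Submission("A", "continuing review", date(2024, 3, 3)),
    Submission("B", "continuing review", date(2024, 3, 4)),
    Submission("C", "amendment", date(2024, 3, 6)),
    Submission("D", "continuing review", date(2024, 4, 20)),
]
# Studies A, B, and C converge in one week -- invisible from any single file.
clusters = collision_windows(portfolio)
```

The point is not the code but the precondition: no such check can run until someone aggregates deadlines across study files into one structure.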
At most sites, the principal investigator's signature is the single most constrained resource in the submission process. PIs have clinical responsibilities, teaching obligations, and competing demands. When coordinators independently request PI signatures for their own submissions without visibility into other pending requests, the PI faces an uncoordinated flood of signature demands. The result: some submissions get signed and others wait, with no rational prioritization based on regulatory urgency. The continuing review that prevents a lapse waits in the same queue as the minor administrative amendment. Pipeline-level management sequences signature requests by consequence of delay.
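"Sequencing by consequence of delay" can be made concrete with a simple two-level sort: regulatory consequence first, due date second. The category names below are an illustrative taxonomy, not an official one:

```python
from datetime import date

# Rank by regulatory consequence of delay, then by earliest due date.
# Categories are hypothetical labels for illustration.
CONSEQUENCE_RANK = {
    "prevents_lapse": 0,   # continuing review that keeps approval active
    "safety_report": 1,    # reportable safety events
    "routine": 2,          # administrative amendments that can wait
}

def sequence_signature_requests(requests):
    """requests: iterable of (study, consequence, due) tuples.
    Returns them in the order the PI should sign."""
    return sorted(requests, key=lambda r: (CONSEQUENCE_RANK[r[1]], r[2]))

queue = sequence_signature_requests([
    ("B", "routine", date(2024, 5, 1)),
    ("A", "prevents_lapse", date(2024, 5, 20)),
    ("C", "safety_report", date(2024, 5, 3)),
])
```

Note that Study A jumps the queue despite having the latest due date: preventing a lapse outranks an earlier but low-consequence deadline, which is exactly the prioritization no individual coordinator can perform alone.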
When three coordinators each prepare IRB submissions according to their own understanding of what constitutes a complete package, the site produces three different quality levels. Coordinator A includes a cover letter summarizing changes. Coordinator B does not. Coordinator C includes the cover letter but uses an outdated institutional letterhead. None of these individual decisions is visible to anyone else. Over time, the IRB forms an impression of the site based on the lowest common denominator, and rejection rates creep upward for everyone. Portfolio-level quality control prevents this by establishing standards that apply across all submissions regardless of who prepares them.
A coordinator who has managed a study for three years knows that the local IRB expects protocol deviation reports within 10 business days, that the committee chair has strong views about consent document readability, and that the portal's file upload function fails silently on PDFs larger than 25 MB. When that coordinator leaves, every bit of this knowledge vanishes. The next person starts from zero. At a pipeline level, this knowledge is documented as part of IRB-specific operational profiles -- institutional infrastructure that persists regardless of who occupies which role.

Figure 1: Conceptual architecture of a submission pipeline -- inputs, processing, outputs, and feedback
A submission pipeline is not a metaphor. It is a concrete operational architecture with four components: inputs, processing, outputs, and feedback. Understanding this architecture is the foundation for everything you will build in this module.
Inputs are the events that generate submission work. A new study requires an initial submission. An approaching approval expiration date triggers continuing review preparation. A protocol amendment creates an amendment submission. A safety event generates a safety report to the IRB. A study ending initiates close-out notification. At any given moment, a site with 15 active studies may have 20 or more input events in various stages of readiness.
Processing is where pipeline-level capabilities live. This is where deadlines are aggregated into a unified view, where resource conflicts are identified and resolved, where quality standards are applied consistently, and where each submission is routed to the correct IRB through the correct portal with the correct forms. Processing is the RC's domain -- the operational infrastructure that transforms raw input events into submission-ready packages.
Outputs are completed submissions delivered to the appropriate IRB. But "completed" is a precisely defined state: the package meets the IRB's specific requirements, includes all required signatures, contains current documents, and has passed the site's quality control process. A submission that reaches the IRB incomplete is not an output -- it is a processing failure that will generate rework.
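Because "completed" is a defined state, it can be expressed as a gate that every package must pass before it counts as an output. A minimal sketch, with hypothetical field names standing in for the site's actual QC checklist:

```python
def is_complete_output(package, irb_required_docs):
    """'Completed' as a precisely defined state: every gate must pass
    before the package counts as a pipeline output rather than rework."""
    return (
        irb_required_docs.issubset(package["documents"])  # IRB-specific requirements met
        and package["signatures_complete"]                # all required signatures obtained
        and package["documents_current"]                  # no superseded versions
        and package["qc_passed"]                          # site QC process cleared
    )

pkg = {
    "documents": {"protocol", "consent", "cover_letter"},
    "signatures_complete": True,
    "documents_current": True,
    "qc_passed": True,
}
ready = is_complete_output(pkg, {"protocol", "consent"})
```

A package failing any single gate is, in the pipeline's terms, still in processing -- not an output.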
Feedback closes the loop. Post-submission tracking captures what the IRB does with each submission: approval, approval with modifications, deferral, request for additional information. This feedback informs the processing layer. If a particular IRB consistently requests additional documentation for a specific submission type, the processing layer adjusts the quality control standard for that combination. If submissions to a particular portal are taking 30% longer than expected, the processing layer adjusts timelines. Without feedback, the pipeline operates blind.
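The timeline-adjustment half of that feedback loop can be sketched as a small tracker that compares planned turnaround against observed history and plans against the worse of the two. This is an illustrative sketch under assumed data, not a prescribed tool:

```python
from collections import defaultdict

class TurnaroundTracker:
    """Rolling record of observed IRB processing times per (irb, submission type),
    so schedules absorb a consistently slow IRB instead of being surprised by it."""

    def __init__(self, planned_days):
        self.planned = planned_days          # {(irb, kind): expected days}
        self.observed = defaultdict(list)    # {(irb, kind): [actual days, ...]}

    def record(self, irb, kind, actual_days):
        self.observed[(irb, kind)].append(actual_days)

    def planning_estimate(self, irb, kind):
        """Plan against the worse of the stated timeline and observed average."""
        hist = self.observed.get((irb, kind))
        if not hist:
            return self.planned[(irb, kind)]
        return max(self.planned[(irb, kind)], sum(hist) / len(hist))

tracker = TurnaroundTracker({("Central IRB", "amendment"): 30})
tracker.record("Central IRB", "amendment", 40)
tracker.record("Central IRB", "amendment", 44)
```

With two observations averaging 42 days against a planned 30, the processing layer now builds 42 days into every schedule touching that IRB and submission type -- the feedback loop closing in practice.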
Not every site needs the same pipeline. A site managing four studies through a single IRB has different infrastructure requirements than a site managing 25 studies across four IRBs. ICH E6(R3) Principle 7 (Sections 7.1--7.4) establishes that quality management approaches should be proportionate to risks and the importance of the data. The same principle applies to submission pipeline design.
The essential question is not "How elaborate should my pipeline be?" but rather "At what point does the absence of pipeline-level management create unacceptable risk?" For most sites, that threshold arrives somewhere between five and eight concurrent studies -- the point at which no single person can reliably hold all deadlines, dependencies, and quality requirements in working memory simultaneously.
Below that threshold, a diligent coordinator with a good calendar may suffice. Above it, you need infrastructure: a tracking system, quality control checkpoints, defined escalation procedures, and someone whose job it is to look across all studies simultaneously. That infrastructure is the submission pipeline. And building it is the work of the next three lessons.
The shift from per-study submission execution to pipeline management is not about doing more of the same work. It is about creating operational capabilities -- cross-study deadline awareness, resource conflict resolution, IRB pattern recognition, quality normalization, and institutional memory -- that cannot exist without portfolio-level visibility.
When a site manages submissions study-by-study, four predictable failure modes emerge: deadline collision blindness, signature bottleneck cascades, quality variance, and institutional memory loss. These failures are not caused by individual incompetence. They are structural consequences of the absence of a pipeline.
A submission pipeline has a concrete architecture: inputs (submission-generating events), processing (where the RC's pipeline capabilities operate), outputs (completed submissions to IRBs), and feedback (post-submission tracking that informs future operations). The next lesson begins building this pipeline by teaching you to see your submission landscape -- the full scope of what you are managing.
Regulatory Coordinator
Full course · Regulatory Submissions & Stakeholder Management
Enjoyed this preview?
Enroll to access all courses in the Regulatory Coordinator track.
Unlock the full course