
Coordinator-level quality review: the checklist that catches what gets missed
Teaches the RC to design tiered QC checklists for initial submissions, implement a delegation model that distributes review responsibility between CRCs and the RC, and evaluate QC effectiveness through quarterly metrics and continuous improvement cycles.
The scalability problem
Module 1, Lesson 4 established the three-stage quality control framework for the submission pipeline: pre-assembly verification, pre-submission review, and post-submission confirmation. That framework answers the question of when quality checks should occur. This lesson answers a different question: who performs them, and how does the RC ensure quality at scale without becoming the bottleneck?
The math is unforgiving. An RC managing 18 active studies with an average of four submissions per study per year processes approximately 72 submissions annually. If the RC personally conducts a full pre-submission review on every package -- reading every document, verifying every version number, checking every signature line -- and each review takes an average of 25 minutes, that is 30 hours per year spent on pre-submission review alone. Manageable, perhaps, in isolation. But the RC also manages the tracking system, coordinates PI signatures, handles IRB correspondence, troubleshoots sponsor queries, and occasionally sleeps.
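The workload arithmetic above can be sketched in a few lines, using the lesson's own figures (18 studies, 4 submissions per study per year, 25 minutes per review); the variable names are illustrative:

```python
# Workload estimate using the lesson's figures.
studies = 18
submissions_per_study_per_year = 4
minutes_per_review = 25

annual_submissions = studies * submissions_per_study_per_year
review_hours_per_year = annual_submissions * minutes_per_review / 60

print(annual_submissions)       # 72 submissions per year
print(review_hours_per_year)    # 30.0 hours of pre-submission review
```

Thirty hours is only the review time itself; it excludes context switching and the RC's other duties, which is why the total burden grows faster than the raw figure suggests.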
The solution is not to skip quality review. The solution is to design a delegation model that distributes review responsibility intelligently -- placing the right level of scrutiny at the right point in the process, performed by the right person. The CRC who assembles a package should be the first line of quality defense. The RC should focus review time where it produces the most value: high-risk submissions, pattern analysis, and system improvement.
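The delegation principle -- routine packages get a CRC self-check, while high-risk packages get the RC's full attention -- could be expressed as a simple routing rule. This is a hypothetical sketch: the tier names, inputs, and thresholds are illustrative assumptions, not the lesson's actual checklist criteria.

```python
def review_tier(is_initial_submission: bool, prior_error_count: int) -> str:
    """Route a submission package to a review tier by risk.

    Hypothetical illustration of the delegation principle:
    inputs and thresholds are assumptions, not lesson content.
    """
    if is_initial_submission or prior_error_count >= 2:
        # High risk: RC reads every document before submission.
        return "RC full review"
    if prior_error_count == 1:
        # Medium risk: RC samples key items (versions, signatures).
        return "RC spot-check"
    # Routine: the assembling CRC is the first line of quality defense.
    return "CRC self-check"

print(review_tier(is_initial_submission=True, prior_error_count=0))   # RC full review
print(review_tier(is_initial_submission=False, prior_error_count=0))  # CRC self-check
```

The design point is that the rule is explicit and cheap to apply, so the RC's 30 hours of review capacity is spent where errors are most likely, not spread evenly across all 72 packages.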
This is not about working less. This is about designing a system that scales without sacrificing quality.
What you will learn
By the end of this lesson, you will be able to: