
Root cause analysis for regulatory findings: getting past the symptom to the system failure
Teaches structured root cause analysis methodologies that drive past proximate causes to the systemic process and design failures beneath them, across multiple studies -- because corrective actions aimed at symptoms guarantee recurrence.
One finding or four?
Four monitoring visits. Four different studies. Four different sponsors. And the same observation appears in each visit report: "Source documentation for eligibility criteria is incomplete." The CRAs each write it up as a finding in their respective monitoring reports. The investigators each receive a follow-up letter. And at most sites, what happens next is four separate responses -- one per study -- each promising to "retrain staff on eligibility documentation requirements."
This is how most sites handle the situation. And it is, with respect, the wrong approach.
The RC who encounters the same finding across four independent studies is not looking at four problems. The RC is looking at one problem that has manifested four times. The question that matters is not "how do we fix the eligibility documentation in Study A, Study B, Study C, and Study D?" The question is: what is it about the way this site documents eligibility -- the process, the tools, the training design, the workflow architecture -- that produces this result regardless of which study, which CRC, or which sponsor is involved?
That question is the beginning of root cause analysis. And until it is answered, every corrective action the site undertakes will address symptoms while leaving the system that created them undisturbed.
What you will learn
By the end of this lesson, you will be able to:
The proximate-cause trap
When a finding appears in a monitoring visit report, the instinct -- and I say this without judgment, because it is a near-universal instinct -- is to identify the most immediate explanation and address it. Eligibility documentation is incomplete? The CRC did not complete the worksheet. SAE reporting was late? The coordinator did not notice the timeline. Delegation log is outdated? Nobody remembered to update it.
These are proximate causes. They answer the question "what went wrong?" at the shallowest possible level. And they are almost always true -- the CRC really did not complete the worksheet, the coordinator really did not notice the timeline. But proximate causes share a defining characteristic: they describe individual failures within a system rather than the system conditions that made those failures predictable.
ICH E6(R3), Annex 1, Section 3.12.2 addresses this directly. When significant noncompliance is identified, the guideline requires that root cause analysis be conducted and that corrective and preventive actions follow. The operative phrase is "root cause" -- not "immediate cause," not "obvious cause," not "the explanation that first comes to mind." The guideline demands that the analysis reach the level of the system.
The distinction is not merely academic. It determines the effectiveness of every corrective action the site undertakes. A corrective action aimed at a proximate cause -- "retrain the CRC on eligibility documentation" -- addresses a person, not a process. It assumes the CRC failed because of insufficient knowledge. But if the root cause is a fragmented documentation workflow with no process checkpoint, retraining will not change the outcome. The next CRC managing three simultaneous screening visits will face the same systemic conditions and produce the same result. I have watched this cycle repeat at site after site, and the signature of the proximate-cause trap is always the same: "We retrained staff on this last quarter. I do not understand why it is happening again."
The RC who hears that sentence should recognize it not as evidence of staff deficiency but as evidence of inadequate root cause analysis. Recurrence after retraining is a diagnostic signal. It tells the RC that the original analysis stopped too soon.
The 5-Whys: structured depth for portfolio findings
The 5-Whys method is deceptively simple. You state the finding, ask "why?" and then ask "why?" again for each answer, continuing until you reach a cause that describes a systemic process or design failure rather than an individual action. The name suggests five iterations, but the actual number varies -- sometimes three are sufficient, sometimes seven are required. The point is not the count. The point is to keep asking until the answer describes a system condition rather than a person's behavior.
Applied to portfolio-level findings, the 5-Whys has a specific virtue: it forces the RC to move past study-specific explanations toward site-level systemic conditions. Here is how it works in practice, using the eligibility documentation finding that appeared across four studies.
The finding: source documentation for eligibility criteria is incomplete across four active studies (identified by four independent monitoring visits during the same quarter).
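One way to keep a 5-Whys chain honest is to record it as data and apply the stopping test at the end. The sketch below is illustrative, not a site's actual analysis: the intermediate answers and the `looks_systemic` heuristic are assumptions made for demonstration, built from the failure modes discussed in this lesson (fragmented sponsor worksheets, no checkpoint, no site-level SOP).

```python
# A 5-Whys chain for the finding above, recorded as data so the stopping
# rule can be checked. The intermediate answers are illustrative, not a
# site's actual analysis.
finding = ("Source documentation for eligibility criteria is incomplete "
           "across four active studies")

chain = [
    "The CRCs did not finish the eligibility worksheets during screening",
    "Each study uses a different sponsor-provided worksheet and workflow",
    "Staff improvise the documentation sequence study by study",
    "No pre-enrollment checkpoint verifies completeness before enrollment",
    "No site-level SOP defines how eligibility must be documented",
]

def looks_systemic(answer: str) -> bool:
    """Crude heuristic: a root cause names a structural absence
    ('no SOP', 'no checkpoint'), not a person's action ('the CRC
    did not...'). A real analysis applies judgment, not keywords."""
    person_words = ("crc", "coordinator", "staff", "forgot", "did not")
    return not any(w in answer.lower() for w in person_words)

for depth, answer in enumerate(chain, 1):
    print(f"Why #{depth}: {answer}")

root = chain[-1]
print("Root cause" if looks_systemic(root) else "Stopped too soon", "->", root)
```

The point of the structure is the final check: if the last recorded answer still describes a person's behavior, the analysis stopped too soon and another "why?" is owed.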
Knowing when you have arrived
A common difficulty with the 5-Whys is knowing when to stop. Two signs indicate the analysis has reached a root cause. First, the answer describes a system condition -- a process design, a resource allocation, a structural absence -- rather than a person's action or decision. "No site-level SOP exists" is a system condition. "The CRC forgot" is a person's action. Second, the answer is something the site has the authority and ability to change at the organizational level. If the final answer is "the FDA does not require a standardized eligibility documentation format," the analysis has gone past the root cause into a regulatory context the site cannot control. The root cause should describe something the site's own systems failed to provide.
A practical caution: the 5-Whys is powerful for linear causal chains but can oversimplify findings with multiple contributing factors. When a finding has branching causes -- when the answer to "why?" is genuinely "for three different reasons" -- the 5-Whys must be applied to each branch independently. And for findings where multiple system failures interact, more structured methods are necessary.
Ishikawa/fishbone analysis: mapping the systemic terrain
Where the 5-Whys provides depth along a single causal chain, the Ishikawa diagram -- also called a fishbone diagram, for reasons that become obvious when you see one -- provides breadth across multiple cause categories. It is particularly valuable for portfolio-level regulatory findings where the root cause may not be a single failure but a convergence of failures across different operational domains.
The Ishikawa diagram places the finding at the head of the "fish." Six cause categories extend as branches (the "bones"): People, Process, Policy, Systems/Technology, Environment, and Measurement. For each category, the RC asks: "What failures in this domain contributed to the finding?" The result is a comprehensive map of every systemic factor that permitted the finding to occur.
Applied to the eligibility documentation finding, the Ishikawa analysis reveals causes that a linear 5-Whys might miss.
The value of this broader view is that it reveals how many systems must change for the finding to be truly resolved. Retraining addresses only the People branch -- and only partially, since the training design itself is a Process and Policy issue. A corrective action that redesigns the workflow, standardizes the documentation tool, introduces a pre-enrollment checkpoint, and establishes an ongoing completeness metric addresses all six branches. It is more work. It is also the only approach that will actually resolve the finding.
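The branch-coverage argument can be made mechanical: record the six branches, map each proposed corrective action to the branches it touches, and see what the package leaves unaddressed. The sub-causes and corrective actions below are hypothetical stand-ins, not a completed site analysis.

```python
# Six fishbone branches for the finding, with illustrative sub-causes,
# and a check of which branches a proposed CAPA package actually touches.
branches = {
    "People": "training covered each sponsor's worksheet, not a common standard",
    "Process": "no defined step in screening where eligibility is documented",
    "Policy": "no site-level SOP for eligibility documentation",
    "Systems/Technology": "four incompatible sponsor-provided worksheets",
    "Environment": "CRCs run simultaneous screening visits across studies",
    "Measurement": "no ongoing completeness metric",
}

# Hypothetical corrective actions, each mapped to the branches it addresses.
capa = {
    "Retrain staff on the common standard": {"People"},
    "Write a site-level eligibility SOP": {"Policy"},
    "Adopt one site-wide documentation worksheet": {"Systems/Technology"},
    "Redesign screening with a pre-enrollment checkpoint": {"Process", "Environment"},
    "Track documentation completeness as an ongoing metric": {"Measurement"},
}

covered = set().union(*capa.values())
missed = branches.keys() - covered
print("Branches not addressed:", sorted(missed) or "none")
```

Run with only the retraining action, the same check reports five unaddressed branches -- which is the proximate-cause trap expressed as arithmetic.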
I want to be direct about something: the Ishikawa diagram is not a creative exercise. Every sub-cause entered on the diagram must be supported by evidence -- from the gap analysis, the monitoring visit reports, the staff interviews, or the process observations conducted in Module 2. The RC who fills in an Ishikawa diagram with hypothetical causes is generating speculation, not conducting root cause analysis. Section 3.10.1.1 of ICH E6(R3) establishes that risk identification should be systematic and based on evidence. The same standard applies here.
Fault tree analysis: when the question is "what had to fail?"
The third methodology -- fault tree analysis -- approaches root cause from a different angle. Instead of asking "why did this happen?" it asks "what conditions had to be present for this outcome to occur?" The fault tree starts with the undesired event at the top and works downward through logic gates (AND/OR) to identify the combinations of failures that produced the outcome.
For portfolio-level regulatory findings, fault tree analysis is especially useful when the RC suspects that the finding resulted from multiple independent failures coinciding. Using an OR gate means any single failure could produce the outcome; using an AND gate means multiple failures had to coincide. For the eligibility documentation finding, a simplified fault tree might show that incomplete documentation occurs when (training gap OR workflow design failure) AND (no pre-enrollment checkpoint) AND (no ongoing completeness metric). All three AND-conditions had to be present simultaneously. Removing any one of them -- establishing a checkpoint, creating a metric, or fixing the training -- would have prevented or detected the finding.
Cross-study pattern analysis: the RC's distinctive contribution
Here is where root cause analysis at the portfolio level diverges most sharply from study-level deviation investigation. A CRC investigating a deviation in a single study asks: "What went wrong in this study?" The RC conducting root cause analysis across the portfolio asks a fundamentally different question: "Is the same thing going wrong in multiple studies, and if so, what does that pattern tell me about the site's systems?"
Cross-study pattern analysis is the RC's distinctive analytical contribution -- the work that no one else at the site is positioned to perform. The investigator sees one study. The CRC sees one or two studies. The sponsor's CRA sees one study at this site. Only the RC has visibility across the entire portfolio, and only the RC can detect the patterns that reveal systemic causes.
The analytical process has three steps. First, aggregate findings from all available sources: monitoring visit reports across studies, gap analysis results, mock inspection observations, and quality metric trends. Second, classify findings by type -- not by study, but by the nature of the finding. "Incomplete eligibility documentation" in Study A and "missing inclusion/exclusion verification" in Study B may use different language but describe the same underlying process failure. Third, look for findings that appear in three or more studies, findings that appear across different therapeutic areas or sponsor relationships, and findings that persist despite study-specific corrective actions.
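The three steps above can be sketched in a few lines: aggregate the findings, classify them by the nature of the failure rather than the wording, and flag any type that appears in three or more studies. The study labels, finding texts, and the classification lookup are all illustrative -- in practice the classification step is the RC's judgment, not a table.

```python
from collections import defaultdict

# Step 1: aggregate findings from monitoring reports across the portfolio.
# Study labels and wording below are illustrative.
findings = [
    ("Study A", "Incomplete eligibility documentation"),
    ("Study B", "Missing inclusion/exclusion verification"),
    ("Study C", "Eligibility source docs incomplete"),
    ("Study D", "Incomplete eligibility documentation"),
    ("Study B", "Delegation log not current"),
]

# Step 2: classify by the nature of the finding, not its language.
# A real classification is an analyst's judgment; this lookup stands in for it.
FINDING_TYPE = {
    "Incomplete eligibility documentation": "eligibility-documentation",
    "Missing inclusion/exclusion verification": "eligibility-documentation",
    "Eligibility source docs incomplete": "eligibility-documentation",
    "Delegation log not current": "delegation-log",
}

by_type = defaultdict(set)
for study, text in findings:
    by_type[FINDING_TYPE[text]].add(study)

# Step 3: a finding type in three or more studies suggests a systemic cause.
for ftype, studies in sorted(by_type.items()):
    if len(studies) >= 3:
        print(f"Systemic candidate: {ftype} in {len(studies)} studies")
```

Note that Study B's differently worded "missing inclusion/exclusion verification" only joins the pattern because the classification step looked past the language -- which is exactly the judgment the RC supplies.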
When the evidence points to a systemic root cause, the implication for corrective action design is profound. A systemic root cause cannot be addressed by study-level corrective actions, no matter how many studies implement them independently. If the root cause is the absence of a site-level eligibility documentation SOP, then writing individual corrective action responses to four different sponsors -- each promising study-specific retraining -- addresses the symptom four times while addressing the cause zero times. Per ICH E6(R3), Annex 1, Section 3.11.1, quality assurance includes implementing risk-based strategies to enable corrective and preventive actions for serious noncompliance. Section 3.12.2 further requires root cause analysis when significant noncompliance is identified. The RC's job is to ensure that when findings share a root cause, the corrective action addresses the root cause once, at the site level, rather than the symptom repeatedly, at the study level.
Applying root cause analysis to a portfolio finding
The following case study presents the scenario introduced at the start of this lesson -- four monitoring visits surfacing the same eligibility documentation finding -- and asks you to work through the analysis from pattern recognition through root cause identification.
"One finding or four?"
Key takeaways
Root cause analysis is the discipline that separates corrective actions that work from corrective actions that are merely documented. Three principles should anchor every analysis the RC conducts at the portfolio level.
First, the depth of analysis determines the durability of the corrective action. Corrective actions aimed at proximate causes -- retraining, reminders, increased oversight -- produce temporary improvements that decay as soon as attention moves elsewhere. Corrective actions aimed at root causes -- process redesign, workflow integration, system-level controls -- produce lasting change because they alter the conditions that generated the finding rather than the behavior of the individuals who encountered those conditions.
Second, cross-study patterns are diagnostic. When the same finding appears across multiple studies, CRCs, and sponsors, the pattern itself is evidence of a systemic root cause. The RC who treats each instance as an independent finding is missing the signal that the site's systems are generating. Section 3.12.2 of ICH E6(R3) does not limit root cause analysis to individual noncompliance events; it applies to patterns that indicate systemic deficiencies.

