Root cause analysis for regulatory findings: getting past the symptom to the system failure
Teaches structured root cause analysis methodologies that drive past proximate causes to systemic process and design failures across multiple studies -- because corrective actions aimed at symptoms guarantee recurrence.
One finding or four?
Four monitoring visits. Four different studies. Four different sponsors. And the same observation appears in each visit report: "Source documentation for eligibility criteria is incomplete." The CRAs each write it up as a finding in their respective monitoring reports. The investigators each receive a follow-up letter. And at most sites, what happens next is four separate responses -- one per study -- each promising to "retrain staff on eligibility documentation requirements."
This is how most sites handle the situation. And it is, with respect, the wrong approach.
The RC who encounters the same finding across four independent studies is not looking at four problems. The RC is looking at one problem that has manifested four times. The question that matters is not "how do we fix the eligibility documentation in Study A, Study B, Study C, and Study D?" The question is: what is it about the way this site documents eligibility -- the process, the tools, the training design, the workflow architecture -- that produces this result regardless of which study, which CRC, or which sponsor is involved?
That question is the beginning of root cause analysis. And until it is answered, every corrective action the site undertakes will address symptoms while leaving the system that created them undisturbed.
What you will learn
By the end of this lesson, you will be able to:
1. Apply structured root cause analysis methodologies (5-Whys, Ishikawa/fishbone, fault tree analysis) to portfolio-level regulatory findings, driving past proximate causes to identify systemic process and design failures
2. Distinguish between proximate causes, contributing causes, and root causes in the context of regulatory findings that span multiple studies
3. Analyze cross-study patterns to determine whether apparently independent findings share a common systemic root cause
The proximate-cause trap
When a finding appears in a monitoring visit report, the instinct -- and I say this without judgment, because it is a near-universal instinct -- is to identify the most immediate explanation and address it. Eligibility documentation is incomplete? The CRC did not complete the worksheet. SAE reporting was late? The coordinator did not notice the timeline. Delegation log is outdated? Nobody remembered to update it.
These are proximate causes. They answer the question "what went wrong?" at the shallowest possible level. And they are almost always true -- the CRC really did not complete the worksheet, the coordinator really did not notice the timeline. But proximate causes share a defining characteristic: they describe individual failures within a system rather than the system conditions that made those failures predictable.
ICH E6(R3), Annex 1, Section 3.12.2 addresses this directly. When significant noncompliance is identified, the guideline requires that root cause analysis be conducted and that corrective and preventive actions follow. The operative phrase is "root cause" -- not "immediate cause," not "obvious cause," not "the explanation that first comes to mind." The guideline demands that the analysis reach the level of the system.
Three levels of causation in regulatory findings
Proximate cause: The immediate error or omission. "The CRC did not complete the eligibility documentation worksheet." This is the visible event -- the thing that shows up in the monitoring report. Proximate causes describe what happened.
Contributing cause: The environmental or workload condition that made the proximate cause more likely. "The CRC was managing screening visits for three studies simultaneously, and the eligibility worksheet is a separate paper form stored in a different location from the screening assessments." Contributing causes describe the conditions under which the error occurred.
Root cause: The systemic process or design failure that allowed the contributing and proximate causes to exist. "The site has no integrated eligibility verification workflow -- each study uses its own documentation tool, there is no checkpoint requiring eligibility confirmation before enrollment, and training addresses the content of eligibility criteria but not the documentation process." Root causes describe why the system permitted the failure to happen.
The distinction is not merely academic. It determines the effectiveness of every corrective action the site undertakes. A corrective action aimed at a proximate cause -- "retrain the CRC on eligibility documentation" -- addresses a person, not a process. It assumes the CRC failed because of insufficient knowledge. But if the root cause is a fragmented documentation workflow with no process checkpoint, retraining will not change the outcome. The next CRC managing three simultaneous screening visits will face the same systemic conditions and produce the same result. I have watched this cycle repeat at site after site, and the signature of the proximate-cause trap is always the same: "We retrained staff on this last quarter. I do not understand why it is happening again."
The RC who hears that sentence should recognize it not as evidence of staff deficiency but as evidence of inadequate root cause analysis. Recurrence after retraining is a diagnostic signal. It tells the RC that the original analysis stopped too soon.
The 5-Whys: structured depth for portfolio findings
The 5-Whys method is deceptively simple. You state the finding, ask "why?" and then ask "why?" again for each answer, continuing until you reach a cause that describes a systemic process or design failure rather than an individual action. The name suggests five iterations, but the actual number varies -- sometimes three are sufficient, sometimes seven are required. The point is not the count. The point is to keep asking until the answer describes a system condition rather than a person's behavior.
Applied to portfolio-level findings, the 5-Whys has a specific virtue: it forces the RC to move past study-specific explanations toward site-level systemic conditions. Here is how it works in practice, using the eligibility documentation finding that appeared across four studies.
Finding: Source documentation for eligibility criteria is incomplete across four active studies (identified by four independent monitoring visits during the same quarter).
Why? CRCs in all four studies are not consistently completing the eligibility documentation at the time of screening assessments.
Why? The eligibility documentation process is separate from the screening assessment workflow -- CRCs complete screening assessments in real time during the visit, but eligibility documentation is a post-visit task performed from memory or partial notes.
Why? The site has no integrated eligibility verification workflow. Each study uses a different documentation tool (two use sponsor-provided worksheets, one uses a site-designed checklist, one has no structured tool), and none are embedded in the screening visit workflow.
Why? The site's training program for new studies covers eligibility criteria content -- what the criteria are and how to assess them -- but does not address documentation workflow design. There is no site-level SOP for eligibility documentation process; the process is left to each study team to determine independently.
Why? The site has never established a standard operating procedure for eligibility documentation across its portfolio. Each study's documentation approach was designed ad hoc during study start-up, without reference to a site-level standard.
That fifth answer is the root cause. It describes a systemic absence -- no site-level SOP, no standardized workflow, no portfolio-wide documentation design -- that made the proximate cause (CRCs not completing documentation) predictable rather than anomalous. And notice: the root cause has nothing to do with any individual CRC's competence or diligence. It describes a design failure in the site's operational infrastructure.
Figure 1: 5-Whys analysis applied to a portfolio-level eligibility documentation finding
Knowing when you have arrived
A common difficulty with the 5-Whys is knowing when to stop. Two signs indicate the analysis has reached a root cause. First, the answer describes a system condition -- a process design, a resource allocation, a structural absence -- rather than a person's action or decision. "No site-level SOP exists" is a system condition. "The CRC forgot" is a person's action. Second, the answer is something the site has the authority and ability to change at the organizational level. If the final answer is "the FDA does not require a standardized eligibility documentation format," the analysis has gone past the root cause into a regulatory context the site cannot control. The root cause should describe something the site's own systems failed to provide.
A practical caution: the 5-Whys is powerful for linear causal chains but can oversimplify findings with multiple contributing factors. When a finding has branching causes -- when the answer to "why?" is genuinely "for three different reasons" -- the 5-Whys must be applied to each branch independently. And for findings where multiple system failures interact, more structured methods are necessary.
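The two stopping criteria described above can be sketched as a simple qualification check: a candidate answer counts as a root cause only if it names a system condition and the site has the authority to change it. The field names and the data-class format below are illustrative, not a prescribed tool; the three candidate answers are taken from the examples in this lesson.

```python
# Sketch of the two 5-Whys stopping criteria as a qualification check.
# The structure and field names are illustrative, not a required format.

from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    is_system_condition: bool  # names a process/design condition, not a person's act
    site_can_change: bool      # within the site's own organizational authority

def qualifies_as_root_cause(c: Candidate) -> bool:
    """Both signs must hold: a system condition, and one the site can change."""
    return c.is_system_condition and c.site_can_change

candidates = [
    Candidate("The CRC forgot to complete the worksheet", False, True),
    Candidate("The FDA does not require a standardized format", True, False),
    Candidate("No site-level eligibility documentation SOP exists", True, True),
]

for c in candidates:
    print(f"{qualifies_as_root_cause(c)}: {c.answer}")
```

Only the third candidate passes: the first describes a person's action, and the second has drifted past the root cause into regulatory context the site cannot control.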
Ishikawa/fishbone analysis: mapping the systemic terrain
Where the 5-Whys provides depth along a single causal chain, the Ishikawa diagram -- also called a fishbone diagram, for reasons that become obvious when you see one -- provides breadth across multiple cause categories. It is particularly valuable for portfolio-level regulatory findings where the root cause may not be a single failure but a convergence of failures across different operational domains.
The Ishikawa diagram places the finding at the head of the "fish." Six cause categories extend as branches (the "bones"): People, Process, Policy, Systems/Technology, Environment, and Measurement. For each category, the RC asks: "What failures in this domain contributed to the finding?" The result is a comprehensive map of every systemic factor that permitted the finding to occur.
Applied to the eligibility documentation finding, the Ishikawa analysis reveals causes that a linear 5-Whys might miss.
Figure 2: Ishikawa diagram adapted for portfolio-level regulatory quality analysis
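The six-branch structure can be represented as a small data model in which every sub-cause must carry an evidence source before it is accepted onto the diagram. The branch names come from this lesson; the sub-causes and evidence sources below are hypothetical illustrations, not findings from a real site.

```python
# Sketch of an Ishikawa structure for a portfolio-level finding. Branch
# names are from the lesson; sub-causes and evidence sources are
# hypothetical. Every sub-cause must cite evidence -- an unevidenced
# entry is speculation, not root cause analysis, and is rejected.

FINDING = "Incomplete eligibility source documentation across four studies"

fishbone = {
    "People": [("Documentation completed from memory after the visit",
                "staff interviews")],
    "Process": [("Eligibility documentation separate from screening workflow",
                 "process observation")],
    "Policy": [("No site-level SOP for eligibility documentation",
                "gap analysis")],
    "Systems/Technology": [("Four different documentation tools in four studies",
                            "monitoring visit reports")],
    "Environment": [("CRCs covering screening visits for three studies at once",
                     "workload review")],
    "Measurement": [("No ongoing metric tracking documentation completeness",
                     "quality metric trends")],
}

def validate(diagram):
    """Reject any sub-cause entered without an evidence source."""
    for branch, causes in diagram.items():
        for cause, evidence in causes:
            if not evidence:
                raise ValueError(f"Unevidenced cause on '{branch}' branch: {cause}")
    return True

print(validate(fishbone))  # True -- every sub-cause is evidence-backed
```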
The value of this broader view is that it reveals how many systems must change for the finding to be truly resolved. Retraining addresses only the People branch -- and only partially, since the training design itself is a Process and Policy issue. A corrective action that redesigns the workflow, standardizes the documentation tool, introduces a pre-enrollment checkpoint, and establishes an ongoing completeness metric addresses all six branches. It is more work. It is also the only approach that will actually work.
I want to be direct about something: the Ishikawa diagram is not a creative exercise. Every sub-cause entered on the diagram must be supported by evidence -- from the gap analysis, the monitoring visit reports, the staff interviews, or the process observations conducted in Module 2. The RC who fills in an Ishikawa diagram with hypothetical causes is generating speculation, not conducting root cause analysis. Section 3.10.1.1 of ICH E6(R3) establishes that risk identification should be systematic and based on evidence. The same standard applies here.
Fault tree analysis: when the question is "what had to fail?"
The third methodology -- fault tree analysis -- approaches root cause from a different angle. Instead of asking "why did this happen?" it asks "what conditions had to be present for this outcome to occur?" The fault tree starts with the undesired event at the top and works downward through logic gates (AND/OR) to identify the combinations of failures that produced the outcome.
For portfolio-level regulatory findings, fault tree analysis is especially useful when the RC suspects that the finding resulted from multiple independent failures coinciding. An OR gate means any single failure beneath it is enough to produce the outcome; an AND gate means the failures beneath it had to coincide. For the eligibility documentation finding, a simplified fault tree might show that incomplete documentation occurs when (training gap OR workflow design failure) AND (no pre-enrollment checkpoint) AND (no ongoing completeness metric). All three AND-conditions had to be present simultaneously. Removing any one of them -- establishing a checkpoint, creating a metric, or eliminating both the training gap and the workflow design failure -- would have prevented or detected the finding.
This insight is operationally powerful because it identifies multiple intervention points. The RC does not have to fix everything at once. The fault tree reveals which single intervention would break the causal chain most effectively -- which is often the one with the highest reliability and lowest implementation complexity.
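The simplified fault tree described above can be written out as a boolean expression and each single-intervention scenario tested against it. The condition names are taken from this lesson's example; the exercise is a sketch of the reasoning, not a formal fault tree tool.

```python
# The simplified fault tree from the lesson as a boolean expression:
# the finding occurs when (training gap OR workflow design failure)
# AND (no pre-enrollment checkpoint) AND (no completeness metric).

def finding_occurs(training_gap, workflow_failure, no_checkpoint, no_metric):
    return (training_gap or workflow_failure) and no_checkpoint and no_metric

# Baseline: every failure condition present -> the finding occurs.
baseline = dict(training_gap=True, workflow_failure=True,
                no_checkpoint=True, no_metric=True)
print(finding_occurs(**baseline))  # True

# Test each single intervention: which one alone breaks the causal chain?
for condition in baseline:
    scenario = dict(baseline, **{condition: False})
    prevented = not finding_occurs(**scenario)
    print(f"fix {condition}: finding prevented = {prevented}")
```

Running the sketch shows why the gates matter: fixing the training gap alone does not break the chain, because the workflow design failure still satisfies the OR gate, while removing either AND-condition -- the missing checkpoint or the missing metric -- prevents the outcome on its own.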
Choosing the right method
No single root cause analysis method is universally superior. A practical approach: start with the 5-Whys to establish causal depth, then apply the Ishikawa diagram if the 5-Whys reveals that multiple categories of failure contributed. Reserve fault tree analysis for complex findings where the RC needs to identify the minimum set of interventions that would break the causal chain. For most portfolio-level regulatory findings, a 5-Whys analysis supplemented by an Ishikawa diagram provides sufficient depth and breadth for effective corrective action design.
Cross-study pattern analysis: the RC's distinctive contribution
Here is where root cause analysis at the portfolio level diverges most sharply from study-level deviation investigation. A CRC investigating a deviation in a single study asks: "What went wrong in this study?" The RC conducting root cause analysis across the portfolio asks a fundamentally different question: "Is the same thing going wrong in multiple studies, and if so, what does that pattern tell me about the site's systems?"
Cross-study pattern analysis is the RC's distinctive analytical contribution -- the work that no one else at the site is positioned to perform. The investigator sees one study. The CRC sees one or two studies. The sponsor's CRA sees one study at this site. Only the RC has visibility across the entire portfolio, and only the RC can detect the patterns that reveal systemic causes.
The analytical process has three steps. First, aggregate findings from all available sources: monitoring visit reports across studies, gap analysis results, mock inspection observations, and quality metric trends. Second, classify findings by type -- not by study, but by the nature of the finding. "Incomplete eligibility documentation" in Study A and "missing inclusion/exclusion verification" in Study B may use different language but describe the same underlying process failure. Third, look for findings that appear in three or more studies, findings that appear across different therapeutic areas or sponsor relationships, and findings that persist despite study-specific corrective actions.
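The second and third steps above can be sketched as a small aggregation: normalize finding descriptions that use different sponsor language onto a common finding type, then count distinct studies per type. The finding records and the normalization map below are hypothetical examples.

```python
from collections import defaultdict

# Hypothetical monitoring findings aggregated across the portfolio.
# Step 2: normalize different sponsor language onto one finding type.
# Step 3: count distinct studies per finding type and flag 3+ studies.

NORMALIZE = {
    "incomplete eligibility documentation": "eligibility documentation",
    "missing inclusion/exclusion verification": "eligibility documentation",
    "delegation log not current": "delegation log maintenance",
}

findings = [
    ("Study A", "incomplete eligibility documentation"),
    ("Study B", "missing inclusion/exclusion verification"),
    ("Study C", "incomplete eligibility documentation"),
    ("Study D", "delegation log not current"),
]

studies_by_type = defaultdict(set)
for study, description in findings:
    studies_by_type[NORMALIZE.get(description, description)].add(study)

for ftype, studies in studies_by_type.items():
    if len(studies) >= 3:
        print(f"Candidate systemic finding: {ftype} ({len(studies)} studies)")
```

The normalization step is where the RC's judgment enters: "incomplete eligibility documentation" and "missing inclusion/exclusion verification" read as different findings study by study, but counted as one type they cross the three-study threshold.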
Reference Table: Indicators that distinguish systemic from independent findings

| Indicator | Suggests systemic root cause | Suggests independent causes |
| --- | --- | --- |
| Frequency across portfolio | Same finding type in 3+ studies | Finding isolated to one study |
| Persistence after correction | Recurs after study-level retraining or correction | Resolves after study-specific intervention |
| Staff involvement | Occurs across multiple CRCs, not just one individual | Traceable to one specific team member |
| Sponsor independence | Appears regardless of sponsor, CRO, or protocol complexity | Correlated with a specific sponsor's requirements |
| Temporal pattern | Emerged gradually as site grew, not linked to a specific event | Appeared after a specific procedural change or staff departure |
| Process dependency | Occurs at a consistent process step (e.g., screening, enrollment, closeout) | Occurs at different process steps in each study |
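The six indicators can be read as a simple checklist: each systemic-column answer is one signal, and the more signals accumulate, the stronger the case for a site-level cause. The indicator names below come from the table; the majority-vote reading is an illustrative heuristic, not a validated decision rule.

```python
# The six reference-table indicators as a checklist. Each True answer is
# a signal pointing toward a systemic root cause. The majority-vote
# threshold here is an illustrative heuristic, not a validated rule.

INDICATORS = [
    "same finding type in 3+ studies",
    "recurs after study-level correction",
    "occurs across multiple CRCs",
    "appears regardless of sponsor or CRO",
    "emerged gradually, not tied to one event",
    "occurs at a consistent process step",
]

def classify(signals):
    """Count systemic signals; a majority suggests a systemic root cause."""
    systemic = sum(signals.values())
    verdict = "systemic" if systemic > len(signals) / 2 else "independent"
    return verdict, systemic

# The eligibility documentation example in this lesson trips every indicator.
signals = {name: True for name in INDICATORS}
verdict, count = classify(signals)
print(f"{count}/{len(signals)} systemic signals -> likely {verdict} root cause")
```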
When the evidence points to a systemic root cause, the implication for corrective action design is profound. A systemic root cause cannot be addressed by study-level corrective actions, no matter how many studies implement them independently. If the root cause is the absence of a site-level eligibility documentation SOP, then writing individual corrective action responses to four different sponsors -- each promising study-specific retraining -- addresses the symptom four times while addressing the cause zero times. Per ICH E6(R3), Annex 1, Section 3.11.1, quality assurance includes implementing risk-based strategies to enable corrective and preventive actions for serious noncompliance. Section 3.12.2 further requires root cause analysis when significant noncompliance is identified. The RC's job is to ensure that when findings share a root cause, the corrective action addresses the root cause once, at the site level, rather than the symptom repeatedly, at the study level.
Why "retrain staff" is almost never a sufficient root cause response
Retraining is the most common corrective action in clinical research -- and the least effective for systemic findings. Retraining assumes the root cause is a knowledge deficit. But when the same finding appears across multiple studies, multiple CRCs, and multiple time periods, the evidence indicates that the cause is not what people know but how the system is designed. Retraining a CRC on eligibility documentation does not change the fact that the documentation workflow is fragmented, the tools are inconsistent, and no verification checkpoint exists. Section 3.12.2 of ICH E6(R3) requires that root cause analysis identify the actual cause -- and for systemic findings, the actual cause is almost never "insufficient training."
Applying root cause analysis to a portfolio finding
The following case study presents the scenario introduced at the start of this lesson -- four monitoring visits surfacing the same eligibility documentation finding -- and asks you to work through the analysis from pattern recognition through root cause identification.
Case Study
"One finding or four?"
Clinical Research · Intermediate · 10-15 minutes
Scenario
Helen Marchetti, Senior Regulatory Coordinator at Riverside Medical Center, has received monitoring visit reports from four different sponsors during the same quarter. Each report identifies a variation of the same finding: incomplete source documentation for eligibility criteria verification. The specifics vary by study -- one report cites missing lab values confirming inclusion criteria, another cites undocumented medical history verification for an exclusion criterion, a third cites missing investigator attestation of eligibility, and a fourth cites an eligibility worksheet with two of eight criteria left blank.
Danielle Park, a CRA from one of the sponsoring CROs, mentions during a follow-up call that she has seen the same pattern at her other monitored sites -- but only when the site lacks a standardized eligibility documentation process. She notes that her other sites with site-level eligibility SOPs have significantly fewer findings of this type.
Helen reviews the last 12 months of monitoring visit reports and confirms the pattern: eligibility documentation findings have appeared in 9 of the site's 22 active studies, across all three therapeutic areas, involving seven different CRCs. Study-level corrective actions (retraining) were implemented for three of these studies six months ago. All three studies have generated the same finding again in subsequent monitoring visits.
The challenge:
Conduct a root cause analysis that determines (1) whether these are systemic or independent findings, (2) what the root cause is, and (3) why "retrain CRCs on eligibility documentation" is an inadequate corrective action.
Analysis
Systemic determination: The evidence decisively indicates a systemic root cause. The finding appears in 9 of 22 studies (41% of the portfolio), spans all therapeutic areas and multiple sponsors, involves seven different CRCs (not one individual), and persists after study-level retraining. Per the pattern indicators, every signal points to a site-level system failure rather than independent study-level errors.
5-Whys analysis to root cause: (1) CRCs are not consistently completing eligibility documentation. (2) The documentation process is separate from the clinical screening workflow -- it is performed retrospectively rather than in real time. (3) Each study uses a different documentation tool with no site-level standard. (4) The site's study start-up process does not include eligibility workflow design -- documentation approaches are determined ad hoc by each study team. (5) The site has no standard operating procedure governing eligibility documentation methodology across its portfolio. The root cause is a structural absence: no site-level eligibility documentation SOP and no standardized workflow embedded in the screening process.
Why retraining is inadequate: Retraining addresses the proximate cause (CRC knowledge) but not the root cause (process design). The evidence confirms this: three studies implemented retraining six months ago and generated the same finding again. Retraining failed because the problem was never a knowledge deficit -- every CRC knows what eligibility criteria are. The problem is that the documentation workflow is fragmented, retrospective, and study-specific, making incomplete documentation the predictable result of the system design regardless of what CRCs know. Per Section 3.12.2, the corrective action must address the root cause: the site needs a standardized eligibility documentation SOP with an integrated real-time verification workflow, a consistent documentation tool across all studies, and a pre-enrollment checkpoint that prevents enrollment confirmation before eligibility documentation is complete.
Check your understanding
A regulatory coordinator reviews monitoring visit reports and discovers that four studies at the site have been flagged for the same finding -- incomplete source documentation for eligibility verification. The finding involves different CRCs across different therapeutic areas, and study-level retraining was implemented for two of these studies six months ago without resolving the issue. Based on the cross-study pattern indicators, which determination is best supported by this evidence?
Key takeaways
Root cause analysis is the discipline that separates corrective actions that work from corrective actions that are merely documented. Three principles should anchor every analysis the RC conducts at the portfolio level.
First, the depth of analysis determines the durability of the corrective action. Corrective actions aimed at proximate causes -- retraining, reminders, increased oversight -- produce temporary improvements that decay as soon as attention moves elsewhere. Corrective actions aimed at root causes -- process redesign, workflow integration, system-level controls -- produce lasting change because they alter the conditions that generated the finding rather than the behavior of the individuals who encountered those conditions.
Second, cross-study patterns are diagnostic. When the same finding appears across multiple studies, CRCs, and sponsors, the pattern itself is evidence of a systemic root cause. The RC who treats each instance as an independent finding is missing the signal that the site's systems are generating. Section 3.12.2 of ICH E6(R3) does not limit root cause analysis to individual noncompliance events; it applies to patterns that indicate systemic deficiencies.
Third, "retrain staff" is a hypothesis, not a conclusion. Retraining is an appropriate corrective action when root cause analysis confirms that a knowledge deficit is the actual root cause. But when the evidence shows that trained staff produce the same finding across different studies, the root cause is not knowledge -- it is process design. The RC's obligation, per Section 3.11.1, is to pursue corrective actions that address the actual cause. And that begins with an analysis rigorous enough to find it.
The next lesson will build directly on this foundation. Once the root cause is identified, the question becomes: how do you design a corrective action that actually corrects? Module 3, Lesson 2 covers the distinction between corrections and corrective actions, and the design principles that determine whether a corrective action addresses the root cause or merely addresses the symptom under a more formal name.
Looking ahead to Lesson 2
This lesson taught you to identify root causes. The next lesson -- designing corrective actions -- will teach you to design interventions that address those root causes with the specificity, scope, and accountability required to produce lasting change. The difference between a root cause analysis that leads to effective CAPA and one that leads to another round of retraining is the quality of the corrective action design.
Root cause analysis for regulatory findings: getting past the symptom to the system failure
Teaches structured root cause analysis methodologies that drive past proximate causes to systemic process and design failures across multiple studies -- because corrective actions aimed at symptoms guarantee recurrence.
One finding or four?
Four monitoring visits. Four different studies. Four different sponsors. And the same observation appears in each visit report: "Source documentation for eligibility criteria is incomplete." The CRAs each write it up as a finding in their respective monitoring reports. The investigators each receive a follow-up letter. And at most sites, what happens next is four separate responses -- one per study -- each promising to "retrain staff on eligibility documentation requirements."
This is how most sites handle the situation. And it is, with respect, the wrong approach.
The RC who encounters the same finding across four independent studies is not looking at four problems. The RC is looking at one problem that has manifested four times. The question that matters is not "how do we fix the eligibility documentation in Study A, Study B, Study C, and Study D?" The question is: what is it about the way this site documents eligibility -- the process, the tools, the training design, the workflow architecture -- that produces this result regardless of which study, which CRC, or which sponsor is involved?
That question is the beginning of root cause analysis. And until it is answered, every corrective action the site undertakes will address symptoms while leaving the system that created them undisturbed.
What you will learn
By the end of this lesson, you will be able to:
1
Apply structured root cause analysis methodologies (5-Whys, Ishikawa/fishbone, fault tree analysis) to portfolio-level regulatory findings, driving past proximate causes to identify systemic process and design failures
2
Distinguish between proximate causes, contributing causes, and root causes in the context of regulatory findings that span multiple studies
3
Analyze cross-study patterns to determine whether apparently independent findings share a common systemic root cause
The proximate-cause trap
When a finding appears in a monitoring visit report, the instinct -- and I say this without judgment, because it is a near-universal instinct -- is to identify the most immediate explanation and address it. Eligibility documentation is incomplete? The CRC did not complete the worksheet. SAE reporting was late? The coordinator did not notice the timeline. Delegation log is outdated? Nobody remembered to update it.
These are proximate causes. They answer the question "what went wrong?" at the shallowest possible level. And they are almost always true -- the CRC really did not complete the worksheet, the coordinator really did not notice the timeline. But proximate causes share a defining characteristic: they describe individual failures within a system rather than the system conditions that made those failures predictable.
ICH E6(R3), Annex 1, Section 3.12.2 addresses this directly. When significant noncompliance is identified, the guideline requires that root cause analysis be conducted and that corrective and preventive actions follow. The operative phrase is "root cause" -- not "immediate cause," not "obvious cause," not "the explanation that first comes to mind." The guideline demands that the analysis reach the level of the system.
Three levels of causation in regulatory findings
Proximate cause: The immediate error or omission. "The CRC did not complete the eligibility documentation worksheet." This is the visible event -- the thing that shows up in the monitoring report. Proximate causes describe what happened.
Contributing cause: The environmental or workload condition that made the proximate cause more likely. "The CRC was managing screening visits for three studies simultaneously, and the eligibility worksheet is a separate paper form stored in a different location from the screening assessments." Contributing causes describe the conditions under which the error occurred.
Root cause: The systemic process or design failure that allowed the contributing and proximate causes to exist. "The site has no integrated eligibility verification workflow -- each study uses its own documentation tool, there is no checkpoint requiring eligibility confirmation before enrollment, and training addresses the content of eligibility criteria but not the documentation process." Root causes describe why the system permitted the failure to happen.
The distinction is not merely academic. It determines the effectiveness of every corrective action the site undertakes. A corrective action aimed at a proximate cause -- "retrain the CRC on eligibility documentation" -- addresses a person, not a process. It assumes the CRC failed because of insufficient knowledge. But if the root cause is a fragmented documentation workflow with no process checkpoint, retraining will not change the outcome. The next CRC managing three simultaneous screening visits will face the same systemic conditions and produce the same result. I have watched this cycle repeat at site after site, and the signature of the proximate-cause trap is always the same: "We retrained staff on this last quarter. I do not understand why it is happening again."
The RC who hears that sentence should recognize it not as evidence of staff deficiency but as evidence of inadequate root cause analysis. Recurrence after retraining is a diagnostic signal. It tells the RC that the original analysis stopped too soon.
The 5-Whys: structured depth for portfolio findings
The 5-Whys method is deceptively simple. You state the finding, ask "why?" and then ask "why?" again for each answer, continuing until you reach a cause that describes a systemic process or design failure rather than an individual action. The name suggests five iterations, but the actual number varies -- sometimes three are sufficient, sometimes seven are required. The point is not the count. The point is to keep asking until the answer describes a system condition rather than a person's behavior.
Applied to portfolio-level findings, the 5-Whys has a specific virtue: it forces the RC to move past study-specific explanations toward site-level systemic conditions. Here is how it works in practice, using the eligibility documentation finding that appeared across four studies.
Finding: Source documentation for eligibility criteria is incomplete across four active studies (identified by four independent monitoring visits during the same quarter).
Why? CRCs in all four studies are not consistently completing the eligibility documentation at the time of screening assessments.
Why? The eligibility documentation process is separate from the screening assessment workflow -- CRCs complete screening assessments in real time during the visit, but eligibility documentation is a post-visit task performed from memory or partial notes.
Why? The site has no integrated eligibility verification workflow. Each study uses a different documentation tool (two use sponsor-provided worksheets, one uses a site-designed checklist, one has no structured tool), and none are embedded in the screening visit workflow.
Why? The site's training program for new studies covers eligibility criteria content -- what the criteria are and how to assess them -- but does not address documentation workflow design. There is no site-level SOP for eligibility documentation process; the process is left to each study team to determine independently.
Why? The site has never established a standard operating procedure for eligibility documentation across its portfolio. Each study's documentation approach was designed ad hoc during study start-up, without reference to a site-level standard.
That fifth answer is the root cause. It describes a systemic absence -- no site-level SOP, no standardized workflow, no portfolio-wide documentation design -- that made the proximate cause (CRCs not completing documentation) predictable rather than anomalous. And notice: the root cause has nothing to do with any individual CRC's competence or diligence. It describes a design failure in the site's operational infrastructure.
Figure 1: 5-Whys analysis applied to a portfolio-level eligibility documentation finding
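The chain worked through above can be captured as a small ordered structure. A minimal sketch, in Python, where the system-condition flag is assigned by the analyst (it is a judgment, not something the code infers):

```python
# Toy representation of the 5-Whys chain above. The boolean marks answers
# that describe a system condition (process or design) rather than a
# person's action -- the analyst assigns it; nothing here is automated.
chain = [
    ("CRCs not completing documentation at screening", False),
    ("Documentation is a post-visit task, separate from the visit", False),
    ("No integrated workflow; each study uses a different tool", True),
    ("Training covers criteria content, not documentation workflow", True),
    ("No site-level SOP for eligibility documentation exists", True),
]

def deepest_system_condition(chain):
    """Return the deepest answer flagged as a system condition, or None
    if the analysis has not yet reached one."""
    for answer, is_system in reversed(chain):
        if is_system:
            return answer
    return None

print(deepest_system_condition(chain))
# -> No site-level SOP for eligibility documentation exists
```

A chain that ends at "The CRC forgot" returns None — a reminder that the analysis stopped too soon.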
Knowing when you have arrived
A common difficulty with the 5-Whys is knowing when to stop. Two signs indicate the analysis has reached a root cause. First, the answer describes a system condition -- a process design, a resource allocation, a structural absence -- rather than a person's action or decision. "No site-level SOP exists" is a system condition. "The CRC forgot" is a person's action. Second, the answer is something the site has the authority and ability to change at the organizational level. If the final answer is "the FDA does not require a standardized eligibility documentation format," the analysis has gone past the root cause into a regulatory context the site cannot control. The root cause should describe something the site's own systems failed to provide.
A practical caution: the 5-Whys is powerful for linear causal chains but can oversimplify findings with multiple contributing factors. When a finding has branching causes -- when the answer to "why?" is genuinely "for three different reasons" -- the 5-Whys must be applied to each branch independently. And for findings where multiple system failures interact, more structured methods are necessary.
Ishikawa/fishbone analysis: mapping the systemic terrain
Where the 5-Whys provides depth along a single causal chain, the Ishikawa diagram -- also called a fishbone diagram, for reasons that become obvious when you see one -- provides breadth across multiple cause categories. It is particularly valuable for portfolio-level regulatory findings where the root cause may not be a single failure but a convergence of failures across different operational domains.
The Ishikawa diagram places the finding at the head of the "fish." Six cause categories extend as branches (the "bones"): People, Process, Policy, Systems/Technology, Environment, and Measurement. For each category, the RC asks: "What failures in this domain contributed to the finding?" The result is a comprehensive map of every systemic factor that permitted the finding to occur.
Applied to the eligibility documentation finding, the Ishikawa analysis reveals causes that a linear 5-Whys might miss.
Figure 2: Ishikawa diagram adapted for portfolio-level regulatory quality analysis
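As a data structure, the six-branch diagram is simply a mapping from cause category to evidence-backed sub-causes. A minimal sketch, with illustrative sub-causes drawn from the eligibility example rather than a complete analysis:

```python
# Illustrative Ishikawa structure for the eligibility finding. Sub-causes
# are examples consistent with the narrative, not an exhaustive analysis;
# in practice every entry must be backed by evidence.
ishikawa = {
    "People": ["Documentation completed from memory after the visit"],
    "Process": ["Eligibility documentation separate from screening workflow"],
    "Policy": ["No site-level SOP for eligibility documentation"],
    "Systems/Technology": ["Four studies, three different documentation tools"],
    "Environment": ["CRCs managing simultaneous screening visits"],
    "Measurement": ["No ongoing documentation-completeness metric"],
}

# A proposed corrective action plan can be checked for breadth:
# which branches does it leave unaddressed?
plan_touches = {"People"}  # e.g., "retrain staff"
unaddressed = sorted(set(ishikawa) - plan_touches)
print(unaddressed)
# -> ['Environment', 'Measurement', 'Policy', 'Process', 'Systems/Technology']
```

The breadth check makes the lesson's point mechanically: a retraining-only plan touches one branch and leaves five untouched.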
The value of this broader view is that it reveals how many systems must change for the finding to be truly resolved. Retraining addresses only the People branch -- and only partially, since the training design itself is a Process and Policy issue. A corrective action that redesigns the workflow, standardizes the documentation tool, introduces a pre-enrollment checkpoint, and establishes an ongoing completeness metric addresses all six branches. It is more work. It is also the only approach that will actually resolve the finding.
I want to be direct about something: the Ishikawa diagram is not a creative exercise. Every sub-cause entered on the diagram must be supported by evidence -- from the gap analysis, the monitoring visit reports, the staff interviews, or the process observations conducted in Module 2. The RC who fills in an Ishikawa diagram with hypothetical causes is generating speculation, not conducting root cause analysis. Section 3.10.1.1 of ICH E6(R3) establishes that risk identification should be systematic and based on evidence. The same standard applies here.
Fault tree analysis: when the question is "what had to fail?"
The third methodology -- fault tree analysis -- approaches root cause from a different angle. Instead of asking "why did this happen?" it asks "what conditions had to be present for this outcome to occur?" The fault tree starts with the undesired event at the top and works downward through logic gates (AND/OR) to identify the combinations of failures that produced the outcome.
For portfolio-level regulatory findings, fault tree analysis is especially useful when the RC suspects that the finding resulted from multiple independent failures coinciding. Using an OR gate means any single failure could produce the outcome; using an AND gate means multiple failures had to coincide. For the eligibility documentation finding, a simplified fault tree might show that incomplete documentation occurs when (training gap OR workflow design failure) AND (no pre-enrollment checkpoint) AND (no ongoing completeness metric). All three AND-conditions had to be present simultaneously. Removing any one of them -- establishing a checkpoint, creating a metric, or fixing the training -- would have prevented or detected the finding.
This insight is operationally powerful because it identifies multiple intervention points. The RC does not have to fix everything at once. The fault tree reveals which single intervention would break the causal chain most effectively -- which is often the one with the highest reliability and lowest implementation complexity.
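The simplified fault tree described above reduces to a single Boolean expression. A sketch, with hypothetical condition names standing in for the site's actual failure conditions:

```python
# Simplified fault tree from the text: the top event occurs when
# (training gap OR workflow design failure) AND no pre-enrollment
# checkpoint AND no completeness metric. Names are illustrative.
def documentation_incomplete(training_gap, workflow_failure,
                             no_checkpoint, no_metric):
    return (training_gap or workflow_failure) and no_checkpoint and no_metric

# All AND-conditions present: the finding occurs.
print(documentation_incomplete(True, True, True, True))   # -> True

# Adding a pre-enrollment checkpoint alone breaks the chain, even though
# the training gap and workflow failure remain uncorrected.
print(documentation_incomplete(True, True, False, True))  # -> False
```

The second call is the operational insight in code: one intervention on an AND-condition prevents the top event, which is why the checkpoint is often the highest-leverage fix.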
Choosing the right method
No single root cause analysis method is universally superior. A practical approach: start with the 5-Whys to establish causal depth, then apply the Ishikawa diagram if the 5-Whys reveals that multiple categories of failure contributed. Reserve fault tree analysis for complex findings where the RC needs to identify the minimum set of interventions that would break the causal chain. For most portfolio-level regulatory findings, a 5-Whys analysis supplemented by an Ishikawa diagram provides sufficient depth and breadth for effective corrective action design.
Cross-study pattern analysis: the RC's distinctive contribution
Here is where root cause analysis at the portfolio level diverges most sharply from study-level deviation investigation. A CRC investigating a deviation in a single study asks: "What went wrong in this study?" The RC conducting root cause analysis across the portfolio asks a fundamentally different question: "Is the same thing going wrong in multiple studies, and if so, what does that pattern tell me about the site's systems?"
Cross-study pattern analysis is the RC's distinctive analytical contribution -- the work that no one else at the site is positioned to perform. The investigator sees one study. The CRC sees one or two studies. The sponsor's CRA sees one study at this site. Only the RC has visibility across the entire portfolio, and only the RC can detect the patterns that reveal systemic causes.
The analytical process has three steps. First, aggregate findings from all available sources: monitoring visit reports across studies, gap analysis results, mock inspection observations, and quality metric trends. Second, classify findings by type -- not by study, but by the nature of the finding. "Incomplete eligibility documentation" in Study A and "missing inclusion/exclusion verification" in Study B may use different language but describe the same underlying process failure. Third, look for findings that appear in three or more studies, findings that appear across different therapeutic areas or sponsor relationships, and findings that persist despite study-specific corrective actions.
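The classify-and-count steps can be sketched in a few lines. The records below are hypothetical, and the finding-type labels represent the analyst's normalization of the monitors' varying language:

```python
from collections import defaultdict

# Hypothetical normalized finding records: (study_id, finding_type).
# "Missing I/E verification" in Study B has been classified under the
# same type as "incomplete eligibility documentation" elsewhere.
findings = [
    ("Study A", "eligibility_documentation"),
    ("Study B", "eligibility_documentation"),
    ("Study C", "eligibility_documentation"),
    ("Study D", "eligibility_documentation"),
    ("Study B", "consent_version_control"),
]

studies_by_type = defaultdict(set)
for study, finding_type in findings:
    studies_by_type[finding_type].add(study)

# Finding types appearing in three or more studies are systemic candidates.
systemic_candidates = [t for t, s in studies_by_type.items() if len(s) >= 3]
print(systemic_candidates)  # -> ['eligibility_documentation']
```

The hard analytical work is the classification step, not the counting: deciding that two differently worded observations describe the same underlying process failure is a judgment no script can make.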
Reference Table: Indicators that distinguish systemic from independent findings

Indicator | Suggests systemic root cause | Suggests independent causes
Frequency across portfolio | Same finding type in 3+ studies | Finding isolated to one study
Persistence after correction | Recurs after study-level retraining or correction | Resolves after study-specific intervention
Staff involvement | Occurs across multiple CRCs, not just one individual | Traceable to one specific team member
Sponsor independence | Appears regardless of sponsor, CRO, or protocol complexity | Correlated with a specific sponsor's requirements
Temporal pattern | Emerged gradually as site grew, not linked to a specific event | Appeared after a specific procedural change or staff departure
Process dependency | Occurs at a consistent process step (e.g., screening, enrollment, closeout) | Occurs at different process steps in each study
When the evidence points to a systemic root cause, the implication for corrective action design is profound. A systemic root cause cannot be addressed by study-level corrective actions, no matter how many studies implement them independently. If the root cause is the absence of a site-level eligibility documentation SOP, then writing individual corrective action responses to four different sponsors -- each promising study-specific retraining -- addresses the symptom four times while addressing the cause zero times. Per ICH E6(R3), Annex 1, Section 3.11.1, quality assurance includes implementing risk-based strategies to enable corrective and preventive actions for serious noncompliance. Section 3.12.2 further requires root cause analysis when significant noncompliance is identified. The RC's job is to ensure that when findings share a root cause, the corrective action addresses the root cause once, at the site level, rather than the symptom repeatedly, at the study level.
Why "retrain staff" is almost never a sufficient root cause response
Retraining is the most common corrective action in clinical research -- and the least effective for systemic findings. Retraining assumes the root cause is a knowledge deficit. But when the same finding appears across multiple studies, multiple CRCs, and multiple time periods, the evidence indicates that the cause is not what people know but how the system is designed. Retraining a CRC on eligibility documentation does not change the fact that the documentation workflow is fragmented, the tools are inconsistent, and no verification checkpoint exists. Section 3.12.2 of ICH E6(R3) requires that root cause analysis identify the actual cause -- and for systemic findings, the actual cause is almost never "insufficient training."
Applying root cause analysis to a portfolio finding
The following case study presents the scenario introduced at the start of this lesson -- four monitoring visits surfacing the same eligibility documentation finding -- and asks you to work through the analysis from pattern recognition through root cause identification.
Case Study
"One finding or four?"
Clinical Research · Intermediate · 10-15 minutes
Scenario
Helen Marchetti, Senior Regulatory Coordinator at Riverside Medical Center, has received monitoring visit reports from four different sponsors during the same quarter. Each report identifies a variation of the same finding: incomplete source documentation for eligibility criteria verification. The specifics vary by study -- one report cites missing lab values confirming inclusion criteria, another cites undocumented medical history verification for an exclusion criterion, a third cites missing investigator attestation of eligibility, and a fourth cites an eligibility worksheet with two of eight criteria left blank.
Danielle Park, a CRA from one of the sponsoring CROs, mentions during a follow-up call that she has seen the same pattern at her other monitored sites -- but only when the site lacks a standardized eligibility documentation process. She notes that her other sites with site-level eligibility SOPs have significantly fewer findings of this type.
Helen reviews the last 12 months of monitoring visit reports and confirms the pattern: eligibility documentation findings have appeared in 9 of the site's 22 active studies, across all three therapeutic areas, involving seven different CRCs. Study-level corrective actions (retraining) were implemented for three of these studies six months ago. All three studies have generated the same finding again in subsequent monitoring visits.
The challenge:
Conduct a root cause analysis that determines (1) whether these are systemic or independent findings, (2) what the root cause is, and (3) why "retrain CRCs on eligibility documentation" is an inadequate corrective action.
Analysis
Systemic determination: The evidence decisively indicates a systemic root cause. The finding appears in 9 of 22 studies (41% of the portfolio), spans all therapeutic areas and multiple sponsors, involves seven different CRCs (not one individual), and persists after study-level retraining. Per the pattern indicators, every signal points to a site-level system failure rather than independent study-level errors.
5-Whys analysis to root cause: (1) CRCs are not consistently completing eligibility documentation. (2) The documentation process is separate from the clinical screening workflow -- it is performed retrospectively rather than in real time. (3) Each study uses a different documentation tool with no site-level standard. (4) The site's study start-up process does not include eligibility workflow design -- documentation approaches are determined ad hoc by each study team. (5) The site has no standard operating procedure governing eligibility documentation methodology across its portfolio. The root cause is a structural absence: no site-level eligibility documentation SOP and no standardized workflow embedded in the screening process.
Why retraining is inadequate: Retraining addresses the proximate cause (CRC knowledge) but not the root cause (process design). The evidence confirms this: three studies implemented retraining six months ago and generated the same finding again. Retraining failed because the problem was never a knowledge deficit -- every CRC knows what eligibility criteria are. The problem is that the documentation workflow is fragmented, retrospective, and study-specific, making incomplete documentation the predictable result of the system design regardless of what CRCs know. Per Section 3.12.2, the corrective action must address the root cause: the site needs a standardized eligibility documentation SOP with an integrated real-time verification workflow, a consistent documentation tool across all studies, and a pre-enrollment checkpoint that prevents enrollment confirmation before eligibility documentation is complete.
Check your understanding
A regulatory coordinator reviews monitoring visit reports and discovers that four studies at the site have been flagged for the same finding -- incomplete source documentation for eligibility verification. The finding involves different CRCs across different therapeutic areas, and study-level retraining was implemented for two of these studies six months ago without resolving the issue. Based on the cross-study pattern indicators, which determination is best supported by this evidence?
Key takeaways
Root cause analysis is the discipline that separates corrective actions that work from corrective actions that are merely documented. Three principles should anchor every analysis the RC conducts at the portfolio level.
First, the depth of analysis determines the durability of the corrective action. Corrective actions aimed at proximate causes -- retraining, reminders, increased oversight -- produce temporary improvements that decay as soon as attention moves elsewhere. Corrective actions aimed at root causes -- process redesign, workflow integration, system-level controls -- produce lasting change because they alter the conditions that generated the finding rather than the behavior of the individuals who encountered those conditions.
Second, cross-study patterns are diagnostic. When the same finding appears across multiple studies, CRCs, and sponsors, the pattern itself is evidence of a systemic root cause. The RC who treats each instance as an independent finding is missing the signal that the site's systems are generating. Section 3.12.2 of ICH E6(R3) does not limit root cause analysis to individual noncompliance events; it applies to patterns that indicate systemic deficiencies.
Third, "retrain staff" is a hypothesis, not a conclusion. Retraining is an appropriate corrective action when root cause analysis confirms that a knowledge deficit is the actual root cause. But when the evidence shows that trained staff produce the same finding across different studies, the root cause is not knowledge -- it is process design. The RC's obligation, per Section 3.11.1, is to pursue corrective actions that address the actual cause. And that begins with an analysis rigorous enough to find it.
The next lesson will build directly on this foundation. Once the root cause is identified, the question becomes: how do you design a corrective action that actually corrects? Module 3, Lesson 2 covers the distinction between corrections and corrective actions, and the design principles that determine whether a corrective action addresses the root cause or merely addresses the symptom under a more formal name.
Looking ahead to Lesson 2
This lesson taught you to identify root causes. The next lesson -- designing corrective actions -- will teach you to design interventions that address those root causes with the specificity, scope, and accountability required to produce lasting change. The difference between a root cause analysis that leads to effective CAPA and one that leads to another round of retraining is the quality of the corrective action design.