A CDISC standard that defines the structure and content of analysis-ready datasets derived from SDTM data, supporting efficient generation of statistical analyses and displays for regulatory submissions.
The Analysis Data Model extends the CDISC framework beyond raw data tabulation to address the analysis-ready datasets used for statistical programming and regulatory reporting. While SDTM captures data as collected, ADaM datasets are derived datasets optimized for analysis, containing derived variables, analysis flags, and structures that directly support the statistical analyses specified in the Statistical Analysis Plan. This separation of tabulation and analysis data maintains traceability while providing efficient datasets for programming tables, listings, and figures.
ADaM defines several fundamental dataset types and structures. The Subject-Level Analysis Dataset (ADSL) contains one record per subject, with variables summarizing subject characteristics, treatment assignment, and disposition. Basic Data Structure (BDS) datasets contain one or more records per subject per analysis parameter per analysis time point, supporting analyses of endpoints with repeated measurements. Time-to-Event datasets, a BDS variation, contain one record per subject per analysis parameter for survival analyses. Occurrence Data Structure datasets support analysis of adverse events, concomitant medications, and other occurrence data. Each structure specifies required variables and naming conventions that ensure consistency across studies.
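As a rough illustration of these structures, the following Python sketch (using pandas) contrasts a one-record-per-subject ADSL with a BDS-structured vital signs dataset. The dataset and variable names (ADSL, ADVS, USUBJID, TRT01P, SAFFL, ITTFL, PARAMCD, PARAM, AVISIT, AVAL) follow common ADaM naming conventions, while the study identifiers and values are invented for illustration:

```python
import pandas as pd

# Subject-Level Analysis Dataset (ADSL): exactly one record per subject,
# carrying treatment assignment and analysis population flags.
adsl = pd.DataFrame({
    "USUBJID": ["STUDY01-001", "STUDY01-002"],
    "TRT01P":  ["Drug A", "Placebo"],   # planned treatment
    "SAFFL":   ["Y", "Y"],              # safety population flag
    "ITTFL":   ["Y", "Y"],              # intent-to-treat population flag
})

# Basic Data Structure (BDS): one record per subject per analysis parameter
# per analysis visit, identified by PARAMCD/PARAM and AVISIT.
advs = pd.DataFrame({
    "USUBJID": ["STUDY01-001"] * 3 + ["STUDY01-002"] * 3,
    "PARAMCD": ["SYSBP"] * 6,
    "PARAM":   ["Systolic Blood Pressure (mmHg)"] * 6,
    "AVISIT":  ["Baseline", "Week 4", "Week 8"] * 2,
    "AVAL":    [140.0, 132.0, 128.0, 150.0, 147.0, 149.0],
})

# Subject-level variables are typically merged from ADSL into each analysis
# dataset rather than re-derived there.
advs = advs.merge(adsl[["USUBJID", "TRT01P", "ITTFL"]], on="USUBJID")
print(advs.head())
```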
The relationship between SDTM and ADaM is foundational to regulatory submission integrity. ADaM datasets must be traceable to their SDTM sources, meaning that any derived variable or analysis population flag can be verified against the underlying tabulation data. This traceability is documented through ADaM analysis dataset specifications and define files that describe the derivation of each variable. Regulatory reviewers rely on this traceability to verify that analysis results accurately reflect the collected data, making clear documentation of SDTM-to-ADaM mapping essential for submission success.
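A much-simplified, hypothetical rendering of that variable-level metadata is sketched below in Python. The specific source variables (e.g., VS.VSSTRESN) and derivation text are illustrative only; real submissions carry this information in the ADaM specification and the Define-XML metadata rather than in code:

```python
# Illustrative (hypothetical) variable-level traceability metadata: each ADaM
# variable is tied back to its SDTM source and the logic used to derive it.
adam_spec = [
    {"dataset": "ADVS", "variable": "AVAL",
     "source": "VS.VSSTRESN",
     "derivation": "Copy of the standardized numeric result"},
    {"dataset": "ADVS", "variable": "BASE",
     "source": "VS.VSSTRESN",
     "derivation": "Last non-missing AVAL on or before first dose (ABLFL = 'Y')"},
    {"dataset": "ADVS", "variable": "CHG",
     "source": "Derived",
     "derivation": "AVAL - BASE for post-baseline records (SAP Section x.x)"},
]

for row in adam_spec:
    print(f"{row['dataset']}.{row['variable']:<5} <- {row['source']:<12} : {row['derivation']}")
```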
Efficacy analysis
"The ADaM Basic Data Structure dataset for the primary efficacy endpoint contained one record per subject per visit with the derived change from baseline value and analysis population flags, directly supporting the primary analysis specified in the Statistical Analysis Plan."
Traceability documentation
"The ADaM specification document detailed the derivation of each variable in the analysis datasets, including source SDTM variables, transformation logic, and references to the Statistical Analysis Plan sections where each variable was used."
A secure, computer-generated, time-stamped electronic record that automatically captures the creation, modification, or deletion of data, including the identity of the operator and the date and time of the action.
An international nonprofit organization that develops and supports global data standards for clinical research, enabling consistent and efficient exchange of clinical trial information.
The process of detecting, correcting, and resolving inaccurate, incomplete, or inconsistent data in the clinical trial database to ensure data quality and reliability for analysis.
The degree to which data are complete, consistent, accurate, trustworthy, and reliable throughout the data lifecycle.
The formal process of making the clinical trial database unmodifiable once all data have been entered, reviewed, cleaned, and verified, marking the transition from data collection to statistical analysis.