Saturday, June 14, 2025

Managing Pregnancies in Clinical Trials: Regulatory Guidance and Best Practices

Clinical research historically has excluded pregnant women, creating major data gaps. Today, less than 1% of trial participants are pregnant, and most approved drugs lack pregnancy safety data. There are various reasons pregnant women are often excluded from clinical trials: 
  • Scientific and medical concerns: potential fetal harm, unknown pharmacokinetics and dosing, complex mother-fetus risk trade-offs, and impact on maternal health.
  • Religious or cultural considerations: emphasis on fetal protection, paternalistic IRB attitudes, and social norms around participation.
  • Liability and legal concerns: fear of fetal-harm lawsuits, insurance and compensation issues, regulatory ambiguity, and cascading regulatory costs.
Regulators such as the US FDA now emphasize a balanced approach – gathering information on drug use in pregnancy by allowing inclusion when appropriate, while protecting fetal safety. FDA guidance and recent legislation stress that trials should broadly reflect real-world populations, with any exclusions (like pregnancy) justified by safety and scientific rationale. In fact, pregnancy is explicitly identified as a historically underrepresented group in FDA diversity initiatives. This shift in thinking – codified by laws like FDARA and FDORA and by draft FDA guidances – is intended to improve the evidence base for treating pregnant patients.

Regulatory Context and Diversity Guidance

FDA regulations and guidances set the framework for handling pregnancy in trials. The FDA’s 21 CFR Part 50 Subpart B (pregnant women, fetuses, and neonates) and 21 CFR Part 56 (IRB requirements) require heightened protections for pregnant participants. For example, FDA recommends that if a study offers benefit solely to the fetus, both the pregnant woman and her partner must consent (with narrow exceptions). Crucially, FDA guidance mandates that “each individual providing consent is fully informed regarding the reasonably foreseeable impact of the research on the fetus or neonate”. In practice, this means consent forms and discussions must spell out known pregnancy risks (and uncertainties) based on available animal or prior data. IRBs reviewing these trials must include members experienced with maternal-fetal medicine and ensure extra safeguards (per 21 CFR 56.111(a)(2) and (b)).

At a higher level, FDA’s draft guidance on pregnant women (2018) explicitly endorses “judicious inclusion of pregnant women in clinical trials” to inform safe drug use in pregnancy. Similarly, the 2020 FDA guidance on broadening trial diversity urges sponsors to continuously “broaden eligibility criteria” as safety data accrues, narrowing unnecessary exclusions. These documents reinforce that excluding pregnant women should not be automatic: instead, inclusion should be considered whenever the potential benefits outweigh risks. In summary, the regulatory message is clear: plan trials for diversity (as now required by law with Diversity Action Plans) and, where scientifically justified, include or at least carefully manage pregnant participants.

Informed Consent and Pre-Screening Procedures

Before enrollment, trials must address pregnancy risk. Protocols almost universally exclude pregnant or breastfeeding women, and require women of childbearing potential (WOCBP) to use reliable contraception and have negative pregnancy tests at screening. Per longstanding FDA advice, informed consent must cover pregnancy issues explicitly: if nonclinical reproduction studies are lacking, sponsors must “fully inform” women and advise on contraceptive measures. In practice, consent forms and discussions should include pregnancy-specific language and summarize what is (and isn’t) known about fetal risk.

  • Contraception and Testing: Investigators should ensure WOCBP agree to use effective birth control (including abstinence) and take a pregnancy test before each dosing period. FDA guidance has long recommended pregnancy tests at screening and counseling on contraception. Some trials repeat tests periodically to catch early pregnancies.

  • Risk Disclosure: Consent discussions must describe potential risks to a fetus. The FDA draft guidance stresses that subjects be informed about “reasonably foreseeable impact” of trial participation on the fetus or neonate. Even if evidence is limited, the consent should transparently explain any known animal or human data, and state unknowns. This empowers participants to make decisions with full awareness of pregnancy-related risks.

Managing Pregnancies Discovered During Trials

Despite precautions, some participants will become pregnant during a study. Best practices focus on timely detection, notification, and safety:

  • Detection: Continue pregnancy testing throughout the study (e.g. at regular visits or follow-ups). One biostatistics review notes that trials “usually perform pregnancy tests periodically” so pregnancies can be caught early.

  • Immediate Actions: If a pregnancy is identified, the standard practice is to stop the study drug to avoid further exposure. The investigator should promptly notify the sponsor and IRB, and arrange any needed medical follow-up. FDA guidance goes further: it recommends unblinding the subject’s treatment (drug vs placebo) so the woman and her physician can discuss risks based on actual exposure. The patient should then undergo a second informed-consent process that incorporates the new risk-benefit assessment. For example, if the drug may benefit the mother, continuation might be allowed if potential benefits outweigh fetal risks (with the mother’s informed agreement). Otherwise, she should be withdrawn from treatment.

  • Data and Follow-Up: Regardless of continuation, sponsors must collect detailed follow-up data. FDA guidance explicitly states that “regardless of whether the woman continues in the trial, it is important to collect and report the pregnancy outcome”. In practice, this means recording gestational age at drug exposure, duration of exposure, and all outcomes (live birth, gestational age at delivery, congenital events, etc.). The pregnant participant should receive standard obstetrical care and fetal monitoring alongside the study’s safety assessments. For example, cord blood or neonatal samples might be collected for drug levels if relevant.

  • Discontinuation and Missing Data: If the participant discontinues the study, subsequent efficacy visits are often ceased; statisticians typically handle missing data using predefined methods. Notably, many sponsors maintain a pregnancy registry or report form for study pregnancies. These registries feed into post-market surveillance (and often meet regulatory reporting requirements). Any adverse fetal outcome (miscarriage, birth defect, etc.) must be reported as a serious adverse event per 21 CFR 312.32/64.

  • Monitoring: Trials with pregnant subjects or exposures require specialized monitoring. For example, the protocol should specify involvement of obstetric/maternal-fetal experts on the safety monitoring team. Ongoing review of maternal and fetal safety signals is essential – a dedicated data monitoring committee may be warranted. In extreme cases (e.g. clear evidence of harm), the trial’s stopping criteria may trigger halting enrollment.

Reporting and IRB Notification

Pregnancy events are subject to regulatory and ethics oversight. Investigators should report any pregnancy to the IRB as soon as it is identified (often as an “unanticipated problem” in light of initial exclusion criteria). FDA guidance notes that IRBs reviewing such protocols must have the right expertise and must ensure “additional safeguards” are in place for pregnant subjects. For example, IRBs should verify that consent materials address pregnancy, that medical backup (e.g. obstetric care) is arranged, and that procedures (e.g. prohibition of termination inducements) are followed.

Sponsors, in turn, must follow safety-reporting rules. Under FDA IND regulations, any pregnancy with drug exposure is reported to FDA (particularly if it results in a serious fetal or neonatal outcome). The IRB and health authorities should be kept informed according to institutional policies. In practice, many trials use a structured form or registry entry to document trial pregnancies and outcomes. This ensures timely communication of safety information to all stakeholders.

Ethical Considerations of Inclusion vs. Exclusion

The exclusion of pregnant women raises ethical issues. Traditionally, fear of fetal harm led to a protective approach, but contemporary ethicists argue that systematic exclusion often causes more harm. As one commentary notes, refusing to study drugs in pregnancy “merely shifts risk to the clinical context” – doctors and patients then must decide on therapies with no evidence, which is “hardly an ethical approach”. High-profile cases (e.g. thalidomide) illustrate the dangers of not studying drugs in pregnancy. In fact, a commissioned analysis found no liability cases from including pregnant women in trials, whereas many lawsuits have arisen from unforeseen drug harms in pregnant patients after approval.

Conversely, including pregnant participants – with proper precautions – yields direct benefits. It allows rigorous collection of safety/efficacy data in a controlled setting, reducing uncertainty. Experts stress it is a humane and scientific responsibility to prioritize pregnant women’s inclusion when possible. Denying them evidence essentially “denies pregnant women—and their healthcare providers—the evidence necessary to make informed decisions”. Modern ethical guidance and FDA policy now urge balancing fetal protection with the pregnant woman’s health needs, rather than default exclusion.

Recommendations and Future Directions

Given this landscape, sponsors and investigators can follow several best practices:

  • Education and Consent: Develop clear, patient-friendly consent materials that discuss pregnancy risks and emphasize the importance of contraception for WOCBP. Train staff to discuss these issues openly.

  • Safety Monitoring: Include pregnancy in safety monitoring plans. Engage obstetric/maternal-fetal medicine consultants in trial planning and oversight.

  • Trial Design: Where feasible, design trials to allow pregnant participants (or planned pregnancy cohorts) for conditions that affect women of childbearing age. FDA’s 2018 draft guidance and recent NIH recommendations encourage such trials for pregnancy-specific or relevant indications.

  • Community Engagement: Build partnerships with obstetric care providers and clinics. Embedding trial activities in OB settings can greatly improve recruitment and trust among pregnant patients.

  • Regulatory Planning: Anticipate the need for pregnancy considerations in regulatory submissions. Under the Pregnancy and Lactation Labeling Rule (PLLR), FDA expects clear labeling of pregnancy data (or lack thereof). Sponsors should be prepared to update labels as new pregnancy data emerge.

  • Continued Advocacy: Engage with regulatory agencies. FDA’s task forces and guidance initiatives (e.g. on diversity action plans) reflect ongoing shifts. Industry input can help shape policies that balance scientific goals with patient safety.

In summary, managing trial pregnancies requires a structured approach: robust informed consent, vigilant monitoring, regulatory reporting, and ethical reflection. With these measures, sponsors can protect participants while generating the much-needed data on drug safety in pregnancy. As one expert put it, including pregnant women in research is no longer a question of if but when and how – given the broad benefits of more equitable, evidence-based care.


Sunday, June 08, 2025

Composite Strategy for Intercurrent Event and the Use of Trimmed Means in Clinical Trial Data Analyses

When a composite strategy is used to handle an intercurrent event (ICE), especially a terminal event such as death, the occurrence of the ICE is integrated into the endpoint definition, often by assigning a specific value to participants who experience the event.

The training materials for ICH E9(R1), "Addendum on Estimands and Sensitivity Analysis in Clinical Trials," mention the trimmed mean as one approach for implementing the composite strategy to handle intercurrent events.


A trimmed mean (also called a truncated mean) is the arithmetic mean of the data values after a specified number or proportion of the highest and/or lowest values have been discarded; the trimming can be one-sided or two-sided. It is a robust average: by “trimming” the tails of the data, it reduces the influence of outliers. For example, a 50% trimmed mean discards the bottom 25% and top 25% of values and averages the middle 50%. In general, an alpha-trimmed mean removes the lowest and highest alpha/2 fractions of the data, where alpha is the total proportion trimmed.

After trimming, the mean of the remaining values is computed by the usual arithmetic formula. In practical use, common trims range from 5–20% per tail (i.e., alpha = 10–40% total) in robust estimation, though some clinical examples have trimmed up to 50%. By removing extreme observations, trimmed means downweight outliers and, in the dropout setting, encode the assumption that missing or dropout outcomes are worse than any observed values.

Advantages of Trimmed Means in Handling Outliers and Skewed Data

Trimmed means are recognized as robust estimators of central tendency, demonstrating less sensitivity to deviations from assumed models or distributions, such as the presence of outliers or non-normality, when compared to classical methods like the sample mean. This robustness translates into more stable and reliable results under challenging data conditions.

For asymmetric distributions, where variability is more pronounced on one side, trimmed means can provide a superior estimation of the location of the main body of observations. By removing extreme values, they offer a more robust estimate of the central value and are less influenced by skewed data distributions. Furthermore, the standard error of the trimmed mean is less susceptible to the effects of outliers and asymmetry than that of the traditional mean, which can lead to increased statistical power for tests employing trimmed means.

The advantages of trimmed means extend beyond mere statistical robustness; they enable a more clinically meaningful interpretation of treatment effects, particularly in heterogeneous patient populations or when extreme outcomes (e.g., severe adverse events, rapid disease progression) might otherwise obscure the true effect in the majority of patients. This aligns with the need for statistical methods that accurately reflect real-world clinical practice and patient experience. If extreme values arise from factors not directly related to the treatment's intended effect on the typical patient – such as rare severe adverse events or non-adherence driven by external circumstances – then their removal allows for a clearer assessment of the treatment's impact on the majority. Conversely, if extreme values are an inherent part of the treatment effect, such as severe lack of efficacy leading to patient dropout, the trimmed mean can define an estimand for the subpopulation that did not experience these extreme negative outcomes. Both scenarios offer a more focused and potentially more interpretable clinical picture.

It is important to acknowledge the inherent trade-off between robustness and efficiency: more robust methods, including trimmed means, may sacrifice some efficiency (precision or variability) compared to optimal methods under ideal statistical assumptions. The selection of a method ultimately depends on the nature of the data and the specific goals of the analysis.

Comparison with Traditional Measures of Central Tendency (Mean, Median)

The choice among the mean, median, and trimmed mean is not merely a statistical decision but reflects a fundamental determination about the target estimand and the specific clinical question being addressed.

● Mean: The traditional arithmetic mean calculates the average of all values in a dataset. It is highly sensitive to extreme values, which can significantly distort the measure of central tendency and lead to a less representative average, especially in the presence of outliers or skewed distributions.

● Median: The median represents the middle value in an ordered dataset and is highly resistant to the influence of extreme values. Conceptually, the median can be viewed as an extreme form of a trimmed mean, where all but one or two central observations are effectively removed. While both the trimmed mean and the median reduce the impact of outliers, the median is generally considered more robust in certain contexts due to its reliance solely on rank order.

The trimmed mean strikes a balance between these two traditional measures. It retains more information from the dataset than the median, which discards a significant portion of the data, while simultaneously offering greater robustness to outliers than the arithmetic mean. This allows for a nuanced definition of "average effect" that acknowledges the presence of extreme outcomes without being unduly influenced by them (like the raw mean) or implicitly discarding them entirely (like the median). This highlights the importance of defining the estimand before selecting the statistical method, a principle strongly emphasized by ICH E9(R1).
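A small numerical sketch (in Python, for illustration; the sample and variable names are ours) shows how the three measures respond to a single extreme outlier:

```python
from statistics import mean, median

# Skewed sample: one extreme value inflates the raw mean
y = [4, 5, 5, 6, 6, 7, 7, 90]

raw_mean = mean(y)                     # pulled upward by the outlier
med = median(y)                        # depends only on rank order

data = sorted(y)
k = int(len(data) * 0.25)              # 25% per tail => 50% total trim
trimmed = mean(data[k:len(data) - k])  # average of the middle 50%

# The median and the trimmed mean both resist the outlier;
# the raw mean does not.
print(raw_mean, med, trimmed)
```

Here the raw mean is more than twice the median, while the 50% trimmed mean lands on essentially the same value as the median yet still averages four observations rather than relying on rank order alone.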

Regulatory Context and Trial Examples

Trimmed means have been discussed in statistical literature and regulatory settings as a way to handle dropout or intercurrent events. Permutt and Li (FDA biostatisticians) first proposed using trimmed means for “symptom trials” with dropout, treating each dropout as a complete (nonnumeric) observation ranked as the worst outcome. Under this approach, all subjects who discontinue before the endpoint are implicitly assigned the worst possible values and then an equal percentage are trimmed from each arm. In effect, trimming favors treatments with fewer dropouts, since “having more completers is a beneficial effect of the drug”.
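The ranking-and-trimming idea can be sketched in a few lines (Python used for illustration; the function and variable names are ours, not from Permutt and Li):

```python
def composite_trimmed_mean(observed, n_dropouts, trim_frac):
    """Composite-strategy trimmed mean, assuming LOWER values are WORSE.

    Dropouts are ranked below every observed outcome (worst possible),
    then the worst trim_frac of the full randomized arm is discarded
    before averaging -- a sketch of the Permutt & Li idea.
    """
    placeholder = min(observed) - 1.0           # ranks below all completers
    full = [placeholder] * n_dropouts + sorted(observed)
    n_trim = int(trim_frac * len(full))         # worst fraction to discard
    if n_trim < n_dropouts:
        raise ValueError("trim fraction must be large enough to remove "
                         "all imputed dropout placeholders")
    kept = full[n_trim:]
    return sum(kept) / len(kept)

# One arm: 8 completers, 2 dropouts; a 20% trim removes exactly
# the two placeholder (dropout) values before averaging.
print(composite_trimmed_mean([1, 2, 3, 4, 5, 6, 7, 8], 2, 0.20))
```

Applying the same trim fraction to both arms is what makes the comparison favor the arm with fewer dropouts: in the arm with more dropouts, the trim removes genuinely observed (better) values in addition to the placeholders.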

Notably, trimmed means correspond to a composite estimand in the ICH E9(R1) framework: one can assign intercurrent events (e.g. dropouts) a worst-case value and then summarize the outcome distribution by a median or trimmed mean. For instance, the ICH E9 addendum training materials explicitly cite trimmed means (alongside medians) as summary measures under a composite strategy when dropouts are scored as extreme unfavourable outcomes. A recent FDA review of glaucoma drug Rocklatan (netarsudil/latanoprost) illustrates this: the statistical reviewer computed an “adaptive trimmed mean” IOP reduction by coding patients who withdrew for lack of efficacy or adverse events as worst outcomes. The reviewer noted this analysis aligns with the composite strategy in ICH E9(R1) and can reinforce the main intent-to-treat result. 

However, we found no FDA-approved trial in which a trimmed mean was the pre‑specified primary endpoint analysis. In all examples, trimmed means have been used as sensitivity or supportive analyses rather than the main test. For example, in a uterine fibroid drug application (NDA 214846), a trimmed‐mean analysis of menstrual blood loss was reported: the trimmed mean in each arm used the 50% best-performing patients, reflecting a 50% trim. Similarly, an ophthalmology (geographic atrophy) trial review (NDA 217171) applied trimmed‐mean + multiple imputation as a sensitivity: certain dropouts were “excluded (trimmed)” in the reviewer’s analysis. In each case, the trimmed mean analysis “assumes missing data as ‘bad outcomes’” (i.e. MNAR). These examples underscore that trimmed means appear in regulatory documents as robust or conservative analyses (often labeled “completer” or MNAR analyses) rather than as the primary efficacy metric.

Key examples:

  • Glaucoma (Rocklatan NDA 208259): Reviewer performed an adaptive trimmed mean IOP analysis, excluding patients who withdrew for lack of effect and assigning worst values (citing Permutt & Li as method).

  • Uterine fibroids (NDA 214846): Primary efficacy (menstrual blood loss) was also examined by trimmed means (50% trim) as sensitivity. The FDA report notes the trimmed‐mean is based on the “top 50% best performers in each arm”.

  • Geographic atrophy (NDA 217171): A trimmed-mean + MI analysis was done by the reviewer; dropouts due to adverse events or lack of efficacy were “excluded (trimmed)” from one scenario.

These applications align with recent methodological studies showing trimmed means estimate a unique estimand: essentially, the mean outcome in the subpopulation of patients who would have remained on treatment. This estimand privileges treatments that prevent dropout, but it relies on the strong assumption that every dropout truly has an “unfavourable” outcome. As Wang et al. note, “the trimmed mean estimates a unique estimand” and its validity “hinges on the reasonableness of its assumptions: dropout is an equally bad outcome in all patients”. Ocampo et al. similarly emphasize that trimmed means work well when discontinuation is strongly associated with poor outcome, but give biased estimates if data are actually missing at random.

Calculation and Assumptions

Formula: To calculate an alpha‑trimmed mean, one typically sorts the data and discards a proportion alpha/2 from each end. For example, the 50% trimmed mean drops the lowest 25% and highest 25% of values, averaging the middle 50%. In general, if alpha (0–1) is the total fraction trimmed, remove the lowest alpha/2 and highest alpha/2 observations, then compute the mean of the remaining values. (The interquartile mean is the special case alpha=0.5.) Typical choices of alpha are guided by robustness needs: small trims (e.g. alpha=0.1 for 5% each tail) mildly reduce outlier influence, while large trims (up to 0.5) exclude half the data.
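This calculation can be sketched as a short function (Python for illustration; the function name is ours):

```python
from statistics import mean

def alpha_trimmed_mean(values, alpha):
    """Two-sided alpha-trimmed mean: drop the lowest and highest alpha/2
    fractions of the sorted data, then average what remains.
    alpha is the TOTAL fraction trimmed (0 <= alpha < 1)."""
    if not 0 <= alpha < 1:
        raise ValueError("alpha must be in [0, 1)")
    data = sorted(values)
    k = int(len(data) * alpha / 2)      # observations dropped per tail
    return mean(data[k:len(data) - k] if k else data)

# alpha = 0.5 (the interquartile mean): keep the middle 50%
print(alpha_trimmed_mean([1, 2, 3, 4, 5, 6, 7, 8], 0.5))  # mean of [3,4,5,6]
```

With alpha = 0 this reduces to the ordinary arithmetic mean, and as alpha approaches 1 it approaches the median, matching the spectrum described above.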

Statistical assumptions: Trimmed‐mean analyses assume that any trimmed/missing values are in fact the worst outcomes. In the dropout context, this treats early withdrawal as if the patient’s true result were extremely poor. As Ocampo et al. explain, the trimmed mean approach “sets missing values as the worst observed outcome and then trims away a fraction of the distribution”. Under this MNAR assumption, the trimmed mean can provide an unbiased estimate of the treatment effect (on the subpopulation that remains). However, if outcomes are actually missing at random (MAR) without relation to extreme values, trimmed means will be biased. In simulations, trimmed means were found to fail under MAR (because the assumption of trimming “bad” outcomes is then invalid).

Interpretation: Because the trimmed mean removes equal fractions from each arm, it essentially compares the upper quantiles of the outcome distribution. Clinically, it reflects the mean of the best-performing subset of patients. This is why Permutt and Li emphasize that their method makes no assumptions beyond randomization: it is a nonparametric “exact” test for efficacy that includes all randomized subjects (by ranking dropouts worst). Regulators caution that the trimmed-mean estimand differs from a standard ITT mean; it answers the question, “What is the mean outcome among patients who would have remained in the trial?”.

SAS Implementation Example

Trimmed means can be computed in SAS using PROC UNIVARIATE, which supports a TRIMMED= option (the value specifies the proportion trimmed from each tail). The snippet below removes 10% from each tail (a 20% total trim) of a variable Y and captures the result via ODS:


/* Example: Compute a 20% trimmed mean (10% from each tail) for outcome Y */
ods output TrimmedMeans=TrimmedStats;

proc univariate data=trial_data trimmed=0.10;
   var Y;
run;

proc print data=TrimmedStats noobs;
   title "Trimmed Mean of Y";
run;

Alternatively, the trimming can be done manually and the mean computed with PROC MEANS (note that PROC MEANS itself has no TRIMMED= option). For example, using PROC RANK to drop 10% from each tail:

/* Manual trimming: convert Y to fractional ranks, keep the middle 80% */
proc rank data=trial_data out=ranked fraction;
   var Y;
   ranks Y_pct;
run;

data trimmed;
   set ranked;
   if 0.10 < Y_pct <= 0.90;   /* drops ~10% in each tail */
run;

proc means data=trimmed mean;
   var Y;
run;

These code examples illustrate that one can easily obtain trimmed‐mean estimates in SAS by specifying the trimmed percentage (e.g. 0.10 for 10%) and directing the procedure output to a dataset for further use. (In practice, one would adjust trimmed= according to the planned trim proportion.)

Using Google Gemini, a comprehensive report on using trimmed means in clinical trials was generated and can be accessed here. 

Thursday, June 05, 2025

Baby KJ's Story - a remarkable success of the customized CRISPR gene editing treatment

This morning, FDA conducted a roundtable discussion on cell and gene therapies. The roundtable invited 23 panel members from academia and industry and was attended by FDA Commissioner Dr. Marty Makary and FDA CBER Director Dr. Vinay Prasad. In the second half of the roundtable, HHS Secretary RF Kennedy Jr., NIH Director Dr. Jay Bhattacharya, and CMS Administrator Dr. Mehmet Oz all gave commentary speeches. The overall tone of the roundtable was very positive, and the government agencies are very supportive of cell and gene therapy (including xenotransplantation) development. The panel suggested various ways to support cell and gene therapies: renewing the pediatric priority review voucher program, reducing CMC requirements, continued support for flexible trial designs and accelerated approval, and greater reliance on post-approval requirements and real-world evidence (RWE) were highlighted as key tools for the agency to reduce the number of programs caught in the

During the roundtable discussions, Baby KJ's case was mentioned multiple times. It is worth recounting Baby KJ's story.

KJ Muldoon is a 10-month-old infant who became the first person in the world to receive a personalized CRISPR-based gene-editing therapy, marking a significant milestone in the treatment of rare genetic diseases.

Diagnosis and Condition

Shortly after birth, KJ was diagnosed with severe carbamoyl phosphate synthetase 1 (CPS1) deficiency, a rare and life-threatening genetic disorder that impairs the body's ability to eliminate ammonia. This condition can lead to toxic ammonia buildup, causing severe neurological damage or death in infancy. 

Development of Personalized Therapy

With conventional treatments offering limited hope, a multidisciplinary team from the Children's Hospital of Philadelphia (CHOP) and Penn Medicine embarked on an unprecedented effort to develop a customized gene-editing therapy tailored specifically to KJ's unique genetic mutations. Within six months, researchers identified two truncating variants in the CPS1 gene and designed a bespoke CRISPR-based therapy using base editing technology. This approach allowed precise correction of the faulty DNA without cutting the genetic code. 

Treatment and Outcome

KJ received multiple infusions of the experimental therapy, delivered directly to his liver using lipid nanoparticles carrying the gene-editing components. The treatment aimed to correct the genetic defect responsible for his condition. Following the therapy, KJ showed significant improvement, including better tolerance to dietary protein and reduced dependence on supportive medications. After spending over 300 days in the hospital, he was discharged and returned home with his family.

Significance and Future Implications

KJ's case represents a groundbreaking advancement in personalized medicine and gene-editing therapies. It demonstrates the potential of customized CRISPR treatments to address ultra-rare genetic disorders by rapidly developing individualized therapies. While long-term outcomes and scalability remain areas for further research, this success story offers hope for treating other rare diseases through similar personalized approaches. 

For more detailed information, you can watch the FDA roundtable discussion on cell and gene therapies where KJ's case was mentioned: FDA Roundtable Discussion.

Impact of Baby KJ's Case on Clinical Trial Design of Gene Therapies

Baby KJ's story was extensively discussed in a follow-up podcast, "FDA Direct Ep. 7: This Week at the FDA!", by FDA Commissioner Dr. Makary and CBER Director Dr. Prasad. Their discussion of Baby KJ's story extended to clinical trial designs for gene therapy studies and ultra-rare diseases.

The FDA's approach to clinical trial designs for gene therapy studies emphasizes flexibility and a nuanced understanding of the specific condition and therapy. Here are the key points:

  • Flexible Trial Designs: The FDA acknowledges that a "one-size-fits-all" approach is not suitable. They adapt trial designs based on the specific condition, its frequency, severity, and the uniqueness of the therapy.
  • "N of 1" Trials for Rare Diseases: For extremely rare and dire conditions, a single-patient ("N of 1") trial can be sufficient for approval, especially when there is a plausible mechanism of action and clear biological markers demonstrating effectiveness. An example highlighted is the rapid approval of a custom-tailored gene editing treatment for Baby KJ's rare condition.
  • Plausible Mechanism Pathway: A strong, scientifically sound mechanism of action can support approval, even without extensive large-scale trials, if it suggests safety and effectiveness through extrapolated data.
  • Challenges with Slowly Deteriorating Conditions: For conditions with slow and variable deterioration, relying on anecdotal evidence from "N of 1" studies without clear biological correlates is more difficult, as it's harder to distinguish treatment effects from the natural disease progression.
  • Surrogate Endpoints: The FDA accepts the use of surrogate endpoints, such as tumor shrinkage in cancer, as indicators of a drug's activity.
  • Industry Desire for Predictability: The industry seeks greater predictability regarding FDA expectations for endpoints, control arms, and when randomization is necessary. Improved communication and transparency are suggested to address this.
  • Concerns with International Trial Data: There are concerns about relying solely on clinical trial data from certain countries, particularly if the majority of participants are from a single foreign country, for U.S. regulatory decisions.
  • Re-evaluation of Trial Requirements: The FDA is open to re-evaluating requirements, such as the need for two clinical trials versus one for new drug approvals, to potentially streamline the process.

The overarching theme from recent FDA discussions is a shift towards a more flexible, common-sense, and scientifically driven approach to gene therapy regulation, prioritizing patient needs and scientific plausibility, especially for rare and life-threatening conditions.


Sunday, June 01, 2025

Forwarded: "Data Distortions: When Statistics Can Lead Us Astray in Drug Safety"

The DIA Global Forum's article, "Data Distortions: When Statistics Can Lead Us Astray in Drug Safety," brought attention to a common pitfall: the practice of performing individual hypothesis tests for each adverse event (AE), calculating p-values, and then concluding statistical significance based on an arbitrary threshold like p < 0.05. This mirrors a concern I raised earlier in my blog article, "Should hypothesis tests be performed and p-values be provided for safety variables in efficacy evaluation clinical trials?"

It's a widely held statistical consensus that conducting hypothesis tests for individual AEs is unsound, and the p-values derived from them are prone to misinterpretation. Despite arguments that such tests could be performed for "exploratory purposes" with a disclaimer, the inherent risk is that these p-values will inevitably be misused to draw misleading conclusions from the data. Unfortunately, we've seen journal articles, often at the insistence of editors, include p-values for individual AEs, perpetuating this problematic practice.
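The core problem is multiplicity. Below is a stdlib-only Python sketch (a toy simulation constructed for illustration, not data from either article) that runs many simulated trials in which drug and placebo share identical true AE rates, tests each of 50 AEs with a two-proportion z-test, and records how often at least one AE crosses the 0.05 threshold:

```python
import math
import random

def two_prop_p(events_a, events_b, n_per_arm):
    """Two-sided p-value for a pooled two-proportion z-test (normal approximation)."""
    pooled = (events_a + events_b) / (2 * n_per_arm)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    if se == 0:
        return 1.0
    z = abs(events_a - events_b) / n_per_arm / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def simulate(n_aes=50, n_per_arm=200, true_rate=0.10, n_sims=100, alpha=0.05, seed=1):
    """Fraction of null trials (identical AE rates in both arms) in which at
    least one of the n_aes per-event tests falls below the alpha threshold."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n_sims):
        for _ in range(n_aes):
            a = sum(rng.random() < true_rate for _ in range(n_per_arm))
            b = sum(rng.random() < true_rate for _ in range(n_per_arm))
            if two_prop_p(a, b, n_per_arm) < alpha:
                flagged += 1
                break
    return flagged / n_sims

print(simulate())
```

Even with no real safety signal at all, a large share of simulated trials flag at least one "significant" AE, which is precisely the false-alarm behavior that makes per-AE p-values so easy to misread.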

Thursday, May 01, 2025

JSON vs SAS XPT Data Formats in Clinical Trial Data Submissions

Regulatory study data (e.g. SDTM/SEND tabulations and ADaM analysis datasets) are currently exchanged in SAS XPORT (XPT) format, the legacy transport format used by the FDA. Each dataset is submitted as a separate .xpt file (for example, dm.xpt, adsl.xpt) with an accompanying define.xml to describe metadata. See a previous blog article, "Submit the Clinical Trial Datasets to FDA: Using the right .xpt file format," and FDA's Study Data Technical Conformance Guide technical specifications document.

In April 2025, FDA issued a Federal Register notice, "Electronic Study Data Submission; Data Standards; Clinical Data Interchange Standards Consortium Dataset-JavaScript Object Notation; Request for Comments," stating it is exploring CDISC’s Dataset-JSON (v1.1) – a JSON-based schema – as a new exchange standard for study data, with the long-term potential to replace SAS XPT v5. The FDA is requesting public comment on adopting Dataset-JSON for future submissions. This article compares the JSON and XPT formats in the context of clinical data exchange and FDA submissions, covering their overviews, advantages/disadvantages, official regulatory stance, and practical sponsor considerations.

JSON Format (CDISC Dataset-JSON)

JSON (JavaScript Object Notation) is a text-based, human-readable data format widely used in web and health IT. For example, HL7’s FHIR standard commonly uses JSON for healthcare data exchange. CDISC’s Dataset-JSON (v1.1) is a JSON schema specifically designed to represent tabular clinical study data. It is part of the CDISC Operational Data Model (ODM) v2.0 framework and is open-source and machine-readable. By design, each Dataset-JSON dataset can include column values in JSON and can reference a CDISC define.xml document for full metadata, linking data values to variable definitions. This format supports both file-based and API-based exchange of data. In practice, a set of JSON “dataset” files (one per domain) can be packaged with a define.xml or delivered via web services. The format is schema-driven and extensible, meaning it can accommodate richer metadata and longer field names than legacy formats. FDA notes that Dataset-JSON is simple to implement, very stable, and “widely supported” across software platforms. Its use of JSON (Unicode text) makes it easy to parse with standard programming libraries (JavaScript, Python, R, etc.), and it aligns with modern data standards and the FDA’s Data Modernization goals.
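To make the structure concrete, here is a minimal Python sketch of a Dataset-JSON-style document. The field names follow the published v1.1 schema, but the snippet is hand-made and simplified (a real file carries additional required metadata, such as the creation date-time):

```python
import json

# A minimal, hand-written Dataset-JSON-style document (illustrative only --
# a real v1.1 file carries more metadata than shown here).
dm_json = """
{
  "datasetJSONVersion": "1.1",
  "itemGroupOID": "IG.DM",
  "name": "DM",
  "label": "Demographics",
  "records": 2,
  "columns": [
    {"itemOID": "IT.DM.USUBJID", "name": "USUBJID", "label": "Unique Subject Identifier", "dataType": "string"},
    {"itemOID": "IT.DM.AGE",     "name": "AGE",     "label": "Age",                       "dataType": "integer"}
  ],
  "rows": [
    ["STUDY1-001", 34],
    ["STUDY1-002", 51]
  ]
}
"""

def rows_as_dicts(doc):
    """Pair each row with the column names -- the kind of generic parsing
    any JSON-aware tool can do without SAS."""
    names = [c["name"] for c in doc["columns"]]
    return [dict(zip(names, row)) for row in doc["rows"]]

doc = json.loads(dm_json)
records = rows_as_dicts(doc)
print(records[0]["USUBJID"], records[1]["AGE"])
```

Because the payload is ordinary JSON, the same parsing works unchanged in R, JavaScript, or any other JSON-aware tool, which is the interoperability argument in a nutshell.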

SAS XPORT (XPT) Format

The SAS XPORT (XPT) Transport Format v5 is the longstanding standard for FDA study data submission. XPT is a binary file format defined in the 1980s (SAS Technical Report TS-140) that encodes one dataset per file. In FDA submissions, each SDTM or ADaM dataset is delivered as an .xpt file (e.g. dm.xpt for demographics) along with a corresponding define.xml describing its variables. FDA’s guidance and catalogs explicitly support XPT v5: for example, the technical conformance guide lists DM.xpt and ADSL.xpt as required files. The format is natively supported by SAS software (via PROC COPY or the XPORT LIBNAME engine) and by some third-party tools, ensuring that sponsors with SAS infrastructures can readily produce and consume it. However, XPT is not human-readable (it is binary) and has inherent limitations: variable names are limited to 8 characters (per the v5 spec), variable labels to 40 characters, and character values to 200 bytes, and there is no direct way to embed metadata (hence the separate define.xml). Because XPT v5 is a fixed, legacy format, it cannot represent nested or hierarchical data and requires separate metadata files. Despite these drawbacks, XPT is currently the required FDA exchange format for standardized study data – submissions that do not use FDA-approved formats (listed in the Data Standards Catalog) risk rejection.
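The v5 limits described above are easy to express as a small checker. This is a hypothetical helper for illustration, not an official validator, and TRTEMFLAG is an invented variable name used to show a violation:

```python
def check_xpt_v5_variable(name, label, value_len=None):
    """Flag violations of the SAS XPORT v5 limits discussed above:
    variable names <= 8 characters, labels <= 40 characters,
    character values <= 200 bytes. (A sketch of the rules, not an XPT writer.)"""
    problems = []
    if len(name) > 8:
        problems.append("name '%s' exceeds 8 characters" % name)
    if not name.isascii():
        problems.append("name '%s' is not ASCII" % name)
    if len(label) > 40:
        problems.append("label exceeds 40 characters")
    if value_len is not None and value_len > 200:
        problems.append("character value length %d exceeds 200 bytes" % value_len)
    return problems

# A name that fits v5 vs one that would have to be truncated or renamed:
print(check_xpt_v5_variable("USUBJID", "Unique Subject Identifier"))
print(check_xpt_v5_variable("TRTEMFLAG", "Treatment Emergent Analysis Flag"))
```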



Advantages and Disadvantages

  • JSON advantages: JSON is a modern, widely-used exchange format. Dataset-JSON supports linking to define-XML and can include rich metadata within or alongside the data. It is text-based and open, so it can be parsed by virtually any software (not just SAS), and it naturally integrates with web and API workflows. FDA’s 2022 assessment found that JSON offers “smaller file sizes, additional metadata, and simpler processing” compared to legacy formats. Because it is extensible, JSON removes XPT’s old limitations on field lengths and formats, enabling future evolution of data standards. In the PhUSE pilot, sponsors noted potential for improved efficiency, hardware cost savings, and alignment with digital data ecosystems.

  • JSON disadvantages: Dataset-JSON is not yet standard for FDA submissions, so adopting it today would require regulatory discussions or waivers. Industry tooling is nascent: sponsors must develop or acquire new processes (for example, SAS can export JSON but may need custom mapping to CDISC JSON schema). The FDA notice explicitly solicits comments on “integration challenges with existing tools and systems,” reflecting concern that current CDMS/SDTM pipelines are geared to XPT. Managing two formats during a transition also adds complexity. Because JSON is text, very large numeric datasets might be bulkier uncompressed (though gzip can mitigate this). Finally, until FDA grants formal acceptance (which would require a new guidance), sponsors using JSON would be taking a risk.

  • XPT advantages: XPT is a proven, FDA-sanctioned format. All major clinical data tools (especially SAS) can readily produce XPT. Regulatory reviewers and submission systems are already built for it, so sponsors face no surprise validation issues. Using XPT ensures immediate compliance with FDA standards (as affirmed in guidance and the Data Standards Catalog). The process of creating .xpt files is well-understood (e.g. using SAS PROC COPY or EXPORT), and many legacy datasets and analysis programs assume XPT input. XPT’s fixed format and single-table-per-file approach are simple and do not require on-the-fly schema negotiation. Long-term archiving of XPT files is routine (with define.xml), so sponsors have established retention practices.

  • XPT disadvantages: XPT is technologically outdated. Its fixed schema (8-char names, etc.) and binary nature limit flexibility. It cannot easily accommodate new metadata or complex data types. Interoperability outside the SAS world is limited (one must use conversion tools). The format does not support streaming or API-based exchange, only static files. Because define.xml is separate, there is a risk of mismatches between data and metadata if not carefully managed. From an innovation standpoint, XPT is a single-version format (v5) with no path for evolving, so it is not aligned with modern data architectures (e.g. FHIR or big-data standards). Sponsors must also maintain SAS environments or rely on third-party readers, which may be a constraint for non-SAS shops.

  • Note that SAS provides a procedure (PROC JSON) to write SAS data sets out in JSON, so generating JSON output from existing SAS environments should not itself be a barrier if data sets in JSON format are eventually required for submission (though mapping that output to the CDISC Dataset-JSON schema may still take additional work).
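On the file-size concern noted under JSON disadvantages (text being bulkier than binary), a quick stdlib-only experiment with synthetic data shows how much gzip typically recovers:

```python
import gzip
import json
import random

# Synthetic long-format "dataset": 1,000 rows of (subject, visit, value).
rng = random.Random(0)
rows = [[f"SUBJ-{i:04d}", visit, round(rng.gauss(100, 15), 1)]
        for i in range(100) for visit in range(10)]

raw = json.dumps({"columns": ["USUBJID", "VISITNUM", "AVAL"],
                  "rows": rows}).encode("utf-8")
packed = gzip.compress(raw)

print(f"raw JSON: {len(raw)} bytes, gzipped: {len(packed)} bytes "
      f"({len(packed) / len(raw):.0%} of original)")
```

The exact ratio depends on the data, but repetitive tabular JSON generally compresses to a small fraction of its raw size.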

FDA Policy and Future Adoption (Federal Register Context)

According to the recent Federal Register notice, the FDA is not yet changing requirements but is actively evaluating JSON as an option. The notice explains that CDER and CBER have already conducted a pilot (with CDISC and PhUSE) showing that Dataset-JSON “has the potential to serve as a transport file for study data”. Based on a 2022 assessment, the FDA found JSON to be the most promising modern format to potentially replace XPT v5. FDA explicitly states it is considering Dataset-JSON “with the long-term potential to replace SAS XPORT Format (XPT)” for eStudy data. The Agency is requesting comments on the benefits and risks of adopting JSON and on integration challenges with current tools.

Importantly, the notice does not immediately authorize use of JSON in submissions. Until any regulatory change is finalized, sponsors must continue using FDA-supported formats (i.e. XPT v5 files with define.xml) for study data. FDA will consider the public feedback before deciding. The notice indicates that if FDA does adopt Dataset-JSON, it will update its guiding documents (specifically the “Standardized Study Data” guidance implementing Section 745A(a)) to specify JSON as a permitted format. In summary: FDA’s official preference today remains XPT (v5), but a future shift to JSON is on the table pending the rulemaking process and guidance revisions.

Practical Considerations for Sponsors

  • Regulatory compliance: Sponsors should follow FDA’s current standards. Until JSON is explicitly allowed, electronic study data must use formats in FDA’s Data Standards Catalog (currently XPT v5 for tabulation/analysis data). Any use of JSON for a submission would require prior FDA agreement (e.g. a pilot protocol or waiver). Sponsors should monitor the comment process (comments due June 9, 2025) and watch for any updated guidance.

  • Data preparation: Most sponsors build SDTM/ADaM in SAS or similar tools. Producing XPT files is straightforward in that environment (PROC COPY, EXPORT, or LIBNAME XPT). Moving to Dataset-JSON would require developing new export routines or converters. SAS 9.4 can write JSON, but additional CDISC JSON schema mapping may be needed. Conversely, new entrants or CDS/non-SAS shops may find JSON easier since many analytics platforms (R, Python, etc.) parse JSON naturally. Either way, sponsors may need to invest in tool upgrades or staff training if and when JSON becomes accepted.

  • Long-term archiving and interoperability: JSON’s plain-text nature may benefit long-term data access (no proprietary format, easily versioned). On the other hand, XPT has a long track record for archiving and reusability within regulated drug development. Sponsors should plan how to store meta-data (define.xml or embedded JSON schema) for whichever format they use.

  • Transition planning: FDA’s pilot (reported by CDER/CBER and industry) suggests promising results. Sponsors may consider participating in further testing or industry surveys to shape the outcome. They should factor potential future regulatory changes into their IT roadmaps. For example, new statistical or data warehouse systems could be chosen with JSON capabilities in mind. Stakeholders (data managers, statisticians, IT) should communicate so that any shift in format will be smooth (e.g. ensuring traceability between old and new-format data).

  • Resource impact: In the near term, maintaining support for XPT remains essential (the FDA is not dropping it yet). In the long term, shifting to JSON may lower costs (e.g. fewer hardware needs, quicker data processing as noted in industry pilots) but will require upfront effort. Sponsors should balance these factors and perhaps begin exploratory work (e.g. trial conversions of legacy XPT files to JSON) to assess any challenges ahead.

Trends and Upcoming Changes

The Federal Register notice signals a trend toward modernizing study data formats. JSON’s ubiquity in web and healthcare (e.g. FHIR) and its alignment with FDA’s Data Modernization Action Plan are strong drivers. CDISC’s release of Dataset-JSON v1.1 (Dec 2024) and ongoing PhUSE work show industry momentum. If JSON is adopted, expect a multi-year transition: FDA will announce any implementation timeline in future Federal Register updates (similar to how new CDISC versions are phased in). Internationally, regulators (like Health Canada or PMDA) may also follow FDA’s lead. In practice, sponsors should prepare for eventual co-existence of formats: for some time, both XPT and JSON may be permitted (with effective dates).

In summary, the immediate trend is that the FDA is open to modern data standards: it found JSON superior to alternatives (SAS XPT v8 or XML) in 2022. However, any concrete requirement change awaits the rulemaking process. Sponsors should stay informed, consider testing JSON internally, and be ready to meet whichever format the FDA ultimately endorses.

### This article is written with AI assistance ###. 

Monday, April 28, 2025

Vanda Pharmaceuticals Sues FDA Over Tradipitant Hearing Delay – A Misplaced Battle?

This past week, Vanda Pharmaceuticals filed a federal lawsuit accusing the FDA of unlawfully delaying a hearing on the company’s new drug application (NDA) for tradipitant in gastroparesis. According to Fierce Biotech, the dispute stems from a Complete Response Letter (CRL) the FDA issued in September 2024 rejecting tradipitant as a gastroparesis treatment. Vanda contends the agency “generally disregarded” its evidence, and is now blaming FDA bureaucracy and mass layoffs for stalling the hearing. However, an examination of the facts suggests Vanda’s legal aggression is largely misdirected. The FDA’s cautious stance appears justified by the clinical data: the pivotal Phase 3 trial failed to meet its primary endpoint, and the company is left relying on post-hoc analyses to claim efficacy, which is not a substitute for a positive trial result.

Tradipitant and Gastroparesis: The Clinical Background

Gastroparesis is a chronic stomach motility disorder often marked by severe nausea and vomiting. Vanda’s investigational drug, tradipitant (an NK1-receptor antagonist), was tested in diabetic and idiopathic gastroparesis patients. The key Phase 3 trial (ClinicalTrials.gov NCT04028492) randomized 201 adults (about half diabetic, half idiopathic) to tradipitant 85mg vs. placebo twice daily for 12 weeks. The primary endpoint was the change from baseline in daily nausea severity at 12 weeks. Secondary endpoints included other gastroparesis symptoms and patient-reported outcome measures.

The trial did not meet its primary endpoint. In the full intention-to-treat (ITT) analysis, the difference in nausea reduction between tradipitant and placebo was not statistically significant (P = .741). In fact, the overall results showed no significant improvement on nausea severity or other major symptoms. According to FDA briefing materials and Vanda’s own publications, both the primary and secondary endpoints failed to reach significance. Vanda’s press release and a Healio report acknowledge that “the drug failed to meet statistically significant change in nausea severity at 12 weeks vs. placebo”. These negative findings were the basis for the FDA’s CRL.

  • Primary endpoint missed: Tradipitant did not significantly outperform placebo on nausea reduction at 12 weeks (P = .741).
  • Secondary endpoints missed: Other symptom scores likewise showed no statistical benefit over placebo.
  • Post-hoc analyses only: Vanda performed subgroup and sensitivity analyses (e.g. patients with high blood levels of drug, or excluding certain “confounders”) that showed some improvement. However, these were not pre-specified endpoints, and the official trial data remained negative.

Because the registered primary outcome was negative, the trial is conventionally considered “failed.” Vanda’s scientists point out that in post hoc subsets (for example, patients with adequate drug exposure) tradipitant appeared to reduce nausea significantly. But regulatory agencies are very clear: post-hoc or exploratory findings cannot replace a prospectively powered success. The FDA expects sponsors to present all trial data, not just cherry-picked favorable subsets. As the FDA guidance states, submissions must include “all data or information relevant to an evaluation of…effectiveness” and avoid “selecting only those sources that favor a conclusion of effectiveness.” Any conflicting evidence must be explained by a compelling scientific rationale. Vanda’s reliance on post-hoc signal is precisely the sort of selective evidence that regulators warn against.

Regulatory Standards: When One Trial Fails

FDA’s drug approval standard under the U.S. Food, Drug, and Cosmetic Act requires “substantial evidence” of effectiveness, typically meaning at least two adequate and well-controlled trials each convincing on its own. In practice, the FDA may accept a single trial only if it is exceptionally persuasive and is backed by confirmatory evidence. But in this case, tradipitant’s lone Phase 3 trial produced a non-significant primary result. Under these rules, that trial generally does not qualify as adequate evidence on its own.

Key FDA guidelines reinforce this approach:

  • FDA has long held that substantial evidence usually means two positive trials, or one trial plus “convincing” confirmatory data.
  • A negative primary outcome is usually regarded as inconclusive: experts advise that a failed primary endpoint generally warrants additional trials (with better design or power) rather than reinterpreting existing data. In fact, Pocock and Stone’s NEJM review “The Primary Outcome Fails – What Next?” (2016) emphasizes that missing a primary endpoint should prompt new studies, not substitution by post-hoc findings.
  • FDA guidance explicitly warns sponsors to present all data. Trials may be questioned if negative overall results are explained away by excluding unfavorable subsets without strong justification.

Furthermore, the FDA’s own Good Review Practices (GRPs) underscore that reviewers follow documented best practices – focusing on consistency, transparency, and rigor. These internal policies help ensure each application is handled fairly and methodically, not arbitrarily slowed. The agency pointed out that Vanda’s request involves 15,000 pages, including new analyses not in the original NDA, which justifies thorough review under GRPs.

In sum, the regulatory framework suggests that FDA’s request for additional data and its cautious timetable are not arbitrary delays but adherence to standard procedures. A single failed pivotal trial simply does not meet the substantial-evidence bar. According to these criteria, the CRL was scientifically grounded.

When a Primary Outcome Fails

Statisticians and regulators agree: if a trial misses its primary outcome, the conservative path is to consider the study negative (or at best inconclusive) and plan further research. Pocock and Stone’s NEJM review makes this clear. They argue that a non-significant primary result generally means the experiment didn’t prove efficacy; turning to retrospective subgroups or alternative endpoints without a new trial risks false-positive “findings.” In practical terms, a missed endpoint should lead to redesigning studies or confirming any hints of effect in fresh data. Simply reanalyzing the same data to “find” significance is discouraged.
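Their warning is easy to demonstrate numerically. The stdlib-only Python simulation below (invented data, not the tradipitant trial) generates trials with no true treatment effect, then searches median-split subgroups of 20 baseline covariates for the smallest drug-versus-placebo p-value:

```python
import math
import random

def best_subgroup_p(rng, n_per_arm=100, n_covars=20):
    """One simulated null trial: no true treatment effect, but we search
    2 * n_covars median-split subgroups for the smallest drug-vs-placebo
    p-value (a caricature of post-hoc subgroup hunting)."""
    n = 2 * n_per_arm
    covars = [[rng.random() for _ in range(n_covars)] for _ in range(n)]
    outcome = [rng.gauss(0.0, 1.0) for _ in range(n)]  # same distribution in both arms
    arm = [0] * n_per_arm + [1] * n_per_arm

    def p_value(x, y):
        """Two-sided p-value comparing two sample means (normal approximation)."""
        mx, my = sum(x) / len(x), sum(y) / len(y)
        vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
        vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
        z = abs(mx - my) / math.sqrt(vx / len(x) + vy / len(y))
        return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

    best = 1.0
    for c in range(n_covars):
        median = sorted(row[c] for row in covars)[n // 2]
        for high in (True, False):
            idx = [i for i in range(n) if (covars[i][c] >= median) == high]
            drug = [outcome[i] for i in idx if arm[i] == 1]
            placebo = [outcome[i] for i in idx if arm[i] == 0]
            if len(drug) > 5 and len(placebo) > 5:
                best = min(best, p_value(drug, placebo))
    return best

rng = random.Random(7)
hits = sum(best_subgroup_p(rng) < 0.05 for _ in range(100))
print(hits, "of 100 effect-free trials produced a 'significant' post-hoc subgroup")
```

A nominally significant post-hoc subgroup is therefore weak evidence on its own; the prespecified, adequately powered confirmation that Pocock and Stone call for is what actually settles the question.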

FDA guidance mirrors this stance. Even when one trial shows some favorable signal, the agency demands that confirmatory evidence come from independent sources (for example, a second trial or external data). Confident conclusions require evidence that is not retrospectively cherry-picked. In Vanda’s case, the company essentially trimmed its data after the fact (excluding patients and focusing on those with higher drug exposure) to claim a positive result. By FDA and statistical standards, that strategy is insufficient. It falls short of the clear, prospectively defined success needed to approve a new treatment.

The Lawsuit and Vanda’s Claims

Vanda’s lawsuit is technically about FDA’s timing: the company alleges the FDA is unlawfully delaying a regulatory hearing on its NDA and related disputes. According to Fierce Biotech, FDA told Vanda it would not schedule the hearing until September 2025 because of the case’s complexity, the new material submitted, and even unrelated litigation and staff layoffs. Vanda responded by suing, accusing the FDA of stonewalling it and even blaming the agency’s Commissioner and hiring cuts.

In parallel, Vanda is fighting on multiple fronts: it has also challenged FDA rules on its insomnia drug, on clinical holds for other tradipitant studies, and more. The Fierce article notes that Vanda has pressed FDA to hold a hearing within 120 days of its January 2025 request, as the company believes is its right.

However, all these battles circle back to the core issue: tradipitant’s failed data. The CRL that triggered the hearing request was grounded in the negative phase 3 results. In public statements, Vanda’s CEO complained that FDA “generally disregarded” the evidence — but that “evidence” was largely the same trial data and post-hoc arguments that regulators found unconvincing. The FDA’s position is that the application needed additional study, in line with scientific norms.

A Misplaced Blame

It’s understandable that Vanda is frustrated: developing new drugs is costly, and delays eat into patents and investor patience. But blaming the FDA’s procedures overlooks the science. The data from the major gastroparesis trial simply didn’t demonstrate clear efficacy. In drug development, a failed trial is a standard setback, not usually a basis for litigation. Lawsuits over process cannot overcome the fact that the trial was negative by its own primary analysis.

Regulatory experts would likely say that calling for more trials is the correct outcome. As FDA guidance and statistical authorities emphasize, when a primary endpoint is missed, one doesn’t put lipstick on the dataset to make it pass. Instead, one either finds a new, adequately powered study design or gathers stronger confirmatory evidence. Vanda’s focus on legal maneuvers and after-the-fact data trimming comes across as deflecting from this core reality.

In the meantime, the FDA is merely following its structured review timelines and good practice guidelines. Delays stem from legitimate reasons (volume of material, outside litigation, workforce changes), not from any conspiracy to stall Vanda unfairly.

Ultimately, Vanda’s case serves as a reminder that science should drive decision-making, even amid disputes. The trial evidence—supported by FDA and statistical guidance—points to one conclusion: tradipitant’s efficacy was not established by the trial as conducted. Vanda may see a silver lining in exploratory data, but regulators are right to hold the company to proven standards. Suing over process, in this instance, appears like a costly distraction from the real task: generating reliable clinical proof for tradipitant.


Saturday, April 26, 2025

Comparison of “Serious Breach” (EMA) vs “Important Protocol Deviation” (FDA)

The International Council for Harmonisation (ICH) E3 Guideline: Structure and Content of Clinical Study Reports Questions & Answers (R1) (2012) provides the definitions of "protocol deviation" and "important protocol deviation," and advises that the term "protocol violation" be avoided and replaced with "protocol deviation":

A protocol deviation is any change, divergence, or departure from the study design or procedures defined in the protocol. 

Important protocol deviations are a subset of protocol deviations that may significantly impact the completeness, accuracy, and/or reliability of the study data or that may significantly affect a subject's rights, safety, or well-being. For example, important protocol deviations may include enrolling subjects in violation of key eligibility criteria designed to ensure a specific subject population or failing to collect data necessary to interpret primary endpoints, as this may compromise the scientific value of the trial.

Protocol violation and important protocol deviation are sometimes used interchangeably to refer to a significant departure from protocol requirements. The word “violation” may also have other meanings in a regulatory context. However, in Annex IVa, Subject Disposition of the ICH E3 Guideline, the term protocol violation was intended to mean only a change, divergence, or departure from the study requirements, whether by the subject or investigator, that resulted in a subject’s withdrawal from study participation. (Whether such subjects should be included in the study analysis is a separate question.)

To avoid confusion over terminology, sponsors are encouraged to replace the phrase “protocol violation” in Annex IVa with “protocol deviation”, as shown in the example flowchart in the ICH Q&A document. Sponsors may also choose to use another descriptor, provided that the information presented is generally consistent with the definition of protocol violation provided above.

In adopting and implementing the ICH E3 guidelines, the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) differ in certain respects. In particular, the EMA guidelines introduce a new term: “serious breach.” While the concept of a “serious breach” overlaps with that of an “important protocol deviation,” subtle distinctions remain between the two terms.

The EMA’s clinical trial protocol breach guideline (CTR Article 52) uses the term “serious breach” to mean a protocol or regulation violation with significant impact on a trial. In contrast, FDA’s protocol deviations guidance (Draft version, December 2024) adopts ICH definitions and uses “important protocol deviation” for critical departures from the protocol. We compare these terms by definition, thresholds, reporting, examples, and implications.

Definition

  • Serious breach (EMA/CTR): Any deviation from the approved protocol (or from the EU CTR) that is likely to affect subject safety, rights, or data reliability to a significant degree. This is a legal term under the EU CTR (Art. 52).

  • Important protocol deviation (FDA/ICH): A subset of protocol deviations that might significantly affect data completeness, accuracy, or reliability, or significantly affect a subject’s rights, safety, or well-being. (FDA guidance aligns with ICH E3(R1).)

Criteria/Threshold

  • Serious breach (EMA/CTR): The threshold is “likely to affect to a significant degree” at least one of: safety, rights, or data integrity. The impact must be substantial (regulatory text: “significant degree”). In practice, breaches that systematically endanger participants or core data are serious.

  • Important protocol deviation (FDA/ICH): The threshold is “might significantly affect” key trial aspects. Important deviations are those that could undermine data quality or subject welfare – for example, affecting critical-to-quality factors (ICH E8(R1)) such as eligibility, endpoint collection, randomization, or safety monitoring. Deviations not reaching “significant” impact are considered minor.

Reporting Obligations

  • Serious breach (EMA/CTR): Mandatory notification to regulators. The sponsor must report via CTIS (the EU portal) without undue delay, and within 7 calendar days of becoming aware. The sponsor is responsible for the report (it may delegate CTIS submission). All serious breaches must be entered into the EU database (CTIS). Investigators and CROs must promptly inform the sponsor of any suspected serious breach. Failure to report can trigger GCP inspection findings.

  • Important protocol deviation (FDA/ICH): No direct mandatory FDA notification. FDA does not require sponsors to notify FDA of individual protocol deviations. Instead, FDA recommends that investigators promptly report important deviations to the sponsor and IRB, and that sponsors document them. Specifically, investigators should report all deviations to the sponsor (highlighting the important ones), and device trials must report emergency deviations to the sponsor/IRB within about 5 days. Sponsors should capture important deviations in oversight and include them in regulatory submissions: FDA guidance advises sponsors to discuss important deviations in the Clinical Study Report (NDA/BLA) and to list all deviations in the Study Data Tabulation Model (SDTM) datasets (with an indicator of importance).

Examples

  • Serious breach (EMA/CTR): The EMA guideline’s Appendix I (non-exhaustive) includes situations such as: dosing a subject with the wrong drug or dose (e.g. incorrect IMP, wrong frequency); giving IMP to a pregnant subject without the protocol-required test; or systematically failing to administer required therapy. Other examples: temperature excursions unverified by corrective action, severe informed consent lapses, etc. These all have significant safety/data impact.

  • Important protocol deviation (FDA/ICH): FDA guidance examples highlight failures that impair safety or data. For instance: skipping key safety assessments (e.g. missing lab tests or not administering study drug per specification); giving a prohibited concomitant treatment; failure to obtain valid informed consent or protect private data; dosing errors (wrong treatment or dose); non-adherence to randomization; enrolling ineligible subjects or missing primary endpoint data; or unblinding inappropriately. These deviations affect subject protection or critical data (as listed in the FDA document).

Consequences/Implications

  • Serious breach (EMA/CTR): Regulatory action. Reporting a serious breach obligates Member States to evaluate it. Some breaches may require corrective actions overseen by regulators; others may trigger inspections or trial suspension. Failure to have a proper breach-reporting system can itself be a GCP inspection finding. If serious breaches reveal fundamental trial defects (e.g. safety is compromised), the trial approval or application may be withdrawn. In summary, a serious breach can lead to heightened regulatory scrutiny, mandatory CAPA plans, or more severe sanctions.

  • Important protocol deviation (FDA/ICH): Data validity risks. The FDA guidance notes that frequent or severe important deviations can compromise a trial’s validity. Reviewers may judge a study “not adequate and well-controlled” if key deviations occur (e.g. wrong enrollment or missing data). As a result, sponsors are urged to design protocols to minimize these issues (via “quality-by-design”). Important deviations do not immediately trigger FDA enforcement, but they must be documented; pervasive deviations could lead FDA to question the reliability of submitted data. Investigators/IRBs also review important deviations for participant safety.

Key differences: A serious breach is a legal CT regulation concept requiring prompt notification to EU authorities; by contrast, an important protocol deviation is a statistical/GCP concept from FDA/ICH guidance, focused on documenting major protocol departures in the trial record. The EMA rule imposes timely reporting to regulators (CTIS) for serious breaches, whereas the FDA guidance places emphasis on internal reporting and documentation (investigator→sponsor/IRB, sponsor→study report) rather than immediate FDA notification.
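For sponsors who track deviations programmatically, the FDA/ICH notion of "important" can be applied as a classification step before the deviation listings are assembled. The sketch below is hypothetical: the category names and the importance flag are invented for illustration and are not official SDTM variables:

```python
# Categories the FDA guidance and ICH E3(R1) discussion above call out as
# potentially "important" (significant impact on data integrity or welfare).
# These identifiers are invented for this sketch.
IMPORTANT_CATEGORIES = {
    "eligibility_violation",     # enrolled despite key inclusion/exclusion criteria
    "missing_primary_endpoint",  # data needed for the primary analysis not collected
    "dosing_error",              # wrong treatment or dose administered
    "randomization_error",       # non-adherence to randomization
    "informed_consent_failure",  # invalid or missing consent
    "unblinding",                # inappropriate unblinding
}

def classify_deviation(category):
    """Return 'IMPORTANT' or 'MINOR' per the rule of thumb above."""
    return "IMPORTANT" if category in IMPORTANT_CATEGORIES else "MINOR"

deviations = [
    {"usubjid": "001", "category": "eligibility_violation"},
    {"usubjid": "002", "category": "visit_window_missed"},  # typically minor
]
for dv in deviations:
    dv["importance"] = classify_deviation(dv["category"])

print([(d["usubjid"], d["importance"]) for d in deviations])
```

In practice the sponsor's deviation taxonomy would be trial-specific and tied to the protocol's critical-to-quality factors; the point is simply that the important/minor distinction is decided by predefined rules, not ad hoc at database lock.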
