In an earlier discussion, I compared SOPs (Standard Operating Procedures) and WPs (Working Procedures).
Pharmaceutical companies, biotechnology companies, clinical research organizations, and vendors providing services in the clinical trials area must have established SOPs, and their employees must follow them. The adequacy of the SOPs and compliance with the established SOPs are key targets in audits (whether by the sponsor or by regulatory agencies).
As the regulatory agency for drug, biological product, and device clinical trials and marketing authorization approvals, FDA also has its own established working procedures; FDA review staff should be trained on these working procedures and should follow them. I hope that compliance with these working procedures within FDA is also monitored or audited.
Interestingly, different centers within FDA use different terminologies for their working procedures (see the table below).
CDER (Center for Drug Evaluation and Research) | MAPP
CBER (Center for Biologics Evaluation and Research) | SOPP
CDRH (Center for Devices and Radiological Health) |
Understanding which procedures FDA staff and reviewers follow can help the industry prepare regulatory submission materials and make sure that the documents FDA reviewers look for are included in the submission package. For example, FDA MAPP 6010.4 “Good Review Practice: Statistical Review Template” can be a good reference when preparing the planned analyses and tables. The document provides “Examples of important statistical issues that may affect the results”:
- Breaking the blind
- Unblinded or unplanned interim analyses
- High percentage of dropouts
- Inappropriate imputation for missing values
- Change of primary endpoint during conduct of the trial
- Dropping/adding treatment arms
- Sample size modification
- Inconsistency of results across subgroups
- Type I error inflation due to multiplicity
- Planned and unplanned adaptations
- Non-Inferiority
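To see why Type I error inflation due to multiplicity makes the list above, the following sketch (a simulation I wrote for illustration; it is not from the MAPP) estimates the family-wise false-positive rate when several endpoints are each tested at α = 0.05 with no adjustment. Under the null hypothesis, each p-value is Uniform(0, 1), so the chance of at least one spurious "win" grows quickly with the number of endpoints.

```python
import random

random.seed(0)

def familywise_error_rate(n_endpoints, alpha=0.05, n_trials=100_000):
    """Estimate the probability of at least one false-positive endpoint
    when each endpoint is tested at level alpha with no multiplicity
    adjustment. Under the null, each p-value is Uniform(0, 1)."""
    hits = 0
    for _ in range(n_trials):
        pvals = [random.random() for _ in range(n_endpoints)]
        if min(pvals) < alpha:  # at least one endpoint "significant"
            hits += 1
    return hits / n_trials

# Theoretical value for independent endpoints: 1 - (1 - alpha)^k
for k in (1, 2, 5, 10):
    print(f"{k:2d} endpoints: estimated FWER = {familywise_error_rate(k):.3f}")
```

With five unadjusted endpoints the family-wise error rate is already around 1 − 0.95⁵ ≈ 0.23, which is why multiplicity adjustments (e.g., Bonferroni, testing each endpoint at α/k) or a pre-specified testing hierarchy are expected in the SAP.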
MAPP 6010.3 Rev. 1 “Attachment B: Clinical Safety Review of an NDA or BLA” can be a good reference for understanding how safety data should be presented. The document provides detailed review guidance on safety data, including adverse events, vital signs, laboratory data, ECGs, and so on. In the section “Standard Analyses and Explorations of Laboratory Data,” it specifically discusses what types of laboratory analysis results should be presented and notes that hypothesis tests comparing the laboratory results are discouraged.
In general, this review should include three standard approaches to the analysis of laboratory data, noted as: (1) Analyses Focused on Measures of Central Tendency; (2) Analyses Focused on Outliers or Shifts From Normal to Abnormal; and (3) Marked Outliers and Dropouts for Laboratory Abnormalities. The first two analyses are based on comparative trial data. The third analysis should focus on all subjects in the phase 2 to phase 3 experience. Analyses are intended to be descriptive and should not be thought of as hypothesis testing. P-values or confidence intervals can provide some evidence of the strength of the finding, but unless the trials are designed for hypothesis testing (rarely the case), these data should be thought of as descriptive. Generally, the magnitude of change is more important than the p-value for the difference.
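As a toy illustration of the second approach described above (shifts from normal to abnormal), the sketch below cross-tabulates baseline versus post-baseline normal/abnormal status for a single hypothetical laboratory parameter. The subject data and the assumed upper limit of normal (ULN) are invented for illustration; the point is that the output is a purely descriptive shift table, with no hypothesis test attached.

```python
from collections import Counter

ULN = 40.0  # assumed upper limit of normal for this toy parameter

# (subject_id, baseline_value, post_baseline_value) -- invented data
records = [
    ("001", 22.0, 35.0),
    ("002", 45.0, 50.0),
    ("003", 30.0, 44.0),
    ("004", 28.0, 27.0),
    ("005", 41.0, 38.0),
]

def status(value, uln=ULN):
    """Classify a lab value as normal or abnormal against the ULN."""
    return "abnormal" if value > uln else "normal"

# Descriptive cross-tabulation of baseline vs. post-baseline status
shifts = Counter((status(base), status(post)) for _, base, post in records)

for (base, post), n in sorted(shifts.items()):
    print(f"baseline {base:>8} -> post-baseline {post:>8}: {n}")
```

In a real safety review this table would be produced per treatment arm and per lab parameter, so that reviewers can see (for example) how many subjects shifted from normal to abnormal on drug versus control, with the magnitude of change carrying more weight than any p-value.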
MAPP 6030.9 “Good Review Practice: Good Review Management Principles and Practices for Effective IND Development and Review” discusses SAP submission during the IND stage:
Statistical analysis plan. Submission of a detailed statistical analysis plan (SAP) in the initial protocol submission for phase 3 protocols is not required by CDER regulations. However, review staff should strongly encourage sponsors to include the SAP in the initial protocol submission, because phase 3 protocols generally include a detailed section devoted to statistical methods that are closely linked to trial design.