Tuesday, January 23, 2018

CDISC (CDASH and SDTM) and Protocol Deviations

I have previously discussed the protocol deviation issues in clinical trials.
It is important to ensure GCP compliance and adherence to the study protocol in clinical trials. However, because clinical trial operations are complex, it is not possible to conduct a study 100% according to the study protocol. It would be a miracle to complete a study without any protocol deviations. Therefore, identifying and documenting protocol deviations becomes a critical task.
After the study, when the clinical study report (CSR) is prepared, there should be a section describing the protocol deviations; ICH guideline E3, Structure and Content of Clinical Study Reports, includes a dedicated section for protocol deviations in its required structure.
Protocol deviations also have an impact on the statistical analysis side. Protocol deviations do not result in subjects being excluded from the full analysis set (usually the intention-to-treat population); however, important protocol deviations may result in subjects being excluded from the per-protocol population. For pivotal studies, an analysis based on the per-protocol population is typically performed as a sensitivity analysis to evaluate the robustness of the study results.
If the results of the main analysis (usually based on the intention-to-treat population) are negative, the per-protocol analysis may not be very meaningful. If the results of the main analysis are positive, the per-protocol analysis becomes important. If there are a lot of protocol deviations in a study, it may trigger the regulatory reviewer’s scrutiny and dampen confidence in the study results.
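To make this concrete, here is a minimal illustration (not taken from ICH or CDISC guidance) of running the same comparison on the full analysis set and on a per-protocol set that excludes subjects with important deviations. The data and the column names (trt, response, important_dv) are hypothetical; real studies would use formal analysis data sets and models.

    import pandas as pd

    # Hypothetical subject-level analysis data: one row per subject.
    # important_dv flags subjects with at least one important protocol deviation.
    adsl = pd.DataFrame({
        "subjid":       ["001", "002", "003", "004", "005", "006"],
        "trt":          ["Active", "Placebo", "Active", "Placebo", "Active", "Placebo"],
        "response":     [12.1, 8.4, 10.9, 7.8, 11.5, 9.0],
        "important_dv": [False, False, True, False, False, True],
    })

    def trt_difference(df):
        """Mean response difference (Active - Placebo) in the given analysis set."""
        means = df.groupby("trt")["response"].mean()
        return means["Active"] - means["Placebo"]

    fas = adsl                         # full analysis set: all randomized subjects
    pp = adsl[~adsl["important_dv"]]   # per-protocol set: drop important deviations

    print(f"FAS estimate: {trt_difference(fas):.2f}")
    print(f"PP  estimate: {trt_difference(pp):.2f}  (sensitivity analysis)")

If the two estimates agree, the sensitivity analysis supports the robustness of the main result; a large discrepancy invites scrutiny of the deviations themselves.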
CDISC’s CDASH (the guidelines for case report form design) and SDTM (the guidelines for the standardized tabulation data structure) discuss protocol deviations in detail. While violations of inclusion/exclusion criteria (also called study entry criteria or eligibility criteria) may also be considered protocol deviations, the CDISC discussions cover only protocol deviations that occur after study start (i.e., after subjects are randomized into the study and/or receive the first dose of the study drug). Violations of inclusion/exclusion criteria should be collected separately in the IE form (domain).
CDASH recommends identifying protocol deviations (the DV domain) from other sources rather than from a case report form. It states:
         5.14.1 Considerations Regarding Usage of a Protocol Deviations CRF
The general recommendation is to avoid the creation of a Protocol Deviations CRF (individual sponsors can determine whether it is needed for their particular company), as this information can usually be determined from other sources or derived from other data. As with all domains, Highly Recommended fields are included only if the domain is used. The DV domain table was developed as a guide that clinical teams could use for designing a Protocol Deviations CRF and study database should they choose to do so.
In practice, protocol deviations are usually collected and maintained by the clinical team either in a CTMS (clinical trial management system) or in an Excel spreadsheet, even though there are examples (perhaps the future trend) of collecting protocol deviations through electronic data capture (EDC).

If the sponsor decides to use a case report form (paper or electronic) to capture protocol deviations, CDASH recommends the following: 
If a sponsor decides to use a Protocol Deviations CRF, the sponsor should not rely on this CRF as the only source of protocol deviation information for a study. Rather, they should also utilize monitoring, data review and programming tools to assess whether there were protocol deviations in the study that may affect the usefulness of the datasets for analysis of efficacy and safety.


SDTM requires a protocol deviation (DV) domain tabulation data set (dv.xpt) regardless of how the original protocol deviation data are collected.


In ADaM, whether to create an analysis data set for protocol deviations is left to each sponsor's discretion; it is not required. If a summary table of protocol deviations is needed, it can be programmed from the SDTM DV data set and the ADaM ADSL data set.
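Such a summary is typically programmed in SAS; purely as an illustration, here is a minimal pandas sketch that assumes the transport files dv.xpt and adsl.xpt are available locally and that the standard SDTM/ADaM variables USUBJID, DVDECOD, and TRT01P are populated (the file paths are assumptions for this example).

    import pandas as pd

    # Hypothetical file locations: dv.xpt is the SDTM DV tabulation data set and
    # adsl.xpt is the ADaM subject-level data set with the planned treatment (TRT01P).
    dv = pd.read_sas("dv.xpt", format="xport", encoding="utf-8")
    adsl = pd.read_sas("adsl.xpt", format="xport", encoding="utf-8")

    # Attach the treatment group to each deviation record.
    merged = dv.merge(adsl[["USUBJID", "TRT01P"]], on="USUBJID", how="left")

    # Count subjects with at least one deviation per category and treatment group.
    summary = (merged
               .groupby(["TRT01P", "DVDECOD"])["USUBJID"]
               .nunique()
               .rename("n_subjects")
               .reset_index())

    print(summary)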

Thursday, January 11, 2018

Statistician's nightmare - mistakes in statistical analyses of clinical trials

A statistician’s job can be risky too.

In recent news announcements, sponsors have had to disclose errors in their statistical analyses. All of these errors had consequences for the company’s value or even the company’s fate. I hope that the study team members who made these kinds of mistakes still have their jobs. I did have a friend who ended up losing a job because of an incorrectly reported p-value.

Here are two examples. In the first example, the p-value was incorrectly calculated and announced, and later had to be corrected – very embarrassing for the statistician who made the mistake. In the second example, the mistake is more on the programming and data management side. Had the initial results been positive, the sponsor might never have gone back to re-assess the outcomes, and the errors might never have been identified.

Example #1:
Axovant Sciences (NASDAQ:AXON) today announced a correction to the data related to the Company’s investigational drug nelotanserin previously reported in its January 8, 2018 press release. In the results of the pilot Phase 2 Visual Hallucination study, the post-hoc subset analysis of patients with a baseline Scale for the Assessment of Positive Symptoms - Parkinson's Disease (SAPS-PD) score of greater than 8.0 was misreported. The previously reported data for this population (n=19) that nelotanserin treatment at 40 mg for two weeks followed by 80 mg for two weeks resulted in a 1.21 point improvement (p=0.011, unadjusted) were incorrect. While nelotanserin treatment at 40 mg for two weeks followed by 80 mg for two weeks did result in a 1.21 point improvement, the p-value was actually 0.531, unadjusted. Based on these updated results, the Company will continue to discuss a larger confirmatory nelotanserin study with the U.S. Food and Drug Administration (FDA) that is focused on patients with dementia with Lewy bodies (DLB) with motor function deficits. The Company may further evaluate nelotanserin for psychotic symptoms in DLB and Parkinson’s disease dementia (PDD) patients in future clinical studies.
Example #2:  
(note: PE: pulmonary exacerbation; PEBAC: pulmonary exacerbation blinded adjudication committee) 
Re-Assessment of Outcomes
Following database lock and unblinding of treatment assignment, the Applicant performed additional data assessments due to errors identified in the programming/data entry that impacted identification of PEs. This led to changes in the final numbers of PEs. Based on discussion the Applicant had with the PEBAC Chair, it was decided that 10 PEs initially adjudicated by the PEBAC were to be re-adjudicated by the PEBAC using complete and final subject-level information. This led to a re-adjudication by the PEBAC who were blinded to subject ID, site ID, and treatment. Result(s) of prior adjudication were not provided to the PEBAC.
 Efficacy results presented in Section 7.3 reflect the revised numbers. Further details regarding the reassessment by the PEBAC are discussed in Section 7.3.6.
7.3.6 Primary Endpoint Changes after Database Lock and Un-Blinding
Following database lock and treatment assignment un-blinding, the Applicant performed additional data assessments leading to changes in the final numbers of PEs. Specifically, per the Applicant, during a review of the ORBIT-3 and ORBIT-4 data occurring after database locking and data un-blinding (for persons involved in the data maintenance and analyses), ‘personnel identified errors in the programming done by Accenture Inc. (data analysis contract research organization (CRO)) and one data entry error that impacted identification of PEs. Because of the programming errors, the Applicant states that they chose to conduct a ‘comprehensive audit of all electronic Case Report Forms (eCRFs) entries for signs, symptoms or laboratory abnormalities as entered in the PE worksheets for all patients in ARD-3150-1201 and ARD-3150-1202’ (ORBIT-3 and ORBIT-4). From this audit, the Applicant notes ‘that no further programming errors’ were identified but instead 10 PE events (three from ORBIT-4 and seven from ORBIT-3) were found for which the PE assessment by the PEBAC was considered potentially incorrect. This was based on the premise that subject-level data provided to the PEBAC during the original PE adjudication were updated at the time of the database lock. Reasons provided are: 1) the clinical site provided update information to the eCRF after
 the initial PEBAC review (2 PEs), 2) incorrect information was supplied to the PEBAC during initial adjudication process (2 PEs), 3) inconsistency between visit dates and reported signs and symptoms (6 PEs). After discussion with the PEBAC Chair, it was decided that these 10 PEs initially deemed PEs by the PEBAC were to be re-assessed by the PEBAC using complete and final subject-level information. This led to a re-adjudication by the PEBAC during a closed session on January 25, 2017. This re-adjudication was coordinated by Synteract (Applicant’s CRO) who provided data to the PEBAC that were blinded to subject ID, site ID, and treatment. In addition, result(s) of prior adjudication were not provided. While the PEBAC was provided with subject profiles for other relevant study visits, the PEBAC focus was only on the selected visits for which data were updated or corrected.
 Because of the identified programming errors and PEBAC re-adjudication, there were two new first PEs added to the Cipro arm in ORBIT-3 and two new first PEs added to the placebo arm in ORBIT-4. Given these changes, the log-rank p-value in ORBIT-4 changed from 0.058 to 0.032 (when including sex and prior PEs strata). The p-value in ORBIT-3 changed from 0.826 to 0.974 remaining insignificant. These changes are summarized in Table 9. Note that there were no overall changes in the results of the secondary endpoints analyses from changes in PE status described above.

Mistakes in statistical analyses are inevitable if there are no adequate procedures to prevent them. The following procedures can minimize the chance of making mistakes like those in the examples above. 
  • Independent validation process (double programming): the probability that two independent programmers make the same mistake is very low (see the sketch after this list). 
  • Dry-run process: perform the statistical analyses on the dirty (not yet cleaned) data using a dummy randomization schedule, i.e., perform the statistical analyses with real data but fake treatment assignments. The purpose is to do the programming work and check the data up front so that issues and mistakes can be identified and corrected before unblinding. 
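To illustrate the independent-validation idea (in practice this comparison is often done with SAS PROC COMPARE), here is a hypothetical Python sketch that compares a production output against an independently programmed QC output; the file names and key columns are assumptions, not a prescribed process.

    import pandas as pd

    # Hypothetical outputs: the production programmer's summary and the
    # independent QC programmer's summary of the same table shell.
    production = pd.read_csv("ae_summary_production.csv")
    qc = pd.read_csv("ae_summary_qc.csv")

    # Align both outputs on the table's key columns before comparing.
    keys = ["TRT01P", "AEDECOD"]
    production = production.sort_values(keys).reset_index(drop=True)
    qc = qc.sort_values(keys).reset_index(drop=True)

    try:
        # Raises AssertionError describing the first mismatch, if any.
        pd.testing.assert_frame_equal(production, qc, check_like=True)
        print("Outputs match: no discrepancies found.")
    except AssertionError as err:
        print("Discrepancy between production and QC outputs:")
        print(err)

Any discrepancy is then resolved by both programmers before the output is released, which is what makes the two programming streams genuinely independent checks on each other.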




Tuesday, January 02, 2018

Adverse Event Collection: When to Start and When to Stop?

In clinical trials, the most critical safety information is the adverse event (AE). There are numerous guidances and guidelines on AE collection; however, there is still a lot of confusion. The very basic questions are when to start and when to stop AE collection. For example, here are some discussions:

When to start the AE collection?

It is a very common practice in industry-sponsored clinical trials for AE record keeping to begin after informed consent. Adverse events are collected even for patients who signed the informed consent but subsequently failed the inclusion/exclusion criteria during the screening period. If we attend GCP training, it is very likely we will be told that this is how adverse events must be collected in order to be compliant with GCP.

However, the AE definition in the ICH E2A guidance document suggests that adverse events need to be recorded only at or after the first treatment, not from the signing of the informed consent form (ICF). ICH E2A defines an AE as:
Adverse Event (or Adverse Experience) Any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product and which does not necessarily have to have a causal relationship with this treatment. An adverse event (AE) can therefore be any unfavourable and unintended sign (including an abnormal laboratory finding, for example), symptom, or disease temporally associated with the use of a medicinal product, whether or not considered related to the medicinal product.
A. Commonly, the study period during which the investigator must collect and report all AEs and SAEs to the sponsor begins after informed consent is obtained and continues through the protocol-specified post-treatment follow-up period. The ICH E2A guidance document defines an AE as “any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product…”; this definition clearly excludes the period prior to the IMP’s administration (in this context, a placebo comparator used in a study is considered an IMP). Untoward medical occurrences in subjects who never receive any study treatment (active or blinded) are not treatment-emergent AEs and would not be included in safety analyses. Typically, the number of subjects “evaluable for safety” comprises the number of subjects who received at least one dose of the study treatment. This includes subjects who were, for whatever reason, excluded from efficacy analyses but who received at least one dose of study treatment.
There are situations in which the reporting of untoward medical events that occur after informed consent but prior to the IMP’s administration may be mandated by the protocol and/or may be necessary to meet country-specific regulatory requirements. For example, it is considered good risk management for sponsors to require the reporting of serious medical events caused by protocol-imposed screening/diagnostic procedures, and by medication washout or no-treatment run-in periods that precede IMP administration. For instance, subjects in a protocol-mandated washout period, during which they are taken off existing treatments they were receiving before the test article is administered (such as during crossover trials), may experience withdrawal symptoms from removal of the treatment and must be monitored closely. If the severity and/or frequency of AEs occurring during washout periods are considered unacceptable, the protocol may have to be modified or the study halted. Some protocols may also require the structured collection of signs and symptoms associated with the disease under study prior to IMP administration to establish a baseline against which post-treatment AEs can be compared. In some countries, regulatory authorities require the expedited reporting of these events to assess the safety of the human research.
For a specific study, the screening procedures and the potential for injury from those procedures should be considered when deciding when to start AE collection. For a study with very minimal or routine screening procedures (such as a phase I / clinical pharmacology study in healthy volunteers at a phase I clinic), it may be acceptable to start AE collection at the first treatment. For a study with comprehensive or invasive screening procedures, it is advised that AE collection start once the subject signs the ICF. For example, in a study assessing the effect of a thrombolytic agent in ischemic stroke patients, the screening procedures include a CT scan and an arteriogram to assess the location and size of the clot – procedures that can cause adverse effects / injuries to the study participants. In this situation, it is strongly advised that AEs be collected from ICF signing.

If AEs are collected from ICF signing, they can be divided during the statistical analysis into non-treatment-emergent AEs (non-TEAEs) and treatment-emergent AEs (TEAEs). Non-TEAEs are AEs that occurred prior to the first study treatment, and TEAEs are AEs with an onset date/time at or after the first study treatment. Non-TEAEs and TEAEs are summarized separately, and the extensive safety analyses are mainly based on the TEAEs.
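As a minimal sketch of that split, assume an AE file with the onset date (AESTDTC) and a subject-level file with the first-dose date/time (TRTSDTM); the variable names follow CDISC conventions, but the file names and the exact derivation shown here are illustrative only, not a prescribed algorithm.

    import pandas as pd

    ae = pd.read_csv("ae.csv", parse_dates=["AESTDTC"])      # one row per adverse event
    adsl = pd.read_csv("adsl.csv", parse_dates=["TRTSDTM"])  # one row per subject

    # Bring the first-dose date/time onto each AE record.
    ae = ae.merge(adsl[["USUBJID", "TRTSDTM"]], on="USUBJID", how="left")

    # TEAE: onset at or after the first dose; non-TEAE: onset after ICF signing
    # but before the first dose.
    ae["TRTEMFL"] = (ae["AESTDTC"] >= ae["TRTSDTM"]).map({True: "Y", False: "N"})

    print(ae.groupby("TRTEMFL")["USUBJID"].nunique())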

When to Stop the AE collection?

It is even murkier when it comes to stopping AE collection, because the end of the study is trickier to define than the start. A study may have a follow-up period after the completion of the study treatment, and a subject may discontinue the study treatment early but remain in the study to the end.

There is no clear guidance on how long after the last study treatment AEs need to be collected. In practice, it is common to continue reporting AEs following the last study treatment – the post-treatment period may be 7 days or 30 days following the last treatment. The decision about AE collection during the follow-up period should be based on the half-life of the study drug, whether there are AEs of special interest related to the investigational drug, and whether the study is in a pediatric or adult population.
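If the protocol specifies a fixed post-treatment reporting window (say, 30 days after the last dose), a simple programmatic check of whether reported AEs fall inside that window might look like the hypothetical sketch below; the 30-day window, the file names, and the variable names (TRTEDTM for the last dose date, AESTDTC for AE onset) are assumptions for this example.

    import pandas as pd

    ae = pd.read_csv("ae.csv", parse_dates=["AESTDTC"])      # AE onset dates
    adsl = pd.read_csv("adsl.csv", parse_dates=["TRTEDTM"])  # last-dose date per subject

    ae = ae.merge(adsl[["USUBJID", "TRTEDTM"]], on="USUBJID", how="left")

    # End of the protocol-specified AE collection window: last dose + 30 days.
    ae["AE_WINDOW_END"] = ae["TRTEDTM"] + pd.Timedelta(days=30)

    # Flag AEs reported after the collection window closed (worth a data query).
    late = ae[ae["AESTDTC"] > ae["AE_WINDOW_END"]]
    print(f"{len(late)} AE record(s) start after the 30-day post-treatment window")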

In oncology clinical trials, it is typical not to collect adverse events during the long-term follow-up period. Adverse events may be collected only for a short period after the last treatment, for example 30 days, 3 months, or 6 months following the last study treatment. During the long-term follow-up period, only the study endpoints (tumor-related events) such as death, tumor progression, or secondary malignancy are collected.

Should adverse events be collected for subjects who discontinued the study treatment early? There is a good question-and-answer discussion at firstclinica.com: “AE Reporting for Discontinued Patient.”
QUESTION: What are the investigator's responsibilities in terms of reporting the post-discontinuation adverse events? On one hand, since the patient discontinued from the study, some think that the investigator has no right to review the patient's clinical record under HIPAA (authorization terminated) or informed consent regulations (consent withdrawn) and consequently has no authority or responsibility to report the adverse events. On the other hand, there does not appear to be any variances to an investigator's IND obligations (even when a patient discontinues from the study) with respect to reporting adverse events according to 21 CFR 312.64. Also, would the investigator's reporting responsibilities be the same for Situation A and Situation B?
ANSWER:
FDA has stated that clinical investigators need to capture information about adverse effects resulting from the use of investigational products, whether or not they are conclusively linked to the product. The fact that a subject has voluntarily withdrawn from the study does not preclude FDA's need for such information. In fact, withdrawal is often due to adverse effects, some already realized and others beginning and that will later progress. For your first scenario, that is obviously not a real problem since the investigator is also the individual's private physician and obviously has this information. While you are correct to worry about privacy issues in both scenarios, the public welfare is a larger issue. Failure to capture and report adverse effects, particularly serious adverse effects, will not only be a problem for the individual in question but potentially for other actual and potential study subjects. It is also essential to capture the information so that the total picture is available to FDA when a marketing decision is imminent. The individual in question may be one of very few who would evidence the particular adverse effect, particularly given the limited number of individuals included in a study. However, this information could have major ramifications for the potentially large population of users of the drug once legally marketed. How to best go about collecting the details of the adverse effect is obviously a different issue.
In summary, the AE collection period can be depicted as follows, where TEAE stands for treatment-emergent adverse event: