Friday, April 28, 2017


Last week, I attended the 10th Annual Conference on Statistical Issues in Clinical Trials, held on the campus of the University of Pennsylvania. This year's topic was "Current Issues Regarding Data and Safety Monitoring Committees in Clinical Trials".

The conference was intended to discuss the current issues and emerging challenges in the practice of data monitoring committees; however, the issues and challenges discussed were not new. They have been known for a long time and remain unresolved.

One thing I like about this one-day annual conference is that all presentation slides, including those from the panel discussions, are posted online. The slides for this year's DMC discussions are available at:

In the early days, the term DSMB (data and safety monitoring board) was used. After FDA issued its guidance "The Establishment and Operation of Clinical Trial Data Monitoring Committees for Clinical Trial Sponsors", the term DMC (data monitoring committee) became popular. At this year's conference, a new term, DSMC (data and safety monitoring committee), was also used.

There were several talks about training for DMC members and the need to train more people who can serve on data monitoring committees. I felt that the discussion about training DMC members targeted the wrong audience. Instead of targeting statisticians, the focus should be on the MDs in the medical field. Very often, we have difficulty finding MDs who can serve on a data monitoring committee, let alone MDs who have prior experience serving on one. For a large-scale clinical trial, there may be several committees: a steering committee, an event or endpoint adjudication committee, and a data monitoring committee. There is often a shortage of members for these committees.

Nowadays, study protocols are getting more and more complicated, and DMC members may not understand the complexity of the trial design. This is especially true for clinical trials using adaptive designs and Bayesian designs, which poses additional challenges to the DMC members.

We used to debate whether the data monitoring committee should be given semi-unblinded materials for review, in which the treatment groups are designated as "X" or "Y" instead of by their true treatment assignments. The conference presentations stated loudly that the DMC should have sole access to interim results with complete unblinding (not semi-unblinding) on the relative efficacy and relative safety of the interventions.

Here are some points from Dr. Fleming's presentation "DMCs: Promoting Best Practices to Address Emerging Challenges":
Mission of the DMC
  • To Safeguard the Interests of the Study Participants
  • To Preserve Trial Integrity and Credibility to enable the clinical trial to provide timely and reliable insights to the broader clinical community
Proposed Best Practices and Operating Principles
  • Achieving adequate training/experience in DMC process
  • Indemnification
  • Addressing confidentiality issues
  • Implementing procedures to enhance DMC independence
      - DMC meeting format
      - Creating an effective DMC Charter
      - DMC recommendations through consensus, not by voting
      - DMC contracting process
  • Defining the role of the Statistical Data Analysis Center
  • Better integration of regulatory authorities in DMC process


Friday, April 07, 2017

Estimate the number of subjects needed to detect at least one event and the rule of three

It is sometimes important to know how likely a study is to detect a relatively uncommon event, particularly if that event is severe, such as intracranial hemorrhage (ICH), progressive multifocal leukoencephalopathy (PML), bone marrow failure, or life-threatening arrhythmia. A great many people must be observed in order to have a good chance of detecting even one such event, much less to establish a relatively stable estimate of its frequency. For most clinical trials, the sample size is planned to be sufficient to detect the main efficacy effects, and it is likely to fall well short of the number needed to detect these rare events. Sometimes, however, we do see requests from FDA for a sample size large enough to detect at least one rare event.

Cempra is a pharmaceutical company in Chapel Hill, North Carolina, with a very promising antibiotic drug. However, their NDA submission was not approved by FDA, not because of efficacy, but because of a safety issue (specifically, FDA's concern about potential liver toxicity). FDA's complete response letter (CRL, in other words, a rejection letter) says:
“To address this deficiency, the FDA is recommending a comparative study to evaluate the safety of solithromycin in patients with CABP. Specifically, the CRL recommends that Cempra consider a study of approximately 9,000 patients exposed to solithromycin to enable exclusion of serious drug induced liver injury (DILI) events occurring at a rate of approximately 1:3000 with a 95 percent probability.”
A request for a study with this many subjects is unusual for a pre-market study, and it would certainly not be feasible for an antibacterial drug in the community-acquired bacterial pneumonia indication. It is interesting to see that FDA applied the 'rule of three' in proposing the sample size.

The rule of three says: to have a good chance of detecting an event that occurs at a rate of 1/x, one must observe 3x people. For example, to detect at least one event when the underlying rate is 1/1,000, one would need to observe 3,000 people. In the solithromycin case, to detect at least one DILI event when the background rate of DILI is 1/3,000, the number of subjects to be observed would be 3 x 3,000 = 9,000.
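The arithmetic behind the rule can be checked directly: the probability of observing at least one event among n subjects, when each subject independently has event probability p, is 1 - (1 - p)^n. A short Python sketch (my own illustration; the blog's code examples are otherwise in SAS) confirms that 9,000 subjects give roughly a 95% chance of seeing at least one event occurring at a 1/3,000 rate:

```python
# Probability of observing at least one event among n subjects
# when each subject independently has event probability p.
def prob_at_least_one(p, n):
    return 1.0 - (1.0 - p) ** n

# FDA's solithromycin scenario: DILI rate 1/3,000,
# rule of three suggests 3 x 3,000 = 9,000 subjects.
p = 1 / 3000
print(prob_at_least_one(p, 9000))   # ~0.95
```

The 95% figure is no accident: 1 - (1 - 1/x)^(3x) is approximately 1 - e^(-3) ≈ 0.95 for large x, which is exactly the "95 percent probability" quoted in the CRL.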

I had previously written an article about the rule of three from the angle of calculating the 95% confidence interval when no events occur among x subjects. The same rule can be applied when estimating the sample size needed to detect at least one event.
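The two views of the rule are connected numerically: with zero events observed in n subjects, the exact one-sided 95% upper confidence bound U for the event rate solves (1 - U)^n = 0.05, so U = 1 - 0.05^(1/n), which is just under 3/n. A small Python sketch (my own illustration, not from the article) makes the comparison:

```python
# Exact one-sided 95% upper confidence bound for the event rate
# when 0 events are observed in n subjects: solve (1 - U)^n = 0.05.
def upper_bound_zero_events(n):
    return 1.0 - 0.05 ** (1.0 / n)

n = 3000
u = upper_bound_zero_events(n)
print(u, 3 / n)   # the exact bound falls just under the 3/n approximation
```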

In one of our studies with a thrombolytic agent, FDA was concerned about the potential side effect of intracranial hemorrhage (ICH). In its comments on the IND, FDA asked us to have a safety database containing at least 560 subjects exposed to the tested thrombolytic agent. They stated:
In order to permit the occurrence of 1 ICH event without necessitating halting the study, the safety database would have to have at least 560 subjects exposed to your XYZ drug with only one ICH event to keep the ICH rate under 1%. Please comment.
In this case, the background rate of ICH is assumed to be 1%. Based on the rule of three, the safety database would need 3 x 100 = 300 subjects. However, the number 560 comes from a direct calculation assuming a binomial distribution.

Based on an ICH rate of 1% and assuming a binomial distribution, if one ICH event is observed, 560 subjects are needed to keep the upper bound of the exact two-sided 95% confidence interval below 1%.

The small SAS program below can be used for the calculation. Adjust the sample size and see how the exact 95% confidence interval changes.

 data test;
   input ICH $ count;
   datalines;
 Have 1
 No   560
 ;
 run;

 proc freq data=test;
   weight count;
   tables ICH / binomial (p=0.01) alpha=0.05 cl;
     *p=0.01 indicates the background rate;
   exact binomial;
     *obtain the exact confidence limits;
 run;
In clinical studies with a potential side effect involving rare events, if we need to base the safety database on detecting at least one such event, the sample size can be very large. As we see from the calculation, or by simply applying the rule of three, the sample size depends largely on the assumed background event rate. The background event rate usually comes from the literature, and the literature usually reports a variety of results. Using DILI as an example, the background rate reported in the literature can vary depending on whether we are talking about the rate in the normal population, in community-acquired bacterial pneumonia patients, or in community-acquired bacterial pneumonia patients treated with other antibiotics.