Sunday, September 26, 2010

Do we really need a statistical solution for everything?

I came back from last week's FDA/Industry Statistics Workshop with more questions than answers. While the theme for this year's workshop was risk-benefit assessment, the old regular issues such as multiplicity, missing data, meta-analysis, adaptive design, subgroup analysis, ... are still the hot topics. For both the new topic (risk-benefit assessment) and the old ones, more questions are being raised, and for many of them there are no clear answers.

For adaptive design and non-inferiority clinical trials, FDA issued draft guidances early this year; however, both guidances read more like technical reports for statisticians and are unlikely to be understood by non-statisticians. For the non-inferiority design, more questions were raised about the subjectivity/objectivity in determining the non-inferiority margin. For risk-benefit assessment, perhaps we have to rely on medical experts in the specific therapeutic area to make a subjective judgment based on separate marginal analyses of benefit (efficacy) and risk (safety), instead of on various weighted modeling approaches. Perhaps there is no simple mathematical or statistical solution for benefit-risk assessment. I believe the advisory committee members make subjective judgments based on their experience when voting in favor of or against a product - like a jury's verdict.

It is not a good thing that, as statisticians, we come up with complicated statistical methodology that we cannot explain well to non-statisticians. Eventually, we may need to go back to basics and follow the KISS (keep it simple) principle. Several years ago, complicated math that nobody could really understand contributed to the financial crisis. A working paper, "Computational Complexity and Informational Asymmetry in Financial Products" by Sanjeev Arora, Boaz Barak, Markus Brunnermeier, and Rong Ge, sheds some light on the complex mathematical models upon which collateralized debt obligations and other derivatives are based.

Sunday, September 19, 2010

Number of Events vs. Number of Subjects with Events

Often in clinical trial safety data analysis, people confuse the basic concepts of "number of events" and "number of subjects with events". Obviously, the number of events counts events (event level), while the number of subjects counts subjects (subject level).

Using an adverse event (AE) summary as an example, the difference between "the number of AEs" and "the number of subjects with AEs" may not be obvious to everyone. For the number of AEs, since the same subject can have more than one adverse event, we cannot really calculate a percentage. It is a mistake to divide the number of events by the number of subjects in a treatment arm: you could end up with an unreasonably large "percentage" (sometimes larger than 100%).

For the number of subjects with AEs, we always count by subject. If a subject has more than one AE, that subject is counted only once. Therefore, the numerator (the number of subjects with AEs) is never larger than the denominator (the number of subjects exposed), so we can calculate a percentage, and it will always be at most 100%. We call this percentage the 'incidence of AEs'. The following table (extracted from a document on FDA's website) is an example of an AE presentation counted by subjects.


The statistical summary tables for adverse events are often constructed to present both the total # of AEs and the # of subjects with AEs (or, more precisely, the number of subjects with at least one AE). However, no percentage is calculated for the total # of AEs. If readers are not clear about the distinction between "# of AEs" and "# of subjects with AEs", they may question the correctness of the summary table. Very often, they might count the # of subjects with AEs, compare it with the # of AEs, and find discrepancies (of course there will be discrepancies). The reason? Some subjects have more than one AE.
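As a toy illustration (the subject IDs, AE terms, and counts below are entirely made up, not from any real trial), the event-level and subject-level counts can be sketched in a few lines of Python:

```python
# Hypothetical AE records for one treatment arm: (subject_id, AE term).
ae_records = [
    ("S01", "Headache"),
    ("S01", "Nausea"),
    ("S01", "Headache"),   # the same subject can report repeat events
    ("S02", "Rash"),
    ("S03", "Headache"),
]
n_exposed = 10  # subjects exposed in this (hypothetical) arm

n_events = len(ae_records)                                # event-level count
n_subjects_with_ae = len({sid for sid, _ in ae_records})  # subject-level count

print(n_events)            # 5 AEs in total - no percentage for this count
print(n_subjects_with_ae)  # 3 subjects with at least one AE
print(f"incidence of AEs = {n_subjects_with_ae / n_exposed:.0%}")  # 30%
```

The deduplication via the set of subject IDs is exactly why the two numbers disagree whenever any subject has more than one AE.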

In some situations, we can indeed calculate a rate or proportion for the # of events (number of adverse events). For example, for the total number of AEs, we could calculate how many of the events are mild, moderate, or severe; you can see this information presented in the package inserts of some approved drugs on the market. We could also calculate an incidence rate of AEs by using the total number of AEs as the numerator and the total number of infusions, the total number of doses distributed, or the total number of person-years as the denominator. In these situations, we should always understand what the numerator is and what the denominator is; for a good presentation, the numerator and denominator used in the calculation should be specified in a footnote of the summary table.

A few years ago, I saw a commercial presentation comparing a company's product safety with competing products. When they calculated the AE frequency for their own product, they used (total number of AEs) / (total number of doses); for the competing products, they used (total number of AEs) / (total number of subjects). Since each subject receives more than one dose, the calculated AE frequency for their own product was markedly lower. This trick is wrong and unethical.
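To make the denominator issue concrete, here is a minimal sketch with made-up totals showing how the same number of events yields very different-looking rates depending on what it is divided by:

```python
# Hypothetical totals for event-level rate calculations (not real data).
total_aes = 48          # total number of AEs observed (event level)
person_years = 120.0    # total follow-up accrued across all subjects
total_infusions = 600   # total infusions administered

rate_per_py = total_aes / person_years           # AEs per person-year
rate_per_infusion = total_aes / total_infusions  # AEs per infusion

print(f"{rate_per_py:.2f} AEs per person-year")    # 0.40
print(f"{rate_per_infusion:.2f} AEs per infusion") # 0.08
```

Comparing a per-infusion rate for one product against a per-subject rate for another, as in the anecdote above, is comparing numbers on different scales.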

Sunday, September 12, 2010

Restricted randomization, stratified randomization, and forced randomization

Randomization is a fundamental aspect of randomized controlled trials (RCTs). When we judge the quality of a clinical trial, whether or not it is randomized is a critical point to consider. However, there are different ways of implementing randomization, and some of the terminology can be confusing - for example, 'restricted randomization', 'stratified randomization', and 'forced randomization'.

Without any restriction, randomization is called 'simple randomization': no blocks, no stratification. Simple randomization will usually not achieve exactly balanced treatment assignments when the # of randomized subjects is small. In contrast, restricted randomization refers to any procedure used with random assignment to achieve balance between study groups in size or baseline characteristics. The first technique for restricted randomization is blocking. Blocking, or block randomization, is used to ensure that comparison groups will be of approximately the same size. Suppose we are planning to randomize 100 subjects to two treatment groups. With simple randomization, after enrolling all 100 subjects we may have approximately equal numbers in the two groups; however, after enrolling only a small number of subjects (for example, 10), we may see quite a deviation from equal assignment, with no guarantee of 5 subjects in each arm. With blocking (block size = 10), we can ensure that within every 10 subjects, 5 go to each treatment arm.
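The permuted-block idea described above can be sketched in a few lines of Python (the function name, seed, and arm labels are my own for illustration):

```python
import random

def block_randomize(n_subjects, block_size=10, arms=("A", "B"), seed=2010):
    """Permuted-block randomization: within each complete block,
    each arm receives exactly block_size / len(arms) assignments,
    in random order."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    schedule = []
    while len(schedule) < n_subjects:
        block = list(arms) * per_arm  # 5 A's and 5 B's per block of 10
        rng.shuffle(block)            # random order within the block
        schedule.extend(block)
    return schedule[:n_subjects]

schedule = block_randomize(100)
first_block = schedule[:10]
# After any complete block, the arms are exactly balanced:
print(first_block.count("A"), first_block.count("B"))  # 5 5
```

With simple randomization the first 10 assignments could easily be 7/3 or worse; the block guarantees 5/5 at every block boundary.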

Stratified randomization is used to ensure that equal numbers of subjects with one or more characteristics thought to affect the efficacy outcome are allocated to each comparison group. The characteristics (stratification factors) could be the patient's demographics (gender, age group, ...) or disease characteristics (baseline disease severity, biomarkers, ...). If we conduct a randomized, controlled, dose-escalation study, the dose cohort itself can be considered a stratification factor. With stratified randomization, we essentially generate the randomization within each stratum. The # of strata depends on the number of stratification factors used: if we use 4 stratification factors, each with two levels, we will have a total of 16 strata, which means our overall randomization schema will consist of 16 portions, one per stratum. In determining the # of strata, the total number of subjects needs to be considered. Overstratification can make the study design complicated and might also be prone to randomization errors. For example, in a randomization stratified by gender, a male subject could mistakenly be entered as a female subject, so that a randomization number is drawn from the female portion instead of the male portion of the schema. This can affect the overall balance in treatment assignment that we originally planned. A paper by Kernan et al had an excellent discussion of stratified randomization.
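The stratum bookkeeping can be sketched as follows (the factor names and levels are hypothetical, chosen only to show the 2^4 = 16 arithmetic; each stratum gets its own block-randomized list):

```python
from itertools import product
import random

# Hypothetical stratification factors, each with two levels.
factors = {
    "gender": ("male", "female"),
    "age_group": ("<65", ">=65"),
    "severity": ("moderate", "severe"),
    "region": ("US", "ex-US"),
}

# Every combination of levels is one stratum: 2 * 2 * 2 * 2 = 16.
strata = list(product(*factors.values()))
print(len(strata))  # 16

# One independent permuted-block list per stratum (block size 4, 1:1 ratio).
rng = random.Random(42)
schedule = {}
for stratum in strata:
    block = ["A", "A", "B", "B"]
    rng.shuffle(block)
    schedule[stratum] = block
```

A subject entered under the wrong level of any factor draws from the wrong one of these 16 lists, which is exactly the randomization error described above.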

One misconception about stratification is that equal numbers of subjects are required in each stratum. For example, with randomization stratified by gender (male and female), people may think we want 50% male and 50% female subjects in the trial. This is not true. What we need (assuming a 1:1 randomization ratio) is for 50% of subjects to be randomized to each treatment arm among the male subjects and, separately, among the female subjects. This issue has been discussed in one of my old articles.

Forced randomization is another story: it forces the random assignment to deviate from the original schedule in order to deal with special situations. For example, in a randomized trial enrolling subjects with moderate and severe disease, we may put a cap on the # of severe subjects to be randomized. Once the cap is reached, severe subjects are no longer randomized, but moderate subjects still can be. Similarly, we could enforce a cap on the # of subjects at a specific country/site, or limit the number of subjects randomized to a specific treatment arm at a particular country/site. Forced randomization is usually required to deal with operational issues and is implemented through IVRS or IWRS. Too much forced randomization will neutralize the advantages of randomization.
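The severity cap is essentially a gate an IVRS/IWRS checks before drawing the next randomization number. A minimal sketch (the cap value and function are my own illustration, not any vendor's API):

```python
# Hypothetical severity cap, as an IVRS/IWRS might enforce it: once
# SEVERE_CAP severe subjects have been randomized, further severe
# subjects are turned away while moderate subjects continue.
SEVERE_CAP = 30
severe_randomized = 0

def can_randomize(severity):
    """Return True if a subject of this severity may still be randomized."""
    global severe_randomized
    if severity == "severe":
        if severe_randomized >= SEVERE_CAP:
            return False  # cap reached: do not randomize this subject
        severe_randomized += 1
    return True

print(can_randomize("moderate"))  # True - moderate subjects are never capped
```

Every subject turned away by such a gate is a small departure from pure randomization, which is why too much forcing erodes its benefits.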

All three terms (restricted, stratified, and forced randomization) belong to fixed sample size randomization, in contrast to the dynamic randomization used in adaptive designs.

Sunday, September 05, 2010

Immunogenicity and its impact in clinical trials

There is a shift in the drug development field from chemical compounds to biological products (protein products), and a shift from traditional pharmaceutical companies to biotechnology companies. For those who work on clinical trials of biological products, 'immunogenicity' must be a familiar term. Immunogenicity is the ability of a particular substance, such as an antigen or epitope, to provoke an immune response. In other words, if our drug is a protein product, immunogenicity is the ability of the protein to induce humoral and/or cell-mediated immune responses. Immunogenicity testing is a way to determine whether patients are producing antibodies to a biologic that can block the efficacy of the drug. The development of anti-drug antibodies can also cause allergic or anaphylactic reactions and/or the induction of autoimmunity.

Several workshops have been organized to discuss immunogenicity issues, and regulatory agencies are developing guidelines to give industry direction on incorporating immunogenicity testing into the clinical development programs for biological products.

Since immunogenicity testing relies on an assay to measure immune responses, the results depend on which assay is used. The development of the immunogenicity assay is therefore itself a critical issue: an ultra-sensitive assay could produce many false positives, while a less sensitive assay could underestimate the immune response.
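The sensitivity/false-positive trade-off can be put in rough numbers (all figures below are hypothetical, chosen only for illustration): when true anti-drug antibodies are rare, even a fairly specific assay returns mostly false positives.

```python
# Hypothetical illustration of assay sensitivity vs. false positives.
n_tested = 1000
true_prevalence = 0.02  # assume 2% truly develop anti-drug antibodies
sensitivity = 0.99      # ultra-sensitive assay...
specificity = 0.95      # ...but only 95% specific

expected_true_pos = n_tested * true_prevalence * sensitivity
expected_false_pos = n_tested * (1 - true_prevalence) * (1 - specificity)

print(round(expected_true_pos, 1))   # 19.8 true positives
print(round(expected_false_pos, 1))  # 49.0 false positives
```

Under these assumed numbers, most "positive" immunogenicity results would be false positives, which is why assay cut-points have to be chosen carefully.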

The following collection regarding immunogenicity testing should provide a good resource for this topic.

If a company develops a follow-on biological - a generic form or copycat of a biotechnology product - immunogenicity testing is typically one of the critical points that needs to be addressed. Regulatory agencies have therefore issued guidelines on immunogenicity testing for the development of follow-on biologicals.