Sunday, October 31, 2010

Regulatory Science


Regulatory science is the science of developing new tools, standards, and approaches to assess the safety, efficacy, quality, and performance of all FDA-regulated products, including drugs, biological products, medical devices, and more. On February 24, 2010, FDA, along with NIH, launched its Advancing Regulatory Science Initiative (ARS), which aims to accelerate the process from scientific breakthrough to the availability of new, innovative medical therapies for patients.

On October 6, 2010, the U.S. Food and Drug Administration unveiled an overview of initiatives to advance regulatory science and help the agency assess the "safety, efficacy, quality and performance of FDA-regulated products," and published its white paper "Advancing Regulatory Science for Public Health - A Framework for FDA's Regulatory Science Initiative." The white paper outlines the agency's effort to modernize its tools and processes for evaluating everything from nanotechnology to medical devices to tobacco products.

To accompany the release of the white paper, FDA Commissioner Dr. Hamburg gave a speech to the National Press Club in Washington, DC.

In the white paper, Section I, "Accelerating the Delivery of New Medical Treatments to Patients," has specific meaning for statisticians. "Adaptive design" was not specifically mentioned in the white paper; however, any approach or methodology in clinical trial design that can expedite the drug development process should be encouraged. Personalized medicine should also be encouraged.

Even though regulatory science (or regulatory affairs) is critical in the drug development field, the professionals working in it come from a variety of different backgrounds. Perhaps regulatory science can only be learned through experience and on-the-job training. However, I do notice that USC has a graduate program in regulatory science. Considering that FDA is increasing its investment in regulatory science and that regulatory laws are getting more and more complicated, graduates from this program should not have any difficulty finding a job.


Friday, October 08, 2010

Missing data in clinical trials - the new guideline from EMEA and National Academies

Missing data issues have been discussed and debated for many years. Handling of missing data in clinical trials has been recognized as an important issue not only for the statisticians who analyze the data, but also for the clinical study teams who conduct the studies. While we are still waiting for FDA to issue its guidance on missing data in clinical trials, several guidelines have been published recently.

EMEA just issued the final version of its "Guideline on missing data in confirmatory clinical trials". This guideline provides guidance on handling missing data from the perspective of the European regulatory authorities. Compared to FDA's guidances on non-inferiority and adaptive design, EMEA's missing data guideline is written in plain language and can be easily understood by non-statisticians.

The recent trend is to discourage the use of LOCF and other single imputation methods (i.e., replacing the missing value with the last measured value, an averaged value, or the baseline value, ...). It is noted that LOCF is mentioned as one of the single imputation methods in EMEA's guideline. The guideline acknowledges that "Only under certain restrictive assumptions does LOCF produce an unbiased estimate of the treatment effect. Moreover, in some situations, LOCF does not produce conservative estimates. However, this approach can still provide a conservative estimate of the treatment effect in some circumstances." The guideline further elaborates that LOCF may be a good technique for studies (e.g., depression, chronic pain) where the condition is expected to improve spontaneously over time, but may not be conservative for studies (e.g., Alzheimer's disease) where the condition is expected to worsen over time.
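To make the distinction concrete, below is a minimal sketch in Python (pandas) of LOCF alongside two other single imputation rules; the patient IDs, visit schedule, and scores are all invented for illustration:

import numpy as np
import pandas as pd

# Hypothetical longitudinal data: one row per patient, one column per
# visit. NaN marks a missing assessment (e.g., after early withdrawal).
df = pd.DataFrame(
    {"baseline": [50.0, 48.0, 52.0],
     "week4":    [45.0, 47.0, np.nan],
     "week8":    [40.0, np.nan, np.nan]},
    index=["pt01", "pt02", "pt03"])

# LOCF: carry the last observed value forward across later visits.
locf = df.ffill(axis=1)

# BOCF: replace every missing value with the patient's baseline value.
bocf = df.apply(lambda row: row.fillna(row["baseline"]), axis=1)

# Mean imputation: replace missing values with the visit (column) mean.
mean_imputed = df.fillna(df.mean())

Whether any of these fills is conservative depends, as the guideline notes, on whether the condition is expected to improve or worsen over the missed visits.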

In the United States, the Division of Behavioral and Social Sciences and Education under the National Research Council of the National Academies has been working on a project, "Handling missing data in clinical trials". The working group recently made its draft report available. The draft report is titled "The prevention and treatment of missing data in clinical trials". I like the word 'prevention' in the title since it is critical to prevent or minimize the occurrence of missing data. Once missing data have occurred, there is no universal method to handle them perfectly. The assumptions of MCAR, MAR, and MNAR can never be fully verified.

The National Academies' report on missing data uses stronger language in discouraging the use of LOCF and other single imputation approaches. Recommendation #10 states: "Single imputation methods like last observation carried forward and baseline observation carried forward should not be used as the primary approach to the treatment of missing data unless the assumptions that underlie them are scientifically justified."

So far, there is no official guideline from FDA regarding missing data handling (even though it has been a perennial topic at almost all statistics conferences and workshops). Nevertheless, a presentation by Dr. O'Neill to the International Society for Clinical Biostatistics may give some insights.

Sunday, October 03, 2010

Individual response vs. group response

In clinical trials, efficacy endpoints are often measured as continuous variables. Hypothesis tests are used to determine whether or not there is a statistically significant difference between one group and another. This is what statisticians prefer. However, for treating physicians, a treatment effect on the group basis may not translate into an effect for an individual patient. As we move toward personalized medicine, the individual response may become more important than just the group response.
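A small simulation sketch in Python (all numbers invented, with an assumed 5-point threshold for a meaningful individual change) illustrates the gap between the two views: the group comparison can be clearly significant while fewer than half of the treated patients achieve a meaningful individual improvement.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical change-from-baseline scores (higher = better): a true
# mean benefit of 3 points with large between-patient variability.
treatment = rng.normal(loc=3.0, scale=10.0, size=200)
placebo = rng.normal(loc=0.0, scale=10.0, size=200)

# Group-level comparison.
t, p = stats.ttest_ind(treatment, placebo)
print(f"group-level t-test: t = {t:.2f}, p = {p:.4f}")

# Individual level: suppose a change of at least 5 points is meaningful.
print("proportion of treated patients with a meaningful response:",
      round(float(np.mean(treatment >= 5.0)), 2))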

It is interesting that individual response and individual assessment (or within-patient analysis, intra-subject changes, ...) were discussed extensively at this year's FDA/Industry Statistics Workshop.

For patient-reported outcomes, a statistically significant group change does not necessarily imply a meaningful difference for individual patients. To provide a meaningful interpretation of patient-reported outcome intervention and treatment effects, there should be a responder definition to classify each individual subject as a responder or non-responder. The FDA guidance states: "Regardless of whether the primary endpoint for the clinical trial is based on individual responses to treatment or the group response, it is usually useful to display individual responses, often using an a priori responder definition (i.e., the individual patient PRO score change over a predetermined time period that should be interpreted as a treatment benefit). The responder definition is determined empirically and may vary by target population or other clinical trial design characteristics. Therefore, we will evaluate an instrument’s responder definition in the context of each specific clinical trial." The challenging issue is how to determine the cutpoint or benchmark for the responder definition. Several approaches have been proposed in the literature. We have actually implemented various approaches to determine the responder definition (or clinically meaningful difference) in a neurology disease. In the article, two anchors are used: one based on the physician's assessment and one based on a global assessment by the patient (question #2 in the SF-36 instrument). It is interesting that statistical approaches are employed to find the clinically meaningful difference.
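For illustration, here is a minimal sketch of one common anchor-based approach in Python; the anchor categories, the scores, and the choice of the "minimally improved" group as the benchmark are assumptions for this example, not necessarily the method used in the article:

import pandas as pd

# Hypothetical data: PRO change score plus a patient global anchor
# rating collected at the same visit.
df = pd.DataFrame({
    "pro_change": [1, 2, 6, 5, 7, 12, 10, 0, -1, 8],
    "anchor": ["unchanged", "unchanged", "minimally improved",
               "minimally improved", "minimally improved",
               "much improved", "much improved",
               "worse", "worse", "much improved"],
})

# Anchor-based cutpoint: the mean change among patients who rate
# themselves "minimally improved" is a common candidate for the
# responder definition.
cutpoint = df.loc[df["anchor"] == "minimally improved", "pro_change"].mean()
print(f"candidate responder cutpoint: {cutpoint:.1f}")

# Classify each patient as responder / non-responder.
df["responder"] = df["pro_change"] >= cutpoint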

Once the cutpoint (clinically meaningful difference) is decided, the continuous variable will be dichotomized into responder and non-responder. The analysis will then shift from parametric methods (t-test, ANOVA, ANCOVA, ...) to categorical data analysis methods (chi-square, logistic regression, generalized linear models, ...). Statisticians will argue that by doing so, we lose a lot of efficiency in statistical testing. A paper by Snapinn and Jiang titled "Responder analyses and the assessment of a clinically relevant treatment effect" makes exactly this argument.
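A quick simulation in Python (the effect size, sample size, responder cutpoint, and number of replicates are all invented) shows this efficiency loss: across repeated trials, the test on the continuous endpoint rejects far more often than the test on the dichotomized responder variable.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, hits_continuous, hits_dichotomized = 1000, 0, 0

for _ in range(n_sims):
    trt = rng.normal(3.0, 10.0, 100)  # treated: true mean benefit of 3
    pbo = rng.normal(0.0, 10.0, 100)  # placebo: no benefit

    # Parametric test on the continuous endpoint.
    if stats.ttest_ind(trt, pbo).pvalue < 0.05:
        hits_continuous += 1

    # Dichotomize at a responder cutpoint of 5, then chi-square test.
    table = [[np.sum(trt >= 5), np.sum(trt < 5)],
             [np.sum(pbo >= 5), np.sum(pbo < 5)]]
    chi2, p, _, _ = stats.chi2_contingency(table)
    if p < 0.05:
        hits_dichotomized += 1

print(f"empirical power, continuous endpoint:   {hits_continuous / n_sims:.2f}")
print(f"empirical power, dichotomized endpoint: {hits_dichotomized / n_sims:.2f}")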

In the recently published EMEA "Guideline on missing data in confirmatory clinical trials", responder analysis is mentioned as having a benefit in handling missing data. It states:

"In some circumstances, the primary analysis of a continuous variable is supported by a responder analysis. In other circumstances, the responder analysis is designated as primary. How missing data are going to be categorised in such analyses should be pre-specified and justified. If a patient prematurely withdraws from the study it would be normal to consider this patient as a treatment failure. However, the best way of categorisation will depend on the trial objective (e.g. superiority compared to non-inferiority).

In a situation where responder analysis is not foreseen as the primary analysis, but where the proportion of missing data may be so substantial that no imputation or modelling strategies can be considered reliable, a responder analysis (patients with missing data due to patient withdrawal treated as failures) may represent the most meaningful way of investigating whether there is sufficient evidence of the existence of a treatment effect."
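As a minimal sketch of this categorisation rule (in Python; the data and the 5-point responder cutpoint are invented), patients who withdrew prematurely are counted as non-responders regardless of their last observed value:

import pandas as pd

# Hypothetical trial data: observed change score (None if the patient
# withdrew before the final visit) plus a completion flag.
df = pd.DataFrame({
    "group":     ["trt", "trt", "trt", "pbo", "pbo", "pbo"],
    "change":    [8.0, 2.0, None, 6.0, None, 1.0],
    "completed": [True, True, False, True, False, True],
})

CUTPOINT = 5.0  # pre-specified responder definition

# Withdrawals are treated as failures: a patient is a responder only
# if he or she completed the study AND met the cutpoint.
df["responder"] = df["completed"] & (df["change"] >= CUTPOINT)

print(df.groupby("group")["responder"].mean())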


Within-patient analyses were also brought up in assessing benefit:risk. Currently, benefit:risk assessment relies on separate marginal analyses: efficacy (benefit) and safety (risk) are analyzed separately. The aggregation of benefit and risk relies on the judgment of medical reviewers, not statisticians, and the aggregate analyses are typically qualitative rather than quantitative, with significant subjectivity. With within-patient analyses, each patient is assessed for benefit and risk before performing the group comparison for treatment effect. One of these approaches is the so-called Q-TWiST (quality-adjusted time without symptoms of disease or toxicity of treatment), where the toxicity or safety information is incorporated into the efficacy assessment for each patient before any group comparison. The paper by Sherrill et al. is one such example.
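As a rough sketch of the Q-TWiST idea (in Python; the utility weights and the per-patient time partitions are entirely hypothetical), each patient's follow-up is split into time with toxicity (TOX), time without symptoms or toxicity (TWiST), and time after relapse or progression (REL), and a utility-weighted sum is computed per patient before any group comparison:

import pandas as pd

# Hypothetical per-patient partition of follow-up time (in months).
df = pd.DataFrame({
    "group": ["trt", "trt", "pbo", "pbo"],
    "tox":   [3.0, 5.0, 1.0, 2.0],      # time with treatment toxicity
    "twist": [20.0, 15.0, 12.0, 10.0],  # time without symptoms or toxicity
    "rel":   [4.0, 6.0, 8.0, 9.0],      # time after relapse/progression
})

# Utility weights between 0 (as bad as death) and 1 (perfect health);
# in practice these come from patient preference data.
U_TOX, U_REL = 0.5, 0.5

# Q-TWiST per patient: TWiST counts fully; TOX and REL are down-weighted.
df["qtwist"] = U_TOX * df["tox"] + df["twist"] + U_REL * df["rel"]

print(df.groupby("group")["qtwist"].mean())

In practice, sensitivity (threshold) analyses over a grid of utility weights are typically reported alongside the base-case comparison.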