CDER and CBER are evaluating an approach to transition to general acceptance of laboratory data in clinical trials that are measured and reported in Système International (SI) units instead of U.S. Conventional units. The objective is to establish an agency-wide policy on the acceptance of SI units in product submissions.
CDER and CBER recognize that SI units are the worldwide standard and international trials regularly measure and report lab tests using SI units. The Centers also acknowledge that the majority of U.S. healthcare providers are trained using U.S. conventional units. Lab results reported using U.S. conventional units often convey the most clinical meaning to U.S. healthcare providers, including CDER and CBER reviewers. In the absence of a holistic transition within the U.S. healthcare community to SI units, conversion of certain lab test results to U.S. conventional units may be a necessary interim step toward a transition to full SI unit reporting.
CDER and CBER are currently evaluating common and therapeutic area-specific lab tests to determine which pose significant interpretation risks during the review of new drug applications. While this evaluation is underway, sponsors are strongly encouraged to solicit input from review divisions as early in the development cycle as possible to minimize the potential for conversion needs during NDA/BLA review. CDER and CBER encourage sponsors to discuss this issue with FDA before the start of Phase 3 trials. In some cases the issue may warrant discussion with FDA at the End-of-Phase 2 meeting.
If conversion requests are received, sponsors are advised to discuss the conversion request as early as possible with the review division and if needed, provide a proposal for what can be reasonably accomplished to meet the review division’s needs without undue burden in time or costs.
October 25, 2013
Monday, November 24, 2014
Previously, I wrote an article discussing SI units versus U.S. conventional units. FDA has now issued a position statement on these two unit systems.
Sunday, November 16, 2014
VALOR Trial - A Successful yet Failed Phase III Study with Adaptive Sample Size Re-estimation Using the Promising Zone Approach
Motivated by the search for innovative clinical trial methodologies that increase the chance of trial success and minimize trial cost, various adaptive design methods have been proposed. Initially, clinical trials using adaptive designs were mostly early-phase (Phase I or II) trials; in Phase III confirmatory trials, traditional methods still dominate. Many publications on using adaptive designs in late-stage trials are based on retrospective assessment or simulation: had the original studies been done with an adaptive design, how much cost would have been saved, or might a failed trial have been rescued? After many years of education and advocacy, adaptive designs with innovative methods have actually been implemented in Phase III studies, and some of the trial results are starting to surface. One example is a trial called VALOR, a Phase III, placebo-controlled, randomized, double-blind study in relapsed/refractory acute myeloid leukemia (AML). The study adopted one of the key adaptive design features: Sample Size Re-estimation (SSR).
The rationale behind Sample Size Re-estimation is that the assumptions used to design a confirmatory trial are either not entirely available or are available only with a high degree of uncertainty. This uncertainty can result in incorrect or inaccurate sample size estimates at the design stage. With SSR, an interim analysis is performed during the study to re-check these assumptions, and depending on its findings, a decision about the next step can be made.
In the VALOR study, the Sample Size Re-estimation was based on a Promising Zone approach. SSR based on the Promising Zone was proposed by Mehta and Pocock and described in their paper “Adaptive increase in sample size when interim results are promising: A practical guide with examples”. The general idea is to start a Phase III trial under the best-case scenario with optimistic assumptions. Optimistic assumptions require a smaller initial sample size and consequently less up-front commitment of resources and finances. During the study, an interim analysis is performed to check these assumptions against reality and to plan the next step from the following choices:
- Stop early if overwhelming evidence of efficacy
- Stop early for futility if low conditional power
- Increase the sample size if results are promising
This can be illustrated in the diagram below. Notice that with this method, the sample size can only be adjusted upward, never downward. The increase is a one-time adjustment, preferably to a pre-specified fixed number.
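The decision rules above can be sketched in a few lines of code. Below is a minimal, illustrative Python sketch of a promising-zone rule driven by conditional power under the "current trend" assumption. All the numbers (zone cutoffs of 30% and 80%, an interim look at 225 of 450 planned subjects, a cap of 675) are hypothetical, chosen only for illustration; a real design such as VALOR's specifies the zone boundaries in the protocol and uses additional machinery to protect the type I error.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z1, n1, n_total, z_alpha=1.959964):
    """Probability of crossing the final boundary z_alpha with n_total
    subjects, given interim z-statistic z1 on the first n1 subjects and
    assuming the currently observed effect continues."""
    num = z_alpha * math.sqrt(n_total) - z1 * n_total / math.sqrt(n1)
    return 1.0 - phi(num / math.sqrt(n_total - n1))

def promising_zone_decision(z1, n1, n_planned, n_max,
                            cp_low=0.30, cp_target=0.80):
    """Classify the interim result; in the promising zone, search for the
    smallest total sample size (capped at n_max) restoring target power."""
    cp = conditional_power(z1, n1, n_planned)
    if cp < cp_low:
        return "unfavorable", n_planned
    if cp >= cp_target:
        return "favorable", n_planned
    n = n_planned
    while n < n_max and conditional_power(z1, n1, n) < cp_target:
        n += 1
    return "promising", n

# Hypothetical interim look: 225 of 450 planned subjects, cap at 675
print(promising_zone_decision(z1=1.2, n1=225, n_planned=450, n_max=675))
```

Note that in this hypothetical example the conditional power cannot be restored to 80% even at the cap, so the rule simply takes the maximum allowed increase, which mirrors what can happen in practice when the interim effect is weaker than hoped.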
Since the VALOR study was initiated in December 2010, this Promising Zone SSR method has been widely followed in the statistical community and has been the topic of many adaptive design discussions. See the presentation by Zoran Antonijevic, "Harvard Catalyst Adaptive Clinical Trials Case Study - The VALOR Trial for AML". There is also a YouTube video titled "The Phase 3 VALOR Trial: Adaptive Sample Size Re-estimation".
Cytel Inc. has built the Promising Zone SSR approach into its EAST software for study design. Cytel advocates that adaptive sample size re-estimation in EAST reduces risk and enhances clinical trial success. With the Promising Zone SSR method, an adaptive design can:
- DE-RISK INVESTMENT – Avoid expensive up-front commitments of sample size
- ENHANCE SUCCESS – Boost power when initial assumptions fail
- PROMISING ZONE™ – Increase sample size conditional on interim data
- ALPHA CONTROL – Guarantee strong type I error control required by regulators
Had the VALOR study achieved statistical significance on the primary efficacy endpoint, it would have been a wonderful story about how the Promising Zone SSR method had de-risked investment and enhanced success.
Unfortunately, after all of this extra effort (adaptive design, DSMB, interim analysis, sample size re-estimation), the study failed to reach statistical significance for the primary efficacy endpoint. The p-value just missed the magical threshold of p=0.05. Here is the announcement from the VALOR study sponsor, Sunesis Pharmaceuticals:
Sunesis Announces Results From Pivotal Phase 3 VALOR Trial of Vosaroxin and Cytarabine in Patients With First Relapsed or Refractory Acute Myeloid Leukemia
“Sunesis Pharmaceuticals, Inc. (Nasdaq:SNSS) today announced results from the pivotal Phase 3 VALOR trial, a randomized, double-blind, placebo-controlled trial of vosaroxin and cytarabine in patients with first relapsed or refractory acute myeloid leukemia (AML). At more than 100 leading international sites, the trial enrolled 711 patients, who were stratified for age, geography and disease status. The trial did not meet its primary endpoint of demonstrating a statistically significant improvement in overall survival, with a median overall survival of 7.5 months for vosaroxin and cytarabine compared to 6.1 months for placebo and cytarabine (HR=0.865, p=0.06).”
Additional details about the trial design are coming to the surface. See the screenshot from the Sunesis presentation:
The study was planned based on the most optimistic assumption (HR=0.71), and the sample size re-estimation was based on the most conservative assumption at that time (HR=0.80). Unfortunately, the actual result of HR=0.865 fell beyond even the most conservative assumption. It would be interesting to know what the HR was at the interim analysis.
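To see how sensitive a survival trial's size is to the assumed hazard ratio, one can use Schoenfeld's approximation for the required number of events. The sketch below computes my own back-of-the-envelope numbers under assumed parameters (one-sided alpha of 0.025, 90% power, 1:1 allocation); these are not VALOR's actual calculations, but they illustrate how quickly the required evidence grows as the true HR creeps toward 1.

```python
import math

def schoenfeld_events(hr, z_alpha=1.959964, z_power=1.281552):
    """Approximate number of events (deaths) needed to detect hazard
    ratio `hr` with a one-sided log-rank test at alpha=0.025 and 90%
    power, 1:1 allocation, per Schoenfeld's formula:
        d = 4 * (z_alpha + z_power)**2 / (ln hr)**2
    """
    return 4.0 * (z_alpha + z_power) ** 2 / math.log(hr) ** 2

for hr in (0.71, 0.80, 0.865):
    print(f"HR={hr}: about {math.ceil(schoenfeld_events(hr))} events")
```

Under these assumptions, an HR of 0.71 needs only a few hundred events, an HR of 0.80 needs more than twice as many, and an HR of 0.865 needs several times more still; an effect as small as the one VALOR observed was essentially undetectable at any feasible trial size.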
I suspect that Sunesis and Cytel are now analyzing the data to search for clues as to why the study did not meet the primary endpoint. It is quite possible that the study conduct and patient population differed before and after the interim analysis. While the study team was strictly blinded to the details of the interim analysis results, the decision on whether or not to increase the sample size had to be announced, and that announcement could have affected patient characteristics or the conduct of the study. Here was a discussion of the sample size increase announcement at the time. Clearly, the announcement had an impact on financial analysts, and potentially also on the study team and investigators.
When last September the Data and Safety Monitoring Board (DSMB) recommended expanding the sample size of the study based on interim data that suggested a "promising" outcome, vosaroxin garnered even more investor attention.
Valor Trial Design And Alpha Spend
At the analyst meeting in October 2012, Sunesis provided an update on the adaptive design of the study that allows for a potential one-time sample size increase of the patient population. Based on its review, the DSMB recommended the Valor study increase the sample size to 675 patients for a 90% statistical power to detect a 30% overall survival difference (5 months versus 6.5 months) with an HR of 0.77. The DSMB concluded that the interim data indicated a "promising" outcome - ruling out futility and an "unfavorable" scenario, but falling short of a "favorable" result.
Based on the nuances of statistical analysis, ruling out both favorable and unfavorable scenarios for a promising outcome strongly suggests that vosaroxin was closer to non-inferiority and in need of a larger sample size in order to show a statistically significant treatment difference. It was a smart idea by management to utilize the first interim analysis of Valor as a proxy for a randomized Phase 2 study whereby it could better estimate the sample size needed to demonstrate a clinical effect. Powering the study has thus been the main factor in influencing its "promising" outcome
The VALOR study was well conducted. From the standpoint of study implementation, including the sample size re-estimation, the study was a success. However, it failed to reach statistical significance for the primary efficacy endpoint.
In the end, statistics is about uncertainty. While sample size re-estimation can reduce the uncertainty to some degree, it cannot eliminate it. We will never be able to design a study that guarantees success.
Saturday, November 01, 2014
For randomized, controlled clinical trials, the selection of the control group is one of the key issues in study design. This is why ICH has a specific guideline (E10), “CHOICE OF CONTROL GROUP AND RELATED ISSUES IN CLINICAL TRIALS”. The choice of the control group determines whether the trial is a superiority or non-inferiority study and whether it is double-blind, single-blind, or open-label, and it drives the sample size.
It has become quite common for the Standard of Care (SOC) to be chosen as the control group. We often run into the situation where, for a specific disease (indication), there is no regulatory-approved (existing) therapy and a placebo-controlled study would be unethical; comparing the experimental therapy against the Standard of Care then seems to be the only choice.
What is the Definition of the SOC?
There is no standard definition of SOC in regulatory guidelines. According to Webster’s New World Medical Dictionary, SOC is defined as “the level at which the average, prudent provider in a given situation would manage the patient’s care under the same or similar circumstances.”
The National Cancer Institute defines “standard of care” as “treatment that experts agree is appropriate, accepted, and widely used. Also called best practice, standard medical care, and standard therapy.”
There are more definitions, but all similar.
“A standard of care is a formal diagnostic and treatment process a doctor will follow for a patient with a certain set of symptoms or a specific illness. That standard will follow guidelines and protocols that experts would agree with as most appropriate, also called "best practice."
In legal terms, a standard of care is used as the benchmark against a doctor's actual work. For example, in a malpractice lawsuit, the doctor's lawyers would want to prove that the doctor's actions were aligned with the standard of care. The plaintiff's lawyers would want to show how a doctor violated the accepted standard of care and was therefore negligent.”
Standards of care are developed in a number of ways: sometimes they are simply developed over time, and in other cases they are the result of clinical findings. In the modern era, SOCs are typically based on evidence-based medicine: the results of clinical trials, meta-analyses when there are multiple clinical trials, and Cochrane systematic reviews of the evidence. The SOC may be codified as recommendations and treatment guidelines issued by professional societies; there are many such treatment guidelines, differing across professional societies and across countries.
Does A Standard of Care therapy have to be approved by regulatory authority (such as FDA)?
Not necessarily. As a matter of fact, some SOCs may not be regulated by FDA at all. For example, surgery and plasma exchange are techniques and procedures that may fall outside FDA regulation.
In FDA’s guidance “Expedited Programs for Serious Conditions – Drugs and Biologics”, SOC is discussed as part of the discussion of ‘available therapy’. The guidance states:
“For purposes of this guidance, FDA generally considers available therapy (and the terms existing treatment and existing therapy) as a therapy that:
§ Is approved or licensed in the United States for the same indication being considered for the new drug and
§ Is relevant to current U.S. standard of care (SOC) for the indication
FDA’s available therapy determination generally focuses on treatment options that reflect the current SOC for the specific indication (including the disease stage) for which a product is being developed. In evaluating the current SOC, FDA considers recommendations by authoritative scientific bodies (e.g., National Comprehensive Cancer Network, American Academy of Neurology) based on clinical evidence and other reliable information that reflects current clinical practice. When a drug development program targets a subset of a broader disease population (e.g., a subset identified by a genetic mutation), the SOC for the broader population, if there is one, generally is considered available therapy for the subset, unless there is evidence that the SOC is less effective in the subset.
Over the course of new drug development, it is foreseeable that the SOC for a given condition may evolve (e.g., because of approval of a new therapy or new information about available therapies). FDA will determine what constitutes available therapy at the time of the relevant regulatory decision for each expedited program a sponsor intends to use (e.g., generally early in development for fast track and breakthrough therapy designations, at time of biologics license application (BLA) or new drug application (NDA) submissions for priority review designation, during BLA or NDA review for accelerated approval). FDA encourages sponsors to discuss available therapy considerations with the Agency during interactions with FDA.
As appropriate, FDA may consult with special Government employees or other experts when making an available therapy determination.”
The newly issued FDA guidance on available therapy echoes a similar opinion:
“available therapy (and the terms existing treatments and existing therapy) should be interpreted as therapy that is specified in the approved labeling of regulated products, with only rare exceptions.
FDA recognizes that there are cases where a safe and effective therapy for a disease or condition exists but it is not approved for that particular use by FDA. However, for purposes of the regulations and policy statements described in Section III, which are intended to permit prompt FDA approval of medically important therapies, only in exceptional cases will a treatment that is not FDA-regulated (e.g., surgery) or that is not labeled for use but is supported by compelling literature evidence (e.g., certain established oncologic treatments) be considered available therapy.”
FDA’s guidance Non-Inferiority Clinical Trials answers the question of whether the active comparator for a non-inferiority study can be a product without a labeled indication. The active comparator could be an SOC.
“Can a drug product be used as the active comparator in a study designed to show non-inferiority if its labeling does not have the indication for the disease being studied, and could published reports in the literature be used to support a treatment effect of the active control?
The active control does not have to be labeled for the indication being studied in the NI study, as long as there are adequate data to support the chosen NI margin. FDA does, in some cases, rely on published literature and has done so in carrying out the meta-analyses of the active control used to define NI margins. An FDA guidance for industry on Providing Clinical Evidence of Effectiveness for Human Drug and Biological Products describes the approach to considering the use of literature in providing evidence of effectiveness, and similar considerations would apply here. Among these considerations are the quality of the publications (the level of detail provided), the difficulty of assessing the endpoints used, changes in practice between the present and the time of the studies, whether FDA has reviewed some or all of the studies, and whether FDA and the sponsor have access to the original data. As noted above, the endpoint for the NI study could be different (e.g., death, heart attack, and stroke) from the primary endpoint (cardiovascular death) in the studies if the alternative endpoint is well assessed”
How Standard are the Standards of Care?
It depends on the specific disease area and the available treatments. A standard of care in one country or one hospital is not necessarily the standard in another; further, one doctor's standard can vary from another's. In many cases, even when the same therapy is considered the standard of care, its usage may differ considerably. For example, tPA is considered a standard of care in the U.S. to treat 'leg attack' (peripheral arterial occlusion). However, different medical centers and different doctors may administer tPA therapy differently; the differences are reflected in the total tPA dose, bolus versus continuous infusion, infusion rate, and total length of the tPA treatment.
The article below also discusses the heterogeneity of the SOC: How standard is standard care? Exploring control group outcomes in behaviour change interventions for young people with type 1 diabetes.
The heterogeneity of the standard of care presents great challenges in conducting clinical trials that use the standard of care as the control group. This issue is extensively discussed in FDA’s guidance on Chronic Cutaneous Ulcer and Burn Wounds — Developing Products for Treatment. For a multi-national clinical trial with the standard of care as the control group, the challenges are even greater, and the trial may not be feasible at all because of the difficulty of defining the SOC for a specific disease treatment. Here are the paragraphs from FDA’s guidance concerning the use of the Standard of Care as the control group.
“Standard care refers to generally accepted wound care procedures, other than the investigational product, that will be used in the clinical trial. Good standard care procedures in a wound-treatment product trial are a prerequisite for assessing safety and efficacy of a product. Since varying standard care procedures can confound the outcome of a clinical trial, it is generally advisable that all participating centers agree to use the same procedures and these procedures are described within the clinical protocol. If it is not practical to apply uniform standard care procedures across study centers, randomization stratified by study center should be considered. It is also important that the sample size within study centers and wound care records be adequate to assess the effect of wound care variation.
A number of standard procedures for ulcer and burn care are widely accepted. Several professional groups have initiated development of care guidelines for ulcers and burns. The Agency does not require adherence to any specific guidelines, the basic principle being that standard care regimens in wound-treatment product trials should optimize conditions for healing and be prospectively defined in the protocol. The rationale for the standard care chosen should be included in the protocol, and the study plan should be of sufficient detail for consistent and uniform application across study centers. Case report forms (CRFs) should be designed such that, at each visit, investigators describe the type of ulcer or burn care actually delivered (e.g., extent of debridement, use of concomitant medications). For outpatients, the CRF should also capture compliance with standard care measures, including wound dressing, off-loading, and appropriate supportive factors, such as dietary intake.
The value of study site consistency in standard care regimens within a trial cannot be over-emphasized because of the profound effects these procedures have on clinical outcome for burns and chronic wounds. Consistency in standard care regimens is important for minimizing variability and allowing assessment of treatment effect. It may be reasonable to evaluate a single standard care regimen in early trials to minimize this variability. If comparison of an investigational product to more than one commonly used standard care option is desired, the overall development plan should include specific assessment of the effect of these standard care options on the experimental treatment. These common options should be identified and addressed prospectively in clinical trial design including being clearly described in the clinical protocol and compliance captured via the CRFs; criteria for data poolability should be defined prospectively.
Every attempt should be made to minimize deviations from the procedures described in the protocol and subject compliance recorded in CRFs. If more than one standard care regimen is used in the same clinical trial, then randomized treatment allocation within strata defined by these options in standard care is important.”
To minimize the heterogeneity of the standard of care, cluster randomization may also be employed. As stated in FDA’s guidance “Antibacterial Therapies for Patients With Unmet Medical Need for the Treatment of Serious Bacterial Diseases”, with cluster randomization, “Patients enrolled at sites randomized to the standard-of-care arm would be treated no differently than is usual practice at that site, while patients enrolled at sites randomized to the investigational drug arm would be treated with the investigational drug.”
When a clinical trial uses standard of care as the control group, should the study be designed as a superiority or a non-inferiority trial?
It depends on whether the experimental treatment is a stand-alone therapy (given without the standard of care) or an add-on therapy (given on top of the standard of care).
If the experimental treatment is an add-on therapy, given on top of the existing standard of care, the trial must be designed as a superiority study to demonstrate that the add-on therapy is superior to the existing standard of care alone.
If the experimental treatment is a stand-alone therapy that can be given without the standard of care, the trial can be designed as either a non-inferiority or a superiority study, depending on the effect size of the experimental therapy.
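The difference between the two designs comes down to the null hypothesis being tested. The Python sketch below illustrates this with hypothetical numbers (a 58% response rate on the experimental arm versus 60% on the SOC arm, 300 patients per arm, and an assumed 10% non-inferiority margin); the same observed data fail a superiority test yet can still demonstrate non-inferiority.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test(p_trt, p_ctl, n_trt, n_ctl, ni_margin=0.0):
    """z-statistic for H0: p_trt - p_ctl <= -ni_margin.
    ni_margin=0 gives a superiority test; ni_margin>0 a non-inferiority
    test (unpooled standard error, normal approximation)."""
    diff = p_trt - p_ctl + ni_margin
    se = math.sqrt(p_trt * (1 - p_trt) / n_trt + p_ctl * (1 - p_ctl) / n_ctl)
    return diff / se

# Hypothetical: experimental 58% response vs SOC 60%, 300 per arm
z_sup = z_test(0.58, 0.60, 300, 300)                  # superiority
z_ni = z_test(0.58, 0.60, 300, 300, ni_margin=0.10)   # non-inferiority
print(f"superiority z = {z_sup:.2f}, non-inferiority z = {z_ni:.2f}")
print(f"one-sided non-inferiority p-value = {1 - phi(z_ni):.4f}")
```

The choice of the non-inferiority margin is of course the hard part in practice; it must be justified from the historical effect of the SOC, as discussed in FDA's non-inferiority guidance quoted above.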
In FDA’s guidance “Non-Inferiority Clinical Trials”, the ‘add-on study’ is suggested as an alternative to the non-inferiority design. In the guidance, ‘treatments that are already available’ can include the standards of care. The combination of the novel treatment plus the existing treatment must be shown to be superior to the existing treatment alone (the standard of care) or to the existing treatment plus placebo.
“In many cases, for a pharmacologically novel treatment, the most interesting question is not whether it is effective alone but whether the new drug can add to the effectiveness of treatments that are already available. The most pertinent study would therefore be a comparison of the new agent and placebo, each added to established therapy. Thus, new treatments for heart failure have added new agents (e.g., ACE inhibitors, beta blockers, and spironolactone) to diuretics and digoxin. As each new agent became established, it became part of the background therapy to which any new agent and placebo would be added. This approach is also typical in oncology, in the treatment of seizure disorders, and, in many cases, in the treatment of AIDS.“
Here is a real example of an add-on study: Influence of early goal-directed therapy using arterial waveform analysis on major complications after high-risk abdominal surgery: study protocol for a multicenter randomized controlled superiority trial.
“In this multicenter, randomized, controlled superiority trial, 542 patients scheduled for elective, high-risk abdominal surgery will be included. Patients are allocated to standard care (control group) or early goal-directed therapy (intervention group) using a randomization procedure stratified by center and type of surgery. In the control group, standard perioperative hemodynamic monitoring is applied. In the intervention group, early goal-directed therapy is added to standard care, based on continuous monitoring of cardiac output with arterial waveform analysis.”