Sunday, April 15, 2012

Should the design and conduct of clinical trials be simplified?

As mentioned in one of FDA’s guidance documents, “in the past two decades, the number and complexity of clinical trials have grown dramatically. These changes create new challenges in clinical trial oversight such as increased variability in investigator experience, ethical oversight, site infrastructure, treatment choices, standards of health care, and geographic dispersion.” According to an article by Mr. Getz titled “The Heavy Burden of Protocol Design: More complex and demanding protocols are hurting clinical trial performance and success”, companies sponsoring clinical research have openly acknowledged that protocol design negatively impacts clinical trial performance and may well be the single largest source of delays in getting studies completed. When designing a clinical trial, sponsors often try to include too many endpoints and too many measures, hoping that all of these endpoints (if the results are good) will contribute to the overall evidence of clinical efficacy. Sponsors attempt to collect a lot of information that is not must-have, but nice-to-have (for future data dredging, marketing, publications, …). We want to do it all in one clinical study. This obviously increases the complexity of the study protocol, which in turn increases the length and cost of the clinical trial and degrades the quality of the clinical trial data (more protocol noncompliance).
In terms of the conduct of clinical trials, emphasis on compliance with good clinical practice (GCP) has resulted in perceptions that clinical trial data must be 100% monitored and source-verified, that all data programming and analysis must be independently validated, and that over-reporting of adverse events is a requirement of GCP compliance…
Over the last two years, FDA has issued several guidance documents in an attempt to change these perceptions.
In FDA’s new draft guidance "Determining the Extent of Safety Data Collection Needed in Late Stage Premarket and Post-approval Clinical Investigations", FDA states that its intention is “to assist clinical trial sponsors in determining the amount and types of safety data to collect in late-stage premarket and post-market clinical investigations for drugs or biological products, based on existing information about a product’s safety profile.” This new guidance addresses the circumstances in which it may be acceptable to collect a reduced amount of safety information during clinical trials. In some situations, excessive data collection may be unnecessary and unhelpful.
In the Guidance for Industry “Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring”, FDA encourages sponsors to implement alternative monitoring approaches instead of relying solely on on-site monitoring. The webinar and presentation slides can be found at http://www.fda.gov/Training/GuidanceWebinars/ucm277044.htm
“For major efficacy trials, companies typically conduct on-site monitoring visits at approximately four- to eight-week intervals, at least partly because of the perception that the frequent on-site monitoring visit model, with 100% verification of all data, is FDA’s preferred way for sponsors to meet their monitoring obligations. In contrast, academic coordinating centers, cooperative groups, and government organizations use on-site monitoring less extensively. For example, some government agencies and oncology cooperative groups typically visit sites only once every two or three years to qualify/certify clinical study sites to ensure they have the resources, training, and safeguards to conduct clinical trials. FDA also recognizes that data from critical outcome studies (e.g., many National Institutes of Health-sponsored trials, Medical Research Council-sponsored trials in the United Kingdom, International Study of Infarct Survival, and GISSI), which had no regular on-site monitoring and relied largely on centralized and other alternative monitoring methods, have been relied on by regulators and practitioners. These examples demonstrate that use of alternative monitoring approaches should be considered by all sponsors, including commercial sponsors when developing risk-based monitoring strategies and plans”
Many sponsors may be reluctant to adopt this guidance and will stick with the status quo approach of frequent on-site visits with 100% verification of all data. They may be worried that loosening clinical monitoring could lead to noncompliance.


In the Guidance for Clinical Investigators, Sponsors, and IRBs “Adverse Event Reporting to IRBs — Improving Human Subject Protection”, FDA advised sponsors to report an AE to the IRB (and presumably FDA) only if it is unexpected, serious, and has implications for the conduct of the study, rather than reporting all unanticipated AEs. The sponsor should analyze unanticipated AEs before reporting them.
“the practice of local investigators reporting individual, unanalyzed events to IRBs, including reports of events from other study sites that the investigator receives from the sponsor of a multi-center study—often with limited information and no explanation of how the event represents an unanticipated problem—has led to the submission of large numbers of reports to IRBs that are uninformative. IRBs have expressed concern that the way in which investigators and sponsors of IND studies typically interpret the regulatory requirement to inform IRBs of all "unanticipated problems" does not yield information about adverse events that is useful to IRBs and thus hinders their ability to ensure the protection of human subjects.”
 
There are many other areas of clinical trial practice that can and should be simplified. For example, some protocols instruct investigators to record and report all untoward events that occur during a study as AEs/SAEs, which could include common symptoms of the disease under study and/or other expected clinical outcomes that are not study endpoints. Over-reporting of AEs/SAEs creates additional burden and can dilute or obscure signal identification. Another example is spending too much effort on screen-failure subjects. It is true that the recording of adverse events starts once the informed consent form is signed. However, it is unnecessary to write a full-blown SAE narrative for a screen-failure subject when the event has nothing to do with the assessment of the safety of the experimental product.
