Friday, August 02, 2019

ICH E19 Optimisation of Safety Data Collection and FDA Guidance on Collecting Less Safety Data

The U.S. Food and Drug Administration is announcing the availability of a draft guidance entitled “E19 Optimisation of Safety Data Collection.” The guidance was prepared under the auspices of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) and provides recommendations regarding appropriate use of a selective approach to safety data collection in some late-stage pre- or postmarketing studies of drugs where the safety profile, with respect to commonly occurring adverse events, is well understood and documented. The agency intends for the draft guidance to advance important clinical research questions through clinical investigations that collect relevant patient data. This will enable an adequate benefit-risk assessment of the drug for its intended use, while reducing the burden to patients from unnecessary tests that may yield limited additional information. Interested parties may submit comments to the docket until August 26, 2019.

FDA publishes ICH guidelines as FDA guidances. This guidance reflects just one element in FDA’s work with regulatory authorities and industry associations from around the world to promote international harmonization of regulatory requirements under the ICH. One of the goals of harmonization is to identify and reduce differences in technical requirements for drug development among regulatory agencies. FDA is committed to seeking scientifically based harmonized technical procedures for the development and manufacture of pharmaceuticals.


The Duke-Margolis Center for Health Policy organized a two-day workshop, "Leveraging Randomized Clinical Trials to Generate Real-World Evidence for Regulatory Purposes"

Dr. Ellis Unger from FDA discussed the topic "Safety Monitoring in Clinical Trials for Generation of RWE" (see the video @1:10):
"Once a drug has been approved, its risks/harms are well-characterized; safety monitoring in studies for new indications can be less intensive (in many circumstances)"

"...about the defensive research, which is really a huge problem, which is that let's collect everything because some regulators might ask us in a year, ho, well, what were the CBCs? all these nitpicking questions that require mountains of data.
We put on a guidance in 2016. it's called, 'Determining the Extentof Safety DataCollection Needed in Late-Stage Premarket and Post approval Clinical Investigations'  What it basically talks about collecting less than usual safety data in those situations where the safety data are well characterized. My particular office oversees the Division of Cardiovascular and Renal products, and we talk with many sponsors. Look, don't collect all this stuff in your next trial. You want to do a cardiovascular outcome trial, that's great. But we already know how many headaches and hangnails a drug causes.  Just collect serious adverse events. and they thank us and they shake their heads and they walk out of the door. and then when they send in the final protocol, they are collecting everything. This happens time and time again and it dawned on us. well, maybe the problem is regulators across the pond and so about 4 years ago, I think, we proposed this topic to ICH collecting less than full safety data collection and it was adopted and I'm glad to say that the step 2 guideline is out and open for public comment. it is called "optimization of safety data collection".
And the interesting thing was that a lot of the resistance to this approach was not coming from the regulators across the pond or in Asian. It was coming from companies who were just firmly entrenched in the idea that they better collet it because they may need it. I would like someone who works for a company that has done one of these outcome studies for an anti-diabetic drug could tell me exactly what they learned by getting vital signs every, lab everything, every nonserious adverse event. Tell me what you learned and tell me what it costed you. I suspect that you didn't get much bang for your buck.
So this guideline is out there and I really do hope most of the people who have a vested interest take a look at it and there's a docket in the US and other countries where you can submit your comments Through official Pathways to get them considered."

Further Readings:

Monday, July 22, 2019

Upversion of medical coding dictionaries (MedDRA and WHO-DD)

In clinical trials, adverse events, medical histories, and concomitant medications (drug names) are usually entered as free-text fields (verbatim terms) on the case report forms. Because data in free-text format cannot be systematically summarized and analyzed, medical coding becomes necessary to convert the free-text entries into standardized categories.

Adverse events and medical histories are coded using MedDRA (Medical Dictionary for Regulatory Activities). Drug names (generic or brand names) are coded using WHO-DD (World Health Organization – Drug Dictionary). In the early days, different dictionaries were available for performing medical coding; nowadays, the industry has standardized on MedDRA and WHO-DD.

Both MedDRA and WHO-DD dictionaries are updated periodically - specifically, MedDRA is updated twice a year and WHO-DD is updated four times a year.

When we start a clinical trial, we typically adopt the latest version of the coding dictionaries. Clinical studies usually take several years to complete, and at study completion the database is locked (i.e., no data changes are allowed after the database lock). By that time, the dictionary versions selected at the beginning of the study may be several years old and effectively obsolete.

One way to resolve this issue is to perform an upversion (or up-version, upversioning) of the medical dictionaries. For a clinical trial (especially a trial with an extended study duration), it is good practice to update the medical coding dictionaries periodically. If upversion cannot be performed periodically during the study, it is better to perform an upversion at least at the end of the study (before the database lock).
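To make the upversion step concrete, below is a minimal sketch (in Python) of an upversion impact check for coded adverse events. The data structures and the dictionary extract are hypothetical simplifications for illustration only; in practice, upversioning is performed with the MSSO-supplied MedDRA version files and the coding tool's built-in utilities.

    # Minimal sketch of an upversion impact check for coded adverse events.
    # The dictionary extract below is hypothetical; real upversioning uses the
    # MSSO-supplied MedDRA version files and the coding tool's utilities.
    from dataclasses import dataclass

    @dataclass
    class CodedTerm:
        verbatim: str   # free text entered on the CRF
        llt: str        # Lowest Level Term assigned at coding
        pt: str         # Preferred Term under the dictionary version used

    # Hypothetical extract of a newer MedDRA version: LLT -> (PT, currency flag)
    meddra_new = {
        "HEADACHE": ("Headache", True),
        "HEART ATTACK": ("Myocardial infarction", True),
        "ACUTE MI": ("Myocardial infarction", False),  # made non-current in the new version
    }

    def upversion_impact(coded, new_dict):
        """Return review queries for terms whose coding needs attention under the new version."""
        queries = []
        for rec in coded:
            entry = new_dict.get(rec.llt.upper())
            if entry is None:
                queries.append(f"'{rec.verbatim}': LLT '{rec.llt}' not found in new version; recode")
                continue
            new_pt, is_current = entry
            if not is_current:
                queries.append(f"'{rec.verbatim}': LLT '{rec.llt}' is non-current; recode")
            elif new_pt != rec.pt:
                queries.append(f"'{rec.verbatim}': PT changes from '{rec.pt}' to '{new_pt}'")
        return queries

    data = [CodedTerm("bad headache", "HEADACHE", "Headache"),
            CodedTerm("acute MI", "ACUTE MI", "Myocardial infarction")]
    for q in upversion_impact(data, meddra_new):
        print(q)

The idea is simply to flag records whose assigned term no longer exists, is no longer current, or maps to a different preferred term under the new dictionary version, so that they can be reviewed and re-coded before database lock.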

Historically, there was no requirement or mandate for upversion, and different companies had different practices. Here is a survey result about upversion in practice:



However, upversion will soon become a mandate for both MedDRA and WHO-DD dictionaries.

There are two Federal Register notices pertinent to the upversion of the MedDRA and WHO-DD (or WHODG) dictionaries: one for MedDRA and one for WHO-DD. Notice that WHO-DD is now called WHODG (World Health Organization Drug Global) in the Federal Register notices.

Electronic Study Data Submission; Data Standards; Support for Version Update of the Medical Dictionary for Regulatory Activities
“Generally, the studies included in a submission are conducted over many years and may have used different MedDRA versions to code adverse events. The expectation is that sponsors or applicants will use the most current version of MedDRA at the time of study start. However, there is no requirement to recode earlier studies. The transition date for support and requirement to use the most current version of MedDRA is March 15, 2018. Although the use of the most current version is supported as of this Federal Register notice and sponsors or applicants are encouraged to begin using it, the use of the most current version will only be required in submissions for studies that start after March 15, 2019….”
Electronic Study Data Submission; Data Standards; Support for Version Update of World Health Organization Drug Global
“FDA currently supports the use of WHODG for the coding of concomitant medications in studies submitted to CBER or CDER in NDAs, ANDAs, BLAs, and certain INDs in the electronic common technical document format. Generally, the studies included in a submission are conducted over many years and may have used different WHODG versions to code concomitant medications. The expectation is that sponsors and applicants will use the most current B3-format annual version of WHODG at the time of study start. However, there is no requirement to recode earlier studies. The transition date for support of the most current B3-format annual version of WHODG is March 15, 2018. Although the use of the current B3-format annual version of WHODG is supported as of this Federal Register notice and sponsors or applicants are encouraged to begin using it, the use of the most current B3-format annual version will only be required in submissions for studies that start after March 15, 2019.”
Additional Readings:

Thursday, July 18, 2019

Retire Statistical Significance and p-value?


In March, The American Statistician published a special issue with 43 papers on “Statistical Inference in the 21st Century: A World Beyond p < 0.05”. The discussion about the use of p-values was picked up by the scientific community and triggered a lot of debate. Some of the articles were provocative, calling to “retire” or “abandon” statistical significance. The American Statistical Association's president, Karen Kafadar, has also discussed this issue in her ‘President's Corner’ column.

For a long time, statisticians have cautioned against the misuse of p-values:
  • Don’t become the slave of the p-values
  • Don’t base your conclusions solely on whether an association or effect was found to be “statistically significant” (i.e., the p-value passed some arbitrary threshold such as p < 0.05).
  • Don’t believe that an association or effect exists just because it was statistically significant.
  • Don’t believe that an association or effect is absent just because it was not statistically significant.
  • Don’t believe that your p-value gives the probability that chance alone produced the observed association or effect or the probability that your test hypothesis is true.
  • Don’t conclude anything about scientific or practical importance based on statistical significance (or lack thereof).

The intention of the special issue is to trigger a healthy debate about p-values and statistical significance, to spur the development of better methods, and to provide education about the appropriate use and interpretation of p-values. However, there is a danger of unintended consequences: non-statisticians may be confused about what to do. Worse, “by breaking free from the bonds of statistical significance” as the editors suggest and several authors urge, researchers may read the call to “abandon statistical significance” as “abandon statistical methods altogether”.

Drug development relies on clinical trials to demonstrate substantial evidence of efficacy, and that substantial evidence must come from adequate and well-controlled investigations:
“evidence consisting of adequate and well-controlled investigations, including clinical investigations, by experts, qualified by scientific training and experience to evaluate the effectiveness of the drug involved, on the basis of which it could fairly and responsibly be concluded by such experts that the drug will have the effect it purports or is represented to have under the conditions of use prescribed, recommended, or suggested in the labeling or proposed labeling thereof”

For a common disease, two pivotal studies (each demonstrating statistical significance at alpha = 0.05) have generally been the requirement for FDA (see the FDA guidance "Providing Clinical Evidence of Effectiveness for Human Drug and Biological Products").
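As a rough illustration of why the two-trial standard provides strong evidence (a simplified calculation that ignores within-trial multiplicity and other complications): if each trial is tested at two-sided alpha = 0.05, i.e., 0.025 in the favorable direction, the chance that two independent trials of an ineffective drug would both be falsely positive in the same direction is about 1 in 1,600.

    # Simplified calculation: probability that two independent trials of an
    # ineffective drug are both 'significant' in the favorable direction
    # when each is tested at two-sided alpha = 0.05.
    one_sided_alpha = 0.05 / 2            # 0.025 in the favorable direction
    both_false_positive = one_sided_alpha ** 2
    print(f"{both_false_positive:.6f}")   # 0.000625, roughly 1 in 1,600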

FDA has applied more flexibility in evaluating the evidence for drugs and biological products for treating rare diseases, especially those with unmet medical needs. Frank Sasinowski has two articles discussing this issue.
If we are to avoid the overuse and misuse of p-values, we will need to start with changes in the statutes and in regulatory science.

In addition, scientific journals and editors may judge the value of a paper based on the significance of the results and favor studies with statistically significant findings for publication. However, this may be changing. In the July 18, 2019 issue of the New England Journal of Medicine (NEJM), an editorial was published, "New Guidelines for Statistical Reporting in the Journal":
"The new guidelines discuss many aspects of the reporting of studies in the Journal, including a requirement to replace P values with estimates of effects or association and 95% confidence intervals when neither the protocol nor the statistical analysis plan has specified methods used to adjust for multiplicity."
With NEJM leading the way, other journals may follow. We will see more reporting of confidence intervals and less reliance on p-values.
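As a minimal illustration of this style of reporting, the sketch below (with made-up two-group response counts and a simple normal approximation) reports the estimated effect and its 95% confidence interval alongside the p-value, rather than reporting the p-value alone.

    import math

    # Hypothetical responder counts in treatment and control arms
    x1, n1 = 60, 100
    x2, n2 = 45, 100

    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2                                      # estimated risk difference
    se = math.sqrt(p1*(1-p1)/n1 + p2*(1-p2)/n2)         # Wald standard error
    ci_low, ci_high = diff - 1.96*se, diff + 1.96*se    # 95% confidence interval

    z = diff / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided

    print(f"Risk difference {diff:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f}); p = {p_value:.4f}")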

Generate Real-World Data (RWD) and Real-World Evidence (RWE) for Regulatory Purposes

Real-world data (RWD) and real-world evidence (RWE) have been hot topics in the drug development field and in the statistical field. With new technologies, electronic health records, and big data, it is no surprise that RWD and RWE are widely discussed as a new way to revolutionize clinical trial design and regulation.

FDA has a dedicated webpage for 'Real-World Evidence' and has issued several guidance documents on using real-world evidence to support regulatory approvals.

What is the Definition of RWD and RWE?

Real-world data (RWD) are the data relating to patient health status and/or the delivery of health care routinely collected from a variety of sources. RWD can come from a number of sources, for example:
  • Electronic health records (EHRs)
  • Claims and billing activities
  • Product and disease registries
  • Patient-generated data including in home-use settings
  • Data gathered from other sources that can inform on health status, such as mobile devices
Real-world evidence (RWE) is the clinical evidence regarding the usage and potential benefits or risks of a medical product derived from analysis of RWD. RWE can be generated by different study designs or analyses, including but not limited to, randomized trials, including large simple trials, pragmatic trials, and observational studies (prospective and/or retrospective).

Last week, Duke Margolis Center for Health Policy organized a symposium titled "Leveraging Randomized Clinical Trials to Generate Real-World Evidence for Regulatory Purposes". All presentation slides and the video recordings are available for free.

Day 1 focused on study designs using RWD and RWE for efficacy measures (presentation slides, presentation video)


Day 2 focused on safety monitoring using RWD and RWE (presentation slides, presentation video)


This is not the first time the Duke-Margolis Center for Health Policy has organized an event on this topic. Here are some previous events:

Second Annual Duke-Margolis Conference on Real-World Data and Evidence

Enhancing the Application of Real-World Evidence In Regulatory Decision-Making DAY 1

Sunday, July 14, 2019

Regulations for Clinical Trials, Drug Development, Drug Approvals in China


The article by Bill Wang and Alistair Davidson, "An overview of major reforms in China's regulatory environment", summarizes the background of the regulatory environment in China:
It is widely recognized that China is currently the second largest pharmaceutical market in the world. Historically the regulatory environment in China has been considered a highly challenging one, with: (1) major issues in the areas of comparative quality between international standards and some local products and manufacturers; (2) a timeframe for review and approval of new drugs that is longer than most major countries; and (3) a lack of capacity in the regulatory bodies that has resulted in a backlog of applications. In August 2015, the China State Council issued “Opinions on Reforming the Review and Approval System for Drugs and Medical Devices.” This was partly a result of dialogue with the local and international pharma industry that, for many years, has been pressing for major regulatory reform.1 The overarching intention of this was to “promote the structural adjustment, transformation and upgrade of the pharmaceutical industry and bring marketed products up to international standards in terms of efficacy, safety and quality, so as to better meet the public needs for drugs.” The main practical aims are to: (1) eliminate the existing backlog of registration applications; (2) establish an environment for maximizing the quality of generic drugs; (3) create a framework in China that encourages research and development of new drugs in line with global development; and (4) improve the quality and increase transparency of the review and approval process.
 Additional discussions about the regulatory environment in China for clinical trials and drug development can be found here:

Chinese regulatory agencies have published a flurry of regulations on clinical trials and the drug approval process. Unfortunately, the regulatory guidance and policies are all in Chinese.

In the US, NIH's ClinRegs website provides English-language summaries of the Chinese regulations on clinical trials and drug development:



Regulatory Authorities in China and the US

China:
  • 国家药品监督管理局 (National Medical Products Administration, NMPA); it has just launched its English-language website at http://subsites.chinadaily.com.cn/nmpa/drugs.html
  • 国家药品监督管理局药品审评中心 (Center for Drug Evaluation)
  • 国家药品监督管理局医疗器械技术审评中心 (Center for Medical Device Evaluation)

US:
  • U.S. Food and Drug Administration (FDA)
  • Center for Drug Evaluation and Research (CDER)
  • Center for Devices and Radiological Health (CDRH)


Guidance, Guidelines, and Policies Regarding Clinical Trials, Drug Development, and Drug Approval (links to the Chinese versions, with titles translated into English)

Guidelines for Post-marketing individual case safety reporting (ICSRs) E2B (R3)
Communications for Drug Development and Technical Evaluation (Trial)
Guidance for Accepting Data from Foreign Clinical Trials
Data Protection in Clinical Trials for Drug

Priority Review & Approval Procedure


Guidelines for Drug Application / Registration Submission
Decisions on the Adjustment of Imported Drug Registration
Pediatric Extrapolation

General Considerations to Clinical Trials for Drug

Bioequivalence Evaluation for Generic Drugs
Data Management Planning and Reporting of Statistical Analysis
Biostatistics Principles for Clinical Trials

Data Management Procedure for Clinical Trials
Clinical Trials in Pediatric Population
Self-inspection of Clinical Trial Data
Electronic Data Capture for Clinical Trials
Multi-regional Clinical Trial
Clinical Trial Registry
Adverse Drug Reaction Reporting and Monitoring
Guidance for Quality Control in Clinical Trials

Saturday, June 22, 2019

Historical Control vs. External Control in Clinical Trials

Last week, I had an opportunity to attend the annual ICSA Applied Statistics Symposium in Raleigh, North Carolina. The symposium had a lot of good sessions discussing contemporary statistical issues. Representing the DIA NEED group, we presented a session on "historical control in clinical trials".

What is a historical control?

(v) Historical Control. The results of treatment with the test drug are compared with experience historically derived from the adequately documented natural history of the disease or condition, or from the results of active treatment, in comparable patients or populations. Because historical control populations usually cannot be as well assessed with respect to pertinent variables as can concurrent control populations, historical control designs are usually reserved for special circumstances. Examples include studies of diseases with high and predictable mortality (for example, certain malignancies) and studies in which the effect of the drug is self-evident (general anesthetics, drug metabolism).

“The external control can be a group of patients treated at an earlier time (historical control),…”

In a recent FDA guidance (2019), "Rare Diseases: Common Issues in Drug Development", historical control and external control are used interchangeably:
1. Historical (external) controls
For serious rare diseases with unmet medical need, interest is frequently expressed in using an external, historical, control in which all enrolled patients receive the investigational drug, and there is no randomization to a concurrent comparator group (e.g., placebo/standard of care). The inability to eliminate systematic differences between nonconcurrent treatment groups, however, is a major problem with that design. This situation generally restricts use of historical control designs to assessment of serious disease when (1) there is an unmet medical need; (2) there is a well-documented, highly predictable disease course that can be objectively measured and verified, such as high and temporally predictable mortality; and (3) there is an expected drug effect that is large, self-evident, and temporally closely associated with the intervention. However, even diseases with a highly predictable clinical course and an objectively verifiable outcome measure may have important prognostic covariates that are either unknown or unrecorded in the historical data.
What is the difference between historical control and external control?

Historically, the historical control was one type of external control, distinguished by a time component (patients treated at an earlier time). In more recent guidelines, the terms historical control and external control are used interchangeably, and the concept of historical control has taken on a broader meaning. In terms of clinical trial design and statistical analysis, the same issues apply whether a study uses a historical control or an external control.

Examples of Clinical Trials with Successful use of Historical or External Control

Randomized controlled trials (RCTs) are still the gold standard, but a study with a historical or external control can be used when concurrent controls are impractical or unethical. Many drugs, biological products, and medical devices have been successfully approved or cleared by regulatory agencies for marketing authorization using evidence generated from clinical trials with historical or external controls.

Here are some examples:

Brineura for Batten Disease
Brineura for Batten disease was approved by FDA based on a non-randomized, single-arm study in 22 subjects and a comparison with 42 subjects from a natural history cohort (a historical control group).

Venetoclax for Relapsed/Refractory Chronic Lymphocytic Leukemia

Venetoclax for R/R CLL was approved by FDA based on a single-arm study in 106 subjects, with the overall response rate compared against a 40% response rate that was considered clinically meaningful.
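To illustrate the kind of analysis involved when a single-arm response rate is compared with a fixed, clinically meaningful threshold such as 40%, here is a minimal sketch using an exact binomial calculation. The responder count is made up for illustration and is not the actual trial result.

    from scipy import stats

    # Hypothetical single-arm results: x responders out of n subjects,
    # compared against a 40% response rate considered clinically meaningful.
    n, x, p0 = 106, 85, 0.40       # 85/106 is illustrative only
    orr = x / n

    # Exact one-sided p-value: probability of >= x responders if the true ORR were 40%
    p_value = 1 - stats.binom.cdf(x - 1, n, p0)

    # Exact (Clopper-Pearson) 95% confidence interval for the ORR
    ci_low = stats.beta.ppf(0.025, x, n - x + 1)
    ci_high = stats.beta.ppf(0.975, x + 1, n - x)

    print(f"ORR = {orr:.1%}, 95% CI ({ci_low:.1%}, {ci_high:.1%}), one-sided p vs 40%: {p_value:.2e}")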


Multiple IGIV products were approved based on FDA guidance. The guidance suggested measuring the rate of serious bacterial infections during regularly repeated administration of the investigational IGIV product in adult and pediatric subjects over 12 months (to avoid seasonal biases) and comparing the observed infection rate to a relevant historical standard: a statistical demonstration of a serious infection rate of less than 1.0 per person-year.
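Below is a minimal sketch of the type of calculation this implies: an exact Poisson upper confidence bound on the serious infection rate per person-year, compared against the 1.0 threshold. The event count, the follow-up time, and the choice of a one-sided 99% bound are illustrative assumptions; the actual criterion is specified in the IGIV guidance.

    from scipy import stats

    # Hypothetical results: k serious bacterial infections over T person-years of follow-up
    k, T = 2, 60.0
    alpha = 0.01                    # one-sided 99% upper bound (assumed for illustration)

    rate = k / T
    # Exact Poisson upper confidence bound via the chi-square relationship
    upper = stats.chi2.ppf(1 - alpha, 2 * (k + 1)) / (2 * T)

    print(f"Observed rate = {rate:.3f} per person-year; upper bound = {upper:.3f}")
    print("Meets the < 1.0 per person-year standard" if upper < 1.0 else "Does not meet the standard")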




FDA recently approved the XVIVO XPS EVLP device to help increase access to more lungs for transplant. According to the Summary of Safety and Effectiveness, the PMA approval was based on a single-arm study with a matched control to demonstrate that lung transplants with EVLP lungs were not inferior to the matched control group (all other lungs transplanted at that transplant center during the same time period). The one-year survival rate was compared with the matched control group and also with a large database from UNOS (United Network for Organ Sharing). This is a good example of a study using an 'external control'.








Monday, June 03, 2019

Six-Minute Walk Test (6MWT), 2-Minute Walk Test (2MWT), 12-Minute Walk Test (12MWT), and Timed Walk (T25FW, T10MW)

The Six-Minute Walk Test (6MWT) measures the distance walked in a fixed duration (6 minutes). It has been used as a clinical trial endpoint to measure functional capacity in many therapeutic areas, especially pulmonary diseases (such as COPD and pulmonary hypertension), neurological diseases (such as Duchenne muscular dystrophy), and others (such as the treatment of Mucopolysaccharidosis type VII (MPS VII, Sly syndrome)). The distance measured through the 6MWT is the 'Six-Minute Walk Distance' (6MWD).

Guidelines for Performing Standardized 6MWT

There are several guidelines for performing a standardized 6MWT. The guideline by the American Thoracic Society is the one we usually follow:


The 6MWT is one approach to measuring 'exercise capacity' and is considered a simulated test for measuring function. FDA has a long-standing position that a clinical trial endpoint needs to measure how patients feel, function, or survive. The 6MWT measures patients' function.

In FDA's guidance "Chronic Obstructive Pulmonary Disease: Developing Drugs for Treatment", the 6MWT, along with other exercise capacity measures, is described as follows:
"Exercise capacity. Reduced capacity for exercise is a typical consequence of airflow obstruction in COPD patients, particularly because of dynamic hyperinflation occurring during exercise. Assessment of exercise capacity by treadmill or cycle ergometry combined with lung volume assessment potentially can be a tool to assess efficacy of a drug. Alternate assessments of exercise capacity, such as the Six Minute Walk or Shuttle Walk, also can be used. However, all these assessments have limitations. For instance, the Six Minute Walk test reflects not only physiological capacity for exercise, but also psychological motivation. Some of these assessments are not rigorously precise and may prove difficult in standardizing and garnering consistent results over time. These factors may limit the sensitivity of these measures and, therefore, limit their utility as efficacy endpoints, since true, but small, clinical benefits may be obscured by measurement noise."
History of the Six-Minute Walk Test: 

On "ATS Statement: Guidelines for the Six-Minute Walk Test" contained the following descriptions about the history of 6MWT.
Assessment of functional capacity has traditionally been done by merely asking patients the following: “How many flights of stairs can you climb or how many blocks can you walk?” However, patients vary in their recollection and may report overestimations or underestimations of their true functional capacity. Objective measurements are usually better than self-reports. In the early 1960s, Balke developed a simple test to evaluate the functional capacity by measuring the distance walked during a defined period of time. A 12-minute field performance test was then developed to evaluate the level of physical fitness of healthy individuals. The walking test was also adapted to assess disability in patients with chronic bronchitis. In an attempt to accommodate patients with respiratory disease for whom walking 12 minutes was too exhausting, a 6-minute walk was found to perform as well as the 12-minute walk. A recent review of functional walking tests concluded that “the 6MWT is easy to administer, better tolerated, and more reflective of activities of daily living than the other walk tests”.
History of the Six-Minute Walk Test in Pulmonary Arterial Hypertension:

The 6MWD has been accepted by the FDA as the primary efficacy endpoint in drug development for pulmonary arterial hypertension (PAH). According to a presentation by Dr. Barbara LeVarge, "Exercise physiology and noninvasive assessment in PAH", the use of the 6MWT in PAH started with the clinical development program of epoprostenol.


6MWT versus Timed Walk
I am curious why the 6MWT is a popular measure but the timed walk is not. To assess functional capacity, we can either fix the time and measure the distance (as in the 6MWT), or fix the distance and measure the time (as in the Timed 25-Foot Walk [T25FW] and the Timed 10-Meter Walk [T10MW or 10-MWT]). In sports, for all events in track and field and swimming, we always fix the distance and measure the time.

In terms of measurement accuracy, the timed walk (such as the T25FW and T10MW/10-MWT) seems easier to measure accurately than the 6MWT. For a timed walk, we only need to make sure the recording of the time is accurate, because the distance is fixed. For the 6MWT, we need to make sure the recordings of both time and distance are accurate - although the time is nominally fixed, it usually needs to be measured as well.
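One way to see that the two designs target the same underlying quantity is to convert each result to an average gait speed (distance divided by time); here is a quick sketch with made-up results:

    # Both test types reduce to an average gait speed (distance / time).
    # The results below are made up for illustration.
    def speed_m_per_s(distance_m, time_s):
        return distance_m / time_s

    # Fixed time, measured distance: 6MWT with a six-minute walk distance of 420 m
    six_mwt_speed = speed_m_per_s(420.0, 6 * 60)

    # Fixed distance, measured time: T25FW (25 feet = 7.62 m) completed in 5.8 s
    t25fw_speed = speed_m_per_s(7.62, 5.8)

    print(f"6MWT average speed:  {six_mwt_speed:.2f} m/s")
    print(f"T25FW average speed: {t25fw_speed:.2f} m/s")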

The timed walk is actually used in clinical trials in the neurology area and is accepted by the FDA as a clinical trial endpoint; for example, the timed walk is used to measure the improvement of walking ability in multiple sclerosis patients.

2MWT, 6MWT, and 12MWT 

While the 6MWT is the most commonly used, the 12-Minute Walk Test (12MWT) was used earlier to measure functional capacity, and the 2-Minute Walk Test (2MWT) has also been used in some clinical trials.

Leung et al (2006) did a study to validate the 2MWT in severe COPD, "Reliability, Validity, and Responsiveness of a 2-Min Walk Test To Assess Exercise Capacity of COPD Patients", and they concluded:
The 2MWT was shown to be a reliable and valid test for the assessment of exercise capacity and responsive following rehabilitation in patients with moderate-to-severe COPD. It is practical, simple, and well-tolerated by patients with severe COPD symptoms.
Grifols is currently conducting a pivotal FORCE study "Study of the Efficacy and Safety of Immune Globulin Intravenous (Human) Flebogamma 5% DIF in Patients with Post-Polio Syndrome" where 2MWD is the primary efficacy endpoint.


Monday, May 13, 2019

Pediatric Extrapolation for Pediatric Indication


In a previous post "Pediatric Study Plan (PSP) and Paediatric Investigation Plan (PIP)", we discussed the requirements for PSP and PIP and the importance of incorporating the pediatric investigation plan into the overall clinical development program.
Doing clinical trials in the pediatric population is always challenging. It is not feasible to have a pediatric investigation plan that is too big to implement. Ethically, it is also not wise to expose too many children to clinical trials (especially placebo-controlled trials).
Regulatory agencies (such as FDA and EMA) have recognized the challenges of clinical development programs in children and have issued guidelines that encourage sponsors to use an approach called 'pediatric extrapolation'.
We have already seen some sponsors use pediatric extrapolation to successfully obtain pediatric indications.
We have already seen that some sponsors use the pediatric extrapolation to obtain the pediatric indication successfully.