Saturday, October 18, 2014

The fixed margin method or the two confidence interval method for obtaining the non-inferiority margin

For non-inferiority clinical trials, the key issue is to pre-specify the non-inferiority margin, and the margin has to be based on historical data from studies comparing the active control group with placebo. If there are multiple historical studies comparing the active control with placebo, a meta-analysis will need to be performed. From the meta-analysis, the point estimate and the 95% confidence interval will be obtained.
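As a minimal sketch of that meta-analysis step, a fixed-effect (inverse-variance) pooling of log odds ratios might look like the following; the study estimates and standard errors are made-up placeholders, not results from any real placebo-controlled trials:

```python
import math

# Hedged sketch of a fixed-effect (inverse-variance) meta-analysis of
# odds ratios, pooled on the log scale and back-transformed.
def fixed_effect_meta(log_ors, ses, z=1.96):
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # back-transform the pooled estimate and its 95% CI to the OR scale
    return (math.exp(pooled),
            (math.exp(pooled - z * pooled_se), math.exp(pooled + z * pooled_se)))

# hypothetical log-ORs and standard errors from three placebo-controlled trials
or_hat, (lo, hi) = fixed_effect_meta([-0.25, -0.18, -0.22], [0.10, 0.12, 0.15])
```

The point estimate and the bounds of this pooled 95% confidence interval are what feed into the margin derivation discussed next.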

As indicated in FDA’s guidance "Non-Inferiority Clinical Trials", there are essentially two approaches to derive the non-inferiority margin:

“Having established a reasonable assumption for the control agent’s effect in the NI study, there are essentially two different approaches to analysis of the NI study, one called the fixed margin method (or the two confidence interval method) and the other called the synthesis method. Both approaches are discussed in later sections of section IV and use the same data from the historical studies and NI study, but in different ways.”

The guidance further explained the fixed margin method as:
 “in the fixed margin method, the margin M1 is based upon estimates of the effect of the active comparator in previously conducted studies, making any needed adjustments for changes in trial circumstances. The NI margin is then pre-specified and it is usually chosen as a margin smaller than M1 (i.e., M2), because it is usually felt that for an important endpoint a reasonable fraction of the effect of the control should be preserved. The NI study is successful if the results of the NI study rule out inferiority of the test drug to the control by the NI margin or more. It is referred to as a fixed margin analysis because the past studies comparing the drug with placebo are used to derive a single fixed value for M1, even though this value is based on results of placebo-controlled trials (one or multiple trials versus placebo) that have a point estimate and confidence interval for the comparison with placebo. The value typically chosen is the lower bound of the 95% CI (although this is potentially flexible) of a placebo-controlled trial or meta-analysis of trials. This value becomes the margin M1, after any adjustments needed for concerns about constancy. The fixed margin M1, or M2 if that is chosen as the NI margin, is then used as the value to be excluded for C-T in the NI study by ensuring that the upper bound of the 95% CI for C-T is < M1 (or M2). This 95% lower bound is, in one sense, a conservative estimate of the effect size shown in the historical experience. It is recognized, however, that although we use it as a “fixed” value, it is in fact a random variable, which cannot invariably be assumed to represent the active control effect in the NI study.”

Suppose we are planning to design a non-inferiority study comparing a new experimental thrombolytic agent with an existing thrombolytic agent. The historical evidence comes from the meta-analysis "Thrombolysis for acute ischaemic stroke", which reported:

“Thrombolytic therapy, mostly administered up to six hours after ischaemic stroke, significantly reduced the proportion of patients who were dead or dependent (modified Rankin 3 to 6) at three to six months after stroke (odds ratio (OR) 0.81, 95% confidence interval (CI) 0.72 to 0.90).”


OR (Existing Thrombolytic Agent / Placebo): 0.90 is the upper bound of the 95% confidence interval

1 − 0.90 = 0.10 is the treatment effect of the existing thrombolytic agent in reducing the proportion of patients with an unfavorable outcome

If we plan to do a trial comparing the new thrombolytic agent with the existing thrombolytic agent and would like to preserve 50% of the treatment effect of the existing thrombolytic agent, the non-inferiority margin would be calculated as:


(New Thrombolytic Agent / Placebo) ÷ (Existing Thrombolytic Agent / Placebo) = (0.90 + 0.10/2) / 0.90 ≈ 1.06

The non-inferiority margin would be 1.06.
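The fixed-margin calculation above can be written out as a short script. The 0.90 upper bound and the 0.10 effect come from the meta-analysis quote; preserving 50% of the effect is the design choice:

```python
upper_bound = 0.90            # upper 95% CI bound of OR (active control / placebo)
effect = 1 - upper_bound      # conservative estimate of the control effect: 0.10
preserve = 0.50               # fraction of the control effect to preserve

# allow the new agent to give up half of the effect, expressed as an OR
ni_margin = (upper_bound + effect * (1 - preserve)) / upper_bound
print(round(ni_margin, 2))    # → 1.06
```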

From the non-inferiority trial comparing the new thrombolytic agent with the existing thrombolytic agent, we will need to calculate the 95% confidence interval for the odds ratio (New Thrombolytic Agent / Existing Thrombolytic Agent). We then compare the upper bound of this 95% confidence interval with the non-inferiority margin of 1.06 calculated above. Non-inferiority can be declared if the upper bound of this 95% confidence interval is below 1.06. This is why the fixed margin method is also called the two confidence interval method: two confidence intervals are involved in the study design. The first 95% confidence interval is from the comparison of the active control with placebo in the historical data; the second 95% confidence interval is from the comparison of the new experimental treatment with the active control in the new non-inferiority trial.
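A hedged sketch of the second confidence interval step: compute a Wald 95% CI for the odds ratio from the NI trial's 2x2 table and compare its upper bound with the margin. The counts below are hypothetical, chosen only to illustrate that a fairly large sample is needed for the upper bound to fall below 1.06:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald 95% CI for the odds ratio from a 2x2 table:
    a/b = unfavourable/favourable outcomes on the new agent,
    c/d = unfavourable/favourable outcomes on the active control."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

NI_MARGIN = 1.06
# hypothetical trial counts by arm
lo, hi = odds_ratio_ci(1800, 4200, 1850, 4150)
non_inferior = hi < NI_MARGIN
```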

Several comments on the fixed margin method:

1. Depending on whether the outcome is good or bad, either the lower bound or the upper bound of the 95% confidence interval of the active control versus placebo should be used when deriving the non-inferiority margin.

2. Depending on whether the statistic is a numeric difference (difference between two means) or a ratio (for example, odds ratio, risk ratio, hazard ratio), the treatment effect M1 is based on the 95% CI bound's distance from 0 (for a difference) or from 1 (for a ratio).


3. While it is typical to choose an M2 (non-inferiority margin) that preserves at least 50% of the treatment effect of the active control in comparison with placebo, the 50% figure may be adjusted depending on the disease. In the thrombolytic treatment for ischaemic stroke setting, it may be acceptable to preserve only 30-40% of the treatment effect of the active control. In other words, in terms of assay sensitivity, we are willing to accept the loss of a large percentage of the active control's treatment effect (over the historical placebo) in order to have a reasonable non-inferiority margin and a feasible sample size for the clinical trial.

4. In some therapeutic areas (for example, antibacterial and orphan disease areas), there are no historical data to support a statistical justification of the non-inferiority margin, and no data are available to calculate the first 95% confidence interval used in deriving the margin.
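Comment 2 above can be put in code. Note that for ratio measures the preserved-fraction step is sometimes applied on the log scale instead of the raw scale, which gives a slightly different margin (about 1.054 rather than 1.06 for the example above); all numbers here are illustrative:

```python
import math

# M1 is the conservative CI bound's distance from the null value:
# 0 for a difference in means, 1 for a ratio.
def m2_difference(lower_bound, preserve=0.5):
    """NI margin when the control beats placebo by at least `lower_bound`
    on the difference scale (null = 0)."""
    return lower_bound * preserve

def m2_ratio_log_scale(upper_bound, preserve=0.5):
    """NI margin for a 'bad outcome' ratio (null = 1), preserving a
    fraction of the effect on the log scale."""
    return math.exp(-math.log(upper_bound) * (1 - preserve))

diff_margin = m2_difference(3.2)          # e.g. control beats placebo by >= 3.2 units
ratio_margin = m2_ratio_log_scale(0.90)   # ~1.054 (vs 1.06 on the raw scale)
```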

Saturday, September 13, 2014

N of 1 Clinical Trial Design and its Use in Rare Disease Studies

In the beginning (February) of this year, I attended a workshop titled “Clinical Trial Design for Alpha-1 Deficiency: A Model for Rare Diseases”. During the meeting, the N of 1 design was mentioned as one of the study methods to address the challenges in clinical trials in rare disease areas.

This was echoed in FDA’s “Public Workshop – Complex Issues in Developing Drug and Biological Products for Rare Diseases”. Session 2: “Complex Issues for Trial Design: Study Design, Conduct and Analysis” had some extensive discussions about the N of 1 trial design and its potential use in rare disease clinical trials.

In a presentation by Dr. Temple of FDA titled “The Regulatory Pathway for Rare Diseases Lessons Learned from Examples of Clinical Study Designs for Small Populations”, the N of 1 study design was mentioned along with other methods such as randomized withdrawal, enrichment, and crossover designs.

According to Wikipedia, “an N of 1 trial is a clinical trial in which a single patient is the entire trial, a single case study. A trial in which random allocation can be used to determine the order in which an experimental and a control intervention are given to a patient is an N of 1 randomized controlled trial. The order of experimental and control interventions can also be fixed by the researcher. “

While N of 1 designs are not commonly used in clinical trials, the concept of focusing on the single patient is actually pretty common in the clinical trial setting. There are some similarities between the aggregated N of 1 design and the typical crossover design, especially the higher-order crossover design. For safety assessment in clinical trials, challenge – dechallenge – rechallenge (CDR) is often used to assess whether an event is indeed caused by the drug. CDR can be considered a simple N of 1 design.
“Challenge–dechallenge–rechallenge (CDR) is a medical testing protocol in which a medicine or drug is administered, withdrawn, then re-administered, while being monitored for adverse events at each stage. The protocol is used when statistical testing is inappropriate due to an idiosyncratic reaction by a specific individual, or a lack of sufficient test subjects and unit of analysis is the individual. During the withdraw phase, the medication is allowed to wash out of the system in order to determine what effect the medication is having on an individual.
 CDR is one means of establishing the validity and benefits of medication in treating specific conditions as well as any adverse drug reactions. The Food and Drug Administration of the United States lists positive dechallenge reactions (an adverse event which disappears on withdrawal of the medication) as well as negative (an adverse event which continues after withdrawal), as well as positive rechallenge (symptoms re-occurring on re-administration) and negative rechallenge (failure of a symptom to re-occur after re-administration). It is one of the standard means of assessing adverse drug reactions in France.”
While an N of 1 trial is an experiment on a single patient, aggregated single-patient (N-of-1) trials involve multiple patients, so quantitative analyses become more feasible.
N of 1 clinical trials can also involve some complicated statistical analyses.
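As a minimal sketch of what an aggregated N of 1 analysis might look like: average each patient's within-patient drug-minus-placebo cycle differences, then test whether the average within-patient effect differs from zero. All data below are hypothetical, and a mixed-effects model would be the fuller analysis:

```python
import math
from statistics import mean, stdev

# Each inner list holds one patient's per-cycle (drug - placebo) differences
# from several treatment/placebo cycles; all values are made up.
patients = [
    [1.2, 0.8, 1.5],
    [0.4, 0.9, 0.6],
    [1.1, 1.3, 0.7],
    [0.2, 0.5, 0.8],
]

# one summary effect per patient, then a one-sample t statistic (n - 1 df)
effects = [mean(p) for p in patients]
n = len(effects)
t = mean(effects) / (stdev(effects) / math.sqrt(n))
```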
N of 1 clinical trial design is rarely discussed at statistical conferences, perhaps because of the perception that not much statistics is involved in the analysis of N of 1 study data. However, an N of 1 study can be a very effective method for demonstrating efficacy if the characteristics of the indication and drug fit.
One of the key issues is that the N of 1 study design is only applicable in certain situations: it depends on the disease characteristics, the treatment (a short washout period), and the endpoint (quick measurements). Some discussion of the situations where the N of 1 design may be used can be found in the transcripts of the FDA Public Workshop on Complex Issues in Rare Disease Drug Development, session “Complex Issues for Trial Design: Study Design, Conduct and Analysis”:
“Ellis Unger: We have no ... No comments right now, so let me put a question to the group. I  presented us a slide on the N of 1 study, which we almost never see. Just to remind you, the N of 1 study is a scenario where a patient doesn't contribute and end, but of course the treatment contributes an end, and of course the treatment can be capped in a certain number above ... weeks.
 Unless someone has the amount of interest, in which case, you give up on that course, that aborts that course to treatment and then they re-randomize. Are there therapies, disease states people around the table can think off that would be ... where this design could be applicable, because we don't see these studies. Dr. Walton?
 Marc Walton: I'll just mention that by firing away in all the clinical trials I've reviewed, the most powerful piece of evidence about the effectiveness of a drug came from a N of 1 type of study where it was a study with Pulmozyme cystic fibrosis where patients were treated Pulmozyme that are pulmonary function tested, then the Pulmozyme was discontinued and then tested again, and then several cycles, and I think it was maybe five cycles and you saw such remarkably reproducible effects that it was utterly convincing that the drug was effective for that.
The utility comes about though when you have, as you have said, a disorder that has enough stability and drugs that have a short enough washout period, that you are able to have that repeatedly look as if it was a new exposure to the patient. In disorders where we have that and treatments that are expected to have that sort of reversible effect, this N of 1 becomes a truly powerful piece of information, as well worth considering when those circumstances present themselves.
 Ellis Unger: Typically, a company will come in and say, "You randomize to our treatment or placebo. We're going to count the number of exacerbations or pain episodes or whatever over the course of the study." This is basically saying once you have one of these events, we're going to re-randomize you. Just again, so anybody around the room ... Okay, Dr. Summar.
 Marshall Summar: Yeah, it seems like from the intermediary metabolism, the effects where you have frequent attacks of hyperammonemia, acidosis, things like that, that actually might be a fairly ideal group washout for most of the treatment is pretty fast. That seems like a group where that might actually play out pretty well. I have to think about that but it seems to make some sense.
 Ellis Unger: Dr. Kakkis?
Nicole Hamblett: Thanks. I think the N of 1s, studies are incredibly intriguing and I think one thing I need to wrap my head around is the consigning by commons in medications, for instance, if you're measuring exacerbations during on and off periods in their treatment for that event could alter what's going to happen during the next events. I think that's a little bit difficult to the chronic study, but I guess I also wonder what are the parameters for being able to use an N of 1 study or N of 1 studies for your pivotal trial, as well as difficult enough to conduct confirmatory study. How would we define that for these types of newer or more customized study designs?
 Ellis Unger: Well, I think the N of 1 study, again, you have to have a treatment where there's an offset that's reasonably rapid and you're not expecting the effect on the disease to be ... the effect is not lasting. It's not like Dr. Hyde was mentioning in a gene therapy, as that would be the extreme opposite where you couldn't do this, but if you have something where there's an offset in a reasonable amount of time and patients are subjected to repeated events, I think that's the key.
 If it's progression and it happens slowly with time, you're not going to be able to do an N of 1 study, but if you have some episodic issue and you have a drug with a reasonable offset, I think it will lends itself to this and we're talking about a dozen patients to do a study, the whole deal, and that could be your phase 3 study. I mean that the example I showed was just about a dozen patients. You don't need a lot of patients. “

It is clear that the N of 1 study design is not appropriate for a study with an efficacy endpoint measured over a very long period of time. The N of 1 design may be applicable for short-term endpoints (biomarkers, metabolites, …). However, over the last 10-20 years, regulatory agencies have been moving toward long-term endpoints. For an enzyme replacement therapy, a drug showing an increase in enzyme level would have been considered sufficient for approval 20 years ago; nowadays, an endpoint measuring long-term clinical benefit may be required. Similarly, for a thrombolytic agent, it is not sufficient to show thrombolysis in the short term; the long-term benefit of the agent will be required. This trend of requiring long-term measures in efficacy endpoints makes the N of 1 study design unlikely to be used in licensure studies.

Saturday, September 06, 2014

Full Analysis Set and Intention-to-Treat Population in Non-randomized Clinical Trials?

The intention-to-treat principle has become a routine term in the statistical analysis of randomized, controlled clinical trials. If a publication describes a randomized, controlled clinical trial, it is almost universal that the intention-to-treat principle will be mentioned, even though the actual analysis may not exactly follow the principle in some studies.

Strictly speaking, the intention-to-treat principle indicates that the intention-to-treat population includes all randomized patients in the groups to which they were randomly assigned, regardless of their adherence to the entry criteria, regardless of the treatment they actually received, and regardless of subsequent withdrawal from treatment or deviation from the protocol. See one of my earlier articles and the presentation on ITT versus mITT.

According to ICH E9 “STATISTICAL PRINCIPLES FOR CLINICAL TRIALS”, the Full Analysis Set (FAS) is essentially identical to the Intention-to-Treat (ITT) population. It states:

“The intention-to-treat (see Glossary) principle implies that the primary analysis should include all randomised subjects. Compliance with this principle would necessitate complete follow-up of all randomised subjects for study outcomes. In practice this ideal may be difficult to achieve, for reasons to be described. In this document the term 'full analysis set' is used to describe the analysis set which is as complete as possible and as close as possible to the intention-to-treat ideal of including all randomised subjects.”

Here both the FAS and the ITT population are tied to randomization. However, in the real world, there are also many non-randomized trials: for example, a clinical study without a concurrent control, an early phase dose escalation study, or a long-term safety follow-up study where all subjects receive the experimental medication. In these situations, since there is no randomization, it is inappropriate to define an ITT population, even though the general principle remains the same, i.e., to retain as many subjects as possible to avoid bias. The issue is: without randomization, what will be the trigger point for defining the ITT population? It appears that the trigger point could be the time of administration of the first study medication. Instead of allocating subjects to ITT ‘once randomized’, the subject is in ITT ‘once dosed’. This seems to be the case in the following example: according to CSL’s RIASTAP summary basis of approval, the ITT population was defined for a study without a concurrent control and without randomization. It implied that the ITT population includes all subjects who received the study medication, which is essentially the same as the safety population.

For non-randomized studies, it may be better to use the Full Analysis Set instead of an ITT population. It seems logical to define the full analysis set to include any subject who receives any amount of the study medication. If this definition is used, with the trigger point being the first dose of the study medication, the full analysis set and the safety population will most likely be identical. It is not uncommon to define two populations that are identical but use them for different analyses: for safety analyses, the safety population is used; for efficacy analyses, the full analysis set is used.

Another term we can use in non-randomized studies is the Evaluable Population, usually defined as all subjects who receive any amount of the study medication and have at least one post-baseline efficacy measurement. The evaluable population in non-randomized clinical trials is similar to the modified ITT population in randomized clinical trials, where some randomized subjects are excluded from the analysis with justifiable rationale.
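The population definitions above can be sketched as simple flag logic for a non-randomized study; the subject records and field names here are hypothetical:

```python
# In a non-randomized study the trigger point is the first dose, so the
# safety population and the full analysis set coincide; the evaluable
# population additionally requires a post-baseline efficacy measurement.
subjects = [
    {"id": 1, "dosed": True,  "post_baseline_efficacy": True},
    {"id": 2, "dosed": True,  "post_baseline_efficacy": False},
    {"id": 3, "dosed": False, "post_baseline_efficacy": False},  # consented, never dosed
]

safety_pop = [s["id"] for s in subjects if s["dosed"]]
full_analysis_set = list(safety_pop)        # identical by construction
evaluable_pop = [s["id"] for s in subjects
                 if s["dosed"] and s["post_baseline_efficacy"]]
```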

While ICH E9 does not use the term ‘modified intention-to-treat’, the following paragraphs are intended to provide guidelines and examples for when subjects can be excluded from the full analysis set or intention-to-treat population:

“There are a limited number of circumstances that might lead to excluding randomised subjects from the full analysis set including the failure to satisfy major entry criteria (eligibility violations), the failure to take at least one dose of trial medication and the lack of any data post randomisation. Such exclusions should always be justified. Subjects who fail to satisfy an entry criterion may be excluded from the analysis without the possibility of introducing bias only under the following circumstances:
(i) the entry criterion was measured prior to randomisation;
(ii) the detection of the relevant eligibility violations can be made completely objectively;
(iii) all subjects receive equal scrutiny for eligibility violations; (This may be difficult to ensure in an open-label study, or even in a double-blind study if the data are unblinded prior to this scrutiny, emphasising the importance of the blind review.)
(iv) all detected violations of the particular entry criterion are excluded.
In some situations, it may be reasonable to eliminate from the set of all randomised subjects any subject who took no trial medication. The intention-to-treat principle would be preserved despite the exclusion of these patients provided, for example, that the decision of whether or not to begin treatment could not be influenced by knowledge of the assigned treatment. In other situations it may be necessary to eliminate from the set of all randomised subjects any subject without data post randomisation. No analysis is complete unless the potential biases arising from these specific exclusions, or any others, are addressed.
Because of the unpredictability of some problems, it may sometimes be preferable to defer detailed consideration of the manner of dealing with irregularities until the blind review of the data at the end of the trial, and, if so, this should be stated in the protocol.”
In summary, while the general principle is the same, different terms may be preferred depending on whether a study is randomized or non-randomized.

Randomized studies      Non-randomized studies
------------------      ----------------------
ITT                     Full Analysis Set
Safety                  Safety
mITT                    Evaluable


Friday, August 29, 2014

Subgroup Analysis in Clinical Trials - Revisited

I previously wrote an article about sub-group analysis in clinical trials; I would like to revisit this topic here. Subgroup analysis has recently been one of the regular discussion topics at statistical conferences. The pitfalls of subgroup analyses are well understood in the statistical community. However, subgroup analyses in the regulatory setting for product approval, in multi-regional clinical trials, and in confirmatory trials remain quite complicated.

EMA is again ahead of FDA in issuing regulatory guidelines on this topic. Following an expert workshop on subgroup analysis, EMA issued a draft guideline titled “Guideline on the investigation of subgroups in confirmatory clinical trials”. In addition to general considerations, it provides guidance on issues to be addressed during the study planning stage and during the assessment stage.

In practice, sub-group analysis is almost always conducted. For a study with negative results, the purpose is usually to see whether there is a sub-group in which statistically significant results can be found. For a study with positive results, the purpose is usually to see whether the result is robust across different sub-groups. Sub-group analysis is not performed only in industry-sponsored trials; it may be performed even more often in academic clinical studies for publication purposes.

Sometimes it is not easy to explain the caveats of sub-group analysis (especially unplanned sub-group analysis) to non-statisticians. Explaining the issues requires a good understanding of multiplicity adjustments and statistical power. I recently saw some presentation slides in which the pitfalls of sub-group analysis were well explained, as in the table below. Either way can make the sub-group analysis results unreliable.


Dr George (2004), “Subgroup analyses in clinical trials”
When H0 is true
Increased probability of type I error
Too many “differences”
  • Because the probability of each “statistically significant difference” not being real is 5%
  • So lots of 5% all add together
  • Some of the apparent effects (somewhere) will not be real
  • We have no way of knowing which ones are and which ones aren’t
When H1 is true
Decreased power (increased type II error) in individual subgroup
  • Not enough “differences”
  • The more data we have, the higher the probability of detecting a real effect (“power”)
  • But sub-group analyses “cut the data”
  • Trials are expensive and we usually fix the size of the trial to give high “power” to detect important differences overall (primary efficacy endpoint)
  • When we start splitting the data (only look at men, or only look at women, or only look at renally impaired; or only look at the elderly; etc., etc.), the sample size is smaller … the power is much reduced 
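The "lots of 5% all add together" point in the table can be quantified: under the null hypothesis, with K independent subgroup tests each at alpha = 0.05, the chance of at least one spurious "significant" finding is 1 - (1 - 0.05)^K. A minimal illustration:

```python
# Familywise type I error across K independent subgroup tests when no
# true effect exists anywhere (the "when H0 is true" column above).
def familywise_error(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(k, round(familywise_error(k), 2))
```

With 10 subgroups the chance of at least one false positive is already about 40%, and with 20 it is about 64%.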

In clinical trials for licensure, regulatory agencies such as FDA may require sub-group analyses (planned or unplanned) to see whether the results are consistent across different sub-groups or whether there are different risk-benefit profiles across sub-groups. The reviewers may also perform their own sub-group analyses. However, they are aware of the pitfalls of these analyses. The recently approved Zontivity is a great example of this exact issue; see the Pink Sheet article "FDA Changed Course On Zontivity Because Of Skepticism Of Subgroups At High Levels". Initially, FDA reviewers performed sub-group analyses and identified that subjects weighing less than 60 kg had a different risk-benefit profile compared with subjects weighing more than 60 kg. An advisory committee meeting was organized to discuss whether the approved indication should be limited to the specific sub-group. However, FDA eventually changed course and did not impose a label restriction to a specific sub-group, commenting that “The point is that one has to be careful not to over-interpret these subgroup findings.”









Friday, August 15, 2014

SAE Reconciliation and Determining/recording the SAE Onset Date


Traditionally, clinical operations and drug safety / pharmacovigilance departments have elected to independently collect somewhat different sets of safety data from clinical trials. For serious adverse events (SAEs), the drug safety / pharmacovigilance department collects information through the SAE form, and the information is maintained in a safety database. In the clinical operations or data management departments, adverse events (AEs) including SAEs are collected on case report forms (CRFs), or eCRFs in an EDC study. For SAEs, the information in the safety database and the clinical database comes from the same source (the investigational sites). During the study or at its end, the key fields regarding SAEs in the two independently maintained databases will need to be reconciled, and the key data fields must match in both databases.

A poster by Chamberlain et al, “Safety Data Reconciliation for Serious Adverse Events (SAE)”, nicely describes the SAE reconciliation process. They stated that among the fields to be reconciled, “some will require a one to one match with no exception, while some may be deemed as acceptable discrepancies based on logical match.”

They also gave examples of fields that require an exact match versus a logical determination in their Table 1.



Among these fields, the onset date is the one that usually causes problems, due to different interpretations of the regulatory guidelines by the clinical operations and drug safety/pharmacovigilance departments. The onset date of an SAE could be reported as the first date when signs and symptoms appear, or as the date when the event meets one of the following SAE criteria (as defined in ICH E2A):

* results in death,
* is life-threatening,
* requires inpatient hospitalisation or prolongation of existing hospitalisation,
* results in persistent or significant disability/incapacity, or
* is a congenital anomaly/birth defect.

Klepper and Edwards did a survey and published the results in their paper “Individual Case Safety Reports – How to Determine the Onset Date of an Adverse Reaction”. The results indicated variability in determining the onset date of a suspected adverse reaction. They recommended that a criterion for onset time, i.e., the beginning of signs or symptoms of the event, or the date of diagnosis, be chosen as the standard.

However, many companies and organizations (such as NIH and NCI) indicate in their SAE completion guidelines that the event start date should be the date when the event satisfied one of the seriousness criteria (for example, if the criterion “required hospitalization” was met, the date of admission to the hospital would be the event start date). If the event started prior to becoming serious (was less severe), it should be recorded on the AE page as a non-serious AE with a different severity.

In the NIDCR Serious Adverse Event Form Completion Instructions, the SAE onset date is to record the date that the event became serious.

In the SAE Recording and Reporting Guidelines for Multiple Study Products by the Division of Microbiology and Infectious Disease, NIH, the onset date of an SAE is instructed to be the date the investigator considers the event to meet one of the serious categories.

In the HIV Prevention Trials Network, the Adverse Event Reporting and Safety Monitoring section indicates that

“If an AE increases in severity or frequency (worsens) after it has been reported on an Adverse Experience Log case report form, it must be reported as a new AE, at the increased severity or frequency, on a new AE Log. In this case, the status outcome of the first AE will be documented as “severity/frequency increased.” The status of the second AE will be documented as “continuing”. The outcome date of the first AE and the onset date of the new (worsened) AE should be the date upon which the severity or frequency increased.”

In the Serious Adverse Event Form Instructions for Completion by the National Cancer Institute Division of Cancer Prevention, the event onset date is to be entered as the date the outcome of the event fulfilled one of the serious criteria.

In “Good Clinical Practice Q&A: Focus on Safety Reporting” in the Journal of Clinical Research Best Practice, the following example is given for reporting the SAE onset date:

“What would an SAE’s onset date be if a patient on study develops symptoms of congestive heart failure (CHF) on Monday and is admitted to the hospital the following Friday?

If known, the complete onset date (month-day-year) of the first signs and/or symptoms of the most recent CHF episode should be recorded. In this case, it would be Monday. If the onset date of the first signs and/or symptoms is unknown, the date of hospitalization or diagnosis should be recorded.”

If the SAE onset date is recorded as the date when one of the SAE criteria is met (this seems to be more popular in practice), it may essentially require splitting the event. If an event starts as non-serious and later meets one of the serious criteria, the same event will be recorded as two events: one as a non-serious event with the onset date being the first sign/symptom date, and one as a serious adverse event with the onset date being the date when one of the SAE criteria is met. This approach therefore results in a later onset date and a shorter SAE duration, but it double counts what is arguably the same event.


If the SAE onset date is recorded as the date when the first sign or symptom appears, it will result in an earlier onset date and a longer SAE duration. Since SAE reporting to the regulatory authorities / IRBs is based on the SAE onset date, this approach is more stringent in meeting the SAE reporting requirement.
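The effect of the two conventions on the recorded onset date and duration can be sketched with hypothetical dates (all dates below are illustrative, not taken from any real case; the CHF example above is reused for the timeline):

```python
from datetime import date

# Hypothetical timeline: first signs/symptoms on Monday, hospitalization
# (a seriousness criterion) on Friday, resolution two weeks later.
first_symptom_date = date(2014, 7, 7)     # Monday
hospitalization_date = date(2014, 7, 11)  # Friday
resolution_date = date(2014, 7, 25)

# Convention 1: SAE onset = date a seriousness criterion was met.
# The pre-serious portion is recorded as a separate non-serious AE.
sae_onset_1 = hospitalization_date
sae_duration_1 = (resolution_date - sae_onset_1).days + 1   # 15 days

# Convention 2: SAE onset = date of the first signs/symptoms.
sae_onset_2 = first_symptom_date
sae_duration_2 = (resolution_date - sae_onset_2).days + 1   # 19 days

print(sae_duration_1, sae_duration_2)
```

Under convention 1 the same clinical episode contributes one non-serious AE plus a shorter SAE; under convention 2 there is a single, longer SAE with the earlier onset date.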

Tuesday, July 15, 2014

Pre-screening procedures in clinical trials

For typical clinical trials, the study design will include a screening period before randomization or the start of the study medication. The purpose of the screening period is to ensure that only subjects who meet the inclusion/exclusion criteria are included in the study. The screening period usually runs from the signing of the informed consent to randomization or the start of the study medication.

In some situations, depending on the timing of the informed consent and the randomization, a pre-screening period may be needed. For a study with both pre-screening and screening periods, the pre-screening period filters out the majority of subjects who do not meet a simple inclusion criterion before the formal screening procedures are performed. In this way, the screening failure rate can be significantly lowered. Typically, the pre-screening results are recorded on a pre-screening log rather than on the study case report forms, which minimizes the data-recording burden for screen-failure subjects when the screening failure rate would otherwise be very high.

1) Pre-screening is performed before the formal screening procedures.
2) With a pre-screening process, there are actually two steps for screening subjects for the study.
3) Depending on what kind of pre-screening procedures are performed, the pre-screening may or may not require informed consent from the patients.
4) If consent is required for pre-screening, there will be two informed consents for the study: one for pre-screening and one for formal screening.
5) The results from pre-screening may be recorded on a pre-screening sheet instead of the case report forms. This is especially true if the pre-screening only involves review of patient medical charts and of non-study-specific test results. In this way, we can avoid recording a large amount of data for pre-screening-failure subjects, which would not be valuable to the objectives of the clinical study.

In the PROACT II study, 12,323 stroke patients were screened in order to have 180 subjects eligible for randomization. It would be impractical to consent all 12,323 patients and record information on each of them; here a pre-screening process is very useful. Since the pre-screening procedures involved only non-study-specific procedures (procedures that would be performed for any stroke patient anyway), no separate informed consent was needed.
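Using the PROACT II numbers above, the screen failure rate can be worked out directly (a trivial arithmetic sketch):

```python
# PROACT II figures from the text: 12,323 stroke patients screened
# to end up with 180 subjects eligible for randomization.
screened = 12323
randomized = 180

failure_rate = 100 * (screened - randomized) / screened
print(f"screen failure rate: {failure_rate:.1f}%")  # 98.5%
```

With roughly 98.5% of screened patients failing to qualify, recording full case report form data on every screened patient would be enormously wasteful, which is exactly what a pre-screening log avoids.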

There is no formal regulatory guidance on the pre-screening procedure; however, FDA’s opinion can be seen in “Screening Tests Prior to Study Enrollment - Information Sheet - Guidance for Institutional Review Boards and Clinical Investigators”:
For some studies, the use of screening tests to assess whether prospective subjects are appropriate candidates for inclusion in studies is an appropriate pre-entry activity. While an investigator may discuss availability of studies and the possibility of entry into a study with a prospective subject without first obtaining consent, informed consent must be obtained prior to initiation of any clinical procedures that are performed solely for the purpose of determining eligibility for research, including withdrawal from medication (wash-out). When wash-out is done in anticipation of or in preparation for the research, it is part of the research.
Procedures that are to be performed as part of the practice of medicine and which would be done whether or not study entry was contemplated, such as for diagnosis or treatment of a disease or medical condition, may be performed and the results subsequently used for determining study eligibility without first obtaining consent. On the other hand, informed consent must be obtained prior to initiation of any clinical screening procedures that is performed solely for the purpose of determining eligibility for research. When a doctor-patient relationship exists, prospective subjects may not realize that clinical tests performed solely for determining eligibility for research enrollment are not required for their medical care. Physician-investigators should take extra care to clarify with their patient-subjects why certain tests are being conducted.
Clinical screening procedures for research eligibility are considered part of the subject selection and recruitment process and, therefore, require IRB oversight. If the screening qualifies as a minimal risk procedure [21 CFR 56.102(i)], the IRB may choose to use expedited review procedures [21 CFR 56.110]. The IRB should receive a written outline of the screening procedure to be followed and how consent for screening will be obtained. The IRB may find it appropriate to limit the scope of the screening consent to a description of the screening tests and to the reasons for performing the tests including a brief summary description of the study in which they may be asked to participate. Unless the screening tests involve more than minimal risk or involve a procedure for which written consent is normally required outside the research context, the IRB may decide that prospective study subjects need not sign a consent document [21 CFR 56.109(c)]. If the screening indicates that the prospective subject is eligible, the informed consent procedures for the study, as approved by the IRB, would then be followed.
Certain clinical tests, such as for HIV infection, may have State requirements regarding (1) the information that must be provided to the participant, (2) which organizations have access to the test results and (3) whether a positive result has to be reported to the health department. Prospective subjects should be informed of any such requirements and how an unfavorable test result could affect employment or insurance before the test is conducted. The IRB may wish to confirm that such tests are required by the protocol of the study.
A document issued by partners.org states that pre-screening information can be recorded on pre-screening sheets:
V. Retaining Information from Individuals who are Pre-Screened but not Enrolled
It is acceptable to retain non-identifying information about individuals who are pre-screened for a study, but do not actually pursue the study or enroll. In fact, this is often desirable or even requested by industrial or academic sponsors to obtain information about the entire pool of individuals interested or potentially eligible for the study. Pre-screening sheets from individuals who did not provide identifying information can be retained with no further action. Pre-screening sheets with identifying information gathered to obtain written authorization and prior to enrollment (signing of informed consent form) may also be retained in research files, but must have segments containing identifiable information blacked out or cut off as soon as it is clear that the individual will not be enrolled. If identifiable health information is to be retained, the investigator must obtain an authorization from each of the persons screened.
Separating the pre-screening and screening periods may serve different purposes. For example, in an IBS-D study, both pre-screening and screening periods were included. Informed consent was required for the pre-screening since the pre-screening procedures went beyond typical patient care. The formal screening period was used to establish the baseline, since the baseline measure required recording daily symptoms over a 7- to 14-day period.
“this study consisted of an initial prescreening period, a screening period of 2 to 3 weeks, a 12-week double-blind treatment period, and a 2-week post-treatment period. During the 1-week prescreening period, patients underwent a physical examination, provided blood and urine for routine testing, and discontinued any prohibited medications. Patients who met the inclusion and exclusion criteria entered the screening period and began using an interactive voice response system (IVRS) to provide daily symptom assessments. After the screening period of 2-3 weeks, patients who continued to meet eligibility criteria and were compliant with the IVRS system for at least 6 of 7 days during the week before and 11 of 14 days during the 2 weeks before were randomized in parallel…..”

Saturday, July 12, 2014

Clinical Trial Registries in US, EU, WHO, and other countries

For the past decade, there has been increasing pressure on pharmaceutical companies to be transparent about their clinical trials and the trial results. It is considered poor practice for a pharmaceutical company to publish only the clinical trials with positive results and hide those with negative results.

On September 27, 2007, President George W. Bush signed U.S. Public Law 110-85. The law includes a section on clinical trial databases (Title VIII) that expands the types of clinical trials that must be registered in ClinicalTrials.gov, increases the number of data elements that must be submitted, and also requires submission of certain results data.

After the law was enacted, clinical trial registries blossomed. In the US, applicable clinical trials are required to be registered in the ClinicalTrials.gov database.

In the early days, some companies took a perfunctory approach to complying with the registration requirements and provided very little meaningful information in the registries. For example, one clinical trial entry listed the Primary Outcome Measure simply as ‘Efficacy’ and the Secondary Outcome Measure as ‘Safety’, instead of specifying what the outcomes actually were.

Over time, companies have become more serious about clinical trial registration. For companies that do not comply with the registration requirements, there are penalties:
“FDAAA 801 establishes penalties for Responsible Parties who fail to comply with registration or results submission requirements. Penalties include civil monetary penalties and, for federally funded studies, the withholding of grant funds.

See the statutory provisions amending Civil Money Penalties (PDF) and Clinical Trials Supported by Grants From Federal Agencies (PDF). “
More importantly, clinical trial registration is now becoming a requirement for publication. It is difficult to get clinical trial results published in a medical journal if the study was not registered at the time the trial started. The International Committee of Medical Journal Editors (ICMJE) has indicated which registries are acceptable.
ICMJE also stated:
“In addition to the above registries, starting in June 2007 the ICMJE will also accept registration in any of the primary registries that participate in the WHO International Clinical Trials Portal (see http://www.who.int/ictrp/network/primary/en/index.html). Because it is critical that trial registries are independent of for-profit interests, the ICMJE policy requires registration in a WHO primary registry rather than solely in an associate registry, since for-profit entities manage some associate registries. Trial registration with missing or uninformative fields for the minimum data elements is inadequate even if the registration is in an acceptable registry.”
The same clinical trial may be registered in multiple clinical trial databases, but it is not required to be. In reality, we often find that the same trial is registered in multiple places and that the data entries for the same trial actually differ across registries.
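One simple way to surface such cross-registry discrepancies is to compare the entries for the same trial field by field. The sketch below is purely illustrative; the field names and values are hypothetical, not taken from any real registry record:

```python
def diff_entries(entry_a: dict, entry_b: dict) -> dict:
    """Return {field: (value_a, value_b)} for shared fields that disagree."""
    shared = entry_a.keys() & entry_b.keys()
    return {f: (entry_a[f], entry_b[f]) for f in shared if entry_a[f] != entry_b[f]}

# Hypothetical entries for the same trial from two registries.
ctgov_entry = {"enrollment": 200,
               "primary_outcome": "Change in symptom score at week 12"}
other_entry = {"enrollment": 220,
               "primary_outcome": "Change in symptom score at week 12"}

print(diff_entries(ctgov_entry, other_entry))  # {'enrollment': (200, 220)}
```

A check like this makes it easy to see, for instance, that the planned enrollment was entered differently in the two registries even though the primary outcome matches.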