Sunday, March 29, 2009

Jadad Scale to assess the quality of clinical trials

In a cost-effectiveness assessment report, detailed descriptions were provided of the approach to choosing the clinical trial data for meta-analysis. After the clinical trials were selected, the 'Jadad Scale' was used to assess their quality.

The Jadad Scale, sometimes known as Jadad scoring or the Oxford quality scoring system, is a procedure to independently assess the methodological quality of a clinical trial. It is the most widely used such assessment in the world.

The Jadad score was used as the 'gold standard' to assess the methodological quality of studies. This validated score lies in the range 0-5. Studies are scored according to the presence of three key methodological features: randomization, blinding, and accountability of all patients, including withdrawals.

According to the NIH website (Appendix E: The Jadad Score, a method for assessing the quality of controlled clinical trials), the basic Jadad score is assessed based on the answers to the following five questions. The maximum score is 5.

Questions (Yes = 1, No = 0):
1. Was the study described as randomized?
2. Was the randomization scheme described and appropriate?
3. Was the study described as double-blind?
4. Was the method of double blinding appropriate? (Were both the patient and the assessor appropriately blinded?)
5. Was there a description of dropouts and withdrawals?

Quality Assessment Based on Jadad Score

Score 0–2: Low quality
Score 3–5: High quality
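
To make the scoring concrete, here is a minimal Python sketch of the basic Jadad calculation (the function and argument names are my own, not from the NIH appendix): each 'yes' contributes one point, and the total is classified as low (0-2) or high (3-5) quality.

```python
def jadad_score(randomized, randomization_appropriate, double_blind,
                blinding_appropriate, dropouts_described):
    """Basic Jadad score: one point for each 'yes' answer (max 5)."""
    answers = [randomized, randomization_appropriate, double_blind,
               blinding_appropriate, dropouts_described]
    return sum(1 for yes in answers if yes)

def jadad_quality(score):
    """Classify a study as Low (0-2) or High (3-5) quality."""
    return "High" if score >= 3 else "Low"

# Example: randomized with an appropriate scheme, double-blind,
# blinding method not appropriate, dropouts described -> score 4, High
score = jadad_score(True, True, True, False, True)
print(score, jadad_quality(score))  # 4 High
```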

Wikipedia has a pretty good summary of the use of the Jadad Scale.

The Jadad Scale has frequently been used as a study selection criterion when literature reviews or meta-analyses are performed.

References:
1. Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: Is blinding necessary? Control Clin Trials 1996;17:1-12.

Stratified randomization to achieve balance of treatment assignment within each stratum

Stratified randomization refers to the situation in which strata are constructed based on values of prognostic variables or baseline covariates, and a randomization scheme is performed separately within each stratum. One misconception is to think that stratified randomization requires an equal number of subjects in each stratum.

For example, suppose that in a two-arm, parallel design study, we would like to stratify the randomization by age group (<18 versus >=18 years old), but we do not know how many subjects we can enroll in each age group. The purpose is to make sure that, within each age group, equal numbers of subjects are assigned to treatment A and treatment B.

After the study, the total numbers of subjects in the two age groups may be quite different, but within each age group there should be approximately equal numbers of subjects on treatment A and treatment B.

The strata sizes usually vary (for example, there may be relatively few young males and young females with the disease of interest). The objective of stratified randomization is to ensure balance of the treatment groups with respect to the various combinations of the prognostic variables. Simple randomization will not ensure that the treatment groups are balanced within these strata, so permuted blocks are used within each stratum to achieve balance.
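
As an illustration, here is a minimal Python sketch of stratified randomization with permuted blocks (the block size of 4 and the two age-group labels are assumptions for the example, not from any particular protocol):

```python
import random

def permuted_block_list(treatments=("A", "B"), block_size=4, n_blocks=25):
    """Build a randomization list as a sequence of permuted blocks.

    Each block contains the treatments in equal numbers, shuffled, so
    the assignment stays balanced after every completed block.
    """
    assert block_size % len(treatments) == 0
    repeats = block_size // len(treatments)
    schedule = []
    for _ in range(n_blocks):
        block = list(treatments) * repeats
        random.shuffle(block)
        schedule.extend(block)
    return schedule

# One independent list per stratum: the strata totals may end up quite
# different, but within each stratum A and B stay (nearly) balanced.
schedules = {"<18": permuted_block_list(), ">=18": permuted_block_list()}
position = {"<18": 0, ">=18": 0}

def randomize(stratum):
    """Assign the next enrolled subject in the given stratum."""
    assignment = schedules[stratum][position[stratum]]
    position[stratum] += 1
    return assignment

print(randomize("<18"), randomize(">=18"))
```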

When stratified randomization is utilized, the number of stratification factors is typically limited to 1 or 2. The number of strata increases exponentially if too many stratification factors are included. For example, if we have 4 stratification factors and each factor has two levels, then the number of strata = 2^4 = 16, which is not practical.

If there are too many strata in relation to the target sample size, some of the strata will be empty or sparse. Taken to the extreme, each stratum could consist of only one patient, which in effect would yield a result similar to simple randomization. Keep the number of strata to a minimum for good effect.

I have also seen a trial that required an equal number of subjects in each stratum and, within each stratum, equal numbers of subjects assigned to the two treatment groups. In a trial studying IBS (irritable bowel syndrome), the protocol required equal numbers of subjects of the two IBS types (IBS-C vs. IBS-M). Within the IBS-C or IBS-M group, there were to be equal numbers of subjects assigned to treatment A and treatment B. Things did not turn out well because there were many more subjects with IBS-C than with IBS-M. During the study, while the enrollment target for IBS-C was achieved, many IBS-M subjects still remained to be enrolled.

IBS-C = irritable bowel syndrome (constipation dominant)
IBS-M = irritable bowel syndrome (mixed constipation and diarrhea)

Saturday, March 28, 2009

Too good to be true?

Typically, the regulatory authority requires two pivotal studies to demonstrate efficacy. If the two studies show conflicting or inconsistent results, the evidence for efficacy may not be considered convincing.

On the other hand, if two studies show almost identical results, it could raise suspicion of fraud among regulatory reviewers. In the most recent ASA biopharmaceutical report, two examples were discussed.

NDA 022145 Merck's Isentress

Nearly identical results were observed in the investigational treatment group in two pivotal phase III trials for the applicant’s primary efficacy endpoint.
As part of the data verification process, the statistical review team requested copies of original source documents (laboratory reports) for HIV RNA data from the four sites that were inspected, from the site with the largest number of patients and from an additional site that had highly statistically significant results in favor of the investigational drug.
Because the applicant used an IVRS, there were no fixed randomization lists available prior to enrollment of the patients in the trial and no treatment codes available in envelopes at the sites that DSI inspected. Therefore the statistical review team also requested that copies of original source documents for treatment randomization schedules be sent directly to the FDA from the external vendors. In addition, the statistical reviewer requested the applicant’s standard operating procedures for randomization schedule generation and certification from the external vendors that the randomization code documents were obtained from the original electronic file sent to the vendors from the applicant prior to study initiation. A sample of treatment codes and laboratory data were compared to corresponding values in the SAS data sets and appeared to match.

Of note, in this NDA, the two pivotal studies were allowed to be combined, and the final assessment was based on the integrated summary of efficacy (ISE). It appears that dynamic randomization was used in these two studies, even though there was no detailed description of the randomization procedure (i.e., dynamic allocation for baseline covariates or dynamic allocation for response?).

GSK's Relenza (NDA021036)


Two phase III studies assessed post-exposure prophylaxis in household contacts of an index case of influenza. In the first household study the index case was treated, while in the second study the index case was untreated. The primary efficacy endpoint for the two phase III household prophylaxis studies was the proportion of households with at least one previously uninfected household member who contracted symptomatic, laboratory-confirmed influenza.

Nearly identical rates were observed for the primary efficacy endpoint in the two household studies. Such a high degree of coincidence is rare.

Of note, the biopharmaceutical report is a quarterly report by the Biopharmaceutical Section of the American Statistical Association. Unfortunately, the report has not been updated on their website, so I have to put the report at a temporary web location.

Wednesday, March 18, 2009

Should expected clinical outcomes of the disease under study, which are efficacy endpoints, be reported as AEs/SAEs?

The paragraphs below are from the following website:

http://firstclinical.com/journal/2008/0806_GCP35.pdf

Some protocols instruct investigators to record and report all untoward events that occur during a study as AEs/SAEs, which could include common symptoms of the disease under study and/or other expected clinical outcomes. This approach enables frequency comparisons of all events between treatment groups, but can make event recording in the CRF burdensome, result in more expedited reports from investigators to sponsors, and fill safety databases with many untoward events that most likely have no relationship to study treatment and that could obscure signal identification.

In some clinical trials, disease symptoms and/or other expected clinical outcomes associated with the disease under study, which might technically meet the ICH definition of an AE or SAE, are collected and assessed as efficacy parameters rather than safety parameters. An example might be severity scoring of prospectively defined disease symptoms at each clinic visit during a rheumatoid arthritis study. The hypothesis underlying this approach is that the study treatment will have a positive impact on disease symptoms. If prospectively defined clinical outcomes, such as symptoms of a studied chronic disease or death due to disease progression in an oncology trial, are to be assessed as efficacy endpoints and not as AEs/SAEs, the methods for recording and analyzing these data should be clearly described in the protocol. In addition, sponsors are advised to consult with applicable regulatory authorities to ensure that safety reporting instructions in protocols are acceptable, especially if certain clinical outcomes are to be excluded from traditional AE/SAE reporting.

In high morbidity/mortality trials, independent data monitoring committees (IDMC) generally monitor all acquired AE/SAE and clinical outcomes data to assess benefit and risk on an ongoing basis. A reviewing IDMC could halt a trial if there was significant improvement in pre-specified clinical outcomes in the treatment group compared to the control group. It is also possible that a study treatment might unexpectedly worsen pre-specified disease symptoms and/or other clinical outcomes that are being assessed as efficacy parameters [1].

Reference
1. “Good Clinical Practice: A Question & Answer Reference Guide”, Barnett International, 2007, #9.10, p. 215.

Source
“Good Clinical Practice: A Question & Answer Reference Guide 2007” is available for $39.95 at http://www.barnettinternational.com/

Adverse events (AE), treatment emergent adverse events (TEAE), and adverse drug reaction (ADR)

There is some debate, and there are inconsistencies, regarding the definition of adverse drug reactions. If you call it an adverse event, you may not have a culprit drug in mind, whereas calling it an adverse drug reaction already links it to a suspected drug. Regardless of whether or not there is a suspected drug, an AE or an ADR is commonly defined as any adverse change in health or undesired "side effect" that occurs in a person while on a medical treatment (for example, a drug or device) or within a pre-specified period after treatment is complete. Not every adverse event is causally related to the treatment or test being studied. However, regardless of causality, people who experience adverse reactions, or their doctors, are encouraged to report these events to the FDA or the relevant regulatory authority in the country where the drug or device is registered.

Adverse event (AE) is any untoward medical occurrence including:
  • undesirable signs & symptoms
  • disease or accidents
  • abnormal lab finding (leading to dose reduction/discontinuation/intervention)
occurring during treatment with a pharmaceutical product in a patient or a human volunteer; it does not necessarily have a relationship with the treatment given.
Adverse events are typically collected after the signing of the informed consent form and can be related or unrelated to the study drug.
Adverse drug reaction (ADR) is defined as:
  • For approved pharmaceutical product: a noxious and unintended response at doses normally used or tested in humans;
  • for a new unregistered pharmaceutical product: a noxious and unintended response at any dose.
The WHO defines an ADR as "a response to a drug which is noxious and unintended and which occurs at doses normally used for prophylaxis, diagnosis, or therapy of a disease, or for modification of a physiological function."
The difference between an AE and an ADR is that an AE does not imply causality, whereas for an ADR a causal role is suspected.

Another confusion is about the term 'treatment-emergent adverse event (TEAE)'. A treatment-emergent adverse event is defined as any event not present prior to the initiation of treatment, or any event already present that worsens in either intensity or frequency following exposure to treatment. Since the starting point for AE collection is the signing of the informed consent, not the start of the study treatment, some adverse events occur prior to the initiation of the study treatment. These AEs may be called 'baseline-emergent adverse events', defined as any event that occurs or worsens during the staged screening process (after informed consent), including the randomization visit. It is common to have separate summaries for AEs that occur prior to the initiation of treatment and AEs that occur after the initiation of treatment (i.e., the summary of treatment-emergent adverse events).

I was asked about a programming practice used in some companies to define TEAEs. For any AE with an onset date/time after the first study drug administration date/time, they check whether there is a same AE with the same severity before dosing. If yes, the AE is not counted as a TEAE (even though the onset date/time is after the study drug administration). For example, if a subject has a mild headache 30 days after using the study medication and the subject also had a mild headache before using the study medication, the programming identifies this event as non-treatment-emergent. However, I think this is wrong: these are two distinct events, and the second one should be counted as a treatment-emergent AE.
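
To make the point concrete, here is a minimal Python sketch (with made-up variable names) of the flagging rule I consider correct: an AE is treatment-emergent whenever its onset is on or after the first dose, regardless of whether a matching event occurred before treatment:

```python
from datetime import datetime

def is_teae(ae_onset, first_dose_datetime):
    """Flag an AE as treatment-emergent if its onset is on or after the
    first study drug administration. A matching pre-treatment AE with
    the same severity should NOT, by itself, suppress the flag: the
    post-dose occurrence is a distinct event."""
    return ae_onset >= first_dose_datetime

first_dose = datetime(2009, 1, 15, 8, 0)
headache_before = datetime(2009, 1, 10, 9, 0)  # mild headache pre-dose
headache_after = datetime(2009, 2, 14, 9, 0)   # mild headache 30 days post-dose

print(is_teae(headache_before, first_dose))  # False: baseline-emergent
print(is_teae(headache_after, first_dose))   # True: treatment-emergent
```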

TEAEs are also different from drug-related adverse events. While treatment-emergent AEs refer to adverse events temporally related to the study treatment, drug-related AEs refer to the causality assessment by the investigator.

Friday, March 06, 2009

What is the easiest way to start a meta analysis?

I used the RevMan program to do my meta-analysis and generated a nice forest plot two years ago. It worked very well for me at that time. RevMan is a free program developed for Cochrane Reviews, the most reliable reviews for evidence-based medicine.

http://community.cochrane.org/tools/review-production-tools/revman-5

Read the instructions and tutorial about how to use this program. The algorithm used in this program is described in the attached file (follow the weblinks below).

http://community.cochrane.org/tools/review-production-tools/revman-5/resources
http://community.cochrane.org/sites/default/files/uploads/inline-files/RevMan_5.3_User_Guide.pdf


One of the authors, Julian Higgins, was one of the speakers at last year's meta-analysis workshop sponsored by SAMSI. His topic was titled "Practical Obstacles in Meta-Analysis".

Since I am a heavy SAS user, I have also tried to do meta-analysis in SAS. The following references may be useful:
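
For readers who want a starting point outside RevMan or SAS, the following Python sketch shows the fixed-effect inverse-variance method that underlies a basic forest plot; the three study estimates and standard errors are made-up numbers for illustration only:

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling.

    Each study is weighted by 1/SE^2; the pooled estimate is the
    weighted mean, and its standard error is 1/sqrt(sum of weights).
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci95

# Illustrative log odds ratios and standard errors from three studies
log_or = [-0.35, -0.10, -0.25]
se = [0.12, 0.20, 0.15]
print(fixed_effect_meta(log_or, se))
```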

Biosimilar, Follow-up Biologics, Biogenerics, and Generic Biologics

Biosimilars, or follow-on biologics, are terms used to describe officially approved new versions of innovator biopharmaceutical products, following patent expiry.

Unlike the more common "small-molecule" drugs, biologics generally exhibit high molecular complexity, and may be quite sensitive to manufacturing process changes. The follow-on manufacturer does not have access to the originator's molecular clone and original cell bank, nor to the exact fermentation and purification process. Finally, nearly undetectable differences in impurities and/or breakdown products are known to have serious health implications. This has created a concern that copies of biologics might perform differently than the original branded version of the drug. However, similar concerns also apply to any production changes by the maker of the original branded version. So new versions of biologics are not authorized in the US or the European Union through the simplified procedures allowed for small molecule generics.

While the terms 'biosimilar' and 'follow-on biologics' are becoming popular, other terms may also be used in one way or another, including 'biogenerics' and 'generic biologics'.

The Obama administration supports the use and introduction of generic drugs into the market. In his new budget proposal, Obama calls for generic biotech drugs (see the CNBC news or Forbes news).

On November 21, 2008, the FTC held a Roundtable on Follow-on Biologic Drugs: Framework for Competition and Continued Innovation. This workshop signals continuing interest in the issue. The transcript and the videos are available from the website.


Some other readings:

Sunday, March 01, 2009

iDMC, iSTAT, iDM, and more

I guess the iDMC (which stands for independent Data Monitoring Committee) has been in use for a while; however, I heard the terms iSTAT and iDM for the first time at the recent data monitoring committee conference. iSTAT stands for independent statistician and iDM stands for independent data management.

To support the iDMC, which reviews the interim data during the study, an independent statistical programming team is typically needed. Within the same organization (sponsor or CRO), there could be two teams: the study team, which remains blinded to the study treatment (prior to study unblinding), and the independent team, which can have access to the randomization codes and prepares the unblinded interim information for the iDMC.

Currently there are many different structures for arranging the iDMC operation with statistical support. The iSTAT could be with the sponsor, with a CRO (contract research organization), or with an ARO (academic research organization). Each model has its own pros and cons.

IS (independent statistician). See the talk by Pat O'Meara.
IDC (independent data center)