Wednesday, December 05, 2007

Translational Medicine

I have heard the terms "evidence-based medicine", "socialized medicine", and "individualized medicine"; now there is a new term: "translational medicine".

" Translational medicine is the continuum – often known as "bench to bedside" – by which the biomedical community takes a focused point of view to move research discoveries from the laboratory into clinical practice to diagnose and treat patients.

Translational medicine is often used synonymously with "Molecular Medicine" and "Personalized Medicine", both of which are used to refer to the process of applying molecular insights from laboratory discovery to clinical care.
Specifically, today’s process of translational medicine involves:

  • A scientific search to discover the origins and mechanisms of disease
  • The identification of and insight into specific biological events, biomarkers, or pathways of disease
  • The use of such insights to systematically discover and develop new diagnostics and therapeutic methods and products
  • The adoption of such new diagnostic and therapeutic approaches into the routine standard of care

Translational medicine represents a paradigm shift in the biomedical research enterprise. Traditionally, research, drug development, and clinical medicine were three virtually separate endeavors: bench scientists, drug developers, and clinical researchers rarely, if ever, met together, shared ideas, or even used the same vocabulary.

This dramatic change has come about in recent years as a result of the genomics and bioinformatics revolution. Patients provide the biospecimens from which "disease signatures" at the molecular level can be identified and are then used to develop diagnostics and drugs targeted at sub-groups of disease. The role of patient advocates has also been critical to this change in research and clinical care. They have catalyzed a more patient-centric approach to medicine."

A good example is the recent publication in Nature Medicine discussing the potential effect of Avandia on osteoporosis. See the weblink below:

A statement about interim analysis

I cite the following statement about the interim analysis for two reasons:

1. Hazard ratios can sometimes be tricky. Depending on which direction the ratio is calculated, everything can be reversed, making the interpretation of the results difficult. In one of my studies, when a hazard ratio was calculated for time to relapse, I had to calculate it as Placebo/Active to make the results explainable. In the example below, the greater the hazard ratio, the better. While it is still called 'non-inferiority', everything seems to be flipped.
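As a minimal illustration of this flipping (with made-up numbers, not taken from any real study): a hazard ratio reported as Active/Placebo is simply the reciprocal of the same comparison reported as Placebo/Active, so the "lower is better" framing turns into "higher is better":

```python
# Hypothetical illustration: inverting the comparison direction inverts the HR.
# The numbers below are invented for demonstration only.

hr_active_vs_placebo = 0.50               # active halves the hazard of relapse
hr_placebo_vs_active = 1.0 / hr_active_vs_placebo

print(hr_active_vs_placebo)   # "lower is better" framing
print(hr_placebo_vs_active)   # same comparison, "higher is better" framing
```

The underlying data are identical; only the reporting direction changes, which is why the direction must always be stated alongside the estimate.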

2. It is interesting to see that pre-specified alpha levels are used to avoid calculating an alpha spending function.

The null hypothesis is that treatment with Drug A is inferior to treatment
with Drug B with respect to the duration of disease-free survival (HR >
1.305). The alternative hypothesis is that there is no difference between
treatment with Drug A and treatment with Drug B with respect to the duration
of disease-free survival (HR = 1).

Statistical Efficacy Analyses:
The two treatment groups will be compared for the duration of
disease-free survival based on the two sided 95% confidence interval for
the treatment disease-free survival hazard ratio from a Cox proportional
hazards regression model stratified for repeat TURBT (no/yes), high
grade papillary tumors (no/yes), CIS (no/yes), and region (North
America/Europe/Australia). Drug A will be considered to be non-inferior to
Drug B if the upper bound of the two sided 95% confidence interval for the
stratified disease-free survival hazard ratio of Drug A versus Drug B is below
1.305. Drug A will be considered to be superior to Drug B if the upper bound
of the two-sided 95% confidence interval for the stratified disease-free
survival hazard ratio of Drug A versus Drug B is below 1.0.
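The decision rules quoted above can be summarized as a small sketch. The function name and structure are mine; the margin of 1.305 and the use of the upper bound of the two-sided 95% CI come from the protocol text:

```python
def interpret_dfs_hazard_ratio(ci_upper, ni_margin=1.305):
    """Classify the Drug A vs Drug B comparison from the upper bound of the
    two-sided 95% CI for the stratified disease-free-survival hazard ratio.

    Illustrative sketch of the quoted rules, not the sponsor's actual code.
    """
    if ci_upper < 1.0:
        return "superior"          # entire CI below 1.0
    if ci_upper < ni_margin:
        return "non-inferior"      # entire CI below the 1.305 margin
    return "not non-inferior"      # margin cannot be ruled out

print(interpret_dfs_hazard_ratio(0.95))
print(interpret_dfs_hazard_ratio(1.20))
print(interpret_dfs_hazard_ratio(1.40))
```

Note that superiority is simply a stricter version of non-inferiority here: an upper bound below 1.0 necessarily lies below 1.305 as well.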

Interim Analyses:
Two planned interim analyses will be conducted by the DMC; one will be
performed six months prior to reaching the foreseen sample size of 811
patients and another when at least 50 percent of the expected efficacy
endpoints, defined as a diagnosis of non-muscle or muscle invasive
tumors or death, have occurred. Analysis will be limited to the
evaluation of futility and extreme efficacy. For the futility analysis
(or obvious treatment failure), a recommendation will be made to stop
the trial if the Primary Analysis is deemed futile at the p<0.001 level.
The extreme efficacy analysis will be assessed for success at the
p<0.0001 level for the Primary Analysis and a recommendation will be
made to stop the study. There will be no adjustment for overall alpha
spending for any of these analyses."
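The fixed interim thresholds quoted above resemble a Haybittle-Peto-style approach: the stopping boundaries are so extreme (p &lt; 0.001 and p &lt; 0.0001) that the sponsor argues no formal alpha-spending adjustment is needed. A minimal sketch of the decision rule, with an illustrative function name and input framing not taken from the protocol:

```python
def interim_decision(p_futility, p_efficacy):
    """DMC recommendation at one interim look, per the quoted fixed thresholds.

    p_futility : p-value from the futility assessment of the primary analysis
    p_efficacy : p-value from the extreme-efficacy assessment

    Illustrative only; the actual DMC charter would define these tests exactly.
    """
    if p_futility < 0.001:
        return "recommend stopping for futility"
    if p_efficacy < 0.0001:
        return "recommend stopping for extreme efficacy"
    return "recommend continuing"

print(interim_decision(0.0005, 0.5))
print(interim_decision(0.5, 0.00005))
print(interim_decision(0.05, 0.01))
```

Because crossing either boundary is so unlikely under the null, the nominal significance level of the final analysis is only negligibly inflated, which is the usual justification for skipping an explicit spending function.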

Sunday, November 25, 2007

Categories of AE Frequency

Standard categories of frequency for adverse drug reactions are provided in "Guidelines for Preparing Core Clinical-Safety Information on Drugs" - Report of CIOMS Working Group III (1995).
Very common: >= 1/10 (>= 10%)
Common: >= 1/100 and < 1/10 (>= 1% and < 10%)
Uncommon: >= 1/1000 and < 1/100 (>= 0.1% and < 1%)
Rare: >= 1/10,000 and < 1/1000 (>= 0.01% and < 0.1%)
Very rare: < 1/10,000 (< 0.01%)

The CIOMS working group recognizes that it is always difficult to estimate incidence on the basis of spontaneous reports, owing to the uncertainty inherent in estimating the denominator and the degree of under-reporting. However, whenever possible, the AE frequency should be provided.
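The categories above amount to a simple threshold lookup. A minimal sketch (function name is mine) mapping an observed incidence proportion to its CIOMS III label:

```python
def cioms_frequency(incidence):
    """Map an adverse-event incidence (as a proportion) to its CIOMS III
    frequency category. Illustrative helper, not from the CIOMS report."""
    if incidence >= 0.10:      # >= 1/10
        return "very common"
    if incidence >= 0.01:      # >= 1/100 and < 1/10
        return "common"
    if incidence >= 0.001:     # >= 1/1000 and < 1/100
        return "uncommon"
    if incidence >= 0.0001:    # >= 1/10,000 and < 1/1000
        return "rare"
    return "very rare"         # < 1/10,000

print(cioms_frequency(0.15))     # headache seen in 15% of patients
print(cioms_frequency(0.005))    # an event seen in 0.5% of patients
print(cioms_frequency(0.00005))  # an event seen in 5 per 100,000
```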

Core Safety Information

When reporting adverse events in core safety information (CSI), companies sometimes face a dilemma. On the one hand, for legal reasons they may want to include as many adverse events as possible, so that no one can claim a side effect was not warned about if something bad happens; on the other hand, over-reporting and over-inclusion are discouraged by the CIOMS Working Group.

Routine inclusion of an extensive, indiscriminate list of adverse events is ill-advised for several reasons:

Information included uncritically makes it more difficult to distinguish disease-related events, or events that may be related to concomitant therapy, from those that are due to the subject drug.

Dilution: over-inclusion can obscure or devalue the truly significant adverse experiences, thereby diluting the focus on important safety-information.

Mistake: by including "unsubstantiated" information, the physician may be led to do the "wrong" thing. For example, inclusion of an incompletely studied or ill-documented weak signal of a possible birth defect could lead to an unjustified abortion; overwarning for an important medical product could result in a change to a different medication that does not carry the same type of warning, yet is less safe or less effective.

Diversion: the inclusion of ill-substantiated information may discourage further spontaneous reporting of problems, which might have confirmed or clarified the extent and nature of the adverse event.

Clutter: ease of reading and understanding is critical; the fewer words and the less extraneous information the better.

Saturday, November 24, 2007

Crossover trials should not be used to test treatments when negative correlation exists

I recently read a book by T.J. Cleophas titled "Statistics Applied to Clinical Trials". It is interesting that the author emphasized that a crossover study should not be used to compare treatments of different chemical classes.

"..Clinical trials comparing treatments with a totally different chemical class/mode of action are at risk of negative correlation between treatment responses. Such negative correlations have to be added to the standard errors in a cross-over trial, thus reducing the sensitivity of testing differences, making the design a flawed method for evaluating new treatments. "
"So far, statisticians have assumed that a negative correlation in cross-over studies was virtually non-existent, because one subject is used for comparison of two treatments. For example, Grieve recently stated one should not contemplate a cross-over design if there is any likelihood of correlation not being positive. The examples in the current paper show, however, that with completely different treatments, the risk of a negative correlation is a real possibility, and that it does give rise to erroneously negative studies. It makes sense, therefore, to restate Grieve's statement as follows: one should not contemplate a cross-over design if treatments with a totally different chemical class/mode of action are to be compared."
"At the same time, however, we should admit that the cross-over design is very sensitive for comparing treatments of one class and presumably one mode of action. The positive correlation in such treatment comparisons adds sensitivity, similarly to the way it reduces sensitivity with negative correlations: the pooled SEM is approximately sqrt(1-r) times smaller with positive correlation than it would have been with a zero correlation (parallel-group study), and this increases the probability level of testing accordingly. This means that the cross-over is a very sensitive method for evaluating studies with presumable positive correlation between treatment responses, and that there is, thus, room left for this study design in drug research."

One example the author mentioned is ferrous sulphate and folic acid used to improve hemoglobin. There was an inverse correlation between the two treatments: ferrous sulphate was only beneficial when folic acid was not, and vice versa.
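The sqrt(1-r) effect quoted above can be checked numerically. For a crossover with n subjects, a common within-treatment SD, and correlation r between the two treatment responses, the SE of the mean difference is sd * sqrt(2(1-r)/n); at r = 0 this reduces to the parallel-group value, so the ratio to the r = 0 case is exactly sqrt(1-r). The numbers below are illustrative, not from the book:

```python
import math

def paired_se(sd, n, r):
    """SE of the mean treatment difference in a crossover trial with n
    subjects, common within-treatment SD, and correlation r between the
    two treatment responses. Illustrative sketch of the quoted formula."""
    return sd * math.sqrt(2.0 * (1.0 - r) / n)

sd, n = 10.0, 25
for r in (0.5, 0.0, -0.5):
    print(r, round(paired_se(sd, n, r), 3))
# Positive r shrinks the SE relative to r = 0 (a parallel-group study);
# negative r inflates it by the same sqrt(1-r) factor, costing power.
```

This is exactly why a negative correlation between treatment responses (as with ferrous sulphate and folic acid) makes the crossover design less sensitive than a parallel-group trial of the same size.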

Sunday, November 04, 2007

Measure Enrollment Imbalance


"A Sponsor PM would like to know how many subjects can be enrolled at their largest-enrolling site before it begins to skew the study results. She says a previous CRO was able to calculate this number for her."

My response:

"I don't think we really need to calculate the probability to get to the answer. There may be no easy way to calculate the probabilities.

It depends on # of subjects for the study and # of sites. I typically divide the # of total subjects by # of sites to get an average # of subjects per site. Then you truly believe that there may be the site difference, you could choose a cut point (arbitrary) of 3-5 times the average # of subjects. For example, if a study is intended to enroll total 100 subjects at 10 sites, the average will be 10 subject per site.

I will probably choose a number (30, 40, or 50) as a cut point as a cap for maximum # of subject a site can enroll."
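This rule of thumb is trivial to compute. A minimal sketch (function name and the choice of multiplier are mine; the 3-5x range is the arbitrary cut point described above):

```python
def site_enrollment_cap(total_subjects, n_sites, multiplier=3):
    """Rule-of-thumb cap on any single site's enrollment: an arbitrary
    multiple (typically 3-5x) of the average subjects per site.
    Illustrative helper, not a formally justified limit."""
    average = total_subjects / n_sites
    return average, multiplier * average

avg, cap = site_enrollment_cap(100, 10, multiplier=3)
print(avg, cap)  # 10.0 30.0
```

The cap is a practical monitoring trigger, not a statistical guarantee; a site exceeding it simply prompts a closer look at site-by-treatment interaction.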

Question: Many assume that the FDA advocates that the subjects enrolled in any clinical trials be well dispersed among study sites involved in a particular clinical program. Does the FDA have any guidance, rules of thumb, or limits regarding severe imbalances in enrollment between sites?

Answer: In an informal response to this question, the FDA noted that "it does have concerns about how many patients are enrolled in studies from specific study sites. The most important issue [is] that all sites conducting the research use the same protocol. Sometimes this requirement for using the same protocol can be an issue that requires input by FDA's review division because different standards of care (i.e., standard treatments that are typically used within a particular country for the treatment of various cancers) exist in different countries. As long as the protocol is being followed, the studies are conducted in conformance with good clinical practice, and the study/study records can be audited by FDA, differences in recruitment at the various sites do not present a problem. FDA does not require specific enrollment levels at specific sites."


Clinical equipoise provides the ethical basis for the conduct of randomized clinical trials. This principle states that a clinical trial is acceptable only insofar as there is professional disagreement among researchers, i.e., genuine uncertainty regarding the outcome of the study.1 Thus, even if a clinician prefers one arm over another, randomization is still sound when there are others who believe the opposite.

However, there are often conflicts between regulators and investigators regarding the interpretation of equipoise, and biases generated from publications suggestive of the efficacy of one product over another may cause a lack of equipoise. Lack of clinical equipoise causes unwillingness at the investigator level to enroll patients, because of a strong belief by the majority of physicians that one treatment is superior to another. This makes it difficult to design studies to support licensing a product: physicians are unwilling to participate in a study that is necessary to satisfy regulatory agencies' requirements for trial data demonstrating efficacy and safety.

Freedman B. Equipoise and the ethics of clinical research. N Engl J Med 1987;317:141-145.
Lilford RJ. Declaration of Helsinki should be strengthened. BMJ 2001;322(7281):299.
Ashcroft R. Equipoise, knowledge and ethics in clinical research and practice. Bioethics 1999;13(3/4):314-326.
Royall RM. Ethics and statistics in randomized clinical trials. Statistical Science 1991;6(1):52-66.