A standard categorization of frequency for adverse drug reactions is provided in "Guidelines for Preparing Core Clinical-Safety Information on Drugs" - Report of CIOMS Working Group III (1995):
Very common >= 1/10 (>= 10%)
Common >= 1/100 and < 1/10 (>= 1% and < 10%)
Uncommon >= 1/1,000 and < 1/100 (>= 0.1% and < 1%)
Rare >= 1/10,000 and < 1/1,000 (>= 0.01% and < 0.1%)
Very rare < 1/10,000 (< 0.01%)
The CIOMS Working Group recognizes that it is always difficult to estimate incidence on the basis of spontaneous reports, owing to the uncertainty inherent in estimating the denominator and the degree of under-reporting. However, whenever possible, the AE frequency should be provided.
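Once a frequency estimate is available, mapping it to the standard labels is mechanical. Here is a minimal sketch of that mapping (a hypothetical helper of my own, not part of the CIOMS report):

```python
def cioms_frequency_category(incidence: float) -> str:
    """Map an estimated incidence (proportion in [0, 1]) to the
    CIOMS III frequency label listed above."""
    if incidence >= 0.10:
        return "Very common"
    elif incidence >= 0.01:
        return "Common"
    elif incidence >= 0.001:
        return "Uncommon"
    elif incidence >= 0.0001:
        return "Rare"
    else:
        return "Very rare"

# Example: 23 reports of headache among 450 treated patients (~5.1%).
print(cioms_frequency_category(23 / 450))  # -> "Common"
```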
Sunday, November 25, 2007
Core Safety Information
When reporting adverse events in core safety information (CSI), a company sometimes faces a dilemma. On the one hand, for legal reasons, it may want to include as many adverse events as possible, so that no one can claim a particular adverse reaction was not warned about if something bad happens; on the other hand, over-reporting and over-inclusion are discouraged by the CIOMS Working Group.
Routine inclusion of an extensive, indiscriminate list of adverse events is ill-advised for several reasons:
Differentiation: information included uncritically makes it more difficult to distinguish disease-related events or events that may be related to concomitant therapy from those that are due to the subject drug.
Dilution: over-inclusion can obscure or devalue the truly significant adverse experiences, thereby diluting the focus on important safety information.
Mistake: by including "unsubstantiated" information, the physician may be led to do the "wrong" thing. For example, inclusion of an incompletely studied or ill-documented weak signal of a possible birth defect could lead to an unjustified abortion; overwarning for an important medical product could result in a change to a different medication not carrying the same type of warning, yet less safe or less effective.
Diversion: the inclusion of ill-substantiated information may discourage further spontaneous reporting of problems, which might have confirmed or clarified the extent and nature of the adverse event.
Clutter: ease of reading and understanding is critical; the fewer words and the less extraneous information the better.
Saturday, November 24, 2007
Crossover trials should not be used to test treatments when negative correlation exists
I recently read a book by T.J. Cleophas titled "Statistics Applied to Clinical Trials". Interestingly, the author emphasized that crossover studies should not be used to compare treatments of different chemical classes.
"..Clinical trials comparing treatments with a totally different chemical class/mode of action are at risk of negative correlation between treatment responses. Such negative correlations have to be added to the standard errors in a cross-over trial, thus reducing the sensitivity of testing differences, making the design a flawed method for evaluating new treatments. "
"So far, statisticians have assumed that a negative correlation in cross-over studies was virtually non-existent, because one subject is used for comparison of two treatments. For example, Grieve recently stated one should not contemplate a cross-over design if there is any likelihood of correlation not being positive. The examples in the current paper show, however, that with completely different treatments, the risk of a negative correlation is a real possibility, and that it does give rise to erroneously negative studies. It makes sense, therefore, to restate Grieve's statement as follows: one should not contemplate a cross-over design if treatments with a totally different chemical class/mode of action are to be compared."
"At the same time, however, we should admit that the cross-over design is very sensitive for comparing treatments of one class and presumably one mode of action. The positive correlation in such treatment comparisons adds sensitivity, similarly to the way it reduces sensitivity with negative correlations: the pooled SEM is approximately sqrt(1-r) times smaller with positive correlation than it would have been with a zero correlation (parallel-group study), and this increases the probability level of testing accordingly. This means that the cross-over is a very sensitive method for evaluating studies with presumable positive correlation between treatment responses, and that there is, thus, room left for this study design in drug research."
One example the author mentioned is ferrous sulphate and folic acid used to improve hemoglobin. There was an inverse correlation between the two treatments: ferrous sulphate was beneficial only when folic acid was not, and vice versa.
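To see the effect numerically: for a paired (crossover) comparison, the SEM of the mean treatment difference is sqrt((s1^2 + s2^2 - 2*r*s1*s2)/n), so a negative correlation r inflates it and a positive r shrinks it, consistent with the sqrt(1-r) factor in the quote above. Below is a minimal sketch, my own illustration with made-up SDs and sample size (not from the book):

```python
import math

def paired_sem(s1: float, s2: float, r: float, n: int) -> float:
    """SEM of the mean within-subject difference in a crossover trial
    with treatment-response SDs s1, s2, correlation r, and n subjects."""
    return math.sqrt((s1**2 + s2**2 - 2 * r * s1 * s2) / n)

s, n = 10.0, 20  # assumed equal SDs and a 20-subject crossover
for r in (-0.5, 0.0, 0.5):
    print(f"r = {r:+.1f}: SEM = {paired_sem(s, s, r, n):.2f} "
          f"(sqrt(1-r) = {math.sqrt(1 - r):.2f} times the r = 0 SEM)")
# r = -0.5 gives SEM 3.87, r = 0 gives 3.16, r = +0.5 gives 2.24:
# negative correlation inflates the SEM, positive correlation shrinks it.
```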
"..Clinical trials comparing treatments with a totally different chemical class/mode of action are at risk of negative correlation between treatment responses. Such negative correlations have to be added to the standard errors in a cross-over trial, thus reducing the sensitivity of testing differences, making the design a flawed method for evaluating new treatments. "
"So far, statisticians have assumed that a negative correlation in cross-over studies was virtually non-existent, because one subject is used for comparison of two treatments. For example, Grieve recently stated one should not contemplate a cross-over design if there is any likelihood of correlation not being positive. The examples in the current paper show, however, that with completely different treatments, the risk of a negative correlation is a real possibility, and that it does give rise to erroneously negative studies. It makes sense, therefore, to restate Grieve's statement as follows: one should not contemplate a cross-over design if treatments with a totally different chemical class/mode of action are to be compared."
"At the same time, however, we should admit that the cross-over design is very sensitive for comparing treatments of one class and presumably one mode of action. The positive correlation in such treatment comparisons adds sensitivity, similarly to the way it reduces sensitivity with negative correlations: the pooled SEM is approximately sqrt(1-r) times smaller with positive correlation than it would have been with a zero correlation (parallel-group study), and this increases the probability level of testing accordingly. This means that the cross-over is a very sensitive method for evaluating studies with presumable positive correlation between treatment responses, and that there is, thus, room left for this study design in drug research."
One example the author mentioned is Ferrous sulphate and folic acid used for improving hemoglobin. There was an inverse correlation between the two treatments: Ferrous sulphate was only beneficial when folic acid was not, and so was folic acid when ferrous sulphate was not.
Sunday, November 04, 2007
Measuring Enrollment Imbalance
Question:
"A Sponsor PM would like to know how many subjects can be enrolled at their largest-enrolling site before it begins to skew the study results. She says a previous CRO was able to calculate this number for her."
My response:
"I don't think we really need to calculate the probability to get to the answer. There may be no easy way to calculate the probabilities.
It depends on # of subjects for the study and # of sites. I typically divide the # of total subjects by # of sites to get an average # of subjects per site. Then you truly believe that there may be the site difference, you could choose a cut point (arbitrary) of 3-5 times the average # of subjects. For example, if a study is intended to enroll total 100 subjects at 10 sites, the average will be 10 subject per site.
I will probably choose a number (30, 40, or 50) as a cut point as a cap for maximum # of subject a site can enroll."
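A minimal sketch of this rule of thumb (the function name, signature, and default multiplier are my own choices; the cut point itself is arbitrary, as noted above):

```python
def site_enrollment_cap(total_subjects: int, n_sites: int,
                        multiplier: float = 3.0) -> float:
    """Cap on subjects per site: a multiple of the average per site."""
    average_per_site = total_subjects / n_sites
    return multiplier * average_per_site

# Example from the response: 100 subjects across 10 sites gives an
# average of 10 per site, so a 3x-5x cap of 30-50 subjects per site.
print(site_enrollment_cap(100, 10, 3))  # -> 30.0
print(site_enrollment_cap(100, 10, 5))  # -> 50.0
```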
Question: Many assume that the FDA advocates that subjects enrolled in a clinical trial be well dispersed among the study sites involved in a particular clinical program. Does the FDA have any guidance, rules of thumb, or limits regarding severe imbalances in enrollment between sites?
Answer: In an informal response to this question, the FDA noted that "it does have concerns about how many patients are enrolled in studies from specific study sites. The most important issue [is] that all sites conducting the research use the same protocol. Sometimes this requirement for using the same protocol can be an issue that requires input by FDA's review division because different standards of care (i.e., standard treatments that are typically used within a particular country for the treatment of various cancers) exist in different countries. As long as the protocol is being followed, the studies are conducted in conformance with good clinical practice, and the study/study records can be audited by FDA, differences in recruitment at the various sites do not present a problem. FDA does not require specific enrollment levels at specific sites."
"A Sponsor PM would like to know how many subjects can be enrolled at their largest-enrolling site before it begins to skew the study results. She says a previous CRO was able to calculate this number for her."
My response:
"I don't think we really need to calculate the probability to get to the answer. There may be no easy way to calculate the probabilities.
It depends on # of subjects for the study and # of sites. I typically divide the # of total subjects by # of sites to get an average # of subjects per site. Then you truly believe that there may be the site difference, you could choose a cut point (arbitrary) of 3-5 times the average # of subjects. For example, if a study is intended to enroll total 100 subjects at 10 sites, the average will be 10 subject per site.
I will probably choose a number (30, 40, or 50) as a cut point as a cap for maximum # of subject a site can enroll."
Question: Many assume that the FDA advocates that the subjects enrolled in any clinical trials be well dispersed among study sites involved in a particular clinical program. Does the FDA have any guidance, rules of thumb, or limits regarding severe imbalances in enrollment between sites?
Answer: In an informal response to this question, the FDA noted that "it does have concerns about how many patients are enrolled in studies from specific study sites. The most important issue [is] that all sites conducting the research use the same protocol. Sometimes this requirement for using the same protocol can be an issue that requires input by FDA's review division because different standards of care (i.e., standard treatments that are typically used within a particular country for the treatment of various cancers) exist in different countries. As long as the protocol is being followed, the studies are conducted in conformance with good clinical practice, and the study/study records can be audited by FDA, differences in recruitment at the various sites do not present a problem. FDA does not require specific enrollment levels at specific sites."
Equipoise
Clinical equipoise provides the ethical basis for the conduct of randomized clinical trials. This principle states that a clinical trial is acceptable only insofar as there is professional disagreement among researchers, that is, genuine uncertainty regarding the outcome of the study [1]. Thus, even if a clinician prefers one arm over another, randomization is still ethically sound when there are others who believe the opposite.
However, there are often conflicts between regulators and investigators regarding the interpretation of equipoise, and biases generated by publications suggesting the efficacy of one product over another may cause a lack of equipoise. A lack of clinical equipoise causes unwillingness at the investigator level to enroll patients, because the majority of physicians strongly believe one treatment to be superior to another. This creates difficulty in designing studies to support licensing a product: physicians are unwilling to participate in a study that is necessary to satisfy regulatory agencies' requirements for trial data demonstrating efficacy and safety.
References:
1. Freedman B: Equipoise and the ethics of clinical research. N Engl J Med 317: 141-145, 1987
2. Lilford RJ: Declaration of Helsinki should be strengthened. BMJ 322(7281): 299, 2001
3. Ashcroft R: Equipoise, knowledge and ethics in clinical research and practice. Bioethics 13(3/4): 314-326, 1999
4. Royall RM: Ethics and statistics in randomized clinical trials. Statistical Science 6(1): 52-66, 1991