As stated in DeMets' paper "Liability issues for data monitoring committee members" (Clinical Trials 2004; 1: 525–531): "In randomized clinical trials, a data monitoring committee (DMC) is often appointed to review interim data to determine whether there is early convincing evidence of intervention benefit, lack of benefit or harm to study participants. Because DMCs bear serious responsibility for participant safety, their members may be legally liable for their actions."
With the increasing use of DMC monitoring in clinical trials, liability and indemnification issues were a topic at a recent data monitoring committee conference. When a study is terminated based on a DMC's recommendation, study participants could file lawsuits against either the DMC members or the sponsor for failing to exercise due diligence in stopping the trial, or for not stopping it soon enough. For example, Pfizer was sued over its torcetrapib trial even though Pfizer was cleared of any wrongdoing. Recent events (e.g., the Cox-II inhibitors, Vioxx) have raised the potential for litigation, and DMC members have been subpoenaed. For protection, DMC charters for industry trials now often include indemnification clauses.
However, there is no indemnification yet for government-sponsored trials. For example, NCI's guidance specifies: "The government is prohibited by statute from indemnifying any party without specific legislative authority and consultation with the United States Department of Justice. Government liability for its own actions is usually limited by the Federal Tort Claims Act."
So what is 'indemnification'?
According to Wikipedia, "An indemnity is a sum paid by A to B by way of compensation for a particular loss suffered by B. The indemnifying party (A) may or may not be responsible for the loss suffered by the indemnified party (B). Forms of indemnity include cash payments, repairs, replacement, and reinstatement."
In the United States, an indemnification agreement is a legal document providing protection or exemption from liability for compensation or damages, protecting a third party, investigator, and/or hospital or institution from claims made by a study subject (or relatives) that harm was caused to the subject as a result of participation in the clinical trial.
CQ's web blog on the issues in biostatistics and clinical trials.
Saturday, February 28, 2009
Friday, February 27, 2009
Confidence interval for correlation coefficient
When we perform a correlation analysis, we typically calculate the correlation coefficient and then test whether it is statistically significant. We then judge the degree of correlation based on the numerical value of the coefficient. Sometimes we may also want to calculate a confidence interval for the correlation coefficient to see whether it has reasonable precision.
The easy way to calculate the confidence interval for a correlation coefficient is to use the FISHER option in the SAS PROC CORR procedure. The FISHER option is available in SAS version 9 and later; it requests Fisher's z transformation to estimate 95% confidence intervals for a correlation.
When we use the confidence interval to make a judgment about the precision, we need to be aware that it is largely related to the sample size used in the calculation of the correlation coefficient: the larger the sample size, the narrower the confidence interval.
- Stan Brown has a nice description of Fisher's z transformation
- SAS provides an explanation of applications of Fisher's z transformation
- A SUGI paper by David Shen provides the SAS code
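Outside SAS, the Fisher z calculation itself is simple enough to sketch directly. The following Python function is a minimal illustration of the standard Fisher method, not the PROC CORR implementation; the function name and the use of the normal approximation with standard error 1/sqrt(n-3) are my own rendering of the textbook formulas:

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation r from a sample of size n,
    using Fisher's z transformation."""
    z = math.atanh(r)                # z = 0.5 * ln((1 + r) / (1 - r))
    se = 1.0 / math.sqrt(n - 3)      # approximate standard error of z
    lo = math.tanh(z - z_crit * se)  # back-transform the z-scale limits
    hi = math.tanh(z + z_crit * se)  # to the correlation scale
    return lo, hi
```

For example, `fisher_ci(0.5, 30)` gives roughly (0.17, 0.73), while the same r = 0.5 with n = 300 yields a much narrower interval, illustrating the sample-size point above.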
Thursday, February 19, 2009
Most testing for US drug industry's late-stage human trials done outside the country, study indicates
The Wall Street Journal (2/19, Wang) reports, "Most testing for the US drug industry's late-stage human trials is now done at sites outside the country, where results often can be obtained cheaper and faster, according to a study" published in the New England Journal of Medicine. What "make overseas trials cheaper and faster, [is that] patients in developing countries are often more willing to enroll in studies because of lack of alternative treatment options, and often they aren't taking other medicines. Such 'drug-naïve' patients can be sought after because it is easier to show that experimental treatments are better than placebos, rather than trying to show an improvement over currently available drugs."
According to the New York Times (2/19, B7, Singer), the study "raises questions about the ethics and the science of increasingly conducting studies outside the United States -- when the studies are meant to gather evidence for new drugs to gain approval in this country." The study conducted "by several Duke University researchers, suggests an ethical quagmire when drugs intended for wealthy nations are tested on people in developing countries." The researchers "suggest that human volunteers in foreign countries may be unduly influenced with the promise of financial compensation or free medical care to participate in clinical trials. The report, 'Ethical and Scientific Implications of the Globalization of Clinical Research,' also asks whether drug research conducted in developing countries is relevant to the treatment of American patients." Individuals of East Asian origin, for example, have a genetic variance that may reduce the effects of nitroglycerin treatment.
The researchers' "review of a US government clinical trials registry and of 300 published reports in major medical journals revealed this: A third (157 of 509) of Phase III trials -- typically the largest and most significant trial in the development of a drug -- led by major US pharmaceutical companies were being conducted entirely outside the United States," HealthDay (2/18, Gardner) reported. "In addition, half of the study sites (13,521 of 24,206) used in these trials were located overseas, with many in Eastern Europe and Asia."
On its website, CNN (2/19, Watkins) adds that the researchers "reported one study that found only 56 percent of 670 researchers surveyed in developing countries said their work had been reviewed by a local institutional review board or a health ministry. Another study reported that 18 percent of published trials carried out in China in 2004 adequately discussed informed consent for subjects considering participating in research."
Saturday, February 14, 2009
Evidence-based medicine - the Evidence Gap
The New York Times had a series of articles exploring medical treatments that are used despite scant proof they work, and examining steps toward medicine based on evidence.
Evidence-based medicine (EBM) aims to apply evidence gained from the scientific method to certain parts of medical practice. It seeks to assess the quality of evidence relevant to the risks and benefits of treatment (including lack of treatment). According to the Centre for Evidence-Based Medicine, "Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients."
The key to evidence-based medicine is the quality of evidence. Obviously, regulators such as the FDA apply a very strict efficacy standard. According to a slide on FDA's website, FDA does not permit sponsors to promote off-label uses because such behavior:
- Would diminish or eliminate incentive to study the use and obtain definitive data.
- Could result in harm to patients from unstudied uses that actually lead to bad results, or that are merely ineffective.
- Would diminish the use of evidence-based medicine.
- Could ultimately erode the efficacy standard.
However, there are also different voices.
Cronbach's alpha - reliability coefficient
Cronbach's alpha is a tool for assessing the reliability of scales (for example, a quality-of-life instrument). Cronbach's alpha can be easily calculated with SAS PROC CORR.
To compute Cronbach's alpha for a set of variables, use the ALPHA option in PROC CORR as follows:
PROC CORR DATA=dataset ALPHA;
VAR item1-item10;
RUN;
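For readers outside SAS, Cronbach's alpha is also easy to compute by hand from its definition: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). Here is a minimal Python sketch (the function name and pure-Python variance helper are my own; this is an illustration of the formula, not the PROC CORR implementation):

```python
def cronbach_alpha(items):
    """items: a list of k item-score lists, each of length n (one score per subject)."""
    k = len(items)

    def sample_var(xs):
        # sample variance with the n-1 denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # each subject's total score
    item_var_sum = sum(sample_var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / sample_var(totals))
```

Three perfectly parallel items (identical scores for each subject) give alpha = 1.0; items that do not covary pull alpha toward 0.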
The SAS website provides an example of calculating Cronbach's alpha.
Very often a 95% confidence interval may be required. The calculation is not straightforward, but there are SAS macros available from the SAS website.
Some references about Cronbach's alpha can be found below:
- http://en.wikipedia.org/wiki/Cronbach
- http://support.sas.com/documentation/cdl/en/procstat/59629/HTML/default/procstat_corr_sect019.htm
- http://www.creative-wisdom.com/teaching/assessment/alpha.html
- http://www2.sas.com/proceedings/sugi26/p246-26.pdf
To assess the reliability of an instrument, the good reliability features include:
- Internal consistency = Cronbach's alpha >= 0.70 for new measures
- Stability = reliability coefficient >= 0.70
- Equivalence = Kappa statistic >= 0.61
In one of the comments on FDA's guidance on PRO (patient-reported outcome) measures, Cronbach's alpha was cited as a measure of internal consistency and construct validity (with scale analysis): Cronbach's alpha > 0.70.
http://www.fda.gov/ohrms/dockets/dockets/06d0044/06d-0044-EC13-Attach-1.pdf
Thursday, February 12, 2009
Blood plasma and serum
Blood plasma, or plasma, is prepared by obtaining a blood sample and removing the blood cells. The red blood cells and white blood cells are removed by spinning in a centrifuge. Chemicals are added to prevent the blood's natural tendency to clot; if these chemicals include sodium, then a false measurement of the plasma sodium content will result. Serum is prepared by obtaining a blood sample, allowing the blood clot to form, and removing the clot using a centrifuge. Both plasma and serum are light yellow in color.
Plasma is the liquid portion of the blood that is separated from the blood cells by centrifugation. One of the characteristics of plasma is that it clots easily which is important for hemophiliacs needing a transfusion but is a nuisance in most other applications. By agitating the plasma, one can precipitate the clotting factors as a large clot, and the leftover fluid is called serum. So, serum plus clotting factors is plasma, and clotted plasma yields serum (as an interesting aside, "serum" is Latin for whey, the liquid portion of clotted milk removed in making cheese).
The following course note describes the contents of the blood, plasma, and serum.
Thursday, February 05, 2009
DMC (Data Monitoring Committee) vs. DSMB (Data Safety Monitoring Board)
DMC (Data Monitoring Committee) and DSMB (Data Safety Monitoring Board) are two names for the same thing. The term DMC is used more now because it is the term used in FDA's guidance "Establishment and Operation of Clinical Trial Data Monitoring Committees" and EMEA's Guidance on Data Monitoring Committees. However, the World Health Organization used DSMB in its guidance titled "Operational Guidelines for the Establishment and Functioning of Data Safety Monitoring Boards". When searching for articles, it is recommended to try both terms, "DMC" and "DSMB".
Some further discussions prior to FDA's issuance of the DMC guidance are worth reading. These include:
- An internal discussion note.
- FDA internal assessment of annual report burden
- Notes from CBER open public meeting on DMC
- Comments on draft DMC guidance
- NIH policy on data and safety monitoring
- NIH policy on data and safety monitoring of Phase I and II trials
- NCI policy on data and safety monitoring
- NCI's essential elements on data and safety monitoring
- Template from NHLBI
- Template from NIA
- DMC policy from ECOG
- http://www.ctu.mrc.ac.uk/files/DMCcharter_general.pdf
- Template from Applied Clinical Trials
- Slutsky et al (2004) Data Safety and Monitoring Board. NEJM 350:1143-1147
- Freidlin, B., Korn, E. L. (2009). Monitoring for Lack of Benefit: A Critical Component of a Randomized Clinical Trial. JCO 27: 629-633
- Miller and Wendler (2008). Is it ethical to keep interim findings of randomised controlled trials confidential?. J. Med. Ethics 34: 198-201
- Borer et al (2008) When should data and safety monitoring committees share interim results in cardiovascular trials? JAMA Apr 9;299(14):1710-2
- Mueller et al (2007) Ethical Issues in Stopping Randomized Trials Early Because of Apparent Benefit. ANN INTERN MED 146: 878-881
- Goodman (2007) Stopping at Nothing? Some Dilemmas of Data Monitoring in Clinical Trials. ANN INTERN MED 146: 882-887
- Silverman (2007) Ethical Issues during the Conduct of Clinical Trials. Proc Am Thorac Soc 4: 180-184
- Chen-Mok et al (2006) Experiences and challenges in data monitoring for clinical trials within an international tropical disease research network. Clin Trials 3: 469-477
- Ellenberg, Fleming, Demets (2002) Data Monitoring Committees in clinical trials: a practical perspective
- Demets, Friedman, Furberg (2006) Data Monitoring in clinical trials: a case studies approach
- Moffett (2006) Statistical monitoring of clinical trials: a unified approach
EMEA guidance said "In case of a submission the working procedures of a DMC as well as all DMC reports (open and closed sessions) should form part of the submission."
The internal discussion notes said "A special circumstance is the case in which the sponsor wishes to use interim data in support of a regulatory submission, with the intent to continue the trial to its conclusion. Because of the risks to the trial’s credibility, analysis and use of interim data for this purpose is often ill advised. Exceptional circumstances may arise, however, in which such use could be appropriate. Before accessing and using interim data for this purpose, sponsors should confer with FDA and the DMC (or DMC chair) and consider all potential implications of such actions. "
According to FDA guidance "The agency recommends in the guidance that the DMC or the group preparing the interim reports to the DMC maintain all meeting records. This information should be submitted to FDA with the clinical study report (Sec. 314.50(d)(5)(ii) (21 CFR 314.50(d)(5)(ii)))."
Post-analysis DMC meeting: what are the pros and cons of having the DMC convene post-analysis so they can make an assessment on complete and clean data?
The principal role of a DMC is to ensure the safety of patients, which it does by analyzing adverse events and by performing interim analyses of the clinical outcome data. Due to time constraints, the DMC analyses are typically based on data that are incomplete or not fully cleaned. Analyses after a DMC meeting are typically not needed unless there are serious issues with the data.
One interesting question is the role of the DMC after the study has been completed. My understanding is that the DMC plays its major role during the study. After the study has been completed, the DMC hands its responsibilities back to the sponsor and investigators, since all subjects are off the study. If there is any DMC meeting after study completion, it is mainly for courtesy or informational purposes.
If the DMC recommends stopping the trial after reviewing the interim analysis data, it is then up to the sponsor and investigators (or steering or executive committees) to handle the rest (close out the study, disclose the study results, write the manuscript, …). In this situation, no post-DMC meeting is needed. The final analyses will be performed by the sponsor or investigators, and the investigators will publish the study results. Some examples are: Novartis' ACCOMPLISH trial, stopped early for efficacy; and Pfizer's ILLUMINATE trial, stopped early for safety concerns (excess mortality in the torcetrapib arm).
Tuesday, February 03, 2009
Standard Error of Mean vs. Standard Error of Measurement
Everybody with basic statistical knowledge should understand the difference between the standard deviation (SD) and the standard error of the mean (SE or SEM). However, people may be confused by the terms Standard Error of the Mean (SEM) vs. Standard Error of Measurement (SEM). While both share the same acronym, the meaning and the calculation are quite different. At least, that was my confusion when I first saw the term 'standard error of measurement'.
I first saw this term in the literature discussing various approaches to identifying the minimal clinically important difference (MCID). In an article by Copay et al, the SEM (standard error of measurement) was cited as one of many approaches to evaluating the MCID. This method was also discussed in a paper by Wyrwich et al. Initially, I mistakenly thought that SEM stood for standard error of the mean. After further exploration, I realized that this SEM is quite different from that SEM.
The standard error of the mean (SEM) is the standard deviation of the sample mean estimate of a population mean. (It can also be viewed as the standard deviation of the error in the sample mean relative to the true mean, since the sample mean is an unbiased estimator.) SEM is usually estimated by the sample estimate of the population standard deviation (sample standard deviation) divided by the square root of the sample size (assuming statistical independence of the values in the sample).
The standard error of measurement (SEM) estimates how repeated measures of a person on the same instrument tend to be distributed around his or her "true" score. The true score is always unknown, because no measure can be constructed that provides a perfect reflection of the true score. SEM is directly related to the reliability of a test; that is, the larger the SEM, the lower the reliability of the test and the less precision there is in the measures taken and scores obtained. Since all measurement contains some error, it is highly unlikely that any test will yield the same scores for a given person each time they are retested.
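The distinction can be made concrete with the two standard formulas: the standard error of the mean is SD / sqrt(n), while the standard error of measurement is SD * sqrt(1 - reliability). A minimal Python sketch (the function names are my own illustration of these textbook formulas):

```python
import math

def se_of_mean(sd, n):
    """Precision of a sample mean: shrinks as the sample size grows."""
    return sd / math.sqrt(n)

def se_of_measurement(sd, reliability):
    """Spread of repeated scores around a person's 'true' score:
    grows as the test's reliability falls."""
    return sd * math.sqrt(1 - reliability)
```

With SD = 10, a sample of n = 25 gives a standard error of the mean of 2.0, while a test with reliability 0.91 gives a standard error of measurement of about 3.0; the two numbers answer entirely different questions.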
An article by Dr. James Brown at the University of Hawai'i at Manoa gives a good comparison of these two concepts. Also, a free paper by Harvill LM from East Tennessee State University explains in detail how the standard error of measurement is calculated.