Sunday, August 30, 2009

SAS IQ and OQ

I am not sure how many people really know these abbreviations: DOE, IQ, OQ, PQ, PV. These are terms used in the validation of software or computerized systems.

DOE = Design of Experiments
IQ = Installation Qualification
OQ = Operational Qualification
PQ = Performance Qualification
PV = Process Validation

For off-the-shelf software, I had never really thought about the validation or qualification issue. I always thought that software like SAS (which we use almost daily) is just like Microsoft Office: once you install it on your PC, you are ready to use it.

I have since learned that if the software is used for regulatory submissions, a certain level of validation (more precisely, qualification) needs to be performed. SAS is no exception.

For SAS software, the IQ and OQ need to be performed. The instructions for the SAS Installation Qualification (IQ) and Operational Qualification (OQ) tools can be found on the SAS website at http://support.sas.com/kb/17/046.html. This is also mentioned in SAS's quality documentation; SAS actually has an SOP for IQ and OQ.

Digging into this issue further, I realized that the verification and validation of a software or computerized system is not a trivial task. Wikipedia has an article discussing verification and validation, and a recent issue of the DIA Global Forum has an article by Chamberlain discussing qualification (versus validation) of the infrastructure.

However, I think that the validation of off-the-shelf software should be much simpler than that of a self-developed computerized system (such as an internal EDC system). FDA has a pertinent guidance on computerized systems, "Computerized Systems Used in Clinical Investigations," and its older version titled "Computerized Systems Used in Clinical Trials".

CDRH also has a guidance titled "General Principles of Software Validation; Final Guidance for Industry and FDA Staff". This guidance indicates that software used in medical devices needs to go through a full-scale validation process.

Sunday, August 23, 2009

Hochberg procedure for adjustment for multiplicity - an illustration

In a May article, I discussed several practical procedures for the multiple testing issue. One of these procedures is Hochberg's procedure. The original paper is pretty short and was published in Biometrika.

Hochberg (1988) A sharper Bonferroni procedure for multiple tests of significance. Biometrika 75(4):800-802

Hochberg's procedure is a step-up procedure, and its comparison with other procedures is discussed in a paper by Huang & Hsu.

To help non-statisticians understand the application of Hochberg's procedure, we can use hypothetical examples (three situations, each with a pair of p-values).

Suppose we have k=2 t-tests
Assume target alpha(T)=0.05

The unadjusted p-values are ordered from the largest to the smallest.

Situation #1:
P1=0.074
P2=0.013

For the jth test, calculate alpha(j) = alpha(T)/(k – j +1)

For test j = 2,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(2 – 2 + 1)
= 0.05

Since P1=0.074 is greater than 0.05, we cannot reject the null hypothesis. Proceed to the next test.

For test j = 1,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(2 – 1 + 1)
= 0.025

Since P2=0.013 is less than 0.025, we reject the null hypothesis.

Situation #2:
P1=0.074
P2=0.030
For the jth test, calculate alpha(j) = alpha(T)/(k – j +1)

For test j = 2,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(2 – 2 + 1)
= 0.05

Since P1=0.074 is greater than 0.05, we cannot reject the null hypothesis. Proceed to the next test.

For test j = 1,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(2 – 1 + 1)
= 0.025

Since P2=0.030 is greater than 0.025, we cannot reject this null hypothesis either. No null hypotheses are rejected in this situation.

Situation #3:
P1=0.013
P2=0.001
For the jth test, calculate alpha(j) = alpha(T)/(k – j +1)

For test j = 2,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(2 – 2 + 1)
= 0.05

Since P1=0.013 (the largest p-value) is less than 0.05, we reject the null hypothesis.
Because the largest p-value is significant in this step-up procedure, all remaining null hypotheses are rejected as well; both hypotheses are rejected at 0.05.
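
The step-up logic walked through above can also be expressed in a short SAS DATA step. This is a minimal sketch with hypothetical names (data set hochberg, array p), and the p-values are assumed to be sorted in ascending order:

data hochberg;
   alpha_t = 0.05;                        /* target familywise alpha */
   k = 2;                                 /* number of tests */
   array p{2} _temporary_ (0.013 0.074);  /* p-values in ascending order */
   n_reject = 0;
   do j = k to 1 by -1;                   /* start with the largest p-value */
      alpha_j = alpha_t / (k - j + 1);
      if p{j} <= alpha_j then do;
         n_reject = j;                    /* reject H(1), ..., H(j) */
         leave;                           /* stop: all smaller p-values are rejected too */
      end;
   end;
   put "Number of null hypotheses rejected: " n_reject;
run;

With the values for Situation #1, the step prints 1; changing the array values to (0.030 0.074) or (0.001 0.013) reproduces Situations #2 and #3.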

More than two comparisons
If we have more than two comparisons, we can still use the same logic:

For the jth test, calculate alpha(j) = alpha(T)/(k – j +1)

For example, if there are three comparisons with p-values:
p1=0.074
p2=0.013
p3=0.010

For test j = 3,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(3 – 3 + 1)
= 0.05

For test j=3, the observed p1 = 0.074 is greater than alpha(j) = 0.05, so we cannot reject the null hypothesis. We proceed to the next test.


For test j = 2,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(3 – 2 + 1)
= 0.05 / 2
= 0.025

For test j=2, the observed p2 = 0.013 is less than alpha(j) = 0.025, so we reject this null hypothesis and all remaining null hypotheses (those with smaller p-values); there is no need to test j = 1.



As a second example, suppose the three comparisons have p-values:
p1=0.074
p2=0.030
p3=0.010

For test j = 3,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(3 – 3 + 1)
= 0.05

For test j=3, the observed p1 = 0.074 is greater than alpha(j) = 0.05, so we cannot reject the null hypothesis. We proceed to the next test.


For test j = 2,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(3 – 2 + 1)
= 0.05 / 2
= 0.025

For test j=2, the observed p2 = 0.030 is greater than alpha(j) = 0.025, so we cannot reject the null hypothesis. We proceed to the next test.

For test j = 1,
alpha(j) = alpha(T)/(k – j +1)
= 0.05/(3 – 1 + 1)
= 0.05 / 3
= 0.017

For test j=1, the observed p3 = 0.010 is less than alpha(j) = 0.017, so we reject the null hypothesis.
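
Rather than stepping through the comparisons by hand, SAS can compute Hochberg-adjusted p-values directly with PROC MULTTEST. A minimal sketch follows; I am assuming the HOCHBERG option is available in your SAS version, and note that the INPVALUES= data set must contain a variable named raw_p:

data pvals;
   input raw_p @@;      /* the three unadjusted p-values from the example above */
   datalines;
0.074 0.030 0.010
;

proc multtest inpvalues=pvals hochberg;
run;

Each Hochberg-adjusted p-value can then be compared directly with the target alpha of 0.05; an adjusted p-value below 0.05 corresponds to a rejection in the step-up walkthrough above (here, only the smallest p-value, 0.010, would be rejected, matching the second three-comparison example).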

Saturday, August 08, 2009

Poisson regression and zero-inflated Poisson regression

Poisson regression is a method to model the frequency of event counts or an event rate, such as the number of adverse events of a certain type or the frequency of epileptic seizures during a clinical trial, as a function of a set of covariates. The counts are assumed to follow a Poisson distribution whose mean is modeled as a function of the covariates. The Poisson regression model is a special case of a generalized linear model (GLM) with a log link, which is why Poisson regression may also be called a log-linear model. Consequently, it is often presented as an example in the broader context of GLM theory.

Poisson regression is the simplest regression model for count data and assumes that each observed count Yi is drawn from a Poisson distribution with conditional mean μi given a covariate vector Xi for case i. The number of events follows the Poisson distribution described below:

f(k; \lambda) = \frac{\lambda^k e^{-\lambda}}{k!}

where

  • e is the base of the natural logarithm (e = 2.71828...)
  • k is the number of occurrences of an event - the probability of which is given by the function
  • k! is the factorial of k
  • λ is a positive real number equal to the expected number of occurrences during the given interval; the interval could be a time interval or another offset variable (denominator).
The most important feature of Poisson regression is that the parameter λ is not only the mean number of occurrences but also the variance. In other words, for data to follow the Poisson distribution, the mean must equal the variance. With empirical data (observations), however, this feature may not hold, a situation called overdispersion or underdispersion. When the observed variance is higher than the variance of the theoretical model (for the Poisson distribution, when the observed variance is higher than the observed mean), overdispersion has occurred. Conversely, underdispersion means that there is less variation in the data than predicted.

When overdispersion occurs, an alternative model with additional free parameters may provide a better fit. In the case of count data, an alternative model such as the negative binomial distribution may be used.
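
As a rough diagnostic, one can fit the same model under both distributions in PROC GENMOD and inspect the deviance/df ratio reported in the output, which should be near 1 for a well-fitting Poisson model. This is a minimal sketch with a hypothetical data set (events) and covariates (treat, age):

proc genmod data=events;
   model count = treat age / dist=poisson link=log;  /* deviance/df well above 1 suggests overdispersion */
run;

proc genmod data=events;
   model count = treat age / dist=negbin link=log;   /* negative binomial adds a dispersion parameter */
run;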

In practice, we often see count data with excess zero counts (no events), which may cause deviation from the Poisson distribution, i.e., overdispersion or underdispersion. If this is the case, zero-inflated Poisson (ZIP) regression may be used.

In SAS, several procedures in both the STAT and ETS modules can be used to estimate Poisson regression. While GENMOD and GLIMMIX (from SAS/STAT) and COUNTREG (from SAS/ETS) are easy to use with a standard MODEL statement, NLMIXED, MODEL, and NLIN provide great flexibility for modeling count data by specifying the log-likelihood function explicitly.
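
As an illustration of both approaches, the sketch below fits a Poisson regression with follow-up time as an offset in PROC GENMOD, and a zero-inflated Poisson model in PROC NLMIXED by writing out the log-likelihood. The data set counts and the variables y, x, and time are hypothetical:

/* Poisson regression with log follow-up time as the offset */
data counts2;
   set counts;
   logtime = log(time);
run;

proc genmod data=counts2;
   model y = x / dist=poisson link=log offset=logtime;
run;

/* Zero-inflated Poisson via an explicit log-likelihood */
proc nlmixed data=counts;
   parms b0=0 b1=0 g0=0;
   lambda = exp(b0 + b1*x);               /* Poisson mean */
   p0 = logistic(g0);                     /* probability of a structural zero */
   if y = 0 then ll = log(p0 + (1 - p0)*exp(-lambda));
   else ll = log(1 - p0) + y*log(lambda) - lambda - lgamma(y + 1);
   model y ~ general(ll);
run;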

Saturday, August 01, 2009

Epidemiology terms, but used in clinical trials

Some terms often used in the epidemiology field are also pretty commonly used in clinical trials. I, at least, first learned these terms in epidemiology classes. Some of these terms are very close in meaning, but different.

Ratio, Proportion, Rate
  • Ratio: Division of two unrelated numbers
  • Proportion: Division of two related numbers; numerator is a subset of denominator
  • Rate: Division of two numbers; time is always in denominator
For example, 'sex ratio' is a ratio (the number of males divided by the number of females); a rising sex-ratio imbalance is a danger in China.
In a clinical trial, there are male subjects and female subjects. We summarize the data using the percentage of male subjects among the total; this is a proportion, since the number of male subjects (numerator) is a subset of the total subjects (denominator).
A rate is also one number divided by another, but time is an integral part of the denominator. For example, a speed limit is a rate (65 miles per hour). In clinical trials, a rate is often used to describe the incidence of adverse events.

Incidence and prevalence
  • Incidence: measures the occurrence of new disease; deals with the transition from health to disease; defined as the occurrence of new cases of disease that develop in a candidate population over a specified time period
  • Prevalence: measures the existence of current disease; focuses on the period of time that a person lives with a disease; defined as the proportion of the total population that is diseased.
Incidence is typically used to describe acute disease, while prevalence is typically used to describe chronic disease.

Strictly speaking, incidence behaves like a rate, while prevalence, by the definition above, is a proportion. In practice, however, we encounter terms such as 'incidence rate' and 'prevalence rate'.

While prevalence is purely an epidemiology term, 'incidence' or 'incidence rate' is commonly used in the analysis of clinical trial data, especially adverse event data. A clinical trial is always prospective and can be considered a special case of a cohort study, in epidemiology terms.
In statistics and demography, a cohort is a group of subjects who have shared a particular experience during a particular time span. Cohorts may be tracked over extended periods of time in a cohort study. Notice that we also use the term 'cohort' in dose-escalation clinical studies, where a cohort refers to a group of subjects who receive the same dose level (in contrast to the term 'arm' used in a parallel design).

Incidence vs. Incidence Rate

In clinical trials, when we summarize adverse event data, should we say "incidence of adverse events" or "incidence rate of adverse events"?

While both terms may be used, "incidence of adverse events" is the more accurate and more frequently used term, as can easily be seen in FDA guidance documents.
"Incidence rate" may be more appropriate when the # of adverse events are normalized by the person year or patient year or normalized by the # of infusions, for example, the following terms may be used: "incidence rate of adverse events per infusion" "The incidence rates per 100 patients-year"