Friday, August 29, 2014

Subgroup Analysis in Clinical Trials - Revisited

I previously wrote an article about subgroup analysis in clinical trials, and I would like to revisit the topic. Subgroup analysis has been a regular discussion topic at statistical conferences recently. The pitfalls of subgroup analyses are well understood in the statistical community. However, subgroup analyses in the regulatory setting for product approval, in multi-regional clinical trials, and in confirmatory trials remain quite complicated.

EMA is again ahead of FDA in issuing regulatory guidance on this topic. Following an expert workshop on subgroup analysis, EMA issued its draft guideline titled “Guideline on the investigation of subgroups in confirmatory clinical trials”. In addition to general considerations, the guideline covers issues to be addressed during the study planning stage and issues to be addressed during the assessment stage.

In practice, subgroup analysis is almost always conducted. For a study with negative results, the purpose of the subgroup analysis is usually to see if there is a subgroup in which statistically significant results can be found. For a study with positive results, the purpose is usually to see if the result is robust across different subgroups. Subgroup analysis is not performed only in industry-sponsored trials; it may be performed even more often in academic clinical studies for publication purposes.

Sometimes it is not easy to explain the caveats of subgroup analysis (especially unplanned subgroup analysis) to non-statisticians. The explanation of subgroup analysis issues requires a good understanding of multiplicity adjustments and statistical power. I recently saw some presentation slides in which the pitfalls of subgroup analysis were well explained, as in the table below. Either failure mode can make the subgroup analysis results unreliable.


Dr. George (2004), “Subgroup analyses in clinical trials”

When H0 is true: increased probability of type I error — too many “differences”
  • The probability of each “statistically significant difference” not being real is 5%
  • So lots of 5% chances all add together
  • Some of the apparent effects (somewhere) will not be real
  • We have no way of knowing which ones are and which ones aren’t

When H1 is true: decreased power (increased type II error) in individual subgroups — not enough “differences”
  • The more data we have, the higher the probability of detecting a real effect (“power”)
  • But subgroup analyses “cut the data”
  • Trials are expensive, and we usually fix the size of the trial to give high “power” to detect important differences overall (primary efficacy endpoint)
  • When we start splitting the data (only look at men, or only at women, or only at the renally impaired, or only at the elderly, etc.), the sample size is smaller … the power is much reduced
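The two failure modes in the table can be made concrete with a small calculation. The sketch below (plain Python; the effect size and sample sizes are invented for illustration) computes the familywise false-positive rate across k independent subgroup tests, and the approximate power of a two-sided two-sample z-test when the sample is split in half:

```python
from statistics import NormalDist

norm = NormalDist()

def familywise_error(k, alpha=0.05):
    """Chance of at least one false-positive subgroup finding
    among k independent tests when H0 is true everywhere."""
    return 1 - (1 - alpha) ** k

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size with n subjects per group."""
    z_crit = norm.inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return 1 - norm.cdf(z_crit - noncentrality)

# Too many "differences": 10 subgroup tests inflate the chance of
# a spurious significant finding from 5% to roughly 40%.
print(round(familywise_error(10), 2))

# Not enough "differences": halving the sample (e.g., men only)
# drops power from roughly 90% to roughly 63% for the same effect.
print(round(power_two_sample(0.35, 170), 2))
print(round(power_two_sample(0.35, 85), 2))
```

This is exactly the trade-off in the table: under H0, multiplicity inflates false positives; under H1, splitting the data deflates power.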




Friday, August 15, 2014

SAE Reconciliation and Determining/recording the SAE Onset Date


Traditionally, clinical operations and drug safety / pharmacovigilance departments have elected to independently collect somewhat different sets of safety data from clinical trials. For serious adverse events (SAEs), the drug safety / pharmacovigilance department collects the information through the SAE form, and the information is maintained in a safety database. In the clinical operations or data management department, adverse events (AEs), including SAEs, are collected on case report forms (CRFs), or eCRFs if it is an EDC study. For SAEs, the information in the safety database and the clinical database comes from the same source (the investigational sites). During the study or at the end of the study, the key fields regarding the SAEs in the two independently maintained databases need to be reconciled, and the key data fields must match in both databases.

A poster by Chamberlain et al “Safety Data Reconciliation for Serious Adverse Events (SAE)” has nicely described the SAE reconciliation process. They stated that for these fields to be reconciled, “some will require a one to one match with no exception, while some may be deemed as acceptable discrepancies based on logical match. “
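The distinction between a one-to-one match and an acceptable "logical match" can be sketched in code. This is a hypothetical illustration, not code from the poster: the field names, the choice of exact-match fields, and the onset-date tolerance window are all assumptions for the sake of the example:

```python
from datetime import date

def reconcile_sae(safety_rec, clinical_rec, date_window_days=0):
    """Compare one SAE between the safety database and the clinical
    database. Returns the list of discrepant field names."""
    discrepancies = []

    # Fields assumed to require a one-to-one exact match
    for field in ("subject_id", "event_term", "outcome"):
        if safety_rec.get(field) != clinical_rec.get(field):
            discrepancies.append(field)

    # Onset date: accept a "logical match" within an agreed window
    gap = abs((safety_rec["onset_date"] - clinical_rec["onset_date"]).days)
    if gap > date_window_days:
        discrepancies.append("onset_date")

    return discrepancies

safety = {"subject_id": "001-004", "event_term": "Pneumonia",
          "outcome": "Recovered", "onset_date": date(2014, 3, 2)}
clinical = {"subject_id": "001-004", "event_term": "Pneumonia",
            "outcome": "Recovered", "onset_date": date(2014, 3, 4)}

print(reconcile_sae(safety, clinical))                      # onset dates differ
print(reconcile_sae(safety, clinical, date_window_days=3))  # within tolerance
```

In practice the tolerance rules would be pre-specified in the SAE reconciliation plan rather than chosen ad hoc.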

They also gave examples of fields that require an exact match versus a logical determination, shown in Table 1 below.



Among these fields, the onset date is the one that usually causes problems, due to different interpretations of the regulatory guidelines by the clinical operations and drug safety/pharmacovigilance departments. The onset date of an SAE could be reported as the first date when signs and symptoms appear, or as the date when the event meets one of the following SAE criteria (as defined in ICH E2A):

* results in death,
* is life-threatening,
* requires inpatient hospitalisation or prolongation of existing hospitalisation,
* results in persistent or significant disability/incapacity, or
* is a congenital anomaly/birth defect.

Klepper and Edwards did a survey and published their results in their paper “Individual Case Safety Reports – How to Determine the Onset Date of an Adverse Reaction”. The results indicated the variability in determining the onset date of a suspected adverse reaction. They recommend that a criterion for onset time, i.e., beginning of signs or symptoms of the event, or date of diagnosis, be chosen as the standard.

However, many companies and organizations (such as NIH and NCI) indicate in their SAE completion guidelines that the event start date should be the date when the event satisfied one of the serious event criteria (for example, if the criterion “requires hospitalization” was met, the date of admission to the hospital would be the event start date). If the event started prior to becoming serious (was less severe), it should be recorded on the AE page as a non-serious AE with a different severity.

In the NIDCR Serious Adverse Event Form Completion Instructions, the SAE onset date is to “record the date that the event became serious”.

In the SAE Recording and Reporting Guidelines for Multiple Study Products by the Division of Microbiology and Infectious Diseases, NIH, the onset date of the SAE is instructed to be the date the investigator considers the event to meet one of the serious categories.

In the HIV Prevention Trials Network, the Adverse Event Reporting and Safety Monitoring section indicates that:

“If an AE increases in severity or frequency (worsens) after it has been reported on an Adverse Experience Log case report form, it must be reported as a new AE, at the increased severity or frequency, on a new AE Log. In this case, the status outcome of the first AE will be documented as “severity/frequency increased.” The status of the second AE will be documented as “continuing”. The outcome date of the first AE and the onset date of the new (worsened) AE should be the date upon which the severity or frequency increased.”

In Serious Adverse Event Form Instructions for Completion by National Cancer Institute Division of Cancer Prevention, the event onset date is to be entered as the date the outcome of the event fulfilled one of the serious criteria.

In Good Clinical Practice Q&A: Focus on Safety Reporting, in the Journal of Clinical Research Best Practice, the following example for reporting the SAE onset date is given:

“What would an SAE’s onset date be if a patient on study develops symptoms of congestive heart failure (CHF) on Monday and is admitted to the hospital the following Friday? If known, the complete onset date (month-day-year) of the first signs and/or symptoms of the most recent CHF episode should be recorded. In this case, it would be Monday. If the onset date of the first signs and/or symptoms is unknown, the date of hospitalization or diagnosis should be recorded.”

If the SAE onset date is recorded as the date when one of the SAE criteria is met (this seems to be more popular in practice), it may essentially require splitting the event. If an event starts as non-serious and later meets one of the serious criteria, the same event will be recorded as two events: a non-serious event with the onset date being the first sign and symptom date, and a serious adverse event with the onset date being the date when one of the SAE criteria is met. This approach results in a later onset date and a shorter SAE duration, but it double counts what is arguably the same event.


If the SAE onset date is recorded as the date when the first sign or symptom appears, it results in an earlier onset date and a longer SAE duration. Since SAE reporting to the regulatory authorities / IRBs is based on the SAE onset date, this approach is more stringent in meeting the SAE reporting requirement.
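The practical consequence of the two conventions can be illustrated with a small sketch (the function, its return format, and the dates are hypothetical, not taken from any guideline):

```python
from datetime import date

def record_events(first_symptom, became_serious, convention):
    """Show how the two onset-date conventions discussed above lead to
    different recorded events. Returns (seriousness, onset_date) tuples."""
    if convention == "first_symptom":
        # One event, onset at first sign/symptom: earlier onset,
        # longer SAE duration, earlier start of the reporting clock.
        return [("serious", first_symptom)]
    elif convention == "criteria_met":
        # Event split in two: a non-serious AE from first symptom, plus
        # an SAE starting the day a serious criterion (e.g.,
        # hospitalization) was met: later onset, shorter SAE duration,
        # and a possible double count of the same clinical event.
        return [("non-serious", first_symptom),
                ("serious", became_serious)]
    raise ValueError(convention)

monday, friday = date(2014, 8, 11), date(2014, 8, 15)  # CHF example above
print(record_events(monday, friday, "first_symptom"))
print(record_events(monday, friday, "criteria_met"))
```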

Tuesday, July 15, 2014

Pre-screening procedures in clinical trials

For typical clinical trials, the study design includes a screening period before randomization / the start of study medication. The purpose of the screening period is to ensure that only subjects who meet the inclusion/exclusion criteria are included in the study. The screening period usually runs from informed consent signing to randomization or the start of study medication.

In some situations, depending on the timing of informed consent and randomization, a pre-screening period may be needed. For a study with both pre-screening and screening periods, the pre-screening period filters out the majority of subjects who do not meet a simple inclusion criterion before the formal screening procedures are performed. In this way, the screening failure rate can be significantly lowered. Typically, pre-screening results are recorded on a pre-screening log rather than on the study case report form, which minimizes the data-recording burden for screen failure subjects. Several points about the pre-screening process:

1) Pre-screening is performed before the formal screening procedures.
2) With a pre-screening process, there are effectively two steps for screening subjects for the study.
3) Depending on what kind of pre-screening procedures are performed, pre-screening may or may not require informed consent from the patients.
4) If consent is required for pre-screening, there will be two informed consents for the study: one for pre-screening and one for formal screening.
5) The results from pre-screening may be recorded on a pre-screening sheet instead of the case report forms. This is especially true if the pre-screening only involves review of patient medical charts and of non-study-specific test results. In this way, we can minimize recording a large amount of data for pre-screening failure subjects, which would not be valuable to the objectives of the clinical study.

In the PROACT II study, in order to have 180 subjects eligible for randomization, 12,323 stroke patients were screened. It would be impractical to consent all 12,323 patients and record information for all of them, so a pre-screening process is very useful here. Since the pre-screening procedures involve only non-study-specific procedures (these procedures would be performed for any stroke patient anyway), no separate informed consent is needed.

There is no formal regulatory guidance on the pre-screening procedure; however, FDA’s opinion can be seen in “Screening Tests Prior to Study Enrollment - Information Sheet - Guidance for Institutional Review Boards and Clinical Investigators”:
For some studies, the use of screening tests to assess whether prospective subjects are appropriate candidates for inclusion in studies is an appropriate pre-entry activity. While an investigator may discuss availability of studies and the possibility of entry into a study with a prospective subject without first obtaining consent, informed consent must be obtained prior to initiation of any clinical procedures that are performed solely for the purpose of determining eligibility for research, including withdrawal from medication (wash-out). When wash-out is done in anticipation of or in preparation for the research, it is part of the research.
Procedures that are to be performed as part of the practice of medicine and which would be done whether or not study entry was contemplated, such as for diagnosis or treatment of a disease or medical condition, may be performed and the results subsequently used for determining study eligibility without first obtaining consent. On the other hand, informed consent must be obtained prior to initiation of any clinical screening procedures that is performed solely for the purpose of determining eligibility for research. When a doctor-patient relationship exists, prospective subjects may not realize that clinical tests performed solely for determining eligibility for research enrollment are not required for their medical care. Physician-investigators should take extra care to clarify with their patient-subjects why certain tests are being conducted.
Clinical screening procedures for research eligibility are considered part of the subject selection and recruitment process and, therefore, require IRB oversight. If the screening qualifies as a minimal risk procedure [21 CFR 56.102(i)], the IRB may choose to use expedited review procedures [21 CFR 56.110]. The IRB should receive a written outline of the screening procedure to be followed and how consent for screening will be obtained. The IRB may find it appropriate to limit the scope of the screening consent to a description of the screening tests and to the reasons for performing the tests including a brief summary description of the study in which they may be asked to participate. Unless the screening tests involve more than minimal risk or involve a procedure for which written consent is normally required outside the research context, the IRB may decide that prospective study subjects need not sign a consent document [21 CFR 56.109(c)]. If the screening indicates that the prospective subject is eligible, the informed consent procedures for the study, as approved by the IRB, would then be followed.
Certain clinical tests, such as for HIV infection, may have State requirements regarding (1) the information that must be provided to the participant, (2) which organizations have access to the test results and (3) whether a positive result has to be reported to the health department. Prospective subjects should be informed of any such requirements and how an unfavorable test result could affect employment or insurance before the test is conducted. The IRB may wish to confirm that such tests are required by the protocol of the study.
A document issued by partners.org states that pre-screening information can be recorded on pre-screening sheets:
V. Retaining Information from Individuals who are Pre-Screened but not Enrolled
It is acceptable to retain non-identifying information about individuals who are pre-screened for a study, but do not actually pursue the study or enroll. In fact, this is often desirable or even requested by industrial or academic sponsors to obtain information about the entire pool of individuals interested or potentially eligible for the study. Pre-screening sheets from individuals who did not provide identifying information can be retained with no further action. Pre-screening sheets with identifying information gathered to obtain written authorization and prior to enrollment (signing of informed consent form) may also be retained in research files, but must have segments containing identifiable information blacked out or cut off as soon as it is clear that the individual will not be enrolled. If identifiable health information is to be retained, the investigator must obtain an authorization from each of the persons screened.
Separating the pre-screening and screening periods may serve different purposes. For example, in an IBS-D study, both pre-screening and screening periods were included. Informed consent was required for the pre-screening because the pre-screening procedures went beyond typical patient care. The formal screening period was used to establish the baseline, since the baseline measure required recording daily symptoms over a 7- to 14-day period.
“this study consisted of an initial prescreening period, a screening period of 2 to 3 weeks, a 12-week double-blind treatment period, and a 2-week post-treatment period. During the 1-week prescreening period, patients underwent a physical examination, provided blood and urine for routine testing, and discontinued any prohibited medications. Patients who met the inclusion and exclusion criteria entered the screening period and began using an interactive voice response system (IVRS) to provide daily symptom assessments. After the screening period of 2-3 weeks, patients who continued to meet eligibility criteria and were compliant with the IVRS system for at least 6 of 7 days during the week before and 11 of 14 days during the 2 weeks before were randomized in parallel…..”

Saturday, July 12, 2014

Clinical Trial Registries in US, EU, WHO, and other countries

For the past decade, there has been increasing pressure on pharmaceutical companies to be transparent about clinical trials and their results. It is considered bad practice for pharmaceutical companies to publish only clinical trials with positive results and hide those with negative results.

On September 27, 2007, President George W. Bush signed U.S. Public Law 110-85. The law includes a section on clinical trial databases (Title VIII) that expands the types of clinical trials that must be registered in ClinicalTrials.gov, increases the number of data elements that must be submitted, and also requires submission of certain results data.

After the law was enacted, clinical trial registries blossomed. In the US, applicable clinical trials are required to be registered in the ClinicalTrials.gov database.

In the early days, some companies complied with the registration requirement in a perfunctory way and provided very little or meaningless information in the registries. For example, one clinical trial entry listed Primary Outcome Measures as ‘Efficacy’ and Secondary Outcome Measures as ‘Safety’ instead of specifying what the outcomes actually were.

Over time, companies have become more serious about the clinical trial registries. For companies that are not compliant with the registration requirements, there are penalties:
“FDAAA 801 establishes penalties for Responsible Parties who fail to comply with registration or results submission requirements. Penalties include civil monetary penalties and, for federally funded studies, the withholding of grant funds.

See the statutory provisions amending Civil Money Penalties (PDF) and Clinical Trials Supported by Grants From Federal Agencies (PDF). “
More importantly, clinical trial registration is now becoming a requirement for publication. For medical journals, it is difficult to get clinical trial results published if the study was not registered at the time the trial started. The International Committee of Medical Journal Editors (ICMJE) has indicated which registries are acceptable.
ICMJE also stated:
“In addition to the above registries, starting in June 2007 the ICMJE will also accept registration in any of the primary registries that participate in the WHO International Clinical Trials Portal (see http://www.who.int/ictrp/network/primary/en/index.html). Because it is critical that trial registries are independent of for-profit interests, the ICMJE policy requires registration in a WHO primary registry rather than solely in an associate registry, since for-profit entities manage some associate registries. Trial registration with missing or uninformative fields for the minimum data elements is inadequate even if the registration is in an acceptable registry.”
The same clinical trial may be registered in multiple clinical trial databases, although this is not required. In reality, we often find that the same clinical trial is registered in multiple places and that the data entries for the same trial actually differ across registries.

Saturday, June 21, 2014

Is MMRM Good Enough in Handling the Missing Data in Longitudinal Clinical Trials?

In the last 7-8 years, there has been a gradual shift from using single-imputation methods such as LOCF to MMRM (mixed model for repeated measures), especially after the PhRMA initiative published the paper "Recommendations for the Primary Analysis of Continuous Endpoints in Longitudinal Clinical Trials” in 2008 and the National Research Council (NAS) published its report “Prevention & Treatment of Missing Data in Clinical Trials” in 2010. The MMRM approach to handling missing data does not employ formal imputation. An MMRM analysis makes use of all available data, including subjects with partial (i.e., missing) data, in order to arrive at an estimate of the mean treatment effect without filling in the missing items.


The use of MMRM is based on the assumption of missing at random (MAR): the assumption that dropouts would have behaved similarly to other patients in the same treatment group, possibly with similar covariate values, had they not dropped out. While MAR seems to be a compromise between missing completely at random (MCAR) and missing not at random (MNAR), the MAR assumption as the basis for using MMRM has been scrutinized or criticized in two recent FDA advisory committee meetings.

In a paper by Dr. Lisa LaVange, “The Role of Statistics in Regulatory Decision Making”, these two advisory committee meetings were mentioned as examples of how FDA statisticians helped with the interpretation of results from MMRM analyses.

“When a patient discontinues therapy for intolerability, however, such an assumption may not be reasonable. More important, the estimated treatment effect will be biased in this case, leaving regulators with some uncertainty as to what information should be included in the product label, if approved.”

In both advisory committee meetings she cited, FDA statisticians challenged the assumptions of using the MMRM in handling the missing data.

The first advisory committee meeting was the January 30, 2013 meeting of the Pulmonary-Allergy Drugs Advisory Committee to review the New Drug Application (NDA) from Pharmaxis Ltd. seeking approval of mannitol inhalation powder for cystic fibrosis (CF).

“In January 2013, an advisory committee meeting was held to review mannitol inhalation powder for the treatment of cystic fibrosis, and missing data issues were an important part of the discussion. Results in favor of mannitol were highly significant in one phase 3 study, but differential dropout rates were observed, indicating that some patients receiving mannitol could not tolerate therapy. The prespecified primary analysis involved a comparison between treatment groups of the average outcome across visits in the context of an MMRM. This analysis was flawed in the presence of informatively missing data, but a sensitivity analysis using baseline values in lieu of missing observations was significantly in favor of mannitol. Citing the NAS report, committee members raised questions about the usefulness of this analysis during the committee’s discussion. The statistical review team provided alternative sensitivity analyses based on empirical distribution functions that appropriately addressed the tolerability issue and provided clarity to the discussion.”
The second advisory committee meeting was the September 10, 2013 meeting of the Pulmonary-Allergy Drugs Advisory Committee to review GSK’s New Drug Application (NDA) 203975 for umeclidinium and vilanterol powder for inhalation in treating COPD.
“At a second advisory committee meeting held in September 2013 to discuss umeclidinium and vilanterol inhalation powder for the treatment of chronic obstructive pulmonary disease, differential dropout rates were observed in the phase 3 studies, with more placebo patients discontinuing due to lack of efficacy compared with those treated with the investigational drug. The statistical reviewer presented a sensitivity analysis using a ‘‘jump to reference’’ method that assumed any patient discontinuing treatment early would behave similarly to placebo patients post discontinuation, arguing that such an assumption was reasonable given the drug’s mechanism of action. The results of this and other analyses helped inform committee members about the impact of missing data on the primary results and also helped focus the discussion on the importance of not only how much data were missing but the reasons why and the way in which postdiscontinuation data were incorporated in the analysis”

The transcript of the September 10, 2013 Meeting of the Pulmonary-Allergy Drugs Advisory Committee (PADAC) details the discussion of the issues with the application of MMRM.

“Given the large amount of patient dropout in the primary efficacy studies, it is important to consider the potential effect of missing data on the reliability of efficacy results. Exploratory analyses showed that patients who dropped out on the active treatment arms tended to show benefit over placebo with respect to FEV1 prior to withdrawal. The primary MMRM model assumes that data after dropout are missing at random. Therefore, if the interest is in the effectiveness of the treatment assignment in all randomized subjects, regardless of adherence, i.e., the intention-to-treat estimand, then the primary analysis assumes that patients who dropped out went on to maintain that early treatment effect, even after treatment discontinuation. This assumption is not scientifically plausible because bronchodilators are symptomatic and not disease-modifying treatments, and thus any effect of the treatment will go away within a few days of stopping therapy. Therefore, a sensitivity analysis to evaluate the intention-to-treat estimand should not assume that any early treatment effect was maintained through 24 weeks in patients who prematurely stopped treatment. The sensitivity analysis carried out by the applicant that we considered most reasonable was a Jump to Reference (J2R) multiple imputation approach. The assumptions of this approach in comparison to those of the MMRM are illustrated in this figure, which displays hypothetical data for patients dropping out after week eight.
 Average FEV1 values over time are displayed by circles for patients on UMEC/VI and by triangles for patients on placebo. The red lines display observed data prior to dropout, illustrating an early treatment benefit. The green lines display the trends in pulmonary function after dropout that are assumed by the MMRM model, i.e., an assumption that the early benefit was maintained throughout the 24 weeks. The blue lines display the trends assumed by the Jump to Reference sensitivity analysis. This analysis, like the MMRM, assumes that placebo patients continued on the trend observed prior to dropout. However, unlike the MMRM, the Jump to Reference (J2R) approach multiply imputes missing data in patients on active treatment under the assumption that any treatment effect observed prior to dropout would have gone away by the time of the next visit. In other words, the assumption is that pulmonary function in these patients after dropout tends to look like that observed in patients on placebo.

The results of this sensitivity analysis approach as compared to those of the primary analysis are shown in this table for Study 373. In all relative treatment comparisons, statistical significance was maintained in the sensitivity analysis. However, estimated magnitudes of treatment effects were approximately 20 to 30 percent smaller in the sensitivity analyses than in the primary analyses. For example, the estimated effect size relative to placebo for UMEC/VI at the proposed 62.5/25 dose was about 0.13 liters, as compared to 0.17 liters in the primary analysis. Notably, all sensitivity analyses to address the missing data are based on untestable assumptions about the nature of the unobserved data. “
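The contrast between the MMRM (MAR) assumption and jump to reference described in the transcript can be reduced to a small deterministic sketch. The mean FEV1 trajectories and the dropout fraction below are invented for illustration; they are not the Study 373 data:

```python
# Hypothetical mean FEV1 improvements (liters) by 4-week visit,
# with some active-arm patients dropping out after week 8.
visits = [4, 8, 12, 16, 20, 24]
placebo_mean = [0.02, 0.03, 0.03, 0.02, 0.02, 0.02]
active_mean  = [0.15, 0.17, 0.18, 0.18, 0.19, 0.19]  # completers' trend

def effect_at_week24(dropout_fraction, assumption):
    """Week-24 treatment effect when a fraction of active-arm
    patients drop out early, under each missing-data assumption."""
    if assumption == "MAR":
        # MMRM-style assumption: dropouts maintain the early benefit,
        # so the week-24 active mean follows the completers' trajectory.
        active_week24 = active_mean[-1]
    elif assumption == "J2R":
        # Jump to reference: dropouts' post-dropout means jump to the
        # placebo trajectory, diluting the week-24 active mean.
        active_week24 = ((1 - dropout_fraction) * active_mean[-1]
                         + dropout_fraction * placebo_mean[-1])
    else:
        raise ValueError(assumption)
    return active_week24 - placebo_mean[-1]

print(round(effect_at_week24(0.3, "MAR"), 3))  # carries the early benefit
print(round(effect_at_week24(0.3, "J2R"), 3))  # roughly 30% smaller here
```

With these invented numbers, 30% dropout shrinks the estimated effect from 0.17 to about 0.12 liters under J2R, the same order of attenuation (20-30%) that the statistical reviewer reported for the actual sensitivity analyses.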
The caveat in using the MMRM approach is also mentioned in EMA's "Guideline on Missing Data in Confirmatory Clinical Trials", released in 2009. The guideline notes that different types of variance-covariance matrices for the MMRM model, and different assumptions used to model the unobserved measurements, could lead to different conclusions. It suggests that "the precise option settings must be fully justified and predefined in advance in detail, so that the results could be replicated by an external analyst".

"The methods above (e.g. MMRM and GLMM) are unbiased under the MAR assumption and can be thought of as aiming to estimate the treatment effect that would have been observed if all patients had continued on treatment for the full study duration. Therefore, for effective treatments these methods have the potential to overestimate the size of the treatment effect likely to be seen in practice and hence to introduce bias in favour of experimental treatment in some circumstances. In light of this the point estimates obtained can be similar to those from a complete cases analysis. This is problematic in the context of a regulatory submission as confirmatory clinical trials should estimate the effect of the experimental intervention in the population of patients with greatest external validity and not the effect in the unrealistic scenario where all patients receive treatment with full compliance to the treatment schedule and with a complete follow-up as per protocol. The appropriateness of these methods will be judged by the same standards as for any other approach to missing data (i.e. absence of important bias in favour of the experimental treatment) but in light of the concern above, the use of only these methods to investigate the efficacy of a medicinal product in a regulatory submission will only be sufficient if missing data are negligible. The use of these methods as a primary analysis can only be endorsed if the absence of important bias in favour of the experimental treatment can be substantiated"



When the study endpoint is continuous and measured longitudinally, and MMRM is used, the assumptions underlying MMRM may be challenged. Some good practices in using MMRM are: 1) always think about the assumptions behind the use of MMRM; 2) use more than one imputation approach for sensitivity analyses; 3) consider using the most conservative imputation approach, such as J2R.

Tuesday, June 10, 2014

Mixed-Effect Model Repeated Measures (MMRM) and Random Coefficient Model Using SAS

Clinical trial data are often presented to us in longitudinal format with repeated measurements. For continuous endpoints in longitudinal clinical trials, both the mixed-effect model repeated measures (MMRM) and the random coefficient model can be used for data analysis.

These two models are very similar, but there are differences. MMRM is used when we compare the treatment difference at the end of the study; the random coefficient model is used when we compare the treatment difference in slopes. If SAS PROC MIXED is used, the key difference is the use of the REPEATED statement for the MMRM and the RANDOM statement for the random coefficient model.

MMRM

MMRM has been used extensively in the analysis of longitudinal data, especially when missing data are a concern and missing at random (MAR) is assumed.

In a paper by Mallinckrodt et al, “Recommendations for the primary analysis of continuous endpoints in longitudinal clinical trials”, the MMRM is recommended over single-imputation methods such as LOCF. The companion slides provide further explanation of how to use MMRM in the analysis of longitudinal data.

In a more recent paper by Mallinckrodt et al (2013), “Recent Developments in the Prevention and Treatment of Missing Data”, MMRM is again mentioned as one of the preferred methods when missing data follow MAR. In this paper, an example is provided and the detailed implementation of the MMRM is described as follows:

 “The primary analysis used a restricted maximum likelihood (REML)–based repeated-measures approach. The analyses included the fixed, categorical effects of treatment, investigative site, visit, and treatment-by-visit interaction as well as the continuous, fixed covariates of baseline score and baseline score-by-visit interaction. An unstructured (co)variance structure shared across treatment groups was used to model the within-patient errors. The Kenward-Roger approximation was used to estimate denominator degrees of freedom and adjust standard errors. Analyses were implemented with SAS PROC MIXED.20 The primary comparison was the contrast (difference in least squares mean [LSMEAN]) between treatments at the last visit (week 8).”

For MMRM, if SAS PROC MIXED is used, the sample SAS code will look like the following:

proc mixed;
   class subject treatment time site;
   model Y = baseline treatment time site
             treatment*time baseline*time / ddfm=kr;
   repeated time / sub=subject type=un;
   lsmeans treatment / cl diff at time=t1;
   lsmeans treatment / cl diff at time=t2;
   lsmeans treatment / cl diff at time=tx ...;
run;
where each LSMEANS statement provides the treatment difference (difference in least squares means) at the corresponding time point.

Random Coefficient Model

A longitudinal model using the RANDOM statement is called a random coefficient model because the regression coefficients for one or more covariates are assumed to be a random sample from some population of possible coefficients. Random coefficient models are also called hierarchical linear models or multilevel models and are useful for highly unbalanced data with many repeated measurements per subject. In a random coefficient model, the fixed-effect parameter estimates represent the expected values of the population of intercepts and slopes; the random effect for the intercept represents the difference between the ith subject's intercept and the overall intercept, and the random effect for the slope represents the difference between the ith subject's slope and the overall slope. The SAS documentation provides an example of using a random coefficient model.


If we intend to compare the difference in slopes between two treatment groups, the MMRM model above can be rewritten as:

proc mixed;
   class subject treatment site;   /* time is left continuous so that slopes can be estimated */
   model Y = baseline treatment time site
             treatment*time baseline*time / ddfm=kr;
   random intercept time / sub=subject type=un;
   estimate 'SLOPE, TRT'      time 1 time*treatment 1 0 / cl;
   estimate 'SLOPE, PLACEBO'  time 1 time*treatment 0 1 / cl;
   estimate 'SLOPE DIFF & CI' time*treatment 1 -1 / cl;
run;

In this model, the ESTIMATE statement is used to obtain the difference in slopes between the two treatment groups. In some cases, the main effect (treatment) may not be significant, but the interaction term (treatment*time), which reflects the difference between the two slopes, may be statistically significant.
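To see why the slope difference, rather than the treatment main effect, can carry the signal, here is a minimal Python sketch (illustrative only, with made-up group means, ignoring random effects, covariates, and within-subject correlation): the two arms start at the same baseline, so there is no treatment effect at time zero, yet they diverge linearly over the visits.

```python
def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical group-mean outcomes at visits 0..4: identical at baseline
# (no treatment main effect at time 0) but diverging over time.
visits = [0, 1, 2, 3, 4]
trt_means = [10.0, 10.5, 11.0, 11.5, 12.0]   # slope 0.5 per visit
pbo_means = [10.0, 10.1, 10.2, 10.3, 10.4]   # slope 0.1 per visit

slope_diff = ols_slope(visits, trt_means) - ols_slope(visits, pbo_means)
print(slope_diff)  # 0.4: captured by the treatment-by-time interaction
```

The 0.4 difference in slopes is what the ESTIMATE statement targets; a test of the treatment main effect alone (the arms' common intercept at baseline) would show nothing.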


In a paper by Dirksen et al (2009), “Exploring the role of CT densitometry: a randomised study of augmentation therapy in α1-antitrypsin deficiency”, a random coefficient model was employed to estimate the difference in slopes between two treatment groups:
"In Methods 1 and 2 for the densitometric analysis, treatment differences (Prolastin® versus placebo) were tested by linear regression on time of PD15 measurement in a random coefficient regression model as follows. Method 1: TLC-adjusted PD15 from CT scan as the dependent variable; treatment, centre and treatment by time interaction as the fixed effects; and intercept and time as the random effects. Method 2: PD15 from CT scan as the dependent variable; treatment, centre and treatment by time interaction as the fixed effects; logarithm of TLV as a time-dependent covariate; and intercept and time as the random effects. The estimated mean slope for each treatment group represented the rate of lung density change with respect to time. The tested treatment difference was the estimated difference in slope between the two groups, considered to be equivalent to the difference in the rates of emphysema progression."

Sunday, June 08, 2014

Meetings Materials on FDA's website

Because of the FDA Transparency Initiative, the meetings, conferences, and workshops sponsored or co-sponsored by FDA are now open to the public. Shortly after these meetings, conferences, and workshops, the materials and webcasts are usually made available to the public. This is a good step forward.

The list of meetings, conferences, and workshops sponsored or co-sponsored by the Center for Drug Evaluation and Research (CDER) can be found at the following website:

Meetings, Conferences, & Workshops (Drugs)


For example, for several public meetings I am interested in, the meeting materials are all available to the public on the website.

Periodically, FDA organizes Advisory Committee (AC) meetings. While FDA may or may not follow the recommendations from the Advisory Committees, the materials presented at these meetings are always important. Sometimes statistical issues such as endpoint selection, missing data handling, and interpretation of safety results are extensively discussed during the AC meeting and in the meeting materials. Fortunately, the AC meeting materials are usually posted on the web, and sometimes webcasts are available too.

Advisory Committees & Meeting Materials