Saturday, April 16, 2011

Emerging Statistical Issues in the Conduct and Monitoring of Clinical Trials

This past Wednesday, I had a chance to attend the “University of Pennsylvania Annual Conference on Statistical Issues in Clinical Trials”. This year’s topic was “Emerging Statistical Issues in the Conduct and Monitoring of Clinical Trials”. The number of participants was just the right size, and the conference was well organized.

Some of the topics I liked and some I did not. The presentation slides will eventually be posted on the conference’s website; in the meantime, I would like to offer a sentence or two of comments on each topic.

“Sample size estimation incorporating disease progression” – the key issue is the adequacy of the study endpoint. A good endpoint will incorporate the impact of disease progression.
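
For reference, a minimal sketch of the conventional two-sample calculation is below (the effect size delta and standard deviation sigma are hypothetical numbers of my own); incorporating disease progression would amount to rethinking what delta and sigma mean for the chosen endpoint.

```python
import math
from scipy.stats import norm

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-sample z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical numbers: detect a 5-point difference, SD of 12
print(two_sample_n(delta=5, sigma=12))   # 91 subjects per arm
```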

“Hurdles and future work in adaptive designs” – it was good to hear a discussion of the hurdles and caveats of adaptive designs. Too often, people talk only about the advantages of adaptive designs, which sound too good to be true. Similarly, a recent article from The Scientist magazine ("a once-rare type of clinical trial that violates one of the sacred tenets of trial design is taking off, but is it worth the risk?") gave some objective assessments of implementing adaptive designs.

“Predicting accrual in ongoing trials” – using complicated statistical models to predict accrual strikes me as a waste of time. Accrual in ongoing clinical trials is 95% a clinical operations issue and only 5% a statistical one. Is it worth modeling the accrual?
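
That said, the simplest version of such a model is not complicated at all. Here is a minimal sketch, with made-up numbers, that treats enrollment as a Poisson process: estimate the rate from accrual so far and simulate how many more months are needed to hit the target.

```python
import numpy as np

rng = np.random.default_rng(2011)

# Made-up ongoing trial: 120 of 300 subjects enrolled after 10 months
enrolled, target, months_elapsed = 120, 300, 10
rate = enrolled / months_elapsed            # observed subjects per month

def months_to_finish():
    n, months = enrolled, 0
    while n < target:
        n += rng.poisson(rate)              # one simulated month of accrual
        months += 1
    return months

sims = [months_to_finish() for _ in range(10_000)]
print(np.percentile(sims, [5, 50, 95]))     # roughly 14, 15, and 17 more months
```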

“New incentive approaches for adherence” – monetary incentives, including lotteries, are a sensitive topic, and ethical issues can follow whether the incentive is for adherence or for study visit compliance. The effect of a monetary incentive also differs with participants’ socioeconomic status (family income): a $100 lottery may be a strong incentive to some, but not to others.

“Efficient source data verification in cancer trials” – I had always thought that every data field had to be 100% source data verified. That is not entirely true in large-scale oncology trials or in studies with cardiovascular endpoints. In industry, we are rather conservative.

“Estimation of effect size in trials stopped early” – stopping a trial early for efficacy is not very common and should not be encouraged. The difficulty in estimating the effect size of a trial stopped early remains: an estimate computed from data that just crossed an efficacy boundary tends to overstate the true treatment effect.
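
A small simulation (my own toy example, not from the talk) makes the difficulty concrete: among trials that cross an O'Brien-Fleming-type efficacy boundary at the interim look, the naive estimate of the treatment effect is biased upward.

```python
import numpy as np

rng = np.random.default_rng(42)
true_delta, sigma, n_interim = 0.2, 1.0, 100
z_boundary = 2.797   # O'Brien-Fleming-type efficacy boundary at 50% information

estimates = []
for _ in range(50_000):
    x = rng.normal(true_delta, sigma, n_interim)    # interim data, one trial
    z = x.mean() / (sigma / np.sqrt(n_interim))
    if z > z_boundary:                              # trial stops early for efficacy
        estimates.append(x.mean())

print(len(estimates) / 50_000)   # about 21% of trials stop at the interim look
print(np.mean(estimates))        # about 0.34, well above the true effect of 0.2
```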

“Accounting in analysis for data errors discovered through sampling” – unreliable data or a large percentage of missing data is always a concern, even for observational studies. A statistical approach may not be a good option: when the data are garbage, the results we draw from them will also be garbage (so-called ‘garbage in, garbage out’), no matter which statistical model is used to address the data issues.

“Some practical issues in the evaluation of multiple endpoints” – it is so right that we should play down the importance of differentiating ‘primary endpoint’, ‘secondary endpoint’, ‘tertiary endpoint’, and so on. Multiple comparison adjustments have expanded so much that they are now everywhere (co-primary endpoints, primary and secondary endpoints, co-secondary endpoints, a secondary superiority test after a non-inferiority test, interim analyses, meta-analyses, ISE, ...). Are we overdoing this?
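
To make the multiplicity machinery concrete, here is a minimal Holm step-down adjustment applied to a hypothetical set of endpoint p-values (my own illustration, not something presented at the conference).

```python
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjusted p-values (controls familywise error rate)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(1.0, running_max)
    return adj

# Hypothetical p-values for a primary and three secondary endpoints
print(holm_adjust([0.011, 0.030, 0.017, 0.210]))
# [0.044, 0.060, 0.051, 0.210]
```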

1 comment:

Tom Spradlin said...

Two brief comments:

Are we overdoing the multiple endpoint stuff?? Oh my god yes.

Why do people talk about estimating sample size? The sample size is not an unknown quantity, like the mean treatment effect, but instead a DECISION that one has to make. The word "estimate" is completely out of place.