Monday, January 13, 2020

Pre-specification of the statistical analysis plan - the story of Ampligen in Chronic Fatigue Syndrome


In the previous post, 'Pre-specification and SAP', various regulatory guidelines and guidance documents were cited to demonstrate the importance of pre-specifying the statistical analysis plan in clinical trials - especially pivotal clinical trials. In the eyes of FDA reviewers, the most critical issue is that the statistical analysis method must not be chosen or switched after study unblinding, or after the sponsor has knowledge of the unblinded results. If such cherry-picking were allowed, a negative study could be turned into a positive one. 

One example is the drug Ampligen, developed by Hemispherx Biopharma Inc. for the treatment of chronic fatigue syndrome (CFS, also known as myalgic encephalomyelitis). Prior to the PDUFA date, an advisory committee meeting was convened. The discussion of whether or not Ampligen was efficacious focused on the statistical analysis using untransformed data (result statistically significant) versus log-transformed data (result not statistically significant). FDA accused the sponsor of switching to the analysis of untransformed data after the planned analysis using log-transformed data turned out not to be statistically significant. 

To “log transform” or not was one of the major issues the FDA raised in their background report: the untransformed data showed that Ampligen had a significantly positive benefit, while the log-transformed data indicated the drug (almost, but not quite) did not have that effect.

This technical question would dog Hemispherx throughout the hearing; they would ultimately answer it, but one had the feeling that it was too late…that the damage had been done. Hemispherx’s statistician, with years of FDA experience under his belt, showed instructions from the FDA stating that log transformation should not be used unless necessary, because it could skew the data.

The FDA officials, though, were focused on something else: when and why Hemispherx decided whether or not to log transform the data. The big question was whether Hemispherx saw the data before it made that decision. The statistician appeared to argue that Hemispherx had to check the variance to determine if the transformation was warranted, but in the end stated that the biostatistician who prepared the data was no longer with the company and they didn’t know. That was a huge blow…

Never mind that Hemispherx had demonstrated that the log transformation was not warranted and that the untransformed data were appropriate…the FDA was mostly interested in whether the biostatistician had ‘followed the rules’. It was a bizarre thing, as a patient, to watch a drug that could help ill people be held up on procedural issues, but there it was.
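The crux of the dispute - that the same trial can be significant on one scale and not on the other - is easy to reproduce. Below is a minimal sketch using simulated, right-skewed data (these are NOT the actual Ampligen trial data); the two t-tests can disagree because the log transform changes which observations dominate the comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2020)

# Simulated, right-skewed outcome data (purely illustrative)
placebo = rng.lognormal(mean=5.0, sigma=0.6, size=100)
treated = rng.lognormal(mean=5.2, sigma=0.6, size=100)

# Analysis 1: two-sample t-test on the untransformed data
t_raw, p_raw = stats.ttest_ind(treated, placebo)

# Analysis 2: the same t-test on the log-transformed data
t_log, p_log = stats.ttest_ind(np.log(treated), np.log(placebo))

print(f"untransformed:   p = {p_raw:.4f}")
print(f"log-transformed: p = {p_log:.4f}")
```

Whichever analysis a sponsor prefers after seeing results like these is exactly the choice the SAP must lock in beforehand.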

This is how the FDA approves drugs? More impromptu debate than rigorous analysis, the discussion session flowed along from topic to topic with a pro-FDA moderator calling the shots. An actual debate, with each side arguing the pros and cons of each issue, would have been much better.

Instead of issues being drawn up, presented on the screens, and then discussed in an organized manner with each side given equal time, the conversation lurched from topic to topic, leaving the Hemispherx reps as frustrated spectators too many times.

After lunch, Hemispherx appeared confident even after the morning pummeling they had taken from the FDA team. They felt they had answers to the FDA’s concerns, but after the meeting, several members of the team felt they simply were not given the opportunity to present them. The moderator did call on Hemispherx several times but, for the most part, the FDA personnel held the floor.

The short early discussion period clearly left many questions hanging at a time when the issues were fresh in the reviewers’ minds, and the later discussion period felt hurried as well.

Given the ad hoc nature of the discussion period, companies must shake in their boots and investors must tremble when they approach these meetings. Then again, this is an FDA that gives companies 220 pages of background materials and a list of questions to be answered just two days before the meeting. The FDA team clearly has the upper hand in these hearings, and that is apparently how they want it.
The consequence was that Ampligen was voted down by the advisory committee, and the NDA was subsequently rejected by FDA. The sponsor gave up further pursuit of Ampligen for the treatment of CFS. Instead, it changed its name to AIM ImmunoTech Inc. and retooled Ampligen for oncology indications.
On the question, "Based on the information included in the briefing materials and presentations, has the applicant provided sufficient efficacy and safety data to support marketing of Ampligen for the treatment of CFS?", the advisory committee voted 8 no, 5 yes, with 1 non-vote.

Thursday, January 02, 2020

Pre-specification and statistical analysis plan

For industry-sponsored clinical trials, especially trials intended for regulatory submission (i.e., adequate and well-controlled studies), a prespecified analysis plan is essential. In a previous post, 'When to Finalize the Statistical Analysis Plan (SAP)?', the timing of SAP development and finalization was discussed. 

According to the current guidelines (such as ICH E9 "STATISTICAL PRINCIPLES FOR CLINICAL TRIALS"), the SAP may be written as a separate document (as we usually do) and should be finalized before breaking the blind.
"5.1 Prespecification of the Analysis

When designing a clinical trial the principal features of the eventual statistical analysis of the data should be described in the statistical section of the protocol. This section should include all the principal features of the proposed confirmatory analysis of the primary variable(s) and the way in which anticipated analysis problems will be handled. In case of exploratory trials this section could describe more general principles and directions. The statistical analysis plan may be written as a separate document to be completed after finalising the protocol. In this document, a more technical and detailed elaboration of the principal features stated in the protocol may be included. The plan may include detailed procedures for executing the statistical analysis of the primary and secondary variables and other data. The plan should be reviewed and possibly updated as a result of the blind review of the data and should be finalised before breaking the blind. Formal records should be kept of when the statistical analysis plan was finalised as well as when the blind was subsequently broken. If the blind review suggests changes to the principal features stated in the protocol, these should be documented in a protocol amendment. Otherwise, it will suffice to update the statistical analysis plan with the considerations suggested from the blind review. Only results from analyses envisaged in the protocol (including amendments) can be regarded as confirmatory.

In the statistical section of the clinical study report the statistical methodology should be clearly described including when in the clinical trial process methodology decisions were made (see ICH E3). "
Similar language appears in ICH E8 "GENERAL CONSIDERATIONS FOR CLINICAL TRIALS":
3.2.4 Analysis
The study protocol should have a specified analysis plan that is appropriate for the objectives and design of the study, taking into account the method of subject allocation, the measurement methods of response variables, specific hypotheses to be tested, and analytical approaches to common problems including early study withdrawal and protocol violations. A description of the statistical methods to be employed, including timing of any planned interim analysis(es) should be included in the protocol (see ICH E3, ICH E6 and ICH E9).  The results of a clinical trial should be analysed in accordance with the plan prospectively stated in the protocol and all deviations from the plan should be indicated in the study report. Detailed guidance is available in other ICH guidelines on planning of the protocol (ICH E6), on the analysis plan and statistical analysis of results (ICH E9) and on study reports (ICH E3).
However, some randomized controlled studies are single-blind or have no blinding (open-label studies). According to ICH E8 revision 1, "GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1)", the statistical analysis plan should be finalized before the unblinding of study data (for blinded studies) or before the conduct of the study (for open-label studies):
5.1.6 Statistical Analysis

The statistical analysis of a study encompasses important elements necessary to achieving the study objectives. The study protocol should include a statistical methods section that is appropriate for the objectives and study design (ICH E6 and E9). A separate statistical analysis plan may be used to provide the necessary details for implementation. The protocol should be finalised before the conduct of the study, and the statistical analysis plan should be finalised before the unblinding of study data, or in the case of an open-label study, before the conduct of the study. These steps will increase confidence that important aspects of analysis planning were not based on accumulating data in the study or inappropriate use of external data, both of which can negatively impact the reliability of study results. For example, the choice of analysis methods in a randomised clinical trial should not change after examining unblinded study data, and external control subjects should not be selected based on outcomes to be used in comparative analyses with treated study subjects.
It is commonly accepted that the pre-specified analysis plan needs to be included in the protocol for the primary efficacy endpoint, and perhaps also for the secondary efficacy endpoints. A separate statistical analysis plan is usually prepared after the study protocol has been implemented and some patient data (in a blinded fashion) are available.

FDA has several guidance documents for its reviewers on reviewing the SAP. From these, we can see what the FDA's expectations are for the SAP.

Good Review Practice: Statistical Review Template
Data and Analysis Quality
Review the quality and integrity of the submitted data. Examples of relevant issues include the following: 
  • Whether it is possible to reproduce the primary analysis dataset, and in particular the primary endpoint, from the original data source 
  • Whether it is possible to verify the randomized treatment assignments 
  • Findings from the Division of Scientific Investigation or other source(s) that question the usability of the data 
  • Whether the applicant submitted documentation of data quality control/assurance procedures (see ICH E3,1 section 9.6; also ICH E6,2 section 5.1) 
  • Whether the blinding/unblinding procedures were well documented (see ICH E3, section 9.4.6)
  • Whether a final statistical analysis plan (SAP) was submitted and relevant analysis decisions (e.g., pooling of sites, analysis population membership, etc.) were made prior to unblinding.
Applicants are expected to submit data of high quality and make it possible for the FDA to reproduce their results. In turn, FDA reviewers should provide adequate documentation so that the applicant or another data user could reproduce their independent findings. The level of documentation needed will depend on the complexity and novelty of the analysis. If an ordinary ANOVA or ANCOVA is used, for example, it would suffice to identify the dependent and independent variables. If a more unusual analysis is performed, then it may be necessary to provide code. The code should be either included in the report or put in an appropriate digital archive. 
Good Review Practice: Clinical Review of Investigational New Drug Applications

8. STATISTICAL ANALYSIS PLANS
Sponsors should be encouraged to include the SAP as part of the protocol, rather than providing it in a separate document, even if the SAP has not been finalized. If the SAP is changed late in the trial, particularly after the data may be available, it is critical for the sponsor to assure the FDA that anyone making such changes has been unaware of the results. Sponsors should be encouraged to describe the methods used to ensure compliance. Additional information on the principles of statistical analyses of clinical trials is available in ICH E9. The review of the SAP requires close collaboration with the biostatistical reviewer. 
8.1 Planned Analyses
Analyses intended to support a marketing application (generally analyses for the phase 3 efficacy trials) should be prospectively identified in the protocol and described in adequate detail. An incomplete description of the proposed analyses in the protocol can leave ambiguity after trial completion in how the trial will be analyzed. 
Nonprospectively defined analyses pose problems because they leave the possibility that various statistical methods were tried and only the most favorable analysis was reported. In such cases, the estimates of drug effect may be biased by the selection of the analysis, and the proper correction for such bias can be impossible to determine. Preplanning of analyses reduces the potential for bias and often reduces disputes between sponsors and the FDA on the interpretation of results. The same principles apply to supportive and/or sensitivity analyses. These analyses should be prospectively specified, despite the fact that the results of such analyses cannot be used as a substitute for the primary analysis. If the protocol pertains to a multinational trial, it is important that an analysis of the regional  differences be prespecified. Clinical reviewers should review these considerations for planned analyses in collaboration with statistical reviewers. 
Although detailed prespecification is essential for the primary efficacy analysis, the ability to interpret findings on other outcomes, such as important secondary efficacy endpoints for which a claim might be sought, is also dependent on the presence of a prospectively described analysis plan. Observations of potential interest, termed descriptive endpoints because the trial will almost always be underpowered in their respect, may be considered in a trial that is successful on its primary endpoint to further explore consistency in demographic subgroups (e.g., sex, age, and race) or evaluate regional differences in multinational trials. Safety outcomes are also important and should be specified prospectively. They will often not be part of the primary analysis unless the trial was designed to assess such an endpoint. Analyses not prospectively defined will in most cases be considered exploratory; see section 8.2.2.1, Descriptive Analysis, for potential use of such descriptive analyses. 
Interim analyses may play an important role in trial design. They present complex issues, including preservation of overall Type I error (alpha spending function), re-estimation of sample size, and stopping guidelines. Plans for interim analyses should be prospectively determined and reviewers should discuss these plans with the statistician. See section 8.1.3, Interim Analysis Plans, for further discussion of these plans. 
8.1.1 Adequacy of the Statistical Analysis Plan
When reviewing the SAP, it is critical to consider whether there is ambiguity about the planned analyses. Particular attention should be paid to the primary endpoint and how it will be analyzed. If there are multiple primary endpoints or analyses, the Type 1 error rate should be controlled appropriately. If there is a single primary endpoint, details of the analysis are important. For example, an SAP that defines the primary analysis as a comparison of the time to event between treatment arms leaves open many possibilities, such as the specific analytical approach (e.g., Cox regression, log rank test), whether the analyses will be adjusted for covariates (and which covariates would be included), and the method for this adjustment. Censoring for subjects who drop out of the trial or who are lost to follow-up should be discussed, particularly since dropout may not be random. Post dropout follow-up may have different implications for superiority and noninferiority trials. 
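To make concrete why "a comparison of the time to event between treatment arms" is ambiguous, the sketch below implements just one of the specific choices the guidance mentions - an unadjusted two-group log-rank test with right censoring. The data are hypothetical; a covariate-adjusted Cox regression would be another valid prespecified choice and could give a different answer, which is exactly why the SAP must name one.

```python
from scipy import stats

def logrank(times1, events1, times2, events2):
    """Unadjusted two-group log-rank test (event = 1, censored = 0)."""
    data = ([(t, e, 0) for t, e in zip(times1, events1)]
            + [(t, e, 1) for t, e in zip(times2, events2)])
    observed = expected = variance = 0.0
    for t in sorted({t for t, e, _ in data if e == 1}):
        at_risk = [row for row in data if row[0] >= t]
        n = len(at_risk)
        n1 = sum(1 for _, _, g in at_risk if g == 0)
        d = sum(1 for tt, e, _ in at_risk if tt == t and e == 1)
        d1 = sum(1 for tt, e, g in at_risk if tt == t and e == 1 and g == 0)
        observed += d1            # events in group 1 at time t
        expected += d * n1 / n    # expected events in group 1 under H0
        if n > 1:
            variance += d * (n - d) * n1 * (n - n1) / (n * n * (n - 1))
    chi2 = (observed - expected) ** 2 / variance
    return chi2, stats.chi2.sf(chi2, df=1)  # chi-square, 1 degree of freedom

# Hypothetical follow-up times (months) and event indicators
chi2, p = logrank([6, 7, 10, 15, 19, 25], [1, 0, 1, 1, 0, 1],
                  [1, 3, 4, 8, 11, 16], [1, 1, 1, 1, 0, 1])
print(f"log-rank chi-square = {chi2:.3f}, p = {p:.4f}")
```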
Consideration also should be paid to other preplanned analyses, such as secondary endpoint analysis, population subset analysis, regional analysis, and interim analysis. Both clinical and statistical reviewers should collaborate in order to make appropriate recommendations. 
When there are possible secondary efficacy endpoints (e.g., different time points, population subsets, different statistical tests, different outcome measures), it is critical to determine how they will be analyzed and their role in the efficacy assessment. In general, secondary analyses are not considered in regulatory decision-making unless there is an effect on the primary endpoint, so that no Type 1 error adjustment is needed for the primary endpoint. A secondary endpoint intended to represent a trial finding (and thus a possible claim) after success on the primary endpoint should be considered as part of the overall SAP and, if there is more than one of these, a multiplicity adjustment or gatekeeper approach may be necessary to protect the Type 1 error rate at a desired level (alpha = 0.05) for such analyses. Positive results in a secondary analysis when the primary endpoint did not demonstrate a statistically significant difference generally will not be considered evidence of effectiveness. 
Protection of the overall (family-wide) Type 1 error rate at a desired level (alpha = 0.05) is essential when the protocol has designated multiple hypotheses testing. Examples include efficacy comparisons among multiple doses with respect to primary and secondary endpoints, subpopulation analysis, and regional analysis. Various commonly used statistical procedures can be used for this multiplicity adjustment (e.g., Bonferroni, Dunnett, Hochberg, Holm, Hommel, and gatekeeping procedures), and these procedures will be considered in a multiplicity guidance under development. The proper use of each procedure depends on the priority of the hypotheses to be tested and the definition of a successful trial outcome. The following two examples are illustrative:
Example 1. A placebo-controlled trial with one primary endpoint and three treatment doses (low, medium, and high) is planned. To assess the efficacy of the three doses as compared to placebo, a commonly used hierarchical procedure tests sequentially from high dose to low against placebo, each at alpha = 0.05, until a p-value ≤ 0.05 is not attained for a dose. Significance is then declared for all doses that achieved a p-value ≤ 0.05. 
The Bonferroni correction approach also can be used to share alpha = 0.05 among the three doses and test each one at alpha = 0.05/3 = 0.017. This method will be less efficient than the sequential method, if the effect is likely to be positively associated with dose. The primary analysis could also evaluate all three doses pooled versus placebo (less efficient if the low doses are not effective) or of the two highest doses versus placebo. 
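The two strategies in Example 1 can be sketched in a few lines; the p-values below are hypothetical. With these numbers, the fixed-sequence (hierarchical) procedure declares the high and medium doses significant, while splitting alpha = 0.05 across the three doses (Bonferroni, 0.017 each) declares only the high dose significant - illustrating why the procedure must be named in advance.

```python
# Hypothetical p-values for each dose vs. placebo
p_values = {"high": 0.003, "medium": 0.020, "low": 0.210}

def fixed_sequence(pvals, order, alpha=0.05):
    """Test doses in the prespecified order, stopping at the first failure."""
    wins = []
    for dose in order:
        if pvals[dose] <= alpha:
            wins.append(dose)
        else:
            break  # once a dose fails, no further doses are tested
    return wins

def bonferroni(pvals, alpha=0.05):
    """Each dose is tested at alpha divided by the number of comparisons."""
    cutoff = alpha / len(pvals)  # 0.05 / 3 ~= 0.017
    return [dose for dose, p in pvals.items() if p <= cutoff]

print(fixed_sequence(p_values, ["high", "medium", "low"]))  # ['high', 'medium']
print(bonferroni(p_values))                                 # ['high']
```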
Example 2. A placebo-controlled trial of two endpoints, A and B, and three treatment doses (low, medium, and high) is planned. Suppose endpoint A is thought to be more indicative of the true effect than B and so is placed higher in the hierarchy than B. Also, suppose the medium and high doses are hypothesized to be equally effective while the low dose is considered less likely to exhibit significance. Multiple clinical decision rules are designated in hierarchical order to demonstrate the efficacy: 
  • Show benefit for each of two higher doses individually compared to placebo with respect to endpoint A and endpoint B 
  • Show benefit for the two higher doses pooled compared to placebo with respect to endpoint A or B 
  • Show benefit for the low dose with respect to endpoint A 
  • Show benefit for the low dose with respect to endpoint B 
Although the Bonferroni, Holm, or Hommel procedure can be applied to test these four hypotheses, a gatekeeper procedure that sequentially tests the four sets of hypotheses in hierarchical order, each at alpha = 0.05, is likely to be more efficient. 
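A minimal sketch of the serial gatekeeping logic described in Example 2, with hypothetical p-values: each family of hypotheses is tested at alpha = 0.05, and later families are tested only if the earlier family fully succeeds, even if a later family's nominal p-value is below 0.05.

```python
# Serial gatekeeping over the four hypothesis families in hierarchical order.
# All p-values below are hypothetical.
families = [
    # 1) high & medium dose vs. placebo, endpoints A and B (all four must pass)
    {"name": "higher doses, A and B", "pvals": [0.004, 0.012, 0.008, 0.030]},
    # 2) two higher doses pooled vs. placebo
    {"name": "pooled higher doses", "pvals": [0.002]},
    # 3) low dose, endpoint A
    {"name": "low dose, A", "pvals": [0.080]},
    # 4) low dose, endpoint B
    {"name": "low dose, B", "pvals": [0.040]},
]

def serial_gatekeeper(families, alpha=0.05):
    """Each family is a gate: every test must pass for the next to open."""
    passed = []
    for fam in families:
        if all(p <= alpha for p in fam["pvals"]):
            passed.append(fam["name"])
        else:
            break  # the gate closes; later families are never tested
    return passed

print(serial_gatekeeper(families))
# The low dose fails on endpoint A (p = 0.08), so endpoint B is never
# tested even though its nominal p-value (0.04) is below 0.05.
```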
How missing data (particularly from dropouts) are handled can profoundly affect trial outcomes. Sponsors should be encouraged to detail how they plan to minimize dropouts and to specify particular methods for assessing data from dropouts. It is not credible to design these analyses once unblinded data are available. Reviewers should consider the best approach for the particular situation, recognizing that such classic methods as last observation carried forward (known as LOCF) can bias a trial for or against a drug, depending on the reasons for attrition, the time course of the disease, and response to treatment. Dropout may not be random, as subjects may drop out of either the new drug or the control therapy for toxicity or for lack of efficacy. Depending on the cause of the dropout, the use of modeling approaches might have advantages. 
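As an illustration of why the handling of dropouts must be prespecified, here is a minimal LOCF sketch with hypothetical visit data. The carried-forward value freezes the subject at the last observed response - optimistic if the subject dropped out for lack of efficacy while deteriorating, pessimistic if the treatment effect would have continued to grow.

```python
def locf(visits):
    """Last observation carried forward: fill missing visits (None)
    with the most recent non-missing value."""
    filled, last = [], None
    for v in visits:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# A subject who dropped out after visit 3 of a 5-visit schedule
subject = [52.0, 55.5, 58.0, None, None]
print(locf(subject))  # [52.0, 55.5, 58.0, 58.0, 58.0]
```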
8.1.2 Reviewing Changes to the Statistical Analysis Plan
Changes to critical elements of the analysis (e.g., the primary endpoint, handling of dropouts) during a trial can raise concerns regarding bias, specifically whether the changes could reflect knowledge of unblinded data. Concerns are inevitably greatest when the change is made late and has an important effect on outcome. In theory, if such changes are unequivocally made blindly (e.g., because of data from other trials or careful reconsideration) they should not pose problems, but the assurance of blinding can be hard to provide. For obvious reasons, changes made with data in hand (but purportedly still blinded) pose the greatest difficulties and are hard to support. 
When changes to the original SAP are proposed during the course of conducting the trial, it is critical to determine exactly what information, if any, regarding trial outcomes was available to those involved in proposing the change. Changes made with knowledge of results can introduce bias that can be substantial and impossible to measure. Note that such biases can occur subtly (e.g., the likelihood of adoption of a proposal made by an individual with no knowledge of data can be influenced by the comments or nonverbal communication of an individual who does have such knowledge). Therefore, major protocol changes are not credible if knowledge of interim outcome data is available to any individual who is involved with those planning the change. If there is any potential for such changes, sponsors should be encouraged to describe fully who has had access to data and how the firewalls were maintained, among other information. 
After trial data collection is completed, and before unblinding, there is often a blinded data cleanup phase. During that phase, previously unaddressed specific concerns about the data may be identified (e.g., types and amounts of missing data, concomitant therapies), and decisions are often made by the sponsor as to how to address those concerns. Typically, any changes made during this data cleanup phase should be minor clarifications of the SAP. If more than minor clarifications are made to the SAP, sponsors should be encouraged to submit these changes to the FDA for review as protocol amendments.

Statistical analysis plan. Submission of a detailed statistical analysis plan (SAP) in the initial protocol submission for phase 3 protocols is not required by CDER regulations. However, review staff should strongly encourage sponsors to include the SAP in the initial protocol submission, because phase 3 protocols generally include a detailed section devoted to statistical methods that are closely linked to trial design.
Good Review Practice: Clinical Review Template
"Reviewers are expected to use the study/clinical trial protocol for discussions on study/clinical trial design and planned efficacy analyses and not the final report itself, because documentation of the study/clinical trial design and the statistical analysis plan within the final report are occasionally incomplete or inaccurate."
"With respect to adequate and well-controlled clinical trials, the reviewer should consider: • Minimization of bias (adequacy of blinding, randomization, endpoint committees, prospective statistical analysis plan, and identification of endpoints) • Choice of control group and the limitations of various choices, especially for historical controls or noninferiority clinical trials, including adequacy of documented effect size for the control drug"
"6.1.5 Analysis of Secondary Endpoint(s) Reviewers should describe the secondary endpoints and their potential supportive role. Was an analysis plan prespecified? Were the secondary endpoints considered for analysis as a hierarchical structure? Should any secondary endpoint be assessed if the primary endpoint fails to achieve statistical significance? "
The pre-specified analysis plan should also describe how the safety data will be analyzed, even though pre-specification of the safety analyses is not as critical as it is for efficacy. Unlike the efficacy analyses, where statistical methods vary depending on the study design, the endpoint measure, and other issues, the analysis of safety data is relatively standardized. 

Wednesday, January 01, 2020

Pediatric Drug Delivery: Challenges And Solutions

Drug development usually starts with adult patients and then moves on to pediatric patients, unless the drug is developed only for a pediatric indication. When developing a drug for a pediatric indication, drug delivery (formulation and route of administration) is critical. For example, an oral medication in tablets or capsules may be perfect for adult patients but unsuitable for pediatric patients. 

Here is an article from Life Science Leader discussing challenges and solutions in pediatric drug delivery: 
================================================================

Pediatric Drug Delivery: Challenges And Solutions

Source: Catalent

The following is a summary of a Q&A with two formulation and bioavailability experts from Catalent addressing some common formulation challenges for pediatric populations.
Uwe Hanenberg, Ph.D., Director, Product Development and Oral Solids, Catalent - a well-known industry expert on the formulation and analytics of oral solid dosage forms.
Pascale Clement, Director, Project Management, Catalent - an expert on bioavailability enhancement techniques for challenging molecules.  
Q1 - What are the special challenges in the development of medicines for pediatric use?
Recognizing that physiological changes occur from birth through the adolescent years - leading to differences in pharmacokinetics (PK) and pharmacodynamics (PD) - the task of developing medicines for the various pediatric age groupings can be challenging. It can require different formulations, different dosage forms and strengths, or different routes of administration to ensure proper treatment of children of all age groups.
Looming over this scenario are the limitations of the various dosage forms and their possible adverse impact on patients' safety, acceptability, and swallowability.
For example, oral solid dosage forms are associated with limited dose flexibility and risk of aspiration or choking, depending on the size and shape of the tablet or capsule. Oral liquids have challenges in terms of physical, chemical, and microbiological stability. Both oral solids and liquids have to deal with palatability issues. Measurement and administration of oral liquid and oral solid dosage forms can lead to improper dosing and potential toxicity concerns.
Non-oral routes of drug administration are limited by difficulty of application or administration and the potential for local site irritation. As with adults, parenteral and topical administration also faces challenges in measuring and administering small dose volumes, which have the potential to cause dose variation and error. Special attention must also be given to the use of appropriate excipients for children of different age groups, to avoid the consequences of excipient toxicity.
The potential for drug-food or drug-vehicle interactions in children adds to the complexity. It is quite common for medication to be mixed with or dissolved in food or liquids to improve delivery and palatability. The quantity and composition of food required to generate a food effect in children are not yet clear, and there is no guidance to support in vitro or in silico risk assessment of food effects.
Oral solid dosage forms are gaining favor over oral liquids for use in pediatric patients. Mini-tablets have been shown to be an acceptable dosage form for toddlers. Granules and multiparticulates with taste masking technologies may be appropriate dosage forms for infants.
Arguably the biggest challenge in pediatric oral solid formulation development is to develop flexible dosage forms with measurable and easy to administer dosage, preferably formulated with taste-masking properties for better acceptance of the drug formulation in children.
Despite these challenges, the industry is nevertheless committed to providing safe and effective age-appropriate medicine to children. Many initiatives within industry, regulators, and academia have been spurred related to the development of medicines for pediatric age groups and to the improved availability of information on the use of medicines in children.
For example, the European Paediatric Formulation Initiative (EuPFI), a group composed of pediatric formulation experts from industry, academia, and clinical pharmacy, was founded with the aim of raising awareness of pediatric formulation issues and providing recommendations for formulation development plans. Other networks, such as the European Network of Paediatric Research (Enpr-EMA), were established by the European Medicines Agency to encourage collaboration among academic and industry members from within and outside the European Union.
Q2 - You mentioned PK/PD differences between adults and children – can you please explain?
The differences in PK/PD are caused by the physical, metabolic, and physiological processes inherent to growth, and they show that children cannot be regarded as small adults.
Let’s have a look at the differences to better understand this:
From birth to adulthood, several important factors driving PK/PD values change constantly: gastric pH (during the first three years, especially the first weeks), intestinal fluid volume and composition, immaturity of bile and pancreatic fluid secretion, and intestinal transit time can all have a significant impact on drug exposure.
A further significant difference between children and adults can be the permeation of the drug through the epithelial layer of the gastrointestinal tract, which is often lower in children than in adults - the permeability of APIs can, but need not, be lower. In some cases (e.g., dolasetron, ketoprofen, or voriconazole), this leads to a switch in BCS class from 1 (adult) to 3 (children) or from 2 (adult) to 4 (children), with a corresponding impact on the formulation and bioavailability enhancement requirements of the pediatric formulation.
Differences related to total body water, plasma-protein binding, metabolic enzymes, first-pass effect, glomerular filtration, renal secretion, and renal absorption lead to differences in clearance between adults and children.
Q3 - Do the regulatory bodies reflect the specifics of pediatric medicines in their regulations?
The specifics of pediatric medicine are definitely reflected in the current drug regulatory environment. Generally, the pediatric legislation in the US and EU is welcomed for the guidance it provides to the pharmaceutical industry.
The underlying principles of pediatric regulatory requirements are that pharmaceutical substances and products intended for children should be manufactured to ensure that children in the target age groups will have access to consistent quality and age-appropriate formulations with an acceptable risk benefit profile.
The past 10 years have seen enormous changes in the legislation of pediatric medicine. For example, in the US, the Best Pharmaceuticals for Children Act and the Pediatric Research Equity Act, which were previously subject to reauthorization every 5 years, were made permanent under the Food and Drug Administration Safety and Innovation Act in 2012.
By making these Acts permanent, the law ensures pediatric medicine will have a permanent place on the agenda for drug research and development in the US. In the European Union, the Pediatric Regulation came into effect in 2007, and since then, no new drug can be registered in the EU without a detailed Pediatric Investigation Plan (PIP) approved by the EMA's Pediatric Committee.
Regulatory authorities have published a number of useful guidelines and recommendations. There have also been considerable revisions to these guidelines, with the authorities periodically improving regulatory tools to provide clearer guidance and to incentivize industry to conduct research and development of drugs in children.
For instance, the EMA has published a guideline for pharmaceutical development in children that came into effect in 2014 and addresses issues in route of administration and dosage form, dosing frequency, modified-release preparations, safety of excipients, and how formulations should be adapted to the needs of children.
Another example would be the 6-month public consultation launched in October 2016 by the EMA on the addendum to the current ICH E11 (guideline on clinical investigation of medicinal products in the pediatric population). The proposed addendum intends to provide clarification and the current regulatory perspective on various topics, such as age classification and pediatric subgroups, and on issues to aid scientific discussions at various stages of pediatric drug development in different regions, to mention a few.
Q4- What are the specifics for the development of pediatric medicines?
Driven by the previously explained differences between adults and children during development and growth, the following specifics need to be reflected:
  • Pediatric medication may need a different drug delivery technology compared to adult medication with the same API.
  • Not all excipients suitable for adults can be used in pediatric formulations.
  • Selected excipients should be reduced to the minimum needed
  • Minimal dosing frequency
  • Swallowability needs to be considered
  • Risk of choking needs to be considered
  • Acceptance of treatment needs to be in strong focus (influenced by age, culture, health status, behavior, social background, route of administration, taste of medication, duration of treatment, convenience of administration)
Q5 - Which oral dose form is preferred by children?
Based on a report released by the FDA in 2014, 69% of the approved product labeling for pediatric use is in the form of tablets. In general, however, preference for a dosage form differs primarily by age and prior use.
About 4 years ago, the EMA specifically recommended that the evaluation of the patient acceptability of a pediatric preparation should be an integral part of the pharmaceutical and clinical development.
Since then, we have acquired more understanding of pediatric drug development and have begun to bridge the knowledge gap on the acceptability of dosage forms, based on evidence gathered in clinical trials in children.
For example, initial findings revealed that minitablets and syrups were found to be the most acceptable formulation to toddlers and infants. Another study also demonstrated that children of 2 to 3 years old had no difficulty in swallowing multiple units of minitablets that were suspended in a fruity jelly on a spoon.
Subsequently, another clinical trial reported that minitablets were swallowed more readily by neonates than syrup. The latest data, published in November last year, reported a preference for chewable and orodispersible preparations across ages when compared with multiparticulates such as sprinkles.
There is a trend toward oral solid formulations with a focus on novel preparations, including flexible, dispersible, and multiparticulate oral solid dosage forms. This clinical evidence further confirms the paradigm shift from liquid toward small-sized solid drug formulations.
Q6 – How is the industry meeting the requirement of weight or age dependent dosing of oral dose forms?
Industry is addressing the need for easy, reliable, flexible dosing of pediatric oral dose forms by using:
  • Dosing Mini-tablets with counting device
  • Powders for reconstitution; solution to be dosed by volume (e.g. powder in a bottle, powder in a stick pack)
  • Liquids/syrups to be dosed by volume
  • Conventional solid formulations in different dosage strength (different formulations may be required)
Q7 - With all the challenges in developing a suitable formulation for pediatric use, what are the most promising technologies available today?
There is no single technology that fits perfectly for pediatric drug development. Technologies that offer options for age-appropriate formulation are desirable. Therefore, technologies that produce small oral dosage forms, such as mini-tablets or pellets, chewables, or orally dispersible tablets, stand a promising chance of improving compliance in the pediatric population.
At Catalent, we have the expertise and capability to develop pediatric-friendly softgels that are small and easy to swallow. For example, the OptiGel™ Mini Technology can produce softgels 30% smaller than traditional ones, in various shapes, for ease of use in children.
In addition, to address formulations that require dose titration, Catalent's OptiGel™ Micro Technology can produce spherical capsules as small as 1 mm in diameter. These micro-capsules can then be packaged into sachets to accommodate different dosing levels. The Zydis® Orally Disintegrating Tablet (ODT) technology offers orally disintegrating tablets suitable for children and infants.
Given the trends from recent clinical trials that provide some evidence toward a rationale for solid dosage form design for pediatric use, the technologies we have at Catalent offer the opportunity to develop and deliver an optimal dosage form that addresses issues such as adherence and acceptability in the pediatric population.

Friday, December 27, 2019

Determining the Dose in Clinical Trials: ED50, NOAEL, MTD, MRSD, RP2D, MinED

After a new compound is discovered, a series of pre-clinical and clinical studies needs to be conducted to identify the suitable dose for clinical trials. Rigorous steps are taken to ensure that a safe dose is used in clinical trials. There are multiple FDA guidelines (some adopted from ICH guidelines) discussing dose selection issues.
Drug development is a stepwise approach: from pre-clinical testing (to identify the ED50 and NOAEL), to first-in-human clinical trials (to identify the MTD and RP2D), to phase 2 dose-response or dose-ranging studies (to identify the MinED). Here are the various terms used to describe these doses.

NOAEL (No Observed Adverse Effect Level): the highest dose tested in animal species that does not produce a significant increase in adverse effects compared to the control group.

ED50 (Median Effective Dose): a pharmacological term for the dose or amount of drug that produces a therapeutic response or desired effect in 50% of the subjects taking it. In a more extreme case, the term LD50 is used for the median lethal dose.

MRSD (Maximum Recommended Starting Dose): the recommended starting dose for first-in-human clinical trials of new molecular entities in adult healthy volunteers.

HED (Human Equivalent Dose): a dose in humans converted from the NOAEL identified in animal studies by applying a conversion factor or scaling factor. See FDA Guidance for Industry (2005) Estimating the Maximum Safe Starting Dose in Initial Clinical Trials for Therapeutics in Adult Healthy Volunteers for details on how to calculate the HED from NOAEL.
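
The NOAEL-to-HED conversion can be sketched numerically. The snippet below is a minimal illustration based on the standard body-surface-area conversion factors in Table 1 of the 2005 FDA guidance (divide the animal NOAEL in mg/kg by the species factor) and the default 10-fold safety factor used to derive the MRSD; the function names are my own, and a real submission would follow the guidance itself, not this sketch.

```python
# Sketch of the HED and MRSD calculations per the FDA (2005) guidance
# "Estimating the Maximum Safe Starting Dose in Initial Clinical Trials
# for Therapeutics in Adult Healthy Volunteers" -- illustrative only.

# Body-surface-area conversion factors from Table 1 of the guidance:
# HED (mg/kg) = animal NOAEL (mg/kg) / factor.
BSA_CONVERSION_FACTOR = {
    "mouse": 12.3,
    "rat": 6.2,
    "rabbit": 3.1,
    "dog": 1.8,
    "monkey": 3.1,
}

def hed_mg_per_kg(noael_mg_per_kg: float, species: str) -> float:
    """Convert an animal NOAEL (mg/kg) to the human equivalent dose."""
    return noael_mg_per_kg / BSA_CONVERSION_FACTOR[species]

def mrsd_mg_per_kg(noael_mg_per_kg: float, species: str,
                   safety_factor: float = 10.0) -> float:
    """MRSD = HED from the most sensitive species / safety factor (default 10)."""
    return hed_mg_per_kg(noael_mg_per_kg, species) / safety_factor
```

For example, a rat NOAEL of 50 mg/kg converts to an HED of about 8.1 mg/kg, and with the default 10-fold safety factor, an MRSD of about 0.81 mg/kg.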

MTD (Maximum Tolerated Dose, or Maximum Tolerable Dose): usually identified through a first-in-human trial designed as a dose-escalation study (such as the 3+3 design commonly used in oncology studies). The study includes multiple ascending doses with a predefined algorithm for dose increases to the next cohort. The study stops at a cohort whose dose-limiting toxicity (DLT) rate crosses the pre-specified boundary; the MTD is then the dose one level below.
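
The classic 3+3 escalation rule can be sketched in a few lines. This is a simplified simulation with hypothetical dose levels and toxicity rates; real 3+3 designs add rules for de-escalation and intermediate cohorts.

```python
import random

def three_plus_three(dose_levels, true_tox, seed=0):
    """Minimal simulation of the classic 3+3 dose-escalation rule.

    Escalate after 0/3 DLTs; expand the cohort to 6 after 1/3;
    stop once >=2 DLTs are seen at a level, and declare the MTD
    to be one dose level below (None if even the lowest dose fails).
    """
    rng = random.Random(seed)
    mtd = None
    for i, dose in enumerate(dose_levels):
        # Treat 3 patients; each experiences a DLT with probability true_tox[i].
        dlts = sum(rng.random() < true_tox[i] for _ in range(3))
        if dlts == 1:  # expand cohort to 6 patients
            dlts += sum(rng.random() < true_tox[i] for _ in range(3))
        if dlts >= 2:  # DLT boundary crossed: MTD is one level below
            return dose_levels[i - 1] if i > 0 else None
        mtd = dose  # this dose tolerated; try the next level
    return mtd
```

With deterministic toxicity rates of 0, 0, and 1 across three dose levels of 10, 20, and 40, the rule stops at the third level and returns 20 as the MTD.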

RP2D (Recommended Phase II Dose): this term is specifically used in oncology clinical trials. The primary goal of phase I cancer clinical trials is to identify the dose to recommend for further evaluation (the recommended phase II dose [RP2D]), while exposing as few patients as possible to potentially sub-therapeutic or intolerable doses. In oncology, the RP2D is usually the highest dose with acceptable toxicity, often defined as the dose level producing around 20% dose-limiting toxicity. In North America, the MTD is the RP2D, whereas in the rest of the world, the MTD is considered the dose level above the RP2D.

Minimum Effective Dose (MinED, or simply MED) and Maximum Useful Dose: concepts derived from dose-response studies with placebo control, based on the efficacy signal (instead of safety/DLT). The terms are mentioned in the FDA guidance "Dose-Response Information to Support Drug Registration": "... the concepts of minimum effective dose and maximum useful dose do not adequately account for individual differences and do not allow a comparison, at various doses, of both beneficial and undesirable effects. Any given dose provides a mixture of desirable and undesirable effects, with no single dose necessarily optimal for all patients."

According to FDA's Good Review Practice: Clinical Review of Investigational New Drug Applications:
"Inclusion of a placebo group in a dose-response trial can provide critical information in interpreting trials in which all doses tested resulted in indistinguishable outcomes, usually because the doses are all above the minimum effective dose (on the plateau) or because the doses are too close together. Without the presence of a placebo group, it may be impossible to tell whether any of the doses were effective at all in the trial. In such a case, the trial provides no evidence of effectiveness and no useful dose-response information. With a placebo group, the trial can provide evidence of effectiveness and, if efficacy is seen, may be able to identify where on the dose-response curve the examined doses fall."
MFD (Maximum Feasible Dose): a concept from ICH M3(R2) "Nonclinical Safety Studies for the Conduct of Human Clinical Trials and Marketing Authorization for Pharmaceuticals" derived from animal studies.

MSD (Maximum Safe Dose): an approved maximum prescribing dose restricted by the product label; in other words, the highest approved dose.



Thursday, December 26, 2019

Basket Trial: challenges and disadvantages


With the advances in biomarker identification and precision medicine, biomarker-based clinical trial designs have become a new trend. In 2017, the FDA approved Merck's Keytruda (pembrolizumab) as the first cancer treatment for any solid tumor with a specific genetic feature (for the treatment of adult and pediatric patients with unresectable or metastatic solid tumors identified as having a biomarker referred to as microsatellite instability-high (MSI-H) or mismatch repair deficient (dMMR)).

A special class of biomarker-based designs is the "master protocol", which includes basket trials, umbrella trials, and platform trials.

  • Master Protocol: An over-arching protocol or trial mechanism comprised of several parallel sub-trials differing by molecular features or other objectives.
  • Basket Trial: A master protocol where each sub-trial enrolls multiple tumor types ("the basket"). According to the NCI definition, a basket trial is "A type of clinical trial that tests how well a new drug or other substance works in patients who have different types of cancer that all have the same mutation or biomarker. In basket trials, patients all receive the same treatment that targets the specific mutation or biomarker found in their cancer. Basket trials may allow new drugs to be tested and approved more quickly than traditional clinical trials. Basket trials may also be useful for studying rare cancers and cancers with rare genetic changes. Also called bucket trial."
  • Umbrella Trial: A master protocol where all patients (and all sub-trials) share a common tumor type ("the umbrella"). According to the NCI definition, an umbrella trial is "A type of clinical trial that tests how well new drugs or other substances work in patients who have the same type of cancer but different gene mutations (changes) or biomarkers. In umbrella trials, patients receive treatment based on the specific mutation or biomarker found in their cancer. The drugs being tested may change during the trial, as new targets and drugs are found. Umbrella trials may allow new drugs to be tested and approved more quickly than traditional clinical trials."
  • Platform Trial: A master protocol where sub-trials may be added or removed in an operationally seamless way. I-SPY trials are good examples of platform trials.

In industry, especially among biotechnology companies, it is not easy to implement an umbrella or platform trial without collaborating with other sponsors or partners. The basket trial design is the one that may be more practically implemented.

While the application of master protocols and basket trials is mainly limited to oncology trials, there have been some discussions in other areas, such as clinical trials in arthritis and rare diseases.

The popularity of the master protocol and basket trial reminds me of the adaptive design about 15 years ago. The advantages of the new trial designs were over-emphasized and the limitations/disadvantages were de-emphasized. At one time, I had to explain to our senior management why each of our clinical trials was not a good candidate for adopting the adaptive design. Now, if we work in the oncology area, we may face a similar situation and may be asked why a master protocol or basket trial design is not used.

For a clinical development program for drugs/biologicals in the oncology area, we will need to evaluate the genetic biomarkers and consider the basket trial design after fully evaluating the pros and cons of using such a design. 

Recently, there have been a lot of discussions about the advantages and disadvantages of the basket trial design.

In a paper by Renfro and Sargent "Statistical controversies in clinical research: basket trials, umbrella trials, and other master protocols: a review and examples", the advantages and disadvantages of the basket trial were discussed.
Advantages of basket trials
Basket trials have several advantages. First, they can provide access to molecularly targeted agents for patients across a broad range of tumor types, potentially including those not otherwise studied in clinical trials of targeted therapies. Secondly, in many cases, molecular testing is carried out locally and confirmation by a central assay is not required before patient enrollment, though tumor and plasma are often banked for subsequent companion diagnostic testing and validation. This feature reduces the time between initial diagnosis and/or eligibility confirmation and later cohort assignment and initiation of treatment. Thirdly, cohorts within basket trials are often small and utilize single-stage or two-stage designs, which yield quick results, given sufficient accrual.
Limitations of basket trials
One major limitation of basket trials is the assumption that molecular profiling may be sufficient to replace histological tumor typing, as, in some cases, histological tumor type has been found to predict response to treatment more strongly than the biomarkers or mutations comprising the studied cohorts. Even outside the context of a basket trial, it was recognized that V600E BRAF-mutant melanoma or hairy cell leukemia are responsive to BRAF inhibition, while colon tumors with the same BRAF mutation are not. This issue may be anticipated, as it is well accepted that the environment and location in which a tumor develops may impact its mutational profile as well as differentially predict treatment response across similar profiles. To this end, many have noted that current clinical evidence is insufficient to conclude that molecular descriptors should replace histological tumor typing, and it has been suggested that future studies integrate anatomic with mutational and functional molecular profiling through the use of proteomic technologies and explore multi-gene signatures with combination therapies.

In an ASA webinar "Basket Trials: Features, Examples, and Challenges" by Lindsay Renfro, the advantages and disadvantages were listed as the following: 
Basket Trials: Advantages: 
  • Operational efficiencies compared to designing and conducting individual targeted trials without shared infrastructure
  • Relatively small sample size per sub-study
  • Increased "hit rate" by enrolling patients with rare molecular features across tumor types
  • Array of novel therapeutics offered to a broader group of patients who may benefit
Basket Trials: Disadvantages
  • Prognostic heterogeneity across tumor types
  • Single arm sub-studies generally require a tumor response rate endpoint (with a high bar)
  • Challenging to define historical controls across diseases. For this reason, time-to-event endpoints (though often relevant) usually not primary
  • Practical challenges with screening may arise
In a recent webinar "Trial Design Concept for a Confirmatory Basket Trial - Dr. Robert Beckman", Dr Bob Beckman discussed the features of the basket trial and then listed 13 challenges (or disadvantages) of the basket trial: 

Features of the Basket Design:
  • Tumor histologies are grouped together, each with their own control group (shared control group if common SOC)
  • Randomized control is preferred. Single-arm cohorts with registry controls may be permitted in exceptional circumstances, as illustrated by Imatinib B225 and others
  • In an example of particular interest, each indication cohort (each sub-study) is sized for accelerated approval based on a surrogate endpoint such as progression-free survival (PFS) - this may typically be 25-30% of the size of a phase 3 study
  • In another approach, an interim evaluation of partial information on the definitive endpoint may be used
  • Initial indications are carefully selected as one bad indication can spoil the entire pooled result
  • Indications are further "pruned" if unlikely to succeed, based on 1) external data (maturing definitive endpoint from phase 2; other data from class); 2) internal data on surrogate endpoint OR partial information on definitive endpoint
  • Sample size of remaining indications may be adjusted based on pruning
  • Type I error threshold will be adjusted to control type I error (false positive rate) in the face of pruning. Pruning based on external data does not incur a statistical penalty.
  • Study is positive if pooled analysis of remaining indications is positive for the primary definitive endpoint. 1) remaining indications are eligible for full approval in the event of a positive study; 2) full pooling chosen for simplicity; 3) Some of the remaining indications may not be approved if they do not show a trend for positive risk benefit as judged by definitive endpoint. 

Challenges of the Basket Design

Challenge #1: Having a Control Group
In some settings, a control group is not ethical
Resolution: a randomized trial should be applied if and only if:
  • There is a clinical equipoise between the two randomized arms
  • Experimental agent is not expected to be transformational, only beneficial
  • There is a standard of care (SOC) for control:   Example: steroids +/- rituximab for refractory autoimmune diseases
  • Current generation of non-randomized basket studies for transformational agents provides SOC baseline for future randomized trials
Challenge #2: Risks of Pooling
One or more failing indications can lead to a failed study for all indications in a basket
Histology can affect the validity of a molecular predictive hypothesis, in ways which cannot always be predicted in advance
Vemurafenib is effective for BRAF V600E mutant melanoma, but not for analogous colorectal cancer (CRC) tumors
This was not predicted in advance but subsequently feedback loops leading to resistance were characterized
Basket trials are recommended primarily after there has been a lead indication approved (by optimized conventional methods) which has validated the drug, the predictive biomarker hypothesis, and the companion diagnostic.               -    Example, melanoma was lead indication preceding Brookings trial proposal in V600E mutant tumors
Indications should be carefully selected
Indications should be pruned in several steps before pooling

Challenge 3: Different Indications May Have Different Endpoints
Less of an issue for oncology
Even in auto-immune diseases, generalized interim endpoints can be created across diverse diseases:
Interim: improvement (response)
Final: time to worsening

Challenge 4: Timescales of endpoint development may differ
Resolution:
What matters is relative improvement
If necessary, TTE data may be normalized to the medians on control arms of the different indications
Study completes when data is mature on all arms

Challenge 5: SOC may differ between arms
Resolution:
What matters is relative improvement in a redefined disease entity based on a molecular biomarker
Safety must be assessed both as an individual analysis relative to individual control and as a pooled analysis relative to pooled control
Safety data should be available from reference indications and from phase 2 studies

Challenge 6: Threshold for Approval May Differ Between Arms
Resolution: study is judged by pooled result of relative improvement with statistical and clinical significance
Thresholds for such criteria are well known

Challenge 7: Clinical validity of the predictive biomarker hypothesis
The clinical validity of the predictive biomarker can only be verified by inclusion of “biomarker negative” patients in the confirmatory study
Addressing the challenge
Recommend a smaller pooled, stratified cohort for biomarker negative patients, powered on surrogate endpoint
Would need to expand the biomarker negative cohort (to evaluate definitive endpoint) if surrogate endpoint shows possible benefit
Prior evidence should permit this if
An approved lead indication has already provided clinical evidence for the predictive biomarker hypothesis
Prior phase 2 studies support the predictive biomarker hypothesis in other indications

Challenge 8: Adjusting for Pruning
Pruning indications that are doing poorly on surrogate endpoints may be seen as cherry picking
This can inflate the false positive rate, an effect termed “random high bias”
Addressing the challenge:
Emphasize use of external data, especially maturing Phase 2 studies, for pruning
Pruning with external data does not incur a penalty for random high bias
Applying statistical penalty for control of type I error when applying pruning using internal data
Methods for calculating the penalty are described in stat methods papers
Rules for applying penalty must be prospective
Penalty is not large enough to offset advantages of design

Challenge 9: Strong Control of FWER
This problem is still open
Challenge:
One or more strongly positive indications can drive an overall pooled positive result and negative indications are carried along
Simulation involves a large number of scenarios, and the degree to which the active indications are active affects the results.
A recent MSKCC study simulated a popular Bayesian basket trial design (using a Bayesian hierarchical model) and found FWER of up to 57%.
Authors advocate characterization of FWER by simulation
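
The pooling problem behind Challenge 9 can be illustrated with a toy Monte Carlo (my own sketch, not the MSKCC simulation or the Bayesian hierarchical design it studied): when the primary analysis pools baskets, a single strongly active basket can push the pooled test over the bar, and the inactive baskets are "carried along." All response rates, sample sizes, and the historical null below are hypothetical.

```python
# Toy simulation: a pooled response-rate test across baskets can be
# declared "positive" even when most baskets are inactive.
import random
from statistics import NormalDist

def pooled_test(rates, n_per_basket=25, null_rate=0.2, alpha=0.05, rng=None):
    """One-sided pooled z-test of the overall response rate vs a
    hypothetical historical null rate."""
    rng = rng or random.Random()
    n = n_per_basket * len(rates)
    responses = sum(sum(rng.random() < p for _ in range(n_per_basket))
                    for p in rates)
    phat = responses / n
    se = (null_rate * (1 - null_rate) / n) ** 0.5
    z = (phat - null_rate) / se
    return z > NormalDist().inv_cdf(1 - alpha)

rng = random.Random(1)
# Basket 1 is highly active (60% response); baskets 2-4 sit at the 20% null.
hits = sum(pooled_test([0.6, 0.2, 0.2, 0.2], rng=rng) for _ in range(2000))
print(f"Pooled analysis 'positive' in {hits / 2000:.0%} of simulated trials, "
      "even though 3 of 4 baskets are inactive")
```

Under these hypothetical numbers the pooled analysis succeeds in well over half of the simulated trials, so the three null baskets would be "carried along" to a positive overall result, which is exactly why per-indication trends and simulation-based FWER characterization are advocated.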

Challenge 10: Availability of tissue
Tissue sampling and processing are variables that can greatly affect the outcome of a study based on a predictive biomarker
Basket studies will require cooperation and uniformity across departments organized by histology
Addressing the challenge:
The sponsor must have extensive contact with the pathology department and relevant clinical departments at all investigative sites and provide standard methods for tissue sampling, handling, and processing
The sponsor should engage an expert pathologist who is dedicated to training prior to trial start, and troubleshooting during the trial

Challenge 11: High Screen Failure Rate
Pro: patients will have access to tailored therapy
Con: patient has a high risk of being a screen failure if biomarker positive subgroup is low prevalence
Addressing the challenge:
Study should provide a broad-based test, such as NGS, which will give the patient some guidance on alternative therapies if they screen-fail for the basket study

Challenge 12: Interim endpoints may not predict definitive endpoints
Addressing the challenge:
Prefilter indications based on maturing definitive endpoint data from phase 2 or other external data
Require consistent trend in definitive endpoints for final full approval

Challenge 14: The Standoff
Health authorities “understandably” won’t commit until given a real example to consider
Sponsors “understandably” cautious about being first to innovate in confirmatory space
Resolution:
FDA, under PDUFA VI pilot program, will be engaging with selected sponsors to bring forward complex innovative designs
We must take this risk for our patients.


Friday, November 22, 2019

CAR T-Cell Therapies: Current Limitations & Future Opportunities

CAR-T therapy is a hot area right now. Pharmaceutical and biotechnology companies are all trying to jump into the CAR-T field. Just by typing 'CAR-T' in the search field on clinicaltrials.gov, we can see a list of more than 800 clinical trials in the CAR-T area; the majority are conducted in the US or in China.
Now that two CAR-T drugs have been approved by the FDA, we can look at the FDA review documents to see how the clinical trials in CAR-T were designed and what kinds of issues the regulatory agencies had.
The clinical trial results for KYMRIAH and YESCARTA have already been published in the New England Journal of Medicine.
I read a very good article about CAR-T cell therapies in Life Science Leader and can't let it go without citing it here:

By Anamika Ghosh, Ph.D., and Dana Gheorghe, Ph.D., DRG Oncology
A novel and exciting approach to cancer treatment, CAR T cell therapies bring forth a new paradigm in cancer immunotherapy, wherein a patient’s own T cells are bioengineered to express chimeric antigen receptors (CARs) that identify, attach to, and subsequently kill tumor cells. 

Novartis’ Kymriah, the first ever such therapy to receive regulatory approval for the treatment of B-cell acute lymphoblastic leukemia (ALL), a hematological malignancy, entered the U.S. market in August 2017 and was followed in October 2017 by Gilead/Kite Pharma’s Yescarta — also a CAR T cell therapy — targeting diffuse large B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL), subtypes of non-Hodgkin’s lymphoma (NHL). Kymriah was subsequently granted an FDA label expansion to include its use in patients with DLBCL in May 2018. Geographic expansion soon followed, with Kymriah receiving marketing authorization from the EU in August 2018 and from Japan’s MHLW in March 2019 for treatment of B-cell ALL and DLBCL and Yescarta receiving EU approval in August 2019 for treatment of DLBCL and PMBCL.
The landmark approvals and clinical success of Kymriah and Yescarta opened new and encouraging avenues for developers of cellular immunotherapies. Research in the field of CAR T cells has progressed rapidly, and novel technologies to address areas left unaddressed by Kymriah and Yescarta have started streaming into the research arena.
This article aims to focus on the barriers to widespread commercial adoption of the currently available CAR T cell therapies and how these weaknesses are presenting opportunities for developers of the next generation of CAR T cells.

Limitations Directly Affecting Patients
Life-Threatening Adverse Events
Close patient monitoring is a crucial part of the treatment protocol for both Kymriah and Yescarta, as the therapies are associated with high-risk side effects such as cytokine release syndrome (CRS) and CAR T-related encephalopathy syndrome (CRES). CRS, a type of systemic inflammatory response, is typically characterized by high fever, lower-than-normal blood pressure, and difficulty breathing. CRES, a toxic encephalopathic state, often manifests with symptoms of confusion and delirium, seizures, and cerebral edema. Administration of CAR T cells must be followed by strict adherence to patient safety protocols to ensure that proper measures are taken to immediately manage these high-risk side effects.
Wait During Vein-To-Vein Time
The manufacturing process of autologous CAR T cells requires leukapheresis, followed by extraction of patients’ T cells, transportation to the manufacturing facility, genetic engineering to incorporate CARs, and transportation of the finished product back to the treatment center. The highly personalized therapy is then administered to the patient. The period in between, referred to as vein-to-vein time, ranges between three and four weeks for both Yescarta and Kymriah. This period can be daunting for the patients awaiting treatment and renders these CAR T cells unsuitable for patients with rapidly progressing disease.
Treatment Is Restricted To Heavily Pretreated Patients
Patients must have progressed on at least two lines of systemic therapies to be eligible for Kymriah or Yescarta treatment. Heavily pretreated patients can be weakened by progressing disease and prior therapies and thus be unable to withstand the severe side effects of CAR T cells. Thus, the eligible patient pool to qualify for these therapies gets further limited to heavily pretreated patients with good performance status.
Limitations Directly Affecting Healthcare Practitioners
Complex Patient Referral Pathway
Because of the complex nature of the therapy and its associated high-risk side effects, access to CAR T cells is highly regulated, being available only at certified centers. Primary care oncologists must refer eligible patients to CAR T cell therapy specialists, a process that hinders the widespread adoption of CAR T cell therapy. To offset this complexity, Gilead is now training its oncology representatives to inform physicians about CAR T cells, encourage identification of Yescarta-eligible patients, and help them with patient referrals.
Accreditation Of CAR T-Cells Specialty Centers And Training Of Hospital Staff
The FDA mandates CAR T cells be available only through a restricted and regulated program, in certified centers and administered by trained healthcare providers (HCPs) who adhere to risk evaluation and mitigation strategies (REMS) guidelines. Training of HCPs is a mandatory step toward getting a center certified as a CAR T cell specialist center. The long training process and the increasing demand for CAR T cells, however, are increasing patient waiting lists as new centers await certification.
Lack Of Clarity In Placement Of CAR T Cells In Treatment Practice
Novel drug classes with limited clinical data, such as CAR T cells, require research to ascertain some practical aspects of patient treatment in the commercial setting. Some physicians are skeptical about prescribing CAR T cells, as they are unsure about this therapy’s place in the treatment algorithm and its impact on further lines of therapy.

Limitations Associated With Complicated Manufacturing Process
Failure In Production
Because the therapy is highly personalized, the complex, multistep process of generating autologous CAR T cells carries an elevated risk of production failure, an event that delays and, in some instances, even denies patients access to the therapy.
Commercial Scalability Challenges
Because each product represents a fresh manufacturing batch, producing autologous CAR T cells at a scale that meets commercial demand and anticipated label and geographic expansions, while maintaining product quality and clinical equivalence, remains a challenge.
Limitations Due To Exceptionally High Therapy Cost And Complicated Payer Policies
In the United States, CMS recently raised reimbursement of the total cost of CAR T cell therapies from 50 percent to 65 percent, effective in 2020. Treating physicians, however, maintain that given the extremely high cost of the therapy itself (between $373,000 and $475,000 per infusion) and of patient management (which can reach, and sometimes exceed, $0.5 million), the remaining reimbursement gap is unsustainable and a major impediment to patient access. Novartis offers outcomes-based pricing for Kymriah (only for the treatment of B-cell ALL), an agreement that ties payment to the therapy’s clinical success; however, this arrangement does not cover the hospital expenses associated with the therapy. While access and reimbursement policies are being ironed out, the queue of patients waiting for insurance clearance continues to grow.
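To put the gap in concrete terms, the figures cited above imply roughly the following back-of-the-envelope arithmetic. This is a simplified sketch: the actual CMS calculation involves DRG payments and technology add-ons, and the flat 65 percent rate and the cost figures are taken at face value from this article.

```python
def reimbursement_gap(therapy_cost, management_cost, reimbursement_rate=0.65):
    """Unreimbursed share of total treatment cost, assuming a flat
    percentage reimbursement applied to the combined cost."""
    total = therapy_cost + management_cost
    return total * (1 - reimbursement_rate)

# Upper-end scenario from the figures above: a $475,000 infusion plus
# $500,000 of patient management, reimbursed at the 2020 rate of 65%.
gap = reimbursement_gap(475_000, 500_000)
print(f"Unreimbursed amount: ${gap:,.0f}")  # roughly $341,000
```

On these illustrative assumptions, providers would be left covering roughly a third of the total cost per patient, which is consistent with physicians’ view that the gap is unsustainable.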

Opportunities & Developments
Despite the challenges listed above, the overall attitude toward CAR T cells is decidedly positive. Investors are convinced that CAR T cells represent a revolutionary cancer treatment. While physicians indicate that the safety issues associated with CAR T cell therapy are a major concern requiring an urgent solution, research is already underway to address the pain points of the currently available CAR T cells. Some noteworthy concepts and developments are discussed below.
Improving Safety
Advanced Safety Mechanisms
Because CAR T cells are “live” drugs, many of their safety issues are attributed to the difficulty of controlling the cells’ proliferation and activation, which can produce the symptoms of an immune system in overdrive. Various companies are employing novel techniques to address this problem. Researchers are working on tunable CAR T cells whose proliferation, concentration, activation, and elimination can be regulated with an inducer agent. For example, Juno Therapeutics’ lisocabtagene maraleucel contains a truncated form of the epidermal growth factor receptor (EGFR), EGFRt, that enables rapid elimination of these CAR T cells using cetuximab, an EGFR inhibitor. Bellicum Pharma’s CAR T cell candidate, BPX-601, employs an inducible MyD88/CD40 activation switch; the therapeutic effect and level of activation of BPX-601 can be modulated by regulating the concentration of a small-molecule inducer, rimiducid. Similarly, Autolus’ AUTO-2 and AUTO-4 can be turned off by administering the monoclonal antibody rituximab. Autolus is also developing next-generation CAR T cells for solid tumors that incorporate a suicide cassette called rapaCasp9, controlled by rapamycin, a compound with better tissue penetration and a faster effect than rituximab.
Improving On-Target/Off-Tumor Targeting And Overcoming Risk Of Resistance Due To Antigen Loss
Tumor plasticity leading to loss or modulation of antigen is one of the primary tumor escape mechanisms that results in the development of resistance to antineoplastic therapies. To overcome this risk, researchers are developing bi-specific [e.g., Autolus’ AUTO-2 (TACI/BCMA-specific), AUTO-3 (CD19/CD22-specific)] and multi-targeted [e.g., Celyad’s CYAD-01 (NKG2D receptor-specific)] CAR T cells. It is expected that such multi-targeted CAR T cells will have better on-target/off-tumor specificity and thus fewer side effects than single-targeted CAR T cells.

Expanding The Scope Of Treatment
Going Beyond CD19-Targeting
Both Yescarta and Kymriah are CD19-targeting CAR T cells, and many emerging CAR T-cell therapy developers are continuing to focus on this antigen. CD19, a target expressed mostly on B-cells, has served as an excellent target for the first generation of successful CAR T cells; however, researchers are gradually beginning to shift their focus to other tumor antigens with the aim of expanding the scope of cancer treatment beyond B-cell hematological malignancies. Some of the most advanced and noteworthy of this new wave of CAR T cells are bluebird bio’s bb2121 (BCMA-specific for multiple myeloma), Mustang Bio’s MB-102 (CD123-targeting for AML), and Juno Therapeutics’ JCAR018 (CD22-targeting for follicular lymphoma and B-cell ALL).
Treating Solid Tumors
Solid tumors are undeniably a much larger market (and hence, attractive to investors) than hematological malignancies, and launching a successful CAR T-cell therapy in a solid tumor indication represents a holy grail. Achieving success in solid tumors, however, is an enormous challenge because of target antigen heterogeneity, a general lack of specific cell-surface antigens, physical barriers (like dense stroma or obscure tumor location), and an immunosuppressive microenvironment. One approach being adopted to overcome some of these challenges is intratumoral delivery of CAR T cells [e.g., Mustang Bio’s MB-103 for glioma, Leucid Bio’s 4ab T1E28z+ T-cells for squamous cell carcinoma of the head and neck (SCCHN)]. Other researchers are focusing their efforts on well-established solid tumor antigens (such as CEA-targeting CAR T cells by Sorrento Therapeutics for metastatic liver tumors and Novartis’ mesothelin-targeting NIU-440 for various mesothelin-positive cancers). To improve tumor targeting and potency, development is also focused on multi-targeted CAR T cells, such as Aurora BioPharma’s AU-105 (HER2/CMV antigen targeting), or multifunctional CAR T cells [e.g., Celyad’s CYAD-01 (NKG2D receptor-specific) and Baylor College of Medicine’s GD2-targeted Epstein-Barr virus-specific cytotoxic T lymphocytes (CTLs)].

Off-The-Shelf CAR T Cells To Address Logistic Challenges And Waiting Periods
Most of the logistic challenges associated with the complex manufacturing process of the current generation of autologous CAR T cells will likely be addressed by allogeneic, off-the-shelf CAR T cells. Allogeneic CAR T cells are generated from healthy donor cells, which are better in both quality and quantity than cells derived from patients. These CAR T cells will be readily available, narrowing the gap between prescribing and administering the therapy, a benefit especially valuable for patients with rapidly progressive disease. Additionally, because each batch of allogeneic CAR T cells could be used to treat multiple patients, overall therapy costs would diminish and the scalability challenges would be overcome. However, anticipated safety challenges, like graft-versus-host disease (GvHD) and immune rejection, cannot be disregarded. Developers of allogeneic CAR T cells are testing various gene editing techniques to generate universal CAR T cells. For example, CRISPR Therapeutics’ CTX-110 employs the clustered regularly interspaced short palindromic repeat (CRISPR)/Cas9 multiplexed gene-editing technique to eliminate T cell receptor (TCR) and major histocompatibility complex class I (MHC-I) expression, thereby minimizing the risk of GvHD and of recognition and rejection by a patient’s own T cells. In Servier/Allogene Therapeutics’ UCART19, the TRAC and CD52 genes are disrupted, allowing administration in non-HLA (human leukocyte antigen)-matched patients.

Increasing Potency
Increasing CAR T Cells’ Persistence With Defined Cell Composition
Biological characteristics of different subsets of T cells can be exploited to attain distinct characteristics in CAR T cells. For example, Poseida Therapeutics’ P-BCMA-101 is enriched in T-stem cell memory (Tscm) cells. Tscm cells are long-lived, are multipotent, and gradually produce T-effector cells; these properties are anticipated to give CAR T cells a more durable therapeutic response than current CAR T cells, which are composed largely of short-lived T-effector cells. Baylor College of Medicine is employing NK-T cells (GD2-targeting, IL15-expressing CAR NK-T cells) that are known to co-localize with tumor-associated macrophages (TAMs) and can effectively permeate solid tumor tissues. City of Hope and NCI are collaboratively developing CAR T cells based on T-central memory (Tcm)-enriched CD8+ T cells, which are known to have better persistence and migration potential to secondary lymphoid tissues than standard T cells.
Overcoming Immunosuppressive Tumor Microenvironment
Tumors with immunosuppressive microenvironments, referred to as immunotherapy-cold tumors, present a particularly difficult challenge for immunotherapies. To address it, CAR T cell developers are devising novel mechanisms for combining CAR T cells with pro-inflammatory cytokines. One technique employed to offset the side effects of systemic cytokine administration is incorporating the cytokine gene within the CAR T-cell construct itself. An example of this approach is Juno Therapeutics’ MUC16-targeting, IL12-secreting “armored” CAR T cells, JCAR-020, currently in an early-phase trial in solid tumors. Another interesting concept, being tested by Baylor College of Medicine, is TGFβ-resistant (TGFβ being an immunosuppressive cytokine), HER2-targeting, Epstein-Barr virus (EBV)-specific cytotoxic T lymphocytes (EBV-specific CTLs).
Combination With Immune Checkpoint Inhibition To Overcome T-Cell Exhaustion
Immune checkpoints can attenuate the activity of CAR T cells and accelerate T cell exhaustion. CAR T cell developers are addressing this challenge by testing the combination of CAR T cells with immune checkpoint inhibitors (e.g., Autolus’ AUTO-3 in combination with Merck & Co.’s Keytruda), by incorporating an immune checkpoint inhibitor-secretory gene within the CAR construct (e.g., Marino Biotechnology’s PD1 shRNA-expressing iPD1-CD19-eCAR T cells), or by creating immune checkpoint-resistant CAR T cells (e.g., Innovative Cellular Therapeutics’ dominant negative PD1 CAR T cells, ICTCAR-014).

Conclusion
CAR T cells show immense potential, but they also face substantial barriers to more widespread adoption. Since their launch, sales of Yescarta and Kymriah have grown at a relatively slow pace, with hurdles such as reimbursement, patient selection and access, and manufacturing issues hindering their commercial success. These hurdles will need to be overcome in order to fully capitalize on the potential of these therapies. Nevertheless, encouraged by the clinical activity demonstrated by Kymriah and Yescarta, researchers have turned their focus to immune cells other than T cells, such as macrophages and NK cells. While researchers fine-tune cellular immunotherapies with novel concepts and technologies, the medical community is eagerly awaiting a therapy that can address all the limitations of the currently approved CAR T cells.