Case Study 1: How bioinformatics analysis and a personalised medicine approach salvaged a compound.
Scenario
A pharma start-up had invested heavily in the development of a compound using bio-simulation techniques. Based on pre-existing research, the related biomarkers were known to be present in several cancer types. The identification of drug targets based on specific genomic biomarkers of disease progression led to an immunotherapeutic compound.
The results of initial animal studies were promising and suggested that the new antibody-based drug would be more effective than the existing in-class alternative currently on the market for the treatment of colorectal cancer (CRC) and non-small cell lung cancer (NSCLC).
The company had worked with a full-service CRO to conduct phase I and II studies in humans. The phase II results struggled to define an optimal effective dose, and a pilot phase III efficacy study using a crossover design had not been able to establish efficacy/equivalence in a small patient sample. The company was considering abandoning the compound but came to us for further advice.
Solution
Upon reviewing the patient data, it was noted that roughly half of the patients in the clinical trial responded positively to the treatment, while the other patients did not show a sufficient response. The patients who did not respond also seemed to have worse side effects. This affected the overall efficacy of the therapeutic, as determined by the study, as well as its side-effect profile.
Our biostatistics team analysed the clinical data of patients who showed a clinically meaningful response to the therapeutic against those who did not, to see whether there were any differences between the two groups. After comparing demographics, baseline disease, and other characteristics, there did not appear to be a compelling difference between the groups. It was noted, however, that responders were slightly more likely to have been in the group that received the novel compound at stage 2 of the study (group 2) rather than at stage 1.
Genomic data had also been collected from the patients, which we subjected to bioinformatic analysis. A few biomarkers of interest were identified, including a mutation in the KRAS pathway, a signalling pathway with a role in several key cell processes, including proliferation. One biomarker in particular was present only in patients with a limited response to the therapeutic.
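As a rough illustration of the kind of association test involved, the sketch below checks whether a candidate biomarker is enriched among non-responders using a 2x2 Fisher's exact test. The counts and group sizes are hypothetical placeholders, not data from the actual trial.

```python
# Minimal sketch: is a candidate biomarker enriched among non-responders?
# The counts below are hypothetical placeholders for illustration only.
from scipy.stats import fisher_exact

#                      [biomarker+, biomarker-]
responders     = [1, 24]   # e.g. 1 of 25 responders carries the variant
non_responders = [18, 7]   # e.g. 18 of 25 non-responders carry it

odds_ratio, p_value = fisher_exact([responders, non_responders])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
```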
The company decided to conduct a follow-up study: a biomarker-guided clinical trial comparing the treatment efficacy of the novel compound against an existing antibody-based treatment. Subjects were restricted to those the genomic analysis had established as likely responders.
A parallel design was used, for two reasons. Firstly, by restricting the patient sample to those evidenced as likely to respond, there was no longer the same level of treatment risk that had necessitated a crossover design in the previous study.
Secondly, the fact that in the crossover study most responders had been clustered in group 2, and that some non-responders in group 1 lacked the biomarker associated with non-response, hinted at the possibility of paradoxical progression in some patients taking the novel treatment. If this were the case, a crossover design would not be the optimal way to assess efficacy moving forward.
Outcome
This follow-up study showed that the new therapeutic was more effective than the standard treatment. As predicted, there was a delayed response in some patients. The company was able to focus on marketing the novel compound to patients without the biomarker, using a personalised medicine approach, and a significant financial loss was avoided compared with abandoning the compound.
Case Study 2: Use of Bayesian adaptive methods in a medical device clinical trial.
Scenario
A med-tech company had invested in the research and development of a new polymer material for coronary artery stents. While the newly proposed material was promising, the amount of quality data on its use in biological implants was limited. Therefore, extensive safety studies during R&D and clinical trials had to be carried out. At the R&D phase, biocompatibility testing data were collected in animals. The device was then tested to ensure minimal endovascular trauma, mechanical stability, and that the material was MRI-safe as well as suitable for fluoroscopic guidance to enable safe implantation by catheter. Advice was sought on the analysis of the R&D data as well as the design of an initial clinical study to compare the new stents with existing bare-metal and drug-eluting stents.
Solution
The biocompatibility data from the R&D studies were analysed, and factors including biotoxicity, haemocompatibility, and the potential for leachables were evaluated against their respective benchmarks.
The company had planned to follow these R&D stages with clinical studies. A pivotal study was designed using Bayesian adaptive methods.
A statistical analysis plan (SAP), sample size report, and randomisation schedule were produced for the study, and these were then used to inform the study protocol. A parallel-group equivalence design was chosen, based on a repeated measures ANOVA with adjustments for multiplicity. Two insertion methods were included for each stent type in the study (with vs without angiographic guidance). Endpoints of the study included the success rate of the initial procedure (e.g. resulting in improved blood flow and a widened artery lumen), the restenosis rate, and the mean time to restenosis. The Kaplan-Meier method was used to model time to adverse events, and a Cox proportional hazards frailty model adjusting for relevant covariates was used for time to restenosis.
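The sketch below shows how the time-to-event parts of such an analysis might look in Python with the lifelines package. The dataframe, its column names, and the file stent_trial.csv are hypothetical, and lifelines does not fit frailty terms directly, so the Cox model here is a simplified stand-in for the frailty model described above.

```python
# Hypothetical sketch of the time-to-event analyses, using lifelines.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("stent_trial.csv")  # hypothetical trial extract

# Kaplan-Meier curves for time to adverse event, one per stent type
km = KaplanMeierFitter()
for stent_type, grp in df.groupby("stent_type"):
    km.fit(grp["time_to_event"], event_observed=grp["adverse_event"], label=stent_type)
    km.plot_survival_function()

# Cox proportional hazards model for time to restenosis, adjusted for covariates.
# (A shared-frailty term, e.g. per centre, would need other tooling such as
# R's coxph() with frailty(); this is a simplified approximation.)
cph = CoxPHFitter()
cph.fit(
    df[["time_to_restenosis", "restenosis", "stent_type_coded",
        "insertion_method_coded", "age"]],
    duration_col="time_to_restenosis",
    event_col="restenosis",
)
cph.print_summary()
```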
The sample size for the clinical trial was planned using Bayesian meta-analysis to derive the parameters necessary for the sample size calculation. This drew on patient clinical data and published literature on the safety and efficacy of comparable stents. During this process, a hierarchical random-effects model fitted with MCMC methods was used to accommodate the heterogeneity of the included studies. The resulting informative prior distributions were used to simulate the sample size based on the study design.
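A minimal sketch of such a hierarchical random-effects meta-analysis is given below, assuming PyMC (v4+) as the MCMC engine. The study-level effect estimates and standard errors are placeholders rather than the data actually pooled.

```python
# Hypothetical random-effects meta-analysis: study effects (log odds ratios)
# share a common mean mu with between-study heterogeneity tau.
import numpy as np
import pymc as pm

y  = np.array([-0.35, -0.10, -0.25, -0.40, -0.05])  # per-study log odds ratios (placeholders)
se = np.array([ 0.20,  0.15,  0.25,  0.30,  0.18])  # per-study standard errors (placeholders)

with pm.Model() as meta:
    mu    = pm.Normal("mu", mu=0.0, sigma=1.0)                   # pooled effect
    tau   = pm.HalfNormal("tau", sigma=0.5)                      # between-study heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(y))   # study-level effects
    pm.Normal("y_obs", mu=theta, sigma=se, observed=y)           # observed estimates
    idata = pm.sample(2000, tune=1000, random_seed=1)

# The posteriors for mu and tau then feed the informative priors used in the
# sample size simulations for the pivotal trial.
print(float(idata.posterior["mu"].mean()), float(idata.posterior["tau"].mean()))
```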
An interim analysis at week 12 was planned to assess differences between the two insertion methods in terms of the success rate of the initial procedure and the restenosis rate. These data would be used to form an updated prior for the final analysis. A Bayesian sample size re-estimation was to be performed if there was a significantly lower success rate of the initial procedure, or a higher rate of restenosis, for one insertion method versus the other.
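As a simplified sketch of how interim data can update a prior, the example below applies a conjugate beta-binomial update to the procedure success rate for one insertion method. Both the prior parameters and the interim counts are hypothetical.

```python
# Conjugate beta-binomial update of the procedure success rate at the interim.
from scipy.stats import beta

a_prior, b_prior = 45, 5          # hypothetical informative prior, centred near 90% success
successes, failures = 27, 3       # hypothetical week-12 interim counts for one insertion method

# Posterior parameters; this posterior serves as the updated prior for the final analysis
a_post, b_post = a_prior + successes, b_prior + failures
posterior = beta(a_post, b_post)

print(f"posterior mean success rate: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(3)}")
```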
Following the pivotal trial, post-market surveillance studies were planned to monitor device effectiveness and long-term biocompatibility compared with bare-metal and drug-eluting stents. Outcomes monitored included the 5-year all-cause mortality rate, cardiovascular death, spontaneous MI, procedural MI, stroke, and repeat revascularisation.
The benefits of taking a Bayesian approach were:
- The incorporation of external data.
- The ability to use data from intermediate outcomes in an interim analysis and adjust the design if necessary.
- Better handling of missing endpoint data, which was important given the smaller sample size.
- Better management of, and adjustment for, multiplicity.
Outcome
The med-tech company benefited from:
- A device backed by quality evidence that made use of all available data.
- Demonstrated device safety, supporting compliance with regulatory requirements.
- Statistical support at each stage of the study, providing the best evidence at the early stages to establish device safety and to guide the intensity of future research.
- The ability to carry the Bayesian adaptive design forward into post-market device surveillance.
Case Study 3: Sample size audit for a pilot study.
Scenario
A small medtech company was looking to run a pilot comparison study. A regenerative-medicine injectable adjunct therapy was to be compared with the standard treatment alone for Achilles tendon injury. The purpose of the pilot was to gain initial data on efficacy, assess study feasibility, and estimate the parameters necessary for calculating the sample size of a full-scale clinical trial.
The researchers had decided on a non-inferiority study design with a non-inferiority margin of 15 points, which corresponded to the clinically meaningful difference in the Achilles tendon Total Rupture Score (ATRS). The intended study design had 5 endpoints and repeated measures data collected over 4 different time points post-treatment. The researchers had already decided on a sample size of 5 patients per arm based on clinical advice. The company requested an audit of the study design prior to the preparation of a statistical analysis plan and contributions to the statistical sections of the study protocol.
Audit
After auditing the study design in view of the study goals, several issues were identified:
- For the stated goals of the pilot study, it risked being underpowered due to an overly small sample size (illustrated in the sketch following this list).
- The non-inferiority margin of 15 was too wide. A non-inferiority margin should typically not be based on the clinically meaningful difference that defines sufficient improvement in the condition. A large margin in the context of a non-inferiority design would not fairly assess efficacy and could impair the accurate design of a subsequent trial.
- With such a wide margin, the planned non-inferiority design meant that unless the novel adjunct treatment was markedly inferior to the standard treatment, it would be deemed non-inferior. This situation would be exacerbated in a full-scale trial by the multiple endpoints and time points being investigated.
- Because the novel treatment is administered in addition to the standard treatment, a superiority study design was more appropriate in this case.
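The sketch below gives a rough sense of the power problem, assuming (purely for illustration) an ATRS standard deviation of 12 points: with 5 patients per arm, the 95% confidence interval for the between-arm difference is wider than the entire 15-point margin.

```python
# Rough illustration of the underpowering: width of a 95% CI for the
# between-arm difference in ATRS score with n = 5 per arm.
import numpy as np
from scipy.stats import t

n_per_arm = 5
sd = 12.0                                    # assumed ATRS standard deviation (illustration only)
se_diff = sd * np.sqrt(2 / n_per_arm)        # standard error of the difference in means
half_width = t.ppf(0.975, df=2 * (n_per_arm - 1)) * se_diff

print(f"95% CI half-width ≈ {half_width:.1f} ATRS points")  # ≈ 17.5, wider than the 15-point margin
```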
Solution
The goals of the pilot study had to be better defined, as follows:
- Estimate the SD of the primary outcome, the ATRS score, in order to accurately calculate the sample size for a full-scale clinical trial (see the sketch following this list).
- Produce a preliminary estimate of the effect size of the difference between the novel adjunct and standard treatments, both to determine whether the treatment is worth pursuing and to inform the sample size calculation for a full-scale trial.
- Estimate the proportion of eligible participants willing to sign up (participation rate).
- Estimate the compliance rate.
- Estimate the drop-out rate.
- Provide preliminary data from initial patients, which can then form part of the data of a full-scale trial.
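A minimal sketch of the first two goals is shown below: estimating the pooled SD of the ATRS score and a preliminary standardised effect size from the two pilot arms. The scores are placeholder values, not trial data.

```python
# Hypothetical pilot data: estimate the pooled SD and a preliminary effect size.
import numpy as np

atrs_adjunct  = np.array([78, 85, 90, 72, 88, 81, 76, 93, 84, 79])  # placeholder ATRS scores
atrs_standard = np.array([70, 74, 82, 68, 77, 71, 80, 66, 73, 75])  # placeholder ATRS scores

n1, n2 = len(atrs_adjunct), len(atrs_standard)
s1, s2 = atrs_adjunct.std(ddof=1), atrs_standard.std(ddof=1)

# Pooled SD: the key input to the full-scale sample size calculation
pooled_sd = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Preliminary standardised effect size (Cohen's d)
cohens_d = (atrs_adjunct.mean() - atrs_standard.mean()) / pooled_sd

print(f"pooled SD ≈ {pooled_sd:.1f} ATRS points, preliminary Cohen's d ≈ {cohens_d:.2f}")
```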
It was not necessary to conduct a formal sample size calculation for the pilot study. However, given the important role the pilot would play in informing the design of a full clinical trial, it was crucial to get its sample size into the right ballpark.
Staying with a sample size of 5 per arm, 5 endpoints, 4 time points, and an MID of 15 points was not going to yield the necessary data, and would mean two things:
- An effective treatment might not produce a sufficient effect size estimate and would risk being abandoned.
- The entire budget of the pilot study could be wasted as it would not be able to produce the data necessary to achieve the subsequent study goals.
To accurately estimate the SD that will later form the input to the sample size calculation for the full clinical trial, a pilot sample size of between 12 and 25 patients per study arm is recommended as a rule of thumb.
In this case, a sample size of 25 was chosen for the experimental arm and at least 12 for the standard comparator arm. This was appropriate given the relative ease of administering the treatments, as well as the fact that preliminary efficacy was to be assessed. A superiority margin of 13.5 points was chosen, corresponding to what was established to be the lower bound of the minimally important difference in ATRS score.
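To show how the pilot estimate would then be used, the sketch below computes a per-arm sample size for the full-scale superiority trial from a standard two-sample formula. The SD of 14 points is a placeholder for the value the pilot would actually deliver; the 13.5-point target difference is taken from the margin described above, and no multiplicity or drop-out adjustment is applied.

```python
# Sketch: per-arm sample size for the full-scale trial from the pilot SD estimate.
from scipy.stats import norm

sd_pilot = 14.0          # SD of ATRS score estimated from the pilot (placeholder value)
delta    = 13.5          # target difference in ATRS score
alpha, power = 0.05, 0.90

z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
n_per_arm = 2 * ((z_a + z_b) * sd_pilot / delta) ** 2

print(f"≈ {n_per_arm:.0f} patients per arm, before multiplicity and drop-out adjustments")
```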
While this would mean a higher up-front cost for the company, two key benefits should be noted:
- The pilot would produce accurate input data for the sample size calculation of the subsequent trial, which could then accommodate the multiplicity inherent in the intended 5 endpoints and 4 time points.
- Data collected from the pilot study could be counted as part of the sample size of a subsequent full-scale clinical trial.
Deliverables
- An Audit Report of the original study design was produced.
- A detailed Statistical Analysis Plan was compiled for the pilot study which would be replicated in the case of a full-scale clinical trial.
- Statistical sections of the protocol were prepared.
Outcome
Thankfully, the pitfalls of running a sub-optimally designed pilot study were avoided. While the up-front cost of running a pilot study on up to 50 patients rather than 10 was significantly higher, the results of such a pilot are reliable. They enable a properly designed full-scale clinical trial where that is appropriate, and the data from a properly conducted pilot can be incorporated into the full-scale trial if it does take place. In this way, resources are not wasted.
Tell us about your unique situation.
Take advantage of our Complimentary Initial Consultation.