
Mini Report: Why do clinical trials fail? 


Clinical studies are time-consuming, expensive, and frequently pose challenges for participants and sponsors alike. This article explores some of the numerous factors contributing to the failure of clinical studies and offers suggestions on how to increase the likelihood of designing and carrying out effective clinical trials.

Pharmaceutical and medical device clinical trials present several chances for failure. Failures can occur due to lack of treatment effectiveness, problems with safety, or a failure to demonstrate either of these through use of an appropriate study design, as well as through budgetary constraints.[1] Other factors include not adhering to MHRA or FDA guidelines, failure to adhere to the study protocol, or issues with patient recruitment, enrolment, and retention.[2] In order to decide whether or not a clinical trial should continue, it is crucial to produce accurate and sufficient results and insights from the data.

Briefly, some common reasons for failure at any phase include:

  1. The treatment is not effective: The treatment may not work as well as expected, or may not work for a significant portion of the study population.
  2. The treatment has unacceptable side effects: The treatment may have serious or unacceptable side effects that outweigh its potential benefits.
  3. The treatment is not better than existing treatments: The treatment may not be significantly better than existing treatments, or may not offer a clear advantage in terms of efficacy or convenience. If the study uses a non-inferiority or equivalence design this would not be a concern, so it is important to choose the optimal study design for the specific context.
  4. The study design is flawed: The study may be poorly designed or executed, leading to inaccurate or misleading results.
  5. The study population is not representative: The study population may not accurately reflect the population for which the treatment is intended, leading to results that do not generalize to the wider population or to the study population of the subsequent phase of the clinical trial.

Overall, the high failure rate of clinical trials is a reflection of the complex and uncertain nature of medical research and the difficulty of developing new treatments that are both effective and safe.

One of the most expensive stages for a therapeutic clinical trial to fail is at the phase III or phase IV stage. This is because so much investment of time, money and resources has already taken place across the previous clinical trial phases, as well as at the R&D and pre-clinical stages. Unfortunately, one of the most common phases for a clinical trial to fail is phase III.

A 2016 review found that 46% of orphan-designated drugs were approved by regulatory agencies compared to 34% of non-orphan drugs (OR=2.3; 95% CI: 1.4-3.7); 27% of oncological agents were approved compared to 39% of agents targeted at other (non-oncological) diseases (OR=0.5; 95% CI: 0.3-0.9); and 28% of agents sponsored by small and medium-size companies were approved vs 42% for those sponsored by larger firms (OR=0.4; 95% CI: 0.3-0.7).[5]

Earlier reviews of clinical trial results have found that, on average, there is a trend towards higher response rates in phase II trials than in the subsequent phase III trials of the same therapeutics (Zia et al., Clinical Oncology, 2005). Further to this, a 2019 study found a ratio of the hazard ratios for overall survival (OS) and progression-free survival (PFS) of phase II compared to phase III trials of 0.74, indicating that the hazard ratios observed in phase II trials tended to be further from 1 (in either direction, i.e. larger apparent treatment effects) than those observed in the subsequent phase III trials.[13]

Phase III clinical trials are the final stage of testing before a potential new treatment is submitted for approval to regulatory authorities. These trials are conducted on large groups of people, usually several hundred to several thousand, to confirm the effectiveness of the treatment, monitor side effects, and compare it to existing treatments. Despite careful planning and design, phase III clinical trials can fail for a variety of reasons.

Factors associated with patient eligibility: exclusion and inclusion criteria

The inclusion and exclusion criteria of a clinical study should ideally result in a population that proportionally matches the general patient population that experiences the disease, or would benefit from the therapeutic under investigation, to the extent that this is practical from a recruitment and safety standpoint.[4] Inclusion criteria may vary across studies or across sites of a single study, and this has the potential to impact the clarity of results. Having inclusion criteria that are too specific or narrow can also reduce the pool of eligible participants, which typically translates to longer recruitment time horizons. One study found that 16% of protocol amendments are due to changes in inclusion or exclusion criteria;[5] this can lead to differences in the patient populations before and after the amendments.

Having a long list of exclusion criteria for a clinical trial can become problematic when it limits the resulting sample of patients too severely. It can artificially reduce the variance to the extent that the results of the trial are not generalisable. This creates problems at later trial phases and may have a negative impact on the chances of gaining regulatory approval. It could also prevent the sponsor from acquiring timely knowledge of patient populations that are more reflective of real-world use cases for the therapeutic under investigation.[1] In some cases this act of limitation, in phase II of clinical development in particular, has been justified as a means of reducing variability;[2] as a counter-point, there is little published evidence that universally increasing the diversity of the patient population via other criteria will inevitably increase the variability of the primary endpoint. Thus, having narrower criteria could conceivably be justified in some contexts if carefully considered.[3]

Patient factors related to recruitment and retention

A failure to enrol a sufficient number of patients is a long-standing problem: a UK study of 114 trials indicated that only 31% met enrolment goals.[1] Bottlenecks or other problems with patient recruitment can hamper the success of clinical trials. If too many companies are using the same preferred trial site this can dilute the potential subject pool, and where the target disease is rare this can pose particular problems for recruiting.

Logistical constraints may also occur where patients who are ill cannot easily travel to designated hospitals. Some companies are trying to address this problem by bringing the trial to people’s homes, although this could present further issues.[2] Some studies offer remuneration to patients to cover expenses in the hope that recruitment can be improved; however, evidence suggests that paying patients to participate does not tend to generate better recruitment, as it does little to overcome the logistical challenges.[3] Although financial incentives did not result in better recruitment, it was reported that they did increase participants’ response to questionnaires for the trial.[4]

Biological effects of the drug do not translate across species or across patient populations

One review of clinical trial failures found that out of 640 phase III trials of novel therapeutics, 57% failed due to an inability to demonstrate treatment efficacy.[7]

Failure at phase I may result from biological unsuitability of the therapeutic in humans, evidenced by the fact that successful animal studies do not translate into comparable effects in humans, largely due to inter-species biomolecular differences. For example, there may be unforeseen adverse events that make the therapeutic unsafe to consume even at an active dose. Failure at a later phase due to biological unsuitability may occur when the therapeutic does demonstrate a clinical effect against the disease but this effect does not translate into increased overall survival.

Toxicity or adverse events in the diseased population could be one explanation for this scenario. As phase I trials are typically conducted on healthy volunteers, and phase II trials on less ill patients than phase III trials, these participants may be better able to tolerate the therapeutic at a particular dose than patients weakened by disease. Alternatively, this difference in study population could mean that the determined dose is biologically active yet sub-optimal for combating the disease.

Changes in patient population between trial phases

If a successful phase II trial is conducted and supported by strong evidence, and the patient population is then changed in some way for the phase III trial, this may be another avenue to failure. This is a common occurrence, as typically less ill patients will be chosen for a phase II study when determining therapeutic dose than for a phase III study, both for safety reasons and due to the uncertainty over whether the treatment will prove sufficiently effective.

Accelerated Approvals

Certain therapeutics may receive accelerated approval to enable early patient access before phase III and IV trials have been successfully completed. It should be noted that these agents still need to complete the full clinical trials process in order to remain on the market. Therapeutics receiving accelerated approval from the FDA at phase II have unfortunately been found to have increased failure rates at subsequent phases. A 2018 study by Beaver et al. showed that 72% of the clinical studies receiving accelerated approval were single-arm studies, and almost 90% of studies receiving accelerated approval had response rate as the primary endpoint.[14] This points to an issue with study design whereby patient outcomes are too easily over-estimated. The fact that 66% of accelerated approvals were granted to oncology therapeutics may go some way to explain these choices. Given these figures and the methodological problems manifest in accelerated approvals, it is not difficult to explain the reduced approval rate observed for oncological agents compared to agents for other diseases (mentioned in a previous section of this article). In fact, single-arm studies are not an advisable study design unless, due to recruitment or other constraints, there is absolutely no alternative. Where possible, a two-armed randomised design should be prioritised.

Hacking bias

Phase II trials in which the analysis has relaxed the alpha cut-off from a p-value of 0.05 to 0.1, thus deeming a trial successful (statistically significant) based on essentially insufficient evidence, can also lead to failure in the subsequent phase. Hacking bias can also occur if sub-group analyses are conducted post hoc to salvage the results of an unpromising trial. This practice can be legitimate, but if similar sub-groups are not considered in later phase trials then any positive findings may not be replicated. This is a good example of where it might be safer to cut losses at an earlier phase rather than forcing a result, only to invest more time and budget chasing what will ultimately be considered an ineffective treatment.[16]
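The cost of unplanned sub-group slicing can be made concrete with a little arithmetic. The sub-group counts below are illustrative assumptions, not figures from the cited studies: if a neutral trial is split into k independent sub-groups each tested at alpha = 0.05, the chance of at least one spurious "significant" sub-group grows quickly.

```python
# Family-wise false-positive probability when k independent sub-group
# tests are each run at alpha = 0.05 (illustrative sketch only; real
# sub-groups overlap, so this is an idealised upper-bound intuition).
ALPHA = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - ALPHA) ** k
    print(f"{k:>2} sub-groups: P(at least one false positive) = {fwer:.2f}")
```

With ten sub-groups the chance of a purely spurious positive already exceeds 40%, which is why a sub-group finding that was not pre-specified, and is not replicated at the next phase, deserves scepticism.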

Focusing on the wrong endpoints is another mishap which could come under the umbrella of hacking bias or equally be reflective of poor study design. A common example of this is where response rate (RR) is chosen as a primary endpoint for a phase II trial in situations where response rate does not translate well to overall survival (OS) or even progression free survival (PFS), which may be the more informative outcome. This choice could lead to a therapeutic being deemed effective at phase II on the basis of RR then falling short in subsequent phases.

Optimism bias and regression to the mean

Single arm studies and patient differences aside, there are scenarios where a randomised phase II trial with adequate sample size and no real changes to the patient population in phase III is still not able to reproduce positive results. This can be explained on a statistical basis by optimism bias and regression to the mean. [16]

Clinical studies only progress to the next phase of a clinical trial if the previous phase was found to be successful. This translates to statistically significant for superiority designs, or non-significant in the case of non-inferiority and equivalence designs. For example, phase III studies only result from successful phase II studies. Phase II studies that were not statistically significant, in other words where the results were neutral or negative, are ignored in the decision to progress to the next phase. This means that false positives are potentially forming the evidence base for progression, leading to a biased expectation of success at the next phase. The result is overly optimistic expectations regarding variability, treatment delta and effect size, which may lead to underestimation of the sample size required for the subsequent trial and a generally inadequate study design to fairly evaluate the therapeutic at the next phase. It may also lead to an ineffective treatment being pursued when it should not be.
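This selection effect can be demonstrated with a short simulation. A hypothetical sketch only: the true effect size, arm size and trial count below are invented for illustration, not drawn from the cited studies. Many identical phase II trials are simulated, and the average observed effect among the "significant" trials that would progress is compared with the true effect.

```python
import random
import statistics

# Simulate optimism bias: only significant phase II results "progress",
# so the surviving effect estimates overstate the true effect.
random.seed(0)

TRUE_DELTA = 0.3                  # true standardised effect (assumption)
N_PER_ARM = 40                    # phase II patients per arm (assumption)
SE = (2 / N_PER_ARM) ** 0.5       # std. error of the mean difference
TRIALS = 20_000

# Observed mean difference for each simulated trial (unit variance per arm).
observed = [random.gauss(TRUE_DELTA, SE) for _ in range(TRIALS)]

# Two-sided z-test at alpha = 0.05: |z| > 1.96 counts as "significant".
significant = [d for d in observed if abs(d / SE) > 1.96]

mean_all = statistics.fmean(observed)
mean_sig = statistics.fmean(significant)

print(f"true effect:                {TRUE_DELTA:.2f}")
print(f"mean observed, all trials:  {mean_all:.2f}")
print(f"mean observed, significant: {mean_sig:.2f}")
```

Averaged over all simulated trials the estimate is unbiased, but conditioning on significance inflates it substantially, and that inflated estimate is exactly what feeds the phase III planning assumptions.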

There may be many reasons for a study to fail before being repeated successfully, such as an adjustment in the patient sample or other methodological changes. Regardless of whether these reasons seem justified, it is important not to abandon the unsuccessful results when calculating expectations for the study design of the subsequent phase. If an average were taken of the failures and successes at a particular phase, a more conservative and less optimistic expectation of future therapeutic performance at the subsequent phase would result, the by-product being a better designed next-phase trial.

Exaggeration Bias

A 2021 study based on the results of nearly 24,000 clinical trials observed a tendency for the observed treatment effects of clinical trials to be larger than the true treatment effects, and puts forward the idea of an exaggeration ratio. As a result of regression to the mean, the ratio of the observed treatment effect to the true treatment effect tends, on average, to demonstrate a higher observed effect than the true effect. This exaggerated expectation means that when clinical studies go on to the next phase it can be harder to replicate the results. At the 0.05 alpha level, the paper estimated a 25% chance that the observed effect is more than 3x larger than the true effect of the treatment; a 50% chance that the observed effect is more than 1.5x larger than the true effect; and a 25% chance that the observed effect more or less approximates the true value. While based on a large but limited subsection of all clinical trials, the authors suggest taking this exaggeration ratio into account and adjusting for it in any subsequent clinical study design, such as in the next phase of a study.[15]
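One way to act on this suggestion is to shrink the phase II effect estimate before powering the next trial. A hedged sketch using the standard normal-approximation sample-size formula for a two-arm comparison of means with unit variance; the phase II effect of 0.5 and the exaggeration ratio of 1.5 are illustrative assumptions, not values taken from the paper.

```python
import math

def n_per_arm(delta, z_alpha=1.96, z_beta=0.8416):
    """Patients per arm for a two-sided two-sample z-test with unit
    variance, 5% alpha and 80% power: n = 2 * ((z_alpha + z_beta) / delta)^2."""
    return math.ceil(2 * ((z_alpha + z_beta) / delta) ** 2)

observed_phase2 = 0.5        # effect seen in phase II (assumption)
exaggeration_ratio = 1.5     # assumed median exaggeration (illustrative)
shrunk = observed_phase2 / exaggeration_ratio

print("naive phase III plan:  ", n_per_arm(observed_phase2), "per arm")
print("shrunk-estimate plan:  ", n_per_arm(shrunk), "per arm")
```

Shrinking the assumed effect by the exaggeration ratio more than doubles the required sample size; that is the practical cost of planning against a realistic effect rather than re-running an underpowered design.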

Study design

A poor study design can lead to trial failure; for instance, selecting the wrong patients or the wrong endpoint, not to mention bad data, can lead to problems in the trial.[1] However, data sources can help sponsors be sure that the right patients are recruited, as well as helping to choose the correct sites and countries, to enhance the likelihood of success.

Another common cause of failure in clinical research is an inability to meet criteria that have been predetermined by the MHRA or FDA. It is also important to recognise that a sponsor is necessary to move a drug or device forward in the clinical trial process. If studies are rushed into phase III after a successful phase II, there may be a lack of time for reflection on how to address safety in phase III.[2]

Data-related biases

Problems in data collection, missing data, and attrition bias that are not sufficiently accounted for may also lead to unexpected failure, as may rater bias and unintentional un-blinding. Factors such as non-compliance with the treatment protocol, whether for site-based logistical reasons or for reasons resulting from the individual disease state, can also influence results in certain patient populations.

Financial concerns

One review found that 22% of phase III studies failed due to a lack of funding.[3] This financial burden also leads to ethical issues regarding the patients involved in the trial, as patients are under the impression that their involvement will lead to the advancement of the trial and its successful completion.[4] Underfunded trials are also likely to lack the enrolment needed to demonstrate efficacy.

Financial risks occur at all stages of product development; however, the costs associated with having to re-do or delay studies escalate the cost further. Taking steps to identify and address risks early in the development process is key. Companies that do not carefully monitor for risks sometimes do not identify problems until much further down the line, when it is difficult to address them cost-effectively.[5] Sometimes this comes from the hesitation of companies to terminate a project prematurely. In a study of 842 molecules and 637 development program failures, it was evident that the companies that took time to identify problems early on, and stopped development on an imperilled trial, had a much higher likelihood of reaching the market with their drug.[6]

Other factors that can result in trial failure include misspent funding, lack of a correctly designed study, and insufficient funding designated to the trial from the outset, which implies costing may not have been accurately calculated.[7] Patient dropout rates also affect the financial stability of clinical trials, and difficulties with treatment adherence, such as side effects or a lack of follow-ups, will also contribute to the financial impact.

Delays and unforeseen costs & challenges

With patient recruiting there are additional expenses that can be challenging to predict and highly variable. It is clear that marketing tactics like advertising can have a significant impact on a trial's capacity to make a profit.[5] Additionally, healthcare professionals can have a big impact on patient recruitment. For example, recruitment and retention may suffer if staff members are absent or appear to be absent, or if there is a regular turnover of staff members and no rapport can form between them and the patients. Building this trust and communication may result in increased participation.

All of these patient recruiting issues impact the trial, some of them resulting in significant delays. Only 6% of clinical trials are finished before the deadline, and another 80% are at least a month behind schedule.[6] These delays increase the potential for loss and have an impact on study costs as well as subsequent sales. Since costs are considerable, there is a risk of financial loss; consequently, increasing the rate of recruitment and retention will yield enormous benefits.[7]

Ethical issues increase the risk of trial failure, severely damaging the reputation of all parties involved, i.e. the pharmaceutical or medical-device company, the CRO and the associated physicians.[1] Many industry cases illustrate that a pursuit of short-term gains can rapidly turn into long-term losses.[2] The general problems with the ethics of clinical trials come from the fact that participants bear the risk and burden. Participation in a clinical trial carries an increased level of risk because of the exposure to the effects of a new treatment. These risks, however, are not “offset by a prospective clinical benefit”,[3] because the goal of the trial is not to treat trial participants but to produce generalised medical knowledge.

Take-aways from a statistical perspective

  • Avoid single-arm studies where possible, or at least consider the possible trade-offs.
  • Be careful about switching to more promising endpoints, alpha levels or sub-groups as a short-term tactic to get a study over the line.
  • Be cognisant that changing the study population from one phase to the next may alter the study outcomes, and not always in the desired direction.
  • Consider “Exaggeration Bias” and adjust for it when designing the next phase of your clinical trial.


[1] Saberwal, Gayatri. “Biobusiness in Brief: What Ails Clinical Trials?” Current Science, vol. 115, no. 9, 2018, pp. 1648–52.

[2] Pharmafile, “Clinical trials and their patient” (2016).

[3] Chris Plaford, “Why do most clinical trials fail?” (2015).

[4] Hwang T.J., Carpenter D., Lauffenburger J.C., Wang B., Franklin J.M., Kesselheim A.S. Failure of investigational drugs in late-stage clinical development and publication of trial results. JAMA Intern. Med. 2016;176:1826–1833.

[5] Tukey, John W. “Use of Many Covariates in Clinical Trials.” International Statistical Review / Revue Internationale de Statistique, vol. 59, no. 2, 1991, pp. 123–37.

[6] Worrall, John. “What Evidence in Evidence-Based Medicine?” Philosophy of Science, vol. 69, no. S3, 2002, pp. S316–30.

[7] Altman, Douglas G. “Size of Clinical Trials.” British Medical Journal (Clinical Research Edition), vol. 286, no. 6381, 1983, pp. 1842–43.

[8] Fogel DB. Factors associated with clinical trials that fail and opportunities for improving the likelihood of success: A review. Contemp Clin Trials Commun. 2018;11:156-164. Published 2018 Aug 7. doi:10.1016/j.conctc.2018.08.001

[9] Jansen, Lynn A. “The Problem with Optimism in Clinical Trials.” IRB: Ethics & Human Research, vol. 28, no. 4, Hastings Center, 2006, pp. 13–19,

[10] Chris Plaford, “Why do most clinical trials fail?” (2015).

[11] Kobak, Kenneth. “Why do clinical trials fail?” Journal of Clinical Psychopharmacology, February 2007, vol. 27, no. 1, pp. 1–5. doi:10.1097/JCP.0b013e31802eb4b7

[12] Liang et al. European Journal of Cancer 2019; 121:19-28

[13] Tap et al, JAMA 2020; 323(13):1266-1276

[14] Beaver et al. JAMA Oncology 2018; 4:849-856

[15] van Zwet et al, Significance, December 2021; 16

[16] Michiels & Wason, European Journal of Cancer 2019; 123:116
