The role of Biostatisticians, Bioinformaticians & other Data Experts in Clinical Research

As a medical researcher or a small enterprise in the life sciences industry, you are likely to encounter many experts using statistical and computational techniques to study biological, clinical and other health data. These experts come from a variety of fields, such as biostatistics, bioinformatics, biometrics, clinical data science and epidemiology. Although these fields overlap in certain ways, they differ in purpose, focus, and application. All of these areas centre on analysing and interpreting biological, clinical or public health data, but they typically do so in different ways and with different goals in mind. Understanding these differences can help you choose the most appropriate specialists for your research project and get the most out of their expertise. This article begins with a brief description of these disciplines for the sake of disambiguation, then focuses on biostatistics and bioinformatics, with a particular overview of the roles of biostatisticians and bioinformatics scientists in clinical trials.

Biostatisticians

Biostatisticians use advanced biostatistical methods to design and analyse pre-clinical experiments, clinical trials, and observational studies, predominantly in the medical and health sciences. They can also work in ecological or biological fields, which are not the focus of this article. Biostatisticians tend to work on varied data sets, including a combination of medical, public health and genetic data in the context of clinical studies. They are involved in every stage of a research project, from planning and designing the study, to collecting and analysing the data, to interpreting and communicating the results. They may also be involved in developing new statistical methods and software tools. In the UK the term “medical statistician” has been in common use over the past 40 years to describe a biostatistician, particularly one working in clinical trials, but its use is declining due to the global nature of the life sciences industry.

Bioinformaticians

Bioinformaticians use computational and statistical techniques to analyse and interpret large datasets in the life sciences. They often work with multi-omics data, such as genomics, proteomics and transcriptomics data, and use tools such as large databases, algorithms, and specialised software programs to analyse and make sense of sequencing and other data. Bioinformaticians develop analysis pipelines and fine-tune methods and tools for analysing biological data to fit the evolving needs of researchers.

Clinical data scientists

Data scientists use statistical and computational modelling methods to make predictions and extract insights from a wide range of data. Often the data is real-world big data that might not be practical to analyse using other methods. In a clinical development context, data sources could include medical records, epidemiological or public health data, prior clinical study data, or IoT and IoB sensor data. Data scientists may combine data from multiple sources and types, making sense of it using analysis pipelines, machine learning techniques, neural networks, and decision tree analysis. The better the quality of the input data, the more precise and accurate any predictive algorithms can be.

Statistical programmers

Statistical programmers help statisticians to efficiently clean and prepare data sets and mock TFLs (tables, figures, and listings) in preparation for analysis. They set up SDTM and ADaM data structures in preparation for clinical studies. Quality control of data and advanced macros for database management are also key skills.

Biometricians

Biometricians use statistical methods to analyse data related to the characteristics of living organisms. They may work on topics such as growth patterns, reproductive success, or the genetic basis of traits. Biometricians may also be involved in developing new statistical methods for analysing data in these areas. Some use the terms biostatistician and biometrician interchangeably; however, for the purposes of this article they remain distinct.

Epidemiologists

Epidemiologists study the distribution and determinants of diseases in populations. Using descriptive, analytical, or experimental techniques, such as cohort or case-control studies, they identify risk factors for diseases, evaluate the effectiveness of public health interventions, as well as track or model the spread of infectious diseases. Epidemiologists use data from laboratory testing, field studies, and publicly available health data. They can be involved in developing new public health policies and interventions to prevent or control the spread of diseases.

Clinical trials and the role of data experts

Clinical trials involve testing new treatments, interventions, or diagnostic tests in humans. These studies are an important step in the process of developing new medical therapies and understanding the effectiveness and safety of existing treatments.

Biostatisticians are crucial to the proper design and analysis of clinical trials. Before an optimal study design can be settled on, they may first need to conduct extensive meta-analyses of previous clinical studies, or generate real-world evidence (RWE) from available real-world data sets or R&D results. They may also be responsible for managing the data and ensuring its quality, as well as interpreting and communicating the results of the trial. From developing the statistical analysis plan and contributing to the study protocol, to final analysis and reporting, biostatisticians have a role to play across the project timeline.

During a clinical trial, statistical programmers may prepare data sets to CDISC standards and pre-specified study requirements, maintain the database, as well as develop and implement standard SAS code and algorithms used to describe and analyse the study data.

Bioinformaticians may be involved in the design and analysis stages of clinical trials, particularly if the trial design involves the use of large data sets such as sequencing data for multi-omics analysis. They may be responsible for managing and analysing this data, as well as developing software tools and algorithms to support the analysis.

Data scientists may be involved in designing and analysing clinical trials at the planning stage, as well as in developing new tools and methods. The knowledge gleaned from data science models can be used to improve decision-making across various contexts, including life sciences R&D and clinical trials. Applications include optimising the patient populations used in clinical trials, and feasibility analysis that simulates site performance, region, recruitment and other variables to evaluate the impact of different scenarios on project cost and timeline.

Biometricians and epidemiologists may also contribute to clinical trials, particularly if the trial is focused on a specific population or on understanding the factors that influence the incidence or severity of a disease. They may contribute to designing the study, collecting and analysing the data, or interpreting the results.

Overall, the role of these experts in clinical trials is to use their varied expertise in statistical analysis, data management, and research design to help understand the safety and effectiveness of new treatments and interventions.

The role of the biostatistician in clinical trials

Biostatisticians may be responsible for developing the study protocol, determining the sample size, producing the randomisation schedule, and selecting the appropriate statistical methods for analysing the data. They may also be responsible for managing the data and ensuring its quality, as well as interpreting and communicating the results of the trial.

SDTM data preparation

The Study Data Tabulation Model (SDTM) is a data standard used to structure and organise clinical study data in a standardised way. Depending on how a CRO is structured, biostatisticians, statistical programmers, or both will be involved in mapping the data collected in a clinical trial to the SDTM data set, which involves defining the structure and format of the data and ensuring that it is consistent with the standard. This helps to ensure that the data is organised in a way that is universally interpretable. The process involves working with the research team to ensure the appropriate variables and categories are defined, before reviewing and verifying the data to ensure that it is accurate, complete and in line with industry standards. Typically the SDTM data set will be established early, at the protocol phase, and populated later once trial data is accumulated.
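As a minimal sketch of what such mapping looks like in practice, the snippet below maps hypothetical raw data-capture records to a handful of SDTM Demographics (DM) variables. The raw field names, study identifier, and records are invented for illustration; a real study would map many more variables and follow the full CDISC implementation guide.

```python
# Hypothetical raw EDC export records (field names are illustrative only)
raw_records = [
    {"subject": "001", "site": "101", "gender": "F", "birth_date": "1980-03-14"},
    {"subject": "002", "site": "101", "gender": "M", "birth_date": "1975-11-02"},
]

def map_to_dm(record, studyid="XYZ-001"):
    """Map one raw record to a subset of SDTM DM (Demographics) variables."""
    return {
        "STUDYID": studyid,
        "DOMAIN": "DM",
        # USUBJID must be unique across the whole study: study + site + subject
        "USUBJID": f"{studyid}-{record['site']}-{record['subject']}",
        "SEX": record["gender"],          # SDTM expects controlled terms, e.g. "M"/"F"
        "BRTHDTC": record["birth_date"],  # ISO 8601 date, as SDTM requires
    }

dm = [map_to_dm(r) for r in raw_records]
print(dm[0]["USUBJID"])  # XYZ-001-101-001
```

The value of this step is that downstream reviewers and tools can rely on the same variable names and formats regardless of which sponsor or CRO produced the data.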

Creating and analysing the ADaM dataset

In clinical trials, the Analysis Data Model (ADaM) is a data set model used to structure and organise clinical trial data in a standardised way for the purpose of statistical analysis. ADaM data sets store the data that will be analysed as part of the clinical trial, and are typically created from the Study Data Tabulation Model (SDTM) data sets, which contain the raw data collected during the trial. This helps to ensure the reliability and integrity of the data, and makes it easier to analyse and interpret the results of the trial.

Biostatisticians and statistical programmers are responsible for developing ADaM data sets from the SDTM data, which involves selecting the relevant variables and organising them in a way that is appropriate for the particular statistical analyses that will be conducted. While statistical programmers may create derived variables, produce summary statistics and TFLs, and organise the data into appropriate datasets and domains, biostatisticians are responsible for conducting detailed statistical analyses of the data and interpreting the results. This may include tasks such as testing hypotheses, identifying patterns and trends in the data, and developing statistical models to understand the relationships between the data and the research questions the trial seeks to answer.

The role of biostatisticians, specifically, in developing ADaM data sets from SDTM data is to use their expertise in statistical analysis and research design to guide statistical programmers in ensuring that the data is organised, structured, and formatted in a way that is appropriate for the analyses that will be conducted, and to help understand and interpret the results of the trial.
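To illustrate the kind of derivation involved, here is a toy sketch of a BDS-style (Basic Data Structure) derivation of baseline and change-from-baseline values from SDTM-like vital signs records. The subject identifier and blood pressure values are invented; production derivations follow the full ADaM implementation guide and are independently validated.

```python
# Hypothetical SDTM-like vital signs records for one subject (illustrative values)
vs = [
    {"USUBJID": "XYZ-001-101-001", "VISITNUM": 1, "VSTESTCD": "SYSBP", "VSSTRESN": 140.0},
    {"USUBJID": "XYZ-001-101-001", "VISITNUM": 2, "VSTESTCD": "SYSBP", "VSSTRESN": 132.0},
    {"USUBJID": "XYZ-001-101-001", "VISITNUM": 3, "VSTESTCD": "SYSBP", "VSSTRESN": 128.0},
]

def derive_bds(records):
    """Derive ADaM BDS-style analysis records: baseline (BASE) and change (CHG)."""
    baseline = min(records, key=lambda r: r["VISITNUM"])["VSSTRESN"]
    out = []
    for r in records:
        out.append({
            "USUBJID": r["USUBJID"],
            "PARAMCD": r["VSTESTCD"],
            "AVAL": r["VSSTRESN"],            # analysis value
            "BASE": baseline,                 # baseline carried to every record
            "CHG": r["VSSTRESN"] - baseline,  # change from baseline
        })
    return out

adam = derive_bds(vs)
print(adam[-1]["CHG"])  # -12.0
```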

A Biostatistician’s role in study design & planning

Biostatisticians play a critical role in the design, analysis, and interpretation of clinical trials. The role of the biostatistician in a clinical trial is to use their expertise in statistical analysis and research design to help ensure that the trial is conducted in a scientifically rigorous and unbiased way, and to help understand and interpret the results of the trial. Here is a general overview of the tasks that a biostatistician might be involved in during the different stages of a clinical trial:

Clinical trial design: Biostatisticians may be involved in designing the clinical trial, including determining the study objectives, selecting the appropriate study population, and developing the study protocol. They are responsible for determining the sample size and selecting the appropriate statistical methods for analysing the data. Often in order to carry out these tasks, preparatory analysis will be necessary in the form of detailed meta-analysis or systematic review.

Sample size calculation: Biostatisticians are responsible for determining the required sample size for the clinical trial. This is an important step, as the sample size needs to be large enough to detect a statistically significant difference between the treatment and control groups, but not so large that the trial becomes unnecessarily expensive or time-consuming. Biostatisticians use statistical algorithms to determine the sample size based on the expected effect size, the desired level of precision, and the expected variability of the data. These inputs are informed by expert opinion and by simulation of data from previous comparable studies.
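The simplest version of such a calculation, the normal-approximation formula for comparing two means, can be sketched as follows. The effect size and standard deviation in the example are illustrative only; real calculations would be validated against specialist software and adjusted for dropout, multiplicity, and the chosen test.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    """
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_b = z.inv_cdf(power)          # desired power
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# e.g. detect a 5 mmHg difference in blood pressure, SD 12 mmHg, alpha 0.05, power 80%
print(n_per_group(delta=5, sigma=12))  # 91
```

Note how the required n grows with the square of the variability and shrinks with the square of the effect size, which is why realistic estimates of both from prior studies matter so much.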

Randomisation schedules: Biostatisticians develop the randomisation schedule for the clinical trial, which is a plan for assigning subjects to the treatment and control groups in a random and unbiased way. This helps to ensure that the treatment and control groups are similar in terms of their characteristics, which helps to reduce bias or control for confounding factors that might affect the results of the trial.
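One common approach, a permuted-block schedule, can be sketched as follows; the block size, seed, and arm labels are illustrative, and real schedules are generated and locked under controlled, audited procedures.

```python
import random

def block_randomisation(n_subjects, block_size=4, seed=2024):
    """Permuted-block randomisation: equal A/B allocation within each block,
    so group sizes stay balanced throughout recruitment."""
    rng = random.Random(seed)  # fixed seed so the schedule is reproducible/auditable
    schedule = []
    while len(schedule) < n_subjects:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)  # random order within the block hides the next assignment
        schedule.extend(block)
    return schedule[:n_subjects]

schedule = block_randomisation(8)
print(schedule.count("A"), schedule.count("B"))  # 4 4
```

The blocking guarantees near-equal arm sizes at any interim point, which simple coin-flip randomisation does not.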

Protocol development: Biostatisticians are involved in developing the statistical and methodological sections of the clinical trial protocol, which is a detailed plan that outlines the objectives, methods, and procedures of the study. In addition to outlining key research questions and operational procedures the protocol should include information on the study population, the interventions being tested, the outcome measures, and the data collection and analysis methods.

Data analysis: Biostatisticians are responsible for analysing the data from the clinical trial, including conducting interim analyses and making any necessary adjustments to the protocol. They play a crucial role in interpreting the results of the analysis and communicating the findings to the research team and other stakeholders.

Final analysis and reporting: Biostatisticians are responsible for conducting the final analysis of the data and preparing the final report of the clinical trial. This includes summarising the results, discussing the implications of the findings, and making recommendations for future research.

The role of the bioinformatician in biomarker-guided clinical studies

Biomarkers are biological characteristics that can be measured and used to predict the likelihood of a particular outcome, such as the response to a particular treatment. Biomarker-guided clinical trials use biomarkers as a key aspect of the study design and analysis. Where the biomarker is based on genomic sequence data, bioinformaticians may play a particularly important role in managing and analysing the data. Genomic and other omics data is often large and complex, and requires specialised software tools and algorithms to analyse and interpret. Bioinformaticians develop and implement these tools and algorithms, as well as managing and analysing the data to identify patterns and relationships relevant to the trial. They use their expertise in computational biology to help understand the relationship between multi-omics data and the outcome of the trial, and to identify potential biomarkers that can be used to guide treatment decisions.

Processing sequencing data is a key skill of bioinformaticians that involves several steps, which may vary depending on the specific goals of the analysis and the type of data being processed. Here is a general overview of the steps that a bioinformatician might take to process sequencing data:

  1. Data pre-processing: Cleaning and formatting the data so that it is ready for analysis. This may include filtering out low-quality data, correcting errors, and standardising the format of the data.
  2. Mapping: Aligning the sequenced reads to a reference genome or transcriptome in order to determine their genomic location. This can be done using specialised software tools such as Bowtie or BWA.
  3. Quality control: Checking the quality of the data and the alignment, and identifying and correcting any problems that may have occurred during the sequencing or mapping process. This may involve identifying and removing duplicate reads, or correcting errors in the data.
  4. Data analysis: Using statistical and computational techniques to identify patterns and relationships in the data, such as identifying genetic variants, analysing gene expression levels, or identifying pathways or networks that are relevant to the study.
  5. Data visualisation: Creating graphs, plots, and other visualisations to help understand and communicate the results of the analysis.
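Step 1 above, read-level quality filtering, can be sketched in a few lines; the Phred threshold and toy reads are illustrative, and production pipelines would use dedicated tools (e.g. fastp or Trimmomatic) rather than hand-rolled code.

```python
def mean_phred(qual_string, offset=33):
    """Mean Phred quality of one read, decoded from a FASTQ quality string
    (Sanger encoding: ASCII code minus 33)."""
    return sum(ord(c) - offset for c in qual_string) / len(qual_string)

def filter_reads(reads, min_quality=20):
    """Keep only reads whose mean base quality meets the threshold (step 1 above)."""
    return [(seq, qual) for seq, qual in reads if mean_phred(qual) >= min_quality]

# Toy reads as (sequence, quality string) pairs: 'I' = Phred 40, '#' = Phred 2
reads = [("ACGT", "IIII"), ("ACGT", "####")]
print(len(filter_reads(reads)))  # 1
```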

Once omics data has been analysed, the insights obtained can be used for tailoring therapeutic products to patient populations in a personalised medicine approach.

A changing role of data experts in life sciences R&D and clinical research

Due to the need for better therapies and health solutions, researchers are defining diseases at more granular levels using multi-omics insights from DNA sequencing data. This allows differentiation between patients in the biomolecular presentation of their disease, their demographic factors, and their response to treatment. As more and more of the resulting therapies reach the market, the health care industry will need to catch up in order to provide these new treatment options to patients.

Even after a product receives regulatory approval, payers can opt not to reimburse patients, so financial benefit should be demonstrated in advance where possible. Patient reported outcomes and other health outcomes are becoming important sources of data to consider in evidence generation. Evidence provided to payers should aim to demonstrate financial as well as clinical benefit of the product.

In this context, regulators are becoming aware of the need for innovation in developing new ways of collecting treatment efficacy and other data used to assess novel products for regulatory approval. The value of observational studies and real-world data sources as a supplement to clinical trial data is being acknowledged as a legitimate and sometimes necessary part of the product approval process. Large-scale digitisation now makes it easier to collect patient-centric data directly from clinical trial participants and users via devices and apps. Establishing clear evidence expectations with regulatory agencies, then collaborating with external stakeholders, data product experts, and service providers, can help build these new evidence-generation approaches.

Expert data governance and quality control is crucial to the success of any new methods to be implemented analytically. Data from different sources, such as IoT sensor data, electronic health records, sequencing data for multi-omics analysis, and other large data sets, has to be combined cautiously and with robust expert standards in place.

From biostatistics, bioinformatics, data science, and CAS, to epidemiology for public health or post-market modelling, a bespoke team of integrated data and analytics specialists is now as important to a product development project as the product itself in gaining competitiveness, and therefore success, in the marketplace. Such a team should apply a combination of established data collection methodologies, e.g. clinical trials and systematic review, and innovative methods such as machine learning models that draw upon a variety of real-world data sources, to find a balance between advancing important innovation and mitigating risk.

Sex Differences in Clinical Trial Recruiting

The following article examines several systematic reviews of sex and gender representation in clinical trial patient populations. In these studies sex ratios are assessed and evaluated by factors such as clinical trial phase, disease type under investigation, and disease burden in the population. Sex differences in the reporting of safety and efficacy outcomes are also investigated. In many cases safety and efficacy outcomes are pooled, rather than reported individually for each sex, which can be problematic when findings are generalised to the wider population. To get the dosage right for different body compositions, and to avoid unforeseen outcomes in off-label use or when a novel therapeutic first reaches the market, it is important to report sex differences in clinical trials. Due to the unique nuances of disease types and clinical trial phases, a 50-50 ratio of males to females is not always ideal, or even appropriate, in every clinical study design. Having the right sex balance in your clinical trial population will improve the efficiency and cost-effectiveness of your study. Based upon the collective findings, a set of principles is put forth to guide the researcher in determining the appropriate sex ratio for their clinical trial design.

Sex difference by clinical trial phase

  • variation in sex enrolment ratios for clinical trial phases
  • females less likely to participate in early phases, due to increased risk of adverse events
  • under-representation of women in phase III when looking at disease prevalence

It has been argued that female representation in clinical trials is lacking, despite recent efforts to mitigate the gap. US data from 2000-2020 suggests that trial phase has the greatest variation in enrolment when compared to other factors, with median female enrolment being 42.9%, 44.8%, 51.7%, and 51.1% for phases I, I/II to II, II/III to III, and IV4. This shows that median female enrolment gradually increases as trials progress, with the difference in female enrolment between the final phases II/III to III and IV being <1%. Additional US data on FDA approved drugs including trials from as early as 1993 report that female participation in clinical trials is 22%, 48%, and 49% for trial phases I, II, and III respectively2. While the numbers for participating sexes are almost equal in phases II and III, women make up only approximately one fifth of phase I trial populations in this dataset2. The difference in reported participation for phase I trials between the datasets could be due to an increase in female participation in more recent years. The aim of a phase I trial is to evaluate safety and dosage, so it comes as no surprise that women, especially those of childbearing age, are often excluded due to potential risks posed to foetal development.

In theory, women can be included to a greater extent as trial phases progress and the potential risk of severe adverse events decreases. By the time a trial reaches phase III, it should ideally reflect the real-world disease population as much as possible. European data for phase III trials from 2011-2015 report 41% of participants being female1, which is slightly lower than female enrolment in US based trials. 26% of FDA approved drugs have a >20% difference between the proportion of women in phase II & III clinical trials and the prevalence of women in the US with the disease2, and only one of these drugs shows an over-representation of women.

Reporting of safety and efficacy by sex difference

  • Both safety and efficacy results tend to differ by sex.
  • Reporting these differences is inconsistent and often absent
  • Higher rates of adverse events in women are possibly caused by less involvement, or non-stratification, in dose finding and safety studies.
  • There is a need to enforce analysis and reporting of sex differences in safety and efficacy data

Sex differences in response to treatment regarding both efficacy and safety have been widely reported. Gender subgroup analyses regarding efficacy can reveal whether a drug is more or less effective in one sex than the other. Gender subgroup analyses for efficacy are available for 71% of FDA approved drugs, and of these 11% were found to be more efficacious in men and 7% in women2. Alternatively, only 2 of 22 European Medicines Agency approved drugs examined were found to have efficacy differences between the sexes1. Nonetheless, it is important to study the efficacy of a new drug on all potential population subgroups that may end up taking that drug.

The safety of a treatment also differs between the sexes, with women having a slightly higher percentage (p<0.001) of reported adverse events (AE) than men for both treatment and placebo groups in clinical trials1. Gender subgroup analyses regarding safety can offer insights into the potential risks that women are subjected to during treatment. Despite this, gender specific safety analyses are available for only 45% of FDA approved drugs, with 53% of these reporting more side effects in women2. On average, women are at a 34% increased risk of severe toxicity for each cancer treatment domain, with the greatest increased risk being for immunotherapy (66%). Moreover, the risk of AE is greater in women across all AE types, including patient-reported symptomatic (female 33.3%, male 27.9%), haematologic (female 45.2%, male 39.1%) and objective non-haematologic (female 30.9%, male 29.0%)3. These findings highlight the importance of gender specific safety analyses and the fact that more gender subgroup safety reporting is needed. More reporting will increase our understanding of sex-related AE and could potentially allow for sex-specific interventions in the future.
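The kind of stratified safety comparison described above can be sketched with a two-proportion z-test. The event counts below are hypothetical, chosen only to echo the reported patient-reported AE rates, and are not taken from the cited studies.

```python
from math import sqrt
from statistics import NormalDist

def ae_rate_by_sex(events_f, n_f, events_m, n_m):
    """Compare adverse-event rates between sexes with a two-proportion z-test.
    Returns (female rate, male rate, two-sided p-value)."""
    p_f, p_m = events_f / n_f, events_m / n_m
    p_pool = (events_f + events_m) / (n_f + n_m)     # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_f + 1 / n_m))
    z = (p_f - p_m) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided
    return p_f, p_m, p_value

# Illustrative counts only (not from the cited studies)
rate_f, rate_m, p = ae_rate_by_sex(333, 1000, 279, 1000)
print(round(rate_f, 3), round(rate_m, 3))  # 0.333 0.279
```

Running exactly this comparison per sex, rather than on pooled data, is what the subgroup reporting discussed above would require.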

Sex differences by disease type and burden

  • Several disease categories have recently been associated with lower female enrolment
  • Men are under-represented as often as women when comparing enrolment to disease burden proportions
  • There is a need for trial participants to be recruited on a case-by-case basis, depending on the disease.

Sex differences by disease type

When broken down by disease type, the sex ratio of clinical trial participation shows a more nuanced picture. Several disease categories have recently been associated with lower female enrolment, compared to other factors including trial phase, funding, blinding, etc4. Women comprised the smallest proportions of participants in US-based trials between 2000-2020 for cardiology (41.4%), sex-non-specific nephrology and genitourinary (41.7%), and haematology (41.7%) clinical trials4. Despite women being proportionately represented in European phase III clinical studies between 2011-2015 for depression, epilepsy, thrombosis, and diabetes, they were significantly under-represented for hepatitis C, HIV, schizophrenia, hypercholesterolaemia, and heart failure, and were not found to be overrepresented in trials for any of the disease categories examined1. This shows that the gap in gender representation exists even in later clinical trial phases when surveying disease prevalence, albeit to a lesser extent. Examining disease burden shows that the gap is even bigger than anticipated and includes the under-representation of both sexes.

Sex Differences by Disease Burden

It is not until the burden of disease is considered that men are shown to be under-represented as often as women. Including burden of disease can depict proportionality relative to the variety of disease manifestations between men and women. It can be measured as disability-adjusted life years (DALYs), which represent the number of healthy years of life lost due to the disease. Despite the sexes each making up approximately half of clinical trial participants overall in US-based trials between 2000-2020, all disease categories showed an under-representation of either women or men relative to disease burden, except for infectious disease and dermatologic clinical trials4. Women were under-represented in 7 of 17 disease categories, with the greatest under-representation being in oncology trials, where the difference between the number of female trial participants and corresponding DALYs is 3.6%. Men were under-represented compared with their disease burden in 8 of 17 disease categories, with the greatest difference being 11.3% for musculoskeletal disease and trauma trials.4 Men were found to be under-represented to a similar extent to women, suggesting that the under-representation of either sex could be by coincidence. Alternatively, male under-representation could potentially be due to the assumption of female under-representation leading to overcorrection in the opposite direction. It should be noted that these findings would benefit from statistical validation, although they illustrate the need for clinical trial participants to be recruited on a case-by-case basis, depending on the disease.

Takeaways to improve your patient sample in clinical trial recruiting:

  1. Know the disease burden/DALYs of your demographics for that disease.
  2. Try to balance the ratio of disease burden to the appropriate demographics for your disease.
  3. Aim to recruit patients based on these proportions.
  4. Stratify clinical trial data by the relevant demographics in your analysis. For example, toxicity, efficacy, adverse events etc. should always be analysed separately for males and females to produce the respective estimates.
  5. Efficacy, toxicity etc. should always be reported separately for males and females. Reporting differences by ethnicity is also important, as many diseases differentially affect certain ethnicities, and the corresponding therapeutics can show differing degrees of efficacy and adverse events.
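Takeaways 1-3 amount to a simple proportional allocation, which can be sketched as follows; the DALY figures and group labels are hypothetical.

```python
def enrolment_targets(total_n, daly_by_group):
    """Split a target sample size in proportion to each group's disease burden (DALYs)."""
    total_daly = sum(daly_by_group.values())
    return {group: round(total_n * daly / total_daly)
            for group, daly in daly_by_group.items()}

# Hypothetical burden figures for one disease (DALYs in arbitrary units)
print(enrolment_targets(200, {"female": 60, "male": 40}))  # {'female': 120, 'male': 80}
```

The same split can be applied to any demographic dimension (sex, age band, ethnicity) for which burden estimates exist, per takeaway 5.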

The end goal of these principles is that medication can be more personalised, and that any treatment given is more likely to help, and less likely to harm, the individual patient.

Conclusions

There is room for improvement in the proportional representation of both sexes in clinical trials, and knowing a disease demographic is vital to planning a representative trial. Assuming that under-representation is always on the side of females rather than males may lead to incorrect conclusions and actions to redress the balance. Taking demographic differences in disease burden into account when recruiting trial participants is needed. Trial populations that more accurately depict real-world populations will allow a therapeutic to be tailored to the patient.

Efficacy and safety findings highlight the need for clinical study data to be stratified by sex, so that respective estimates can be determined. This enables more accurate, sex/age appropriate dosing that will maximise treatment efficacy and patient safety, as well as minimise the chance of adverse events. This also reduces the risks associated with later off label use of drugs and may avoid modern day tragedies resembling the thalidomide tragedy. Moreover, efficacy and adverse events should always be reported separately for men and women, as the evidence shows their distinct differences in response to therapeutics.

References:

1. Dekker M, de Vries S, Versantvoort C, Drost-van Velze E, Bhatt M, van Meer P et al. Sex Proportionality in Pre-clinical and Clinical Trials: An Evaluation of 22 Marketing Authorization Application Dossiers Submitted to the European Medicines Agency. Frontiers in Medicine. 2021;8.

2. Labots G, Jones A, de Visser S, Rissmann R, Burggraaf J. Gender differences in clinical registration trials: is there a real problem?. British Journal of Clinical Pharmacology. 2018;84(4):700-707.

3. Unger J, Vaidya R, Albain K, LeBlanc M, Minasian L, Gotay C et al. Sex Differences in Risk of Severe Adverse Events in Patients Receiving Immunotherapy, Targeted Therapy, or Chemotherapy in Cancer Clinical Trials. Journal of Clinical Oncology. 2022;40(13):1474-1486.

4. Steinberg J, Turner B, Weeks B, Magnani C, Wong B, Rodriguez F et al. Analysis of Female Enrollment and Participant Sex by Burden of Disease in US Clinical Trials Between 2000 and 2020. JAMA Network Open. 2021;4(6):e2113749.

Estimating the Costs Associated with Novel Pharmaceutical Development: Methods and Limitations

Data sources for cost analysis of drug development R&D and clinical trials

Cost estimates for pre-clinical and clinical development across the pharmaceutical industry differ based on several factors. One of these is the source of data used by each costing study to inform these estimates. Several studies use private data, which can include confidential surveys filled out by pharmaceutical firms/clinical trial units and random samples from private databases3,9,10,14,15,16. Other studies have based their cost estimates upon publicly available data, such as data from the FDA/national drug regulatory agencies, published peer-reviewed studies, and other online public databases1,2,12,13,17.

Some have questioned the validity of using private surveys from large multinational pharmaceutical companies to inform cost estimates, arguing that survey data may be artificially inflated by pharmaceutical companies to justify high therapeutic prices18,19,20. Another concern is that per-trial spending by larger pharmaceutical companies and multinational firms far exceeds the spending of start-ups and smaller firms, meaning cost estimates based on data from these larger companies would not be representative of smaller firms.

Failure rate of R&D and clinical trial pipelines

Many estimates include the cost of failures, which is especially the case for cost estimates “per approved drug”. Because many compounds enter the clinical trial pipeline, the cost to develop one approved drug/compound incorporates the cost of failures via the clinical trial success rate. For example, if 100 compounds enter phase I trials and 2 compounds are approved, the clinical cost per approved drug would include the amount spent on 50 compounds (100 entrants divided by 2 approvals).

The success rate used can massively impact cost estimates: a low success rate leads to much higher costs per approved drug. The overall probability of clinical success may vary by year and has been estimated at a range of values including 7.9%21, 11.83%10, and 13.8%22. There are concerns that some studies suggesting lower success rates have relied on small samples from industry-curated databases and are thereby vulnerable to selection bias12,22.
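As a rough sketch of the arithmetic above (using a hypothetical average clinical spend per candidate, since no single figure is fixed here), the cost per approved drug is simply the average spend per pipeline entrant divided by the overall success rate:

```python
# Hypothetical illustration: how the overall clinical success rate drives
# the cost per approved drug once failures are included. The $50M average
# clinical spend per candidate is an assumed figure, not from this report.

def cost_per_approved(avg_spend_per_candidate_m, success_rate):
    """Expected clinical spend (in $M) per approved drug.

    Simplification: every candidate entering phase I is assumed to incur
    the same average clinical cost; real pipelines spend less on
    compounds that fail early.
    """
    return avg_spend_per_candidate_m / success_rate

# Success rates cited in the text: 7.9%, 11.83% and 13.8%.
for rate in (0.079, 0.1183, 0.138):
    print(f"success rate {rate:.2%}: "
          f"${cost_per_approved(50, rate):,.0f}M per approved drug")
```

Under these assumptions, moving from a 13.8% to a 7.9% success rate raises the cost per approved drug by roughly three-quarters, which is why the choice of success rate matters so much to the headline figures.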

Success rates per phase transition also affect overall costs: the more ultimately unsuccessful compounds that enter late clinical trial stages, the higher the cost per approved compound. In addition, success rates depend on therapeutic area and patient stratification by biomarkers, among other factors. For example, one study estimated the lowest success rate at 1.6% for oncological trials without biomarker use, compared with a peak success rate of 85.7% for cardiovascular trials utilising biomarkers22. While aggregate success rates can be used to estimate costs, using specific success rates will more accurately estimate the cost of a specific upcoming trial, which could help with budgeting and funding decisions.

Out-of-pocket costs vs capitalised costs & interest rates

Cost estimates also differ due to reporting of out-of-pocket and capitalised costs. An out-of-pocket cost refers to the amount of money spent or expensed on the R&D of a therapeutic. This can include all aspects of setting up therapeutic development, from initial funding in drug discovery/device design, to staff and site costs during clinical trials, and regulatory approval expenses.

The capitalised cost of a new therapeutic adds to the out-of-pocket costs a yearly interest charge applied to the financial investments funding the development of a new drug. This interest rate, referred to as the discount rate, is determined by (and is typically greater than) the cost of capital for the relevant industry.

Discount rates for the pharmaceutical industry vary between sources and can dramatically alter estimates of capitalised cost: a higher discount rate increases the capitalised cost. Most studies place the private cost of capital for the pharmaceutical industry at 8% or higher23,24, while the cost of capital for government is lower, at around 3% to 7% for developed countries23,25. Other sources have suggested rates from as high as 13% to as low as zero13,23,26, where the zero cost of capital has been justified by the idea that pharmaceutical firms have no choice but to invest in R&D. However, the capital asset pricing model (CAPM) used in many estimates of the cost of industry capital tends to give more conservative estimates23. This would mean the 10.5% discount rate widely used in capitalised cost estimates may in fact result in underestimation.
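To make the out-of-pocket vs capitalised distinction concrete, here is a minimal sketch (with hypothetical cash flows) of how yearly spending is compounded forward to approval at a discount rate such as the 10.5% mentioned above:

```python
# Sketch with assumed cash flows: capitalising out-of-pocket spend by
# compounding each year's expenditure forward to the approval date.

def capitalised_cost(annual_spend, discount_rate):
    """Total capitalised cost given a list of yearly out-of-pocket
    costs in $M (earliest year first) and a yearly discount rate."""
    years = len(annual_spend)
    return sum(
        spend * (1 + discount_rate) ** (years - 1 - i)
        for i, spend in enumerate(annual_spend)
    )

# Ten years of $100M/year out-of-pocket spend ($1.0B in total):
total = capitalised_cost([100] * 10, 0.105)
print(f"capitalised cost: ${total:,.0f}M")
```

At a 10.5% discount rate, a flat $1.0B out-of-pocket spend over ten years capitalises to roughly $1.63B, illustrating how the discount rate alone can add more than half again to the headline estimate.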

While there is no consensus on what discount rate to use, capitalised costs do represent the risks undertaken by research firms and investors. A good approach may be to present both out-of-pocket and capitalised cost estimates, together with the rates used, the justification for those rates, and estimates using alternative rates in a sensitivity analysis26.

Cost variation over time

The increase in therapeutic development costs

Generally, there has been a significant increase over time in the estimated costs to develop a new therapeutic26. One study reported an exponential increase in capitalised costs from the 1970s to the mid-2010s, with total capitalised costs rising 8.5% per year above general inflation from 1990 to 201310. Recent data suggest that average development costs peaked in 2019 and decreased over the following two years9. This recent decrease was associated with slightly reduced cycle times and an increased proportion of infectious disease research, likely reflecting the rapid development required in response to COVID-19.
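As a back-of-envelope check of the growth rate cited above, compounding 8.5% per year above inflation over the 23 years from 1990 to 2013 implies roughly a 6.5-fold real increase:

```python
# Back-of-envelope check: 8.5% annual growth above inflation, 1990-2013.
growth_above_inflation = 0.085
years = 2013 - 1990

factor = (1 + growth_above_inflation) ** years
print(f"{factor:.1f}x real increase in capitalised costs over {years} years")
```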

Recent cost estimates

Costs can vary more than 100-fold for phase III/pivotal trials alone1. One of the more widely cited studies on drug costs used confidential survey data from ten multinational pharmaceutical firms and a random sample from a database of publicly available data10. In 2013, this study estimated the total pre-approval cost at $2.6 billion USD per approved new compound. This was a capitalised cost, and the addition of post-approval R&D costs increased this estimate to $2.87 billion (2013 USD). The out-of-pocket cost per approved new compound was reported at $1.395 billion, of which $965 million were clinical costs and the remaining $430 million were pre-clinical.

Another estimate reported the average cost to develop an asset at $1.296 billion in 20139, rising to $2.431 billion by 2019 before decreasing to $2.376 billion in 2020 and $2.006 billion in 2021. While comparable to the previous out-of-pocket estimate for 2013, this study does not state whether its estimates are out-of-pocket or capitalised, making meaningful comparison difficult.

Figure 1: Recent cost estimates for drug development per approved new compound. “Clinical only” costs include only the costs of phase 0-III clinical trials, while “full” costs include pre-clinical costs. The colour of each bubble indicates the study, while bubble size indicates relative cost. A dashed border indicates the study used private data for its estimates, while a solid border indicates the study used publicly available data. The figure represents studies 9, 10, 12, 13 and 17 from the reference list in this report.

Publicly available data on 63 FDA-approved new biologics from 2009-2018 were used to estimate the capitalised (at 10.5%) R&D investment to bring a new drug to market at a median of $985.3 million and a mean of $1.3359 billion (inflation-adjusted to 2018 USD)12. These data were mostly available for smaller firms, smaller trials, first-in-class drugs, and certain specific areas. Sensitivity analysis showed that the variation in estimated cost was mostly explained by success/failure rates, preclinical expenditures, and the cost of capital.

Publicly available data on 10 companies with no other drugs on the market in 2017 were used to estimate out-of-pocket costs for the development of a single cancer drug, reported at a median of $648 million and a mean of $719.8 million13. Capitalised costs were also reported using a 7% discount rate, with a median of $754.4 million and a mean of $969.4 million. By focusing on companies without other drugs on the market, these estimates may better represent development costs per new molecular entity (NME) for start-ups: the cost of failures elsewhere in each company's pipeline was included, while costs of supporting existing on-market drugs were, by design, excluded.

One study estimated the clinical costs per approved non-orphan drug at $291 million (out-of-pocket) and $412 million (capitalised at 10.5%)17. The capitalised cost estimate increased to $489 million when considering only non-orphan NMEs. The difference between these estimates for clinical costs and the previously mentioned estimates for total development costs puts into perspective the amount spent on pre-clinical trials and early drug development, with one study noting that its pre-clinical estimates comprised 32% of out-of-pocket and 42% of capitalised costs10.

Things to consider about cost estimates

The issue with these estimates is that many differing factors affect each study. This complicates cost-based pricing discussions, especially when R&D cost estimates can differ by orders of magnitude. The methodologies used to calculate out-of-pocket costs differ between studies9,17, and the use of differing data sources (public data vs confidential surveys) seems to impact these estimates considerably.

There is also an issue with the transparency of data and methods in cost estimates. Some of this results from the use of confidential data, where some analyses are not available for public scrutiny8. One study in particular raised questions because estimates were stated without any information about the methodology or data used to calculate them8. The use of confidential surveys of larger companies has also been criticised because the data were submitted voluntarily and anonymously, without independent verification12.

Due to the limited amount of comprehensive, published cost data17, many estimates must rely on limited data sets and make assumptions to arrive at a reasonable figure. This includes a lack of transparent data for randomised controlled trials: one study reported that only 18% of FDA-approved drugs had publicly available cost data18. There is therefore a need for transparent and replicable data in this field to allow more plausible cost estimates, which in turn could support budget planning and trial sustainability18,26.

Despite the differences between studies, the findings within each study can give an idea of trends, cost drivers, and costs specific to company or drug types. For example, studies suggest an increasing overall cost of drug development from 1970 to a peak in 201910, with a subsequent decrease in 2020 and 20219.

For a full list of references used in this article, please see the main report here: https://anatomisebiostats.com/biostatistics-blog/how-much-does-developing-a-novel-therapeutic-cost-factors-affecting-drug-development-costs-in-the-pharma-industry-a-mini-report/

How much does developing a novel therapeutic cost? – Factors Affecting Drug Development Costs across the Pharma Industry: A Mini-Report

Introduction

Data evaluating the costs associated with developing novel therapeutics within the pharmaceutical industry can be used to identify trends over time and can inform more accurate budgeting for future research projects. However, the cost to develop a drug therapeutic is difficult to accurately evaluate, resulting in varying estimates ranging from hundreds of millions to billions of US dollars between studies. The high cost of drug development is not purely because of clinical trial expenses. Drug discovery, pre-clinical trials, and commercialisation also need to be factored into estimates of drug development costs.

There are limitations in trying to accurately assess these costs. The sheer number of factors that affect estimated and real costs means that studies often take a more specific approach. For example, costs will differ between large multinational companies with multiple candidates in their pipeline and start-ups/SMEs developing their first pharmaceutical. Due to the amount and quality of available data, many studies work mostly with data from larger multinational pharmaceutical companies with multiple molecules in the pipeline. When taken out of context, the “$2.6 billion USD cost for getting a single drug to market” can seem daunting for SMEs. It is very important to clarify what scale these cost estimates represent, but cost data from large pharma companies are still relevant for SMEs, as they can be used to infer costs for different scales of therapeutic development.

This mini-report covers what drives clinical trial costs and methods to reduce them, then explores what can be learned from the varying cost estimates.

What drives clinical trial costs?

There is an ongoing effort to streamline the clinical trial process to be more cost and time efficient. Several studies report on cost drivers of clinical trials, which should be considered when designing and budgeting a trial. Some of these drivers are described below:

Study size

Trial costs rise exponentially with increasing study size, which some studies have found to be the single largest driver of trial costs1,2,3. There are several reasons for varying sample sizes between trials. For example, study size increases as trials progress through phases, as each phase requires a different number of patients to establish the safety and/or effectiveness of a treatment. Failure to recruit sufficient patients can result in trial delays, which also increase costs4.

Trial site visits

A large study size is also correlated with a larger overall number of patient visits during a trial, which is associated with a significant increase in total trial costs2,3. Trial clinic visits are necessary for patient screening, treatment and treatment assessment, but incur significant costs for staff, site hosting, equipment, treatment, and in some cases reimbursement of patient travel costs. The number of trial site visits per patient varies between trials, where more visits may indicate longer and/or more intensive treatment. One estimate put the median number of trial visits per person at 11 in a phase III trial, with each additional visit above the median adding $2 million to estimated trial costs2.

Number & location of clinical trial sites

A higher number of clinical trial sites has been associated with a significant increase in total trial cost3, as a result of increased site costs as well as associated staffing and equipment costs. These vary with the size of each site, and larger trials with more patients often use more or larger sites.

Due to the lower cost and shorter timelines of overseas clinical research5,6, there has been a shift to the globalisation of trials, with only 43% of study sites in US FDA-approved pivotal trials being in North America7. In fact, 71% of these trials had sites in lower cost regions where median regional costs were 49%-97% of site costs in North America. Most patients in these trials were either in North America (39.7%), Western Europe (21%), or Central Europe (20.4%).

Table: Median cost per regional site as a percentage of the North American median cost, for comparison.

However, trials can face increased difficulties in managing and coordinating multiple sites across different regions, with concerns about adherence to the ethical and scientific regulations of the trial centre’s region5,6. Some studies have reported that multiregional trials are associated with a significant increase in total trial costs, especially those with sites in emerging markets3. It is unclear whether this reported increase is a result of lower site efficiency, multiregional management costs, or outsourcing being more common among larger trials.

Clinical Trial duration

Longer trial duration has been associated with a significant increase in total trial costs3,4, with many studies estimating the clinical period at 6-8 years8,9,10,11,12,13. Longer trials are sometimes necessary, such as when evaluating the safety and efficacy of long-term drug use in the management of chronic and degenerative disease. Otherwise, delays in starting up a trial contribute to longer trials, consuming budget and diminishing the relevance of the research4. Such delays may occur as a result of site complications or poor patient accrual.

Another aspect to consider is that the longer it takes to get a therapeutic to market (as impacted by longer trials), the longer the wait before a return on investment is seen by both the research organisation and investors. The period from development to on-market, often referred to as cycle time, drives costs per therapeutic because interest based on the industry’s cost of capital accrues on investments over that period.

Therapeutic area under investigation

The cost to develop a therapeutic is also dependent on the therapeutic area, where some areas such as oncology and cardiovascular treatments are more cost intensive than others1,2,5,6,12,14. This is in part due to variation in treatment intensity, from low-intensity treatments such as skin creams to high-intensity treatments such as multiple infusions of high-cost anti-cancer drugs2. An estimate of the highest mean cost for pivotal trials per therapeutic area was $157.2M for cardiovascular trials, compared with $45.4M in oncology and a low of $20.8M in endocrine, metabolic, and respiratory disease trials1. This compares with an overall median of $19M. Clinical trial costs per therapeutic area also vary by clinical trial phase. For example, trials in pain and anaesthesia have been found to have the lowest average cost for a phase I study while having the highest average cost for a phase III study6.

It is important to note that some therapeutic areas have far lower per-patient costs than others, and per-patient costs are not always indicative of total trial costs. For example, infectious disease trials generally have larger sample sizes, leading to relatively low per-patient costs, whereas trials for rare disease treatments are often limited to smaller sample sizes with relatively high per-patient costs. Despite this, trials for rare diseases are estimated to have significantly lower drug-to-market costs.

Drug type being evaluated

As mentioned in the therapeutic areas section above, treatments may vary in intensity from skin creams to multiple rounds of treatment with several anti-cancer drugs. This can drive total trial costs due to additional manufacturing and the need for specially trained staff to administer treatments.

In the case of vaccine development, phase III/pivotal trials for vaccine efficacy can be very difficult to run unless there are ongoing epidemics of the targeted infectious disease. Therefore, some cost estimates of vaccine development cover only the pre-clinical stages through the end of phase IIa, with the average cost of one approved vaccine estimated at $319-469 million USD in 201815.

Study design & trial control type used

Phase III trial costs vary based on the type of control group used in the trial1. Uncontrolled trials were the least expensive, with an estimated mean of $13.5 million per trial. Placebo-controlled trials had an estimated mean of $28.8 million, and trials with active drug comparators had an estimated mean cost of $48.9 million. This dramatic increase in costs is in part due to the manufacturing and staffing needed to administer a placebo or active drug. In addition, trials with active drug comparators require more patients than placebo-controlled trials, which in turn require more patients than uncontrolled trials2.

Reducing therapeutic development costs

Development costs can be reduced through several approaches. Many articles recommend improvements to operational efficiency and accrual, as well as deploying standardised trial management metrics4. This could include streamlining trial administration, hiring experienced trial staff, and ensuring ample patient recruitment to reduce delays in starting and carrying out a study.

Another way to reduce development costs is thorough planning of the clinical trial design by a biostatistician, whether in-house or external. Statistical consulting throughout a trial can help determine suitable early stopping conditions and the most appropriate sample size. Sample size calculation is particularly important, as underestimation undermines experimental results whereas overestimation leads to unnecessary costs. Statisticians can also be useful during the pre-clinical stage to audit R&D data to select the best available candidates, ensure accurate R&D data analysis, and avoid pursuing unsuccessful compounds.
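The cost impact of sample size planning can be sketched numerically. The snippet below uses the standard normal-approximation sample size formula for comparing two means, together with the roughly $42,000 per-patient phase III figure cited later in this report; the effect size and standard deviations are assumed purely for illustration:

```python
# Illustrative sketch (assumed effect size and SDs): the required sample
# size grows with the square of the assumed standard deviation, so
# overestimating variability inflates trial cost quadratically.
from math import ceil
from statistics import NormalDist

Z = NormalDist()
Z_TOTAL = Z.inv_cdf(0.975) + Z.inv_cdf(0.80)  # 5% two-sided alpha, 80% power

def n_per_group(delta, sd):
    """Normal-approximation patients per arm for a two-sample mean comparison."""
    return ceil(2 * (Z_TOTAL * sd / delta) ** 2)

PER_PATIENT_COST = 42_000  # phase III per-patient cost cited in this report

planned = n_per_group(delta=5, sd=12)    # SD estimated correctly
inflated = n_per_group(delta=5, sd=15)   # SD overestimated by 25%
extra_spend = 2 * (inflated - planned) * PER_PATIENT_COST
print(f"{planned} vs {inflated} patients per arm; "
      f"extra spend from the overestimate: ${extra_spend:,}")
```

In this sketch, a 25% overestimate of the standard deviation adds about 100 patients and over $4M of spend, without any gain in power against the true effect.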

Other ways to reduce development costs include personalised medicine, clinical trial digitisation, and the integration of AI. Digitisation would streamline clinical trial administration and also enable the integration of artificial intelligence into trials. There have been many promising applications of AI in clinical trials, including the use of electronic health records to enhance patient enrolment and monitoring, and the potential use of AI in trial diagnostics. More information about this topic can be found in our blog “Emerging use-cases for AI in clinical trials”.

For more information on the methodology by which pharmaceutical development and clinical trials costs are estimated and what data has been used please see the article: https://anatomisebiostats.com/biostatistics-blog/estimating-the-costs-associated-with-novel-pharmaceutical-development-methods-and-limitations/

Cost breakdown in more detail: How is a clinical trial budget spent?

Clinical trial costs can be broken down into several categories, such as staff and non-staff costs. In a sample of phase III studies, personnel costs were found to be the single largest component of trial costs, accounting for 37% of the total, with outsourcing costs at 22%, grants and contracting costs at 21%, and other expenses at 21%3.

From a CRO’s perspective, many factors are considered in the cost of a pivotal trial quotation, including regulatory affairs, site costs, management costs, the cost of statistics and medical writing, and pass-through costs27. Another analysis of clinical trial cost factors determined that clinical procedure costs made up 15-22% of the total budget, with administrative staff costs at 11-29%, site monitoring costs at 9-14%, site retention costs at 9-16%, and central laboratory costs at 4-12%5,6. In a study of multinational trials, 66% of total estimated trial costs were spent on regional tasks, of which 53.3% was spent at trial sites and the remainder on other management7.

Therapeutic areas and shifting trends

Therapeutic area was mentioned above as a cost driver of trials due to differences in sample sizes and/or treatment intensity. It is, however, worth noting that in 2013 the largest number of US industry-sponsored clinical trials was in oncology (2,560 of 6,199 active clinical trials, with 215,176 of 1,148,340 patients enrolled)4,14. More recently, there has been a shift towards infectious disease trials, in part due to the COVID-19 trials9.

Clinical trial phases

Due to expanding sample sizes as trials progress, average costs per phase increase from phase I through phase III. Median costs per phase were estimated in 2016 at $3.4 million for phase I, $8.6 million for phase II, and $21.4 million for phase III3. Per-patient cost estimates were similarly highest in phase III at $42,000, followed by phase II at $40,000 and phase I at $38,50014. The combination of increasing sample size and increasing per-patient costs per phase leads to the drastic increase in phase costs as trials progress.

In addition, programmes may run multiple phase III trials, meaning the median estimated cost of phase III trials per approved drug ($48 million) is higher than the per-trial cost ($19 million)2. Multiple phase III trials can be used to better support marketing approval, or for therapeutics seeking approval for combination/adjuvant therapy.

There are fewer cost analyses available for phase 0 and phase IV clinical trials. Some report that average phase IV costs are equivalent to phase III costs but much more variable5,6.

Orphan drugs

Drugs developed for the treatment of rare diseases are often referred to as orphan drugs. Orphan drugs have been estimated to have lower clinical costs per approved drug, with capitalised costs per non-orphan and orphan drug of $412 million and $291 million respectively17. This is in part due to the limit on sample size imposed on orphan drug trials by the rarity of the target disease, as well as the higher success rate per compound. However, orphan drug trials are often longer than non-orphan drug trials, with average study durations of 1417 and 774 days respectively.

NMEs

New molecular entities (NMEs) are drugs which do not contain any previously approved active molecules. Both clinical and total costs of NMEs are estimated to be higher when compared to next in class drugs13,17. NMEs are thought to be more expensive to develop due to the increased amount of pre-clinical research to determine the activity of a new molecule and the increased intensity of clinical research to prove safety/efficacy and reach approval.

Conclusion & take-aways

There is no single answer to the cost of drug or device development, as it varies considerably with several cost drivers, including study size, therapeutic area, and trial duration. Estimates of total drug development costs per approved new compound have ranged from $754 million12 to $2.6 billion10 USD over the past 10 years. These estimates differ not only in the data used but also in methodology. The limited availability of comprehensive cost data for approved drugs also means that many studies rely on limited data sets and must make assumptions to arrive at a reasonable estimate.

There are nevertheless several practical ways to reduce study costs, including expert trial design planning by statisticians, implementation of biomarker-guided trials to reduce the risk of failure, AI integration and digitisation of trials, improving operational efficiency, improving accrual, and introducing standardised trial management metrics.

References

1. Moore T, Zhang H, Anderson G, Alexander G. Estimated Costs of Pivotal Trials for Novel Therapeutic Agents Approved by the US Food and Drug Administration, 2015-2016. JAMA Internal Medicine. 2018;178(11):1451-1457.

2. Moore T, Heyward J, Anderson G, Alexander G. Variation in the estimated costs of pivotal clinical benefit trials supporting the US approval of new therapeutic agents, 2015–2017: a cross-sectional study. BMJ Open. 2020;10(6):e038863.

3. Martin L, Hutchens M, Hawkins C, Radnov A. How much do clinical trials cost?. Nature Reviews Drug Discovery. 2017;16(6):381-382.

4. Bentley C, Cressman S, van der Hoek K, Arts K, Dancey J, Peacock S. Conducting clinical trials—costs, impacts, and the value of clinical trials networks: A scoping review. Clinical Trials. 2019;16(2):183-193.

5. Sertkaya A, Birkenbach A, Berlind A, Eyraud J. Examination of Clinical Trial Costs and Barriers for Drug Development [Internet]. ASPE; 2014. Available from: https://aspe.hhs.gov/reports/examination-clinical-trial-costs-barriers-drug-development-0

6. Sertkaya A, Wong H, Jessup A, Beleche T. Key cost drivers of pharmaceutical clinical trials in the United States. Clinical Trials. 2016;13(2):117-126.

7. Qiao Y, Alexander G, Moore T. Globalization of clinical trials: Variation in estimated regional costs of pivotal trials, 2015–2016. Clinical Trials. 2019;16(3):329-333.

8. Monitor Deloitte. Early Value Assessment: Optimising the upside value potential of your asset [Internet]. Deloitte; 2020 p. 1-14. Available from: https://www2.deloitte.com/content/dam/Deloitte/be/Documents/life-sciences-health-care/Deloitte%20Belgium_Early%20Value%20Assessment.pdf

9. May E, Taylor K, Cruz M, Shah S, Miranda W. Nurturing growth: Measuring the return from pharmaceutical innovation 2021 [Internet]. Deloitte; 2022 p. 1-28. Available from: https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/life-sciences-health-care/Measuring-the-return-of-pharmaceutical-innovation-2021-Deloitte.pdf

10. DiMasi J, Grabowski H, Hansen R. Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics. 2016;47:20-33.

11. Farid S, Baron M, Stamatis C, Nie W, Coffman J. Benchmarking biopharmaceutical process development and manufacturing cost contributions to R&D. mAbs. 2020;12(1):e1754999.

12. Wouters O, McKee M, Luyten J. Estimated Research and Development Investment Needed to Bring a New Medicine to Market, 2009-2018. JAMA. 2020;323(9):844-853.

13. Prasad V, Mailankody S. Research and Development Spending to Bring a Single Cancer Drug to Market and Revenues After Approval. JAMA Internal Medicine. 2017;177(11):1569-1575.

14. Battelle Technology Partnership Practice. Biopharmaceutical Industry-Sponsored Clinical Trials: Impact on State Economies [Internet]. Pharmaceutical Research and Manufacturers of America; 2015. Available from: http://phrma-docs.phrma.org/sites/default/files/pdf/biopharmaceutical-industry-sponsored-clinical-trials-impact-on-state-economies.pdf

15. Gouglas D, Thanh Le T, Henderson K, Kaloudis A, Danielsen T, Hammersland N et al. Estimating the cost of vaccine development against epidemic infectious diseases: a cost minimisation study. The Lancet Global Health. 2018;6(12):e1386-e1396.

16. Hind D, Reeves B, Bathers S, Bray C, Corkhill A, Hayward C et al. Comparative costs and activity from a sample of UK clinical trials units. Trials. 2017;18(1).

17. Jayasundara K, Hollis A, Krahn M, Mamdani M, Hoch J, Grootendorst P. Estimating the clinical cost of drug development for orphan versus non-orphan drugs. Orphanet Journal of Rare Diseases. 2019;14(1).

19. Speich B, von Niederhäusern B, Schur N, Hemkens L, Fürst T, Bhatnagar N et al. Systematic review on costs and resource use of randomized clinical trials shows a lack of transparent and comprehensive data. Journal of Clinical Epidemiology. 2018;96:1-11.

20. Light D, Warburton R. Demythologizing the high costs of pharmaceutical research. BioSocieties. 2011;6(1):34-50.

21. Adams C, Brantner V. Estimating The Cost Of New Drug Development: Is It Really $802 Million?. Health Affairs. 2006;25(2):420-428.

22. Thomas D, Chancellor D, Micklus A, LaFever S, Hay M, Chaudhuri S et al. Clinical Development Success Rates and Contributing Factors 2011–2020 [Internet]. BIO|QLS Advisors|Informa UK; 2021. Available from: https://pharmaintelligence.informa.com/~/media/informa-shop-window/pharma/2021/files/reports/2021-clinical-development-success-rates-2011-2020-v17.pdf

23. Wong C, Siah K, Lo A. Estimation of clinical trial success rates and related parameters. Biostatistics. 2019;20(2):273-286.

24. Chit A, Chit A, Papadimitropoulos M, Krahn M, Parker J, Grootendorst P. The Opportunity Cost of Capital: Development of New Pharmaceuticals. INQUIRY: The Journal of Health Care Organization, Provision, and Financing. 2015;52:1-5.

25. Harrington SE. Cost of Capital for Pharmaceutical, Biotechnology, and Medical Device Firms. In Danzon PM & Nicholson S (Eds.), The Oxford Handbook of the Economics of the Biopharmaceutical Industry (pp. 75-99). New York: Oxford University Press; 2012.

26. Zhuang J, Liang Z, Lin T, De Guzman F. Theory and Practice in the Choice of Social Discount Rate for Cost-Benefit Analysis: A Survey [Internet]. Manila, Philippines: Asian Development Bank; 2007. Available from: https://www.adb.org/sites/default/files/publication/28360/wp094.pdf

27. Rennane S, Baker L, Mulcahy A. Estimating the Cost of Industry Investment in Drug Research and Development: A Review of Methods and Results. INQUIRY: The Journal of Health Care Organization, Provision, and Financing. 2021;58:1-11.

28. Ledesma P. How Much Does a Clinical Trial Cost? [Internet]. Sofpromed. 2020 [cited 26 June 2022]. Available from: https://www.sofpromed.com/how-much-does-a-clinical-trial-cost


Bayesian approach for sample size estimation and re-adjustment in clinical trials

Accurate sample size calculation plays an important role in clinical research. Sample size in this context simply refers to the number of human participants, whether healthy or diseased, taking part in the study. Clinical studies conducted with an insufficient sample size can lack the statistical power to adequately evaluate the treatment of interest, whereas a superfluous sample size unnecessarily wastes limited resources.

Various methods can be applied to determine the optimal sample size for a specific clinical study, and methods also exist for re-adjusting the sample size during the study if required. These methods range from straightforward tests and formulas to complex, time-consuming procedures, depending on the type of study and the information available from which to make the estimate. The most commonly used sample size calculation procedures are developed from a frequentist perspective.

Importance of knowing your study parameters

Accurate sample size calculation requires information on several key study and research parameters. These usually include an estimate of the effect size and its variability, derived from available sources, and a clinically meaningful difference. In practice these parameters are generally unknown and must be estimated from the existing literature or from pilot studies.
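As a point of reference, the standard frequentist calculation from these parameters can be sketched in a few lines. This is the usual normal-approximation formula for comparing two means; the numbers used in the example are purely illustrative.

```python
from math import ceil
from statistics import NormalDist

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample comparison of means.

    delta: clinically meaningful difference; sigma: common standard deviation.
    Uses the normal-approximation formula
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative inputs: detect a 5-point difference, SD of 12,
# 5% two-sided alpha, 80% power -> 91 patients per arm.
n = two_sample_n(delta=5, sigma=12)
```

Note how strongly the result depends on the assumed effect size and variability, which is precisely why these parameters must be estimated with care.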

The Bayesian Framework in sample size estimations and re-adjustments

The Bayesian framework has gradually become one of the most frequently mentioned methods for sample size estimation and re-adjustment in randomised clinical trials.

In practice, Bayesian sample size calculation is usually treated explicitly as a decision problem, employing a loss or utility function.

The Bayesian approach involves three key stages:

  • 1. Prior estimate

A researcher has a prior estimate of the treatment effect (and other study parameters), derived from a meta-analysis of existing research, from pilot studies, or, in the absence of these, from expert opinion.

  • 2. Likelihood

Data are simulated (or observed) to derive the likelihood of the data given the study parameters.

  • 3. Posterior estimate

Based on the insights obtained, prior estimates from the first stage are updated to give a more precise final estimate.
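The three-stage cycle above can be sketched with a conjugate Beta-Binomial model, where the posterior follows in closed form. All numbers here are illustrative assumptions, not from any real trial.

```python
# Conjugate Beta-Binomial sketch of the prior -> likelihood -> posterior cycle.

# 1. Prior estimate: Beta(a, b) belief about the response rate,
#    e.g. from a meta-analysis suggesting roughly a 30% response.
a, b = 3, 7          # prior mean a / (a + b) = 0.30

# 2. Likelihood: observed (or simulated) trial data.
responders, n_obs = 14, 40

# 3. Posterior estimate: conjugacy updates the Beta parameters directly.
a_post, b_post = a + responders, b + (n_obs - responders)
posterior_mean = a_post / (a_post + b_post)  # data shrunk toward the prior
```

With these illustrative numbers the posterior mean (0.34) sits between the prior mean (0.30) and the observed rate (0.35), which is the shrinkage behaviour the text describes.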

A challenge of using this approach is knowing when to stop the cycle, once enough evidence has been gathered, while avoiding the introduction of bias (Dreibe, 2021). Peeking at the data in order to make a stopping decision is called “optional stopping”. In general, optional stopping rules are cautioned against as they can inflate Type I error rates (de Heide & Grünwald, 2021).

How to decide when to stop the simulation cycle?

There are two approaches one could take.

  • 1. Posterior probability

Calculate the posterior probability that the mean difference between the treatment and control arms is equal to or greater than the estimated effect of the intervention. Depending on whether the calculated probability is low or high, the cycle can be stopped without any further need to gather more data.

  • 2. Predictive probability of success (PPOS)

Calculating the predictive probability of achieving a successful result at the end of the study is another commonly used approach, and is particularly helpful for judging the likely success or failure of a study. As with the posterior probability, the level of probability informs the decision to stop or continue the study.
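As a rough illustration of both stopping quantities, the sketch below uses a single-arm Beta-Binomial trial. The interim data, target rate, thresholds and success criterion are all hypothetical, chosen only to show the mechanics.

```python
import random

random.seed(1)

# Interim data for a hypothetical single-arm trial.
a, b = 1, 1                 # uniform Beta prior on the response rate
x, n_interim = 12, 30       # responders observed so far
n_final = 60                # planned final sample size
target = 0.30               # response rate the treatment must beat
success_cut = 22            # final responders needed to declare success

a_post, b_post = a + x, b + (n_interim - x)

sims = 20_000
above_target = 0
future_success = 0
for _ in range(sims):
    p = random.betavariate(a_post, b_post)          # draw from the posterior
    # 1. Posterior probability that the true rate exceeds the target.
    if p > target:
        above_target += 1
    # 2. PPOS: simulate the remaining patients and check the final criterion.
    future = sum(random.random() < p for _ in range(n_final - n_interim))
    if x + future >= success_cut:
        future_success += 1

post_prob = above_target / sims
ppos = future_success / sims
```

In practice each quantity would be compared against pre-specified thresholds (e.g. stop for futility if PPOS is very low) laid out in the statistical analysis plan.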

How to plan a Bayesian sample size calculation for a clinical trial

The key elements to consider when planning a Bayesian clinical trial are the same as for a frequentist clinical trial.

Key planning stages:

  • Determine the objective of the clinical study
  • Determine and set endpoints
  • Decide on the appropriate study design
  • Run a meta-analysis or review of existing evidence related to your research objective
  • Choose the statistical test and write the statistical analysis plan (SAP)

Even though the key planning stages are the same for both approaches, the approaches cannot be mixed throughout the study. Once you have chosen one approach, you cannot switch to the other after the calculations have been generated and the research has started.

Bayesian approach vs Frequentist approach for sample size calculations

  • Prior and posterior: the Bayesian approach uses both (assigning probabilities to hypotheses and data); the frequentist approach has neither (it never gives the probability of a hypothesis).
  • Sample size: depends on the prior and the likelihood under the Bayesian approach; on the likelihood alone under the frequentist approach.
  • Prior specification: the Bayesian approach requires finding or deciding on a prior in order to estimate the sample size; the frequentist approach does not.
  • Computation: the Bayesian approach is computationally intensive due to integration over many parameters; the frequentist approach is less computationally intense.

Frequentist measures such as p-values and confidence intervals continue to dominate methodology across life sciences research; however, the use of the Bayesian approach for sample size estimation and re-estimation in RCTs has been increasing over time.

Bayesian approach for sample size calculations in medical device clinical trial

In recent years the Bayesian approach has gained popularity as a method used in clinical trials, including medical device studies. One reason is that, if good prior information about the specific therapeutic or device is available, the Bayesian approach may allow this information to be incorporated into the statistical analysis of the clinical trial. Sometimes the available prior information for a device of interest may justify a smaller sample size and a shorter pivotal trial (Chen et al., 2011).

Computational algorithms and growing popularity of Bayesian approach

Bayesian statistical analysis can be computationally intense. However, breakthroughs in computational algorithms and increases in computing speed have made it much easier to calculate and build more realistic Bayesian models, further contributing to the popularity of the Bayesian approach (FDA, 2010).

Markov Chain Monte Carlo (MCMC) method

One of the basic computational tools used is the Markov Chain Monte Carlo (MCMC) method, which draws a large number of simulated values from the distributions of the random quantities of interest.

Why MCMC?

MCMC helps to deal with the computational difficulties often faced when using the Bayesian approach for sample size estimation. It is an advanced random variate generation technique that allows one to simulate samples from sophisticated probability distributions.
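A minimal random-walk Metropolis sampler illustrates the idea. The data, prior and proposal scale below are illustrative assumptions, not a recommended analysis.

```python
import math
import random

random.seed(42)

# Illustrative observed effect estimates from a hypothetical small study.
data = [1.2, 0.7, 1.5, 0.9, 1.1, 0.4, 1.3]

def log_posterior(mu):
    # Assumptions: Normal(0, 10^2) prior on mu, Normal(mu, 1) likelihood.
    log_prior = -mu ** 2 / (2 * 10 ** 2)
    log_lik = -sum((x - mu) ** 2 for x in data) / 2
    return log_prior + log_lik

# Random-walk Metropolis: propose a nearby value, accept with
# probability min(1, posterior ratio); otherwise keep the current value.
mu, samples = 0.0, []
for _ in range(20_000):
    proposal = mu + random.gauss(0, 0.5)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

burned = samples[2_000:]                    # discard burn-in
posterior_mean = sum(burned) / len(burned)  # near the data mean, shrunk to 0
```

Real analyses would use established samplers (e.g. Gibbs, Hamiltonian Monte Carlo) and convergence diagnostics, but the accept/reject loop above is the core mechanism.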

Conclusion

Sample size calculation plays an important role in clinical research. If the sample size is underestimated, statistical power for the detection of a clinically meaningful difference will likely be insufficient; if it is overestimated, resources are wasted unnecessarily.

The Bayesian framework has become a popular approach for sample size estimation. It has clear advantages, but it has also attracted criticism as a sample size estimation and re-adjustment method: the prior is subjective, and different researchers may select different priors, leading to different posteriors and different final conclusions.

In reality, both the Bayesian and frequentist approaches to sample size calculation involve deriving the relevant input parameters from the literature or from clinical expertise, and both could differ between analysts owing to variations in expert opinion as to which studies to include or exclude in this process.

The Bayesian approach is more computationally intensive than traditional frequentist approaches. The method for sample size estimation should therefore be chosen carefully to best fit the particular study design, based on advice from statistical professionals with expertise in clinical trials.

References:

Bokai WANG, C., 2017. Comparisons of Superiority, Non-inferiority, and Equivalence Trials. [online] PubMed Central (PMC). Available at: <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5925592/> [Accessed 28 February 2022].

Chen, M., Ibrahim, J., Lam, P., Yu, A. and Zhang, Y., 2011. Bayesian Design of Noninferiority Trials for Medical Devices Using Historical Data. Biometrics, 67(3), pp.1163-1170.

E, L., 2008. Superiority, equivalence, and non-inferiority trials. [online] PubMed. Available at: <https://pubmed.ncbi.nlm.nih.gov/18537788/> [Accessed 28 February 2022].

Gubbiotti, S., 2008. Bayesian Methods for Sample Size Determination and their use in Clinical Trials. [online] Core.ac.uk. Available at: <https://core.ac.uk/download/pdf/74322247.pdf> [Accessed 28 February 2022].

U.S. Food and Drug Administration. 2010. Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials. [online] Available at: <https://www.fda.gov/regulatory-information/search-fda-guidance-documents/guidance-use-bayesian-statistics-medical-device-clinical-trials> [Accessed 28 February 2022].

van Ravenzwaaij, D., Monden, R., Tendeiro, J. and Ioannidis, J., 2019. Bayes factors for superiority, non-inferiority, and equivalence designs. BMC Medical Research Methodology, 19(1).

de Heide, R. and Grünwald, P.D., 2021. Why optional stopping can be a problem for Bayesians. Psychonomic Bulletin & Review, 21(2), pp.201-208.

Dynamic Systems Modelling and Complex Adaptive Systems (CAS) Techniques in Biomedicine and Public Health

Dynamical systems modelling is a mathematical approach to studying the behaviour of systems that change over time. These systems can be physical, biological, economic, or social in nature, and they are typically characterized by a set of variables that evolve according to certain rules or equations.

CAS (Complex Adaptive Systems) models are a specific type of dynamical systems model that are used to study systems that are complex, adaptive, and composed of many interconnected parts. These systems are often found in natural and social systems, and they are characterized by a high degree of uncertainty, nonlinearity, and emergence.

To build a dynamical systems model, one typically starts by identifying the variables that are relevant to the system being studied and the relationships between them. These relationships are usually represented by a set of equations or rules that describe how the variables change over time. The model is then simulated or analysed to understand the system’s behaviour under different conditions and to make predictions about its future evolution.
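As a minimal illustration of that workflow, the sketch below simulates one of the simplest dynamical systems, logistic growth, by stepping its differential equation forward in time with fixed Euler steps. The parameter values are arbitrary.

```python
# Minimal dynamical-systems sketch: logistic growth dx/dt = r*x*(1 - x/K),
# integrated with fixed Euler steps. All parameter values are illustrative.
r, K = 0.5, 100.0    # growth rate and carrying capacity
x, dt = 5.0, 0.1     # initial state and time step

trajectory = [x]
for _ in range(2000):                # simulate 200 time units
    x += dt * r * x * (1 - x / K)    # rule for how the variable changes
    trajectory.append(x)

# The state evolves toward the stable equilibrium x = K.
```

Once such a model exists, "analysing the system's behaviour under different conditions" amounts to re-running the simulation with different parameters or initial states and comparing trajectories.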

CAS models are often used to study systems that exhibit emergent behaviour, where the behaviour of the system as a whole is more than the sum of its parts. These models can help us understand how complex systems self-organize, adapt, and evolve over time, and they have applications in fields such as biology, economics, social science, and computer science.

Whatever the approach, a model is intended to represent the real system, but there are limits to its application. The reliability of any model often falls short when its parameter boundaries are applied to a real-life context.

The previous article outlined some basic characteristics of complex adaptive systems (CAS). Modelling real-world phenomena as a CAS requires a different approach from the more conventional predictive modelling paradigm. Complex adaptive systems such as ecosystems, biological systems, or social systems require looking at interacting elements and observing the patterns that arise, creating boundary conditions from these patterns, running experiments or simulations, and responding to the outcomes in an adaptive way.

To further delineate the complex systems domain in practical terms we can use the Cynefin framework developed by David Snowden et al. to contrast the Simple, Complicated, Complex and Chaotic domains. For the purpose of this article the Chaotic domain will be ignored.

Enabling constraints of CAS models

In contrast to the complex domain is the “known” or “simple” domain, represented by ordered systems such as a surgical operating theatre or a clinical trials framework. These ordered systems are rigidly constrained and can be planned and designed in advance based upon prior knowledge. In this context best practice can be applied, because an optimal way of doing things is pre-determined.

The intermediary between the simple and complex domains is the “knowable” or “complicated” domain; an example is the biostatistical analysis of basic clinical data. Within a complicated system there is a right answer that we can discover and design for. In this domain we apply good practice based on expert advice (not best practice), as a right and wrong way of doing things can be determined with analysis.

The complex domain represents a system that is in flux and not predictable in the linear sense. A complex adaptive system can operate in a state anywhere from equilibrium to the edge of chaos. To understand the system state one should perform experiments that probe relationships between entities. Because of the lack of linearity, multiple experimental probes should run in parallel, not in sequence, with the goal of better understanding processes. Emergent practice is determined in line with observed, evolving patterns. Ideally, interpretation of data should be decentralised and distributed to system users themselves rather than determined by a single expert in a centralised fashion.

As opposed to operating from a pre-supposed framework, the CAS structure should be allowed to emerge from the data under investigation. This avoids the confirmation bias that occurs when data are fitted to a predefined framework regardless of whether this framework best represents the data being modelled. Following on from this, model boundaries should also be allowed to emerge from the data itself.

Determining unarticulated needs from clusters of agent anecdotes or data points is one way of finding where improvement needs to occur in service provision systems. An analogous method could be applied to biological systems if an ABM were used in a biomolecular context.

In understanding CAS, the focus should be on the dispositionality of system states rather than on linear causality. Rather than presuming an inherent certainty that “if I do A, B will result”, dispositional states arise as a result of A, which may result in B, but whose evolution cannot be truly predicted.

As Snowden puts it, “the longer you hold things in a state of transition the more options you’ve got”: rather than linear iterations based on a fully defined requirement, a degree of ambiguity is retained and explored rather than eliminated. This is the opposite of the standard statistical approach.

CAS modelling should include real-time feedback loops over multiple agents to avoid cognitive bias. In CAS modelling, every behaviour or interaction will produce unintended consequences. For this reason, David Snowden suggests, small, fast experiments should be run in parallel, so that any bad, unintended consequences can be mitigated and the good ones amplified.

Modes of analysis and modelling:

System dynamics models (SDM)

  • An SDM simulates the movements of entities within the system and can be used to investigate the macro behaviour of the system.
  • Changes to system state variables over time are modelled using differential equations.
  • SDMs are multi-dimensional, non-linear and include feedback mechanisms.
  • Visual representations of the model can be produced using stock and flow diagrams to summarise interdependencies between key system variables.
  • Dynamic hypotheses of the system model can be represented in a causal loop diagram.
  • SDM is appropriate for modelling aggregate flows, trends and sub-system behaviour.
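A stock-and-flow model of this kind can be sketched by integrating the flow equations directly. The classic SIR epidemic model below is only an illustration, with arbitrary parameter values.

```python
# Stock-and-flow sketch: an SIR epidemic as a system dynamics model.
# Stocks: S (susceptible), I (infected), R (recovered).
# Flows: infection (beta*S*I/N) and recovery (gamma*I).
# Parameter values are illustrative only.
beta, gamma = 0.3, 0.1
S, I, R = 990.0, 10.0, 0.0
N = S + I + R
dt = 0.1

peak_I = I
for _ in range(3000):                 # 300 time units of Euler steps
    infection = beta * S * I / N      # flow from S into I
    recovery = gamma * I              # flow from I into R
    S += dt * -infection
    I += dt * (infection - recovery)
    R += dt * recovery
    peak_I = max(peak_I, I)

# Because every outflow is another stock's inflow, S + I + R is conserved.
```

The feedback structure is visible in the equations: infection is reinforced by the growing I stock (a positive loop) and limited by the shrinking S stock (a negative loop), producing the characteristic epidemic rise and fall.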

Agent based models (ABM)

  • ABMs can be used to investigate the micro behaviour of the system from a more bottom-up perspective, through intricate flows of individual-based activity.
  • ABMs simulate state changes of individual agents, rather than the broader entities captured by SDM.
  • Multiple types of agent operate within the same modelled complex adaptive system.
  • Data within the ABM can be aggregated to infer more macro or top-down system behaviour.

Agents within an ABM can make decisions, engage in behaviour defined by simple rules and attributes, and learn from experience and from feedback from interactions with other agents or the modelled environment. This is as true in models of human systems as it is in molecular-scale systems: in both, agents can take part in communication on a one-to-one, one-to-many and one-to-location basis. Previously popular techniques such as discrete event simulation (DES) modelled passive agents at finite time points, rather than the active “decision makers” acting over dynamic periods that are a feature of ABMs.
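A minimal ABM along these lines might look as follows. The agent rules, contact structure and parameters are invented for illustration, not drawn from any validated model.

```python
import random

random.seed(7)

# Minimal agent-based sketch: each agent carries a state and a simple rule.
class Agent:
    def __init__(self):
        self.state = "S"            # susceptible

    def interact(self, other, p_transmit=0.05):
        # Rule: contact with an infected agent may transmit infection.
        if self.state == "S" and other.state == "I" and random.random() < p_transmit:
            self.state = "I"

agents = [Agent() for _ in range(500)]
for a in agents[:5]:
    a.state = "I"                   # seed a few infected agents

for step in range(100):
    for a in agents:
        a.interact(random.choice(agents))   # random mixing between agents
    for a in agents:
        if a.state == "I" and random.random() < 0.02:
            a.state = "R"           # recovered, no longer infectious

# Aggregating agent states recovers the macro (top-down) behaviour.
counts = {s: sum(a.state == s for a in agents) for s in ("S", "I", "R")}
```

Unlike the SDM version, nothing here prescribes the aggregate curve: the macro pattern emerges from repeated micro-level interactions, which is exactly the bottom-up perspective the bullets describe.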

Hybrid Models

  • ABM and SDM are complementary techniques for simulating micro- and macro-level behaviour of complex adaptive systems, and thereby for exploratory analysis of such systems.
  • Hybrid models emulate individual agent variability as well as variability in the behaviour of the aggregate entities those agents compose.
  • They simulate macro- and micro-level system behaviour in many areas of investigation, such as health service provision and biomedical science.

Hybrid models combine two or more types of simulation within the same model. They can combine SDMs and ABMs, or other techniques, to address both top-down and bottom-up (macro and micro) dynamics in a single model that more closely captures whole-system behaviour. This has the potential to alleviate many of the trade-offs necessary when using one simulation type alone.

As software capability develops, we are seeing increased application of hybrid modelling techniques. Previously widespread techniques such as DES and Markov models, which are one-dimensional, uni-directional and linear, are proving inadequate for modelling the complex, adaptive and dynamic world we inhabit.

Model Validation Techniques

SDMs and ABMs are not fitted to observed data; instead they use both qualitative and quantitative real-world data to inform and develop the model and its parameters as a simulation of real-world phenomena. For this reason, model validation of SDMs and ABMs should be even more rigorous than for more traditional approaches such as maximum likelihood or least squares estimation. Sensitivity analysis and validation tests, such as behavioural validity tests, can be used to compare model output against real-world data from organisations or experiments relevant to the scale of the model being validated.

Validation of the model structure includes:

  • Checking how the model behaves when subjected to extreme parameter values.
  • Checking dimensional consistency, boundary adequacy and mass balance.
  • Sensitivity analysis: how sensitive is the model to changes in key parameters?

Network Analysis

Data accrual from diverse data sources challenges and limitations

While complex systems theory has its origins in the mathematics of chaos theory, there are many contemporary examples where complex systems theory has been divorced from mathematical and statistical modelling and applied in diverse fields such as business, healthcare and social services provision. Mathematical modelling adds validity to complex systems analysis. The problem with purely qualitative analysis, lacking the empiricism of mathematical modelling, simulation and checking against a variety of real-world data sets, is that the results remain empirically ungrounded.

Complex Adaptive Systems (CAS) Approach to Biomedicine & Public Health

While the majority of biomedical and public health research still takes a linear, reductive approach to arrive at empirical insight, reality is in most cases neither linear nor reductive. A complex adaptive systems approach, like reality, is non-linear and high-dimensional. There are benefits to a linear cause-effect reductionist approach, in that a complex problem and its solution become simplified into terms that can be understood and predicted. Where this falls short is that the predictions often do not hold up in real-world settings, where outcomes tend to seem unpredictable.

Genomics, proteomics, transcriptomics and other “omics” techniques have generated an unprecedented amount of molecular genetics data. This data can be combined with larger scale data, from cell processes to environmental influences in disease states, to produce highly sophisticated multi-level models of contributing factors in human health and disease.

An area that is currently evolving towards a more personalised, nuanced approach, albeit still a linear one, is clinical trials. By introducing a biomarker component to clinical trials, for example to evaluate drug efficacy, the number of dimensions of the problem is slightly increased in order to arrive at more targeted and accurate solutions. More specifically, the number of patient sub-categories in the trial increases to accommodate various biomarker groups, which may respond more or less well to different pharmacological approaches to the same disease. Increasing the dimensions of the problem beyond this would, for now, not be feasible or even helpful. On the other hand, understanding the interplay between biomolecular processes and environmental interactions, in order to gain insight into disease processes themselves and thereby into which biochemical pathways oncology drugs should target, is something that clearly benefits from a non-linear approach.

Another example of a system that benefits from a non-linear approach is public health service provision and the desire to garner insights into changes that increase prevention, early intervention and treatment effectiveness as well as reduce service cost for the government and patient. Both of the above examples require attention to both macro and micro processes.

Some components of complex adaptive systems: connectivity, self-organisation, emergence, fractal patterns, non-linearity, feedback loops, adaptation, nested systems, stochasticity, simple rules, chaotic behaviour, iteration, sub-optimality and requisite variety; such systems may be considered optimised at the edge of chaos.

Whether modelling clinical health services networks or biological processes, complex adaptive systems consist of several common characteristics.

Components of complex adaptive systems

Massive interdependencies and elaborate connectivity

The complex adaptive systems approach shifts emphasis away from studying individual parts (as in conventional medical science, which produces notably fragmented results) towards characterising the organisation of these parts in terms of their inherently dynamic interactions. CAS are open rather than closed systems, because it is exogenous elements impacting on the system that cause the disruption required for growth.

Complex adaptive systems can be understood by relations and networks. System processes occur in networks that link component entities or agents. This approach emphasises that structures are dynamic and it is the process of becoming rather than the being itself that is of empirical interest.

Necessarily transdisciplinary or multi-disciplinary

A complex adaptive systems approach is necessarily transdisciplinary: numerous disparate experts must collaborate to combine myriad biological, physical and societal sciences into a holistic model. This model should aim to represent pertinent simultaneous top-down and bottom-up processes that reveal contexts and relationships within the observed system dynamics.

Self-organising, emergent behaviour

Complex adaptive systems are self-organising in the sense that observed patterns are open-ended, potentially unfinished and cannot be predicted in the conventional sense. Rules of cause and effect are context-dependent and cannot be applied rigidly.

A self-organising dynamic structure, identifiable as a pattern, emerges from spontaneous interactions between individual agents or elements. This pattern then shapes the interactions of individuals in a continual top-down, bottom-up symbiosis.

While linear models represent a reductionist, closed conceptualisation of the phenomena under analysis, a complex systems approach embraces high dimensionality true to the myriad real world phenomena composing a system. This requires that the system be treated as open and of uncertain ontology and thus lacking predictive capacity with regards to the outcomes of system dynamics.

As an emergent phenomenon, a complex adaptive system can be understood by interacting with it rather than through analysis or static modelling alone. This approach is concerned with “state change”, with evaluating “how things are becoming” rather than “how things are”: how did today’s state emerge from yesterday’s trajectories and process dynamics?

Fractal engagement entails that the system as a whole orientates through multiple actions. The same data can produce frameworks at the level of responsibility of every individual agent. Using a public health intervention as an example, individual agents decide, based on the data, what changes they can make tomorrow within their own sphere of competence, rather than overarching changes being dictated in a top-down way or determined by others.

Feedback loops

Feedback loops link individual parts into an overarching dynamic structure, and can be positive (self-reinforcing) or negative (self-correcting).

Negative feedback loops are stabilising: they dampen oscillations and move the system or component closer to equilibrium. Positive feedback loops are morphogenic: they increase the frequency and amplitude of oscillations, driving the system away from homeostasis and leading to changes in the underlying structure of the system.

Positive feedback loops, while facilitating growth and adaptation, tend towards chaos and decay, and are thus crucially counterbalanced by simultaneously operating negative feedback loops. Evolution is thought to occur as a series of phase transitions, back and forth, between ordered and disordered states.
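The contrast between the two loop types can be seen in a toy pair of difference equations; the coefficients below are arbitrary.

```python
# Illustrative difference equations contrasting the two loop types.
# x_next = x + k * (target - x): negative feedback pulls x toward a target.
# y_next = y + k * y: positive feedback amplifies any deviation from zero.
k, target = 0.2, 10.0
x, y = 0.0, 1.0
for _ in range(50):
    x += k * (target - x)   # dampening: the gap to the target shrinks each step
    y += k * y              # morphogenic: the value grows geometrically

# x has settled near the equilibrium (10.0); y has grown far beyond its start.
```

The same qualitative behaviour, damping toward equilibrium versus runaway growth, appears however the loops are embedded in a larger model.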

Both top-down and bottom-up “causality”

While CAS models describe elements in terms of possibilities and probabilities, rather than cause and effect in the linear sense, there is a clear interplay between top-down and bottom-up causality and influence on the dynamic flows and trajectories of any system. This very much mirrors real-world systems. One example is the human body, where both conscious thought (top-down) and biomolecular processes such as hormonal and neurochemical fluctuations (bottom-up) affect mood, which in turn has many downstream flow-on effects that shift the system back and forth between health and disease. One manifestation of this is stress-induced illness of various kinds. As a social example, many instances of top-down and bottom-up causation can be found in public health and epidemiological settings.

This has been a non-exhaustive description of just some key components of complex adaptive systems. The main purpose is to differentiate the CAS paradigm from the more mainstream biomedical research paradigm and approach. For a deeper dive into the concepts mentioned see the references below.

References:

https://core-cms.prod.aop.cambridge.org/core/services/aop-cambridge-core/content/view/F6F59CA8879515E3178770111717455A/9781108498692c7_100-118.pdf/role_of_the_complex_adaptive_systems_approach.pdf

Carmichael T., Hadžikadić M. (2019) The Fundamentals of Complex Adaptive Systems. In: Carmichael T., Collins A., Hadžikadić M. (eds) Complex Adaptive Systems. Understanding Complex Systems. Springer, Cham. https://doi.org/10.1007/978-3-030-20309-2_1

https://www.health.org.uk/sites/default/files/ComplexAdaptiveSystems.pdf

Milanesi, L., Romano, P., Castellani, G., Remondini, D., & Liò, P. (2009). Trends in modeling Biomedical Complex Systems. BMC Bioinformatics, 10(Suppl 12), I1. https://doi.org/10.1186/1471-2105-10-S12-I1

Sturmberg, J. P. (2021). Health and Disease Are Dynamic Complex-Adaptive States: Implications for Practice and Research. Frontiers in Psychiatry, 12, 595124. https://doi.org/10.3389/fpsyt.2021.595124