Checklist for proactive regulatory compliance in medical device R&D projects

Meeting regulatory compliance in medical device research and development (R&D) is crucial to ensure the safety, efficacy, and quality of the device. Here are some strategies to help achieve regulatory compliance:

  1. Early Involvement of Regulatory Experts: Engage regulatory experts early in the R&D process. Their insights can guide decision-making and help identify potential regulatory hurdles from the outset. This proactive approach allows for timely adjustments to the development plan to meet compliance requirements.
  2. Stay Updated with Regulations: Medical device regulations are continually evolving. Stay abreast of changes in relevant regulatory guidelines, standards, and requirements in the target markets. Regularly monitor updates from regulatory authorities to ensure that the R&D process aligns with the latest compliance expectations.
  3. Build a Strong Regulatory Team: Assemble a team of professionals with expertise in regulatory affairs and compliance. This team should collaborate closely with R&D, quality, and manufacturing teams to ensure that compliance considerations are integrated throughout the product development lifecycle.
  4. Conduct Regulatory Gap Analysis: Perform a comprehensive gap analysis to identify any discrepancies between current practices and regulatory requirements. Address the gaps proactively to avoid potential compliance issues later in the development process.
  5. Implement a Quality Management System (QMS): Establish a robust QMS compliant with relevant international standards, such as ISO 13485. The QMS should cover all aspects of medical device development, from design controls to risk management and post-market surveillance.
  6. Adopt Design Controls: Implement design controls, as per regulatory guidelines (e.g., FDA Design Controls). This ensures that the R&D process is well-documented, and design changes are carefully managed and validated.
  7. Risk Management: Conduct thorough risk assessments and establish a risk management process. Identify potential hazards, estimate risk levels, and implement risk mitigation strategies throughout the R&D process.
  8. Clinical Trials and Data Collection: If required, plan and conduct clinical trials to collect essential data on safety and performance. Ensure that clinical trial protocols comply with regulatory requirements, and obtain appropriate ethics committee approvals.
  9. Preparation for Regulatory Submissions: Early preparation for regulatory submissions, such as pre-submissions (pre-IDE or pre-CE marking) or marketing applications, is essential. Compile all necessary documentation, including technical files, to support regulatory approvals.
  10. Engage with Regulatory Authorities: Maintain open communication with regulatory authorities throughout the development process. Seek feedback, clarify uncertainties, and address any questions or concerns to facilitate a smoother regulatory review.
  11. Post-Market Surveillance: Plan post-market surveillance activities to monitor the device’s performance and safety after commercialisation. This ongoing data collection ensures compliance with post-market requirements and facilitates timely response to adverse events.
  12. Training and Education: Provide continuous training and education to the R&D team and other stakeholders on regulatory requirements and compliance expectations. This ensures that all members are aware of their responsibilities in maintaining regulatory compliance.
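The risk management step (item 7) is often operationalised as a severity-by-probability matrix in the spirit of ISO 14971. As a minimal sketch (the scales, scores, and acceptability threshold below are illustrative assumptions, not values from any standard):

```python
# Hypothetical 4x4 risk matrix in the spirit of ISO 14971-style risk analysis.
# The scales and the acceptability threshold are illustrative assumptions only.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def assess_risk(severity: str, probability: str, threshold: int = 6):
    """Return a risk score and whether mitigation is required."""
    score = SEVERITY[severity] * PROBABILITY[probability]
    verdict = "acceptable" if score < threshold else "requires mitigation"
    return score, verdict

# Example: a serious harm occurring occasionally scores 9 -> mitigation needed.
print(assess_risk("serious", "occasional"))  # (9, 'requires mitigation')
```

In practice each identified hazard would be logged with its score before and after mitigation, giving an auditable record of the risk management process.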

By implementing these strategies, medical device R&D teams can navigate the complex landscape of regulatory compliance more effectively. Compliance not only ensures successful product development but also builds trust with customers, stakeholders, and regulatory authorities, paving the way for successful market entry and long-term success in the medical device industry.

Biostatistics checklist for regulatory compliance in clinical trials

  1. Early Biostatistical Involvement: Engage biostatisticians from the outset to ensure proper study design, data collection, and statistical planning that align with regulatory requirements.
  2. Compliance with Regulatory Guidelines: Stay updated with relevant regulatory guidelines (e.g., ICH E9, FDA guidance) to ensure statistical methods and analyses comply with current standards.
  3. Sample Size Calculation: Perform accurate sample size calculations to ensure the study has sufficient statistical power to detect clinically meaningful effects.
  4. Randomisation and Blinding: Implement appropriate randomisation methods and blinding procedures to minimise bias and ensure the integrity of the study.
  5. Data Quality Assurance: Establish data quality assurance processes, including data monitoring, validation, and query resolution, to ensure data integrity.
  6. Handling Missing Data: Develop strategies for handling missing data in compliance with regulatory expectations to maintain the validity of the analysis.
  7. Adherence to SAP: Strictly adhere to the Statistical Analysis Plan (SAP) to maintain transparency and ensure consistency in the analysis.
  8. Statistical Analysis and Interpretation: Conduct rigorous statistical analyses and provide accurate interpretation of the results, aligning with the study objectives and regulatory requirements.
  9. Interim Analysis (if applicable): Implement interim analysis following the SAP, if required, to monitor study progress and make data-driven decisions.
  10. Data Transparency and Traceability: Ensure data transparency and traceability through clear documentation, well-organised datasets, and proper archiving practices.
  11. Regulatory Submissions: Provide statistical sections for regulatory submissions, such as Clinical Study Reports (CSRs) or Integrated Summaries of Safety and Efficacy, as per regulatory requirements.
  12. Data Security and Privacy: Implement measures to protect data security and privacy, complying with relevant data protection regulations.
  13. Post-Market Data Analysis: Plan for post-market data analysis to assess long-term safety and effectiveness, as required by regulatory authorities.
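To make the sample size step (item 3) concrete, here is a minimal sketch of a per-group sample size for comparing two proportions using the classic normal-approximation formula; the 60% versus 45% success rates are hypothetical placeholders, not values from the text:

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sided comparison of two proportions,
    using the standard normal-approximation formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)            # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: detect 60% vs 45% success with 80% power.
print(n_per_group(0.60, 0.45))  # 173 per group
```

A real calculation would use clinically justified effect sizes and would typically be cross-checked against dedicated software before being written into the protocol.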

By following this checklist, biostatisticians can play a pivotal role in ensuring that clinical trial data meet regulatory requirements and maintain their integrity, contributing to the overall success of the regulatory process for medical products.

The Call for Responsible Regulations in Medical Device Innovation

In the fast-paced world of medical technology, the quest for innovation is ever-present. However, it is crucial to recognise that the engineering of medical devices must not mirror the recklessness and hubris of exploratory engineering exemplified by the recent OceanGate tragedy, in which the stubborn blinkeredness of figures like Stockton Rush was not kept in check by sufficiently stringent regulations and safety standards. While it may seem in poor taste to criticise someone who has lost his life under such tragic circumstances, the incident is emblematic of everything that can go wrong when the hubris of the innovator is left relatively unbridled in the service of short-term commercial gains. More troubling in this case is that American safety standards were in place to protect human life; however, the company was able to operate outside United States jurisdiction in order to bypass those standards. Fortunately, most medical device patients will not be receiving treatment over international waters. Nevertheless, loopholes remain to be closed.

The jurisdictional loophole of “export only” medical device approval

As of 2022, the United States accounts for 41.8% of global medical device sales revenue. Ten percent of Americans currently have a medical device implanted, and 80,000 Americans have died as a result of medical devices over the past 10 years. Interestingly, Americans have the 46th-highest life expectancy in the world despite having disproportionately high access to the most advanced medical treatments, including medical devices. Perhaps more worryingly, thousands of medical devices manufactured in the United States are FDA-approved for “Export Only”, meaning they do not pass muster for use by American citizens. This “Export Only” status is one factor that partially accounts for America’s disproportionate share of the global medical device market. Foreign recipients of such devices are often in developed countries with their own high regulatory standards, such as Australia, the United Kingdom, and the countries of the European Union, which have accepted the device based on its stamp of approval by the FDA. Patients in these countries are typically not made aware of the particular risks, are not told why the device has not been approved for use in the United States, nor even that it failed to gain approval in its country of origin.

Local regulators such as the TGA in Australia, the MHRA in the UK, and the notified bodies operating under the MDR in Europe all claim to have some of the most stringent regulatory standards in the world. Despite this, American devices designated “Export Only” by the FDA (roughly 4,600 in total) are approved overseas predominantly because of differential device classification between the FDA and the importing country. By assigning a less risky class in the importing country, the device escapes the need for clinical trials and the high level of regulatory scrutiny it was subject to in the United States. While devices that include medicines, tissues, or cells are designated high risk in Australia and require thorough clinical validation, implantable devices, for example, can be accepted by the TGA on the basis of a CE mark alone. This means that an implantable device such as a titanium shoulder replacement that has failed clinical studies in the United States and received an “Export Only” designation from the FDA can be approved by the TGA or under the MDR with very little burden of evidence.

Regulatory standards must begin to evolve at the pace of technology

Of equal concern is the need for regulatory standards that dynamically keep up with the pace of innovation and the emergent complexity of the devices we are now on a trajectory to engineer.

It is no longer enough simply to prioritise safety, regulation, and stringent quality control standards; we now need regular re-assessments of the standards themselves to evaluate whether they in fact remain adequate to assess the novel case at hand. In many cases, even with current devices under validation, the answer could well be “no”. It is quite possible that methods that would previously have seemed beyond consideration in the context of medical device evaluation, such as causal inference and agent-based models, may now become integrated into many a study protocol. Bayesian methods are also becoming increasingly important as a way of calibrating to increasing device complexity.

When the stakes involve devices implanted in people’s bodies or software making life-altering decisions, the need for responsible innovation becomes paramount.

If an implantable device also has a software component, the need for caution increases, and exponentially so if the software is driven by AI. As these and other hybrid devices become the norm, there is a need to test and thoroughly validate the reliability of the machine learning or AI algorithms used in the device; the failure rate of the software, and how this rate changes over time; the software’s security and its susceptibility to hacking or unintended influence from external stimuli; as well as the many metrics of safety and efficacy of the physical device itself.

The Perils of Recklessness:

Known for his audacious approach to deep-sea exploration, Stockton Rush has become a symbol of recklessness and disregard for safety protocols. While such daring may be thrilling in certain fields, it has no place in the medical device industry. Creating devices that directly impact human lives demands meticulous attention to detail, adherence to rigorous safety standards, and a focus on patient welfare.

There have been several class-action lawsuits in recent years related to medical device misadventure. Behemoth Johnson & Johnson has been subject to several such lawsuits pertaining to its medical devices. A recent lawsuit brought against the company, along with five other vaginal mesh manufacturers, established that 4,000 adverse events had been reported to the FDA, including serious and permanent injury leading to loss of quality of life. Another recent class-action lawsuit relates to Johnson & Johnson surgical tools said to have caused burn injuries to at least 63 adults and children. These incidents are likely the result of recklessness in pushing products to market and would have been avoidable had the companies involved conducted proper and thorough testing in both animals and humans. Proper testing occurs as much on the data side as in the lab and entails maintaining data integrity and statistical accuracy at all times.

Apple has recently been subject to legal action over its racially biased blood oxygen sensor which, as with similar devices by other manufacturers, detects blood oxygen more accurately for lighter-skinned people than for darker-skinned people. Dark skin absorbs more light and can therefore give falsely elevated blood oxygen readings. It is being argued that users believing their blood oxygen levels to be higher than they actually were contributed to higher incidences of death in this demographic, particularly during the pandemic. This lawsuit could likely have been avoided if the company had conducted more stringent clinical trials that recruited a broad spectrum of participants and stratified subjects by skin tone to fairly evaluate any differences in performance. If differences were identified, they should also have been transparently reported on the product label, if not also discussed openly in sales material, so that consumers could make an informed decision as to whether the watch was a good choice for them based on their own skin tone.
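The stratified evaluation described above can be sketched in a few lines. The readings below are simulated under an assumed bias pattern purely for illustration; a real validation study would compare paired device and arterial blood gas measurements:

```python
import random
from statistics import mean

random.seed(1)

def simulate_readings(true_bias: float, n: int = 200):
    """Simulated (device, reference) SpO2 pairs.
    true_bias is a hypothetical per-stratum device bias, in percentage points."""
    return [(ref + random.gauss(true_bias, 1.0), ref)
            for ref in (random.uniform(90, 99) for _ in range(n))]

# Hypothetical strata and bias values, for illustration only.
strata = {"lighter skin": simulate_readings(0.5),
          "darker skin": simulate_readings(2.0)}

# Mean device-minus-reference bias per stratum: the comparison a fair
# clinical evaluation would report (and, if unequal, disclose on the label).
bias_by_stratum = {name: round(mean(dev - ref for dev, ref in pairs), 2)
                   for name, pairs in strata.items()}
print(bias_by_stratum)
```

With stratified recruitment, a per-stratum summary like this makes any differential performance visible instead of being averaged away in a pooled accuracy figure.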

Ensuring Regulatory Oversight:

To prevent a medtech catastrophe of unimagined proportions, robust regulation and vigilant oversight are crucial as we move into a new technological era: not just to redress current inadequacies in patient safeguarding but also to prepare for new ones. While innovation and novel ideas drive progress, they must be tempered with accountability. Regulatory bodies play a vital role in enforcing safety guidelines, conducting thorough evaluations, and certifying the efficacy of medical devices before they reach the market. Striking the right balance between promoting innovation and safeguarding patient well-being is essential for the industry’s long-term success.

Any device given “Export Only” status by the FDA, or indeed by any other regulatory authority, should necessitate further regulatory testing in the jurisdictions in which it is intended to be sold and should be flagged by local regulatory agencies as insufficiently validated. Currently this seems to be happening more in word than in deed in many jurisdictions.

Stringent Quality Control Standards:

The gravity of medical device development calls for stringent quality control standards. Every stage of the development process, from design and manufacturing to post-market surveillance, must prioritise safety, reliability, and effectiveness. Employing best practices, such as adherence to recognised international standards, robust testing protocols, and continuous monitoring, helps identify and address potential risks early on, ensuring patient safety remains paramount.

Putting Patients First:

Above all, the focus of medical device developers should always be on patients. These devices are designed to improve health outcomes, alleviate suffering, and save lives. A single flaw or an overlooked risk could have devastating consequences. Therefore, a culture that fosters a sense of responsibility towards patients is vital. Developers must empathise with the individuals who rely on these devices and remain dedicated to continuous improvement, addressing feedback, and learning from past mistakes.

Making patient safety the very top priority is the only way to avoid the costly lawsuits and bad publicity that stem from a therapeutic device released onto the market too early in pursuit of short-term financial gain. While product development and proper validation is an expensive and resource-consuming process, cutting corners early on will inevitably lead to ramifications at a later stage of the product life cycle.

Allowing overseas patients access to “Export Only” medical devices is attractive to the manufacturers because it allows data to be collected from the international patients who use the device, which can later be used as further evidence of safety in subsequent applications to the FDA for full regulatory approval. This may not always be an acceptable risk profile for the patients who stand to be harmed. Another benefit of “Export Only” status to American device companies is that marketing the device overseas can bring in much-needed revenue that enables further R&D refinement and clinical evaluation that may eventually result in FDA approval domestically. Ultimately, it is the responsibility of national regulatory agencies globally to maintain strict classification and clinical evidence standards lest their citizens become unwitting guinea pigs.

Collaboration and Transparency:

The medical device industry should embrace a culture of collaboration and transparency. Sharing knowledge, research, and lessons learned can help prevent the repetition of past mistakes. Open dialogue among developers, regulators, healthcare professionals, and patients ensures a holistic approach to device development, wherein diverse perspectives contribute to better, safer solutions. This collaborative mindset can serve as a safeguard against the emergence of reckless practices.

The risks associated with medical devices demand a paradigm shift within the industry. Developers must strive to distance themselves from the medtech version of OceanGate and instead embrace responsible innovation. Rigorous regulation, stringent quality control standards, and a relentless focus on patient safety should be the guiding principles of medical device development. By prioritising patient well-being and adopting a culture of transparency and collaboration, the industry can continue to advance while ensuring that every device that enters the market has been meticulously evaluated and designed with the utmost care.

Further reading:

Law of the Sea and the Titan incident: The legal loophole for underwater vehicles – EJIL: Talk! (ejiltalk.org)

Drugs and Devices: Comparison of European and U.S. Approval Processes – ScienceDirect

https://www.theregreview.org/2021/10/27/salazar-addressing-medical-device-safety-crisis/

https://www.medtechdive.com/news/medtech-regulation-FDA-EU-MDR-2023-Outlook/641302/

https://www.marketdataforecast.com/market-reports/Medical-Devices-Market

FDA Permits ‘Export Only’ Medical Devices | Industrial Equipment News (ien.com)

FDA issues ‘most serious’ recall over Johnson & Johnson surgical tools (msn.com)

Jury Award in Vaginal Mesh Lawsuit Could Open Flood Gates | mddionline.com

Lawsuit alleges Apple Watch’s blood oxygen sensor ‘racially biased’; accuracy problems reported industry-wide – ABC News (inferse.com)

Effective Strategies for Regulatory Compliance

1. Establish a Regulatory Compliance Plan: Develop a comprehensive plan that outlines the regulatory requirements and compliance strategies for each stage of the product development process.

2. Engage with Regulatory Authorities Early: Build relationships with regulatory authorities and engage with them early in the product development process to ensure that all requirements are met.

3. Conduct Risk Assessments: Identify potential risks and hazards associated with the product and develop risk management strategies to mitigate those risks.

4. Implement Quality Management Systems: Establish quality management systems that ensure compliance with regulatory requirements and promote continuous improvement.

5. Document Everything: Maintain detailed records of all activities related to the product development process, including design, testing, and manufacturing, to demonstrate compliance with regulatory requirements.

Stata: Statistical Software for Regulatory Compliance in Clinical Trials

Stata is widely used in various research domains such as economics, biosciences, health and social sciences, including clinical trials. It has been utilised for decades in studies published in reputable scientific journals. While SAS has a longer history of being explicitly referenced by regulatory agencies such as the FDA, Stata can still meet regulatory compliance requirements in clinical trials. StataCorp actively engages with researchers, regulatory agencies, and industry professionals to address compliance needs and provide technical support, thereby maintaining a strong commitment to producing high-quality software and staying up to date with industry standards.

Stata’s commitment to accuracy, comprehensive documentation, integrated versioning, and rigorous certification processes provides researchers with a reliable and compliant statistical software for regulatory submissions. Stata’s worldwide reputation, excellent technical support, seamless verification of data integrity, and ease of obtaining updates further contribute to its suitability for clinical trials and regulatory compliance.

To facilitate regulatory compliance in clinical trials, Stata offers features such as data documentation and audit trails, allowing researchers to document and track data manipulation steps for reproducibility and transparency. Stata’s built-in “do-files” and “log-files” can capture commands and results, aiding the audit trail process. Stata provides the flexibility to generate analysis outputs and tables in formats commonly required for regulatory reporting (e.g., PDF, Excel, or CSV). It also enables the automation of reproducible, fully formatted, publication-standard reports. Strong TLF (tables, listings, and figures) and CRF (case report form) programming was long the domain of SAS, which explains its early industry dominance. SAS was developed in 1966 with funding from the National Institutes of Health. In recent years, however, Stata has arguably surpassed what is achievable in SAS with the same efficiency, particularly in the context of clinical trials.

Stata has extensive documentation of adaptive clinical trial design. Adaptive group sequential designs can be achieved using the GSD functionality. The default graphs and tables produced in a GSD analysis leave SAS in the dust, being more visually appealing and more easily interpretable. They are also more customisable than what can be produced in SAS. Furthermore, the Stata syntax used to produce them is minimal compared with the corresponding SAS commands, while still retaining full reproducibility.

Stata’s comprehensive causal inference suite enables experimental-style causal effects to be derived from observational data. This can be helpful in planning clinical trials based on observed patient data that is already available, with the process being fully documentable.

Advanced data science methods are increasingly used in clinical trial design and planning as well as for follow-up exploratory analysis of clinical trial data. Stata has had both supervised and unsupervised machine learning capability in its own right for decades. Stata can also integrate with other tools and programming languages, such as Python via PyStata and PyTrials, if additional functionality or specific formats are needed. This can be instrumental for advanced machine learning and other data science methods that go beyond native features and user-made packages in terms of customisability. Furthermore, using Python within the Stata interface allows for compliant documentation of all analyses. Python integration is also available in SAS via numerous packages and can eliminate some of the limitations of native SAS, particularly when it comes to graphical outputs.

Stata for FDA regulatory compliance

While the FDA does not mandate the use of any specific statistical software, it emphasises the need for reliable software with appropriate documentation of testing procedures. Stata satisfies the requirements of the FDA and is recognised as one of the most respected and validated statistical tools for analysing clinical trial data across all phases, from pre-clinical to phase IV trials. With Stata’s extensive suite of statistical methods, data management capabilities, and graphics tools, researchers can rely on accurate and reproducible results at every step of the analysis process.

When it comes to FDA guidelines on statistical software, Stata offers features that assist in compliance. Stata provides an intuitive Installation Qualification tool that generates a report suitable for submission to regulatory agencies like the FDA. This report verifies that Stata has been installed properly, ensuring that the software meets the necessary standards.

Stata offers several key advantages when it comes to FDA regulatory compliance for clinical trials. Stata takes reproducibility seriously and is the only statistical package with integrated versioning. This means that if you wrote a script to perform an analysis in 1985, that same script will still run and produce the same results today. Stata ensures the integrity and consistency of results over time, providing reassurance when submitting applications that rely on data and results from clinical trials.

Stata also offers comprehensive manuals that detail the syntax, use, formulas, references, and examples for all commands in the software. These manuals provide researchers with extensive documentation, aiding in the verification and validity of data and analyses required by the FDA and other regulatory agencies.

To further ensure computational validity, Stata undergoes extensive software certification testing. Millions of lines of certification code are run on all supported platforms (Windows, Mac, Linux) with each release and update. Any discrepancies or changes in results, output, behaviour, or performance are thoroughly reviewed by statisticians and software engineers before the updated software is made available to users. Stata’s accuracy is also verified through the National Institute of Standards and Technology (NIST) StRD numerical accuracy tests and George Marsaglia’s Diehard random-number generator tests.

Data management in Stata

Stata’s Datasignature Suite and other similar features offer powerful tools for data validation, quality control, and documentation. These features enable users to thoroughly examine and understand their datasets, ensuring data integrity and facilitating transparent research practices. Let’s explore some of these capabilities:

  1. Datasignature Suite:

The Datasignature Suite is a collection of commands in Stata that assists in data validation and documentation. The `datasignature` command computes and stores a checksum (signature) of the dataset so that any subsequent change to the data, intentional or accidental, can be detected, while `dataex` produces self-contained, reproducible data examples for sharing. Together with summary commands such as `codebook` and `inspect`, these tools help identify inconsistencies, outliers, and potential errors in the data, allowing users to take appropriate corrective measures.
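The core idea behind `datasignature` (a checksum that changes whenever the data change) can be illustrated outside Stata. The following Python sketch is a simplified analogue, not Stata's actual algorithm:

```python
import csv
import hashlib
import io

def data_signature(rows) -> str:
    """Checksum over a dataset, in the spirit of Stata's datasignature.
    (Simplified illustration; not the algorithm Stata actually uses.)"""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return hashlib.sha256(buf.getvalue().encode()).hexdigest()

rows = [["id", "dose"], [1, 10], [2, 20]]
sig = data_signature(rows)
assert sig == data_signature(rows)       # unchanged data -> identical signature
assert sig != data_signature(rows[:2])   # dropping a row changes the signature
```

Storing the signature alongside the analysis log gives a simple, auditable way to confirm that the dataset used in a submission is byte-for-byte the one that was validated.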

2. Variable labelling:

 Stata allows users to assign meaningful labels to variables, enhancing data documentation and interpretation. With the `label variable` command, users can provide descriptive labels to variables, making it easier to understand their purpose and content. This feature improves collaboration among researchers and ensures that the dataset remains comprehensible even when shared with others.

3. Value labels:

 In addition to variable labels, Stata supports value labels. Researchers can assign descriptive labels to specific values within a variable, transforming cryptic codes into meaningful categories. Value labels enhance data interpretation and eliminate the need for constant reference to codebooks or data dictionaries.

4. Data documentation:

Stata encourages comprehensive data documentation through features like variable and dataset-level documentation. Users can attach detailed notes and explanations to variables, datasets, or even individual observations, providing context and aiding in data exploration and analysis. Proper documentation ensures transparency, reproducibility, and facilitates data sharing within research teams or with other stakeholders.

5. Data transformation:

Stata provides a wide range of data transformation capabilities, enabling users to manipulate variables, create new variables, and reshape datasets. These transformations facilitate data cleaning, preparation, and restructuring, ensuring data compatibility with statistical analyses and modelling procedures.

6. Data merging and appending:

Stata allows users to combine multiple datasets through merging and appending operations. By matching observations based on common identifiers, researchers can consolidate data from different sources or time periods, facilitating longitudinal or cross-sectional analyses. This feature is particularly useful when dealing with complex study designs or when merging administrative or survey datasets.
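The behaviour of a Stata-style `merge 1:1` on a common identifier, including the `_merge` status flag that Stata reports, can be sketched as follows (the dataset contents are hypothetical, and for simplicity this sketch keeps master rows only):

```python
def merge_one_to_one(master, using, key):
    """Minimal analogue of Stata's `merge 1:1 key using ...`:
    match rows on a unique identifier and flag the match status.
    (Simplified: unmatched rows from `using` are dropped.)"""
    using_idx = {row[key]: row for row in using}
    merged = []
    for row in master:
        match = using_idx.get(row[key])
        combined = {**row, **(match or {})}
        combined["_merge"] = "matched" if match else "master only"
        merged.append(combined)
    return merged

# Hypothetical baseline and follow-up records keyed on a patient id.
baseline = [{"id": 1, "age": 54}, {"id": 2, "age": 61}]
followup = [{"id": 1, "outcome": "improved"}]
print(merge_one_to_one(baseline, followup, "id"))
```

As in Stata, inspecting the `_merge` flag after combining sources is a quick integrity check that the expected observations actually matched.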

7. Data export and import:

Stata offers seamless integration with various file formats, allowing users to import data from external sources or export datasets for further analysis or sharing. Supported formats include Excel, CSV, SPSS, SAS, and more. This versatility enhances data interoperability and enables collaboration with researchers using different software.

These features collectively contribute to data management best practices, ensuring data quality, reproducibility, and documentation. By leveraging the Datasignature Suite and other data management capabilities in Stata, researchers can confidently analyse their data and produce reliable results while maintaining transparency and facilitating collaboration within the scientific community.

Stata and maintaining CDISC standards. How does it compare to SAS?

Stata and SAS are both statistical software packages commonly used in the fields of data analysis, including in the pharmaceutical and clinical research industries. While they share some similarities, there are notable differences between the two when it comes to working with CDISC standards:

  1. CDISC Support:

SAS has extensive built-in support for CDISC standards. SAS provides specific modules and tools, such as SAS Clinical Standards Toolkit, which offer comprehensive functionalities for CDASH, SDTM, and ADaM. These modules provide pre-defined templates, libraries, and validation rules, making it easier to implement CDISC standards directly within the SAS environment. Stata, on the other hand, does not have native, dedicated modules specifically designed for CDISC standards. However, Stata’s flexibility allows users to implement CDISC guidelines through custom programming and data manipulation.

2. Data Transformation:

SAS has robust built-in capabilities for transforming data into SDTM and ADaM formats. SAS provides specific procedures and functions tailored for SDTM and ADaM mappings, making it relatively straightforward to convert datasets into CDISC-compliant formats. Stata, while lacking specific CDISC-oriented features, offers powerful data manipulation functions that allow users to reshape, merge, and transform datasets. Stata users may need to develop custom programming code to achieve CDISC transformations.

3. Industry Adoption:

SAS has been widely adopted in the pharmaceutical industry and is often the preferred choice for CDISC-compliant data management and analysis. Many pharmaceutical companies, regulatory agencies, and clinical research organisations have established workflows and processes built around SAS for CDISC standards. Stata, although less commonly associated with CDISC implementation, is still a popular choice for statistical analysis across various fields, including healthcare and the social sciences. Stata has the potential to make adherence to CDISC standards more affordable for small companies, and therefore a higher priority.

4. Learning Curve and Community Support:

SAS has long been the default choice in the context of CDISC compliance and is what most statistical programmers are used to, so it is known for its comprehensive documentation and extensive user community. Resources include training materials, user forums, and user groups, which facilitate learning and support for CDISC-related tasks. Stata also has an active user community and provides detailed documentation, but its community is comparatively smaller in the context of CDISC-specific workflows. Stata’s concise syntax can, however, reduce the amount of programming required to achieve CDISC compliance, for example in the creation of SDTM and ADaM data sets.

While SAS offers dedicated modules and tools specifically designed for CDISC standards, Stata provides flexibility and powerful data manipulation capabilities that can be leveraged to implement CDISC guidelines. The choice between SAS and Stata for CDISC-related work may depend on factors such as industry norms, organizational preferences, existing infrastructure, and individual familiarity with the software.

While SAS has historically been more explicitly associated with regulatory compliance in the clinical trial domain, Stata is fully equipped to fulfil regulatory requirements and has been used effectively in clinical research for many years. Researchers often choose the software they are most comfortable with, weighing factors such as data analysis capabilities, familiarity, and support when deciding between SAS and Stata for their regulatory compliance needs.

It is important to note that compliance requirements can vary based on specific regulations and guidelines. Researchers are responsible for ensuring their analysis and reporting processes align with the appropriate regulatory standards and should consult relevant regulatory authorities when necessary.

The Devil’s Advocate: Stata for Clinical Study Design, Data Processing, & Statistical Analysis of Clinical Trials.

Stata is a powerful statistical analysis package that offers some advantages for clinical trial and medtech use cases compared with the more widely used SAS software. Stata provides an intuitive, user-friendly interface that facilitates efficient data management, data processing, and statistical analysis. Its agile, concise syntax allows for reproducible and transparent analyses, enhancing the overall research process with more readily accessible insights.

Distinct from R, which incorporates S-based coding, both Stata and SAS have used C-based programming languages since 1985. All three packages can run full Python within their environments for advanced machine learning capabilities, in addition to those available natively; in Stata’s case this is achieved through the pystata Python package. Despite a common C-based heritage, there are tangible differences between Stata and SAS syntax. Stata generally needs fewer lines of code than SAS to perform the same function and thus tends to be more concise. Stata also offers more flexibility in how you code, as well as more informative error statements, which makes debugging a quick and easy process, even for beginners.

When it comes to simulations and more advanced modelling, our experience has been that the Basic Edition of Stata (BE) is faster and uses less memory to perform the same task than Base SAS. Stata BE certainly has more inbuilt capability than you would ever need for the design and analysis of advanced clinical trials and sophisticated statistical modelling of all types. There is also the additional benefit of thousands of user-built packages, such as interfaces to the popular WinBUGS, that can be instantly installed as add-ons at no extra cost. Often these packages are designed to make existing Stata functions even more customisable, for immense flexibility and programming efficiency. Both Stata and SAS represent stability and reliability and have enjoyed widespread industry adoption: SAS has been more widely adopted by big pharma, and Stata more so in public health and economic modelling.

It has been nearly a decade since the Biostatistics Collaboration of Australia (BCA), which determines biostatistics education nationwide, transitioned from teaching SAS and R in its Masters of Biostatistics programs to teaching Stata and R. The transition was initially made in anticipation of an industry-wide shift from SAS to Stata. Whether that prediction was accurate or not, the case for Stata use in clinical trials remains strong.

Stata is almost certainly a superior option for bootstrapped life science start-ups and SMEs. Stata licensing fees are in the low hundreds of pounds, with the ability to purchase quickly via the Stata website, while SAS licensing fees span the tens to hundreds of thousands and often involve a drawn-out process just to obtain a precise quote.

Working with a CRO that is willing to use Stata means that you can easily re-run any syntax provided from the study analysis to verify or adapt it later. Of course, open-source software such as R is also available, however Stata has the advantage of a reduced learning curve being both user-friendly and sufficiently sophisticated.

Stata for clinical trials

  1. Industry Adoption:

Stata has gained significant popularity and widespread adoption in the field of clinical research. It is commonly used by researchers, statisticians, and healthcare professionals for the statistical analysis of clinical data.

2. Regulatory Compliance and CDISC standardisation:

Stata provides features and capabilities that support regulatory compliance requirements in clinical trials. While it may not have the same explicit recognition from CDISC as SAS, Stata does lend itself well to CDISC compliance and offers tools for documentation, data tracking, and audit trails to ensure transparency and reproducibility in analyses.

3. Comprehensive Statistical Procedures:

A key advantage of Stata is its extensive suite of built-in statistical functions and commands specifically designed for clinical trial data analysis. Stata offers a wide range of methods for handling missing data and performing power calculations, as well as a comprehensive set of methods for analysing clinical trial data: survival analysis, generalized linear models, mixed-effects models, causal inference, and Bayesian simulation for adaptive designs. Preparatory tasks for clinical trials such as meta-analysis, sample size calculation, and randomisation schedules are arguably easier to achieve in Stata than in SAS. These built-in functionalities empower researchers to conduct varied analyses within a single software environment.
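As a concrete illustration of the sample size arithmetic that such commands automate (Stata’s `power twomeans`, for example), the standard normal-approximation formula for a two-arm comparison of means can be sketched in a few lines. Python is used here purely as neutral, runnable pseudocode; exact t-based calculations in statistical packages give slightly larger numbers.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sided, two-sample
    comparison of means (normal approximation):
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / delta)^2"""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) ** 2) * (sd / delta) ** 2
    return math.ceil(n)

# e.g. to detect a 5-point difference with a standard deviation of 15:
print(n_per_group(delta=5, sd=15))  # 142 per arm under this approximation
```

Package implementations based on the t distribution typically report a figure a little higher than this approximation.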

4. Efficient Data Management:

Stata excels in delivering agile data management capabilities, enabling efficient data handling, cleaning, and manipulation. Its intuitive data manipulation commands allow researchers to perform complex transformations, merge datasets, handle missing data, and generate derived variables seamlessly.

Perhaps the greatest technical advantage of Stata over SAS in the context of clinical research is usability: the freedom to keep multiple data sets, each with its own separate analysis, open at the same time. While SAS can hold many data sets in memory for a single project, Stata can hold many data sets in siloed memory for simultaneous use in different windows, enabling you to view or work on several projects at once. This can simplify workflow: no data step is needed to identify which data set you are referring to, and the appropriate sections of any data set can be merged into the active project as required. Because the silos work much like tabs in a browser, the log, data, and output of one project never get mixed up with another. This is arguably an advantage for biostatisticians and researchers alike, who often need to compare unrelated data sets, or the statistical results of separate studies, side by side.

5. Interactive and Reproducible Analysis:

Stata provides an interactive programming environment that allows users to perform data analysis step by step. The built-in “do-file” functionality facilitates reproducibility by capturing all commands and results, ensuring transparency and auditability of the analysis process. The results and log window for each data set prints out the syntax required item by item; this syntax can easily be pasted into the do-file or the command line to edit or repeat the command with ease. SAS, on the other hand, tends to separate the results from the syntax used to derive them.

6. Graphics and Visualization:

While not traditionally known for its graphics, Stata offers a wide range of powerful and customizable graphical capabilities. Researchers can generate high-quality, publication-standard plots and charts of any description needed to visualise clinical trial results. Common examples include survival curves, forest plots, spaghetti plots, and diagnostic plots. Stata also has built-in options to perform all necessary assumption and model checking for statistical model development.

These visualisations facilitate the exploration of complex data patterns, as well as the presentation and communication of findings. There are many user-created customisation add-ons for data visualisation that rival what is possible with R’s customisation.

The one area of Stata that users may find limiting is that only one graph can be displayed at a time per active data set. This means that you do need to copy graphs as they are produced and save them into a document in order to compare multiple graphs side by side.

7. Active User Community and Support:

Like SAS, Stata has a vibrant user community comprising researchers, statisticians, and experts who actively contribute to discussions, share knowledge, and provide support. StataCorp, the company behind Stata, offers comprehensive documentation, online resources, and user forums, ensuring users have access to valuable support and assistance when needed. Often the resources available for Stata are more direct and more easily searchable than those available for SAS when it comes to solving customisation quandaries. This is of course bolstered by the availability of myriad instantly installable package add-ons.

Stata’s active and supportive user community is a notable advantage. Researchers can access extensive documentation, online forums, and user-contributed packages, which promote knowledge sharing and facilitate problem-solving. Additionally, Stata’s reputable technical support ensures prompt assistance for any software-related queries or challenges.

While SAS and Stata have their respective strengths, Stata’s increasing industry adoption, statistical capabilities, data management features, reproducibility, visualisation add-ons, and support community make it a compelling choice for clinical trial data analysis.

As it stands, SAS remains the most widely used software in big pharma for clinical trial data analysis. Stata, however, offers distinct advantages in terms of user-friendliness, tailored statistical functionalities, advanced graphics, and a supportive user community. Consider adopting Stata to streamline your clinical trial analyses and unlock its vast potential for gaining insights from research outcomes. An in-depth overview of Stata 18 can be found here. A summary of its features for biostatisticians can be found here.

Further reading:

Using Stata for Handling CDISC Compliant Data Sets and Outputs (lexjansen.com)

P Values, Confidence Intervals and Clinical Trials

P values are so ubiquitous in clinical research that it’s easy to take for granted that they are being understood and interpreted correctly. After all, one might say, they are just simple proportions, and it’s not brain surgery. At times, however, it’s the simplest of things that are easiest to overlook. In fact, the definitions and interpretations of p values are sufficiently subtle that even a minute pivot from an exact definition can lead to interpretations that are wildly misleading.

In the case of clinical trials, p values have a momentous impact on decision making in terms of whether or not to pursue and invest further into the development and marketing of a given therapeutic. In the context of clinical practice p values drive treatment decisions for patients as they essentially comprise the foundational evidence upon which these treatment decisions are made. This is perhaps as it should be, as long as the definition of p values and their interpretations are sound.

A counter-point to this is the bias towards publishing only studies with a statistically significant p value, as well as the fact that many studies are not sufficiently reproducible or reproduced. This leaves clinicians with an impression that evidence for a given treatment is stronger than the full picture would suggest. This however is a publishing issue rather than an issue of significance tests themselves. This article focusses on interpretation issues only.

As p values apply to the interpretation of both parametric and non-parametric tests in much the same way, this article will focus on parametric examples.

Interpreting p values in superiority/difference study designs

This refers to studies where we are seeking to find a difference between two treatment groups or between a single group measured at two time points. In this case the null hypothesis is that there is no difference between the two treatment groups or no effect of the treatment, as the case may be.

According to the significance testing framework, all p values are calculated on the assumption that the null hypothesis is true. If a study yields a p value of 0.05, this means that, were the study to be repeated, we would expect to see a difference between the two groups at least as extreme as the observed effect 5% of the time. In other words, if there is no true difference between the two treatment groups and we ran the experiment 20 times on 20 independent samples from the same population, we would expect to see a result this extreme once out of the 20 times.

This of course is not a very helpful way of looking at things if our goal is to make a statement about treatment effectiveness. The inverse likely makes more intuitive sense: if we were to run this study 20 times on distinct patient samples from the same population, 19 out of 20 times we would not expect a result this extreme if there were no true effect. Based on the rarity of the observed effect under the null hypothesis, we conclude that the null is sufficiently unlikely to be the best explanation of the data, and we reject it. Thus our alternative research hypothesis, that there is a difference between the two treatments, is likely to be true. As the p value does not tell us whether the difference is in a positive or negative direction, care should of course be taken to confirm which of the treatments holds the advantage.
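This repeated-sampling logic is easy to verify by simulation. The sketch below (in Python, standard library only, purely for illustration) generates thousands of two-arm “trials” in which the null hypothesis is true by construction, and shows that roughly 5% of them nevertheless cross the conventional significance threshold.

```python
import math
import random
import statistics

random.seed(2024)
N_TRIALS, N, ALPHA = 5_000, 50, 0.05
Z_CRIT = 1.96  # two-sided critical value for alpha = 0.05

rejections = 0
for _ in range(N_TRIALS):
    # Both groups are drawn from the SAME population, so the null
    # hypothesis of "no difference" is true by construction.
    a = [random.gauss(100, 15) for _ in range(N)]
    b = [random.gauss(100, 15) for _ in range(N)]
    se = math.sqrt(statistics.variance(a) / N + statistics.variance(b) / N)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    if abs(z) > Z_CRIT:  # a "statistically significant" difference
        rejections += 1

false_positive_rate = rejections / N_TRIALS
print(round(false_positive_rate, 3))  # close to 0.05
```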

P values in non-inferiority or equivalence studies.

In non-inferiority and equivalence studies a non-statistically significant p value can be a welcome result, as can a very low p value where the differences were not clinically significant, or where the new treatment is shown to be superior to the standard treatment. By only requiring the treatment not to be inferior, more power is retained and a smaller sample size can be used.

The interpretation of the p value is much the same as for superiority studies, however the implications are different. In these types of studies it is ideal for the confidence intervals for the individual treatment effects to be narrow as this provides certainty that the estimates obtained are accurate in the absence of a statistically significant difference between the two estimates.

While alternatives to p values exist, such as Bayesian statistics, these statistics have limitations of their own and are subject to the same propensity for misuse and misinterpretation as frequentist statistics are. Thus it remains important to take caution in interpreting all statistical results.

What p values do not tell you

A p value of 0.05 is not the same as saying that there is only a 5% chance that the treatment won’t work. Whether or not the treatment works in an individual is another probability entirely. It is also not the same as saying there is a 5% chance of the null hypothesis being true. The p value is a statistic calculated on the assumption that the null hypothesis is true; on that basis, it gives the likelihood of a result at least as extreme as the one observed.

Nor does the p value represent the chance of making a Type I error. As each repetition of the same experiment produces a different p value, it does not make sense to characterise the p value as the chance of incorrectly rejecting the null hypothesis, i.e. making a Type I error. Instead, an alpha cut-off of 0.05 should be seen as indicating a result rare enough under the null hypothesis that we are willing to reject the null as the most likely explanation given the data. Under an alpha of 0.05, this decision is expected to be wrong 5% of the time that the null is true, regardless of the p value achieved in the statistical test. The relationship between the critical alpha and statistical power is illustrated below.

Another misconception is that a small p value provides strong support for a given research hypothesis. In reality a small p value does not necessarily translate to a big effect, nor a clinically meaningful one. The p value indicates a statistically significant result, but it says nothing about the magnitude of the effect or whether the result is clinically meaningful in the context of the study. A p value of 0.00001 may appear to be a very satisfactory result, but if the difference observed between the two groups is very small then this is not necessarily so. All it would be saying is that “we are really, really sure that there is only a minimal difference between the two treatments”, which in a superiority design may not be the desired outcome.
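The point is easy to demonstrate numerically. In the illustrative simulation below (Python, standard library only), a very large “trial” detects a true difference of only half a point, on a scale with a standard deviation of 15, at an extremely small p value, even though a difference of that size would rarely clear any sensible minimally important difference.

```python
import math
import random
from statistics import NormalDist, mean, variance

random.seed(7)
n = 100_000  # an enormous trial
# The true difference is only 0.5 points on a scale where sd = 15 --
# far smaller than any plausible minimally important difference.
a = [random.gauss(100.0, 15.0) for _ in range(n)]
b = [random.gauss(100.5, 15.0) for _ in range(n)]

diff = mean(b) - mean(a)
se = math.sqrt(variance(a) / n + variance(b) / n)
p = 2 * (1 - NormalDist().cdf(abs(diff / se)))

print(f"difference = {diff:.2f}, p = {p:.1e}")
# The p value is tiny, yet the effect itself is clinically trivial.
```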

Minimally important difference (MID)

This is where the importance of pre-defining a minimally important difference (MID) becomes evident. The MID, or clinically meaningful difference, should be defined and quantified at the design stage, before the study is undertaken. In the case of clinical studies this should generally be done in consultation with the clinician or disease expert concerned.

The MID may take different forms depending on whether a study is a superiority design, versus an equivalence or non-inferiority design. In a superiority design, or where the goal of the study is to detect a difference, the MID is the minimum difference at which we would be willing to consider the new treatment worth pursuing over the standard treatment or control used as the comparator. In a non-inferiority design, the MID is the minimum lower threshold at which we would still consider the new treatment as effective or useful as the standard treatment. An equivalence design, on the other hand, may rely on an interval around the standard treatment effect.

When interpreting results of clinical studies it is of primary importance to keep a clinically meaningful difference in mind, rather than defaulting to the p value in isolation. In cases where the p value is statistically significant, it is important to ask whether the difference between comparison groups is also as large as the MID or larger.

Confidence Intervals

All statistical tests that involve p values can produce a corresponding confidence interval for the estimates. Unlike p values, confidence intervals do not rely on an assumption that the null hypothesis is true, but rather on the assumption that the sample approximates the population of interest. A common estimate in clinical trials where confidence intervals become important is the treatment effect. Very often this translates to the difference in means of a surrogate endpoint between two groups; however, confidence intervals are also important for individual group means or treatment effects, which estimate the population means of the endpoint in these distinct groups or treatment categories.

Confidence interval for the mean

A 95% confidence interval for the estimate of the mean indicates that, if the study were repeated many times, 95% of the intervals so constructed would be expected to contain the true population mean. While the interval is calculated from the observed mean of the study sample, our interest remains in making inferences about the wider population who might later be subject to this treatment. Thus, inferentially, the observed mean and its confidence interval are both considered estimates of the population values.

In a nutshell, the confidence interval indicates how precisely the quantity of interest has been estimated: a narrower interval indicates greater precision, and a wider interval less. The confidence level (e.g. 95%) indicates how certain we can be that intervals constructed in this way will capture the true value.
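This coverage interpretation can be checked by simulation. The sketch below (Python, standard library only, purely illustrative) constructs thousands of 95% confidence intervals from repeated samples of a known population and confirms that roughly 95% of them contain the true mean.

```python
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(42)
TRUE_MEAN, SD, N, REPS = 100.0, 15.0, 60, 5_000
z975 = NormalDist().inv_cdf(0.975)  # approximately 1.96

covered = 0
for _ in range(REPS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    half_width = z975 * stdev(sample) / math.sqrt(N)
    lo, hi = mean(sample) - half_width, mean(sample) + half_width
    if lo <= TRUE_MEAN <= hi:  # did this interval capture the true mean?
        covered += 1

coverage = covered / REPS
print(round(coverage, 3))  # close to 0.95
```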

Confidence interval for the mean difference, treatment effects or difference in treatment effects

The mean difference in treatment effect between two groups is an important estimate in any comparison study, from superiority to non-inferiority clinical trial designs. Treatment response is mainly ascertained from repeated measures of surrogate endpoint data at the individual level. One form of mean difference comes from repeated measures on the same individuals at different time points; these within-individual differences can then be compared between two independent treatment groups. In the context of clinical trials, confidence intervals of the mean difference can therefore relate to an individual’s treatment effect or to group differences in treatment effects.

A 95% confidence interval for the mean difference in treatment effect indicates that, if the study were repeated many times, 95% of such intervals would be expected to contain the true difference in treatment effect between the groups. A confidence interval containing zero indicates that a statistically significant difference between the two groups has not been found: if the data are consistent both with a true difference above zero and with one below zero, indicating a difference in the opposite direction, we cannot be sure whether one group is higher or lower than the other.
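The correspondence between “the interval contains zero” and “the two-sided p value exceeds 0.05” is exact when the interval and the test are built from the same standard error, as the illustrative sketch below (Python, standard library only) shows.

```python
import math
import random
from statistics import NormalDist, mean, variance

random.seed(3)
n = 40
control = [random.gauss(50, 10) for _ in range(n)]
treated = [random.gauss(50, 10) for _ in range(n)]  # no true effect here

diff = mean(treated) - mean(control)
se = math.sqrt(variance(control) / n + variance(treated) / n)
z975 = NormalDist().inv_cdf(0.975)
ci = (diff - z975 * se, diff + z975 * se)
p = 2 * (1 - NormalDist().cdf(abs(diff / se)))

contains_zero = ci[0] <= 0 <= ci[1]
# For a z-based interval and test these two statements always agree:
print(contains_zero, p > 0.05)
```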

Much ho-hum has been made of p values in recent years, but they are here to stay. As noted above, the alternatives have limitations of their own, so caution in interpreting all statistical results remains essential whichever framework is used.

Sources and further reading:

Gao, P-Values – A chronic conundrum, BMC Medical Research Methodology (2020), 20:167
https://doi.org/10.1186/s12874-020-01051-6

The Royal College of Ophthalmologists, The clinician’s guide to p values, confidence intervals, and magnitude of effects, Eye (2022) 36:341–342; https://doi.org/10.1038/s41433-021-01863-w

The role of Biostatisticians, Bioinformaticians & other Data Experts in Clinical Research

As a medical researcher or a small enterprise in the life sciences industry, you are likely to encounter many experts using statistical and computational techniques to study biological, clinical and other health data. These experts can come from a variety of fields such as biostatistics, bioinformatics, biometrics, clinical data science and epidemiology. Although these fields do overlap in certain ways, they differ in purpose, focus, and application. All of these fields focus on analysing and interpreting biological, clinical, or public health data, but they typically do so in different ways and with different goals in mind. Understanding these differences can help you choose the most appropriate specialists for your research project and get the most out of their expertise. This article will begin with a brief description of these disciplines for the sake of disambiguation, then focus on biostatistics and bioinformatics, with a particular overview of the roles of biostatisticians and bioinformatics scientists in clinical trials.

Biostatisticians

Biostatisticians use advanced biostatistical methods to design and analyse pre-clinical experiments, clinical trials, and observational studies predominantly in the medical and health sciences. They can also work in ecological or biological fields which will not be the focus of this article. Biostatisticians tend to work on varied data sets, including a combination of medical, public health and genetic data in the context of clinical studies. Biostatisticians are involved in every stage of a research project, from planning and designing the study, to collecting and analysing the data, to interpreting and communicating the results. They may also be involved in developing new statistical methods and software tools. In the UK the term “medical statistician” has been in common use over the past 40 years to describe a biostatistician, particularly one working in clinical trials, but it is becoming less used due to the global nature of the life sciences industry.

Bioinformaticians

Bioinformaticians use computational and statistical techniques to analyse and interpret large datasets in the life sciences. They often work with multi-omics data, such as genomics, proteomics, and transcriptomics data, and use tools such as large databases, algorithms, and specialised software programs to analyse and make sense of sequencing and other data. Bioinformaticians develop analysis pipelines and fine-tune methods and tools for analysing biological data to fit the evolving needs of researchers.

Clinical data scientists

Data scientists use statistical and computational modelling to make predictions and extract insights from a wide range of data. Often this is real-world big data that would not be practical to analyse using traditional methods. In a clinical development context, data sources can include medical records, epidemiological or public health data, prior clinical study data, and IoT or IoB sensor data, and data scientists may combine data from multiple sources and types. They make sense of this data using analysis pipelines, machine learning techniques, neural networks, and decision tree analysis. The better the quality of the input data, the more precise and accurate any predictive algorithms can be.

Statistical programmers

Statistical programmers help statisticians to efficiently clean and prepare data sets and mock TFLs (tables, figures, and listings) in preparation for analysis. They set up SDTM and ADaM data structures in preparation for clinical studies. Quality control of data and advanced macros for database management are also key skills.

Biometricians

Biometricians use statistical methods to analyse data related to the characteristics of living organisms. They may work on topics such as growth patterns, reproductive success, or the genetic basis of traits. Biometricians may also be involved in developing new statistical methods for analysing data in these areas. Some use the terms biostatistician and biometrician interchangeably; however, for the purposes of this article they remain distinct.

Epidemiologists

Epidemiologists study the distribution and determinants of diseases in populations. Using descriptive, analytical, or experimental techniques, such as cohort or case-control studies, they identify risk factors for diseases, evaluate the effectiveness of public health interventions, as well as track or model the spread of infectious diseases. Epidemiologists use data from laboratory testing, field studies, and publicly available health data. They can be involved in developing new public health policies and interventions to prevent or control the spread of diseases.

Clinical trials and the role of data experts

Clinical trials involve testing new treatments, interventions, or diagnostic tests in humans. These studies are an important step in the process of developing new medical therapies and understanding the effectiveness and safety of existing treatments.

Biostatisticians are crucial to the proper design and analysis of clinical trials. Before optimal study design can take place, they may first have to conduct extensive meta-analysis of previous clinical studies, or real-world evidence (RWE) generation based on available real-world data sets or R&D results. They may also be responsible for managing the data and ensuring its quality, as well as interpreting and communicating the results of the trial. From developing the statistical analysis plan and contributing to the study protocol, to final analysis and reporting, biostatisticians have a role to play across the project time-line.

During a clinical trial, statistical programmers may prepare data sets to CDISC standards and pre-specified study requirements, maintain the database, as well as develop and implement standard SAS code and algorithms used to describe and analyse the study data.

Bioinformaticians may be involved in the design and analysis stages of clinical trials, particularly if the trial design involves the use of large data sets such as sequencing data for multi-omics analysis. They may be responsible for managing and analysing this data, as well as developing software tools and algorithms to support the analysis.

Data scientists may be involved in designing and analysing clinical trials at the planning stage, as well as in developing new tools and methods. The knowledge gleaned from data science models can be used to improve decision-making across various contexts, including life sciences R&D and clinical trials. Applications include optimising the patient populations used in clinical trials, and feasibility analysis that simulates site performance, region, recruitment, and other variables to evaluate the impact of different scenarios on project cost and timeline.

Biometricians and epidemiologists may also contribute to clinical trials, particularly if the trial is focused on a specific population or on understanding the factors that influence the incidence or severity of a disease. They may contribute to the design of the study, collecting and analysing the data, or interpreting the results.

Overall, the role of these experts in clinical trials is to use their varied expertise in statistical analysis, data management, and research design to help understand the safety and effectiveness of new treatments and interventions.

The role of biostatistician in clinical trials

Biostatisticians may be responsible for developing the study protocol, determining the sample size, producing the randomisation schedule, and selecting the appropriate statistical methods for analysing the data. They may also be responsible for managing the data and ensuring its quality, as well as interpreting and communicating the results of the trial.

SDTM data preparation

The Study Data Tabulation Model (SDTM) is a data standard used to structure and organise clinical study data in a standardised way. Depending on how a CRO is structured, either biostatisticians, statistical programmers, or both will be involved in mapping the data collected in a clinical trial to the SDTM data set, which involves defining the structure and format of the data and ensuring that it is consistent with the standard. This helps to ensure that the data is organised in a way that is universally interpretable. The process involves working with the research team to ensure the appropriate variables and categories are defined, then reviewing and verifying the data to ensure that it is accurate, complete, and in line with industry standards. Typically the SDTM data set will be established early, at the protocol phase, and populated later once trial data accumulate.
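To make the idea of SDTM mapping concrete, here is a deliberately simplified sketch (in Python, purely for illustration; in practice this work is done in SAS or Stata under a validated process). The target variable names (STUDYID, DOMAIN, USUBJID, SUBJID, SEX, AGE, AGEU) are standard SDTM Demographics (DM) variables, but the raw field names and the mapping itself are invented for the example.

```python
# Raw collected demographics, as they might come off a CRF export.
raw_records = [
    {"study": "ABC-101", "subject": "001", "sex": "Female", "age": 54},
    {"study": "ABC-101", "subject": "002", "sex": "Male", "age": 61},
]

SEX_CODES = {"Female": "F", "Male": "M"}  # controlled terminology

def to_sdtm_dm(rec):
    """Map one raw record to (a subset of) the SDTM DM domain."""
    return {
        "STUDYID": rec["study"],
        "DOMAIN": "DM",
        "USUBJID": f'{rec["study"]}-{rec["subject"]}',  # unique across studies
        "SUBJID": rec["subject"],
        "SEX": SEX_CODES[rec["sex"]],
        "AGE": rec["age"],
        "AGEU": "YEARS",
    }

dm = [to_sdtm_dm(r) for r in raw_records]
print(dm[0]["USUBJID"], dm[0]["SEX"])  # ABC-101-001 F
```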

Creating and analysing the ADaM dataset

In clinical trials, the Analysis Data Model (ADaM) is a data set model used to structure and organise clinical trial data in a standardised way for the purpose of statistical analysis. ADaM data sets store the data that will be analysed as part of the clinical trial and are typically created from the Study Data Tabulation Model (SDTM) data sets, which contain the raw data collected during the trial. This helps to ensure the reliability and integrity of the data, and makes it easier to analyse and interpret the results of the trial.

Biostatisticians and statistical programmers are responsible for developing ADaM data sets from the SDTM data, which involves selecting the relevant variables and organising them in a way that is appropriate for the particular statistical analyses that will be conducted. While statistical programmers may create derived variables, produce summary statistics and tables, figures, and listings (TFLs), and organise the data into appropriate data sets and domains, biostatisticians are responsible for conducting detailed statistical analyses of the data and interpreting the results. This may include tasks such as testing hypotheses, identifying patterns and trends in the data, and developing statistical models to understand the relationships between the data and the research questions the trial seeks to answer.
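As a toy illustration of variable derivation, a change-from-baseline analysis variable might be derived from SDTM-style records as sketched below. This is a hedged sketch: real ADaM programming is typically done in SAS or R against full CDISC metadata, and the records, visits, and values here are invented, though USUBJID, VSTESTCD, VSSTRESN, BASE, and CHG are standard CDISC variable names.

```python
# Hypothetical, simplified SDTM-style vital signs (VS domain) records.
# Real SDTM data sets carry many more variables and controlled terminology.
sdtm_vs = [
    {"USUBJID": "001", "VISIT": "BASELINE", "VSTESTCD": "SYSBP", "VSSTRESN": 140},
    {"USUBJID": "001", "VISIT": "WEEK 4",   "VSTESTCD": "SYSBP", "VSSTRESN": 132},
    {"USUBJID": "002", "VISIT": "BASELINE", "VSTESTCD": "SYSBP", "VSSTRESN": 150},
    {"USUBJID": "002", "VISIT": "WEEK 4",   "VSTESTCD": "SYSBP", "VSSTRESN": 155},
]

def derive_chg(records):
    """Derive ADaM-style BASE and CHG (change from baseline) variables."""
    # Look up each subject's baseline result per test code.
    baseline = {(r["USUBJID"], r["VSTESTCD"]): r["VSSTRESN"]
                for r in records if r["VISIT"] == "BASELINE"}
    adam = []
    for r in records:
        row = dict(r)
        row["BASE"] = baseline.get((r["USUBJID"], r["VSTESTCD"]))
        # CHG is undefined at the baseline visit itself.
        row["CHG"] = (None if r["VISIT"] == "BASELINE"
                      else r["VSSTRESN"] - row["BASE"])
        adam.append(row)
    return adam

advs = derive_chg(sdtm_vs)  # analysis-ready rows, e.g. for an ADVS-like data set
```

In practice such derivations are specified in the statistical analysis plan and define.xml metadata, and are independently validated (often by double programming).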

The role of biostatisticians, specifically, in developing ADaM data sets from SDTM data is to use their expertise in statistical analysis and research design to guide statistical programmers in ensuring that the data is organised, structured, and formatted in a way that is appropriate for the analyses that will be conducted, and to help understand and interpret the results of the trial.

A biostatistician’s role in study design & planning

Biostatisticians play a critical role in the design, analysis, and interpretation of clinical trials. The role of the biostatistician in a clinical trial is to use their expertise in statistical analysis and research design to help ensure that the trial is conducted in a scientifically rigorous and unbiased way, and to help understand and interpret the results of the trial. Here is a general overview of the tasks that a biostatistician might be involved in during the different stages of a clinical trial:

Clinical trial design: Biostatisticians may be involved in designing the clinical trial, including determining the study objectives, selecting the appropriate study population, and developing the study protocol. They are responsible for determining the sample size and selecting the appropriate statistical methods for analysing the data. Often, carrying out these tasks requires preparatory analysis in the form of a detailed meta-analysis or systematic review.

Sample size calculation: Biostatisticians are responsible for determining the required sample size for the clinical trial. This is an important step, as the sample size needs to be large enough to detect a statistically significant difference between the treatment and control groups, but not so large that the trial becomes unnecessarily expensive or time-consuming. Biostatisticians use statistical methods to determine the sample size based on the expected effect size, the desired level of precision, and the expected variability of the data. These inputs are informed by expert opinion and by simulation based on data from previous comparable studies.
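As a sketch of the kind of calculation involved, the standard normal-approximation formula for a two-sample comparison of means can be written in a few lines; the effect size, alpha, and power values below are illustrative choices, not recommendations, and real trials often use exact or simulation-based methods instead.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means (two-sided test, equal allocation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    # n per group = 2 * (z_alpha + z_beta)^2 / d^2, rounded up.
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

n = n_per_group(effect_size=0.5)  # medium standardised effect -> 63 per group
```

Note that a larger assumed effect size shrinks the required sample, which is why the effect size assumption is usually the most scrutinised input.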

Randomisation schedules: Biostatisticians develop the randomisation schedule for the clinical trial, which is a plan for assigning subjects to the treatment and control groups in a random and unbiased way. This helps to ensure that the treatment and control groups are similar in terms of their characteristics, which reduces bias and helps control for confounding factors that might affect the results of the trial.
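One common scheme is permuted-block randomisation, which keeps group sizes balanced throughout enrolment. The sketch below is a minimal illustration (the block size, arm labels, and seed are arbitrary); production schedules are generated under validated systems with allocation concealment.

```python
import random

def permuted_block_schedule(n_subjects, block_size=4,
                            arms=("treatment", "control"), seed=42):
    """Generate a permuted-block randomisation schedule.

    Each block contains an equal number of assignments to each arm,
    shuffled within the block, so group sizes never drift far apart.
    """
    assert block_size % len(arms) == 0, "block size must be divisible by arm count"
    rng = random.Random(seed)  # fixed seed makes the schedule reproducible
    schedule = []
    while len(schedule) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # randomise order within the block
        schedule.extend(block)
    return schedule[:n_subjects]

schedule = permuted_block_schedule(12)  # 12 subjects -> 6 per arm
```

Stratified versions run one such schedule per stratum (e.g. per site or sex) so that balance holds within each subgroup as well.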

Protocol development: Biostatisticians are involved in developing the statistical and methodological sections of the clinical trial protocol, which is a detailed plan that outlines the objectives, methods, and procedures of the study. In addition to outlining key research questions and operational procedures, the protocol should include information on the study population, the interventions being tested, the outcome measures, and the data collection and analysis methods.

Data analysis: Biostatisticians are responsible for analysing the data from the clinical trial, including conducting interim analyses and making any necessary adjustments to the protocol. They play a crucial role in interpreting the results of the analysis and communicating the findings to the research team and other stakeholders.

Final analysis and reporting: Biostatisticians are responsible for conducting the final analysis of the data and preparing the final report of the clinical trial. This includes summarising the results, discussing the implications of the findings, and making recommendations for future research.

The role of the bioinformatician in biomarker-guided clinical studies

Biomarkers are biological characteristics that can be measured and used to predict the likelihood of a particular outcome, such as the response to a particular treatment. Biomarker-guided clinical trials use biomarkers as a key aspect of the study design and analysis. Where the biomarker is based on genomic sequence data, bioinformaticians may play a particularly important role in managing and analysing the data. Genomic and other omics data are often large and complex, and require specialised software tools and algorithms to analyse and interpret. Bioinformaticians develop and implement these tools and algorithms, and manage and analyse the data to identify patterns and relationships relevant to the trial. They use their expertise in computational biology to help understand the relationship between multi-omics data and the outcome of the trial, and to identify potential biomarkers that can be used to guide treatment decisions.

Processing sequencing data is a key skill of bioinformaticians that involves several steps, which may vary depending on the specific goals of the analysis and the type of data being processed. Here is a general overview of the steps that a bioinformatician might take to process sequencing data:

  1. Data pre-processing: Cleaning and formatting the data so that it is ready for analysis. This may include filtering out low-quality data, correcting errors, and standardising the format of the data.
  2. Mapping: Aligning the sequenced reads to a reference genome or transcriptome in order to determine their genomic location. This can be done using specialised software tools such as Bowtie or BWA.
  3. Quality control: Checking the quality of the data and the alignment, and identifying and correcting any problems that may have occurred during the sequencing or mapping process. This may involve identifying and removing duplicate reads, or identifying and correcting errors in the data.
  4. Data analysis: Using statistical and computational techniques to identify patterns and relationships in the data, such as identifying genetic variants, analysing gene expression levels, or identifying pathways or networks that are relevant to the study.
  5. Data visualisation: Creating graphs, plots, and other visualisations to help understand and communicate the results of the analysis.
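The mapping and quality-control steps above are usually chained as command-line tools. The sketch below only composes the commands as strings rather than executing them, so it runs anywhere; the flags and file names are illustrative, and a real pipeline would also handle read groups, `samtools fixmate`, FastQC, and logging.

```python
def alignment_pipeline(reference, reads, sample):
    """Compose a minimal read-alignment pipeline (commands only, not executed).

    Tool names (bwa, samtools) are real, but the exact flags shown are a
    simplified illustration of the mapping/QC steps, not a validated workflow.
    """
    bam = f"{sample}.sorted.bam"
    return [
        f"bwa mem {reference} {reads} > {sample}.sam",    # map reads to reference
        f"samtools sort -o {bam} {sample}.sam",           # coordinate-sort alignments
        f"samtools markdup {bam} {sample}.dedup.bam",     # flag duplicate reads (QC)
        f"samtools index {sample}.dedup.bam",             # index for downstream analysis
    ]

cmds = alignment_pipeline("ref.fa", "reads.fq", "patient01")
```

In practice such steps are orchestrated by workflow managers (e.g. Nextflow or Snakemake) so that each stage is resumable and its outputs are versioned.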

Once omics data has been analysed, the insights obtained can be used for tailoring therapeutic products to patient populations in a personalised medicine approach.

The changing role of data experts in life sciences R&D and clinical research

Due to the need for better therapies and health solutions, researchers are now defining diseases at more granular levels using multi-omics insights from DNA sequencing data, which allow differentiation between patients in the biomolecular presentation of their disease, demographic factors, and their response to treatment. As more of the resulting therapies reach the market, the health care industry will need to catch up in order to provide these new treatment options to patients.

Even after a product receives regulatory approval, payers can opt not to reimburse patients, so financial benefit should be demonstrated in advance where possible. Patient-reported outcomes and other health outcomes are becoming important sources of data to consider in evidence generation. Evidence provided to payers should aim to demonstrate the financial as well as the clinical benefit of the product.

In this context, regulators are becoming aware of the need for innovation in developing new ways of collecting treatment efficacy and other data used to assess novel products for regulatory approval. The value of observational studies and real-world data sources as a supplement to clinical trial data is being acknowledged as a legitimate and sometimes necessary part of the product approval process. Large-scale digitisation now makes it easier to collect patient-centric data directly from clinical trial participants and users via devices and apps. Sponsors should establish clear evidence expectations with regulatory agencies, then collaborate with external stakeholders, data product experts, and service providers to help build new evidence-generation approaches.

Expert data governance and quality control is crucial to the success of any new methods to be implemented analytically. Data from different sources, such as IoT sensor data, electronic health records, sequencing data for multi-omics analysis, and other large data sets, have to be combined cautiously and with robust expert standards in place.

From biostatistics, bioinformatics, data science, CAS, and epidemiology for public health or post-market modelling, a bespoke team of integrated data and analytics specialists is now as important to a product development project as the product itself in gaining competitiveness, and therefore success, in the marketplace. Such a team should apply a combination of established data collection methodologies, e.g. clinical trials and systematic review, and innovative methods such as machine learning models that draw upon a variety of real-world data sources, striking a balance between advancing important innovation and mitigating risk.

Sex Differences in Clinical Trial Recruiting

The following article investigates several systematic reviews of sex and gender representation in individual clinical trial patient populations. In these studies, sex ratios are assessed and evaluated by various factors such as clinical trial phase, disease type under investigation, and disease burden in the population. Sex differences in the reporting of safety and efficacy outcomes are also investigated. In many cases safety and efficacy outcomes are pooled, rather than reported individually for each sex, which can be problematic when findings are generalised to the wider population. In order to get the dosage right for different body compositions and avoid unforeseen outcomes in off-label use or when a novel therapeutic first reaches the market, it is important to report sex differences in clinical trials. Due to the unique nuances of disease types and clinical trial phases, a 50:50 ratio of males to females is not always ideal or even appropriate in every clinical study design. Having the right sex balance in your clinical trial population will improve the efficiency and cost-effectiveness of your study. Based upon the collective findings, a set of principles is put forth to guide researchers in determining the appropriate sex ratio for their clinical trial design.

Sex difference by clinical trial phase

  • variation in sex enrolment ratios for clinical trial phases
  • females less likely to participate in early phases, due to increased risk of adverse events
  • under-representation of women in phase III when looking at disease prevalence

It has been argued that female representation in clinical trials is lacking, despite recent efforts to mitigate the gap. US data from 2000-2020 suggests that trial phase has the greatest variation in enrolment when compared to other factors, with median female enrolment being 42.9%, 44.8%, 51.7%, and 51.1% for phases I, I/II to II, II/III to III, and IV4. This shows that median female enrolment gradually increases as trials progress, with the difference in female enrolment between the final phases II/III to III and IV being <1%. Additional US data on FDA approved drugs including trials from as early as 1993 report that female participation in clinical trials is 22%, 48%, and 49% for trial phases I, II, and III respectively2. While the numbers for participating sexes are almost equal in phases II and III, women make up only approximately one fifth of phase I trial populations in this dataset2. The difference in reported participation for phase I trials between the datasets could be due to an increase in female participation in more recent years. The aim of a phase I trial is to evaluate safety and dosage, so it comes as no surprise that women, especially those of childbearing age, are often excluded due to potential risks posed to foetal development.

In theory, women can be included to a greater extent as trial phases progress and the potential risk of severe adverse events decreases. By the time a trial reaches phase III, it should ideally reflect the real-world disease population as much as possible. European data for phase III trials from 2011-2015 report 41% of participants being female1, which is slightly lower than female enrolment in US based trials. 26% of FDA approved drugs have a >20% difference between the proportion of women in phase II & III clinical trials and the prevalence of women in the US with the disease2, and only one of these drugs shows an over-representation of women.

Reporting of safety and efficacy by sex difference

  • Both safety and efficacy results tend to differ by sex.
  • Reporting these differences is inconsistent and often absent
  • Higher rates of adverse events in women are possibly caused by less involvement, or non-stratification, in dose-finding and safety studies.
  • There is a need to enforce analysis and reporting of sex differences in safety and efficacy data

Sex differences in response to treatment regarding both efficacy and safety have been widely reported. Gender subgroup analyses regarding efficacy can reveal whether a drug is more or less effective in one sex than the other. Gender subgroup analyses for efficacy are available for 71% of FDA approved drugs, and of these 11% were found to be more efficacious in men and 7% in women2. By contrast, only 2 of 22 European Medicines Agency approved drugs examined were found to have efficacy differences between the sexes1. Nonetheless, it is important to study the efficacy of a new drug on all potential population subgroups that may end up taking that drug.

The safety of a treatment also differs between the sexes, with women having a slightly higher percentage (p<0.001) of reported adverse events (AEs) than men for both treatment and placebo groups in clinical trials1. Gender subgroup analyses regarding safety can offer insights into the potential risks that women are subjected to during treatment. Despite this, gender-specific safety analyses are available for only 45% of FDA approved drugs, with 53% of these reporting more side effects in women2. On average, women are at a 34% increased risk of severe toxicity for each cancer treatment domain, with the greatest increased risk being for immunotherapy (66%). Moreover, the risk of AEs is greater in women across all AE types, including patient-reported symptomatic (female 33.3%, male 27.9%), haematologic (female 45.2%, male 39.1%) and objective non-haematologic (female 30.9%, male 29.0%)3. These findings highlight the importance of gender-specific safety analyses and the fact that more gender subgroup safety reporting is needed. More reporting will increase our understanding of sex-related AEs and could potentially allow for sex-specific interventions in the future.

Sex differences by disease type and burden

  • Several disease categories have recently been associated with lower female enrolment
  • Men are under-represented as often as women when comparing enrolment to disease burden proportions
  • There is a need for trial participants to be recruited on a case-by-case basis, depending on the disease.

Sex differences by disease type

When broken down by disease type, the sex ratio of clinical trial participation shows a more nuanced picture. Several disease categories have recently been associated with lower female enrolment, compared to other factors including trial phase, funding, blinding, etc4. Women comprised the smallest proportions of participants in US-based trials between 2000-2020 for cardiology (41.4%), sex-non-specific nephrology and genitourinary (41.7%), and haematology (41.7%) clinical trials4. Despite women being proportionately represented in European phase III clinical studies between 2011-2015 for depression, epilepsy, thrombosis, and diabetes, they were significantly under-represented for hepatitis C, HIV, schizophrenia, hypercholesterolaemia, and heart failure, and were not found to be over-represented in trials for any of the disease categories examined1. This shows that the gap in gender representation exists even in later clinical trial phases when surveying disease prevalence, albeit to a lesser extent. Examining disease burden shows that the gap is even bigger than anticipated and includes the under-representation of both sexes.

Sex Differences by Disease Burden

It is not until the burden of disease is considered that men are shown to be under-represented as often as women. Including burden of disease can depict proportionality relative to the variety of disease manifestations between men and women. It can be measured as disability-adjusted life years (DALYs), which represent the number of healthy years of life lost due to the disease. Despite the sexes each making up approximately half of clinical trial participants overall in US-based trials between 2000-2020, all disease categories showed an under-representation of either women or men relative to disease burden, except for infectious disease and dermatologic clinical trials4. Women were under-represented in 7 of 17 disease categories, with the greatest under-representation being in oncology trials, where the difference between the number of female trial participants and corresponding DALYs is 3.6%. Men were under-represented compared with their disease burden in 8 of 17 disease categories, with the greatest difference being 11.3% for musculoskeletal disease and trauma trials.4 Men were found to be under-represented to a similar extent to women, suggesting that the under-representation of either sex could be coincidental. Alternatively, male under-representation could potentially be due to the assumption of female under-representation leading to overcorrection in the opposite direction. It should be noted that these findings would benefit from statistical validation, although they illustrate the need for clinical trial participants to be recruited on a case-by-case basis, depending on the disease.
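The representation gap described here reduces to a simple difference between a group's share of trial participants and its share of disease burden. A minimal sketch (the figures below are invented for illustration, not taken from the cited study):

```python
def representation_gap(pct_participants, pct_dalys):
    """Participation share minus burden share, in percentage points.

    Negative values mean the group is under-represented relative to
    its disease burden; positive values mean over-representation."""
    return pct_participants - pct_dalys

# Hypothetical oncology-style figures: women are 46.4% of participants
# but bear 50.0% of DALYs -> under-represented by 3.6 percentage points.
gap = representation_gap(46.4, 50.0)
```

Computing this per disease category, rather than overall, is what reveals that each sex can be under-represented in different disease areas even when total enrolment is near 50:50.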

Takeaways to improve your patient sample in clinical trial recruiting:

  1. Know the disease burden/DALYs of your demographics for that disease.
  2. Try to balance the ratio of disease burden to the appropriate demographics for your disease.
  3. Aim to recruit patients based on these proportions.
  4. Stratify clinical trial data by the relevant demographics in your analysis. For example, toxicity, efficacy, adverse events, etc. should always be analysed separately for males and females to produce the respective estimates.
  5. Efficacy, toxicity, etc. should always be reported separately for males and females. Reporting differences by ethnicity is also important, as many diseases differentially affect certain ethnicities and the corresponding therapeutics can show differing degrees of efficacy and adverse events.
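The stratified-analysis takeaway amounts to computing each estimate within each subgroup rather than on the pooled sample. A minimal sketch (the records and adverse-event flags are invented for illustration):

```python
# Hypothetical trial records: ae = 1 if the subject reported an adverse event.
records = [
    {"sex": "F", "ae": 1}, {"sex": "F", "ae": 0}, {"sex": "F", "ae": 1},
    {"sex": "M", "ae": 0}, {"sex": "M", "ae": 1}, {"sex": "M", "ae": 0},
]

def ae_rate_by_sex(rows):
    """Adverse-event rate estimated separately for each sex (takeaway 4)."""
    rates = {}
    for sex in {r["sex"] for r in rows}:
        grp = [r["ae"] for r in rows if r["sex"] == sex]
        rates[sex] = sum(grp) / len(grp)  # proportion with an AE in this stratum
    return rates

rates = ae_rate_by_sex(records)
```

The pooled rate here would be 3/6, masking the fact that the female rate is double the male rate — exactly the information lost when outcomes are not stratified.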

The end goal of these measures is that medication can be more personalised, and any treatment given is more likely to help and less likely to harm the individual patient.

Conclusions

There is room for improvement in the proportional representation of both sexes in clinical trials, and knowing a disease's demographics is vital to planning a representative trial. Assuming the under-representation is on the side of females rather than males may lead to incorrect conclusions and actions to redress the balance. Demographic differences in disease burden need to be taken into account when recruiting trial participants. Trial populations that more accurately depict real-world populations will allow a therapeutic to be tailored to the patient.

Efficacy and safety findings highlight the need for clinical study data to be stratified by sex, so that respective estimates can be determined. This enables more accurate, sex- and age-appropriate dosing that will maximise treatment efficacy and patient safety, as well as minimise the chance of adverse events. This also reduces the risks associated with later off-label use of drugs and may avoid modern-day tragedies resembling the thalidomide tragedy. Moreover, efficacy and adverse events should always be reported separately for men and women, as the evidence shows their distinct differences in response to therapeutics.

See our full report on diversity in patient recruiting for clinical trials.

References:

1. Dekker M, de Vries S, Versantvoort C, Drost-van Velze E, Bhatt M, van Meer P et al. Sex Proportionality in Pre-clinical and Clinical Trials: An Evaluation of 22 Marketing Authorization Application Dossiers Submitted to the European Medicines Agency. Frontiers in Medicine. 2021;8.

2. Labots G, Jones A, de Visser S, Rissmann R, Burggraaf J. Gender differences in clinical registration trials: is there a real problem?. British Journal of Clinical Pharmacology. 2018;84(4):700-707.

3. Unger J, Vaidya R, Albain K, LeBlanc M, Minasian L, Gotay C et al. Sex Differences in Risk of Severe Adverse Events in Patients Receiving Immunotherapy, Targeted Therapy, or Chemotherapy in Cancer Clinical Trials. Journal of Clinical Oncology. 2022;40(13):1474-1486.

4. Steinberg J, Turner B, Weeks B, Magnani C, Wong B, Rodriguez F et al. Analysis of Female Enrollment and Participant Sex by Burden of Disease in US Clinical Trials Between 2000 and 2020. JAMA Network Open. 2021;4(6):e2113749.

Estimating the Costs Associated with Novel Pharmaceutical Development: Methods and Limitations

Data sources for cost analysis of drug development R&D and clinical trials

Cost estimates for pre-clinical and clinical development across the pharmaceutical industry differ based on several factors. One of these is the source of data used by each costing study to inform these estimates. Several studies use private data, which can include confidential surveys filled out by pharmaceutical firms/clinical trial units and random samples from private databases3,9,10,14,15,16. Other studies have based their cost estimates upon publicly available data, such as data from the FDA/national drug regulatory agencies, published peer-reviewed studies, and other online public databases1,2,12,13,17.

Some have questioned the validity of using private surveys from large multinational pharmaceutical companies to inform cost estimates, saying that survey data may be artificially inflated by pharmaceutical companies to justify high therapeutic prices18,19,20. Another concern is that per-trial spending by larger pharmaceutical companies and multinational firms would far exceed the spending of start-ups and smaller firms, meaning cost estimates based on data from these larger companies would not be representative of smaller firms.

Failure rate of R&D and clinical trial pipelines

Many estimates include the cost of failures, which is especially the case for cost estimates “per approved drug”. As many compounds enter the clinical trial pipeline, the cost to develop one approved drug/compound includes the cost of failures by factoring in the clinical trial success rate and the cost of failed compounds. For example, if 100 compounds enter phase I trials and 2 are approved, the clinical cost per approved drug would include the amount spent on 50 compounds (the total spend on all 100, divided by the 2 approvals).
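The worked example amounts to the following arithmetic (all figures hypothetical, chosen only to make the division visible):

```python
# Hypothetical figures: 100 compounds enter phase I, 2 gain approval,
# and the average clinical spend per compound is $20M.
n_entering = 100
n_approved = 2
avg_spend = 20_000_000  # USD per compound, illustrative

total_spend = n_entering * avg_spend          # spend across the whole pipeline
cost_per_approved = total_spend / n_approved  # failures folded into each approval

# Equivalent to the spend on 100 / 2 = 50 compounds per approved drug.
assert cost_per_approved == 50 * avg_spend
```

This is why published "cost per approved drug" figures are so sensitive to the assumed success rate: halving the approval count doubles the estimate even if no individual trial gets more expensive.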

The rate of success used can massively impact cost estimates, where a low success rate or high failure rate will lead to much higher costs per approved drug. The overall probability of clinical success may vary by year and has been estimated at a range of values including 7.9%21, 11.83%10, and 13.8%22. There are concerns that some studies suggesting lower success rates have relied on small samples from industry curated databases and are thereby vulnerable to selection bias12,22.

Success rates per phase transition also affect overall costs: the more ultimately unsuccessful compounds enter late clinical trial stages, the higher the costs per approved compound. In addition, success rates depend on therapeutic area and on patient stratification by biomarkers, among other factors. For example, one study estimated the lowest success rate at 1.6% for oncological trials without biomarker use, compared with a peak success rate of 85.7% for cardiovascular trials utilising biomarkers22. While aggregate success rates can be used to estimate costs, specific success rates will be more accurate for estimating the cost of a specific upcoming trial, which could help with budgeting and funding decisions.
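The relationship between per-phase transition rates and the overall probability of success is multiplicative, which is why small changes in any one transition rate compound into large swings in the overall rate. A sketch with illustrative transition rates (not taken from any cited study):

```python
# Illustrative phase-transition success rates (hypothetical values):
transitions = {"I->II": 0.52, "II->III": 0.29, "III->approval": 0.58}

# Overall probability of success is the product of the transitions.
overall_success = 1.0
for p in transitions.values():
    overall_success *= p
# overall_success is about 0.087, i.e. roughly 1 approval per 11 phase I entrants
```

Plugging area-specific transition rates into the same product is what produces the wide spread of published overall success rates cited above.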

Out-of-pocket costs vs capitalised costs & interest rates

Cost estimates also differ due to reporting of out-of-pocket and capitalised costs. An out-of-pocket cost refers to the amount of money spent or expensed on the R&D of a therapeutic. This can include all aspects of setting up therapeutic development, from initial funding in drug discovery/device design, to staff and site costs during clinical trials, and regulatory approval expenses.

The capitalised cost of a new therapeutic refers to the addition of out-of-pocket costs to a yearly interest rate applied to the financial investments funding the development of a new drug. This interest rate, referred to as the discount rate, is determined by (and is typically greater than) the cost of capital for the relevant industry.

Discount rates for the pharmaceutical industry vary between sources and can dramatically alter estimates for capitalised cost, where a higher discount rate will increase capitalised cost. Most studies place the private cost of capital for the pharmaceutical industry at 8% or higher23,24, while the cost of capital for government is lower, at around 3% to 7% for developed countries23,25. Other sources have suggested rates from as high as 13% to as low as zero13,23,26, where the zero cost of capital has been justified by the idea that pharmaceutical firms have no choice but to invest in R&D. However, the mathematical model used in many estimations of the cost of industry capital, the capital asset pricing model (CAPM), tends to give more conservative estimates23. This would mean the 10.5% discount rate widely used in capitalised cost estimates may in fact result in underestimation.
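Capitalisation compounds each year's out-of-pocket spend forward to the launch date at the chosen discount rate. A minimal sketch (the spend profile and the 10.5% rate are illustrative; real studies discount over decade-long, uneven spend profiles):

```python
def capitalised_cost(yearly_spend, rate):
    """Compound each year's out-of-pocket spend forward to launch.

    yearly_spend[0] is the earliest year; spend from n years before
    launch accrues interest for n years at the discount rate."""
    n = len(yearly_spend)
    return sum(spend * (1 + rate) ** (n - 1 - t)
               for t, spend in enumerate(yearly_spend))

out_of_pocket = [100.0, 100.0, 100.0]  # $M per year, hypothetical profile
capitalised = capitalised_cost(out_of_pocket, 0.105)
# 100 * 1.105^2 + 100 * 1.105 + 100 = 332.60 vs 300 out of pocket
```

Even over this short three-year horizon the capitalised figure exceeds out-of-pocket spend by about 11%; over the 10+ year development timelines in the cited studies, the gap widens dramatically, which is why the choice of rate dominates the headline numbers.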

While there is no consensus on what discount rate to use, capitalised costs do represent the risks undertaken by research firms and investors. A good approach may be to present both out-of-pocket and capitalised cost estimates, together with the rates used, a justification for those rates, and estimates under alternative rates in a sensitivity analysis26.

Cost variation over time

The increase in therapeutic development costs

Generally, there has been a significant increase in the estimated costs to develop a new therapeutic over time26. One study reported an exponential increase of capitalised costs from the 1970s to the mid-2010s, with total capitalised costs rising 8.5% annually above general inflation from 1990 to 201310. Recent data suggest that average development costs reached a peak in 2019 and decreased over the following two years9. This recent decrease in costs was associated with slightly reduced cycle times and an increased proportion of infectious disease research, likely in response to the rapid response needed for COVID-19.

Recent cost estimates

Cost estimates for phase III/pivotal trials alone can differ more than 100-fold1. One of the more widely cited studies on drug costs used confidential survey data from ten multinational pharmaceutical firms and a random sample from a database of publicly available data10. In 2013, this study estimated the total pre-approval cost at $2.6 billion USD per approved new compound. This was a capitalised cost, and the addition of post-approval R&D costs increased this estimate to $2.87 billion (2013 USD). The out-of-pocket cost per approved new compound was reported at $1.395 billion, of which $965 million were clinical costs and the remaining $430 million were pre-clinical.

Another estimate reported the average cost to develop an asset at $1.296 billion in 20139. Furthermore, it reported that this cost had risen to $2.431 billion by 2019 before decreasing to $2.376 billion in 2020 and $2.006 billion in 2021. While comparable to the previous out-of-pocket estimate for 2013, this study does not state whether its estimates are out-of-pocket or capitalised, making it difficult to meaningfully compare these estimates.

Figure 1: Recent cost estimates for drug development per approved new compound. “Clinical only” costs include only the costs of phase 0-III clinical trials, while “full” costs include pre-clinical costs. The colour of each bubble indicates the study, while bubble size indicates relative cost. A dashed border indicates the study used private data for its estimations, while a solid border indicates the study utilised publicly available data. The figure represents studies 9, 10, 12, 13 and 17 from the reference list in this report.

Publicly available data on 63 FDA-approved new biologics from 2009-2018 were used to estimate the capitalised (at 10.5%) R&D investment to bring a new drug to market at a median of $985.3 million and a mean of $1.3359 billion (inflation adjusted to 2018 USD)12. These data were mostly accessible for smaller firms, smaller trials, first-in-class drugs, and other specific areas. The variation in estimated cost was, through sensitivity analysis, mostly explained by success/failure rates, preclinical expenditures, and the cost of capital.

Publicly available data on 10 companies with no other drugs on the market in 2017 were used to estimate out-of-pocket costs for the development of a single cancer drug, reported at a median of $648 million and a mean of $719.8 million13. Capitalised costs were also reported using a 7% discount rate, with a median of $754.4 million and a mean of $969.4 million. By focusing on data from companies without other drugs on the market, these estimates may better represent the development costs per new molecular entity (NME) for start-ups, as the cost of failure of other drugs in the pipeline was included while any costs related to supporting existing on-market drugs were systematically impossible to include.

One study estimated the clinical costs per approved non-orphan drug at $291 million (out-of-pocket) and $412 million (capitalised at 10.5%)17. The capitalised cost estimate increased to $489 million when considering only non-orphan NMEs. The difference between these estimates for clinical costs and the previously mentioned estimates for total development costs puts into perspective the amount spent on pre-clinical trials and early drug development, with one study noting their pre-clinical estimates comprised 32% of out-of-pocket and 42% of capitalised costs10.

Things to consider about cost estimates

The difficulty with these estimates is that many differing factors affect each study. This complicates cost-based pricing discussions, especially when R&D cost estimates can differ by orders of magnitude. The methodologies used to calculate out-of-pocket costs differ between studies9,17, and the use of differing data sources (public data vs confidential surveys) seems to impact these estimates considerably.

There are also issues with the transparency of the data and methods behind various cost estimates. Some of this results from the use of confidential data, where some analyses are not available for public scrutiny8. One report in particular raised questions, as estimates were stated without any information about the methodology or data used to calculate them8. The use of confidential surveys of larger companies has also been criticised, as the data were submitted voluntarily and anonymously without independent verification12.

Due to the limited amount of comprehensive, published cost data17, many estimates have little option but to rely on limited data sets and assumptions to arrive at a reasonable figure. This includes a lack of transparent data for randomised controlled trials: one study reported that only 18% of FDA-approved drugs had publicly available cost data18. There is therefore a need for transparent and replicable data in this field to allow more plausible cost estimates to be made, which in turn could support budget planning and help trial sustainability18,26.

Despite the differences between studies, the findings within each study can be used to build a picture of trends, cost drivers, and costs specific to company and drug types. For example, studies suggest the overall cost of drug development increased from 1970 to a peak in 201910, with a subsequent decrease in 2020 and 20219.

For a full list of references used in this article, please see the main report here: https://anatomisebiostats.com/biostatistics-blog/how-much-does-developing-a-novel-therapeutic-cost-factors-affecting-drug-development-costs-in-the-pharma-industry-a-mini-report/

How much does developing a novel therapeutic cost? – Factors Affecting Drug Development Costs across the Pharma Industry: A mini-Report

Introduction

Data evaluating the costs associated with developing novel therapeutics within the pharmaceutical industry can be used to identify trends over time and can inform more accurate budgeting for future research projects. However, the cost to develop a drug therapeutic is difficult to evaluate accurately, resulting in estimates that range from hundreds of millions to billions of US dollars between studies. The high cost of drug development is not purely due to clinical trial expenses: drug discovery, pre-clinical trials, and commercialisation also need to be factored into estimates of drug development costs.

There are limitations in trying to assess these costs accurately. The sheer number of factors affecting estimated and real costs means that studies often take a more specific approach. For example, costs will differ between large multinational companies with multiple candidates in their pipeline and start-ups/SMEs developing their first pharmaceutical. Due to the amount and quality of available data, many studies work mostly with data from larger multinational pharmaceutical companies with multiple molecules in the pipeline. When taken out of context, the “$2.6 billion USD cost for getting a single drug to market” can seem daunting for SMEs. It is important to clarify what scale these cost estimates represent, but cost data from large pharma companies are still relevant for SMEs, as they can be used to infer costs for different scales of therapeutic development.

This mini-report covers what drives clinical trial costs and methods to reduce them, then explores what can be learned from the varying cost estimates.

What drives clinical trial costs?

There is an ongoing effort to streamline the clinical trial process to be more cost and time efficient. Several studies report on cost drivers of clinical trials, which should be considered when designing and budgeting a trial. Some of these drivers are described below:

Study size

Trial costs rise exponentially with an increasing study size, which some studies have found to be the single largest driver of trial costs1,2,3. There are several reasons for varying sample sizes between trials. For example, study size increases with trial phase progression, as phases require different study sizes based on the number of patients needed to establish the safety and/or effectiveness of a treatment. Failure to recruit sufficient patients can result in trial delays, which also increase costs4.

Trial site visits

A large study size is also correlated with a larger overall number of patient visits during a trial, which is associated with a significant increase in total trial costs2,3. Trial clinic visits are necessary for patient screening, treatment and treatment assessment, but incur significant costs for staff, site hosting, equipment, treatment, and in some cases reimbursement of patient travel costs. The number of trial site visits per patient varies between trials, where more visits may indicate longer and/or more intense treatment sessions. One study estimated a median of 11 trial visits per patient in phase III trials, with each additional visit above the median adding an estimated $2 million to trial costs2.

Number & location of clinical trial sites

A higher number of clinical trial study sites has been associated with significant increase in total trial cost3. This is a result of increased site costs, as well as associated staffing and equipment costs. These will vary with the size of each site, where larger trials with more patients often use more sites or larger sites.

Due to the lower cost and shorter timelines of overseas clinical research5,6, there has been a shift towards the globalisation of trials, with only 43% of study sites in US FDA-approved pivotal trials being in North America7. In fact, 71% of these trials had sites in lower-cost regions, where median regional site costs were 49%-97% of those in North America. Most patients in these trials were in North America (39.7%), Western Europe (21%), or Central Europe (20.4%).

Figure: Median cost per regional site as a percentage of the North American median cost, shown for comparison.

However, trials can face increased difficulties in managing and coordinating multiple sites across different regions, with concerns of adherence to the ethical and scientific regulations of the trial centre’s region5,6. Some studies have reported that multiregional trials are associated with a significant increase in total trial costs, especially those with sites in emerging markets3. It is unclear if this reported increase is a result of lower site efficiency, multiregional management costs, or outsourcing being more common among larger trials.

Clinical Trial duration

Longer trial duration has been associated with a significant increase in total trial costs3,4, with many studies estimating the clinical period at 6-8 years8,9,10,11,12,13. Longer trials are sometimes necessary, such as when evaluating the safety and efficacy of long-term drug use in the management of chronic and degenerative disease. Otherwise, delays to starting up a trial contribute to longer trials, where delays can consume budget and diminish the relevance of research4. Such delays may occur as a result of site complications or poor patient accrual.

Another aspect to consider is that the longer it takes to get a therapeutic to market (as impacted by longer trials), the longer the wait before a return on investment is seen by both the research organisation and investors. The period from development to on-market, often referred to as cycle time, can drive costs per therapeutic, as interest based on the industry’s cost of capital accrues on investments.
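To make the capitalisation idea concrete, here is a minimal sketch (our own illustration, not taken from any of the cited studies) of how out-of-pocket spending compounds at an assumed cost of capital over the development period. The yearly spend figures and the 8-year clinical period are hypothetical.

```python
def capitalised_cost(annual_spend, rate):
    """Compound each year's outlay forward to the year of market approval."""
    years = len(annual_spend)
    return sum(
        spend * (1 + rate) ** (years - 1 - t)  # years remaining until approval
        for t, spend in enumerate(annual_spend)
    )

# Hypothetical example: $100M spent per year over an 8-year clinical period
spend = [100.0] * 8                           # $M per year
out_of_pocket = sum(spend)                    # $800M before capitalisation
capitalised = capitalised_cost(spend, 0.105)  # at a 10.5% cost of capital

print(f"Out-of-pocket: ${out_of_pocket:.0f}M, capitalised: ${capitalised:.0f}M")
```

The longer the cycle time, the more years each dollar compounds, which is why capitalised estimates exceed out-of-pocket estimates and why shortening trials reduces the cost per approved drug.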

Therapeutic area under investigation

The cost to develop a therapeutic also depends on the therapeutic area, with some areas such as oncology and cardiovascular treatments being more cost-intensive than others1,2,5,6,12,14. This is in part due to variation in treatment intensity, from low-intensity treatments such as skin creams to high-intensity treatments such as multiple infusions of high-cost anti-cancer drugs2. One estimate put the highest mean pivotal trial cost per therapeutic area at $157.2M for cardiovascular trials, compared with $45.4M in oncology and a low of $20.8M in endocrine, metabolic, and respiratory disease trials1. This was against an overall median of $19M. Clinical trial costs per therapeutic area also varied by clinical trial phase. For example, trials in pain and anaesthesia have been found to have the lowest average cost for a phase I study while having the highest average cost for a phase III study6.

It is important to note that some therapeutic areas have far lower per-patient costs than others, and per-patient costs are not always indicative of total trial costs. For example, infectious disease trials generally have larger sample sizes, leading to relatively low per-patient costs, whereas trials for rare disease treatments are often limited to smaller sample sizes with relatively high per-patient costs. Despite this, trials for rare diseases are estimated to have significantly lower drug-to-market costs.

Drug type being evaluated

As mentioned in the therapeutic areas section above, treatments may vary in intensity from skin creams to multiple rounds of treatment with several anti-cancer drugs. This can drive total trial costs due to additional manufacturing and the need for specially trained staff to administer treatments.

In the case of vaccine development, phase III/pivotal trials for vaccine efficacy can be very difficult to run unless there are ongoing epidemics of the targeted infectious disease. Therefore, some cost estimates of vaccine development cover the period from the pre-clinical stages to the end of phase IIa, with the average cost for one approved vaccine estimated at $319-469 million USD in 201815.

Study design & trial control type used

Phase III trial costs vary with the type of control group used in the trial1. Uncontrolled trials were the least expensive, with an estimated mean of $13.5 million per trial; placebo-controlled trials had an estimated mean of $28.8 million, and trials with active drug comparators an estimated mean of $48.9 million. This dramatic increase in costs is partly due to the manufacturing and staffing needed to administer a placebo or active drug. In addition, drug-controlled trials require more patients than placebo-controlled trials, which in turn require more patients than uncontrolled trials2.

Reducing therapeutic development costs

Development costs can be reduced through several approaches. Many articles recommend improvements to operational efficiency and accrual, as well as deploying standardised trial management metrics4. This could include streamlining trial administration, hiring experienced trial staff, and ensuring ample patient recruitment to reduce delays in starting and carrying out a study.

Another way to reduce development costs is thorough planning of the clinical trial design by a biostatistician, whether in-house or external. Statistics consulting throughout a trial can help to determine suitable early stopping conditions and the most appropriate sample size. Sample size calculation is particularly important, as underestimation undermines experimental results, whereas overestimation leads to unnecessary costs. Statisticians can also be useful during the pre-clinical stage to audit R&D data to select the best available candidates, ensure accurate R&D data analysis, and avoid pursuing unsuccessful compounds.
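As an illustration of the sample-size trade-off described above, the sketch below uses the standard normal-approximation formula for a two-arm comparison of means. The effect size and standard deviation are hypothetical values chosen for illustration; a real trial would refine this calculation with a statistician.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Patients per arm to detect a mean difference delta, given SD sigma,
    using the normal-approximation formula for a two-sample comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Halving the detectable effect roughly quadruples the required sample size:
print(n_per_arm(delta=5, sigma=10))    # 63 per arm
print(n_per_arm(delta=2.5, sigma=10))  # 252 per arm
```

Since total trial cost scales with the number of patients, getting this number right up front directly limits both the risk of an underpowered study and the cost of an oversized one.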

Other ways to reduce development costs include the use of personalised medicine, clinical trial digitisation, and the integration of AI. Clinical trial digitisation would lead to the streamlining of clinical trial administration and would also allow for the integration of artificial intelligence into clinical trials. There have been many promising applications for AI in clinical trials, including the use of electronic health records to enhance the enrolment and monitoring of patients, and the potential use of AI in trial diagnostics. More information about this topic can be found in our blog “Emerging use-cases for AI in clinical trials”.

For more information on the methodology by which pharmaceutical development and clinical trials costs are estimated and what data has been used please see the article: https://anatomisebiostats.com/biostatistics-blog/estimating-the-costs-associated-with-novel-pharmaceutical-development-methods-and-limitations/

Cost breakdown in more detail: How is a clinical trial budget spent?

Clinical trial costs can be broken down into several categories, such as staff and non-staff costs. In a sample of phase III studies, personnel costs were found to be the single largest component of trial costs, accounting for 37% of the total, with outsourcing costs at 22%, grants and contracting costs at 21%, and other expenses at 21%3.

From a CRO’s perspective, there are many factors that are considered in the cost of a pivotal trial quotation, including regulatory affairs, site costs, management costs, the cost of statistics and medical writing, and pass-through costs27. Another analysis of clinical trial cost factors determined clinical procedure costs made up 15-22% of the total budget, with administrative staff costs at 11-29%, site monitoring costs at 9-14%, site retention costs at 9-16%, and central laboratory costs at 4-12%5,6. In a study of multinational trials, 66% of total estimated trial costs were spent on regional tasks, of which 53.3% was used in trial sites and the remainder on other management7.

Therapeutic areas and shifting trends

Therapeutic area was mentioned above as a cost driver of trials due to differences in sample sizes and/or treatment intensity. It is worth noting that, in 2013, the largest number of US industry-sponsored clinical trials were in oncology (2,560/6,199 active clinical trials with 215,176/1,148,340 patients enrolled)4,14. More recently, there has been a shift towards infectious disease trials, in part due to the need for COVID-19 trials9.

Clinical trial phases

Due to the expanding sample size as a trial progresses, average costs per phase increase from phase I through III. Median costs per phase were estimated in 2016 at $3.4 million for phase I, $8.6 million for phase II, and $21.4 million for phase III3. Estimates of cost per patient were similarly highest in phase III at $42,000, followed by phase II at $40,000 and phase I at $38,50014. The combination of an increasing sample size and increasing per-patient costs per phase leads to the drastic increase in phase costs with trial progression.
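As a rough cross-check of the figures above (illustrative only, since the phase-level and per-patient medians come from different analyses and should not strictly be combined), dividing the median cost per phase by the median cost per patient gives an implied number of patients per trial:

```python
# Median cost per trial (USD) and per patient (USD), as quoted above
phase_cost = {"I": 3.4e6, "II": 8.6e6, "III": 21.4e6}
per_patient = {"I": 38_500, "II": 40_000, "III": 42_000}

implied_n = {phase: phase_cost[phase] / per_patient[phase] for phase in phase_cost}
for phase, n in implied_n.items():
    print(f"Phase {phase}: ~{n:.0f} patients")
```

The implied sample sizes grow by roughly a factor of five from phase I to phase III, consistent with study size being the dominant driver of the cost increase across phases.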

In addition, studies may have multiple phase III trials, meaning the median estimated cost of phase III trials per approved drug ($48 million) is higher than the per-trial cost ($19 million)2. Multiple phase III trials can be used to better support marketing approval, or for therapeutics seeking approval for combination/adjuvant therapy.

There are fewer cost analyses available for phase 0 and phase IV clinical trials. Some studies report that average phase IV costs are equivalent to phase III costs but much more variable5,6.

Orphan drugs

Drugs developed for the treatment of rare diseases are often referred to as orphan drugs. Orphan drugs have been estimated to have lower clinical costs per approved drug, with capitalised costs per non-orphan and orphan drug of $412 million and $291 million respectively17. This is in part due to the sample size limits imposed on orphan drug trials by the rarity of the target disease, and the higher success rate for each compound. However, orphan drug trials are often longer than non-orphan drug trials, with average study durations of 1417 days and 774 days respectively.

NMEs

New molecular entities (NMEs) are drugs which do not contain any previously approved active molecules. Both clinical and total costs of NMEs are estimated to be higher than those of next-in-class drugs13,17. NMEs are thought to be more expensive to develop due to the greater amount of pre-clinical research needed to determine the activity of a new molecule and the increased intensity of clinical research needed to prove safety/efficacy and reach approval.

Conclusion & take-aways

There is no one answer to the cost of drug or device development, as it varies considerably by several cost drivers including study size, therapeutic area, and trial duration. Estimates of total drug development costs per approved new compound have ranged from $754 million12 to $2.6 billion10 USD over the past 10 years. These estimates do not only differ based on the data used, but also due to methodological differences between studies. The limited availability of comprehensive cost data for approved drugs also means that many studies rely on limited data sets and must make assumptions to arrive at a reasonable estimate.

There are nevertheless several practical ways to reduce study costs, including expert trial design planning by statisticians, implementation of biomarker-guided trials to reduce the risk of failure, AI integration and digitisation of trials, improving operational efficiency, improving accrual, and introducing standardised trial management metrics.

References

1. Moore T, Zhang H, Anderson G, Alexander G. Estimated Costs of Pivotal Trials for Novel Therapeutic Agents Approved by the US Food and Drug Administration, 2015-2016. JAMA Internal Medicine. 2018;178(11):1451-1457.

2. Moore T, Heyward J, Anderson G, Alexander G. Variation in the estimated costs of pivotal clinical benefit trials supporting the US approval of new therapeutic agents, 2015–2017: a cross-sectional study. BMJ Open. 2020;10(6):e038863.

3. Martin L, Hutchens M, Hawkins C, Radnov A. How much do clinical trials cost?. Nature Reviews Drug Discovery. 2017;16(6):381-382.

4. Bentley C, Cressman S, van der Hoek K, Arts K, Dancey J, Peacock S. Conducting clinical trials—costs, impacts, and the value of clinical trials networks: A scoping review. Clinical Trials. 2019;16(2):183-193.

5. Sertkaya A, Birkenbach A, Berlind A, Eyraud J. Examination of Clinical Trial Costs and Barriers for Drug Development [Internet]. ASPE; 2014. Available from: https://aspe.hhs.gov/reports/examination-clinical-trial-costs-barriers-drug-development-0

6. Sertkaya A, Wong H, Jessup A, Beleche T. Key cost drivers of pharmaceutical clinical trials in the United States. Clinical Trials. 2016;13(2):117-126.

7. Qiao Y, Alexander G, Moore T. Globalization of clinical trials: Variation in estimated regional costs of pivotal trials, 2015–2016. Clinical Trials. 2019;16(3):329-333.

8. Monitor Deloitte. Early Value Assessment: Optimising the upside value potential of your asset [Internet]. Deloitte; 2020 p. 1-14. Available from: https://www2.deloitte.com/content/dam/Deloitte/be/Documents/life-sciences-health-care/Deloitte%20Belgium_Early%20Value%20Assessment.pdf

9. May E, Taylor K, Cruz M, Shah S, Miranda W. Nurturing growth: Measuring the return from pharmaceutical innovation 2021 [Internet]. Deloitte; 2022 p. 1-28. Available from: https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/life-sciences-health-care/Measuring-the-return-of-pharmaceutical-innovation-2021-Deloitte.pdf

10. DiMasi J, Grabowski H, Hansen R. Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics. 2016;47:20-33.

11. Farid S, Baron M, Stamatis C, Nie W, Coffman J. Benchmarking biopharmaceutical process development and manufacturing cost contributions to R&D. mAbs. 2020;12(1):e1754999.

12. Wouters O, McKee M, Luyten J. Estimated Research and Development Investment Needed to Bring a New Medicine to Market, 2009-2018. JAMA. 2020;323(9):844-853.

13. Prasad V, Mailankody S. Research and Development Spending to Bring a Single Cancer Drug to Market and Revenues After Approval. JAMA Internal Medicine. 2017;177(11):1569-1575.

14. Battelle Technology Partnership Practice. Biopharmaceutical Industry-Sponsored Clinical Trials: Impact on State Economies [Internet]. Pharmaceutical Research and Manufacturers of America; 2015. Available from: http://phrma-docs.phrma.org/sites/default/files/pdf/biopharmaceutical-industry-sponsored-clinical-trials-impact-on-state-economies.pdf

15. Gouglas D, Thanh Le T, Henderson K, Kaloudis A, Danielsen T, Hammersland N et al. Estimating the cost of vaccine development against epidemic infectious diseases: a cost minimisation study. The Lancet Global Health. 2018;6(12):e1386-e1396.

16. Hind D, Reeves B, Bathers S, Bray C, Corkhill A, Hayward C et al. Comparative costs and activity from a sample of UK clinical trials units. Trials. 2017;18(1).

17. Jayasundara K, Hollis A, Krahn M, Mamdani M, Hoch J, Grootendorst P. Estimating the clinical cost of drug development for orphan versus non-orphan drugs. Orphanet Journal of Rare Diseases. 2019;14(1).

19. Speich B, von Niederhäusern B, Schur N, Hemkens L, Fürst T, Bhatnagar N et al. Systematic review on costs and resource use of randomized clinical trials shows a lack of transparent and comprehensive data. Journal of Clinical Epidemiology. 2018;96:1-11.

20. Light D, Warburton R. Demythologizing the high costs of pharmaceutical research. BioSocieties. 2011;6(1):34-50.

21. Adams C, Brantner V. Estimating The Cost Of New Drug Development: Is It Really $802 Million?. Health Affairs. 2006;25(2):420-428.

22. Thomas D, Chancellor D, Micklus A, LaFever S, Hay M, Chaudhuri S et al. Clinical Development Success Rates and Contributing Factors 2011–2020 [Internet]. BIO|QLS Advisors|Informa UK; 2021. Available from: https://pharmaintelligence.informa.com/~/media/informa-shop-window/pharma/2021/files/reports/2021-clinical-development-success-rates-2011-2020-v17.pdf

23. Wong C, Siah K, Lo A. Estimation of clinical trial success rates and related parameters. Biostatistics. 2019;20(2):273-286.

24. Chit A, Chit A, Papadimitropoulos M, Krahn M, Parker J, Grootendorst P. The Opportunity Cost of Capital: Development of New Pharmaceuticals. INQUIRY: The Journal of Health Care Organization, Provision, and Financing. 2015;52:1-5.

25. Harrington S. Cost of Capital for Pharmaceutical, Biotechnology, and Medical Device Firms. In: Danzon P, Nicholson S, editors. The Oxford Handbook of the Economics of the Biopharmaceutical Industry. New York: Oxford University Press; 2012. p. 75-99.

26. Zhuang J, Liang Z, Lin T, De Guzman F. Theory and Practice in the Choice of Social Discount Rate for Cost-Benefit Analysis: A Survey [Internet]. Manila, Philippines: Asian Development Bank; 2007. Available from: https://www.adb.org/sites/default/files/publication/28360/wp094.pdf

27. Rennane S, Baker L, Mulcahy A. Estimating the Cost of Industry Investment in Drug Research and Development: A Review of Methods and Results. INQUIRY: The Journal of Health Care Organization, Provision, and Financing. 2021;58:1-11.

28. Ledesma P. How Much Does a Clinical Trial Cost? [Internet]. Sofpromed. 2020 [cited 26 June 2022]. Available from: https://www.sofpromed.com/how-much-does-a-clinical-trial-cost


Medical Device Categorisation, Classification and Regulation in the United Kingdom

Contributor: Sana Shaikh

In this article

  • Overview of medical device categorisations and classifications for regulatory purposes in the United Kingdom
  • Summary of medical device categorisations based on type, usage and risk potential during use, as specified in the MDR and IVDR.
  • The class of medical device and its purpose determines the criteria required to meet regulatory approval. All medical devices in the UK must have a UKCA or CE marking depending on the legislation the device has been certified under.
  • Explanation of risk classifications for general medical devices and active implantable devices
  • Explanation of risk classifications for in vitro diagnostics

In the UK and EU, medical devices are regulated under the Medical Devices Regulation (MDR) or the In Vitro Diagnostics Regulation (IVDR), depending upon which category they fall under. In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for new product approval and market surveillance activities related to medical devices and other therapeutics, such as pharmaceuticals, intended for use in patients within the UK. The equivalent regulatory agency in the EU is the European Medicines Agency (EMA). The MHRA also manages the Early Access to Medicines Scheme (EAMS) to give patients access to pre-market therapeutics that have yet to receive regulatory approval, where their medical needs are unmet by existing options. To qualify for EAMS, a medicine must be designated a Promising Innovative Medicine (PIM) based on early clinical data.

Having a thorough understanding of the categorisation and class of your medical device is vital for it to undergo the appropriate assessment route and be approved and ready for market. While the scope of medical devices is incredibly broad, for regulatory purposes they tend to be classified based on device type, duration of use and level of risk. The risk class a device falls into is determined in large part by device type and duration of use, as both factors influence the level of risk to the patient. All medical devices in the UK must be designated a category and a risk classification in order to undergo the regulatory approval process.

Category (type) of Medical Device

The MHRA categorises medical devices into the following 5 categories:

  • Non-invasive – Devices which do not enter the body
  • Invasive – Devices which in whole or in part are inserted into the body’s orifices (including the external eyeball surface) or through the surface of the body, such as the skin.
  • Surgically invasive – Devices used or inserted surgically that penetrate the body through the surface of the body, such as through the skin.
  • Active – Devices requiring an external source of power, including stand-alone software.
  • Implantable – Devices intended to be totally or partially introduced into the human body (including to replace an epithelial surface or the surface of the eye) by surgical intervention and to remain in place for a period of time.

Duration of use category

Medical devices are then further categorised based upon their intended duration of use under normal circumstances.

  • Transient – intended for less than 60 minutes of continuous use.
  • Short term – intended for between 60 minutes to 30 days of continuous use.
  • Long term – intended for more than 30 days continuous use.
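The duration thresholds above can be expressed as a simple lookup. The helper below is our own sketch for illustration, not an MHRA tool; a real classification would consider the full guidance, not just duration.

```python
def duration_category(minutes: float) -> str:
    """Map intended continuous use (in minutes) to the MHRA duration category."""
    if minutes < 60:
        return "Transient"
    if minutes <= 30 * 24 * 60:  # up to 30 days of continuous use
        return "Short term"
    return "Long term"

print(duration_category(45))            # Transient
print(duration_category(7 * 24 * 60))   # Short term (7 days)
print(duration_category(90 * 24 * 60))  # Long term (90 days)
```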

More information to aid accurate medical device categorisation in the UK and EU can be downloaded here: Medical devices: how to comply with the legal requirements in Great Britain – GOV.UK (www.gov.uk)

UKCA Mark & Conformity Assessment

Further to these use, duration and risk categories, the MHRA designates 3 additional categories for the purposes of UKCA marking and conformity assessment. These categories are:

  • General medical devices – most medical devices fall into this category.
  • Active implantable devices – powered devices which are totally or partially implanted and intended to remain in the human body after the procedure.
  • In vitro diagnostics medical devices (IVDs) – equipment or system used in vitro to examine specimens from the human body.

UKCA marking and conformity assessment, with subsequent labelling, is a crucial step for a device to enter the UK market for use by patients. It should be noted that the UKCA mark is not recognised in the EU or Northern Ireland, which instead recognise the CE mark. Great Britain will not recognise the CE mark after 30 June 2023, so it will be important to hold both the UKCA and CE marks for widespread distribution of a medical device. These incompatibilities have arisen largely as a result of Brexit.

Risk classification categories for general medical devices and active implantable devices

In the UK and EU there are 4 official risk-related classes for medical devices. These classes apply to both general medical devices and active implantable devices. As noted previously, the class a device falls into is largely informed by its category and intended duration of use.

  • Class I – low risk of illness/injury resulting from use; includes the subclasses Class Is (sterile, no measuring function), Class Im (measuring function), and Class Ir (devices to be reprocessed or reused). Only self-assessment is required to meet regulatory approval.
  • Class IIa – low to medium risk of illness/injury resulting from use. Notified Body approval required.
  • Class IIb – medium to high risk of illness/injury resulting from use. Notified Body approval required.
  • Class III – high risk of illness/injury resulting from use. Notified Body approval required.

More details on these classes can be found below.

In Vitro Diagnostic Medical Devices (IVDs)

The IVDR categorises IVDs into the following categories for the purpose of obtaining regulatory approval in Great Britain. IVDs do not harm patients directly in the same way that other medical devices can, and are thus subject to a different risk assessment.

  • General IVD medical devices
  • IVDs for self-testing – intended to be used by an individual at home.
  • IVDs stated in Part IV of the UK MDR 2002, Annex II List B
  • IVDs stated in Part IV of the UK MDR 2002, Annex II List A

A more detailed explanation of these categories can be found towards the end of this article.

The EU and Northern Ireland have moved away from this list-style classification and have recently implemented the 4 risk classes outlined in Annex VIII of the IVDR, listed below. It seems likely that Great Britain may follow suit in future.

Risk Classes for IVDs

  • Class A – Laboratory devices, instruments and receptacles.
  • Class B – All devices not covered in the other classes.
  • Class C – Devices presenting a high risk to the individual patient but a lower direct risk to the wider population than Class D. Includes diagnostic devices where failure to diagnose accurately could be life-threatening. Covers companion diagnostics, genetic screening and some self-testing.
  • Class D – Devices that pose a high direct risk to the patient population, and in some cases the wider population, relating to life threatening conditions, transmissible agents in blood, biological materials for transplantation in to the human body and other similar materials.

Risk categories for general medical devices and active implantable medical devices in detail

Class I devices

These are generally regarded as low risk devices and pose little risk of illness and injury. Such devices have minimal contact with patients and the lowest impact on patient health outcomes. To self-certify your product, you must confirm that it is a class I device1,3. This may involve carrying out clinical evaluations, notifying the Medicines and Healthcare products Regulatory Agency (MHRA) of proposals to perform clinical investigations, preparing technical documentation and drawing up a declaration of conformity1. In cases where the device includes sterile products or measuring functions, approval from a UK Approved Body may still be necessary3. Devices in this category include thermometers, stethoscopes, bandages and surgical masks.

Class IIa & IIb devices

Class IIa devices are generally regarded as medium risk devices and pose moderate risk of illness and injury. Both class IIa and IIb devices must be declared as such by applying to a UK Approved Body and performing a conformity assessment3,4. For class IIa and IIb devices, there are several assessments. These include examining and testing the product or a homogeneous batch of products, auditing the production quality assurance system, auditing the final inspection and testing, or auditing the full quality assurance system3. Class IIa devices include dental fillings, surgical clamps and tracheotomy tubes4. Class IIb devices include lung ventilators and bone fixation plates4.

Class III devices

These are considered high-risk devices and pose substantial risk of illness and injury. Devices in this category are often essential for sustaining human life and, due to the high risk associated with them, are subject to the strictest regulations. In addition to the class IIa and IIb assessments, class III devices require a design dossier examination3. Class III devices include pacemakers, ventilators, drug-coated stents and spinal disc cages.
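The conformity routes described above for general medical devices can be summarised as a simple lookup. The sketch below paraphrases this article's descriptions into an illustrative table; the class labels and route wording are this article's, not an official MHRA data structure or tool.

```python
# Hypothetical sketch: UKCA conformity routes for general medical device
# risk classes, paraphrased from the descriptions above. Illustrative
# only -- not an official MHRA classification resource.
CONFORMITY_ROUTES = {
    "I": "Self-certification (UK Approved Body only if sterile or measuring)",
    "IIa": "UK Approved Body conformity assessment",
    "IIb": "UK Approved Body conformity assessment",
    "III": "UK Approved Body assessment plus design dossier examination",
}

def conformity_route(device_class: str) -> str:
    """Return the conformity route for a risk class, e.g. 'IIa'."""
    try:
        return CONFORMITY_ROUTES[device_class]
    except KeyError:
        raise ValueError(f"Unknown device class: {device_class!r}")
```

For example, `conformity_route("III")` returns the route that includes the design dossier examination, while classes IIa and IIb share the same Approved Body route.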

Risk Categories for In Vitro Diagnostics in detail

These include but are not limited to reagents, instruments, software and systems intended for in vitro examination of specimens such as tissue donations and blood4. Most IVDs do not require intervention from a UK Approved Body5. However, for IVDs that are considered essential to health, involvement of a UK Approved Body is necessary5. The specific conformity assessment procedure depends on the category of IVD concerned5.

General IVDs

These are considered a low risk to patients and include clinical chemistry analysers, specimen receptacles and prepared selective culture media4. For general IVDs, involvement from a UK Approved Body is not required5. Instead, relevant provisions in the UK MDR 2002 must be met and self-declared prior to adding a UKCA mark to the device5,6.

IVDs for self-testing

These represent a low-to-medium risk to patients and include pregnancy self-testing, urine test strips and cholesterol self-testing4. In addition to conforming to requirements for general IVDs, applications for IVDs involved in self-testing must be sent to a UK Approved Body5. This enables examination of the design of the device, such as how suitable it is for non-professional users5.

IVDs stated in Part IV of the UK MDR 2002, Annex II List B

These represent medium-to-high risk to patients and include blood glucose self-testing, PSA screening and HLA typing4. Applications for devices in this category must be sent to a UK Approved Body5. This can enable auditing of technical documentation and the quality management system6.

IVDs stated in Part IV of the UK MDR 2002, Annex II List A

These represent the highest risk to patients and include Hepatitis B blood-donor screening, ABO blood grouping and HIV blood diagnostic tests4. Due to the high risk associated with IVDs in this category, applications for devices in this category must be sent to a UK Approved Body5. By doing so, an audit of the quality management system can be performed as well as a design dossier review6. In addition, the UK Approved Body must verify each product or batch of products prior to being placed on the market5,6.
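The four IVD categories above each carry a different set of approval steps. As a summary, the sketch below maps each category to the steps described in this section; the category names and step wording are paraphrased from this article and are illustrative assumptions, not official terminology.

```python
# Hypothetical sketch: IVD categories under the UK MDR 2002 mapped to the
# approval steps described above. Paraphrased from this article;
# illustrative only.
IVD_ROUTES = {
    "general": [
        "self-declaration against UK MDR 2002 provisions",
        "apply UKCA mark",
    ],
    "self-testing": [
        "general IVD requirements",
        "UK Approved Body design examination",
    ],
    "Annex II List B": [
        "UK Approved Body application",
        "technical documentation and QMS audit",
    ],
    "Annex II List A": [
        "UK Approved Body application",
        "QMS audit and design dossier review",
        "per-product or per-batch verification",
    ],
}

def approval_steps(category: str) -> list[str]:
    """Return the approval steps for an IVD category."""
    if category not in IVD_ROUTES:
        raise ValueError(f"Unknown IVD category: {category!r}")
    return IVD_ROUTES[category]
```

Note that only the List A route includes verification of each product or batch before it is placed on the market, reflecting the highest risk level.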

Proposed updates to medical device categories in the UK

Due to the quickly evolving state of medical technology, many items that did not previously count as a medical device, such as software and AI, must now be considered as such. New proposals have been put forward as potential amendments to the existing regulations and risk classifications to accommodate newer technologies and devices. Among other proposed changes, the following novel devices have been recommended for reclassification to the highest-risk Class III.

  • Active implantable medical devices and their accessories
  • In vitro fertilisation (IVF) and assisted reproduction technology (ART) devices
  • Surgical meshes
  • Total or partial joint replacements
  • Spinal disc replacements and other medical devices that come into contact with the spinal column
  • Medical devices containing nanomaterials
  • Medical devices containing substances that are introduced into the human body by one of various methods of absorption in order to achieve their intended function
  • Active therapeutic devices with an integrated diagnostic function that determines patient management, such as closed-loop or automated systems

With the shift to a higher risk classification will come an increased demand for clinical evidence and clinical testing, including clinical trials, in order for these devices to gain regulatory approval and reach the market. While this increases the burden on manufacturers, it will benefit patient safety and end-user satisfaction. A full list of the proposed changes, including those outside Class III, can be found here: Chapter 2: Classification – GOV.UK (www.gov.uk)

Medical devices are incredibly heterogeneous, ranging from therapeutics and surgical tools to diagnostics and medical imaging software, including machine learning and AI. Accordingly, medical device research and development often requires an interdisciplinary approach. During R&D, it is important to consider for whom the device is intended, how it will be used, and under what circumstances. Similarly, it is crucial to understand the risk status of the device. By considering these attributes, the device can be successfully assessed through the appropriate regulatory approval pathway.

References

Factsheet: medical devices overview – GOV.UK (www.gov.uk)

[1] https://www.gov.uk/government/collections/guidance-on-class-1-medical-devices

[2] https://www.gov.uk/guidance/medical-devices-how-to-comply-with-the-legal-requirements

[3] https://www.gov.uk/guidance/medical-devices-conformity-assessment-and-the-ukca-mark

[4] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/640404/MDR_IVDR_guidance_Print_13.pdf

[5] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/946260/IVDD_legislation_guidance_-_PDF.pdf

[6] diagnostic medical devices IVD