Complex Adaptive Systems (CAS) Approach to Biomedicine & Public Health

While the majority of biomedical and public health research still takes a linear, reductive approach to arriving at empirical insight, reality is in most cases neither linear nor reducible. A complex adaptive systems approach, like reality, is non-linear and high-dimensional. There are real benefits to taking a linear, cause-and-effect reductionist approach, in that a complex problem and its solution are simplified into terms that can be understood and predicted. Where this falls short is that the resulting predictions often fail to hold up in the real world, where outcomes tend to be unpredictable.

Genomics, proteomics, transcriptomics and other "omics" techniques have generated an unprecedented amount of molecular genetics data. These data can be combined with larger-scale data, from cellular processes to environmental influences on disease states, to produce highly sophisticated multi-level models of the contributing factors in human health and disease.

One area currently evolving towards a more personalised, nuanced approach, albeit still a linear one, is clinical trials. By introducing a biomarker component to a clinical trial, for example to evaluate drug efficacy, the dimensionality of the problem is slightly increased in order to arrive at more targeted and accurate solutions. More specifically, the number of patient sub-categories in the trial increases to accommodate the various biomarker groups, which may respond more or less well to different pharmacological approaches to the same disease. Increasing the dimensionality of the problem beyond this would, for now, not be feasible or even helpful. On the other hand, understanding the interplay between biomolecular processes and environmental interactions, in order to gain insight into disease processes themselves and thereby into which biochemical pathways oncology drugs should target, is something that clearly benefits from a non-linear approach.

Another example of a system that benefits from a non-linear approach is public health service provision, where the aim is to identify changes that increase prevention, early intervention and treatment effectiveness while reducing service cost for government and patients alike. Both of the above examples require attention to both macro and micro processes.

Some components of complex adaptive systems: connectivity, self-organisation, emergence, fractal patterns, non-linearity, feedback loops, adaptation, nested systems, stochasticity, simple rules, chaotic behaviour, iteration, sub-optimality, requisite variety, and optimisation at the edge of chaos.

Whether modelling clinical health service networks or biological processes, complex adaptive systems share several common characteristics.

Components of complex adaptive systems

Massive interdependencies and elaborate connectivity

The complex adaptive systems approach shifts emphasis away from studying individual parts (as in conventional medical science, which produces notably fragmented results) towards characterising the organisation of those parts in terms of their inherently dynamic interactions. CAS are open rather than closed systems, because it is exogenous elements impacting on the system that cause the disruption required for growth.

Complex adaptive systems can be understood in terms of relations and networks. System processes occur in networks that link component entities, or agents. This approach emphasises that structures are dynamic, and that it is the process of becoming, rather than the being itself, that is of empirical interest.

Necessarily transdisciplinary or multi-disciplinary

A complex adaptive systems approach is necessarily transdisciplinary. It requires numerous disparate experts to collaborate in combining myriad biological, physical and societal sciences into a holistic model. This model should aim to represent the pertinent simultaneous top-down and bottom-up processes that reveal contexts and relationships within the observed system dynamics.

Self-organising, emergent behaviour

Complex adaptive systems are self-organising in the sense that observed patterns are open-ended, potentially unfinished and cannot be predicted in the conventional sense. Rules of cause and effect are context-dependent and cannot be applied rigidly.

A self-organising dynamic structure, identifiable as a pattern, emerges from spontaneous interactions between individual agents or elements. This pattern then shapes the interactions of those individuals in a continual top-down, bottom-up symbiosis.

While linear models represent a reductionist, closed conceptualisation of the phenomena under analysis, a complex systems approach embraces a high dimensionality true to the myriad real-world phenomena composing a system. This requires that the system be treated as open and of uncertain ontology, and thus as lacking predictive capacity with regard to the outcomes of system dynamics.

As an emergent phenomenon, a complex adaptive system can be understood by interacting with it rather than through analysis or static modelling. This approach is concerned with "state change", with evaluating "how things are becoming" rather than "how things are". How did today's state emerge from yesterday's trajectories and process dynamics?

Fractal engagement entails that the system as a whole orientates through multiple actions. The same data can produce frameworks at the level of responsibility of every individual agent. Using a public health intervention as an example, individual agents make decisions, based on the data, about what changes they can make tomorrow within their own sphere of competence, rather than overarching changes being dictated top-down or determined by others.

Feedback loops

Feedback loops link individual parts into an overarching dynamic structure, and can be positive or negative.

Negative feedback loops are stabilising: they dampen oscillations, moving the system or component closer to equilibrium. Positive feedback loops are morphogenic: they increase the frequency and amplitude of oscillations, driving the system away from homeostasis and leading to changes in the underlying structure of the system.

Positive feedback loops, while facilitating growth and adaptation, tend towards chaos and decay, and are thus crucially counterbalanced by simultaneously operating negative feedback loops. Evolution is thought to occur as a series of phase transitions, back and forth, between ordered and disordered states.
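
These two regimes are easy to see in a toy simulation (illustrative only, not drawn from the references): a feedback gain below one in magnitude damps oscillations back towards equilibrium, while a gain above one amplifies them away from homeostasis.

```python
def simulate(gain: float, steps: int = 50, x0: float = 1.0) -> list:
    """Iterate x[t+1] = gain * x[t] around an equilibrium at 0."""
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(gain * xs[-1])
    return xs

# Negative feedback (|gain| < 1): oscillations dampen towards equilibrium.
damped = simulate(gain=-0.8)
# Positive feedback (|gain| > 1): oscillations grow, driving the system
# away from homeostasis until the underlying structure must change.
amplified = simulate(gain=-1.2)

print(f"after 50 steps, negative feedback: |x| = {abs(damped[-1]):.5f}")
print(f"after 50 steps, positive feedback: |x| = {abs(amplified[-1]):.1f}")
```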

Both top-down and bottom-up “causality”

While CAS models describe elements in terms of possibilities and probabilities rather than cause and effect in the linear sense, there is a clear interplay between top-down and bottom-up causality and influence on the dynamic flows and trajectories of any system. This very much mirrors real-world systems. One example is the human body, where both conscious thought (top-down) and biomolecular processes such as hormonal and neurochemical fluctuations (bottom-up) affect mood, which in turn has many downstream flow-on effects that shift the system back and forth between health and disease. One such manifestation is stress-induced illness of various kinds. As a social example, many instances of top-down and bottom-up causation can of course be found in public health and epidemiological settings.

This has been a non-exhaustive description of just some key components of complex adaptive systems. The main purpose has been to differentiate the CAS paradigm from the more mainstream biomedical research paradigm and approach. For a deeper dive into the concepts mentioned, see the references below.

References:

https://core-cms.prod.aop.cambridge.org/core/services/aop-cambridge-core/content/view/F6F59CA8879515E3178770111717455A/9781108498692c7_100-118.pdf/role_of_the_complex_adaptive_systems_approach.pdf

Carmichael T., Hadžikadić M. (2019) The Fundamentals of Complex Adaptive Systems. In: Carmichael T., Collins A., Hadžikadić M. (eds) Complex Adaptive Systems. Understanding Complex Systems. Springer, Cham. https://doi.org/10.1007/978-3-030-20309-2_1

https://www.health.org.uk/sites/default/files/ComplexAdaptiveSystems.pdf

Milanesi, L., Romano, P., Castellani, G., Remondini, D., & Liò, P. (2009). Trends in modeling Biomedical Complex Systems. BMC Bioinformatics, 10(Suppl 12), I1. https://doi.org/10.1186/1471-2105-10-S12-I1

Sturmberg, J. P. (2021). Health and Disease Are Dynamic Complex-Adaptive States: Implications for Practice and Research. Frontiers in Psychiatry, 12, 595124. https://doi.org/10.3389/fpsyt.2021.595124

Master Protocols for Clinical Trials

Part 1: Basket & Umbrella Trial Designs

Introduction

As the clinical research landscape becomes ever more complex and interdisciplinary, alongside an evolving genomic and biomolecular understanding of disease, the statistical designs that underpin this research must adapt to keep pace. The accuracy of evidence and the speed with which novel therapeutics are brought to market remain hurdles to be surmounted.

Traditionally, efficacy studies and non-inferiority clinical trials in the drug development space included only broad disease states, usually with patients randomised to one of two arms: the new treatment or an existing standard treatment. Due to patient biomarker heterogeneity, effective treatments could be left unsupported by evidence. Similarly, treatments found effective in a clinical trial do not always translate into real-world effectiveness in a broader range of patients.

Our current ability to assess individual genomic, proteomic and transcriptomic data and other patient biomarkers for disease, as well as immunologic and receptor-site activity, has shown that different patients respond differently to the same treatment, and that the same disease may benefit from different treatments in different patients – thus the beginnings of precision medicine. In addition, a single therapeutic may be effective against a number of different diseases, or subclasses of a disease, based on the agent's mechanism of action on molecular processes common to the disease states under evaluation.

Master protocols, or complex innovative designs, pool resources to avoid redundancy and test multiple hypotheses under one clinical trial, rather than carrying out multiple clinical trials separately over a longer period of time.

Because this evolution in the clinical research paradigm is fairly novel, and because of the inherent flexibility within each study design, conflicting information exists around the definition and characterisation of master protocols such as basket and umbrella clinical trials; in the published literature the terms "basket" and "umbrella" have at times been used interchangeably or left ill-defined. For this reason a brief definition and overview of basket and umbrella clinical trials is included in the paragraphs that follow. Drawing on systematic reviews of existing research, it seeks the clarity of consensus before detailing some key statistical and operational elements of each design.

Diagram: a basket trial design, under a master protocol for biomarker-based clinical trials.

Basket trial:

A basket clinical trial design tests a targeted therapy, such as a drug or treatment device, on multiple disease states characterised by a common molecular process that is impacted by the treatment's mechanism of action. These disease states may also share a common genetic or proteomic alteration that researchers are looking to target.

Basket trials can be either exploratory or confirmatory, and range from fully randomised, controlled, double-blinded designs to single-arm designs, or anything in between. Single-arm designs are an option when feasibility is limited, and are more focused on the early-phase question of whether a particular treatment is efficacious or has clear-cut commercial potential, evidenced by a sizable enough reduction in disease symptomatology. Depending on the nuances of the patient populations being evaluated, final study data may be analysed by pooling disease states or by analysing each disease state separately. Basket trials allow drug development companies to target the lowest-hanging fruit in terms of treatment efficacy, focusing resources on therapeutics with the highest potential for success in terms of real patient outcomes.

Diagram: an umbrella trial design, under a master protocol.

Umbrella trial:

An umbrella clinical trial design consists of multiple targeted treatments of a single disease where patients can be sub-categorised into biomarker subgroups defined by molecular characteristics that may lend themselves to one treatment over another.

Umbrella trials can be randomised, controlled, double-blind studies in which each intervention and control pair is analysed independently of the other treatments in the trial; where feasibility issues dictate, they can instead be conducted without a control group, with results analysed together in order to compare the different treatments directly.

Umbrella trials may be useful when a treatment has shown efficacy in some patients and not others. They increase the potential for confirmatory trial success by homing in on the patient sub-populations most likely to benefit due to their biomarker characteristics, rather than grouping all patients together as a whole.

Basket & Umbrella trials compared:

Both basket and umbrella trials are typically biomarker-guided. The difference is that basket trials aim to evaluate tissue-agnostic treatments across multiple diseases with common molecular characteristics, whereas umbrella trials aim to evaluate nuanced treatment approaches to the same disease based on differing molecular characteristics between patients.

Biomarker-guided trials have an additional feasibility constraint compared with non-biomarker-guided trials, in that the size of the eligible patient pool is reduced in proportion to the prevalence of the biomarker(s) of interest within that pool. This is why master protocol methodology becomes instrumental in enabling these appropriately complex research questions to be pursued.

Statistical concepts and considerations of basket and umbrella trials

Effect size

Basket and umbrella trials generally require a larger effect size than traditional clinical trials in order to achieve statistical significance. This is in large part due to their smaller sample sizes and the higher variance that comes with them. While patient heterogeneity in terms of genomic or molecular diversity, and thus expected treatment outcome, is reduced by the precision targeting of the trial design, a certain degree of between-patient heterogeneity is only to be expected when relying on treatment arms with very small sample sizes.

If resources, including time, are tight, then basket trials enable drug developers to focus on less risky treatments that are more likely to end in profitability. It should be noted that this does not always mean that treatments rejected by basket trials are truly clinically ineffective. A single-arm exploratory basket trial could end up rejecting a potential new treatment that, if subjected to a standard trial with more drawn-out patient acquisition and a larger sample size, would have been deemed effective at a smaller effect size.

Screening efficiency

If researchers carry out a separate clinical study for each biomarker of interest, then a separate screening sample needs to be recruited for each study. The rarer the biomarker, the larger the recruited screening sample needs to be to find enough people with the biomarker to participate in the study, and this number is multiplied by the number of biomarkers. A benefit of master protocols is that a single sample of people can be screened for multiple biomarkers at once, greatly reducing the required screening sample size.

For example, researchers interested in 4 different biomarkers could collectively reduce the required screening sample by roughly three quarters compared to conducting separate clinical studies for each biomarker. This maximisation of resources can be particularly helpful when dealing with rare biomarkers or diseases.
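
The arithmetic behind this claim can be sketched as follows. The per-group target and biomarker prevalences are hypothetical, and the calculation ignores biomarker overlap and screening failures.

```python
# Hypothetical figures: 30 patients needed per biomarker group, each
# biomarker present in 5% of the screened population.
target_per_group = 30
prevalence = {"BM1": 0.05, "BM2": 0.05, "BM3": 0.05, "BM4": 0.05}

# Separate trials: each study recruits and screens its own sample.
separate_total = sum(target_per_group / p for p in prevalence.values())

# Master protocol: one shared sample is screened for all four biomarkers
# at once, so the screening burden is set by the rarest biomarker alone.
shared_total = max(target_per_group / p for p in prevalence.values())

print(f"separate studies screen ~{separate_total:.0f} people in total")  # ~2400
print(f"a master protocol screens ~{shared_total:.0f} people")           # ~600
print(f"reduction: {1 - shared_total / separate_total:.0%}")             # 75%
```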

Patient allocation considerations

If the relevant biomarkers are not mutually exclusive, a patient could fit into multiple biomarker groups for which treatment is being assessed in the study. In this scenario a decision has to be made as to which group the patient will be assigned, and that decision may be made at random where appropriate. If belonging to two overlapping biomarker groups is problematic in terms of introducing bias at small sample sizes, or if several patients share the same overlap, a decision may be made to collapse the two biomarkers into a single group or to eliminate one of the groups. If a rare genetic mutation is a priority focus of the study, then feasibility would dictate that the patient be assigned to that biomarker group.
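
A toy version of such an allocation rule is sketched below. The priority biomarker and the random tie-breaking are hypothetical choices for illustration, not drawn from any specific trial protocol.

```python
import random

# Hypothetical priority biomarker: a rare mutation the study is built around.
PRIORITY_BIOMARKERS = ["rare_mutation"]

def allocate(positive_biomarkers: list, rng: random.Random) -> str:
    """Assign a patient with overlapping biomarker positives to one group.

    A study-defined priority biomarker wins outright; otherwise the tie
    is broken at random, avoiding systematic allocation bias.
    """
    for biomarker in PRIORITY_BIOMARKERS:
        if biomarker in positive_biomarkers:
            return biomarker
    return rng.choice(positive_biomarkers)

rng = random.Random(42)  # seeded, so the allocation is reproducible/auditable
print(allocate(["BM1", "BM2"], rng))            # random tie-break between groups
print(allocate(["BM1", "rare_mutation"], rng))  # priority biomarker wins
```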

Sample size calculations

Generally speaking, sample size calculation for basket trials should be based on the overall cohort, whereas sample size calculations for umbrella trials are typically undertaken individually for each treatment.

Basket and umbrella trials can be useful in situations where a smaller sample size is more feasible due to the specifics of the patient population under investigation. Statistically designing for this smaller sample size typically comes at the cost of necessitating a greater effect size (difference between treatment and control), which translates to lower overall study power and a greater chance of type II error (a false negative result) compared to a standard clinical trial design. Despite these limitations, master protocols such as basket and umbrella trials allow the evaluation of treatments, to the highest possible level of evidence, in populations that might otherwise be too heterogeneous or rare to evaluate using a traditional phase II or III trial.
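
The trade-off between detectable effect size and per-arm sample size can be made concrete with the standard normal-approximation formula for a two-arm comparison of means – a generic textbook calculation, not one specific to master protocols.

```python
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Per-arm sample size for a two-arm comparison of means
    (normal approximation; effect_size is the standardised difference,
    (mu_treatment - mu_control) / sigma)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # power requirement
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# The smaller the effect a trial must be able to detect, the larger each
# arm must be; small biomarker-defined arms can only detect large effects.
for d in (0.2, 0.5, 0.8):
    print(f"standardised effect {d}: ~{n_per_arm(d):.0f} patients per arm")
# effect 0.2 -> ~392 per arm; 0.5 -> ~63; 0.8 -> ~25
```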

Randomisation and control

Randomised controlled designs are recommended for confirmatory analysis of an established treatment or target of interest. Patients in the control group typically receive the established standard of care for their particular disease or, in the absence of one, a placebo.

In basket trials the established standard of care is likely to differ by disease or disease sub-type. For this reason it may be necessary for randomised controlled basket trials to pair a control group with each disease sub-group, rather than incorporating a single overall control group and potentially pooling results from all diseases under one statistical analysis of treatment success. It is worth considering whether each disease type and its corresponding control pair could be analysed separately, to enhance statistical robustness in a truly randomised controlled methodology.

Single-arm (non-randomised) designs are sometimes necessary for exploratory analysis of potential treatments or targets. These designs often require a greater margin of success (treatment efficacy) to achieve statistical significance, as a trade-off for the smaller sample size required.

Blinding

To increase the quality of evidence, all clinical studies should be double-blinded where possible; from a statistical perspective, double-blinding is recommended in order to truly evaluate the effectiveness of a treatment without undue bias.

Aside from the increased risk of type II error that may be inherent in master protocol designs, there is a greater potential for statistical bias to be introduced. Bias can creep in in myriad ways, and it reduces the quality of evidence that a study can produce. Two key sources of bias are lack of randomisation (mentioned above) and lack of blinding.

Single-arm trials do not include a control arm, and therefore patients cannot be randomised to a treatment arm in which double-blinding of patients, practitioners, researchers, data managers and others would prevent various types of bias from creeping in to influence the study outcomes. With so many factors at play, it is important not to overlook the importance of study blinding, and to implement it whenever feasible.

If the priority is getting a new treatment or product to market fast, to benefit patients and potentially save lives, accommodating this bias can be a necessary trade-off. It is, after all, typically quite a challenge to obtain clinical data and patient populations that are homogeneous and well matched to any great degree, and this reality is especially noticeable with rare diseases or rare biomarkers.

Biomarker assay methodology

The reliability of the biological variables included in a clinical trial should be assessed; for example, the established sensitivity and specificity of particular assays needs to be taken into account. When considering patient allocation by biomarker group, the degree of potential inaccuracy in this allocation can have a significant impact on trial results, particularly with a small sample size. If the false positive rate of a biomarker assay is too high, the wrong patients will qualify for treatment arms, which in some cases may reduce the statistical power of the study.
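
A toy calculation illustrates this dilution. Assuming (hypothetically) that wrongly enrolled, biomarker-negative patients derive no benefit, the effect observed in the enrolled arm shrinks with the assay's positive predictive value:

```python
# Hypothetical numbers: effect in truly biomarker-positive patients vs none
# in patients enrolled via an assay false positive.
true_effect = 0.50          # standardised effect in genuine biomarker carriers
effect_in_negatives = 0.0   # assumed benefit in wrongly enrolled patients

def observed_effect(ppv: float) -> float:
    """Average effect seen in the enrolled arm, given the assay's positive
    predictive value (fraction of enrolled patients who truly carry the
    biomarker)."""
    return ppv * true_effect + (1 - ppv) * effect_in_negatives

for ppv in (1.0, 0.9, 0.7):
    print(f"PPV {ppv:.0%}: observed effect {observed_effect(ppv):.2f}")
# 100% -> 0.50, 90% -> 0.45, 70% -> 0.35: a diluted apparent effect,
# and hence lower power at any fixed sample size.
```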

A further consideration of assay methodology pertains to the potential for non-uniform bio-specimen quality across collection sites, which may bias study results. A monitoring framework should be considered in order to mitigate this.

The patient tissue samples required for assays can inhibit feasibility, increase time and cost in the short term, and make study reproducibility more complicated. While this is important to note, these techniques are in many cases necessary for effectively assessing treatments based on our contemporary understanding of many disease states, such as cancer within the modern oncology paradigm. Without incorporating this level of complexity and personalisation into clinical research it will not be possible to develop evidence-based treatments that translate into real-world effectiveness, and thus widespread positive outcomes for patients.

Data management and statistical analysis

The ability to statistically analyse multiple research hypotheses at once within a single dataset increases efficiency at the biostatistician's end and provides frameworks for greater reproducibility of the methodology and final results, compared to the execution and analysis of multiple separate clinical trials testing the same hypotheses. Master protocols also enable increased data sharing and collaboration between sites and stakeholders.

Deloitte research has estimated that master protocols can save clinical trials 12-15% in cost and 13-18% in study duration. These savings of course apply to situations where master protocols are a good fit for the clinical research context, rather than to the blanket application of these study designs across any and all clinical studies. Applying a master protocol design to the wrong clinical study could actually increase required resources and costs without benefit; it is therefore important to assess whether a master protocol design is indeed the optimal approach for the goals of a particular clinical study or studies.

Diagram: umbrella trials and master protocols for precision medicine.

References:

Bitterman DS, Cagney DN, Singer LL, Nguyen PL, Catalano PJ, Mak RH. Master Protocol Trial Design for Efficient and Rational Evaluation of Novel Therapeutic Oncology Devices. J Natl Cancer Inst. 2020 Mar 1;112(3):229-237. doi: 10.1093/jnci/djz167. PMID: 31504680; PMCID: PMC7073911.

Lesser N, Na B. Master protocols: shifting the drug development paradigm. Deloitte Center for Health Solutions.

Lai TL, Sklar M, Thomas N. Novel clinical trial solutions and statistical methods in the era of precision medicine. Technical Report No. 2020-06, June 2020.

Renfro LA, Sargent DJ. Statistical controversies in clinical research: basket trials, umbrella trials, and other master protocols: a review and examples. Ann Oncol. 2017 Jan 1;28(1):34-43. doi: 10.1093/annonc/mdw413. PMID: 28177494; PMCID: PMC5834138.

Park, J.J.H., Siden, E., Zoratti, M.J. et al. Systematic review of basket trials, umbrella trials, and platform trials: a landscape analysis of master protocols. Trials 20, 572 (2019). https://doi.org/10.1186/s13063-019-3664-1

Distributed Ledger Technology for Clinical & Life Sciences Research: Some Use-Cases for Blockchain & Directed Acyclic Graphs

Applications of blockchain and other distributed ledger technology (DLT), such as directed acyclic graphs (DAGs), to clinical trials and life sciences research are rapidly emerging.

Distributed ledger technology (DLT) such as blockchain has myriad use-cases in life sciences and clinical research.

Distributed ledger technology has the potential to solve many of the problems that currently plague data collection, management and access in clinical and life sciences research, including clinical trials. DLT is an innovative approach to operating in environments where trust and integrity are paramount; paradoxically, it removes the need for trust in any individual component while providing full transparency into the micro-environment of the platform's operations as a whole.

Currently the two predominant forms of DLT are blockchain and directed acyclic graphs (DAGs). While quite distinct from one another, the two technologies were in theory developed to serve similar purposes or address the same goals. In practice, blockchain and DAGs may have optimal use-cases that differ in nature from one another, or may be better equipped to serve different goals – the nuance of which is to be determined on a case-by-case basis.

Bitcoin is the first known example of blockchain; however, blockchain goes well beyond the realms of bitcoin and cryptocurrency use-cases. One of the earliest and currently predominant DAG DLT platforms is IOTA, which has proved itself in a plethora of use-cases that go well beyond what blockchain can currently achieve, particularly within the realm of the Internet of Things (IoT). In fact, IOTA has been operating an industry data marketplace since 2017 which makes it possible to store, sell (via micro-transactions) and access data streams via a web browser. For the purposes of this article we will focus on DLT applications in general and include use-cases in which blockchain and DAGs can be employed interchangeably. Before we begin, what is distributed ledger technology?

The IOTA Tangle has already been implemented in a plethora of use-cases that may be beneficially translated to clinical and life sciences research.

Source: iota.org. IOTA's Tangle is an example of directed acyclic graph (DAG) distributed ledger technology. IOTA has been operating an industry data marketplace since 2017.
DLT is a decentralised digital system which can be used to store data and record transactions in the form of a ledger or smart contract. Smart contracts can be set up to form a pipeline of conditioned (if-then) events, or transactions, much like an escrow in finance, which are shared across nodes on the network. Nodes are used both to store data and to process transactions, with multiple (if not all) nodes accommodating each transaction – hence the decentralisation. Transactions themselves are a form of dynamic data, while a data set is an example of static data.

Both blockchain and DAGs employ advanced cryptographic algorithms which, as of today, make them extremely difficult to tamper with. This is a huge benefit in the context of sensitive data collection, such as patient medical records or confidential study data: data can be kept secure, private and untampered with, and shared efficiently with whomever requires access. Because each interaction or transaction is recorded, the integrity of the data is upheld in what is considered a "trustless" exchange. Because data is shared across multiple nodes for all involved to witness, records become harder to manipulate or change in an underhanded way. This is important in the collection of patient records or experimental data destined for statistical analysis. Any alterations made to the data are recorded across the network for all participants to see, enabling true transparency. All transactions can come in the form of smart contracts which are time-stamped and tied to a participant's identity via the use of digital signatures.
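
As a minimal illustration of how hash-linking makes a ledger tamper-evident, here is a hypothetical sketch in Python. It is a toy model only: real DLT platforms add network consensus, digital signatures and replication across nodes, none of which are modelled here.

```python
import hashlib
import json
import time

def make_record(data: dict, prev_hash: str) -> dict:
    """Append a tamper-evident record: each entry embeds the hash of its
    predecessor, so altering any historical entry changes every downstream
    hash and is immediately detectable."""
    body = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

def verify(chain: list) -> bool:
    """Recompute every hash; any retroactive edit breaks the chain."""
    for i, rec in enumerate(chain):
        expected = dict(rec)
        stored = expected.pop("hash")
        recomputed = hashlib.sha256(
            json.dumps(expected, sort_keys=True).encode()
        ).hexdigest()
        if stored != recomputed:
            return False
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_record({"patient": "P001", "hb": 13.2}, prev_hash="genesis")]
chain.append(make_record({"patient": "P001", "hb": 13.4, "note": "correction"},
                         chain[-1]["hash"]))
print(verify(chain))            # True
chain[0]["data"]["hb"] = 9.9    # retroactive tampering...
print(verify(chain))            # ...False: detected
```

Note how the correction is appended as a new record while the original entry remains intact – the same append-only behaviour described below for study data.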

In this sense DLT is able to speed up transactions and processes, while reducing cost, due to the removal of a middle-man or central authority overseeing each transaction or transfer of information. DLT can be public or private in nature. A private blockchain, for example, does have a trusted intermediary who decides who has access to the blockchain, who can participate on the network, and which data can be viewed by which participants. In the context of clinical and life sciences research this could be a consortium of interested parties, i.e. the research team, or an industry regulator or governing body. In a private blockchain the transactions themselves remain decentralised, while the blockchain itself has built-in permission layers that allow full or partial visibility of data depending upon the stakeholder. This is necessary in the context of sharing anonymised patient data and blinding in randomised controlled trials.
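
The idea of permission layers can be illustrated with a similarly hypothetical sketch: the same ledger record is exposed differently depending on the stakeholder's role, so that, for example, a blinded outcome assessor never sees treatment allocation. Real platforms enforce such views cryptographically or at the node level rather than with an in-memory filter, and the roles and fields below are invented for illustration.

```python
# Toy sketch of permission-layer redaction (hypothetical roles and fields):
# a private DLT platform can expose the same record differently per stakeholder.
RECORD = {"patient_id": "P001", "arm": "treatment", "outcome": 0.42}

# Which fields each role may see; a blinded assessor must not see the arm.
VISIBLE_FIELDS = {
    "sponsor":          {"patient_id", "arm", "outcome"},
    "blinded_assessor": {"patient_id", "outcome"},
    "public_auditor":   {"outcome"},
}

def view(record: dict, role: str) -> dict:
    """Return the partial view of a ledger record permitted to a role."""
    allowed = VISIBLE_FIELDS[role]
    return {k: v for k, v in record.items() if k in allowed}

print(view(RECORD, "blinded_assessor"))  # {'patient_id': 'P001', 'outcome': 0.42}
```
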
Blockchain and Hashgraph are two examples of distributed ledger technology (DLT) with applications which could achieve interoperability across healthcare, medicine, insurance, clinical trials and life sciences research.

Source: Hedera Hashgraph whitepaper.
Due to the immutable nature of each ledger transaction, or smart contract, stakeholders are unable to alter or delete study data without consensus across the whole network. In such a situation an additional transaction is recorded and time-stamped on the blockchain, while the original transaction, which recorded the data in its original form, remains intact. This property helps to reduce the incidence of human error, such as data entry error, as well as any underhanded alterations with the potential to sway study outcomes.

In a clinical trials context, the job of the data monitoring committee, and any other form of auditing, becomes much more straightforward. DLT also allows for complete transparency in all financial transactions associated with the research. Funding bodies can see exactly where all funds are being allocated and at what time points. In fact, every aspect of the research supply chain, from inventory to event tracking, can be made transparent to the desired entities. Smart contracts operate among participants in the blockchain, and also between the trusted intermediary and the DLT developer whose services have been contracted to build the platform framework, such as a private blockchain. The service contracts will need to be negotiated in advance so that the platform is tailored to adequately conform to individualised study needs. Once processes are in place and streamlined, the platform can be replicated in comparable future studies.

DLT can address the problem of duplicate records in study data or patient records, and make longitudinal data collection more consistent and reliable across multiple life cycles. Many disparate stakeholders, from doctor to insurer or researcher, can share the same patient data source while maintaining patient privacy and improving data security. Patients can retain access to their data and decide with whom to share it, which clinical studies to participate in, and when to give or withdraw consent.

DLT, such as blockchain or DAGs, can improve collaboration by making the sharing of technical knowledge easier and by centralising data or medical records, in the sense that they are located on the same platform as every other transaction taking place. This results in easier shared access for key stakeholders, shorter negotiation cycles due to improved coordination, and more consistent and replicable clinical research processes.

From a statistician's perspective, DLT should result in data of higher integrity, which yields statistical analysis of greater accuracy and produces research with more reliable results that can be better replicated and validated in future research. Clinical studies will be streamlined by the removal of much bureaucracy, and will therefore be more time- and cost-effective to implement as a whole. This is particularly important in a micro-environment with many moving parts and disparate stakeholders, such as the clinical trials landscape.


References and further reading:

From Clinical Trials to Highly Trustable Clinical Trials: Blockchain in Clinical Trials, a Game Changer for Improving Transparency?
https://www.frontiersin.org/articles/10.3389/fbloc.2019.00023/full#h4

Clinical Trials of Blockchain
https://www.phusewiki.org/docs/Frankfut%20Connect%202018/TT/Papers/TT18-1-paper-clinical-trials-on-blockhain-v10-19339.pdf

Blockchain technology for improving clinical research quality
https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-017-2035-z

Blockchain to Blockchains in Life Sciences and Health Care
https://www2.deloitte.com/content/dam/Deloitte/us/Documents/life-sciences-health-care/us-lshc-tech-trends2-blockchain.pdf