In the early 20th century, the scientific community was in upheaval. Fueled by the advent of in vitro cell culture and the dawning acceptance of chromosome theory, researchers began to turn toward increasingly reductionist models, despite protests from scientists like renowned embryologist Ernest Everett Just.

“[Living matter] can never be divorced from its milieu,” Just wrote. “Our investigations of [life], however much for purposes of more refined and exact study… should never lose sight of the fact that the cell as organism is part with and of its environment.”1

“Holists” like Just believed that living things—be they cells or entire organisms—are as much products of their environment as they are of their internal matter. In contrast, reductionists took a more mechanical view, believing that reducing studies to the cells and chromosomes—life’s component parts—was sufficient to understand the nature and function of a living system.

For most of the last century, reductionism has dominated the life sciences to great effect. Two-dimensional (2D) cell culture models, such as cells grown in petri dishes, are foundational tools for researchers studying human physiology. However, it is increasingly appreciated that cells in the human body exist in a dynamic microenvironment where fluid flow, signaling gradients, biomechanical forces, and other impactful features are in constant flux. True to Just’s warning, removing cells from this natural complexity can significantly alter their behavior, diminishing the translational value of these models. This realization has fueled a resurgence of holism and, with it, the rise of microphysiological systems.

Microphysiological Systems

“Microphysiological system” (MPS) broadly refers to any advanced in vitro system that allows cells to be cultured in a more natural, physiologically relevant environment. Unlike traditional 2D cell culture systems, wherein cells grow as monolayers atop plastic or glass dishes under static conditions, MPS expose cells to structural, material, or biophysical elements that replicate their native tissue physiology. Three of the most common MPS include:

  • Organoids: Three-dimensional (3D) spherical clusters of stem-cell-derived and organ-specific cell types that are typically grown in an extracellular matrix (ECM), where they replicate certain structural and functional features of the organs they represent.

  • Micropatterned tissue constructs: These are static systems that are typically designed to grow heterogeneous cell populations in a 3D scaffold. Microengineering technology allows cells and their tissue-specific ECM proteins to be precisely positioned within a scaffold structure that closely resembles in vivo tissue.

  • Organ-on-a-Chip Technology: The most advanced type of MPS, Organ-Chips are 3D microengineered tissues that integrate microfluidic technology, enabling replication of fluid flow dynamics and other key biophysical forces that cells would naturally be exposed to in vivo.

By culturing cells in a more complex microenvironment that mimics their natural setting, MPS encourage cells to behave as they would in vivo, both in terms of transcriptomic profiles and phenotypic responses to various stimuli. Organ-Chips in particular have been demonstrated to recapitulate organ-level function for the liver, producing characteristic metabolites, metabolizing xenobiotics, and displaying signs of tissue damage in response to hepatotoxic drugs.2,3,4

As a result, the use of MPS in both the pharmaceutical industry and academia has greatly increased in recent years, with applications ranging from the general study of human physiology to drug safety testing.


Human Microphysiological Systems for Drug Development

Before candidate drugs can be tested in humans, they are typically screened in highly reductionist model systems, such as 2D cell cultures, before being tested in animals. While this process is valuable, the models that usher compounds toward the clinic are rife with limitations. Traditional culture systems lack physiological relevance and thus poorly predict how a compound will affect cells in a dynamic in vivo setting. Animal models provide the complexity of a living organism, but they are genetically and physiologically distinct from humans, which can greatly affect the perceived safety and efficacy of candidates.

These truths reiterate Just’s observation that living matter can never be divorced from its natural environment, and MPS are designed to provide this kind of environment for candidate compounds. Mounting evidence indicates that MPS are better able to predict how drug candidates will affect human tissues relative to both animal models and traditional 2D culture systems.3,4 As such, drug developers are increasingly integrating MPS into preclinical drug development pipelines.

As the adoption of human microphysiological systems for drug development continues to grow, the technology will likely evolve further toward a more holistic representation of human physiology. The Defense Advanced Research Projects Agency (DARPA) has already invested5 in developing human-on-a-chip systems, wherein multiple Organ-Chips are linked through microfluidics to emulate multi-organ physiology. The advancing tide of ever more complex MPS promises to greatly improve the safety and productivity of modern drug development while moving scientific study toward an equilibrium between reductionism and holism.

Frequently Asked Questions

Is There A Single Definition of “Microphysiological Systems”?

“Microphysiological” refers to two key aspects of microphysiological systems: (1) they are engineered with microscale precision using tools initially developed for the microchip industry; and (2) they are engineered to replicate natural human tissue physiology. There is currently no single, universally accepted definition of microphysiological systems. However, the International Consortium for Innovation and Quality in Pharmaceutical Development’s MPS working group offers the following definition: “[Culture systems] that go beyond traditional 2D culture by including several of the following design aspects:

  • A multi-cellular environment within biopolymer or tissue-derived matrix
  • a 3D structure
  • the inclusion of mechanical cues such as stretch or perfusion for breathing, gut peristalsis, flow
  • incorporating primary or stem cell derived cells
  • inclusion of immune system components”6

Additionally, MPS can be separated into static and microfluidic systems based on the use (or lack thereof) of microfluidics. For example, organoids are static, whereas Organ-Chips are microfluidic.

Is There a Microphysiological Systems Conference?

The growing interest in microphysiological systems has resulted in numerous conferences dedicated to the subject. Recent examples include the EuroOCS 2022 and 2023 conferences organized by the European Organ-on-Chip Society; MPS World Summit 2023, which is organized by EuroOCS, the Johns Hopkins Center for Alternatives to Animal Testing, and the International MPS Society; and Global MPS Seminars, hosted by Emulate.

References

  1. Just, E. E. (1937). The Significance of Experimental Parthenogenesis for the Cell-biology of To-day. CYTOLOGIA, FujiiJubilaei(1), 540–550. https://doi.org/10.1508/cytologia.fujiijubilaei.540
  2. Jang, K.J., Otieno, M. A., Ronxhi, J., Lim, H.K., Ewart, L., Kodella, K. R., Petropolis, D. B., Kulkarni, G., Rubins, J. E., Conegliano, D., Nawroth, J., Simic, D., Lam, W., Singer, M., Barale, E., Singh, B., Sonee, M., Streeter, A. J., Manthey, C., & Jones, B. (2019). Reproducing human and cross-species drug toxicities using a Liver-Chip. Science Translational Medicine, 11(517), eaax5516. https://doi.org/10.1126/scitranslmed.aax5516
  3. Ingber, D. E. (2022). Human organs-on-chips for disease modelling, drug development and personalized medicine. Nature Reviews Genetics, 23, 467–491. https://doi.org/10.1038/s41576-022-00466-9
  4. Ewart, L., Apostolou, A., Briggs, S. A., Carman, C. V., Chaff, J. T., Heng, A. R., Jadalannagari, S., Janardhanan, J., Jang, K.-J., Joshipura, S. R., Kadam, M. M., Kanellias, M., Kujala, V. J., Kulkarni, G., Le, C. Y., Lucchesi, C., Manatakis, D. V., Maniar, K. K., Quinn, M. E., & Ravan, J. S. (2022). Performance assessment and economic analysis of a human Liver-Chip for predictive toxicology. Communications Medicine, 2(1), 1–16. https://doi.org/10.1038/s43856-022-00209-1
  5. Wyss Institute to Receive up to $37 Million from DARPA to Integrate Multiple Organ-on-Chip Systems to Mimic the Whole Human Body. (2012, July 24). Wyss Institute. https://wyss.harvard.edu/news/wyss-institute-to-receive-up-to-37-million-from-darpa-to-integrate-multiple-organ-on-chip-systems-to-mimic-the-whole-human-body/
  6. Fabre, K., Berridge, B., Proctor, W. R., Ralston, S., Will, Y., Baran, S. W., Yoder, G., & Van Vleet, T. R. (2020). Introduction to a manuscript series on the characterization and use of microphysiological systems (MPS) in pharmaceutical safety and ADME applications. Lab on a Chip. https://doi.org/10.1039/c9lc01168d

What is Gene Therapy?

Gene therapy is the modification, manipulation, or deletion of genes to treat, prevent, or cure a disease. Often used in diseases or disorders thought to be incurable by conventional modalities, gene therapy can be life changing for patients.1

Figure 1. The basic gene therapy process

There are two types of gene therapy: in vivo and ex vivo. Briefly, in vivo gene therapy is administered directly by packaging a gene of interest into a viral or nonviral vector and injecting the treatment into the patient (Figure 1, left). In contrast, ex vivo gene therapy is administered to cells that have been extracted from the patient. These extracted cells are genetically modified in culture, grown, expanded, and then infused back into the patient (Figure 1, right).2


A Brief History of Gene Therapy

Gene therapy began nearly 80 years ago, when Clyde E. Keeler conceptualized the practice of correcting genes in plants and animals.3 Research into gene therapy steadily increased throughout the 1960s and 1970s; by 1999, there were over 2000 clinical trials (Figure 2).3 However, all progress would soon come to a halt.

Jesse Gelsinger, a teenager from Arizona, had been born in 1981 with a condition that caused ammonia to build up in the blood. Using early gene therapy methods, doctors had successfully extended the lives of mice with the same condition; in 1999, Jesse enrolled in one of the first clinical trials to see if the effect could be replicated in humans. Unfortunately, the results were not as intended. Jesse’s health quickly deteriorated, and he ultimately passed away.4 His death brought US gene therapy trials to an abrupt pause.

Despite the setbacks from that tragic event, the field eventually rebounded. Today, there are 27 cell and gene therapy products approved by the Food and Drug Administration (FDA) and over 1200 gene therapy trials registered with ClinicalTrials.gov that are actively recruiting.

Figure 2. History of Gene Therapy3, 5-12

Examples of Gene Therapy

While gene therapy has applications across a wide variety of diseases, it has been particularly useful in researching cancer treatments, as many types of human cancer develop through genetic alterations. In fact, over 65% of global gene therapy clinical trials are cancer-related2, with several therapies receiving FDA approval in recent years. For example, Kymriah (tisagenlecleucel), the first FDA-approved gene therapy, is a chimeric antigen receptor (CAR) T cell therapy for treating acute lymphoblastic leukemia (ALL), one of the most common pediatric and young adult cancers.13 Clinical trial data showed that 56% of patients achieved sustained remission following treatment with Kymriah.

Breast cancer is another area that stands to benefit greatly from targeted gene therapies. Among inherited forms of breast cancer, the most common cause is mutations in the BRCA1 and BRCA2 genes.14 Gene therapy could help prevent inherited breast cancer by replacing the mutated versions of these genes with functional ones. For non-inherited forms of breast cancer, gene therapy can be used to target oncogenes that drive cancer progression. According to ClinicalTrials.gov, many gene therapy trials are actively recruiting patients, with several showing promising results.

Another major area that could benefit from gene therapy is rare disease research. While thousands of rare diseases have been identified, only a small percentage have approved treatments. Because more than 80% of rare diseases can be attributed to mutations in a single gene, gene therapy offers the potential to correct the underlying disease rather than merely minimizing symptoms, as traditional treatments do.15 For example, the gene therapies under development for cystic fibrosis (CF) aim to deliver a functional copy of the cystic fibrosis transmembrane conductance regulator (CFTR) gene into the lung, replace the mutated version of the protein, and permanently reduce CF symptoms in the lungs.

Other applications of gene therapy include:

  • Neuromuscular and central nervous system disorders, such as Huntington’s and Parkinson’s disease
  • Blood disorders, such as hemophilia and β-thalassemia
  • Ocular disorders
  • Diabetes
  • Osteoarthritis
  • Cardiac arrhythmias
  • Hearing and balance disorders16-19

Additionally, recent successes in gene therapy could lead to treatments for several diseases and disorders (Table 1).

Recent Milestones in Gene Therapy

2021

1. An experimental gene therapy reverses sickle cell disease in patients for up to 37.6 months20

2. An inherited gene mutation is disabled in transthyretin (TTR) amyloidosis directly through an infusion of the genome editor CRISPR21

3. The first-in-human Phase I clinical trial begins to assess the safety and efficacy of a gene therapy for Alzheimer’s disease or Mild Cognitive Impairment22

2021 FDA Approvals

  • ABECMA—cell-based gene therapy for multiple myeloma
  • BREYANZI—cell-based gene therapy for large B-cell lymphoma (LBCL)
  • RETHYMIC—tissue-based therapy for congenital athymia
  • STRATAGRAFT—tissue-based therapy for thermal burns

2022

1. Trials in the US and Europe of FLT180a (verbrinacogene setparvovec), a hemophilia B gene therapy using a synthetic AAV vector, show that 90% of trial participants still did not need regular injections after 2 years23

2. Two of the earliest CAR T cell therapy patients with cancer had survived over ten years post-therapy24

3. Early preclinical success is reported for injectable cell and gene therapies for prostate cancer, liver cancer, and leukemia24

2022 FDA Approvals

  • ADSTILADRIN—gene-based therapy for non-muscle invasive bladder cancer (NMIBC) with carcinoma in situ (CIS)
  • CARVYKTI—cell-based therapy for multiple myeloma
  • HEMGENIX—gene-based therapy for hemophilia B
  • SKYSONA—gene-based therapy to slow the progression of early, active cerebral adrenoleukodystrophy (CALD)
  • ZYNTEGLO—gene-based therapy for ß-thalassemia

Table 1: Milestones in Gene Therapy in 2021 and 2022

Advantages and Disadvantages of Gene Therapy

Gene therapy, as with many new types of treatment, comes with many potential benefits as well as some risks and safety concerns (Table 2). These pros and cons must be balanced if researchers want to create a therapy that is safe, effective, and commercially viable.

As discussed above, gene therapy has numerous advantages over conventional disease treatments. By replacing defective genes, gene therapy has the ability to correct underlying disease mechanisms instead of just treating symptoms, and with significantly lower dosing frequency. In fact, some gene therapies could cure disease with a single treatment. Gene therapy is also a versatile tool, able to target a multitude of genes across various diseases.

However, gene therapy brings several potential risks. While the most common adverse side effect of gene therapy is liver toxicity, others include moderate to severe immune responses, development of certain types of cancer, and damage to organs or tissues. Poor outcomes have largely been traced back to challenges surrounding the vectors, the vehicles that deliver therapeutic genetic material to cells. As such, researchers have focused on improving gene delivery technologies, which has led to safer viral and non-viral vectors, standardized production processes, and refined clinical protocols with better patient screening and dosing strategies. Collectively, these modifications have significantly improved the safety of gene therapy since the first clinical trials in the 1970s.

Advantages of gene therapy

1. Versatile tool that can be applied to treat, prevent, or cure complex diseases, including vision loss, cancer, blood disorders, and spinal muscular atrophy.

2. Often individualized, increasing the possibility of success.

3. Could cure diseases with as little as one treatment.

Disadvantages of gene therapy

1. Efficacy of therapy with AAV vectors is dose dependent, and higher doses may cause adverse events.26

2. Safety concerns due to gene integration.

3. Certain treatments are not permanent and would need to be re-administered to maintain effectiveness.

4. Can be expensive25; in the US, costs can range from $373,000 to $2.1M per treatment.

Table 2. Advantages and Disadvantages of Gene Therapy

Gene Therapy Development Process

Bringing a gene therapy to market can be complex, costly, and time consuming. One of the most challenging aspects of gene therapy is developing vectors that are safe and effective. The most important criteria in designing an effective vector are protection for the genetic cargo, delivery to the target site, minimal immunogenicity and genotoxicity, and cost-effective patient access.27

There are multiple types of vectors that can be used, but they are generally classified as either viral or non-viral. Each type of vector is associated with risks. Thus, it is critical to rigorously test vectors in preclinical development and identify any toxicities or negative side effects they could cause.

Viral vectors commonly include adenoviruses (Ads) and adeno-associated viruses (AAVs). Adenoviruses have been used as vectors for more than two decades and offer advantages such as a broad range of tissue tropism, high transduction efficiency, and a well-characterized genome. AAVs typically exhibit low pathogenicity and toxicity and can provide long-term transgene expression; however, repeated exposure can provoke adverse immune responses.27 This challenge can be mitigated by using different AAV serotypes to reduce immunogenicity. Other types of viral vectors include retroviruses, lentiviruses, poxviruses, and vaccinia viruses, which are less commonly used due to the complexity of their viral structure, safety concerns, and genotoxicity, although lentiviruses have had success ex vivo in CAR T cell therapies.28,29

Non-viral vectors are made from a variety of materials, including polymers, lipid nanoparticles (LNPs), inorganic materials (e.g., gold nanoparticles and carbon nanotubes), and hybrid systems (e.g., peptide-lipid vectors).30 Compared to their viral counterparts, non-viral vectors have lower immunogenicity and cytotoxicity and can carry a larger payload.30,31 However, non-viral vectors typically exhibit low transduction efficiency, and some can elicit an immune response, which must be evaluated for each new vector created. Other challenges include physical and chemical stability, vector consistency during formulation and storage, and the complexity of the vector systems themselves. Nonetheless, ongoing research seeks to optimize non-viral vectors for use in gene therapies, such as the recent LNP-based gene therapy for Stargardt disease.32 Non-viral vectors have also been used successfully outside of gene therapy; for example, the two mRNA coronavirus disease 2019 (COVID-19) vaccines produced by Moderna and Pfizer-BioNTech use LNPs as the vector for mRNA delivery.

Given that only a few gene therapies have reached the market despite decades of research, there are clearly hurdles this type of treatment must overcome on the road to widespread use. Gene therapy candidates frequently fail in clinical trials due to severe adverse events that the in vitro and in vivo experiments performed before the trials did not predict. Current preclinical in vivo models poorly predict human drug responses, safety, and efficacy, and their use raises ethical concerns around animal experimentation.33 Meanwhile, conventional in vitro models lack the complexity to fully emulate human physiology, missing critical features such as 3D cytoarchitecture, cell-cell interactions, and cellular heterogeneity.34 Thus, there is a great unmet need for preclinical models that translate well to the clinic.

Organ-Chips for Gene Therapy

Researchers can use Organ-on-a-Chip technology to circumvent the challenges of traditional preclinical models in gene therapy by better predicting possible safety concerns, evaluating transduction efficiency, assessing immune response, and avoiding species translation issues.

Take, for example, the Emulate human Liver-Chip’s ability to improve AAV-based gene therapy development. The liver is a target of many gene therapies due to its important role in several essential biological processes, yet hepatotoxicity is a major obstacle in bringing gene therapies to the clinic. Improved preclinical liver models could help reduce patient adverse events, clinical attrition, and the overall failure rate of gene therapy candidates by revealing hepatotoxicity before candidates reach humans. The Liver-Chip from Emulate is a physiologically relevant model of the human liver that incorporates all key hepatic cell types in the sinusoid (Figure 3).35 It can be used to generate human-relevant data in weeks, as opposed to the months it would take with animal models, which can shorten development timelines and facilitate rapid iteration in vector design.

Figure 3. Schematic of the Emulate human Quad-Culture Liver-Chip

Testing AAVs, including AAV2 and AAV6 vectors, on the Liver-Chip showed that it could reliably measure concentration-, time-, and serotype-dependent transduction over 7 days. Toxic responses could be confirmed by performing common functional liver endpoint assays, such as measuring albumin and alanine transaminase (ALT) levels.

In a proof-of-concept study, a proprietary AAV construct was administered to the chip’s endothelial channel to mimic the intravenous administration route (Figure 4). After 7 days, transport of the AAV vector across the endothelial channel into the epithelial tissue was evident as distinct, dose-dependent hepatocyte transduction. This approach enables researchers to assess a vector’s tropism through a clinically relevant route of administration.

Figure 4. AAV Transduction in the Liver-Chip

The data supports the use of Organ-Chips for testing AAV-based gene therapies to model intravenous treatments, assess transduction efficiency, and measure relevant clinical biomarkers to monitor for signals of hepatotoxicity. In addition, mechanisms of vector targeting can be further analyzed to provide insights for new vector designs. Better understanding of the biology of viral vectors and their interactions within the body can lead to improved clinical trials for different indications.

Altogether, gene therapy is rapidly evolving with the promise of improving treatment strategies for many genetic and inherited diseases that currently have no cure. While there are several challenges in gene therapy development, Organ-on-a-Chip technology provides a solution to the poor species translation that impedes current preclinical models. As such, Organ-Chips offer a platform for better predicting the benefits, safety, risks, and overall success of gene therapies in humans.

References

  1. Bulaklak, K. & Gersbach, C. A. The once and future gene therapy. Nat. Commun. 11, (2020).
  2. Belete, T. M. The current status of gene therapy for the treatment of cancer. Biol. Targets Ther. 15, 67–77 (2021).
  3. Braun, S. History of Gene Therapy. in Advanced Textbook On Gene Transfer, Gene Therapy And Genetic Pharmacology: Principles, Delivery And Pharmacological And Biomedical Applications Of Nucleotide-based Therapies (ed. Scherman, D.) 17–29 (World Scientific, 2019). doi:10.1142/Q0205#t=suppl.
  4. Rinde, Meir. “The Death of Jesse Gelsinger, 20 Years Later.” Science History Institute, Science History Institute, 4 June 2019, www.sciencehistory.org/distillations/the-death-of-jesse-gelsinger-20-years-later.
  5. Terheggen, H. G., Lowenthal, A., Lavinha, F., Colombo, J. P. & Rogers, S. Unsuccessful trial of gene replacement in arginase deficiency. Eur. J. Pediatr. 119, 1–3 (1975).
  6. Blaese, R. M. et al. T Lymphocyte-Directed Gene Therapy for ADA− SCID: Initial Trial Results After 4 Years. Science 270, 475–480 (1995).
  7. Stolberg, S. G. The biotech death of Jesse Gelsinger. The New York times magazine 136-140,149-150 (1999).
  8. Wilson, J. M. Gendicine: the first commercial gene therapy product. Human gene therapy vol. 16 1014–1015 at https://doi.org/10.1089/hum.2005.16.1014 (2005).
  9. European Medicines Agency recommends first gene therapy for approval | European Medicines Agency. https://www.ema.europa.eu/en/news/european-medicines-agency-recommends-first-gene-therapy-approval.
  10. Padhy, S. K., Takkar, B., Narayanan, R., Venkatesh, P. & Jalali, S. Voretigene neparvovec and gene therapy for leber’s congenital amaurosis: Review of evidence to date. Appl. Clin. Genet. 13, 179–208 (2020).
  11. FDA approves novel gene therapy to treat patients with a rare form of inherited vision loss | FDA. https://www.fda.gov/news-events/press-announcements/fda-approves-novel-gene-therapy-treat-patients-rare-form-inherited-vision-loss.
  12. FDA approval brings first gene therapy to the United States | FDA. https://www.fda.gov/news-events/press-announcements/fda-approval-brings-first-gene-therapy-united-states.
  13. Daley, J. Four Success Stories in Gene Therapy. Nature (2021) doi:10.1038/D41586-021-02737-7.
  14. Lima, Z. S. et al. Recent advances of therapeutic targets based on the molecular signature in breast cancer: genetic mutations and implications for current treatment paradigms. J. Hematol. Oncol. 12, 38 (2019).
  15. Brooks, P. J., Yang, N. N. & Austin, C. P. Gene therapy: the view from NCATS. Hum. Gene Ther. 27, 7–13 (2016).
  16. Dastjerd, N. T. et al. Gene therapy: A promising approach for breast cancer treatment. Cell Biochem. Funct. 40, 28–48 (2022).
  17. Shahryari, A., Burtscher, I., Nazari, Z. & Lickert, H. Engineering Gene Therapy: Advances and Barriers. Adv. Ther. 4, 2100040 (2021).
  18. Mendell, J. R. et al. Current Clinical Applications of In Vivo Gene Therapy with AAVs. Mol. Ther. 29, 464–488 (2021).
  19. Arabi, F., Mansouri, V. & Ahmadbeigi, N. Gene therapy clinical trials, where do we go? An overview. Biomed. Pharmacother. 153, 113324 (2022).
  20. Kanter, J. et al. Biologic and Clinical Efficacy of LentiGlobin for Sickle Cell Disease. N. Engl. J. Med. 386, 617–628 (2022).
  21. First gene-editing treatment injected into the blood reduces toxic protein for up to 1 year | Science | AAAS. https://www.science.org/content/article/first-gene-editing-treatment-injected-blood-reduces-toxic-protein-1-year.
  22. First-in-Human Clinical Trial to Assess Gene Therapy for Alzheimer’s Disease. https://health.ucsd.edu/news/press-releases/2021-02-18-first-in-human-clinical-trial-to-assess-gene-therapy-for-alzheimers-disease/.
  23. Chowdary, P. et al. Phase 1–2 Trial of AAVS3 Gene Therapy in Patients with Hemophilia B. N. Engl. J. Med. 387, 237–247 (2022).
  24. Top Stories for Cancer Cell and Gene Therapy in 2022 | Alliance for Cancer Gene Therapy. https://acgtfoundation.org/news/top-stories-cancer-cell-and-gene-therapy-2022/.
  25. Mercier, J., Ruffin, M., Corvol, H. & Guillot, L. Gene Therapy: A Possible Alternative to CFTR Modulators? Front. Pharmacol. 12, 505 (2021).
  26. Moutsatsou, P., Ochs, J., Schmitt, R. H., Hewitt, C. J. & Hanga, M. P. Automation in cell and gene therapy manufacturing: from past to future. Biotechnol. Lett. 41, 1245–1253 (2019).
  27. Butt, M. H. et al. Appraisal for the Potential of Viral and Nonviral Vectors in Gene Therapy: A Review. Genes 13, 1370 (2022).
  28. Lundstrom, K. Viral Vectors in Gene Therapy. Diseases 6, 42 (2018).
  29. Bulcha, J. T., Wang, Y., Ma, H., Tai, P. W. L. & Gao, G. Viral vector platforms within the gene therapy landscape. Signal Transduct. Target. Ther. 6, 1–24 (2021).
  30. Zu, H. & Gao, D. Non-viral Vectors in Gene Therapy: Recent Development, Challenges, and Prospects. AAPS J. 23, 1–12 (2021).
  31. Ramamoorth, M. & Narvekar, A. Non Viral Vectors in Gene Therapy- An Overview. J. Clin. Diagn. Res. 9, GE01 (2015).
  32. Sun, D. et al. Effective Gene Therapy of Stargardt Disease with PEG-ECO/PGRK1-ABCA4-S/MAR Nanoparticles. Mol. Ther. Nucleic Acids 29, 823–835 (2022). https://doi.org/10.1016/j.omtn.2022.08.026
  33. Li, C. & Samulski, R. J. Engineering adeno-associated virus vectors for gene therapy. Nat. Rev. Genet. 21, 255–272 (2020).
  34. Choi, S. H. & Engelhardt, J. F. Gene Therapy for Cystic Fibrosis: Lessons Learned and Paths Forward. Mol. Ther. 29, 428–430 (2021).
  35. Jang, K. J. et al. Reproducing human and cross-species drug toxicities using a Liver-Chip. Sci. Transl. Med. 11, (2019).

Drug development productivity has been plummeting for the last 70 years. Learn how improving preclinical models is key to solving the challenge at the heart of drug development.

“The more positive anyone is about the past several decades of progress [in pharmaceutical development], the more negative they should be about the strength of countervailing forces.” These foreboding words were penned in a seminal 2012 article by Jack Scannell, who coined the term “Eroom’s Law,” in an effort to illuminate the drug development industry’s productivity crisis.

Productivity measures how efficient drug development is, often expressed as the number of drugs that can be brought to market for a given amount of effort or investment. Consider pharmaceutical development in the 1950s, for example: Data presented by Scannell and his co-authors showed that, with the inflation-adjusted equivalent of $1 billion, the industry was able to produce around 30 new drugs. In contrast, that same $1 billion investment in 2023 would not even produce one new therapeutic (see Figure 1).

Figure 1: Graph showing the change in R&D efficiency since 1950.
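To make the scale of that decline concrete, here is a minimal back-of-the-envelope sketch using the approximate figures quoted above and later in this piece (about 30 new drugs per inflation-adjusted $1 billion in the 1950s, and roughly $2.3 billion per approved drug today); the inputs are rounded and illustrative rather than exact.

```python
# Rough productivity comparison using the approximate figures cited in the text.
# All inputs are rounded approximations, not precise industry statistics.

drugs_per_billion_1950s = 30        # ~30 new drugs per inflation-adjusted $1B in the 1950s
cost_per_drug_today = 2.3e9         # ~$2.3B per approved drug today (cited later in the text)

cost_per_drug_1950s = 1e9 / drugs_per_billion_1950s        # ~$33M per drug
fold_increase = cost_per_drug_today / cost_per_drug_1950s  # roughly a 70-fold increase
drugs_per_billion_today = 1e9 / cost_per_drug_today        # well under one drug per $1B

print(f"1950s: ~${cost_per_drug_1950s / 1e6:.0f}M per approved drug")
print(f"Today: ~${cost_per_drug_today / 1e9:.1f}B per approved drug ({fold_increase:.0f}x more)")
print(f"New drugs per $1B today: ~{drugs_per_billion_today:.2f}")
```

With these rounded inputs, the increase works out to roughly 70-fold, in the same ballpark as the “nearly 80-fold” rise in cost per approved drug discussed below.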

That is a substantial decrease in productivity and one that many in the industry are concerned about. Whether it’s to treat neurodegenerative disease, stem the spread of infectious agents, or fight cancer, there is a persistent need among patients for new and innovative therapeutics. When productivity is low, developers face steeper costs, and progress slows. As a result, drug costs increase for patients, who are left waiting in desperate need of therapeutic relief.

Diagnosing the various factors that have led to the current productivity crisis—the “countervailing forces”—has been a challenge that Scannell and many others have worked to overcome. Through their efforts, many potential causes have been identified, and one stands out as profoundly impactful: the limited predictive accuracy of today’s preclinical models.


Preclinical Drug Development 

Preclinical drug development is highly speculative: Researchers are tasked with foretelling how compounds will behave in the human body and ultimately identifying the select few that are both safe and therapeutic. To do this, they rely on model systems that serve as proxies for the human body, with none serving a more prominent role than non-human animal models.

Rodents, primates, and other non-human animal models have long been the gold standard in preclinical toxicology screening. With complex and interconnected tissues, these animals allow researchers to test the effect of their drug in a dynamic system that resembles the human body. As such, animals have been positioned as the last filter in the drug development process, charged with the difficult job of weeding out toxic drugs before they enter clinical trials.

Despite their ubiquity in drug development, ample evidence indicates that animal models are far from perfect. Approximately 90% of drugs entering clinical trials fail, with roughly 30% of those failures attributed to unforeseen toxicity. Such abundant failure indicates that, at minimum, animal models alone are insufficient decision-making tools—too often, they get it wrong, and both patients and drug developers pay the price.

The cost of this failure plays a central role in the current productivity crisis. Though researchers now have access to next-generation sequencing, combinatorial chemistry, and automation, drug development costs have increased nearly 80-fold since 1950 to a staggering $2.3B per approved drug. And approximately 75% of the pharmaceutical industry’s drug development costs can be attributed to development failures.

It stands to reason that reducing clinical trial failure rates will improve the efficiency of drug development. Not only are failed trials expensive, but they also take up clinical resources that could otherwise be used to advance successful drugs. Since clinical trial failure rates reflect the quality of drugs that enter trials, improving the quality of these drugs should improve the industry’s overall productivity.

So how should researchers go about doing this? Revisiting his seminal work a decade later, Scannell provided powerful guidance: The quality of the compounds that enter clinical trials is a consequence of the preclinical models used to select them, and even small improvements in the quality of those models—more specifically, their predictive validity—can have a substantial impact on productivity. Enter more human-relevant preclinical models like the Liver-Chip.

Improving Productivity with Organ-Chips

In a recently published study, Emulate scientists showed that the Liver-Chip—a specialized Organ-Chip that mimics the human liver—can identify compounds’ potential to cause drug-induced liver injury (DILI) far more accurately than traditional in vitro and animal models.

Briefly, Organ-Chips are three-dimensional culture systems that combine heterogeneous cell culture, fluid flow, and several features of the tissue microenvironment to mimic human organ function in an in vitro setting. Evidence indicates that human cells cultured in Organ-Chips behave remarkably similarly to their in vivo counterparts. Among many promising applications, these chips are particularly well suited for preclinical toxicology screening.

In their study, the Emulate researchers found the Liver-Chip to be a highly sensitive and specific tool for detecting hepatotoxic compounds. In particular, the Liver-Chip showed a sensitivity of 87% and a specificity of 100% against a set of drugs that had progressed into the clinic after animal testing, many of which were later revealed to be toxic when given to patients. These drugs therefore represent the current gap in preclinical toxicology testing, through which some hepatotoxic drug candidates evade detection and advance into clinical trials.

If the Liver-Chip can fill the gap left by animal models, Scannell’s framework suggests that the Liver-Chip could profoundly affect the industry’s productivity by reducing the number of safety-related clinical trial failures.

To calculate how this reduction may impact industry productivity, Emulate researchers teamed up with Jack Scannell to build an economic value model. This analysis showed that applying the Liver-Chip in all small-molecule drug development programs could generate $3 billion annually for the industry as a result of improved productivity. This is approximately $150M per top pharmaceutical company. And that’s just for the Liver-Chip. In addition to hepatotoxicity, cardiovascular, neurological, immunological, and gastrointestinal toxicities are among the most common reasons clinical trials fail. If Organ-Chips can be developed to reduce these clinical trial failures with a similar 87% sensitivity, the resulting uplift in productivity could generate $24 billion for the industry annually—roughly $750M to $1B per top pharmaceutical company.
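As a rough check on the per-company figures above, the short sketch below simply divides the industry-wide uplift across an assumed number of top pharmaceutical companies; the company counts are assumptions chosen for illustration, not values taken from the published analysis.

```python
# Back-of-the-envelope split of the industry-wide productivity uplift across top pharma companies.
# The company counts below are illustrative assumptions, not figures from the economic model.

def per_company(industry_uplift_usd: float, n_companies: int) -> float:
    """Evenly divide an annual industry-wide uplift across n companies."""
    return industry_uplift_usd / n_companies

# ~$3B Liver-Chip uplift across ~20 companies -> ~$150M each, matching the quoted figure.
print(f"Liver-Chip only: ~${per_company(3e9, 20) / 1e6:.0f}M per company")

# ~$24B multi-organ uplift across 24-32 companies -> roughly $0.75B to $1B each.
low, high = per_company(24e9, 32), per_company(24e9, 24)
print(f"Multi-organ: ~${low / 1e9:.2f}B to ~${high / 1e9:.1f}B per company")
```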

It is immediately evident that, even when the cost of integrating and running Liver-Chip experiments is accounted for, the cost savings from reducing clinical trial failures are substantial. Moreover, the freed-up clinical bandwidth would permit advancing other, more promising compounds. The Emulate researchers’ work demonstrates that improving productivity in drug development is possible, and it starts with developing better models. As the industry embraces the potential of Organ-on-a-Chip technology and continues to explore its application in various areas of drug development, there is hope for a future with improved productivity and faster delivery of life-saving therapeutics.


It isn’t enough that models identify toxic drugs—they must also avoid mistaking safe drugs for dangerous ones. Read on to learn about the importance of specificity in the preclinical stages of drug development.

Drug-induced liver injury (DILI) has been a persistent threat to drug development for decades. Animals like rats, dogs, and monkeys are meant to be a last line of defense against DILI, catching the toxic effects that drugs could have before they reach humans. Yet, differences between species severely limit these models, and the consequences of this gap are borne out in halted clinical trials and even patient deaths.  

Put simply, there is a translational gap between our current preclinical models and the patients who rely on them.

The Emulate Liver-Chip—an advanced, three-dimensional culture system that mimics human liver tissue—was designed to help fill this gap. The Liver-Chip allows scientists to observe potential drug effects on human liver tissue and, in turn, better predict which drug candidates are likely to cause DILI (see Figure 1).  

Figure 1: Liver-Chip cross-section.

Such a preclinical model could be extremely useful in preventing candidate drugs with a hepatic liability from reaching patients, but exactly how useful depends on how sensitive and specific it is.  


Measuring Preclinical Model Accuracy 

In preclinical drug development, a wide range of model systems—including animals, spheroids, and Organ-Chips—are used for decision-making. Researchers rely on these models to help them determine which drug candidates should advance into clinical trials. Whether or not scientists make the right decision depends largely on the quality of the models they use. And that quality is measured as both sensitivity and specificity. 

In this context, “sensitivity” describes how often a model correctly identifies a toxic drug candidate as toxic; a model with 100% sensitivity would flag every harmful candidate. In contrast, “specificity” refers to how accurately a model identifies non-toxic drug candidates; a 100%-specific model would never claim that a non-toxic candidate is toxic. It’s important to note that a model can be 100% sensitive without being very specific. For example, an overeager model that calls most candidates toxic may capture all toxic candidates (100% sensitivity) but also mislabel many non-toxic candidates as toxic (mediocre specificity).
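To put those definitions in concrete terms, here is a minimal, generic sketch of how the two metrics are computed from a set of model calls; the drug names, calls, and ground-truth labels are invented for illustration and are not data from any study.

```python
# Sensitivity and specificity computed from a simple confusion matrix.
# The calls and ground-truth labels below are hypothetical, for illustration only.

def sensitivity_specificity(calls, truth):
    """calls/truth map drug name -> True if flagged toxic / actually toxic."""
    tp = sum(calls[d] and truth[d] for d in truth)          # toxic drugs correctly flagged
    fn = sum(not calls[d] and truth[d] for d in truth)      # toxic drugs missed
    tn = sum(not calls[d] and not truth[d] for d in truth)  # safe drugs correctly cleared
    fp = sum(calls[d] and not truth[d] for d in truth)      # safe drugs wrongly flagged
    return tp / (tp + fn), tn / (tn + fp)

truth = {"drug_A": True, "drug_B": True, "drug_C": False, "drug_D": False}
calls = {"drug_A": True, "drug_B": False, "drug_C": False, "drug_D": False}

sens, spec = sensitivity_specificity(calls, truth)
print(f"Sensitivity: {sens:.0%}, Specificity: {spec:.0%}")  # 50% sensitivity, 100% specificity
```

In this toy example, the model misses one of the two toxic drugs (50% sensitivity) but never flags a safe drug (100% specificity), which is exactly the kind of trade-off discussed below.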

In an ideal world, preclinical drug development would use models that are 100% sensitive and 100% specific. Unfortunately, no model is perfect. Approximately 90% of drugs entering clinical trials fail, with many failing due to toxicity issues. This alone suggests that there is a strong need for more accurate decision-making tools.  

The Give and Take of Sensitivity and Specificity

Researchers want preclinical toxicology models with the best sensitivity possible, as higher sensitivity means more successful clinical trials, safer patients, and better economics. However, this cannot come at the cost of failing good drugs. An overly sensitive model with a low threshold for what it considers “toxic” would catch all toxic drugs, but it may also catch drugs that are, in reality, safe and effective in humans. Good drugs are rare, and a lot of effort and investment goes into their development. Even one drug that fails to make it to the clinic can end up costing pharmaceutical companies billions and leave a patient population without treatment. Models should do their utmost to classify non-toxic compounds as such—that is, to have 100% specificity. 

But how can drug development insist on perfect specificity when no model is perfect? Fortunately, there is a give-and-take between sensitivity and specificity that model developers can take advantage of: one can be traded for the other. 

In decision analysis, sensitivity and specificity can be “dialed in” for the model in question. In most cases, this involves setting a threshold in the analysis of the model’s output. In a recent study published in Communications Medicine, part of Nature Portfolio, the Emulate team set a threshold of 375 on the Liver-Chip’s quantitative output; in the case of hepatic spheroids, an older model system, researchers have set a threshold of 50. In both cases, the higher the thresholds, the more sensitive and less specific the model tends to be. These thresholds were selected precisely to dial the systems into 100% specificity. 
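As a small illustration of this “dialing in” (the scores, labels, and thresholds below are invented and are not from the study), imagine a model that outputs a quantitative score per drug and flags a drug as toxic when its score falls below a chosen cutoff; raising the cutoff flags more drugs, which tends to increase sensitivity and decrease specificity, matching the behavior described above.

```python
# Sweep a decision threshold to show the sensitivity/specificity trade-off.
# Scores and toxicity labels are invented; a drug is flagged toxic when score < threshold.

scores = {"drug_A": 40, "drug_B": 180, "drug_C": 700, "drug_D": 450, "drug_E": 900}
toxic  = {"drug_A": True, "drug_B": True, "drug_C": True, "drug_D": False, "drug_E": False}

def metrics(threshold):
    flagged = {d: s < threshold for d, s in scores.items()}
    tp = sum(flagged[d] and toxic[d] for d in scores)
    fn = sum(not flagged[d] and toxic[d] for d in scores)
    tn = sum(not flagged[d] and not toxic[d] for d in scores)
    fp = sum(flagged[d] and not toxic[d] for d in scores)
    return tp / (tp + fn), tn / (tn + fp)

for threshold in (100, 500, 800):
    sens, spec = metrics(threshold)
    print(f"threshold={threshold:>3}: sensitivity={sens:.0%}, specificity={spec:.0%}")
# threshold=100: sensitivity=33%, specificity=100%
# threshold=500: sensitivity=67%, specificity=50%
# threshold=800: sensitivity=100%, specificity=50%
```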

This is why Ewart et al.’s findings are so striking. Even while maintaining such a strict specificity, the Liver-Chip achieved a staggering 87% sensitivity. This means that, on top of correctly identifying most of the toxic drugs, the Liver-Chip never misidentified a non-toxic drug in the study as toxic. For drug developers, this means that no good drugs—nor the considerable resources poured into their development—would be wasted. Using models like the Liver-Chip that achieve high sensitivity alongside perfect specificity would allow drug developers to deprioritize potentially dangerous drugs without sacrificing good drugs. In all, this could lead to more productive drug development pipelines, safer drugs progressing to clinical trials, and more patient lives saved.


 

Learn about the importance of sensitivity in drug development and why researchers can improve their pipelines by using more sensitive models.

In a recently published study in Communications Medicine, part of Nature Portfolio, Emulate scientists reported that the human Liver-Chip—an advanced, three-dimensional culture system that mimics human liver tissue—could have helped prevent the more than 240 deaths and 10 liver transplants caused by the drugs in the study’s test set. Specifically, the study demonstrated that the Emulate human Liver-Chip can identify a candidate drug’s likelihood of causing drug-induced liver injury (DILI), a leading cause of safety-related clinical trial failure and market withdrawal around the world.1,2

Any preclinical model that helps prevent toxic drugs from reaching patients would be extremely useful, but exactly how useful depends in part on its sensitivity. In the context of predictive toxicology, sensitivity is a measure of how well a model can identify a drug candidate’s toxicity. For example, a model with 0% sensitivity would miss every toxic drug it encounters, whereas one with perfect 100% sensitivity would never miss one.

Ewart et al. concluded that the Emulate human Liver-Chip could profoundly affect preclinical drug development because it showed a high sensitivity of 87%—specifically for identifying drugs that were cleared for clinical trials after being tested in both animals and in vitro systems, but ultimately proved hepatotoxic in humans. In other words, the Liver-Chip identified DILI risk for nearly 7 out of every 8 hepatotoxic drugs it encountered.  

That’s striking—particularly given the drugs being tested: each had previously made it into the clinic, meaning it was deemed safe enough to administer to humans based on rigorous preclinical testing. While the specifics of preclinical testing differ from drug to drug, regulatory guidelines require testing in at least two animal species before a candidate can progress into clinical testing. Despite this, 22 of the drug candidates went on to prove toxic in patients. Because animal models failed to adequately forecast the harm that these drugs could—and did—bring to human patients, more than 240 people lost their lives, and 10 were forced to undergo emergency liver transplantation.

Expressed in the language of preclinical toxicology, because the tested drugs made their way through animal testing, animals served as the “reference” in the study, meaning a 0% sensitivity for this drug set—a strong contrast to the human Liver-Chip’s 87%. While it’s important to bear in mind that sensitivity will depend upon the reference set of drugs—as it did here—the juxtaposition of these numbers tells an important story about modern drug development and its ability to protect patients. But to appreciate this story, it’s necessary to scrutinize the concept of model sensitivity and the assumptions underlying it.

The Assumption of Sensitivity 

As mentioned above, sensitivity measures how often a model system correctly identifies a drug as toxic or, conversely, how often it incorrectly marks a toxic drug as not toxic. If the model system allows a toxic drug to pass through without a strong indication of harm, it will support a false conclusion that the drug is non-toxic—a result colloquially known as a “false-negative.”  

False-negatives can have dire consequences, enabling harmful drug candidates to reach the patient’s bedside through clinical trials. To avoid this, researchers have long sought to test candidate drugs in model systems that closely approximate the human body and prevent bad drugs from reaching the market. And for more than 80 years, animals have filled that need. However, animals are far from perfect. Genetic and physiological differences can produce nuanced yet significant discrepancies in how animals respond to drugs—a drug that appears safe in rats may turn out to be lethal in humans. Such a result would be described as a false-negative.  

As roughly 90% of drugs that enter clinical trials fail—many due to safety concerns—it is clear that animal models are far from 100% sensitive.3

So how sensitive are they? The short answer is that, although drugs have been tested in animals for the better part of a century, researchers have yet to produce robust data on the sensitivity of animal models, particularly with respect to preclinical toxicology screening.4 Perhaps the largest hurdle preventing such an assessment is the assumption that animals are as good as it gets—a belief that eclipses many researchers’ desire for proof.

Because firm data are lacking, it’s difficult to make general estimates of animal model sensitivity. However, about 90% of clinical trials fail, and 30–40% of these failures are due to toxicity. From this, it can be presumed that animal models did not provide sufficient evidence to forecast drug toxicity in humans and that including more sensitive models could have helped prevent toxic drugs from reaching patients.


The Story in the Numbers

Animal models are meant to be a protective barrier, the last line of defense that prevents toxic drug candidates from reaching humans. But, as described above, this barrier is imperfect—it has gaps in sensitivity that allow toxic drugs to advance into clinical trials far too often.   

In their study, Ewart et al. evaluated whether the Emulate human Liver-Chip could fortify this barrier. The team selected 27 drugs, 22 of which were known to be hepatotoxic. Importantly, many of these drugs were included in a set of guidelines that the IQ Consortium, an affiliate of the International Consortium for Innovation and Quality in Pharmaceutical Development, has designated as a baseline researchers should use to measure a liver model’s ability to predict DILI risk. 

Each of these drugs had previously advanced to clinical trials, and some were market approved. In other words, animal models in preclinical testing served as the reference for the study and therefore had a 0% sensitivity for this group of drugs.  

To measure a model’s sensitivity, researchers use it to screen a set of test drug candidates, and it is essential that this set be carefully selected. For example, one could bias the test set towards “easy” drugs—ones so toxic that even simple models would flag them; however, showing that a new model is excellent at capturing “easy” drugs does little to demonstrate its utility in capturing the difficult drugs that slip through animal testing. Establishing sensitivity based on such drugs is akin to claiming that a telescope’s ability to spot the sun makes it sensitive enough to observe distant stars—technically true, but irrelevant to the real-world challenge. That said, it is not uncommon for preclinical models to be tested using only highly toxic drugs that never made it to clinical trials.5,6

Stated plainly, a model’s sensitivity changes depending on the drugs that are being tested. If you use clearly toxic drug candidates—ones that would never make it past animal models to begin with—your model will likely appear very sensitive because these drugs are easy to detect. But, if one achieves a high sensitivity when using more challenging drugs—ones that slip through and only show their toxic potential in humans—then the sensitivity of that model will be far more valuable.  

This is precisely why Ewart et al.’s study stands out. Instead of testing the Liver-Chip with overtly toxic drugs that would never have advanced past animal testing, the researchers relied on drugs that had already undergone animal testing and appeared safe. These drugs not only slipped past animal testing, but they also went on to kill 242 patients. The Liver-Chip successfully identified all of them (see Figure 2). As an additional challenge, the researchers also tested a set of drugs that hepatic spheroid models had missed. Even though these drugs were especially difficult for current models to detect, the Liver-Chip achieved an impressive sensitivity of 87%.

Figure 2: Drugs Ewart et al. tested in their study.

By producing these results, Ewart et al. have shown that the human Liver-Chip can, at a minimum, help fill the sensitivity gaps that plague animal models. Stated another way, this data strongly suggests that Liver-Chips can help to identify toxic drugs that animal models miss, which could greatly reduce the number of harmful drugs that advance into clinical trials.  

These numbers tell a story about modern and future drug development. Today, drug developers rely on models that, while useful, are imperfect. This imperfection can be catastrophic for patients, as it was for the patients who lost their lives to the drugs Ewart et al. tested. But, in recognizing these imperfections, we can work to prevent future harm by building a better drug development process—one fortified by modern technologies like the Liver-Chip.  

References 

  1. Craveiro, Nuno Sales, et al. “Drug Withdrawal due to Safety: A Review of the Data Supporting Withdrawal Decision.” Current Drug Safety, vol. 15, no. 1, 3 Feb. 2020, pp. 4–12, https://doi.org/10.2174/1574886314666191004092520.
  2. Center for Drug Evaluation and Research. “Drug-Induced Liver Injury: Premarketing Clinical Evaluation.” U.S. Food and Drug Administration, 17 Oct. 2019, www.fda.gov/regulatory-information/search-fda-guidance-documents/drug-induced-liver-injury-premarketing-clinical-evaluation.
  3. Fogel, David B. “Factors Associated with Clinical Trials That Fail and Opportunities for Improving the Likelihood of Success: A Review.” Contemporary Clinical Trials Communications, vol. 11, Sept. 2018, pp. 156–164, https://doi.org/10.1016/j.conctc.2018.08.001.
  4. Bailey, Jarrod, et al. “An Analysis of the Use of Animal Models in Predicting Human Toxicology and Drug Safety.” Alternatives to Laboratory Animals, vol. 42, no. 3, June 2014, pp. 181–199, https://doi.org/10.1177/026119291404200306.
  5. Zhou, Yitian, et al. “Comprehensive Evaluation of Organotypic and Microphysiological Liver Models for Prediction of Drug-Induced Liver Injury.” Frontiers in Pharmacology, vol. 10, 24 Sept. 2019, https://doi.org/10.3389/fphar.2019.01093.
  6. Bircsak, Kristin M., et al. “A 3D Microfluidic Liver Model for High Throughput Compound Toxicity Screening in the OrganoPlate®.” Toxicology, vol. 450, Feb. 2021, p. 152667, https://doi.org/10.1016/j.tox.2020.152667.

 

What are Organ-Chips?

An introduction to what Organ-on-a-Chip technology is and why it’s setting the stage for the next generation of drug development and research. 

To predict whether new therapeutics will be safe and effective for humans, drug developers first test them in animal- and cell-based models as well as computational tools that are designed to approximate human biology. Unfortunately, these approaches often fall short, as 90% of drug candidates that enter clinical trials fail, mostly due to a lack of efficacy or unforeseen toxicity. Not only is this inefficient system time- and resource-intensive, but it can be costly in terms of human lives1. To improve the odds of clinical success, researchers need models that better reflect human biology.

That’s where Organ-Chips come into play.

History of Organ-on-a-Chip Technology

In the early 2000s, researchers Dan Huh and Shuichi Takayama were interested in seeing whether they could model excess fluid accumulation, which occurs in diseased lungs, in an in vitro model. So, they created the first Organ-on-a-Chip system designed to mimic the airways of the lung2. Donald Ingber, M.D., Sc.D., from the Wyss Institute at Harvard, attended a presentation by Dr. Takayama and was amazed to hear the device make the same “crackle” sound he had learned to listen for to identify fluid in patients’ lungs—a phenomenon that remained poorly understood due to the inaccessibility of lungs for research.

Ingber invited Huh to his lab, and in 2010, they successfully created the first Lung-Chip capable of emulating breathing motions3. Following this success, Ingber and his team used funding from the Defense Advanced Research Projects Agency (DARPA), the Food and Drug Administration (FDA), and the National Institutes of Health (NIH) to develop more than 15 Organ-on-a-Chip models, including ones for the intestine, kidney, skin, bone marrow, and the blood-brain barrier. In 2014, Ingber founded Emulate to commercialize Organ-on-a-Chip technology.

Since then, Organ-Chips have progressed tremendously in quality, availability, and adoption. Today, there have been over 100 publications using Emulate Organ-Chips, and the technology is used by 21 of the top 25 pharmaceutical companies as well as by academic research institutions and government agencies such as the FDA.


How Does Organ-on-a-Chip Technology Work?

Organ-Chips combine cell culture with microfluidics to emulate the biological forces of different organ tissues and/or disease states, such as peristalsis in the intestines, breathing in the lungs, and blood flow through the vasculature4. The chips are flexible, thumb-drive-sized devices that contain parallel upper and lower microfluidic channels, each seeded with organ-specific cells. The channels are separated by a thin, porous membrane, which creates an interface for cell-cell communication (Figure 1). The membrane is coated with a tissue-specific extracellular matrix, helping to further drive tissue maturation.

Figure 1. Schematic of an Organ-Chip

Organ-Chips are cultured in a microfluidic platform that automates fluid flow and cyclic mechanical strain to create and maintain physiologically relevant conditions in the chip’s microenvironment. This ensures that the cells in the chip behave as though they were in their native organ—and react to drugs, chemicals, and other substances accordingly.

Researchers can collect data in real time with high-content microscopy imaging, effluent sampling, and a variety of functional measurements5. They can also perform traditional endpoint analyses such as cytotoxicity assays and immunohistochemistry, as well as omics-based analyses to identify relevant disease pathways.

Organ-on-a-Chip Applications

Organ-on-a-Chip technology is used across academia and the pharmaceutical industry in a wide variety of areas, including toxicology, immunology, gene therapy, and cancer research. Within academia, Organ-Chips enable researchers to develop models with more in vivo-like gene expression to better understand human physiology and disease mechanisms. Within the pharmaceutical industry, the technology is primarily used to assess drug candidates’ efficacy and toxicity ahead of clinical trials, helping to improve the quality of drugs that enter the clinic.

With higher predictive validity than conventional models6, Organ-on-a-Chip technology can be used to identify drug candidates with a greater probability of success, improve safety for patients, lower development costs, and shorten the timeline for bringing new drugs to market. Organ-Chips can also open new doors in medical research by allowing scientists to create new disease models and probe new biological pathways. They hold particular promise in personalized medicine, where scientists could use biopsied cells to build an in vitro model of a specific patient and use that model to determine the best treatment strategy.

Advantages and Disadvantages of Organ-Chips

Compared to conventional culture and animal models, Organ-on-a-Chip technology provides several key advantages (Table 1). Traditional 2D cell culture models lack the structure, cellular heterogeneity, and physiological forces needed to emulate human biology, resulting in gene expression and morphology that poorly match in vivo tissue.

While organoid cultures provide 3D structure, they lack a dynamic physical microenvironment and endothelial co-culture, which limits how closely their gene expression matches human tissue. Additionally, organoids exhibit highly variable ratios of cell populations, making it difficult to obtain robust, reproducible results. Animal models provide more physiologically relevant responses than conventional in vitro models, but species differences limit their predictive value and lead to poor clinical translatability. They also require considerable oversight and can pose ethical challenges7. In cancer studies, for example, fewer than 8% of findings from animal models translate successfully to human clinical trials, which can cost pharmaceutical companies enormous sums and prolong the search for lifesaving treatments8.

Table 1: Advantages and Disadvantages of Organ-on-a-Chip Technology5,7,9

Advantages:

  • Offers a tissue-specific dynamic microenvironment
  • Contains human cells for greater human relevance than animal models
  • Produces in vivo-like cell morphology
  • Uses multiple cell types to better represent in vivo complexity
  • Emulates human physiological processes such as breathing, peristalsis, and blood flow
  • Is less expensive than animal models
  • Reduces reliance on animal models
  • Can be used in precision medicine for personalized or group-specific treatment
  • Allows for real-time, functional, and traditional endpoint analyses

Disadvantages:

  • Has less experimental longevity than animal models
  • Has lower throughput and scalability than 2D cell culture, limiting its utility in the early stages of drug development
  • Requires more training and a more demanding workflow than traditional in vitro methods
  • Is more expensive than conventional in vitro models
  • Has fewer established regulatory data-acceptance criteria than conventional models

As with any technology, Organ-Chips have limitations; however, the field is advancing rapidly, and many of these disadvantages are expected to diminish as the technology matures.

Organ-on-a-Chip Technology: Future Prospects

Organ-Chip research and development are heading into the next decade with a plethora of exciting advancements and promising prospects. One such development is the FDA Modernization Act 2.0, which was signed into law by President Biden in December 2022. This legislation removes the requirement to use animal studies to investigate drug safety and efficacy and authorizes the use of alternatives, including Organ-Chips and other microphysiological systems, when submitting new drugs to the FDA for approval. The law also includes funding to help create new Organ-Chip systems and further improve existing ones10. This will help accelerate the development and standardization of Organ-on-a-Chip technologies, making it easier for pharmaceutical researchers to integrate them into their drug development pipelines.

Organ-on-a-Chip technology is also making exciting progress on the validation front. In a first-of-its-kind study released in December 2022, researchers showed that the Emulate human Liver-Chip meets the IQ MPS Affiliate’s qualification guidelines for use in predictive toxicology applications. When tested against a set of 27 toxic and non-toxic drugs, the Liver-Chip achieved 87% sensitivity and 100% specificity6. Importantly, all of the toxic drugs used in the study had been falsely deemed safe by animal testing, suggesting the Liver-Chip could catch up to 87% of hepatotoxic drugs that would otherwise reach the clinic undetected. Economic modeling in the same study showed that incorporating Organ-on-a-Chip technology into drug development programs could generate billions of dollars annually for the small-molecule pharmaceutical industry.
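
For context, sensitivity and specificity carry their standard meanings here; the study’s own true- and false-positive counts are reported in the publication and are not restated:

sensitivity = TP / (TP + FN)   (the fraction of known hepatotoxic drugs correctly flagged as toxic)
specificity = TN / (TN + FP)   (the fraction of non-toxic drugs correctly cleared)

where TP, FN, TN, and FP are true positives, false negatives, true negatives, and false positives, respectively.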

Figure 2. Schematic of the Emulate human Liver-Chip, showing hepatocytes in the top channel and endothelial cells, Kupffer cells, and stellate cells in the bottom channel.

If this is what a single Organ-Chip model is capable of, what happens when several are combined? Some researchers are looking to develop multi-Organ-on-a-Chip systems, incorporating two or more Organ-Chip models to study interactions between organs or to model pharmacokinetics and pharmacodynamics7,11. One approach is to link multiple, independent Organ-Chip models by transferring cell media and effluent between chips. Ultimately, the goal would be to create a Human-on-a-Chip system that emulates the response of the entire human body. Such a system could render animal models obsolete, further streamlining the drug development process12.
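
To make the idea of linked chips concrete, a connected system can be reasoned about as coupled compartments exchanging media over time. The sketch below is a toy model under assumed rate constants, not a description of any real multi-organ platform.

```python
# Toy two-compartment model of media exchange between linked Organ-Chips
# (e.g., a hypothetical "gut" chip feeding a "liver" chip). Rate constants,
# units, and the simple Euler integration are illustrative assumptions only.
def simulate_linked_chips(c_gut=1.0, c_liver=0.0, k_transfer=0.5,
                          k_clearance=0.2, dt=0.01, t_end=24.0):
    """Return (time, gut, liver) concentration samples in arbitrary units."""
    history = []
    steps = int(t_end / dt)
    for i in range(steps):
        transfer = k_transfer * c_gut    # compound carried gut -> liver by media flow
        cleared = k_clearance * c_liver  # metabolism/clearance in the liver chip
        c_gut += dt * (-transfer)
        c_liver += dt * (transfer - cleared)
        history.append((round((i + 1) * dt, 2), c_gut, c_liver))
    return history

# Example: concentrations at the end of a simulated 24-hour linked culture
print(simulate_linked_chips()[-1])
```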

Of course, the potential for Organ-Chips is not bound to any one purpose, nor to Earth itself. With space exploration becoming more accessible, there is an increasing need to understand the effects of space travel on human biology. In collaboration with NASA and the consulting firm IRPI, researchers from Emulate sent Organ-Chips and a Human Emulation System to the International Space Station (ISS) to study how the stressors of spaceflight affect human cells and tissues13,14. Work like this could ultimately enable researchers to predict how an astronaut’s body will respond to space travel before launching them into orbit.

Conclusion

Organ-on-a-Chip technology is a powerful preclinical model that can give pharmaceutical, academic, government, and other scientists better insight into human biology and disease mechanisms. By more closely replicating the structure and function of human organs, Organ-Chips overcome many of the limitations of traditional preclinical models, giving drug developers greater confidence in progressing drugs to human trials. Ultimately, this will help create more efficient drug development pipelines and, most importantly, better, safer medicines for patients.

  1. Scannell, J. W. & Bosley, J. When Quality Beats Quantity: Decision Theory, Drug Discovery, and the Reproducibility Crisis. PLoS One 11, e0147215 (2016).
  2. Huh, D. et al. Acoustically detectable cellular-level lung injury induced by fluid mechanical stresses in microfluidic airway systems. Proc. Natl. Acad. Sci. U. S. A. 104, 18886 (2007).
  2. Huh, D. et al. Reconstituting Organ-Level Lung Functions on a Chip. Science 328, 1662–1668 (2010).
  4. Ingber, D. E. Human organs-on-chips for disease modelling, drug development and personalized medicine. Nat. Rev. Genet. (2022) doi:10.1038/s41576-022-00466-9.
  5. Leung, C. M. et al. A guide to the organ-on-a-chip. Nat. Rev. Methods Primers 2, 1–29 (2022).
  6. Ewart, L. et al. Performance assessment and economic analysis of a human Liver-Chip for predictive toxicology. Commun. Med. 2, 1–16 (2022).
  7. Ingber, D. E. Is it Time for Reviewer 3 to Request Human Organ Chip Experiments Instead of Animal Validation Studies? Adv. Sci. 7, 2002030 (2020).
  8. Mak, I. W. Y., Evaniew, N. & Ghert, M. Lost in translation: Animal models and clinical trials in cancer treatment. Am. J. Transl. Res. 6, 114–118 (2014).
  9. Singh, D., Mathur, A., Arora, S., Roy, S. & Mahindroo, N. Journey of organ on a chip technology and its role in future healthcare scenario. Appl. Surf. Sci. Adv. 9, 100246 (2022).
  10. Sen. Paul, R. [R-KY]. S.5002 – 117th Congress (2021–2022): FDA Modernization Act 2.0. (2022).
  11. Bhatia, S. N. & Ingber, D. E. Microfluidic organs-on-chips. Nature Biotechnology vol. 32 760–772 (2014).
  12. Syama, S. & Mohanan, P. V. Microfluidic based human-on-a-chip: A revolutionary technology in scientific research. Trends Food Sci. Technol. 110, 711–728 (2021).
  13. Emulate, Inc. is Researching Human Biology With Tissue Chips in Space. https://www.issnationallab.org/emulating-human-biology-with-tissue-chips/.
  14. Low, L. A. & Giulianotti, M. A. Tissue Chips in Space: Modeling Human Diseases in Microgravity. Pharm. Res. 37, 1–6 (2020).