The Future of Medical Technology

How the marriage of biology and silicon is transforming medicine

For centuries the art of medicine has been dominated by bumps, bruises, and other symptoms, felt by the patient or discovered by the physician, whose eyes were aided by increasingly sophisticated scanning technology: the microscope, the x-ray, and eventually the MRI. But however powerful the machine, the underlying model remained the same. To find the illness, doctors first had to look for the symptom. To diagnose the cancer, they had to see the tumor. To find a drug, researchers had to undertake a long, costly, and laborious process of trial and error, testing millions of natural compounds on animals to find one that seemed to work.

This approach to medicine may be coming to an end. As drug discovery becomes an information-based science, speeded by rapid increases in computer processing power and the marriage of test tubes with microchips, we are transforming the way we diagnose and treat many of the worst human diseases. New drugs currently in clinical trials are no longer scattershot one-size-fits-all affairs, but carefully targeted to the molecular fingerprints of specific diseases. Some of these drugs are even targeted to a patient’s unique DNA profile. In a breathtaking paradigm shift, medicine is moving from the species level — the ingrained assumption that drugs and diseases work the same in all human beings — to the individual level, unlocking new healing possibilities in the minute differences between seemingly similar diseases and their individual victims. The result will be a new age of medical therapy, dominated not by cell, tissue, and organ replacements but by early diagnosis and individualized drug treatments.

When Biology Meets Silicon

To understand this new scientific paradigm, first consider how it is changing the way doctors diagnose disease. In conventional medicine, diagnosis remains mostly the art of neglecting remote dangers in favor of likelier ones. Diagnostic tools are often too expensive or too inaccurate to be deployed widely. But in the near future, diagnostic gene chips will rely not on spying crude symptoms but on detecting the underlying molecular processes that trigger disease in the weeks, months, or years before the patient feels a twinge.

DNA chips are elegantly simple in concept: thin wafers of glass or plastic embedded with strips of DNA rather than the tiny transistors of silicon chips. They exploit the natural tendency of double-stranded DNA molecules to bind with their complementary partners, in a process called hybridization. Once researchers have identified a particular strip of DNA within a virus, bacterium, or genetic disease, that strip can be used to track down a matching strand from a patient’s blood sample or biopsy specimen. Dozens, even hundreds, of potentially offending pathogens, genetic diseases, or other ailments can be diagnosed on the surface of a single chip — at a cost of hundreds or sometimes a few thousand dollars per chip, although prices of these new diagnostic tools are dropping dramatically.

The first step is to fix a single strand of a known DNA sequence (or hundreds of such known disease-causing sequences) to a chip so that it can be used to search for and bind to a complementary strand found in the patient’s own tissue. In hours, a remarkable feat of pattern matching occurs. Genes from the blood sample are allowed to bind to their complementary probes on the chip’s surface. Then the entire chip is placed in an analyzer that reads the patterns of gene binding and transfers the information directly into computers capable of interpreting the results.
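To make the chip’s pattern matching concrete, here is a minimal sketch, in Python, of how hybridization-based detection might be simulated in software. The probe names, sequences, and matching rule are illustrative assumptions, not any real chip’s design.

```python
# A toy model of hybridization: a probe "detects" a sample fragment that
# contains the probe's complementary strand. All sequences are invented.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the strand that would hybridize with `seq`."""
    return seq.translate(COMPLEMENT)[::-1]

# Hypothetical single-stranded probes fixed to the chip, one per disease marker.
PROBES = {
    "marker_A": "GATTACAGG",
    "marker_B": "CCGGTTAAC",
}

def scan_sample(fragments: list[str]) -> set[str]:
    """Report which probes find a complementary partner in the sample."""
    hits = set()
    for name, probe in PROBES.items():
        target = reverse_complement(probe)
        if any(target in fragment for fragment in fragments):
            hits.add(name)
    return hits

print(scan_sample(["TTCCTGTAATCAA", "AGAGAGAG"]))  # {'marker_A'}
```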

Last summer, scientists at the National Cancer Institute used gene chips to isolate DNA sequences that can differentiate among pediatric cancers called “blue cell tumors,” even in instances where the best-trained pathologists, peering through powerful microscopes, were stumped. Previously, many of these tumors were considered to be one disease. But the gene chips discovered important distinctions among these similar tumors, revealing that doctors were actually dealing with several distinct cancers. More than a simple clinical breakthrough, the finding has led to a dramatic discovery: the criteria doctors currently use to classify different cancers are not accurate.

This is likely to hold true for a myriad of common cancers. Doctors currently treat most cancers in a uniform fashion. For example, everyone with pancreatic cancer or liver cancer ends up getting similar treatments, even though genetic markers are beginning to tell us that these cancers really come in a wide variety, each requiring its own unique approach. This is perhaps one reason why most cancer treatments leave a significant number of people behind, often with more than half (and in the case of pancreatic cancer, 90 percent) failing standard treatments.

All of these innovations, from targeted medicines to precision diagnostics on a chip, represent the new paradigm that medicine is entering as biology meets silicon. The history of medical progress has been the history of moving from surface to cause, from symptoms to underlying processes. Hippocrates and his fellow physicians probably killed as many patients as they saved with “cures” that ranged from leeches to arsenic. With the Renaissance came anatomy; anatomy begat physiology, and medicine for the first time moved towards science. Germ theory marked the next great leap from surface symptom to underlying process. A whole class of deadly diseases — typhus, whooping cough, measles, malaria — could be cured or prevented, because for the first time we understood their cause.

Understanding the body’s internal disease process took longer. In the twentieth century, biologists gained some understanding of what genes actually do: they make proteins. Proteins consist of strings of different amino acids. Scientists in labs can construct an almost infinite variety of amino acids, but nature, it turns out, makes just 20 different kinds. All the millions of different proteins on Earth are compounded from that basic amino acid set, just as all 228,000 words in the Oxford English Dictionary are compounded from 26 letters.

The task of genes is to make sure that amino acids line up in the right order to produce the right protein. In biology, the right protein is everything. Cell membranes consist of proteins and fats. Fingernails, hair, and muscles are proteins. Proteins function as hormones, antibodies, or enzymes. They form the body’s cellular structure, direct the metabolism, carry cellular messages, and form defensive forces.
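The logic of that lineup can be sketched in a few lines of code. The toy translator below, which carries only a handful of entries from the standard genetic code rather than all 64, shows how a gene’s sequence of three-letter codons dictates the order of amino acids in a protein.

```python
# A toy gene-to-protein translator. Only six codons of the standard genetic
# code are included here; the example sequence is invented.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly",
    "AAA": "Lys", "TGC": "Cys", "TAA": "STOP",
}

def translate(gene: str) -> list[str]:
    """Read a coding sequence three letters at a time into amino acids."""
    protein = []
    for i in range(0, len(gene) - 2, 3):
        amino_acid = CODON_TABLE.get(gene[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCAAATAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```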

In 1953, with the discovery of the structure of DNA, the scientific basis for investigating how proteins cause and cure disease was finally at hand. But research on the cellular pathways to disease was characterized by a doomed reductionism: for years the big picture of how different genes interacted with different proteins to produce different symptoms was ignored, largely because scientists lacked the processing power to generate and analyze the huge volumes of information necessary to perform this task.

The merger of medicine and microchip is in one sense only natural. DNA can be thought of as a three-billion-year-old Fortran code easily transduced into bits of data, captured in databases, and analyzed with sophisticated software. But until recently, the body’s digital code was just too complex to crack. The true potential of emerging genetic knowledge remained locked in a box of complexity, awaiting the development of a sufficiently advanced information technology. The key was abundant processing power to generate and manage huge data sets linking gene sequences to body functions and dysfunctions.

In the simplest cases, such as sickle-cell anemia, diseases can be linked to a single gene. But most important diseases are far more complicated, determined by multiple gene markers at many different locations. Finding the culprits requires large sample sets and powerful software algorithms that can hunt down genetic patterns among millions of data points, tracking which ones seem to be associated with disease. Only now is such technology becoming available.
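For a rough sense of what such a pattern hunt involves, the sketch below scans each marker in a toy genotype table for association with disease using a simple chi-square statistic. The data and the 0/1 encoding are invented for illustration; real studies use thousands of samples and corrections for testing many markers at once.

```python
# A toy association scan: for each marker, build a 2x2 table of
# (sick/healthy) x (variant present/absent) and compute chi-square.
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    cells = [(a, row1 * col1 / n), (b, row1 * col2 / n),
             (c, row2 * col1 / n), (d, row2 * col2 / n)]
    return sum((obs - exp) ** 2 / exp for obs, exp in cells)

# Hypothetical data: each row is a person, each column a marker (1 = variant).
genotypes = [
    [1, 0, 1], [1, 1, 0], [1, 0, 0], [1, 1, 1],  # patients
    [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1],  # healthy controls
]
sick = [1, 1, 1, 1, 0, 0, 0, 0]

for marker in range(3):
    a = sum(1 for g, s in zip(genotypes, sick) if s and g[marker])
    b = sum(1 for g, s in zip(genotypes, sick) if s and not g[marker])
    c = sum(1 for g, s in zip(genotypes, sick) if not s and g[marker])
    d = sum(1 for g, s in zip(genotypes, sick) if not s and not g[marker])
    print(f"marker {marker}: chi2 = {chi_square_2x2(a, b, c, d):.2f}")
# Marker 0 stands out (chi2 = 4.80); markers 1 and 2 show no association.
```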

George Weinstock, co-director of the human genome center at Baylor College of Medicine, believes this new computer-assisted ability to crack the DNA code is as significant as the microscope. “Before the microscope they never realized the structure of cells and the presence of disease-causing microbes in water,” Weinstock says. “The gene sequence will likewise have an impact over a number of centuries.” Our growing mastery of genetics has finally met up with an information technology sufficiently advanced to exploit it for commercial medical purposes. These new technologies are being adopted throughout the drug industry, but they are being most effectively implemented in the R&D infrastructure of some of the smaller biotechnology companies, which are gaining a competitive advantage by embracing new tools.

Protein in a Haystack

One of the best illustrations of how computational tools are redefining drug discovery is a new process called rational drug design. Traditionally, pharmaceutical companies discovered leads on new drugs by a process akin to blind luck. Most drugs work by binding to proteins and altering their function in some small way. So the first step en route to new “miracle cures” was finding a molecule that binds to a protein. In the old “wet lab” model, that meant mixing millions of different chemicals and hoping one of them stuck. Pharmaceutical companies sank billions of dollars into technology upgrades that made this antiquated model work a little better — with automated systems that helped scientists synthesize and survey thousands of chemical compounds a week, hoping to stumble upon a few hits. But most of these sticky compounds still failed the minute they left the test tube.

The emerging paradigm marries computational tools with biology to develop a different model: deploying information technology to design drug-like qualities right into the molecules from the very start. Instead of mixing compounds in test tubes at random, this process begins by teaching computers what the molecular structure of an effective drug ought to look like, and what molecular structures it ought to avoid. For example, certain molecular structures, promising in the test tube, bind to sites in the liver’s P450 system, where enzymes break them down, leading to poor absorption in the body. The idea behind rational drug design is to construct medicines atom by atom, fitting drugs like finely cut jewels onto settings made of protein.
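As a concrete illustration of designing drug-like qualities in from the start, the sketch below filters candidate molecules against precomputed chemical properties before anything is synthesized. The thresholds echo the well-known Lipinski “rule of five” heuristics for oral absorption; the molecules and their property values are invented.

```python
# A toy pre-synthesis filter: reject candidate structures whose computed
# properties suggest poor absorption. All molecules here are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    mol_weight: float       # daltons
    log_p: float            # lipophilicity
    h_bond_donors: int
    h_bond_acceptors: int

def looks_drug_like(c: Candidate) -> bool:
    """Lipinski-style screen: small, not too greasy, few hydrogen bonders."""
    return (c.mol_weight <= 500 and c.log_p <= 5
            and c.h_bond_donors <= 5 and c.h_bond_acceptors <= 10)

library = [
    Candidate("cmpd-001", 342.4, 2.1, 2, 5),
    Candidate("cmpd-002", 712.9, 6.3, 7, 14),  # too large and lipophilic
]
print([c.name for c in library if looks_drug_like(c)])  # ['cmpd-001']
```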

One of the first drugs to be designed in this way is the AIDS medication Agenerase, from the powerful class of drugs known as protease inhibitors. The first step in the development of Agenerase, which the FDA approved in 1999, was to create a three-dimensional picture of a key enzyme that HIV uses to reproduce (a protease) through a process called x-ray crystallography. In this process, scientists crystallize the purified enzyme and then aim x-rays at it. Computers capture the radiation as it diffracts off the crystal and reconstruct the scattered signals into a three-dimensional picture of the enzyme.
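At its mathematical heart, that reconstruction is a Fourier inversion. The toy below assumes the phases of the diffracted waves are known (real crystallography measures only intensities, and recovering the lost phases is a famous problem in its own right); it uses a small two-dimensional grid as a stand-in for a real electron-density map, and requires NumPy.

```python
# A toy crystallography reconstruction: "diffract" a stand-in electron
# density map into structure factors, then invert to recover the picture.
import numpy as np

density = np.zeros((32, 32))
density[10:14, 8:20] = 1.0   # a hypothetical blob of atoms
density[18:26, 14:18] = 0.5  # a second feature

# Diffraction is, mathematically, a Fourier transform of the density.
structure_factors = np.fft.fft2(density)

# Reconstruction inverts the transform (possible here only because this toy
# keeps the phases; real experiments record just the intensities).
reconstructed = np.fft.ifft2(structure_factors).real
print(np.allclose(reconstructed, density))  # True
```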

On computer screens, the protease looks like a mass of sticks and balls. Computers home in on the parts of the enzyme where drugs can bind. The site contains about 10 different regions on which different drugs could be hooked. Most drugs work, as described above, by binding in some way to a protein involved in causing a disease. For example, many cancers are caused by the overabundance of mutant forms of naturally occurring proteins that instruct certain cells to divide continuously even when they should turn themselves off. Many new cancer drugs work by binding and disabling these mutant proteins.

In rational drug design, computers screen different drugs against a protein’s binding site, digitally docking them with the protease to see what fits. One early version of Agenerase seemed to fit its pocket well, but part of the molecule hung out over the edge, meaning the drug was easily knocked around and dislodged. So scientists involved in the development of Agenerase snipped off three carbon atoms in a key area of the drug, which gave it a smaller profile and thus a snugger fit.
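Digital docking can be caricatured as a scoring problem: reward favorable contacts between a drug’s chemical features and the regions of the binding pocket, and penalize atoms that hang outside it. Everything in the sketch below, from the pocket layout to the scoring rule, is invented for illustration; real docking software models three-dimensional geometry, electrostatics, and molecular flexibility.

```python
# A toy docking score. The pocket and drug are 1-D lists of chemical
# "features"; matched positions score +1, overhanging atoms score -1.
POCKET = ["hydrophobic", "h_donor", "hydrophobic", "h_acceptor"]

# Which drug feature pairs favorably with which pocket feature.
PAIRS = {"hydrophobic": "hydrophobic",
         "h_acceptor": "h_donor",
         "h_donor": "h_acceptor"}

def dock_score(drug: list[str]) -> int:
    """Count favorable contacts; penalize parts hanging out of the pocket."""
    score = 0
    for i, feature in enumerate(drug):
        if i >= len(POCKET):
            score -= 1  # overhang: easily knocked loose
        elif PAIRS.get(feature) == POCKET[i]:
            score += 1
    return score

oversized = ["hydrophobic", "h_acceptor", "hydrophobic", "h_donor", "h_donor"]
trimmed = oversized[:4]  # "snip off" the overhanging piece
print(dock_score(oversized), dock_score(trimmed))  # 3 4: trimming helps
```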

The Coming of Genetic Medicine

The knowledge made available by precise diagnostics and targeted drugs is helping to shape another great paradigm shift in medicine: from species to individual. Medicine has been based on the largely unexamined assumption that disease processes and treatments are uniform across the species: people all have the same basic biological processes, so what cures a disease in one human being will cure the same disease in all other human beings. But in reality, human bodies differ, and so do individual responses to drugs and other treatments.

For example, researchers recently discovered that one likely reason African-Americans die more often from heart attacks is that ACE inhibitors, one of the drugs given to heart attack patients, are much less effective in people of African ancestry than in people of European ancestry. Computational tools give scientists the ability to collect and analyze such ethnic, family, and individual genetic variations. “Medicine never really focused on our differences,” explains Huntington Willard, President of the American Society of Human Genetics. “Our hearts are all different and the differences have implications for function and performance. Sequence knowledge will change doctors’ perspective to providing care for the individual.”

“By learning about what makes each patient’s tumor grow, what makes it spread or not spread, hopefully you could tailor therapies to the individual patient rather than use a one-size-fits-all kind of approach,” said Dr. Paul Meltzer, a senior investigator at the National Human Genome Research Institute. Researchers at one Cambridge-based company developed a test called Melastatin, which detects a protein that accurately predicts whether a melanoma will recur. The next step is to come up with a drug that turns on the proteins that turn on Melastatin, blocking skin cancer from metastasizing. The patient’s Melastatin gene would still be there; the drug would simply block the body from turning on the disease process.

This kind of research is already having an impact on clinical practice. Melanoma cases that slip through today’s antiquated screening processes account by themselves for six percent of the money awarded each year in malpractice suits. Scientists at the University of California, San Francisco, recently developed a test which detects characteristic chromosomal abnormalities, allowing doctors to diagnose melanoma even when normal tissue biopsies have pathologists stumped. Another strategy focuses on using receptors on the surface of the cancer cells as targets for drugs.

In short, as diagnostic tools and drug design techniques are transformed by the merger of medicine and microchip, so are the drugs that scientists are creating. Consider the advent of a large new class of drugs referred to as monoclonal antibodies. In 1975, two future Nobel Prize-winning scientists, Georges Köhler and César Milstein, stimulated an immune reaction in mice, cloned the antibody-rich immune cells, and then harvested the antibodies. Such refined antibodies are called monoclonal because, unlike the antibody cocktails our bodies create, they all react in a uniform way to a particular “antigen,” a piece of protein or carbohydrate on the surface of an “invader” cell. In many ways, monoclonal antibodies represent the low-hanging fruit of genomics; they are among the first genetically engineered medical products and the leading edge of the new, personalized, and targeted treatments.

The concept of using an antibody as a drug is fairly simple. The first step is to identify the antigen marker on the surface of a disease-causing cell. In the case of cancer, researchers identify a protein expressed on the surface of every cancer cell and then engineer an antibody that is programmed to recognize and attach itself to that protein. Monoclonal antibodies can be engineered so that, once attached to their target cells, they disable a protein, flag a diseased cell for destruction by a person’s own immune system, or kill a cell outright by interfering with its growth or by punching holes into it. There are six therapeutic monoclonal antibodies currently approved and marketed for several different diseases in the United States. More than 15 percent of the hundreds of drugs in clinical trials in the United States are antibodies.

Monoclonal antibodies can be designed to disable cell signals, such as those that cause a particular system to go awry or that carry the messages instructing cancer cells to grow. Alternatively, antibodies can be used as transport vehicles to identify cancer cells and deliver toxic payloads. Like tiny divining rods, these drugs hunt down only diseased cells, avoiding the shotgun approach of past chemotherapies. Best of all, scientists can make lots of them.

This is not, however, an overnight success story: researchers spent more than 20 years doing the underlying work that led to the arrival of monoclonal antibodies in the marketplace. Monoclonal antibodies were first produced in mice because it was comparatively easy to do so. But the drugs triggered rejection from human patients’ immune systems, which recognized them as foreign proteins. The result was that patients who received them often suffered life-threatening immune reactions. By the 1980s, researchers had begun to humanize the antibodies by replacing parts of the mouse antibody with human antibody, which ensured that the engineered antibodies would be better tolerated in humans. In effect, scientists re-engineered the antibodies to look more familiar by replacing at least half of the mouse DNA with human DNA.

The first of these “humanized antibodies” to reach the market, in 1994, was ReoPro, an anti-clotting drug that reduces the risk of death during angioplasty procedures by 57 percent. ReoPro — which is half-mouse, half-human — was low-tech by current standards. The breast cancer drug Herceptin, which came to market four years later, is five percent mouse, 95 percent human. Better versions are already on the way.

The View from Industry

Some analysts point to the declining number of novel drugs submitted for approval to the Food and Drug Administration over the past two years as evidence that this new technology is not yielding the types of breakthroughs once envisioned, despite increasing investments in research and development. According to the pharmaceutical industry’s own calculations, its R&D investment doubled to an estimated $30.5 billion in 2001. Despite the increased effort, output as measured by the number of new drugs and biologics approved or submitted for approval has been steady or declining across almost every major therapeutic area, and the trend is not likely to change dramatically over the next five years. If the technology being brought to the task of drug development is so impressive, why haven’t these innovations resulted in more new drugs being developed and approved? The answer is not so much technological as structural: some revolutions take time, and old ways of doing things die hard.

Nevertheless, the drug industry has been gradually reorienting itself around this new drug-discovery technology, adapting to the realization that its business is to create medicines, not simply to identify new targets or pathways at which to aim them. While the generation of these new targets is bearing fruit, some of them will prove dead ends once drugs are tested in people. This is not because the science was wrong, but because humans are more than the single pathway that a particular drug is created to target: the body comprises systems that can not only override medical interventions but also produce unintended consequences. If genomics and proteomics offer novel ways of attacking a disease, the next step is to build into the drug-discovery process an understanding not only of what target or biochemical pathway a new compound will attack, but of what effect this molecular intervention will have on the entire organism. This is the technical challenge the industry is grappling with, and the reason it stands at a watershed moment in the evolution of its development skills.

As the respected industry publication BioCentury recently noted, the problem for industry is figuring out how to integrate medicine into the research equation. The work being done at the front end is very quantitative and data-intensive, while the actual practice of medicine deals with interactions of physiological systems that may only be partially understood; it is sometimes more of an art than a science. The answer, from the industry’s standpoint, is to transform some of the new tools so that they more closely mimic biological systems, which is being done at an increasing number of companies.

The Real Future of Medicine

Taken together, the marriage of biology and silicon and the shift from species-based to individualized therapy will change the face of medicine forever. Traditional efforts to treat disease are being empowered by digital tools that annotate life with silicon technology. The enormous material effort of hunting for symptoms is being replaced by a combination of genetic knowledge and artificial intelligence that knows where to look and how to find problems before we do. In the new medical paradigm, disease will be diagnosed before it is made fully manifest. Highly targeted drugs will be used to intervene before organs are ravaged or tissue is destroyed.

This new ability to diagnose and treat certain diseases early, from infectious agents like hepatitis C to degenerative ailments such as Alzheimer’s and Parkinson’s, may obviate the need for the types of tissue, organ, or stem cell therapies that often attract the most public attention. Moving from wet lab to computer, from random to rational drug design, from species biology to the individual unique DNA profile, companies adopting the in silico paradigm are unlocking the long-hyped promise of genomic medicine, making targeted drugs and diagnosis a reality and drug development faster, cheaper, and better.

In the future, a supercomputer sitting in an air-conditioned room will work day and night, crunching billions of bits of information to design new drugs. Multiplying at the speed of Moore’s Law, which predicts that computer processing power doubles roughly every two years, this drug discovery machine will never need to rest or ask for higher pension payments. It will shape how we use the abundance of genomic information that we are uncovering and will be the deciding factor for the success of medicine in an age of digitally driven research.

Of course, there are reasons to question whether this new medical revolution will come to pass, and there are many things that could go wrong: Regulatory procedures need to keep pace with technological change and government agencies need to create frameworks for evaluating drugs that look and behave differently than previous medicines. The industry needs to maintain its financial footing to fund the new research. And, of course, many parts of this new technology still need to be validated in the clinical setting. Scientists still need to prove that their cool new tools can also make important new medicines.

But if one had to guess where the future of medicine really lies, it is in DNA chips, supercomputers, and new drugs, not embryo research, tissue transplants, or stem cells. It is time for our public debate to pay more attention to this fact, since a medical and technological revolution of this significance is sure to have lasting political, economic, and social consequences.

Scott Gottlieb, “The Future of Medical Technology,” The New Atlantis, Number 1, Spring 2003, pp. 79-87.
