ChemTalk

The Drug Development Process

Core Concepts

In this article, you will learn how the pharmaceutical drug development process transforms the idea of a treatment into a full-fledged medicine ready for patients to use. You will also see how researchers applied the drug development process to create safe, effective COVID-19 treatments in record time.

This is the third article in a special ChemTalk mini-series about the intersections between chemistry and public health, using COVID-19 as a case study. Across this series, you can expect to learn about the drug discovery and development processes, chemistry’s central role in diagnosing and preventing diseases, and careers that are on the front line of chemistry and public health.

Previous Articles in This Mini-Series

> The Chemistry Behind Coronaviruses
> The Drug Discovery Process

What is the drug development process?

Close your eyes and picture the inside of your local drugstore. (Now open them so you can keep reading.) Did you imagine yourself standing in one of many aisles, surrounded on both sides by shelves upon shelves of medications? Buried somewhere within this rainbow array of branded labels and price tags is a question we ought to ask: How did all of these medicines actually get here?

Think of drug discovery and drug development as two halves of the same coin. Drug discovery does the initial dirty work of selecting the best drug candidates from a pool of millions. Then, drug development puts these candidates through clinical studies to paint a realistic picture of how the drug impacts human health. This clinical research indicates whether the drug is a worthwhile treatment for a particular condition and how it compares to existing treatments. Equipped with this knowledge, health care practitioners and their patients can make more informed decisions about their treatment options.

A chart contrasting the goals, timelines, and research objectives of the drug discovery and drug development processes.

The individual stages of drug development look very different from one another. The process starts in a laboratory setting, where researchers study the drug candidate’s effects on animal models. This preclinical research period offers clues into how the candidate works (or doesn’t) in a living organism. A candidate that succeeds in preclinical research moves on to clinical trials, where it’s tested in humans for the first time. The trials progress over time to include more participants and answer different research questions. After gathering years’ worth of data in support of a candidate’s performance, safety, efficacy, and benefits, researchers seek regulatory approval. The approved candidate is then ready for manufacturing and marketing to the public.

Not every drug candidate survives this arduous process. In fact, most don’t. Over the course of this article, we’ll see how each phase of clinical research raises the stakes and lowers the likelihood that the drug will make it to market. This scenario is unfortunate, but it isn’t meant to sabotage drug candidates. Instead, it points out how researchers can improve upon a candidate, and ensures that the drugs on the market are the best they can be. So with this end goal in mind, how do we carry out this mission?

An Overview of the Drug Discovery Process

Before we dive into the details of how drug development happens, let’s first revisit the preceding process: drug discovery. Drug discovery is an exploratory series of trial-and-error laboratory experiments with the purpose of finding future medicines. With this mission in mind, pharmaceutical researchers design the ideal drug to treat a specific medical condition, then search for compounds that could serve as this drug’s driving force. Later stages of drug discovery narrow down these compounds, evaluate their role in treating or curing the condition, and assess their safety, side effects, and interactions.

Drug development picks up where drug discovery left off, refining the drug into the best version of itself. In doing so, it seeks answers to any unresolved questions. Is the drug working as expected, or are there unforeseen side effects? In what ways, and why, might the drug work differently on different people? And, above all else, how does the drug impact real patients in the real world?

By answering questions like these, drug development helps researchers draw conclusions about how well a drug meets patients’ needs, how it affects patients on individual and population scales, and how safe and effective it ultimately is. Getting there is a long journey, but a necessary one. Let’s take a look.

How does the drug development process work?

Drug discovery walks so drug development can run. Compared to the drug discovery process, which is guided by big research ideas and hit-or-miss experimentation, drug development is more regimented and structured. It’s marked with milestones that measure success or failure, culminating in the fearsome face of regulatory assessment. After the investigational efforts of the drug discovery process, this intense follow-up sounds like no fun. So why do we do it?

Think of drug development as the final exam or the final paper at the end of your semester-long course. The purpose is to see, after all your hard work, what you’ve learned and what you can do with it. Well, the same notion applies here, too. As one of the last phases of pharmaceutical research, drug development adds context to researchers’ prior knowledge about the drug, puts the drug to the test, and helps researchers identify shortcomings and future directions to improve upon the drug. Basically, it allows them to build upon what they know about the drug in order to make it even better.

Unlike your calculus exam, this is easy math: better drug treatment options lead to better patient outcomes. This underscores the pharmaceutical industry’s reason for existing in the first place. Pharmaceutical researchers want to make the world a better place by making medicines — and the drug development process guides their efforts to do that.

Preclinical Research: Setting the Stage for Success

There’s a big leap between testing a drug candidate in limited models like cells (which happened during drug discovery) and testing it in people. Luckily, we have a means of bridging that distance: preclinical research.

Clinical research, as we’ll see shortly, means research studies that involve people. So, preclinical research encompasses the multitude of research endeavors that must happen first. Technically, all of drug discovery belongs under the umbrella of preclinical research. In this article, we’re going to discuss what happens after a lead (a promising drug candidate) has been optimized.

Living Proof: Animal Models in Preclinical Research

In preclinical research, scientists utilize what they’ve already learned about the drug candidate to design and perform follow-up studies. What’s unique about these follow-up studies is that researchers perform them, for the first time, in a living organism. This type of study, in vivo testing, is a valuable tool in advancing drug development.

In vivo studies examine how drug candidates behave within and impact living systems. It’s a major step beyond drug discovery’s research strategies: using simulated computer-based scenarios and small-scale living structures, such as cells. But taking that step is a huge risk. If researchers haven’t yet rectified the candidate’s safety and quality flaws, then administering it to patients could have significant consequences.

Due to this, animal models in preclinical research serve as a crucial middle ground. Animal studies let researchers evaluate how drug candidates work in living systems, without posing any threat to human health. By studying how the drug affects an animal, they gather new insight about whether the drug is working as intended, whether or not there are unanticipated side effects, and how the drug impacts physiology. Nothing can fully imitate human physiology, but animal physiology comes close enough to make preclinical research useful. (It certainly comes closer than cell physiology.)

The form that in vivo testing takes will depend on the drug candidate that’s being researched. For example, as our closest evolutionary relatives, primates are arguably the animal model that most reflects the human body. But primates are incredibly complex creatures, and sometimes, that complexity isn’t necessary in a preclinical study. Instead, researchers might rely on simpler animals, like rodents or swine. Scientists studying neurological medications, for instance, may deem mice to be the ideal model, because mouse and human brains share many parallels. Researchers tailor the species of choice to reflect their drug candidate’s purpose and their patients’ needs.

How Animal Models Lay the Foundation for Drug Development

Considering this, a major precursor to preclinical research is knowing the drug candidate’s purpose and mechanism. How the drug works will be investigated in depth during clinical trials, but for now, a general understanding will suffice. This knowledge comes from all of the data gathered during drug discovery. Recall that drug discovery explored the drug candidate’s chemical properties, its reactions with other biomolecules, and how it operates within very basic living entities, like cells and tissues.

The drug candidate’s job is to interact with and influence the target molecule that contributes to the medical condition. This interaction occurs on a very small scale, between individual molecules, and this influence can take many different forms. Fundamentally, the drug candidate either activates or inhibits the target’s function. Drug discovery demonstrated these interactions in a limited way and in very controlled environments, like on a laboratory reaction plate or through in silico computer simulations. Although this was a solid starting point and gave researchers a context to frame their next steps, it ultimately didn’t reflect the body’s much greater complexity. After drug discovery is over, the candidate’s effects on the body are still poorly understood.

To clarify them, the researchers must introduce the drug candidate to living systems (larger, more intricate networks of interdependent organs that drive an organism’s function). This is the core motivation behind preclinical research. While navigating the preclinical stage, it’s important to keep in mind that the target and the drug candidate each have direct and indirect effects on a patient’s body. Activating or inhibiting the target, as the candidate does, consequently alters the target’s downstream effects. In practice, this might translate into symptom changes (hopefully symptom relief) or side effects.

A pyramid-shaped diagram showing the progression of increasing complexity within the hierarchy of life.
In biology, life is organized into units of varying complexity called the hierarchy of life. As complexity increases (down the pyramid), so do the interactions and processes that can occur. The examples listed in each level of the hierarchy are not an exhaustive list.
All Systems Go

The target’s downstream effects aren’t restricted to the immediate surrounding cells or tissues. In a living system, we can see how specific tissues and organs impact each other. This helps us learn about the drug candidate’s broader effects, as any changes in the target’s function can cause changes elsewhere in the living system.

Animal models become very advantageous here because they offer our first look into how the drug candidate actually works. Some medical conditions’ symptoms span multiple bodily systems (for instance, when the target is a specific kind of receptor that occurs on cells throughout the body). If those systems’ cells function differently from each other on a biochemical or physiological level, the target acting on these cells can create a wide assortment of symptoms. Or, maybe this receptor has a role in many metabolic pathways, and the disease pathway is only one of them. In this case, when the drug candidate influences this target receptor to treat the disease, it might inadvertently influence other body processes too.

Drug discovery focused on the direct interactions between the drug candidate and its target. However, since indirect interactions might exist too, we need to study the candidate in living systems, which are significantly more complicated than individual cells and tissues. Through preclinical research, the mechanisms behind the drug candidate’s effects and the nature of its function become more evident. This stage can reveal consequences, such as changes in the condition’s symptoms or unexpected side effects, that could never have been apparent during drug discovery. It’s crucial to understand these consequences before administering the drug candidate to people, so any risks can be carefully calculated and weighed against potential benefits. Undergoing this analysis protects future patients from unnecessary harm, setting clinical research up for success. But…

How do researchers measure success in preclinical research?

Since they occur in a more complicated biological system, outcomes of preclinical studies look somewhat different than those we saw in drug discovery. But how do we know if preclinical studies are working? After all, animals can’t exactly tell us when they feel better. Researchers have to look for outward signs of improvement: measurable signs that the drug candidate is changing the animal’s medical condition, physiology, or behavior.

These measurable signs can be qualitative, like visible changes in the animal’s appearance or behavior, or quantitative, like comparing an animal’s biomarkers over time. Biomarkers are quantifiable bodily characteristics that, when measured, indicate something about the organism’s health. Weight, blood pressure, hormone levels, antibodies, and more are all examples of biomarkers. These characteristics can change in response to environmental circumstances or drug administration, and they’re always measurable. Taking measurements of the same biomarker across a period of days, weeks, months, or even longer suggests how a drug candidate is changing the animal’s body over time. Some biomarkers, like certain proteins and antigens, can serve as indicators of disease. If these biomarkers decrease after an animal receives the drug candidate, then this could imply that the candidate is an effective treatment for that disease.
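To make this concrete, here is a minimal sketch in Python, using entirely made-up numbers, of the kind of before-and-after biomarker comparison researchers might run. As described above, a falling disease biomarker would hint that the candidate is having its intended effect.

```python
# Illustrative (made-up) readings of a disease-related biomarker in five
# animals: once before dosing began, and again after eight weeks of treatment.
baseline = [82, 75, 91, 88, 79]
week_8 = [54, 60, 70, 58, 52]

percent_changes = [
    (after - before) / before * 100 for before, after in zip(baseline, week_8)
]
average_change = sum(percent_changes) / len(percent_changes)

print("Per-animal change (%):", [round(change, 1) for change in percent_changes])
print(f"Average change: {average_change:.1f}%")  # a consistent drop suggests the candidate is working
```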

Meanwhile, qualitative assessments can go beyond the animal’s superficial appearance. Animal models might undergo imaging, like X-rays and MRIs, to give researchers a deeper look into how the drug candidate is influencing the body’s physiology or internal function. Assuming the animal’s physiology reflects a human’s, this becomes powerful information about how the drug candidate will impact humans during the clinical trials to come. If an animal passes away during the preclinical research phase, whether its death is related to the study or not, researchers might perform a necropsy (the animal equivalent of an autopsy). Necropsies let the research team closely examine how the drug impacted the animal’s individual organs and entire organ systems. It’s a big-picture analysis to complement the relatively nitty-gritty details like biomarker levels.

Considering the animal’s health from various angles, researchers assess the drug candidate’s impact on a full-body scale. Furthermore, through changes in the animal’s behavior, they can determine if the drug has psychological effects alongside physical ones. (But, again, this is limited by the fact that animals can’t verbalize their thoughts and feelings.) To an extent, the findings during this preclinical research collectively set expectations for how clinical trials will go. Since animal models represent a human-like living system, researchers can extrapolate the preclinical results to predict how the drug would work in the human body. This doesn’t replace the actual clinical trials, but it does help researchers prepare accordingly by suggesting things to watch out for during the clinical trials that will follow.

Safeguarding Animal Welfare in Preclinical Research

Animal studies are unavoidable due to regulatory policies surrounding preclinical research, but there are certainly ethical concerns to grapple with. By the start of preclinical testing, researchers have already refined the candidate and made it as safe as possible — but unfortunately, it’s not perfect.

Scientists acknowledge the sacrifice that animals make to biomedical research, and they treat that sacrifice with the utmost respect. Animal welfare is a central consideration in preclinical studies. Preclinical researchers receive specific training on how to handle and treat the animals before they can even interact with them. Specialists, like regulatory authorities and veterinary experts, ensure that animals receive proper care and enrichment throughout the entire study. There are even panels, like the Institutional Animal Care and Use Committee (IACUC) in the U.S., which oversee how the study employs animals and can intervene to protect their well-being.

A photograph of a laboratory researcher handling a mouse.
Rodents such as mice are common animal models in preclinical studies.

In a way, during preclinical research, animal models are the patients. So although animals substitute for humans in these studies, researchers treat them with the same respect that they would afford human patients. Animals are indispensable in drug development to maintain drug safety standards and make the drug as successful as possible in people, and researchers take this contribution seriously.

Preclinical animal research looks carefully at the drug candidate’s functional implications. This often takes the form of toxicology studies and behavioral studies: how the drug might harm a living system, and how it might influence symptoms or quality of life. For a candidate to proceed to human research, first it must prove its safety and effectiveness in animal models. Compared to the upcoming human trials, preclinical research is simpler because the organisms involved are simpler and there are no sociocultural variables to consider. Once a drug candidate succeeds here, factors like these come into play during the next segment of drug development, clinical research.

Clinical Research: Putting the Drug to the Test

At this point, our drug candidate has emerged victorious from an extensive range of tests, spanning from the introductory investigations of drug discovery to the immersive animal studies of preclinical research. Having proven itself worthy, the candidate can now proceed to testing in humans.

Contemporary clinical research increasingly incorporates diversity and inclusion initiatives from the very start. Diverse clinical trial participant pools help ensure that the drug will work safely and effectively on diverse future patient populations. Genetics and socioeconomic factors can influence people’s health, so accounting for a broad spectrum of these qualities during clinical trials lets researchers evaluate the drug’s performance on individuals from many backgrounds.

If clinical research sounds like a big undertaking, that’s because it is. To make administering and analyzing a clinical study more manageable, it gets separated into several phases. As the study progresses through each phase, the pool of participants expands, providing deeper insight into the candidate drug’s profile. All the while, researchers are gathering data and making informed conclusions about the candidate’s future as a drug treatment.

A diagram summarizing each of the phases of clinical research, the number of participants during each phase, and the objectives of each phase.

Phase 0 Clinical Trials

Clinical research evaluates a drug candidate from the ground up — and sometimes that means starting from zero. Phase 0 clinical trials are an optional, preliminary clinical research stage that can prevent sunk costs in time or money.

This stage of tiny trials mainly serves the researchers, rather than the participants. In phase 0, a very small group of human participants receives a very small dosage of the drug candidate. When studying this group, which usually doesn’t exceed 20 people, researchers seek answers to very basic questions. Does the candidate actually do what it is meant to do, by acting on the intended target within the body? Does it have the potential that researchers initially believed it did?

Later, more thorough trials will flesh out these answers in detail. But at this point, researchers can confirm that the candidate has been worth the effort so far. If phase 0 studies indicate that the candidate still has underlying problems or doesn’t act on the target appropriately, then the clinical research process can pause here to remediate it. By doing so, it will save a lot of the investment required to carry out future phases: investments such as time, money, and of course, human health.

As a first-in-human study, the dosage during a phase 0 trial is much lower than the dosage future patients will receive. The goal is to keep the dosage high enough to reveal how the drug behaves in the body, but low enough to protect participants from potential harm. With such small dosages, any negative effect of the drug is likely to be limited. Sometimes participants don’t experience a difference at all.

Because of this, phase 0 is among the least risky of all the phases, but it’s also the least common. Sometimes researchers skip it altogether and advance directly from preclinical research to phase I trials. They base this decision on what questions, if any, remain about the drug’s safety after preclinical research has ended. Remember, preclinical research studied the drug’s safety in animals, and researchers extrapolated this to predict its safety in humans. But in some cases, the animal studies aren’t sufficient to determine this with certainty. This is where a phase 0 trial is useful! It allows researchers to study the drug in humans, but in a very limited way, which minimizes any possible harm.

Investigational New Drug Application

For drug candidates that will be tested in the U.S., an investigational new drug (IND) application is submitted to the Food and Drug Administration (FDA) after phase 0, or after preclinical research if phase 0 was skipped. The FDA reviews IND applications against three criteria. If the application fails to meet a criterion, the FDA can intervene to halt the clinical research process before the phases continue.

First, the IND application must provide the FDA with data from preclinical studies. The data indicate how the drug candidate performed in animal models, with regard to aspects like pharmacology and toxicology. Based on this information, the FDA judges if the drug candidate is safe enough for testing in humans.

The second criterion on the IND application has to do with manufacturing capabilities. The application describes the drug candidate’s ingredients, stability, quality standards, and supply chain. All these details demonstrate that the drug manufacturer, often a pharmaceutical company, will be able to make the drug consistently. When producing many batches of a drug, it’s vital that all of the batches are of the same high quality. This ensures that every patient will receive the expected dosage and that no impurities contaminate the drug. It’s also a safety measure, ensuring that, during long-term drug production, the drug’s quality and composition won’t change over time.

Finally, the IND application presents a study plan to the FDA. How will the clinical trials work? Are the investigators who oversee the trials qualified to do so? Will the study conform to all regulations and adhere to ethical requirements such as informed consent? Particularly, the FDA is looking for possible safety risks and what efforts will be made to minimize them. Trials can’t move forward without a logical, reasonably safe study plan, so it’s a focus of the clinical research preparations.

The FDA reviews IND applications carefully and determines if the drug candidate is safe enough to test in humans. If the FDA has no objections, the team of investigators can proceed with the first phase of clinical research.

We can describe the phases following phase 0 using Roman numerals (I, II, III, …) or Arabic numerals (1, 2, 3, …). Both of these notations are interchangeable in the context of clinical research. Throughout this article, we’ll use Roman numerals for consistency.

A Detour into the Dose-Response Relationship

Let’s briefly look at a key tool in clinical research that tells a story about the drug’s safety and effectiveness. The dose-response relationship refers to how a certain amount of a drug candidate elicits a resulting level of response in the study subject (animal model, study participant, or human patient).

All of the clinical trial phases contribute to researchers’ understanding of the dose-response relationship. The details of this relationship are unique for every drug, but most dose-response curves share common characteristics. When plotted mathematically, the amount of drug administered (the dose) is on the x-axis, usually on a logarithmic scale, and the response to the drug is on the y-axis. Presented in this format, many drugs’ dose-response curves resemble a sigmoid function.

What this means in a medical context is that the relationship between dose and response is not necessarily linear. The sigmoidal shape indicates that, at low doses, subjects have a low response to the drug; below a certain threshold dose, there’s practically no detectable response. Then, after a certain higher dose, a greater response is observed. Depending on the drug’s mechanism, “response” might be measured in enzyme activity, fraction of molecule bound, changes in biomarkers (like hormone secretion or heart rate changes), or something else. For the subject, the response might feel like a change in symptoms, a new side effect, or it might be imperceptible. Eventually there is a dose that results in the maximum possible response; this is the ceiling effect, where the upper region of the sigmoid curve forms a plateau.
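To see that shape numerically, here is a minimal Python sketch using the Hill equation, one standard way to model sigmoidal dose-response curves. The ED50, Hill coefficient, and maximum response below are illustrative placeholders, not values for any real drug.

```python
import numpy as np

def hill_response(dose, ed50=10.0, hill_coefficient=1.5, max_response=1.0):
    """Sigmoidal dose-response modeled with the Hill equation.

    dose             -- administered dose (same arbitrary units as ed50)
    ed50             -- dose producing half of the maximum response
    hill_coefficient -- controls how steep the middle of the curve is
    max_response     -- the plateau reached at high doses (the ceiling effect)
    """
    return max_response * dose**hill_coefficient / (
        ed50**hill_coefficient + dose**hill_coefficient
    )

# Doses spaced evenly on a logarithmic scale, as on the curve's x-axis
for dose in np.logspace(-1, 3, num=9):  # 0.1 to 1000, arbitrary units
    print(f"dose {dose:8.2f} -> response {hill_response(dose):.3f}")
```

Running this shows the behavior described above: almost no response far below the ED50, a rapid climb around it, and a plateau at high doses.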

An overlay of the therapeutic dose and toxic dose for drug candidates, pointing out the ED50 and LD50 values on each respective curve.
Using a drug’s dose-response curve, we can identify its ED50 and LD50 values. The x-axis represents dose and the y-axis represents level of response. The ED50 value is the dose (x value) at a response (y value) of 0.5 on the therapeutic dose curve. Similarly, the LD50 value is the dose at a response of 0.5 on the toxic dose curve.
Making Meaning from the Dose-Response Curve

This curve looks very plain, but researchers can interpret a lot of information from it. First, it establishes a causal relationship between the drug and the response. It proves that administering the drug does, indeed, impact the subject’s body in some way. Second, the ceiling effect tells researchers when increasing the dose is no longer useful. Once the maximum response has been reached, there’s no purpose to increasing the dose beyond that. In fact, sometimes increasing the dose further can cause (or worsen) harmful side effects. Based on this, researchers determine which doses are most functional, and after which doses it’s wisest to stop.

The curve tells us the ED50: depending on how response is measured, this is the dose that is effective in 50 percent of subjects, or the dose that produces half of the maximum desired response. Graphically, the ED50 is the x-axis value at which the curve reaches the halfway point of the desired response. Note that the desired response, elicited by the therapeutic dose, is always non-lethal and might not be the maximum possible response. Curves farther to the left on the x-axis represent more potent drugs than curves farther to the right, meaning that a lower dose is required to elicit the same level of response. You can also think of potency as how “strong” the drug is. With regard to the y-axis, the curve’s slope depicts how drastically each unit of dose changes the level of response. Sigmoid functions don’t have consistent slopes over the course of the curve, so this change in response varies as the dose varies.

An overlaid comparison of two dose-response curves shifted on the x-axis, representing the difference in their drugs' potency levels.
Comparing two drugs’ dose-response curves. Here, the dose is represented as concentration of the drug, and the response is measured as the fraction of molecule bound. Both curves are sigmoidal. The red drug is shifted farther to the left on the x-axis, so it is more potent than the blue drug. To elicit any given response level, a higher dose of the blue drug is needed than of the red drug.

Aside from effectiveness, the other central component of the dose-response curve is risk assessment. The flip side of the ED50 is the LD50, the dose lethal to 50 percent of subjects. LD50 values can differ among species or modes of administration. In other words, a mouse might have a different LD50 value than a primate, or administering the drug orally might result in a different LD50 than administering it intravenously. So, measuring toxicity in animal models during preclinical studies isn’t a direct representation of the drug’s LD50 in humans. That being said, every drug has a threshold above which doses can be dangerous or even fatal. Dose-response curves can pinpoint this threshold, helping to prevent overdoses.
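One standard way to summarize the gap between the effective and lethal curves (a pharmacology measure not named above) is the therapeutic index, the ratio of the LD50 to the ED50: the larger the ratio, the wider the safety margin. A quick sketch with made-up values:

```python
def therapeutic_index(ld50, ed50):
    """Ratio of the dose lethal to 50% of subjects to the dose effective in 50%."""
    return ld50 / ed50

# Illustrative (made-up) values in mg/kg
ed50_mg_per_kg = 5.0
ld50_mg_per_kg = 400.0
print(f"Therapeutic index: {therapeutic_index(ld50_mg_per_kg, ed50_mg_per_kg):.0f}")  # -> 80
```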

Dose-response curves visually show the delicate balance between effectiveness and toxicity. Researchers must strike this balance to make their drug as effective as possible while simultaneously being as safe (having the lowest toxicity) as possible. A lot of circumstances can influence drug response, like the subject’s age, biological sex, genetics, and pregnancy status, to name a few. This is why the large sample sizes of clinical trials provide such great insight, giving researchers the clearest possible picture of how the drug works. As we progress through the clinical trials, we’ll see how each phase helps develop the shape of this curve.

Phase I Clinical Trials

Each phase of clinical research offers new insight into how the drug works, so each builds upon the previous ones. As a result, phase I clinical trials are the foundation of a successful study. A phase I trial’s objective is to establish that the drug is, indeed, safe in humans, and at which dosages.

These study participants, typically fewer than 100 people, are volunteers who may or may not have the condition that the drug is designed to treat. To fully understand the drug’s impact, it’s helpful to compare how it works in healthy people’s bodies versus in patients’ bodies. Participants are divided into smaller groups. Researchers administer a very low dosage of the drug to the first group, then observe them for negative side effects. If this dosage was safe for this first group, then the second group receives a slightly higher dosage. Again, researchers monitor the second group for side effects, and accordingly adjust the dosage for subsequent groups of participants.

This methodology, called dose-ranging, helps researchers find the highest safe dosage in humans. Importantly, it also suggests what consequences, if any, too-high dosages can have on the body. The drug candidate might still be considered safe, even if it causes negative side effects at a certain dosage. If side effects are minor, or if the candidate’s benefits outweigh them, clinical trials will likely continue. However, if side effects are severe, interfering with participants’ quality of life or ability to function, the study pauses here to reassess the drug’s safety.
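Here is a deliberately simplified sketch of that escalation logic in Python. The cohort results and the safety limit are hypothetical, and real phase I designs (such as the classic “3+3” design) follow far stricter statistical and clinical rules.

```python
def find_highest_safe_dose(cohort_results, max_severe_events=1):
    """Walk through escalating dose cohorts and stop at the first unsafe one.

    cohort_results -- list of (dose, severe_side_effect_count) pairs,
                      ordered from the lowest dose group to the highest.
    Returns the highest dose that stayed within the safety limit, or None
    if even the lowest dose caused too many severe events.
    """
    highest_safe_dose = None
    for dose, severe_events in cohort_results:
        if severe_events > max_severe_events:
            break  # escalation stops here; higher doses are never tested
        highest_safe_dose = dose
    return highest_safe_dose

# Hypothetical results: (dose in mg, severe side effects seen in that group)
results = [(5, 0), (10, 0), (25, 1), (50, 3)]
print(find_highest_safe_dose(results))  # -> 25
```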

As you can imagine, phase I forms the skeleton of the dose-response curve. This phase’s results identify how each dose of the drug candidate leads to quantifiable responses in humans. Researchers could have gathered some of this information from preclinical animal studies, but not all of it. Preclinical researchers study animal models as a representation of human patients, but the comparison isn’t flawless. A human might experience a side effect that an animal model did not. Alternatively, genetic differences between humans and animals might influence the drug’s performance. This is why, after animal studies have ended, the drug must undergo human studies before it can reach human patients.

Pharmaco-what? The Drug’s Inner Workings

This practice of monitoring for side effects and safety impact is pharmacovigilance, or PV. PV is a must-have during any responsible, ethical clinical research study because it maintains participant safety. After clinical research is over and the drug reaches real patients, PV continues by monitoring those patients for side effects. As such, safety surveillance is an ongoing procedure during — and after — drug development. Researchers don’t stop caring about a drug’s safety simply because that drug has already made it to market!

Phase I trials also study two other areas: pharmacokinetics and pharmacodynamics. These studies begin during phase I, but researchers continue to evaluate them throughout the later clinical trial phases too. Pharmacokinetics (PK) refers to the ways in which the body affects the drug. How the body absorbs, distributes, metabolizes, and excretes the drug all fall within the PK realm. Together, these four processes (abbreviated ADME) illustrate what the body does with the drug after the drug has been administered. PK studies follow this journey from the time the drug gets administered, as the body makes use of the drug’s ingredients, all the way until the body has excreted the leftovers.
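To give a feel for what PK studies measure, here is a minimal sketch of a textbook one-compartment model with first-order absorption and elimination. Every parameter value is an illustrative placeholder; in a real PK study, values like these are estimated from blood samples drawn at set times after dosing.

```python
import math

def plasma_concentration(t_hours, dose_mg=100.0, bioavailability=0.8,
                         volume_l=50.0, ka=1.0, ke=0.2):
    """Drug concentration after one oral dose in a one-compartment PK model.

    ka and ke are first-order absorption and elimination rate constants (1/h).
    """
    scale = (bioavailability * dose_mg * ka) / (volume_l * (ka - ke))
    return scale * (math.exp(-ke * t_hours) - math.exp(-ka * t_hours))

for hour in (0, 1, 2, 4, 8, 12, 24):
    print(f"{hour:2d} h after dosing: {plasma_concentration(hour):.2f} mg/L")
```

The printed concentrations rise as the drug is absorbed, peak, and then fall as the body metabolizes and excretes it, tracing the ADME journey described above.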

The final focus of a phase I trial is pharmacodynamics (PD), or how the drug acts on the body. Researchers ensure that the drug is working on the target, and investigate how. What is the drug’s mechanism of action, and does this differ from the mechanism that researchers originally intended? Does the drug bind to a specific type of receptor? Does it serve as an agonist, or perhaps an antagonist? For safety reasons, this is also an ideal time to watch for off-target interactions, in which the drug affects molecules or structures besides the target.

At the end of phase I, researchers conclude if the candidate is safe in humans, and at which dosages. They also have real-world data regarding how the candidate works in the human body. At this point, the study is ready to advance to phase II.

A table comparing the purposes and central principles of pharmacovigilance, pharmacokinetics, and pharmacodynamics in drug development.

Phase II Clinical Trials

It’s not enough to know how the drug candidate works. We also have to know that it works. This is phase II’s goal: to confirm that the candidate is a viable solution for the condition being treated.

To address this question, the study recruits more participants than in phase I, often up to a few hundred. Generally, this is a brand-new set of participants; a single participant typically doesn’t progress through all the phases of the drug candidate’s clinical trials. When it comes to research, having more participants is generally a good thing. It results in more data points for researchers to analyze, and a more diverse data set overall. With a greater number of participants, a broader range of side effects becomes apparent, including less common ones. Remember that phase I limited the number of participants in order to preserve participant safety and minimize potential harm. By phase II, researchers have already verified that the drug candidate is safe, so they can confidently recruit participants in higher quantities.

With regard to constructing the dose-response curve, phase II results continue to add more data points to the curve. The curve communicates information about the drug candidate’s toxicity and effectiveness, giving meaning to particular doses along the curve and helping answer critical safety questions. All of these details combine to refine the curve and, by extension, the researchers’ understanding of the drug’s utility.

To discover if a candidate works as a treatment, it wouldn’t be very useful to administer it to healthy people. Therefore, phase II participants should have the medical condition that the drug is designed to treat. This way, researchers can see how the drug directly impacts the condition, if it does at all.

Trial Randomization

Study designs often randomize phase II trials. In doing so, different groups of participants receive the drug differently. This might look like each group receiving different modalities of the drug, like in pill form versus as an intravenous infusion, or different dosages that phase I determined are safe. Randomization has a few advantages. First, since researchers aren’t deliberately assigning these drug delivery formats to specific participants, it prevents researcher bias from interfering with the study. Secondly, randomization allows researchers to compare many combinations of modality, dosage, and any other factors they’re studying. They’re seeking the “Goldilocks” combination that maximizes safety and effectiveness while minimizing negative side effects and off-target interactions. To find it, they have to assess various combinations and the impacts of each one, which is easily built into a randomized trial’s design.
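Here is a minimal, hypothetical sketch of that kind of randomization: sixty imaginary participants are shuffled and dealt evenly across every combination of modality and dosage that, in this made-up example, phase I established as safe.

```python
import itertools
import random

# Hypothetical treatment arms: every combination of delivery modality and
# dosage that (in this example) phase I found to be safe
modalities = ["pill", "intravenous infusion"]
dosages_mg = [10, 25, 50]
arms = list(itertools.product(modalities, dosages_mg))

participants = [f"participant_{i:03d}" for i in range(1, 61)]
random.seed(42)  # fixed seed so the example is reproducible
random.shuffle(participants)

# Deal the shuffled participants across the arms like cards around a table
assignments = {arm: participants[i::len(arms)] for i, arm in enumerate(arms)}

for (modality, dosage), group in assignments.items():
    print(f"{modality}, {dosage} mg: {len(group)} participants")
```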

In some cases, researchers break down the phase II study even further, into phases IIA and IIB. Phase IIA focuses on dosing (how much of the drug to give), while phase IIB explores the drug’s efficacy at those dosages. Efficacy and effectiveness are similar, but not interchangeable. Effectiveness pertains to how well the drug works under real-world conditions. When studying how the drug performs under ideal conditions, like in controlled research studies, it’s called efficacy. The distinction is important because, unfortunately, real-world conditions aren’t always ideal. In the real world, patients don’t always take drugs as prescribed, and they might accidentally forget (or purposely skip) doses. As a result, a drug’s efficacy is generally higher than its effectiveness.

By considering all of these factors simultaneously — the drug’s impact on the condition of interest, its delivery modality, its dosage, its effectiveness and efficacy — researchers gain a more well-rounded idea of whether or not the drug candidate is a viable treatment option. It’s a make-or-break point in clinical research: when clinical trials fail, they often fail during phase II. But, if phase II results successfully show that the drug treats the condition in a beneficial way, and is still reasonably safe and effective, then the study can expand to more participants — many more.

Phase III Clinical Trials

Recall that having more research participants is valuable because it creates a more comprehensive understanding of the drug’s impact. The concept of value is central to a phase III trial, which involves hundreds or thousands of participants. Phase III trials compare the drug candidate’s performance, and therefore its value, against an existing treatment for the same condition.

The existing treatment is a standard way to treat the condition: already on the market, already established as safe and effective, and commonly used to help patients with that condition. If multiple treatments exist, phase III trials might extend to separately study how each one compares to the drug candidate. Thanks to this phase, researchers, patients, and health care practitioners can discern where the drug candidate fits into the larger medical landscape.

This is the final hurdle before a drug candidate faces regulatory approval. To overcome it, the phase III trial must demonstrate that the candidate adds meaningfully to the array of current treatments. Phase III should replicate favorable results from prior phases, to confirm that the drug candidate is effective across large populations. It can take years to progress through phase III alone, in order to gather information about the candidate’s long-term effects. Let’s look a bit closer at the techniques that make this phase so meticulous.

Study Blinding

Like phase II, phase III involves randomization. At random, some participants receive the drug candidate while others receive the existing treatment or a placebo. A placebo resembles the drug candidate or the existing treatment, but actually has no therapeutic benefit. It serves as a control when comparing the impacts of each treatment option. Participants who receive the placebo aren’t aware that it’s an ineffective treatment, yet sometimes they claim that it improved their symptoms anyway, a phenomenon known as the placebo effect. In double-blind studies, researchers also don’t know which participants receive the placebo until everyone finds out after the study ends. This is another tactic to prevent bias, especially in contrast to single-blind studies, where researchers do know which participants receive each treatment type.
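A minimal sketch of how double-blinding might be bookkept: treatments are assigned at random, but the key linking participants to treatments is sealed away, and the study team works only with neutral kit codes until the trial ends. The group sizes and labels here are invented purely for illustration.

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

participants = [f"participant_{i:03d}" for i in range(1, 11)]
treatments = ["drug candidate"] * 4 + ["existing treatment"] * 3 + ["placebo"] * 3
random.shuffle(treatments)

# Sealed key: revealed to the researchers only after the study ends
unblinding_key = dict(zip(participants, treatments))

# What researchers and participants see day to day: anonymous kit codes
blinded_labels = {p: f"kit_{code:03d}" for code, p in enumerate(participants, start=1)}

print(blinded_labels["participant_001"])  # e.g. 'kit_001' -- no hint of the treatment
```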

Ironically, blinding a study can help researchers see the true value of the drug candidate. Since it minimizes the interference of bias, blinding assures that the study results are valid, accurate, and free from distortion. What’s more, even participants who receive the inactive placebo may still benefit from the study. Surprisingly, the placebo effect has real potential to improve a participant’s symptoms — but any improvement is purely psychogenic.

Phase II studies can be blinded too, but blinding is especially powerful during phase III, when researchers are collecting significantly more data points than before. Having more data points increases the certainty that the results are accurate. To diversify the data set, phase III trials typically span many locations and recruit participants of many different backgrounds. Despite being the most costly, time-consuming phase, it’s the one that yields the most realistic results, because it reflects how the drug works in real-world settings.

A photograph of a clinical research participant receiving a dose of a drug candidate via an injection into the right upper arm.
A research participant receiving a dose of a drug candidate as part of a clinical trial.
Determining a Drug’s Value

One way to gauge a drug candidate’s worth is to weigh its dose-response curve against those of existing drug treatments for the same condition. By doing so, researchers can make an apples-to-apples comparison of the condition’s existing and prospective treatments. Now that the candidate’s dose-response curve contains data assembled through preclinical research and clinical phases I and II, researchers can judge its safety, effectiveness, toxicity, and potency relative to the curves of existing drugs. However, comparing dose-response data excludes non-drug treatments that might also be worthwhile options, and further evaluation would be needed to compare the candidate to those non-drug treatments.

Even if the existing treatment is better than the drug candidate, this doesn’t mean that the drug candidate is worthless. For example, a patient might be allergic to an ingredient in the existing treatment. If the new drug candidate lacks that ingredient, now the patient has a practical treatment option. Similarly, the drug might elicit a response in one patient but have no therapeutic benefit on another. This justifies having multiple treatment options on the market, so this latter patient can still experience symptom relief from a different treatment. Or, the existing treatment might be significantly more expensive than the drug candidate. In this scenario, having the candidate available would alleviate financial barriers that hinder patients’ access to treatment.

Whatever the reasoning, if phase III results show that the drug candidate has justifiable value as a treatment option, the most arduous part of clinical research is over. The research team can now seek regulatory review to put the drug on the market. While that process is underway, phase III trials may still be taking place. Continuing phase III during this time has a twofold purpose: it provides an ongoing stream of even more data points, and it allows participants to keep receiving a beneficial treatment during the wait for regulatory approval.

Relatively few drug candidates make it to the review stage. Just as each chapter of the drug discovery process narrowed down the number of feasible drug candidates, so do the phases of drug development. Some candidates that succeeded during drug discovery may end up failing during preclinical research, phase I, or phase II. Overall, only about 10 to 15 percent of drug candidates that reach clinical trials manage to secure approval. (Success rates are roughly 50 percent for the preclinical stage, 70 percent for phase I, and 30 percent for phase II, although these are estimates.) Failure usually results from low efficacy, high toxicity, or a lack of resources to continue the study.
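As a rough sanity check on how those per-phase estimates relate to the overall figure, here is a small back-of-the-envelope calculation. The combined phase III and approval rate is back-calculated from the numbers quoted above; it is not a figure stated in this article.

```python
p_phase1 = 0.70   # estimated chance of passing phase I
p_phase2 = 0.30   # estimated chance of passing phase II
overall = 0.10    # lower end of the roughly 10-15% overall approval figure

p_through_phase2 = p_phase1 * p_phase2            # chance of surviving phases I and II
implied_phase3_and_approval = overall / p_through_phase2

print(f"Survive phases I and II: {p_through_phase2:.0%}")                       # 21%
print(f"Implied phase III + approval rate: {implied_phase3_and_approval:.0%}")  # ~48%
```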

Though this is disappointing for the researchers and patients who were counting on the candidate to get approved, it ensures that the drugs on the market are truly the best options available. Another silver lining appears at this point, too: drug candidates that have completed phase III are nearly certain to receive regulatory approval.

The Home Stretch: Review, Approval, and Beyond

The finish line is in sight. By now, researchers have collected sufficient information to believe that the candidate is safe, effective, and evenly matches (or surpasses) existing treatments. The only task that remains is to convince regulatory authorities of this. The regulatory approval process varies by country, but in this section, we’ll examine how it plays out in the U.S.

New Drug Application

Recall that, before phase I, the research team submitted an IND application to the FDA. The IND application explained why the drug candidate succeeded in preclinical research and was ready to advance to clinical research. Now, after phase III, the research team must submit another application to the FDA: a New Drug Application (NDA). The NDA explains why the drug candidate succeeded in clinical research and is ready to go to market.

In the NDA, the researchers summarize all of their findings from all of the clinical trials. They describe what they learned about the drug candidate’s side effects and pharmacokinetic and pharmacodynamic mechanisms. Here, the FDA’s role is to review all of the data and make key decisions about how to market the drug. This agency ensures that the trials were statistically valid, maintained ethical standards, and that the candidate’s potential benefits outweigh any potential risks. As always, the candidate must be of high quality and purity, with consistent production across batches.

This Little Druggy Went to Market: Getting Medications on the Shelves

Chances are, you’ve opened a package of medication and unfolded a very long insert written in tiny text, or you’ve zoned out during a drug commercial that rattles off too many side effects to count. During regulatory approval, the FDA decided that this information is necessary when marketing the drug to you. Labeling the drug accurately keeps patients and health care practitioners in control. In order to make a well-versed decision about whether the drug is the right treatment for the patient, it’s crucial to understand what ingredients are in the drug, what possible side effects and complications might arise, and what to do if that happens. All of this information is reviewed during the FDA approval process.

If the FDA raises concerns about the research findings or isn’t convinced that the drug is safe and effective, it can ask the researchers to provide more detail or conduct further studies. But if it agrees that the drug candidate is a valuable treatment option, it approves the drug and releases it to the market (and the researchers and patients all celebrate a job well done!). It’s worth noting that FDA approval only approves a drug to be marketed within the U.S. In other words, FDA approval is not automatic permission to market the drug in any other countries. Other countries may perform their own approval processes for the drug, following their own regulatory criteria, but these criteria can differ worldwide.

After approval, the drug is manufactured at a higher scale and marketed to patients who have the condition. But — gotcha! — drug development isn’t over yet. In fact, it never truly “ends.” And although this may sound unproductive and exhausting, it’s actually a good thing.

Phase IV Clinical Trials

As you might remember, phase III was the longest of the clinical research phases so far. It took years to compile enough accurate, diverse, statistically sound data points to craft an argument in favor of approving the drug. There’s one phase that can exceed this duration, though: phase IV.

Phase IV trials are the final stage of clinical research (this time we mean it). This stage is also referred to as post-marketing surveillance, which is exactly what it sounds like: monitoring an FDA-approved drug for long-term effects. For some drugs, certain effects don’t become apparent until several years after patients receive them. To accommodate this, regulatory authorities and researchers continue monitoring patients during phase IV.

Safety, quality, side effects, and effectiveness are all points of interest in this study. With the drug now on the market and widely available, many more patients are using it, and the phase IV participant pool includes thousands of them. Phase IV studies can tell patients if there are future effects to anticipate, and tell health care practitioners if there are any negative interactions between the drug and other medications. If taking the drug for many years will yield effects that don’t appear in short-term use, then a phase IV trial can indicate this.

Since drug monitoring is an ongoing process, it’s inherently never-ending. New effects could arise at any time, and patients and regulatory authorities need to be aware of them. Also, understanding the long-term course that a drug takes might inspire ways to refine the drug further. Based on details they obtain in phase IV, researchers may revisit the drawing board to address unforeseen issues and make the drug better. This continuous improvement makes the drug the best it can be, so the patient’s quality of life is the best it can be.

A flowchart describing the sequential order of major milestones in drug development.
A flowchart showing how the leads from the drug discovery process proceed through preclinical and clinical studies, regulatory review, and post-marketing surveillance (Phase IV clinical trials).

How were COVID-19 drugs developed?

So, drug development takes quite a long time, and for good reason. But what happens when it’s not feasible to wait that long?

Dilemmas like these framed the first year of the COVID-19 pandemic. Economic paralysis and periodic waves of mutated variants devastated people worldwide, and waiting for a solution was agonizing. After it became clear that the pandemic would persist, biomedical researchers and pharmaceutical scientists jumped into action to devise treatments. These treatments would be in a league all their own: easy to administer quickly and on a global scale, effective enough to relieve the pandemic’s strain, and of course, safe.

To conquer these obstacles, the drug development process had to pivot in the face of a constantly evolving public health crisis. Eventually, suitable medications and vaccines made it to patients and made the pandemic much more manageable, to the extent that it no longer dominates daily life for most of us. This is a huge achievement for biomedical research, but it didn’t come without its challenges. Let’s reflect on how this situation played out, from a pharmaceutical perspective.

How were COVID-19 medications developed?

If you contracted COVID-19 today, your health care provider could likely prescribe you a medication tomorrow. Now, in the aftermath of the pandemic, antiviral COVID-19 medicines are more readily accessible than they used to be.

Every drug has an origin story, but during the height of the pandemic, researchers didn’t have the luxury of time to search for brand-new medicines. (Some researchers certainly tried, but their drug candidates took longer to reach patients.) Consequently, many COVID-19 drugs didn’t have a drug “discovery” process so much as a drug “repurposing” process. Consider diseases that cause symptoms like COVID-19’s, or viruses similar to the one responsible for COVID-19. Can you, as the researcher, design a drug that mimics those diseases’ existing treatments? Can you apply an existing antiviral drug in a new, coronavirus-specific context?

To an extent, this strategy appears in the standard drug discovery process too. It was a good technique for the COVID-19 drug search because it gave researchers a starting point that wasn’t square one, saving precious time on initial drug discovery efforts. Through creative thinking, persistence, and a bit of sleuthing, researchers can reach solutions that streamline the drug development timeline. For example, remdesivir (brand name Veklury) is now a commonplace COVID-19 medication, but it was originally developed as an antiviral for other diseases and had already shown activity against the coronaviruses behind SARS and MERS. All coronaviruses infect host cells using a similar approach, making it relatively straightforward to apply an antiviral that works against one coronavirus to another.

Coronavirus Treatment Acceleration Program

At the onset of the pandemic, the FDA implemented the Coronavirus Treatment Acceleration Program (CTAP). Redirecting scientific resources and biomedical experts’ interdisciplinary expertise toward COVID-19 drug development, CTAP drove the search for COVID-19 therapeutics besides vaccines. CTAP enabled the FDA to review, in a relatively short period, many more INDs and NDAs than it typically receives. In doing so, it set a vast number of COVID-19 drug discovery efforts and clinical trials into motion.

You might personally know people who received COVID-19 medications but still experienced symptoms for some time afterward. This doesn’t mean that the medications weren’t doing their job. Remember that, at this point in the pandemic, health care facilities were spread very thin. These facilities struggled to meet so many patients’ needs while maintaining infection control measures and managing employee burnout. Because of this, perhaps COVID-19 drugs’ highest priority was not to eliminate symptoms, but rather to reduce patient hospitalizations. This way, patients recovered at home when possible, while health care facilities devoted resources to sicker patients needing immediate care.

This was, all-around, a fragile situation. If we could have anticipated COVID-19, it would have been ideal to prevent it from happening in the first place, right? Well, that’s precisely what a vaccine is for.

How were COVID-19 vaccines developed?

Most vaccines take over a decade of research and discovery before the public can receive them. Strikingly, COVID-19 vaccines were released less than a year into the pandemic. What accounted for this rapid rollout?

The most prominent COVID-19 vaccines in the U.S., made by Pfizer and Moderna, are mRNA vaccines. Many people first learned about mRNA vaccines in the context of COVID-19, but the technology predates the pandemic. Before COVID-19, researchers had been exploring mRNA vaccines against diseases like rabies and Ebola. The pandemic inspired researchers to apply this method to COVID-19 as well. Coronaviruses use a spike protein to infect host cells, so mRNA vaccines deliver genetic instructions that prompt the body’s own cells to make a harmless version of that spike protein, training the immune system to recognize it. That way, if a real coronavirus enters the body later, the body’s immune system will already be poised to attack.

Traditional vaccine technology was a focus of COVID-19 vaccine development too, but mRNA technology was more groundbreaking and ultimately reached the public sooner. CTAP didn’t support COVID-19 vaccine development, but other programs contributed to it instead. For example, the U.S. government’s Operation Warp Speed invested financially in the discovery, research, and mass production of these vaccines.

But despite this support backing COVID-19 drug and vaccine development, the process didn’t happen seamlessly. To learn lessons from the pandemic, it’s helpful to understand what we could have done differently or better. Let’s explore a few of those factors.

Special Challenges in COVID-19 Drug Development

Despite the pandemic’s obvious repercussions, some people remained hesitant when presented with these new treatments. Given the treatments’ production speed, and with misinformation swirling abundantly in media outlets, skeptical attitudes raised questions about whether these drugs were truly safe.

Being attentive to what you put in your body is a good thing. Informed consent is a non-negotiable component of medical decision-making, and it’s understandable that someone might hesitate to accept a drug that may have been made recklessly. However, COVID-19 treatments were not made recklessly, as some people believe.

It’s true that the drug development process was accelerated to match the urgent circumstances. Regardless, researchers and the FDA gave the same scrupulous attention to detail to emerging COVID-19 treatments as they do to any other drug. Let’s see how the FDA balanced both the responsibility of being thorough and the responsibility of acting swiftly.

EUAs to the Rescue!

After COVID-19 became a pandemic, drug development efforts intensified. There was a steady stream of new drug candidates being invented, researched, and reviewed. To expedite the process, the FDA issued Emergency Use Authorizations for the most promising candidates. An Emergency Use Authorization (EUA) makes a not-yet-approved drug candidate available during a time-sensitive health situation. Just like all drugs, the candidate still undergoes preclinical and clinical studies, and the FDA still reviews these findings. If the FDA believes that the candidate is reasonably safe and effective, with potential benefits that justify its potential risks, then it can issue an EUA, giving the public access to the drug.

During COVID-19, EUAs authorized a variety of products, including diagnostic tests, medical devices, and vaccines. The rollout of vaccines, in particular, was complicated. It seemed that as soon as a vaccine became widely available, the virus had mutated, making that vaccine somewhat less effective. Viruses mutate constantly, and mutations that help a virus infect hosts more easily tend to spread; the virus behind COVID-19 is no exception. During a pandemic, this mutability made the stakes even higher; drug development efforts were essentially a race against time. To beat the virus at its own game, researchers had to redesign vaccines to target new COVID-19 variants and clinically study those vaccines. Then the FDA had to review those study results, issue EUAs when appropriate, and bring the vaccines to market, all before the virus mutated again.

A close-up photograph of the Janssen COVID-19 vaccine, including its label designating it for use under Emergency Use Authorization (EUA).
The COVID-19 vaccine manufactured by Janssen. Note the EUA designation on the left side of the image. The Janssen COVID-19 vaccine does not use mRNA technology.

Show Me the Money

We can’t neglect the fact that drug development requires money. A lot of money. Funding comes from sources like governments, pharmaceutical companies, philanthropic support, and venture capitalists. Sometimes, researchers must apply for grants in order to receive funding, and it’s not guaranteed that they’ll receive those grants. But the pandemic was a unique circumstance. Since its consequences wrecked people’s lives, livelihoods, and the global economy, these sources redirected extra funding to COVID-19 drug development. These supplemental funds quickened the pace of this process even more, enabling researchers to do what they do best: design and innovate game-changing drug solutions.

Now, with sufficient funding, we’ve seen what science can do. Technically, any forthcoming drug candidate could zip through the discovery, development, and approval processes if it received this much financial support. However, COVID-19 is a special case where this all-hands-on-deck approach was a productive way to manage the spread in a timely manner. But it’s not realistic or sensible to devote the majority of resources to treating a single disease. Other conditions, including chronic and less urgent ones, need to receive funding too. Even during the pandemic, people still suffered from conditions besides COVID-19, and drug development efforts had to continue for those conditions simultaneously.

Delivering Drugs to Patients

With this in mind, it makes sense that the drug development process for early COVID-19 therapeutics was unusually short. After all, a treatment can’t do a patient any good if the patient has already succumbed to the disease by the time the treatment becomes available. But as treatments became available, even on an accelerated timeline, the FDA still inspected them closely for safety, quality, side effects, and all other regulatory criteria.

Once an EUA was in place, the last challenge was the logistical question of how to manufacture and distribute COVID-19 drugs efficiently. Obviously, there was great demand for these products, but supply was initially much lower. It took time for drug manufacturers and pharmaceutical companies to scale up production in order to meet demand. Unfortunately, even if there’s enough supply to share with other countries, those other countries may not have yet approved the drug according to their local standards. Regulatory and, in some instances, political barriers prevented international drug distribution from happening immediately. Meanwhile, other countries were performing their own COVID-19 drug development. But while drug distributors awaited permission to expand internationally, infections continued to grow and spread across borders.

As anyone who lived through, or worked to solve, the COVID-19 pandemic can tell you, it wasn’t easy. This drug development undertaking was undeniably a showcase of these scientific and biomedical experts’ ambition. In the next article in this mini-series, we’ll introduce you to some of them and show you how you can join them to shape the future of public health.

A flowchart showing the sequence of major milestones within the combined drug discovery and development processes.
A roadmap of the major milestones of the drug discovery and development processes.

Conclusion

Drug development piggybacks off of the earlier drug discovery process to turn a candidate drug into a reality. Preclinical research, including animal studies, kicks the investigation into full gear by evaluating the candidate’s basic physiological impact. Once these studies have shown that the candidate is sufficiently safe and effective, clinical research can begin, allowing human subjects to participate for the first time. Several phases of clinical trials rigorously assess the candidate’s performance, its ADME properties, how it compares to existing treatments, and, again, its safety and effectiveness. If the candidate meets expectations, it undergoes regulatory review and finally reaches the public. Combined, the drug discovery and drug development processes are the heart of the pharmaceutical industry, diligently delivering new medicines to improve patient outcomes.