Saturday, June 21, 2014

Animal Testing Might Do Something!

Since I've decided to stop advocating animal rights, I felt that I should learn about the benefits of current uses of animals in our economy.

First mission:  research whether animal testing is effective.  It was hard to find objective information, because most of it comes from PETA-like bands of simpletons who couldn't tell you the difference between ethanol and petroleum.  I kept seeing the statistic that 92% of drugs that pass animal testing fail human trials, but that seemed impossible.  And I'm always wary of numbers that don't come from a reputable agency, like our government. (Yes, I'm going to assume that the government is reputable.)

After scouring various webpages about animal liberation and hippie conspiracies, I finally found a solid article, written by a highly esteemed professor at London's National Institute for Medical Research, that purported to debunk the importance of this statistic.  Thankfully, it referenced the original FDA Report on the "Critical Path" towards Medical Products, which had the number I was looking for:

"...a new medicinal compound entering Phase 1 testing, often representing the culmination of upwards of a decade of preclinical screening and evaluation, is estimated to have only an 8 percent chance of reaching the market."





Preclinical screening includes animal testing.  Only 8% of drugs that pass animal testing go on to reach the market.  In other words, when a drug passes animal testing, it is 92% likely to be useless or toxic to humans.  I wonder what I'd do if a doctor told me that the drug he was prescribing me was 92% likely to be useless or toxic.  I'd probably start crying and wonder why we don't have a better system of preclinical trials.

As I read the aforementioned explanation of this failure rate by Professor Robin Lovell-Badge...

"If you want to know how truly successful animal tests are, consider that in over 30 years there has not been a single death in a Phase 1 clinical trial in the UK. The last major incident was in 2006 in the Northwick Park trialswhere 6 people suffered extreme side effects in a Phase 1 clinical trial – though it should be noted that TGN1412 was a very novel type of molecule which was poorly understood. Considering that there are normally over 200 Phase I clinical trials each year in the UK (each involving multiple people), animal testing has been exceptionally effective at keeping dangerous drugs away from people."

...I started to question the empirical underpinnings of his argument.

Couldn't the lack of deaths be explained by the fact that doctors and scientists are smart?  Or by the fact that they start with doses far below a possible LD50?  Or maybe they're praying real hard for the drugs to work?

OK, seriously now, where's the evidence to back up this claim?  This scientist, the Head of his Institute's Division of Stem Cell Biology and Developmental Genetics, spends a whole article explaining the math behind the clinical phase of pharmaceutical testing, only to end with contradictory statements about what happens in preclinical testing.  His closing remark doesn't even do a good job of rhetorically supporting his defense of animal testing.

"Furthermore, when a drug is licensed for use, it is on the basis of the clinical trials in humans, not the preclinical animal tests which exist to ensure that a drug is safe enough to move into Phase 1 trials. So when animal rights activists claim that adverse drug reactions can be blamed on animal tests approving the drug, remember that it is the clinical trials in thousands of people which provide the evidence of its safety."

Come now.  Let's think about this rationally.  If a drug passes animal testing, then that means it's safe for animals.  Yet there were drugs that passed animal testing and were deemed toxic to humans.  I'm sure most of the 92% of failed drugs were merely useless, but it's worth noting that the FDA report lists safety concerns as the first reason for failure ("The main causes of failure in the clinic include safety problems and lack of effectiveness").  But wait - isn't it also true that zero people died in Phase 1 clinical trials in the UK in over 30 years?  In other words, standard protocol for clinical trials was 100% effective at preventing human death after administration of toxic chemicals that were not yet known to be toxic.

[For the sake of being both accurate and germane, "toxic" = "a low LD50" and "lethal" = "a very low LD50"]

But it's also important to keep in mind how many drug candidates were eliminated by preclinical testing, which includes but is not limited to animal testing.  You might think preclinical testing weeds out some vast number of dud drugs, but in fact it weeds out only 36% of drug candidates.  That is, 64% of drug candidates pass preclinical testing.

To be extra cautious, let's assume that animal testing was responsible for weeding out all of those drugs:  we'll assume animal testing eliminated 36% of drug candidates before human trials.  And let's assume that every one of these drugs was toxic, whether lethal or nonlethal.  We'll keep this in mind for later.

There are a few more preliminary numbers to put on the table.  According to that FDA report, the FDA approved 23 novel drugs in 2003.  That would be roughly 8% of the drugs that passed animal testing before Phase 1 of clinical trials.  The 92% that passed preclinical testing but failed clinical tests comes out to 265 candidates (23 * 92/8).  And in total, there would have been ~449 drugs that entered preclinical screening (23/.08/.64).
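
If you want to check my arithmetic, here's a quick back-of-envelope script (nothing in it but the figures cited above):

```python
# Back-of-envelope pipeline math, using only the figures cited above.
approved_2003 = 23            # novel drugs the FDA approved in 2003
clinical_pass_rate = 0.08     # chance a Phase 1 entrant reaches the market (FDA report)
preclinical_pass_rate = 0.64  # fraction of candidates surviving preclinical screening

entered_phase1 = approved_2003 / clinical_pass_rate            # 287.5
failed_clinical = entered_phase1 - approved_2003               # 264.5, i.e. ~265
entered_preclinical = entered_phase1 / preclinical_pass_rate   # ~449

print(entered_phase1, failed_clinical, round(entered_preclinical))
# -> 287.5 264.5 449
```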

With those numbers on hand, let's get back to addressing the question.

One might suggest that the worst toxins are screened out during animal testing, leaving only lesser toxins to show up in human trials.  I would challenge this idea by asking: if there are toxins that are poisonous to humans but not to animals, surely some percentage of those toxins would be lethal?  If there are chemicals that are innocuous to nonhumans but mildly toxic to humans, then mustn't there also be chemicals that are innocuous to animals and lethally poisonous to humans?  For simplicity's sake, let's say that a certain percentage of toxins are lethal.  And let's assume, for the sake of Occam's Razor, that this percentage holds across species.  Let's say that 5% of toxic drug candidates are lethal.  In that case, 5% of the toxic drugs that are tested on animals are lethal, and they don't get to humans.  So if 36% of drug candidates were deemed toxic during animal testing, that's 162 (449 * .36) toxic chemicals, of which we'd estimate 8 are lethal.  Phew, animal testing did its job!
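
In code, with every number flagged for what it is (the 5% lethality share is the made-up assumption from the paragraph above, not a measurement):

```python
# Generous assumptions: ALL preclinical failures were toxic, ALL were
# caught by animal testing, and 5% of toxic drugs are lethal (made up).
entered_preclinical = 449     # estimated above
preclinical_fail_rate = 0.36  # fraction weeded out before human trials
lethal_share = 0.05           # assumed, not measured

toxic_caught = entered_preclinical * preclinical_fail_rate   # ~162 toxic chemicals
lethal_caught = toxic_caught * lethal_share                  # ~8 lethal ones
print(round(toxic_caught), round(lethal_caught))  # -> 162 8
```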

...But what about those drug candidates that are toxic to humans but passed animal testing?

Since the FDA report didn't specify how many of the failed drugs were toxic, let's make up a number.  Let's say 20% were toxic.  Since safety problems were the first reason for failed clinical trials listed in the 2004 report, that seems like a reasonable number to me.  That's 53 drugs that made it through animal testing despite being toxic to humans.  Then, using the previous assumption that 5% of toxic drugs are lethal, we can estimate that in 2003, 2.65 drugs in clinical trials would have been lethal.  Let's say this is an average that holds for every year.  Then, over the course of 30 years, shouldn't there have been about 80 lethal drugs that went through clinical testing?
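
Again as a sketch, with the 20% toxic share and 5% lethal share both being my made-up assumptions:

```python
# Drugs that PASSED animal testing but failed in the clinic.
failed_clinical = 265   # estimated above
toxic_share = 0.20      # assumed fraction of clinical failures that were toxic
lethal_share = 0.05     # assumed fraction of toxic drugs that are lethal

toxic_passed = failed_clinical * toxic_share     # 53 toxic drugs reached humans
lethal_per_year = toxic_passed * lethal_share    # 2.65 lethal drugs per year
lethal_30_years = lethal_per_year * 30           # ~80 over 30 years
print(round(toxic_passed), round(lethal_per_year, 2), round(lethal_30_years))
# -> 53 2.65 80
```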

...That doesn't seem reasonable.  Animal testing has long been heralded as the primary line of defense for safety.  But if we estimate that something on the order of 80 drugs lethal to humans have entered clinical trials, and ZERO humans have died in those trials, then... what does this mean?

Maybe it means that clinical trials are so safe that animal testing is completely irrelevant?  Maybe it means that animal testing is so bad, doctors have needed to protect their patients from lethally toxic chemicals?

This is all a lot to take in, considering I've always been told that animal testing is vital to the safety of human patients.  I mean, I find it hard to believe that something we've been doing for so long and into which we've poured so much money could be a waste of time and resources.

I mean, hypothetically speaking, let's say that we lived in a society that places no value on morality.  Let's imagine a world where animals are systematically abused for fun, and nobody sees anything wrong with animal cruelty.  Let's make believe that we only care about the bottom line:  money.  Animal testing saves money, right?  So if it saves money, it's a good idea!

According to the FDA report, "...inability to predict these failures before human testing or early in clinical trials dramatically escalates costs. For example, for a pharmaceutical, a 10-percent improvement in predicting failures before clinical trials could save $100 million in development costs per drug."

But current preclinical testing methods, including animal testing, flag only 36% of drug candidates as failures, even though roughly 95% of candidates ultimately fail.  The other 59% of candidates (the 64% that pass preclinical testing, times the 92% clinical failure rate) sail through preclinical screening only to fail in human trials.  If current preclinical methods miss more failures than they catch, can you honestly say it's an economical model?  There are literally better screening methods.
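
The bookkeeping, made explicit:

```python
# Fractions of ALL candidates entering preclinical screening.
caught_preclinical = 0.36    # flagged before human trials
missed = (1 - 0.36) * 0.92   # pass preclinical, then fail in the clinic
total_failures = caught_preclinical + missed

print(round(missed, 2))           # -> 0.59
print(round(total_failures, 2))   # -> 0.95
print(round(caught_preclinical / total_failures, 2))
# -> 0.38: preclinical screening catches only ~38% of eventual failures
```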

Take, for instance, in silico testing, which, in a controlled experiment (here, page 10), led to a 66% reduction in the number of patients needed.  The results of the simulation were confirmed by real-life test results.  That screening rate alone would dramatically reduce the number of useless drugs that go through clinical trials.  Even if all other preclinical testing were eliminated, the savings from this single technology would cut the cost of clinical trials far more than the paltry 36% screening rate provided by all the other methods combined.

And hey, there's the Tissue Chip.  It's like a Gene Chip, in that it holds various human tissue samples on a chip that can be washed over with a drug candidate.  That means actual human cells from different tissue types are being tested, which is far closer to the real thing than nonhuman tissue is.  It's too new for me to find good data on it, but the concept is leaps and bounds more reasonable than nonhuman testing for human medicine.  Sooner rather than later, I guarantee we're going to see statistics showing it to be a wildly accurate way of predicting which drugs will fail clinical trials.

One might ask, why not do both animal testing and these other forms of testing?  Surely a combination of a somewhat accurate measure and a very accurate measure is better than either one alone?

In the context of that question, I propose a thought experiment.  Let's say you are making an investment, you have two stockbrokers to ask for advice, and you have three portfolios to choose from.  Stockbroker A suggests that you don't invest in portfolio 1, but instead invest in both 2 and 3.  Stockbroker B suggests you invest in only portfolio 3.  You decide to test them by waiting to see how the portfolios do without your investment.  In a few months, you find that portfolios 1 and 2 both hit rock bottom, but portfolio 3 skyrocketed.  Now you're presented with 4 portfolios, and you're given a slightly different situation - another chance to test the stockbrokers.  Stockbroker A warns against portfolios 1 and 2, and says nothing about 3 and 4.  Stockbroker B warns against portfolios 1 and 4, and says nothing about 2 and 3.  Months later, you learn that portfolios 2 and 3 both succeeded wildly, and the other two plummeted.  If you had listened to just Stockbroker A, you'd have won once and lost once.  If you had listened to just Stockbroker B, you would have won twice and lost none.  If you had listened to both of them, and invested in just portfolio 3, you would have won only one investment.

Stockbroker A is eliminating some options, but he's not really earning you much return.  When he suggested a winning portfolio earlier, he had also suggested a losing one.  Stockbroker B, on the other hand, has proven capable of accurately predicting the values of portfolios, and has on both occasions won you a net return.  Now you have a choice:  Do you take the advice of both stockbrokers?  Or do you stick with the more reliable one, and ignore Stockbroker A's feeble attempts to advise you?  In this case, it seems pretty clear that it's better to stick with the accurate predictor, so you avoid bad investments without missing any good ones.
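
For fun, here's the whole thought experiment scored as a tiny script.  The portfolio outcomes and the brokers' advice are exactly as described above; the strategy names and the "invest in everything not warned against" reading are just mine:

```python
# Which portfolios won in each round, and what each strategy invests in.
winners = [{3}, {2, 3}]  # round 1: only 3 rose; round 2: 2 and 3 rose
strategies = {
    "A alone": [{2, 3}, {3, 4}],  # A's picks, then everything A didn't warn against
    "B alone": [{3},    {2, 3}],  # B's pick, then everything B didn't warn against
    "A and B": [{3},    {3}],     # invest only where both brokers agree
}
for name, picks in strategies.items():
    wins = sum(len(p & w) for p, w in zip(picks, winners))
    losses = sum(len(p - w) for p, w in zip(picks, winners))
    print(f"{name}: {wins} wins, {losses} losses")
# -> A alone: 2 wins, 2 losses
# -> B alone: 3 wins, 0 losses
# -> A and B: 2 wins, 0 losses (safe, but misses a winner B alone would catch)
```

Notice that adding A's vetoes never prevents a loss that B would have taken; it only costs you winners.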

Here's my personal conclusion, with which you may disagree:  Current preclinical testing methods are shitty.  The fact that animal testing causes suffering and death to animals makes it unethical.  The fact that animal testing is made redundant by methods of testing that do not cause suffering or death makes it unjustifiable.  The meager failure-screening rate provided by animal testing leads researchers to lose billions in clinical trial research.  And the fact that large numbers of toxic chemicals completely bypass animal testing indicates that it's generally not a reliable, scientific, or even medically sound method of vetting important drugs.

The zealotry with which opponents of animal rights defend standard practice is so basic it hurts.  It's the kind of simplistic, surface-level superstition that undergrads pine for during finals week, and that researchers are left staving off for decades after the practice becomes the norm.  It hurts science, it hurts medicine, and it hurts the innocent.
