Pieces from the full text
Data from FDA Reviews
We identified the
phase 2 and 3 clinical-trial programs for 12 antidepressant agents approved by the FDA between 1987 and 2004
(median approval date, August 1996), involving 12,564 adult patients. For the eight older antidepressants, we obtained hard copies
of statistical and medical reviews from colleagues who had procured them through the Freedom of Information
Act. Reviews for the four newer antidepressants were available on
the FDA Web site. This study was approved by the Research and Development Committee of the Portland Veterans Affairs Medical Center; because of its
nature, informed consent from individual patients was not required. From
the FDA reviews of submitted clinical trials, we extracted efficacy data on all randomized, double-blind, placebo-controlled
studies of drugs for the short-term treatment of depression. We included data pertaining only to dosages
later approved as safe and effective; data pertaining to unapproved dosages were excluded.
Previous studies
have examined the risk–benefit ratio for drugs after combining data from regulatory authorities with
data published in journals.3,30,31,32 We built on this approach by comparing study-level data from the FDA with matched data from journal
articles. This comparative approach allowed us to quantify the effect of selective publication on apparent
drug efficacy.
Qualitative Description of Selective Reporting within Trials
The methods reported in 11 journal
articles appear to depart from the pre-specified methods reflected in the FDA
reviews (Table B of the Supplementary
Appendix). Although for each of these studies the finding with
respect to the protocol-specified primary
outcome was non-significant, each
publication highlighted a positive result as if it were the primary outcome. The non-significant results for the pre-specified primary
outcomes were either subordinated to non-primary
positive results (in two reports) or omitted (in nine). (Study-level
methodological differences are detailed in the footnotes to Table B of the Supplementary
Appendix.)
For each of the 12 drugs, the effect size
derived from the journal articles exceeded the effect size derived from the FDA
reviews (sign test, P<0.001) (Figure 3B). The magnitude of the increases in effect size
between the FDA reviews and the published reports ranged from 11 to 69%, with a
median increase of 32%. A 32% increase was also observed in the weighted mean
effect size for all drugs combined, from 0.31 (95% CI, 0.27 to 0.35) to 0.41
(95% CI, 0.36 to 0.45).
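As a rough check on these figures, the reported 32% increase in the weighted mean effect size and the sign-test P value (all 12 drugs showing a larger published effect size) can be reproduced in a few lines of Python; the 0.31 and 0.41 values are taken from the text above, and the sign-test calculation assumes the standard two-sided exact test:

```python
from math import comb

# Weighted mean effect sizes reported in the text (FDA reviews vs. journals)
fda, published = 0.31, 0.41
increase = (published - fda) / fda * 100
print(f"weighted-mean increase: {increase:.0f}%")  # 32%

# Two-sided exact sign test: all 12 of 12 drugs showed an increase.
# Under the null (increase or decrease equally likely), the P value is the
# probability of the two most extreme outcomes: all 12 up or all 12 down.
n = 12
p = (comb(n, 0) + comb(n, n)) / 2 ** n
print(f"sign test P = {p:.5f}")  # < 0.001
```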
DISCUSSION
We found a bias toward the publication of positive
results. Not only were positive results more likely to be published, but
studies that were not positive, in our opinion, were often published in a way
that conveyed a positive outcome. We analyzed these data in terms of the proportion
of positive studies and in terms of the effect size associated with drug
treatment.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
From CL Psy
A commentary on the
results of the study was published on January 17, 2008, by Clinical Psychology
and Psychiatry at http://clinpsyc.blogspot.com/2008/01/antidepressants-hiding-and-spinning.html. Excerpts therefrom:
A whopper of a study has just appeared in the New England Journal of Medicine. It tracked each antidepressant study submitted to the FDA, comparing the results as seen by the FDA with the
data published in the medical literature. The FDA uses raw data from the submitting drug companies for each study. This makes
great sense, as the FDA statisticians can then compare their analyses to the analyses from drug companies, in order to make
sure that the drug companies were analyzing their data accurately.
After studies are submitted to the FDA, drug companies then have the option of submitting data from their trials
for publication in medical journals. Unlike the FDA, journals are not checking raw data. Thus, it is possible that
drug companies could selectively report their data. An example of selective data reporting would be to assess depression using
four measures. Suppose that two of the four measures yield statistically significant results in favor of the drug. In such
a case, it is possible that the two measures that did not show an advantage for the drug would simply not be reported when
the paper was submitted for publication. This is called "burying data," "data suppression," "selective reporting," or other
less euphemistic terms. In this example, the reader of the final report in the journal would assume that the drug was highly
effective because it was superior to placebo on two of two depression measures, remaining completely unaware that on two other
measures the drug had no advantage over a sugar pill. Sadly, we know from prior research that data are often suppressed in such a manner. In less severe cases, one might simply shift the emphasis placed on various outcome measures: if a measure
shows a positive result, allocate a lot of text to discussing it and barely mention the negative results. From an amoral, purely financial
view, there is no reason to publish negative trial results. The NEJM article stated, “For each drug, the effect-size value based on published
literature was higher than the effect-size value based on FDA data, with increases ranging from 11 to 69%.”
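To make the mechanism concrete, here is a minimal simulation of the outcome-switching scenario described above. All the numbers in it are invented for illustration (a modest true effect, 50 patients per arm, four depression measures per trial): each trial measures the drug four ways, but only measures that happen to reach significance get "published," which inflates the apparent effect size.

```python
import random
import statistics

random.seed(0)

TRUE_D = 0.30              # assumed true effect size (invented for illustration)
N_PER_ARM = 50             # patients per arm (invented)
SE = (2 / N_PER_ARM) ** 0.5  # approximate standard error of Cohen's d
N_TRIALS = 2000
MEASURES = 4               # depression scales per trial

all_effects, published_effects = [], []
for _ in range(N_TRIALS):
    for _ in range(MEASURES):
        d_hat = random.gauss(TRUE_D, SE)  # observed effect on one measure
        all_effects.append(d_hat)
        if d_hat / SE > 1.96:             # "significant" -> gets reported
            published_effects.append(d_hat)

print(f"mean effect, all measures:     {statistics.mean(all_effects):.2f}")
print(f"mean effect, significant only: {statistics.mean(published_effects):.2f}")
```

Nothing in the simulation is dishonest at the level of any single measure; the inflation comes entirely from which measures survive into print.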
The drugs whose apparent effects were inflated by selective publication and/or
data manipulation:
· Bupropion (Wellbutrin)
· Citalopram (Celexa)
· Duloxetine (Cymbalta)
· Escitalopram (Lexapro)
· Fluoxetine (Prozac)
· Mirtazapine (Remeron)
· Nefazodone (Serzone)
· Paroxetine (Paxil)
· Sertraline (Zoloft)
· Venlafaxine (Effexor)
That is every single drug approved by the FDA for depression between 1987 and 2004. Just a few of many
tales of data suppression and/or spinning can be found below:
· Data reported on only 1 of 15 participants in an Abilify study
· Data hidden for about 10 years on a negative Zoloft for PTSD study
· Suicide attempts vanishing from a Prozac study
· Long delay in reporting negative results from an Effexor for youth depression study
· Data from Abilify study spun in dizzying fashion. Proverbial lipstick on a pig.
· A trove of questionable practices involving a key opinion leader
· Corcept heavily spins its negative antidepressant trial results
Another article based on the NEJM study was published at
http://www.fiercepharma.com/story/study-antidepressants-useless-for-most/2008-02-26?utm_medium=nl&utm_source=internal&cmp-id=EMC-NL-FP&dest=FP
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Study: Antidepressants useless for most
February 26, 2008 — 7:59am ET
Here's a study guaranteed to put almost every drugmaker on the defensive.
Researchers analyzed every antidepressant study they could get their hands on--including a bunch of unpublished data obtained
via the U.S. Freedom of Information Act--and concluded that, for most patients, SSRI antidepressants are no better than sugar pills. Only the most severely depressed get much
real benefit from the drugs, the study found.
The new paper,
published today in the journal PLoS Medicine, breaks new ground, according
to The Guardian, because the researchers got access for the first time
to an apparently full set of trial data for four antidepressants: Prozac (fluoxetine), Paxil (paroxetine), Effexor (venlafaxine),
and Serzone (nefazodone). And the data said..."the overall effect of new-generation antidepressant medication is below recommended
criteria for clinical significance." Ouch.
The study could have a ripple effect, affecting prescribing guidelines
and even prompting questions about how drugs are approved. "This study raises serious issues that need to be addressed surrounding
drug licensing and how drug trial data is reported," one of the researchers said. In other words, all trial data needs to
be made public.