
Bias 32% Exposed--raw data versus journal articles

Drug News--disappointing

 

Positive bias from 11 to 69%: a study compares raw data to journal-published data.

 

The study compares the raw data supplied to the FDA with the selected data published in journal articles. As expected, because the studies were done as part of an overall strategy to market drugs, the results were shaped to serve that end.  Across the 12 drugs examined, the increase in apparent effect size ranged from 11 to 69%, and was 32% overall.  In other words, the journal articles based on these clinical trials of psychiatric medications substantially overstate the drugs' positive results.  There is, of course, no reason to believe that what motivates Pharma for psychiatric drugs is not also operating for all of its other published studies. 

 

 

New England Journal of Medicine, January 17, 2008, Volume 358, Number 3, pages 252-260

http://content.nejm.org/cgi/content/short/358/3/252

 

Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy

Erick H. Turner, M.D., Annette M. Matthews, M.D., Eftihia Linardatos, B.S., Robert A. Tell, L.C.S.W., and Robert Rosenthal, Ph.D.


ABSTRACT

Background Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials — and the outcomes within those trials — can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio.

Methods We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. We conducted a systematic literature search to identify matching publications. For trials that were reported in the literature, we compared the published outcomes with the FDA outcomes. We also compared the effect size derived from the published reports with the effect size derived from the entire FDA data set.

Results Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall.

Conclusions We cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, from decisions by journal editors and reviewers not to publish, or both. Selective reporting of clinical trial results may have adverse consequences for researchers, study participants, health care professionals, and patients.

Full text at http://content.nejm.org/cgi/content/full/358/3/252
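
As a quick check on the percentages quoted in the abstract, the 51% and 94% figures follow directly from the study counts it reports. A minimal sketch of that arithmetic in Python (illustrative only; every count below is taken from the abstract):

```python
# Counts taken from the abstract: 74 FDA-registered studies; 38 judged positive
# by the FDA (37 published, 1 not); 36 judged negative or questionable
# (22 unpublished, 11 published as though positive, 3 published as negative).
fda_positive, fda_negative = 38, 36

published_as_positive = 37 + 11   # genuinely positive + "spun" negative studies
published_as_negative = 3

total_registered = fda_positive + fda_negative                    # 74
total_published = published_as_positive + published_as_negative   # 51

print(f"Positive per FDA data:             {fda_positive / total_registered:.0%}")          # ~51%
print(f"Positive per published literature: {published_as_positive / total_published:.0%}")  # ~94%
```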

 

 

Pieces from the full text

Data from FDA Reviews

We identified the phase 2 and 3 clinical-trial programs for 12 antidepressant agents approved by the FDA between 1987 and 2004 (median, August 1996), involving 12,564 adult patients. For the eight older antidepressants, we obtained hard copies of statistical and medical reviews from colleagues who had procured them through the Freedom of Information Act.  Reviews for the four newer antidepressants were available on the FDA Web site. This study was approved by the Research and Development Committee of the Portland Veterans Affairs Medical Center; because of its nature, informed consent from individual patients was not required.  From the FDA reviews of submitted clinical trials, we extracted efficacy data on all randomized, double-blind, placebo-controlled studies of drugs for the short-term treatment of depression. We included data pertaining only to dosages later approved as safe and effective; data pertaining to unapproved dosages were excluded.

Previous studies have examined the risk–benefit ratio for drugs after combining data from regulatory authorities with data published in journals.3,30,31,32 We built on this approach by comparing study-level data from the FDA with matched data from journal articles. This comparative approach allowed us to quantify the effect of selective publication on apparent drug efficacy.
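
The "increase in effect size" reported in the paper is the percentage by which the journal-based effect size exceeds the FDA-based one for the same drug. A minimal sketch of that comparison, assuming a standardized effect size (such as Hedges' g) has already been computed from each data set; the function name and the example values below are hypothetical:

```python
def percent_increase(g_journal: float, g_fda: float) -> float:
    """Percentage by which the journal-based effect size exceeds the FDA-based one."""
    return (g_journal - g_fda) / g_fda * 100

# Hypothetical drug: the FDA data set yields g = 0.25 while the journal
# articles yield g = 0.33, i.e. a 32% apparent inflation -- the overall
# figure the paper reports; individual drugs ranged from 11% to 69%.
print(f"{percent_increase(0.33, 0.25):.0f}% increase in apparent effect size")
```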

 

From CL Psy

A commentary on the results of the study was published on January 17, 2008, by Clinical Psychology and Psychiatry at http://clinpsyc.blogspot.com/2008/01/antidepressants-hiding-and-spinning.html.  Excerpts therefrom:

 

A whopper of a study has just appeared in the New England Journal of Medicine. It tracked each antidepressant study submitted to the FDA, comparing the results as seen by the FDA with the data published in the medical literature. The FDA uses raw data from the submitting drug companies for each study. This makes great sense, as the FDA statisticians can then compare their analyses to the analyses from the drug companies, in order to make sure that the drug companies were analyzing their data accurately.

 

After studies are submitted to the FDA, drug companies then have the option of submitting data from their trials for publication in medical journals. Unlike the FDA, journals are not checking raw data. Thus, it is possible that drug companies could selectively report their data. An example of selective data reporting would be to assess depression using four measures. Suppose that two of the four measures yield statistically significant results in favor of the drug. In such a case, it is possible that the two measures that did not show an advantage for the drug would simply not be reported when the paper was submitted for publication. This is called "burying data," "data suppression," "selective reporting," or other less euphemistic terms. In this example, the reader of the final report in the journal would assume that the drug was highly effective because it was superior to placebo on two of two depression measures, left completely unaware that on two other measures the drug had no advantage over a sugar pill.

Sadly, we know from prior research that data are often suppressed in such a manner. In less severe cases, one might just switch the emphasis placed on various outcome measures: if a measure shows a positive result, allocate a lot of text to discussing that result and barely mention the negative results.  From an amoral, purely financial view, there is no reason to publish negative trial results.  The NEJM article stated "For each drug, the effect-size value based on published literature was higher than the effect-size value based on FDA data, with increases ranging from 11 to 69%."
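
To see why reporting only the measures that reached significance inflates apparent efficacy, here is a toy simulation of the two-of-four-measures scenario described above. This is not the NEJM authors' method; the true effect size, sample size, and p < 0.05 cutoff are my own illustrative assumptions:

```python
# Toy simulation of selective outcome reporting (illustrative assumptions only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_outcomes, n_per_arm, true_d = 1000, 4, 100, 0.2  # assumed values

all_effects, reported_effects = [], []
for _ in range(n_trials):
    for _ in range(n_outcomes):
        drug = rng.normal(true_d, 1.0, n_per_arm)      # drug-arm scores
        placebo = rng.normal(0.0, 1.0, n_per_arm)      # placebo-arm scores
        effect = drug.mean() - placebo.mean()          # observed effect (SD = 1)
        _, p_value = stats.ttest_ind(drug, placebo)
        all_effects.append(effect)
        if p_value < 0.05:                             # "publish" only significant outcomes
            reported_effects.append(effect)

print(f"True effect size:               {true_d:.2f}")
print(f"Mean effect over all outcomes:  {np.mean(all_effects):.2f}")       # close to 0.2
print(f"Mean effect over reported only: {np.mean(reported_effects):.2f}")  # inflated
```

The selectively reported outcomes look markedly stronger than the drug's true effect, even though no individual number was falsified.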

 

 

The drugs whose apparent effects were inflated as a result of selective publication and/or data manipulation:

·  Bupropion (Wellbutrin)
·  Citalopram (Celexa)
·  Duloxetine (Cymbalta)
·  Escitalopram (Lexapro)
·  Fluoxetine (Prozac)
·  Mirtazapine (Remeron)
·  Nefazodone (Serzone)
·  Paroxetine (Paxil)
·  Sertraline (Zoloft)
·  Venlafaxine (Effexor)

That is every single drug approved by the FDA for depression between 1987 and 2004. Just a few of many tales of data suppression and/or spinning can be found below:

·  Data reported on only 1 of 15 participants in an Abilify study
·  Data hidden for about 10 years on a negative Zoloft for PTSD study
·  Suicide attempts vanishing from a Prozac study
·  Long delay in reporting negative results from an Effexor for youth depression study
·  Data from an Abilify study spun in dizzying fashion. Proverbial lipstick on a pig.
·  A trove of questionable practices involving a key opinion leader
·  Corcept heavily spins its negative antidepressant trial results

 

 

Another article, on a related PLoS Medicine analysis of the FDA's antidepressant trial data, at

http://www.fiercepharma.com/story/study-antidepressants-useless-for-most/2008-02-26?utm_medium=nl&utm_source=internal&cmp-id=EMC-NL-FP&dest=FP

 

Study: Antidepressants useless for most

February 26, 2008 — 7:59am ET

Here's a study guaranteed to put almost every drugmaker on the defensive. Researchers analyzed every antidepressant study they could get their hands on--including a bunch of unpublished data obtained via the U.S. Freedom of Information Act--and concluded that, for most patients, SSRI antidepressants are no better than sugar pills. Only the most severely depressed get much real benefit from the drugs, the study found.

The new paper, published today in the journal PLoS Medicine, breaks new ground, according to The Guardian, because the researchers got access for the first time to an apparently full set of trial data for four antidepressants: Prozac (fluoxetine), Paxil (paroxetine), Effexor (venlafaxine), and Serzone (nefazodone). And the data said..."the overall effect of new-generation antidepressant medication is below recommended criteria for clinical significance." Ouch.

The study could have a ripple effect, affecting prescribing guidelines and even prompting questions about how drugs are approved. "This study raises serious issues that need to be addressed surrounding drug licensing and how drug trial data is reported," one of the researchers said. In other words, all trial data needs to be made public.

 

 

 

 

http://www.sfgate.com/cgi-bin/article.cgi?file=/c/a/2008/12/15/MNKF14GTLO.DTL

 

San Francisco Chronicle

 

 

UCSF says reports on drug trials skew positive: What are the pills in your medicine cabinet, and how do you know they're best for you?

David Perlman, Chronicle Science Editor, December 15, 2008

 

When drug companies seek approval to market new medicines, they must show the U.S. Food and Drug Administration the results of all the tests they've run on volunteer patients - at first on only a few, then on dozens, and finally on hundreds or sometimes thousands.  After winning approval, the companies typically sponsor reports of those tests in medical journal publications, which many doctors often rely on to determine whether to prescribe new drugs for their patients.

Now a skeptical team of medical investigators at UCSF has accused the major drug companies of bias by distorting the results of their trials in those publications, making it hard for doctors to judge for themselves the pros and cons of prescribing the new drugs.  As a result, the researchers say, patients may sometimes be taking medicines they don't need - or with unwanted side effects - that their doctors have prescribed on the basis of inadequate information.

Negative findings omitted

The UCSF team, led by Lisa A. Bero of the medical center's Institute for Health Policy Studies, probed the details of 164 drug trials involving as many as 1,500 patients over a two-year period and then examined reports on those trials that were published in medical journals, as well as those that remained unpublished.  Their conclusions are published in the current issue of PLoS Medicine, an online medical journal.

"We found really important information from the official trial reports that were either not published at all or that stressed mostly the positive results of trials in the published versions," said Kristin Rising, a physician at the institute who did the major investigation and has now moved to the Boston University Medical Center.  "Doctors who prescribe new drugs - or even older ones - for their patients should have complete and unbiased information on those medicines before prescribing them," she said. 

According to Bero, doctors frequently complain that they're left to rely on incomplete data from the drug companies.  "I do think our findings are important for patients because their physicians may not have full and accurate information about the drugs they prescribe," she said.

In response to questions from The Chronicle, a pharmaceutical industry leader disagreed with the researchers' conclusions.  Doctors who seek to make "appropriate prescribing decisions" on new drugs can find all the critical information they need in the detailed drug labels that the FDA has approved based on all the trials every drug has undergone, said Ken Johnson, senior vice president of the Pharmaceutical Research and Manufacturers of America.  {That is misleading: first, the information is buried in details; second, the print is quite small; and third, the significance of the warnings concerning possible side effects is not quantified, so the caregiver cannot make an informed judgment concerning risks.  Finally, the positive effects are neither quantified nor compared to alternative treatments and medications.  Knowing these limitations and difficulties, physicians rarely read those statements, and their influence upon prescribing practices is minimal at best.--jk}

A policy statement from the industry also says its member companies "have a long-standing commitment to ensure that physicians and patients have access to all relevant information about the medicines we discover, consistent with regulatory requirements, so that our products can be used safely and effectively."  {Too disingenuous to be worth comment.--jk}

Trials posted online

According to Johnson, drug companies are required by law to post "a broad range of ongoing clinical trials and comprehensive information about those trials" on a registry maintained by the National Institutes of Health. The registry is at www.clinicaltrials.gov.

But in a commentary on the study published in the same issue of PLoS Medicine, An-Wen Chan, a Mayo Clinic physician who studies drug approval policies, said the UCSF investigators' findings show "bias, spin and misreporting" by the industry.

If this sounds like an in-house controversy, far removed from the bedside or the medicine cabinets of sick folks taking their prescriptions, it isn't. According to Rising, many doctors get their information about new drugs they prescribe either during visits from drug-company representatives known in the trade as "detail men," or from articles published in major medical journals.

Drug companies may call on specialized companies to prepare articles for medical journals on new medicines that have won FDA approval, with the articles emphasizing the trials' positive reports, and bearing the names of physicians who have participated in the trials as the authors, researchers say. The journal articles may also be written by drug company physicians involved in developing the new medicines.

"I'm just amazed at how many doctors will prescribe a new drug right away and depend only on what they read in the company's own summaries of trial results or on articles in medical journals that may be incomplete," said Thomas Bodenheimer, a physician at San Francisco General Hospital who was previously in private practice for 23 years.  "Practicing docs are not getting all the information about new drugs we need, and the information we are getting favors the new drug, with the studies almost always funded and controlled by the company making the drug," he said.

The issue has long been controversial and has been raised before in studies of individual drugs and how doctors prescribe them, but this is a detailed look at the issue, involving a large number of new drugs tested and a large number of patients.

Publishing discrepancies

According to the study by Bero, Rising and Peter Bacchetti, UCSF director of biostatistics, not all trial results that were submitted to the FDA by drug companies were published in medical journals. When they were published, there were often many discrepancies between the results the FDA received and the published data, the study found.

One-fourth of the results from trials testing the effectiveness of new drugs were not published at all within five years after the FDA approved them, the researchers found.

Furthermore, drug trials showing "favorable" results were five times more likely to be published than trial results that showed "unfavorable results," the UCSF team said. In other words, the publication of drug trial results can often give doctors a more favorable view of a new drug's safety and effectiveness than information the drug manufacturer has submitted to the FDA, according to the researchers.
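
To make the "five times more likely" figure concrete, here is a hypothetical example; the trial counts and publication rates are my own assumptions, not numbers from the UCSF study:

```python
# Hypothetical illustration of a 5-to-1 difference in publication likelihood.
favorable, unfavorable = 50, 50                        # assume equal numbers of trials exist
pub_rate_favorable, pub_rate_unfavorable = 0.75, 0.15  # favorable trials published 5x as often

published_favorable = pub_rate_favorable * favorable        # 37.5 trials
published_unfavorable = pub_rate_unfavorable * unfavorable  # 7.5 trials

true_share = favorable / (favorable + unfavorable)
published_share = published_favorable / (published_favorable + published_unfavorable)
print(f"Favorable share of all trials:        {true_share:.0%}")       # 50%
print(f"Favorable share of published trials:  {published_share:.0%}")  # ~83%
```

A doctor reading only the published literature would see roughly five favorable trials for every unfavorable one, even though the underlying evidence is evenly split.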

Even information on new drugs that doctors can find in the journals they read is often "incomplete and potentially biased," the UCSF team concluded.


This article appeared on page A - 1 of the San Francisco Chronicle

 

What I find most telling is the failure to look for or mention issues that independent scientists would naturally raise.  For example, with statins: (1) research concerning their effect on thrombi is conspicuously missing, (2) end-point information on lives saved is scant, and (3) serious side effects that have been found, such as cognitive impairment and cancer, are not mentioned in subsequent studies.  The cancer risk could easily be examined by going through the databases of care providers such as Kaiser and the Veterans Administration.  I have spent over 200 hours reviewing articles on statins, and have come to only tentative conclusions because of the poor quality of the journal articles, the failure to research obvious issues, and their positive bias.  Drug companies view journal articles as a marketing tool, and so too do the doctors in their pay, including those doing the research. 

 

 

 

Those who have a financial interest in the outcome manipulate the results