
Journal Favorable Results Bias

Since the pharmaceutical industry covers, through advertising dollars, the major cost of publication, it exerts an influence over content: it supports content that serves its financial interest and opposes content that does not. --jk

 

http://healthyskepticism.org/

PLoS Medicine (a peer-reviewed, open-access journal at http://medicine.plosjournals.org)


PLoS Medicine is an open-access journal published by the nonprofit organization Public Library of Science.

Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies

Richard Smith

Richard Smith is Chief Executive of UnitedHealth Europe, London, United Kingdom. E-mail: richardswsmith@yahoo.co.uk

Competing Interests: RS was an editor for the BMJ for 25 years. For the last 13 of those years, he was the editor and chief executive of the BMJ Publishing Group, responsible for the profits of not only the BMJ but of the whole group, which published some 25 other journals. He stepped down in July 2004. He is now a member of the board of the Public Library of Science, a position for which he is not paid.

Published: May 17, 2005

DOI: 10.1371/journal.pmed.0020138

Copyright: © 2005 Richard Smith. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Citation: Smith R (2005) Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies. PLoS Med 2(5): e138


“Journals have devolved into information laundering operations for the pharmaceutical industry”, wrote Richard Horton, editor of the Lancet, in March 2004 [1]. In the same year, Marcia Angell, former editor of the New England Journal of Medicine, lambasted the industry for becoming “primarily a marketing machine” and co-opting “every institution that might stand in its way” [2]. Medical journals were conspicuously absent from her list of co-opted institutions, but she and Horton are not the only editors who have become increasingly queasy about the power and influence of the industry. Jerry Kassirer, another former editor of the New England Journal of Medicine, argues that the industry has deflected the moral compasses of many physicians [3], and the editors of PLoS Medicine have declared that they will not become “part of the cycle of dependency…between journals and the pharmaceutical industry” [4]. Something is clearly up.

The Problem: Less to Do with Advertising, More to Do with Sponsored Trials

The most conspicuous example of medical journals' dependence on the pharmaceutical industry is the substantial income from advertising, but this is, I suggest, the least corrupting form of dependence. The advertisements may often be misleading [5,6] and the profits worth millions, but the advertisements are there for all to see and criticise. Doctors may not be as uninfluenced by the advertisements as they would like to believe, but in every sphere, the public is used to discounting the claims of advertisers.

The much bigger problem lies with the original studies, particularly the clinical trials, published by journals. Far from discounting these, readers see randomised controlled trials as one of the highest forms of evidence. A large trial published in a major journal has the journal's stamp of approval (unlike the advertising), will be distributed around the world, and may well receive global media coverage, particularly if promoted simultaneously by press releases from both the journal and the expensive public-relations firm hired by the pharmaceutical company that sponsored the trial. For a drug company, a favourable trial is worth thousands of pages of advertising, which is why a company will sometimes spend upwards of a million dollars on reprints of the trial for worldwide distribution. The doctors receiving the reprints may not read them, but they will be impressed by the name of the journal from which they come. The quality of the journal will bless the quality of the drug.

Fortunately from the point of view of the companies funding these trials—but unfortunately for the credibility of the journals who publish them—these trials rarely produce results that are unfavourable to the companies' products [7,8]. Paula Rochon and others examined in 1994 all the trials funded by manufacturers of nonsteroidal anti-inflammatory drugs for arthritis that they could find [7]. They found 56 trials, and not one of the published trials presented results that were unfavourable to the company that sponsored the trial. Every trial showed the company's drug to be as good as or better than the comparison treatment.

By 2003 it was possible to do a systematic review of 30 studies comparing the outcomes of studies funded by the pharmaceutical industry with those of studies funded from other sources [8]. Some 16 of the studies looked at clinical trials or meta-analyses, and 13 had outcomes favourable to the sponsoring companies. Overall, studies funded by a company were four times more likely to have results favourable to the company than studies funded from other sources. In the case of the five studies that looked at economic evaluations, the results were favourable to the sponsoring company in every case.
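
To make concrete what "four times more likely" means in a review of this kind, here is a minimal Python sketch of an odds-ratio calculation. The counts below are hypothetical, chosen only to illustrate the arithmetic; they are not the data from the Lexchin review [8].

    # Hypothetical 2x2 table: favourable vs unfavourable results by funding source.
    # These counts are illustrative only, not taken from reference [8].

    def odds_ratio(fav_a, unfav_a, fav_b, unfav_b):
        """Odds of a favourable result in group A relative to group B."""
        return (fav_a / unfav_a) / (fav_b / unfav_b)

    # Suppose 40 of 60 industry-funded studies and 20 of 60 independently
    # funded studies reported results favourable to the sponsor's product.
    print(odds_ratio(40, 20, 20, 40))   # (40/20) / (20/40) = 4.0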

The evidence is strong that companies are getting the results they want, and this is especially worrisome because between two-thirds and three-quarters of the trials published in the major journals—Annals of Internal Medicine, JAMA, Lancet, and New England Journal of Medicine—are funded by the industry [9]. For the BMJ, it's only one-third—partly, perhaps, because the journal has less influence than the others in North America, which is responsible for half of all the revenue of drug companies, and partly because the journal publishes more cluster-randomised trials (which are usually not drug trials) [9].

Why Do Pharmaceutical Companies Get the Results They Want?

Why are pharmaceutical companies getting the results they want? Why are the peer-review systems of journals not noticing what seem to be biased results? The systematic review of 2003 looked at the technical quality of the studies funded by the industry and found that it was as good—and often better—than that of studies funded by others [8]. This is not surprising as the companies have huge resources and are very familiar with conducting trials to the highest standards.

The companies seem to get the results they want not by fiddling the results, which would be far too crude and possibly detectable by peer review, but rather by asking the “right” questions—and there are many ways to do this [10]. Some of the methods for achieving favourable results are listed in the Sidebar, but there are many ways to hugely increase the chance of producing favourable results, and there are many hired guns who will think up new ways and stay one jump ahead of peer reviewers.

Then, various publishing strategies are available to ensure maximum exposure of positive results. Companies have resorted to trying to suppress negative studies [11,12], but this is a crude strategy—and one that should rarely be necessary if the company is asking the “right” questions. A much better strategy is to publish positive results more than once, often in supplements to journals, which are highly profitable to the publishers and shown to be of dubious quality [13,14]. Companies will usually conduct multicentre trials, and there is huge scope for publishing different results from different centres at different times in different journals. It's also possible to combine the results from different centres in multiple combinations.

These strategies have been exposed in the cases of risperidone [15] and ondansetron [16], but it's a huge amount of work to discover how many trials are truly independent and how many are simply the same results being published more than once. And usually it's impossible to tell from the published studies: it's necessary to go back to the authors and get data on individual patients.

Peer Review Doesn't Solve the Problem

Journal editors are becoming increasingly aware of how they are being manipulated and are fighting back [17,18], but I must confess that it took me almost a quarter of a century editing for the BMJ to wake up to what was happening. Editors work by considering the studies submitted to them. They ask the authors to send them any related studies, but editors have no other mechanism to know what other unpublished studies exist. It's hard even to know about related studies that are published, and it may be impossible to tell that studies are describing results from some of the same patients. Editors may thus be peer reviewing one piece of a gigantic and clever marketing jigsaw—and the piece they have is likely to be of high technical quality. It will probably pass peer review, a process that research has anyway shown to be an ineffective lottery prone to bias and abuse [19].

Furthermore, the editors are likely to favour randomised trials. Many journals publish few such trials and would like to publish more: they are, as I've said, a superior form of evidence. The trials are also likely to be clinically interesting. Other reasons for publishing are less worthy. Publishers know that pharmaceutical companies will often purchase thousands of dollars' worth of reprints, and the profit margin on reprints is likely to be 70%. Editors, too, know that publishing such studies is highly profitable, and editors are increasingly responsible for the budgets of their journals and for producing a profit for the owners. Many owners—including academic societies—depend on profits from their journals. An editor may thus face a frighteningly stark conflict of interest: publish a trial that will bring US$100 000 of profit or meet the end-of-year budget by firing an editor.
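
For scale, the two figures in this paragraph can be combined into a rough back-of-the-envelope estimate; this is an inference from the stated margin and profit, not a number given in the text.

    # The paragraph's figures made explicit: at a 70% margin, a reprint order
    # yielding US$100,000 in profit implies roughly US$143,000 in reprint sales.
    profit, margin = 100_000, 0.70
    print(f"Implied reprint revenue: ${profit / margin:,.0f}")   # ~$142,857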

Journals Should Critique Trials, Not Publish Them

How might we prevent journals from being an extension of the marketing arm of pharmaceutical companies in publishing trials that favour their products? Editors can review protocols, insist on trials being registered, demand that the role of sponsors be made transparent, and decline to publish trials unless researchers control the decision to publish [17,18]. I doubt, however, that these steps will make much difference. Something more fundamental is needed.

Firstly, we need more public funding of trials, particularly of large head-to-head trials of all the treatments available for treating a condition. Secondly, journals should perhaps stop publishing trials. Instead, the protocols and results should be made available on regulated Web sites. Only such a radical step, I think, will stop journals from being beholden to companies. Instead of publishing trials, journals could concentrate on critically describing them.

Acknowledgments

This article is based on a talk that Richard Smith gave at the Medical Society of London in October 2004 when receiving the HealthWatch Award for 2004. The speech is reported in the January 2005 HealthWatch newsletter [20]. The article overlaps to a small extent with an article published in the BMJ [21].

Examples of Methods for Pharmaceutical Companies to Get the Results They Want from Clinical Trials

·  Conduct a trial of your drug against a treatment known to be inferior.

·  Trial your drug against too low a dose of a competitor drug.

·  Conduct a trial of your drug against too high a dose of a competitor drug (making your drug seem less toxic).

·  Conduct trials that are too small to show differences from competitor drugs.

·  Use multiple endpoints in the trial and select for publication those that give favourable results.

·  Do multicentre trials and select for publication results from centres that are favourable.

·  Conduct subgroup analyses and select for publication those that are favourable.

·  Present results that are most likely to impress, for example a reduction in relative rather than absolute risk (see the short worked example after this list).
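
The last point is easy to see with numbers. Below is a minimal worked example; the event rates are hypothetical and chosen only to illustrate the arithmetic.

    # Hypothetical trial: 2% of control patients and 1% of treated patients
    # have the adverse event. The rates are illustrative only.
    control_risk = 0.02
    treated_risk = 0.01

    arr = control_risk - treated_risk   # absolute risk reduction
    rrr = arr / control_risk            # relative risk reduction
    nnt = 1 / arr                       # number needed to treat

    print(f"Absolute risk reduction: {arr:.1%}")   # 1.0%, the modest number
    print(f"Relative risk reduction: {rrr:.0%}")   # 50%, the headline number
    print(f"Number needed to treat:  {nnt:.0f}")   # 100 patients per event avoided

A "50% reduction in risk" and "one fewer event per 100 patients treated" describe exactly the same trial; only the presentation differs.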

References

1. Horton R (2004) The dawn of McScience. New York Rev Books 51(4): 7–9.
2. Angell M (2005) The truth about drug companies: How they deceive us and what to do about it. New York: Random House. 336 p.
3. Kassirer JP (2004) On the take: How medicine's complicity with big business can endanger your health. New York: Oxford University Press. 251 p.
4. Barbour V, Butcher J, Cohen B, Yamey G (2004) Prescription for a healthy journal. PLoS Med 1: e22. DOI: 10.1371/journal.pmed.0010022.
5. Wilkes MS, Doblin BH, Shapiro MF (1992) Pharmaceutical advertisements in leading medical journals: Experts' assessments. Ann Intern Med 116: 912–919.
6. Villanueva P, Peiro S, Librero J, Pereiro I (2003) Accuracy of pharmaceutical advertisements in medical journals. Lancet 361: 27–32.
7. Rochon PA, Gurwitz JH, Simms RW, Fortin PR, Felson DT, et al. (1994) A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med 154: 157–163.
8. Lexchin J, Bero LA, Djulbegovic B, Clark O (2003) Pharmaceutical industry sponsorship and research outcome and quality. BMJ 326: 1167–1170.
9. Egger M, Bartlett C, Juni P (2001) Are randomised controlled trials in the BMJ different? BMJ 323: 1253.
10. Sackett DL, Oxman AD (2003) HARLOT plc: An amalgamation of the world's two oldest professions. BMJ 327: 1442–1445.
11. Thompson J, Baird P, Downie J (2001) The Olivieri report. The complete text of the independent inquiry commissioned by the Canadian Association of University Teachers. Toronto: Lorimer. 584 p.
12. Rennie D (1997) Thyroid storm. JAMA 277: 1238–1243.
13. Rochon PA, Gurwitz JH, Cheung M, Hayes JA, Chalmers TC (1994) Evaluating the quality of articles published in journal supplements compared with the quality of those published in the parent journal. JAMA 272: 108–113.
14. Cho MK, Bero LA (1996) The quality of drug studies published in symposium proceedings. Ann Intern Med 124: 485–489.
15. Huston P, Moher D (1996) Redundancy, disaggregation, and the integrity of medical research. Lancet 347: 1024–1026.
16. Tramèr MR, Reynolds DJM, Moore RA, McQuay HJ (1997) Impact of covert duplicate publication on meta-analysis: A case study. BMJ 315: 635–640.
17. Davidoff F, DeAngelis CD, Drazen JM, Hoey J, Hojgaard L, et al. (2001) Sponsorship, authorship, and accountability. Lancet 358: 854–856.
18. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, et al. (2004) Clinical trial registration: A statement from the International Committee of Medical Journal Editors. Lancet 364: 911–912.
19. Godlee F, Jefferson T (2003) Peer review in health sciences, 2nd ed. London: BMJ Publishing Group. 367 p.
20. Garrow J (2005 January) HealthWatch Award winner. HealthWatch 56: 4–5.
21. Smith R (2003) Medical journals and pharmaceutical companies: Uneasy bedfellows. BMJ 326: 1202–1205.

 

 

The Orange County Register, Health & Science Section, Weds., Oct. 20, 1999, p. 20.

In drug studies, money counts

 

FUNDING: Drug-company sponsorship makes for less-critical views, researchers say.

By DON BABWIN

The Associated Press

CHICAGO — Studies on the cost-effectiveness of drugs are far more likely to report favorable findings if they are sponsored by the drug companies themselves rather than independent groups, researchers found.

Their study — funded by a pharmaceutical company — appears to confirm long-held suspicions that doctors are less critical about a drug's safety and effectiveness when they have financial ties to the manufacturer.

"It is possible that these fac­tors may result in some uncon­scious bias" in interpreting a study's findings, the research­ers said.

                 

Last year, the conflict-of-interest issue made headlines when a report found that the vast majority of doctors who defended the safety of calcium channel-blockers had a financial relationship with manufacturers of the blood-pressure pills.

In the current study, published in today's Journal of the American Medical Association, the researchers looked at 44 studies on the cost-effectiveness of cancer drugs. Twenty of the studies were funded by pharmaceutical companies and 24 by nonprofit organizations.

Those sponsored by nonprofit groups reached unfavorable conclusions 38 percent of the time, compared with just 5 percent for studies sponsored by pharmaceutical companies. Also, researchers in company-backed studies were slightly more likely to overstate the cost-effectiveness.

Some researchers receive funding directly from pharmaceutical companies. Some get funding in the form of honoraria or travel expenses. Some hold stock in drug companies and profit directly from increased drug sales.
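
As a rough check on the figures above, the reported percentages translate into approximate study counts as follows; the per-group counts are inferred from the percentages quoted in the article, not taken from the JAMA paper itself.

    # Approximate counts inferred from the percentages quoted above.
    nonprofit_total, industry_total = 24, 20
    nonprofit_unfav = round(0.38 * nonprofit_total)   # about 9 unfavorable conclusions
    industry_unfav = round(0.05 * industry_total)     # about 1 unfavorable conclusion

    print(f"Nonprofit-funded: ~{nonprofit_unfav} of {nonprofit_total} unfavorable")
    print(f"Industry-funded:  ~{industry_unfav} of {industry_total} unfavorable")
    print(f"Ratio of unfavorable rates: {0.38 / 0.05:.1f}x")   # about 7.6x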

 

Dr. Charles Bennett, the lead author and a professor at Northwestern Medical School, said that in addition to the possibility of unconscious bias, there could be other explanations for the findings.

For example, pharmaceutical companies are given early looks at studies. That enables them to abandon studies that appear to be unfavorable and focus on those they think are going to be positive, Bennett said.

Bennett said the findings should not be seen as a major criticism of pharmaceutical companies. "Our study was sponsored by a pharmaceutical company," he said, adding that the company, Amgen Inc., did not comment on it before publication. He also said his paper analyzed studies sponsored by Amgen, which fared no better than other company-sponsored studies.

Bennett said the best thing would not be to stop pharmaceutical companies from sponsoring research, but to get other types of sponsors to underwrite studies, too, such as managed-care organizations.

 

Amgen spokesman David Kaye said: "If you want the best physicians in the world, you have to let them run the trials. If you kill a study or over-control it, word gets out and the best investigators won't do your studies."

Others not involved with the study said the findings raise serious concerns.

"The best hypothesis I can tell for that is the person doing the research has internalized the values of their funder," said Sheldon Krimsky, a Tufts University professor who stud­ies scientific integrity and con­flict of interest and who wrote an editorial about the study in JAMA.  Dr. Sidney Wolfe, director of Public Citizen's Health Re­search Group in Washington, agreed: "As in other studies of the drug industry, this shows the financial interests of the drug industry rides over the actual data."

Jk--Comments on the 1992 legislation affecting the FDA (the Prescription Drug User Fee Act), which required the drug companies to pay most of the cost of approving new drugs and which was followed by budget reductions at the FDA. These legislative changes, lobbied for by the drug industry, had two effects. The first was that the FDA could no longer fund follow-up studies on the safety and efficacy of approved drugs. The second was to further weaken the position of the smaller companies by adding an additional expense, which forced them to become even more dependent upon the drug giants to raise funds for the trials and now also for FDA approval.

 

 

How do you get honest, reliable data?

 

1).  Have the drug companies contribute to a research pool, much the way state and federal governments contribute to the universities in a state to fund other types of research.  Then let the university departments and the graduate studies departments work out how those funds are to be allocated.

 

2).  Drug companies could still do basic research, but the clinical trials would be run by the universities. 

 

3).  Drugs developed by the universities could then be manufactured on a cost-plus basis, with the profits flowing back into the general research pool.

 

 

 

1a).  Permit the university departments to funnel drug research into those areas where the needs are greatest: where there is a lack of effective treatment, where the loss to society is greatest, and where it seems sufficiently likely that an effective medication could be produced in a relatively short period of time.  They would be less likely to fund research on the me-too knockoff drugs which now make up over 90% of the drugs approved by the FDA.

 

2 & 3).  This would result in a much smaller role of the Pharmaceutical industry.  The benefits of this would be lower prices for drugs, a competitive advantage over foreign companies by lower pricing, and increased effort to develop drugs based on needs rather than profits.  Moreover, there would be more funds for research, because universities are not so market driven as the pharmaceutical industry which squanders 75% of its profits on advertising and while budgeting just 25% for research.