A post popped up on my Facebook page today with a link to an online article from a site called Science Alert with this headline: New drug causes cancer to “melt away” in patients with advanced leukaemia. This article had been read and shared by thousands of people on Facebook, with one commenter lamenting that this breakthrough had not come in time for ‘my sister,’ and others thanking God and the scientists involved for the miracle of melting cancer cells. The problem? The article attached to the headline made no reference to ‘melting away’ cancer. It also indicated that this drug was so toxic it would only ever be considered for people with no other options, and revealed that, while it had reduced cancer cells in 79% of recipients, it had only resulted in a 20% remission rate. Nowhere does the term ‘preliminary’ appear in the headline, even though the data generating all this fanfare was from a phase I trial. Critical, and potentially less positive, phase II and III trials have not even been done yet.
Hopefully, this drug will turn out to be the blockbuster people suffering from advanced
leukemia need, and a 20% remission rate, if it holds up in larger trials, is nothing to sneeze at. However, that is exactly the problem with hyped medical headlines: the data should stand on its own. Over-the-top hype unnecessarily tarnishes the whole endeavor. After all, how much confidence can you have in the veracity of research that has been misrepresented in such a blatant fashion?
The above example is notable not because it is so unusual, but because it is now so common. This sort of sloppy, hyperbolic treatment of medical and scientific news has become mainstream, and it is not just media outlets and medical journalists who are to blame. Pressure to publish, desperation for research funding, and good old-fashioned competition between scientists and the institutes they work for can lead to a desire to embellish reality. Often the resulting press releases, interviews, and articles are just truthful enough to assuage any guilt or second thoughts the authors might have about their up-sell, while at the same time being just ‘improved’ enough to be not quite accurate.
This push to produce snappy headlines rather than to educate and inform is now creeping into peer-reviewed publications as well. Once the bastion of scientific integrity, these journals—even very good ones—can at times take a cavalier approach to accuracy in promoting papers. In the food chain of medical communication (publication of results → press release about the publication → media coverage of the press release → consumer attention), starting out with misleading information at the point of publication ensures that downstream consumption of that information will be fatally flawed. This is not a victimless crime. Selling unrealistic hope to the hopeless is the modus operandi of the conman, not the scientist.
So what can be done about this scourge of marketing over medicine, style over substance?
Recently, I had the opportunity to speak with Gary Schwitzer of HealthNewsReview.org. In a distinguished medical journalism career spanning four decades, Mr. Schwitzer has seen a lot of good, and a lot of bad, medical journalism. In an effort to promote the good, in part by exposing the bad, Mr. Schwitzer, with funding from the Laura and John Arnold Foundation, started HealthNewsReview.org to analyze coverage of healthcare stories. He and his team established criteria for assessing coverage and, for the first time, have created a system that aims to quantify the quality of medical reporting. This important endeavor has recently grown to include special consideration of press releases from institutes and universities, which are often the seed from which questionable stories sprout.
I contacted Mr. Schwitzer regarding a frustrating situation that impacted the patient group I work with. In summary, a well-regarded peer-reviewed journal published a paper with a less than clear headline, leading to false expectations among our patient community that a therapy/potential cure had been developed for a rare genetic lung disorder. It had not.
In raising our concerns with the journal editor, we were informed that, while they understood the title could be misinterpreted by the lay public, that was not their target audience. They took no responsibility for publishing a paper with an unclear title, instead laying blame on the ignorance of the audience. While it is true that their target readership is scientists, they seem unaware that there is now a robust thing called the Internet, which enables almost instant information sharing–and does so without checking readers’ scientific bona fides before allowing access. The new reality is that they do not, in fact, exist in a rarefied academic bubble any more. Whether they like it or not, lay people ARE part of their audience now, and it really should not be that difficult to be clear and accurate enough in what they publish to avoid misleading them.
This (IMO) dismissive attitude completely ignores the fact that the vast group of lay individuals they lump into ‘not our target audience, so of no concern’ includes the medical journalists and press-release writers who will be tasked with interpreting their unnecessarily unclear titles when writing for mainstream media. This is why a misleading headline about a rare disorder prompted glowing news stories, complete with quotes from an Italian health minister congratulating scientists on discovering a cure that does not actually exist and extolling non-existent therapies–all as the result of a single unnecessarily unclear title. It is a bit disingenuous to say it is not your problem if lay people don’t understand how scientific publishing works when it is clear from the outcome that many, many people found the title misleading. Were they all wrong, uneducated, or not important enough to be in the target audience?
Another justification the journal editor gave for dismissing our concerns about the title was that the content of the abstract made the limitations clear, even if the title did not. This is true, but as subsequent press releases and media coverage demonstrated, it was not just the patient community who failed to distinguish what was in the abstract from what was in the title. Clearly, lots of people found it confusing. It also does not explain the logic behind publishing a paper with a title that does not adequately convey or match its content. Most of us learned that was a no-no in grade-school composition classes.
I thought it was an interesting example of misleading (whether intentional or not) medical publishing, so I brought it to the attention of HealthNewsReview.org. The result is this podcast, shared with my thanks to Mr. Schwitzer and his crew. Enjoy!
UPDATE: Excellent interview on the Cancer Network site with Dr. Vinay Prasad regarding his recent letter to JAMA Oncology decrying the use of ‘superlatives’ in oncology reporting. Money quote:
“I am going to make one bold recommendation. Do not cover any drug that has not been tested in people. If a drug only has mouse or cell culture or laboratory data, it does not deserve any press. We know for a fact that the majority of the most promising compounds at this stage will fail in human trials, and covering these drugs is akin to doing a nightly news segment on someone who bought a lottery ticket, asking them what they will do if they win.”