Last month, PLoS Medicine published a study that showed, pretty convincingly, that scientific studies were rarely “spun” by PR professionals. The spinning, such as it was, almost always happened in the scientific paper itself.
The research looked at 41 published studies of randomized controlled trials, along with the press releases and news coverage that followed. Of the 41 studies (and accompanying press releases), the researchers determined that 17 included “spin”* — defined broadly as emphasis on elements other than the pre-specified primary goal of the study — in the paper’s abstract. Of those 17 papers, 16 yielded press releases that also included spin. But the 24 spin-less abstracts yielded spin-less press releases more than 85 percent of the time. (News coverage followed a similar pattern, but was actually more prone to spin than the press releases.)
In some ways, the study was a welcome vindication for every serious health PR practitioner who does his or her best to write press releases that faithfully represent the data being promoted. To the extent that things get exaggerated or taken out of context during the public relations and news process, the original sin, so to speak, often occurs elsewhere: in the drafting of the abstract.
But we, as PR pros, should take little comfort in that conclusion, because we too often amplify study data without offering the proper perspective on clinical research. It’s not enough to faithfully re-cast data from an abstract. We need to get our hands dirty with the data. We need to understand how clinical trials are designed and executed and how the data from those trials are analyzed and reported.
This kind of know-how is not a simple requirement, but it’s critical to ensuring that the information flowing from scientists and researchers to end-users such as consumers, doctors, payers and advocates is as precise as possible. It means that I’m going to go back and hit the books some, and I’m going to encourage those I work with to do the same. Because PR isn’t spin, and the better a job we can do of transmitting complex information to the public, the better off we’ll all be.
* To be sure: press releases or news stories that depart from the primary conclusions of a piece of research cannot be automatically labeled with the pejorative “spin.” In many studies, the datapoints of most interest may indeed vary from what was expected. Turning a spotlight on that, responsibly, is part of the job of the sharp PR pro and the dogged reporter. But when and how that happens is a much longer and more nuanced discussion.
(Thanks to this Curtis Brainard story in the Columbia Journalism Review for reminding me of the study.)