What the Science Tells Us About “Trust in Science”

This post continues our series focused on science communication research. Instead of reporting on or recapping a single paper, we’re asking what the literature has to say about urgent or recurring questions in our field. This is inspired, in part, by John Timmer’s call for an applied science of science communication, as well as the upcoming special issue of PNAS with papers from the 2012 Sackler Colloquium on the Science of Science Communication.

When climate scientist Tamsin Edwards published her editorial “Climate scientists must not advocate for particular policies” in The Guardian, it triggered a cascade of responses on engagement and advocacy. This is something COMPASS spends quite a lot of time thinking about and discussing in our trainings and writings, but the line that particularly caught my eye was: “I believe advocacy by climate scientists has damaged trust in the science. We risk our credibility, our reputation for objectivity, if we are not absolutely neutral.”

I admire the conviction in that statement and it’s nothing if not clear. But is it true? Is the behavior of individual scientists a primary driver of public opinion? It reminds me of a conversation regarding our assumptions about audiences, in which my friend Ben Lillie quipped: “Communicating science to the public? Neither noun exists and I’m not sure about the verb.” Given the current conversations, I am not so sure of our use of the phrase ‘trust in (the) science’ either, so I decided to do a little digging.

In this post, I’m contrasting existing data with what we often hear in arguments about credibility and trust. I’m barely scratching the surface, but really looking forward to discussing this further during the plenary on ‘Credibility, Trust, Goodwill, and Persuasion’ I’m moderating at next week’s ScienceOnline Climate conference.*

Background

In the mid-1980s, a pivotal UK Royal Society report, “The Public Understanding of Science,” attributed public controversy and low levels of support for science policies to a ‘knowledge problem’: people just didn’t know enough about science to properly appreciate it. Such ‘deficit model’ thinking assumes that improving science literacy is the key to improving support. The deficit model has since been largely discredited in favor of the view that improving public support for science hinges on understanding the processes by which science and scientific expertise come to be viewed as legitimate and trustworthy.

Today, in casual conversation, ‘trust’ is often used synonymously with ‘believe.’ It expresses faith in something without the need for further investigation or guarantee. Given what a person must achieve to earn professional scientific credentials, it is reasonable to expect a certain amount of deference within their area of specialization. Repeated challenges to that authority feel hostile, perhaps deliberately antagonistic. And when political actors escalate the matter by positioning scientific knowledge as just another opinion and publicly devaluing hard-won expertise, it can be infuriating.

Yes, infuriating. This is emotional territory and it’s important to acknowledge that. What we’re actually talking about is persuasion and power dynamics, not accuracy and precision. Trust is a social mechanism for coping with uncertainty and risk. Fundamentally, it is as much about fear, identity, and conflict as it is about being correct.

The Science

A comprehensive review by Chryssochoidis et al. (2009) makes the case that trust can be “determined or influenced by: (1) the (perception of) the characteristics of the information received; (2) the (perception of) the risk managed or communicated; (3) the (perception of) institutional characteristics; (4) the individual and socio-cultural characteristics of those who exhibit trust,” as well as by interactions among these four elements.

Drawing from that review and other papers in the social science literature, here’s what we know about “public trust in science”:

Messengers

Trust in groups (“scientists”) or institutions (“the National Academy of Sciences” or “the IPCC”) is called social trust in the literature. Although it is distinct from interpersonal trust, both vary greatly depending on who is asking to be trusted.

  • Contrary to fears, public esteem for scientists overall is not plummeting. In 2009 and 2013, the Pew Research Center conducted opinion polls asking, “How much do you think scientists contribute to the well-being of our society?” Over that span, the proportions held relatively steady, with more than 65% of respondents answering “a lot” and another 20% answering “some.” Note: this result does not hold across different political ideologies.
  • Specific groups of scientists have suffered recent reputational damage. Climate researchers, for example, saw a significant drop in perceived trustworthiness from 2008 to 2010, but it’s important to note that as a category they remained far more trusted than competing sources such as weather reporters, political and religious leaders, or the mainstream media.
  • The strength of trust varies greatly among institutions. For example, when respondents were asked, “How much do you trust the scientific research of [a specific government agency]?”, NASA and NOAA scored highest in “strongly trust” responses, while DOE and USDA drew the most “strongly distrust” answers.

Messages

When it comes to a message, both volume (repetition) and content matter. Bad news, or information that damages trust, is given disproportionate weight (the ‘trust asymmetry principle’). Familiarity with a topic combines with pre-existing views to influence the perceived trustworthiness of new pieces of information. And attitudes toward specific technologies, like nuclear power or genetically modified crops, do not predict overall trust in science.

Audiences 

Personality and demographics of ‘the public’ matter. Gender, race, age, and socioeconomic status can all influence the propensity to trust science. We also know that education and political affiliation interact in surprising ways: among politically conservative people, for example, distrust in science significantly increases with greater educational attainment.

The Takeaway

Based on my readings, ‘trust in science’ is shorthand for something complicated that I can best approximate as ‘your willingness to embrace the advice of a group of strangers because you believe they: (a) know the truth; (b) will tell you the truth as they know it; and (c) have your best interest at heart’… oh, and all of that depends on (d) who you are, (e) who they are, and (f) what you’re all talking about.

It feels like we’ve fixated on (a). Yet for as strong a case as we can build about the systems by which we know what we know, ‘knowing the truth’ is just one element in that long list. Considering (b) through (f) – and all the other letters I’m sure we should append – it suddenly feels much less surprising (and frustrating) that other people don’t feel the same level of trust in science and scientists that I do.

When it comes to making statements about trust in science, let’s ensure our assertions are grounded in the relevant science. Beyond getting the facts right, we’ve got a long list of unanswered questions about how to build trust. Grounding ourselves in the evidence won’t solve it all, but it’s a good place to start.

————————————————————

P.S. You might have noticed I never answered the question of whether advocacy by scientists has been shown to erode public trust. More on that on Wednesday. (Note: now posted!)


APOLOGIES – our commenting system has been plagued by technical problems. Until we get that sorted, please feel free to email me, talk with me on Twitter, or head to Google+ with your thoughts and reactions.

*Watch the ScienceOnline Climate livestream and follow the hashtag #scioClimate for what should be an amazing panel discussion on “Trust, Credibility, and Goodwill in Climate Communications” featuring Dan Kahan, Michael Mann, and Tom Armstrong. If you’re interested in more in the meantime, browse some of my favorite excerpts from the literature, see my Mendeley library on the science of trust, and use the comment section here to kick off a discussion.

About Liz Neeley

Liz is the Assistant Director of Science Outreach. Though she hasn’t held a PipetteMan in six years, she still occasionally dreams of running PCR gels. These days she’s more likely to sustain repetitive stress injuries from livetweeting science conferences or joining marathon conference calls. Lately she’s been baking lots of artisanal bread, finding it to be effective as both a cross-training and carb-loading exercise.

Comments

  1. Lawrence Krauss once answered the question “how can we improve popular science?” by saying “get a better public”. Still, he’d just as soon admit that we don’t know what 95% of the universe is made of.

    Today, on Schrödinger’s birthday, I would like to invite you to consider that perhaps we’re all here for the uncertainty, for the discovery, for the questions, not just the answers. The minute we kill skepticism, we kill science, whether it be in ourselves or those we wish to enlighten.

    The question is not how do you build trust, but rather, how do we get people to let go of instincts that have served them so well for so long and embrace principles that are counter-intuitive? How can the cat be both dead and alive at the same time? How can we push our limited mental capacity and truly grasp quantum superposition?

  2. Liz, I’d agree with your description here. As a statistician, I read the Tamsin Guardian article as just a restatement of the frequentist position of desiring pure objectivity, while the objectors were more in tune with the Bayesian “we’re all biased, so let’s state that up front” mindset.

    I’d also say you are correct that there are several inputs to the decision process for belief or trust, but I would caution that they should not be considered equally weighted or independent. How much emphasis is put on “a” relative to “f,” for example, is important and likely to differ from person to person or group to group. I, for example, would find “a” and “b” to be most important; however, “c” is irrelevant if “a” and “b” are true anyway. Hence, concentrating or fixating on “a” is essential because it determines the relevancy of the other potential inputs. Granted, others may weight and correlate these factors differently, which indeed makes prediction difficult :)

    T

  3. Marcel Kincaid says:

    “We have a moral obligation to be impartial” — Ms. Edwards doesn’t understand the concept of morality, nor the concept of impartiality. “I became a climate scientist because I care about the environment” — perhaps she needs to care a bit more about the fate of human beings instead.

    “I believe advocacy by climate scientists has damaged trust in the science.”

    She believes? She seems not to understand the concept of science either.

    “We risk our credibility, our reputation for objectivity, if we are not absolutely neutral.”

    And she’s an incredible hypocrite. Here she is, advocating for, taking a stand on, how scientists should behave, and she is doing so based on her baseless and in fact contrafactual beliefs; she is anything but neutral on this subject … and she is wrong, factually and morally. It is because of that, not because of her utter lack of neutrality here, that she has gravely risked, even ruined, her credibility and any semblance of objectivity. Ms. Edwards should have abided by her own principles and STFU, but she just couldn’t help herself, the ego-driven arrogant poser that she is.

    • Liz Neeley says:

      Marcel, this is not the place to tell anyone to shut up.

      I believe Dr. Edwards – and any other scientist – has every right to share her motivation for choosing a career that demands so much of a person, as well as to take a public stance on an issue of values related to science. I strongly disagree with you that she has ruined her credibility. I may not agree with her, but I don’t think her argument is counterfactual, and certainly not deliberately so.

  4. A few months ago, I wrote about a relevant poll and paper, both of which came out in April of this year. Both drive home the point that there is clearly a role for scientists when it comes to public discussions of climate change and related issues. Encouraging scientists to stay mum seems astonishingly wrongheaded. Here’s what I wrote on the poll and study: http://www.scilogs.com/communication_breakdown/scientists-trust-media-climate/

    • Liz Neeley says:

      Thanks, Matt, for reminding me of this post! I particularly agree with your note on the Hmielowski et al. paper that causality is tricky to determine. It could well be that news consumption doesn’t determine trust, but rather that your worldview (which influences trust) determines your news preferences. I saw Dan Kahan raising this point on Twitter – https://twitter.com/cult_cognition/status/364997086002950144

Trackbacks

  1. […] was interesting then to see Liz Neely write about what science tells us about trust in science. I believe there is some overlap between the two […]

  2. […] for science actually make the public trust them less? The science behind this baffling thesis is explored by Liz Neeley at Compass. It’s relevant to all scientists who are moved by their data to become advocates — like […]

  3. […] the Science Tells Us About “Trust in Science”” http://compassblogs.org/blog/ … via […]

  4. […] Liz was a co-organizer of ScienceOnline Climate. She continued her exploration of issues around trust in science by moderating a plenary discussion on “Trust, Credibility, Goodwill, and Persuasion,” […]

  5. […] or trust, what matters in science communication? Liz Neeley (@LizNeeley), assistant director of science Outreach for […]

  6. […] trainings we lead, and I’ve blogged about some of the papers & concepts we draw from here, here, and […]

  7. […] meaningfully engage with the relevant social science and psychology literature (see my posts here, here, here, and here). After the workshop, I’m convinced that on a personal and professional level, I […]
