This week, to help us dissect the year in trust, we’re joined by Dr Amy Ross Arguedas, a Postdoctoral Research Fellow at the Reuters Institute for the Study of Journalism. Between 2020 and September 2023, she worked on the Trust in News Project, and she’s currently part of the team working on the Reuters Institute’s annual Digital News Report.

Amy tells us about the work of the Trust in News Project, and how to go about measuring trust in a way which is useful. She explains how trust is affected by political divisions and whether the impact of publisher mistakes is long-lasting, as well as the opportunities for more traditional sources to engage younger people.

We also explore the potential impacts of AI on trust: how transparent publishers should be about their use of it, as well as the risks and opportunities AI presents in the fight against misinformation.

Trust, disinformation and how publishers have responded this year will be one of the chapters we explore as part of our upcoming Media Moments 2023 report. Find out more and pre-register for the report here.

Here are some highlights from the episode, lightly edited for clarity:

Beginning the Trust in News Project

We really just took the first few months to do a scoping review of everything that’s been written on it; trust has become an increasingly important and popular topic in academic work as well. So we spent a good chunk of time at the very beginning just trying to get a sense of: what do we really know?

One of the big challenges is that sometimes you don’t know what these things mean. What are people thinking about when they say trust? So on the audience side, we started out just having in-depth conversations with regular audience members across the four countries and asking very basic questions: so what does trust mean to you? How do you think about this? Then we took it from there…

Most of us would agree that we don’t want people to trust everything all of the time. There’s a good reason in the current information environment for people to be sceptical and to be discerning. So we’ve also spent a lot of time thinking about what it is that, normatively, we would want to see among the population. That’s not necessarily people trusting everything, but being able to tell sources apart and having sources that they can turn to and feel are reliable.

Multiple factors affecting trust

One of the challenging and interesting things about trust is that you do see all of these different factors. What is interpersonal trust like in this culture? Are people generally trusting or not trusting? In Brazil, we had I think 98% of respondents saying that they didn’t trust other people around them! How have new digital technologies changed things for people, and how people consume news? That can indirectly impact people’s sense of trust as well.

Then of course, politics is a really important part of this. There’s research that has shown very clearly, in a comparative way, that there’s a trust ‘nexus’ between trust in political institutions and trust in the news media.

So it’s shaped by these things that are often external to journalism. We know that in some countries, issues around polarisation have made this particularly fraught. Then there are all the things that are internal to journalism as well that can shape trust. So it’s very complex, and there are things operating on a variety of different levels.

Labelling AI-generated content as a way of building trust

I think that there are certain things that are good practice, regardless. I do think that it comes back to the point of: how are news organisations using AI? That’s part of what makes this really challenging: AI can be a whole bunch of different things. News organisations have been using machine learning and AI behind the scenes for a long time, for things like personalisation and deciding what recommendations they’re offering different people.

Generative AI has shifted the conversation, but AI can mean a whole lot of different things. It can be challenging because audiences might not necessarily know what all these different uses are, and I don’t think that all of them are equally relevant to disclose necessarily.

It’s not realistic to think that every time AI has been used, you have to have a label; there are limits to where that makes sense. If we think about checking grammar, does that need to be labelled? I think there are a lot of conversations that need to be very nuanced around when it’s editorially relevant to label.

In some cases, it really is. If you’re creating images for example, that’s an instance where it is relevant and important to disclose it. But that can be challenging because we don’t really know how audiences are going to respond. Transparency can sometimes be a difficult thing to navigate.

