Interviewer: Chris Sutcliffe

Chris: Can you explain what your unique role is at the BBC?

Marianna Spring: So it’s something that really came out of the UK election last year, because we did lots of in-depth social media reporting looking at disinformation, looking at online conversation in Facebook groups, looking at the key figures and people – often ordinary people – who were either spreading misinformation or running big Facebook pages that promoted certain parties or politics.

And out of that we realised there was a real appetite for, and also a need for, reporting on disinformation and organic online conversation in an impactful way – long reads, radio programmes, television reports, doing it across the board – and as the BBC, we’re obviously uniquely placed to do that. We’ve got lots of really brilliant people who work in different teams, from BBC Monitoring, where we have some excellent disinformation experts, to Africa Eye, which I’m sure you’ve heard of, who do excellent open source investigation, then Reality Check, who do great debunks, and BBC Trending, a team that does in-depth social media reporting.

So my role essentially came out of that, the decision that we needed to pull together the different parts of the BBC that covered disinformation, and to have a specialist reporter who focuses on this area of expertise.

It was something I first started looking into during the European elections last year for Newsnight, when I did an investigation into closed Facebook groups, as well as an investigation into online abuse, and it really grew from there. I mean, compared to the US, for instance, I think in the UK we just don’t have as many people who specifically cover this beat. In America, at NBC, BuzzFeed and the New York Times, there are some brilliant people who do investigations into disinformation, Facebook groups, all kinds of things like that.

And so it’s absolutely brilliant that the BBC has this role and that I get to do it.

Chris: Yeah, I suppose it might actually be quite depressing at times, but it’s also such a fast-moving environment that it must be quite challenging and fun to do at the same time.

Marianna: Definitely, and the timing, I guess, was perfect. So I actually started this role – well, perfect is perhaps the wrong word – a couple of weeks before the pandemic really became a big deal here in the UK. And it meant that, obviously, there was a tonne of misinformation, disinformation and conspiracies on social media relating to the pandemic – and there has continued to be, and will continue to be – but also relating to various breaking news events and the anti-racism protests.

We’re looking towards the US election, so there is no shortage of disinformation that needs investigating, and no shortage of ways to tell really compelling stories about the dangers of misinformation online and the impact it can have.

I did a big investigation into the human cost of misinformation that was all about that. But there was also a case study I looked at recently about a peaceful protester who received racist abuse after false claims were shared about him online. So yeah, there’s absolutely no shortage of important stories that I and the rest of my team need to tell.

Chris: Yes, certainly. And you’ve actually pre-empted one of my next questions, which was going to be about that human cost, because we hear about it almost in the abstract, this idea that there is misinformation floating around out there, but it genuinely does have a profound impact on people’s lives.

Marianna: It does. I think one of the issues sometimes with how we cover what happens online is that we think of it as this separate, other thing. That can be the case with a lot of tech reporting, but actually disinformation that circulates online and offline, and the general conversation that happens online, is part of the fabric of all of our lives, especially for younger people. And it’s really important that we don’t think of it as this separate ‘other’ thing, but as an integral part of how lots of us experience breaking news events, have experienced the pandemic, and will experience elections and big events like that.

As a consequence of all that, misinformation has a real-world impact. In the case of the pandemic that’s been incredibly stark, and very, very worrying. We did a big investigation where we looked at arson, attacks, deaths and people not seeking help soon enough, all of which could be linked to misinformation circulating on social media.

There was one particularly sad case of a man I spoke to in Florida who thought Coronavirus was a hoax because of stuff he’d seen on Facebook. Consequently, he didn’t follow social distancing guidance and didn’t seek help when he and his wife fell ill. His wife is still very ill: still in hospital, still hooked up to a ventilator, not doing very well. So this stuff has real-world impact.

And then, like I said, there are examples too of protesters who’ve been targeted with racist abuse because of false claims made about them online. So this isn’t limited to the pandemic, although the pandemic has unfortunately been a prime time to see the real danger of misleading information.

Chris: So given that we have had all these flashpoints lately, do you think the public is more aware of disinformation? And what’s your sense of how they think about it: how it’s spread, where it originates, and the vectors for its dissemination?

Marianna: I think that throughout the pandemic, people have become increasingly savvy about misinformation. At the beginning, we saw so many WhatsApp messages being forwarded on and shared ‘just in case’, copied and pasted across Facebook.

Obviously, as panic ebbed a little and people calmed down, you noticed that stop. That was a combination of timing, of people just being less scared and perhaps less inclined to share information.

But I also think people realise when, for instance, you get a WhatsApp saying there are going to be tanks on the streets tomorrow, and then you look out your window the next day, and there aren’t tanks on the streets, that this stuff actually can be misleading and untrue. So I think in that sense, people are becoming increasingly aware of it.

It’s a bit like, I often use the analogy of when I was in secondary school, I remember in year eight, year nine often receiving those chain emails that tell you ‘Oh, if you don’t forward this on, Mickey Mouse is going to come find you in the night,’ or really stupid things like that, and it’s like, ‘Forward it on, otherwise you’re going to get caught by Mickey Mouse!’

And you soon realise that they’re not real, and you don’t need to forward them on. But initially, you forward everything on, you’re like, ‘Oh, if you don’t forward this on, this baby seal will get hurt.’ You’re like, ‘Right, let’s forward this on to everyone.’

So I think there’s an element of that: people just become increasingly aware of how to spot misinformation and how to stop its spread. That’s something I’ve spent so much of the pandemic talking about: the ways we can stop misinformation spreading.

But that said, I think misinformation continues to evolve and is incredibly complex. Just as people get up to date with one type of thing that can be misleading and unhelpful, another thing springs up, a bit like whack-a-mole, and it just continues to evolve.

Chris: It’s terrifying to think about. From your experience, then, is the public undifferentiated in how it approaches misinformation? Or, as I suppose the perception has it, are younger people who have grown up with it better at spotting it than some of the older generations, who take everything they see online at face value?

Marianna: It’s quite complex, actually, because I think it does make sense to think of it like that, in that younger people are more social media literate: they’ve grown up with it and they’re used to it, and so they’re perhaps less likely to be caught out by the same things you see older people copying and pasting on Facebook and sharing with their friends.

But that said, because younger people spend a lot of time on social media and get their information through those social media channels, more so than older people, you also often see them a) being more exposed to misinformation online, and b) being more exposed to conspiracies. That’s something that has really interested me: lots of younger people – whether on TikTok or Instagram – and younger celebrities too have been especially interested in, or perhaps even propagating, conspiracies about 5G or other Coronavirus conspiracies.

So I think that social media literacy question does mean we have a lot of work to do with older audiences, to make sure they’re more savvy about what they share on social media.

But that’s not to say younger audiences are exempt, and if anything, the misinformation they fall victim to is perhaps harder to tackle than just telling people not to share stuff on Facebook all the time.

Chris: So in a sense, this isn’t necessarily a new phenomenon; it’s more widespread, or at least each individual piece of misinformation can spread further?

Marianna: Yeah, I think it’s that. Obviously, conspiracies aren’t anything new; in fact, the report I’m currently working on makes that point very clear. Conspiracies always come out of big, life-changing events that are very hard for us all to process, whether that’s 9/11, the moon landings, or the assassination of JFK.

However, I think, firstly, in the unique setting of the pandemic, conspiracies have perhaps spread even more because people all over the world are worried about the same things, all looking for answers, and there is this void of information, this lack of information, which allows conspiracies to thrive. That’s not anyone’s fault. It just means we don’t yet have answers to some of the questions people want to ask, and as a consequence, they turn to conspiracies for those answers.

But I also think about the modern age of social media, where people can investigate things themselves, look things up, go on YouTube. The case study in the report I’m working on is a guy who uses YouTube to question things and investigate things. But he soon finds himself down the rabbit hole, exposed to a lot of content that 20 years ago you probably wouldn’t have just coincidentally come across, and that as a consequence makes him feel as though there is something else slightly untoward going on.

So I think there’s that. I also think there’s the issue of Facebook groups and online communities, where people who have perhaps fringe views about things are able to find lots and lots of other people who have those same views.

So again, 30 years ago, it would have been hard to walk down your road and knock on everyone’s door saying, ‘Oh, hi there. Just want to see if you believe vaccine conspiracies,’ or ‘Hi there, are you a Nazi?’ It’s very unlikely you’d find people to hang out with. Whereas with Facebook groups, you can find tens and tens of thousands of people who all think the same things as you and therefore affirm your beliefs, no matter how outlandish the conspiracies you believe might be.

Chris: Yeah, that’s really interesting. It sounds like you’re saying the issue is almost one of discovery: technology is enabling people to find one another, but the corollary is that people can find confirmation bias, people who are willing to back up the things they believe even if there is very little basis for them in reality.

Marianna: Yeah, exactly. So much of susceptibility to conspiracies depends on all sorts of things: people’s emotional state, whether you’re really frightened, whether you’re looking for answers…

The number of times I’ve sat in cabs during the pandemic, and cab drivers have said to me, ‘Oh, my friend or my wife or my cousin has started contemplating conspiracies and they’d never normally do that, they’re normally quite rational, but all of a sudden, they’re interested in this stuff.’

We’ve also had lots of people sitting at home on their phones, scrolling through their feeds, reading different stuff, watching different things. So: a bit too much time on our hands, a really frightening world event that has been very difficult to make sense of, and various people looking to profit from that, whether it’s pseudoscientists with big YouTube channels, the likes of David Icke and all kinds of prominent figures, or even celebrities who we’ve seen amplify conspiracies.

So, and I don’t always like to say this because it’s a bit of a cliché, it’s the perfect storm for conspiracies and misinformation to spread like wildfire online.

Chris: Yeah, it really is. Well, you’ve mentioned YouTube, you’ve mentioned Facebook, and if you read the Reuters reporting on misinformation, you find that a lot of the criticism is directed at those platforms for what is almost a hands-off approach, which lets that information live there and percolate there, where people can find it; as you mentioned, there’s that whole discovery issue. But to what extent do you find that those platforms are to blame, whether through lack of action or because they tacitly encourage that kind of extremism to flourish there?

Marianna: I’m trying to think how to go about this without sounding angry. I think the issue, and I often think this is the case for Facebook especially, is that the sheer amount of content on that platform is incredibly difficult to monitor and remove. One thing that often strikes me when I’m investigating disinformation, suspect behaviour in Facebook groups, or even online abuse, is that when I flag things to Facebook they are often quite cooperative and remove them, but I always think: why was no one on Facebook’s team finding this stuff? Why am I doing it? It’s a really big issue.

In a number of the stories I’ve looked into, I feel as though social media sites – and that’s not just Facebook, but Twitter and TikTok, and Instagram and WhatsApp, which are kind of offshoots of Facebook – are all platforms where misinformation and hate speech fester. And I don’t think there’s enough consistency when it comes to the implementation of policies.

I think there’s this real fear from social media companies that they don’t want to editorialise, and don’t want to be liable for defamatory content shared on their platforms, particularly during the pandemic. You have seen social media sites throughout the pandemic tighten their rules and update their regulations, particularly when it comes to conspiracies; the 5G stuff was cracked down on, for instance.

But in my opinion, a lot of that was just too little, too late. It hasn’t been enacted well enough, and it didn’t happen quickly enough. The investigation I did into the human cost of misinformation showed examples of people who have come to harm, like telecommunications workers who’ve been attacked because people believe 5G conspiracies. Obviously we heard about those 5G masts – often not even 5G masts, but ordinary telephone masts – being set on fire.

And then you’ve also got people who didn’t seek medical help, which doctors are incredibly concerned about: people who thought they could hold their breath to test whether they had Coronavirus because of posts they saw on social media.

As we go forward, you see how, particularly with vaccine conspiracies, these spaces are booming. There are hundreds of thousands of people in these conspiracy theory groups that have either changed their [topics] to talk about vaccine conspiracies, or that are already focused on vaccine conspiracies, and that content inevitably spills over into people’s personal feeds. It spills over into other groups, local groups, parents groups, all kinds of different spaces and more people become exposed to it.

And I find it very hard to understand…well, I don’t find it hard to understand why they don’t do anything about it, but I find it very disappointing that especially private groups and closed groups, which are harder for journalists to get access to and to analyse, just continue to grow in number and to be a perfect place for disinformation and conspiracies to spread.

Chris: 100%. And actually you’ve pre-empted something I was going to ask, which is: once the platforms and the publishers have taken action, and they’ve either debunked something or, as you have, conducted investigations into it, how does that actually affect the people who have bought into the misinformation? Because it’s very hard to prove a negative, and I suppose for a lot of people who believe it, the fact that publishers and platforms are taking action is proof that there is something going on.

Marianna: So one of the big concerns, I think, particularly for social media platforms, is that if they shut down an individual or a group, people will just use it to further fuel the conspiracy and say, ‘Oh yes, we were right. They want to shut down our conversation because we’re onto something.’

I do think it’s a very, very difficult line to tread, but one thing I would say is that generally, if you disperse these groups, for instance the big 5G Facebook groups during the pandemic, it does dent the movement, it does inevitably make it harder to find a new space to gather with so many people, and it probably does help curb the spread in some sense.

Chris: God, there’s a direct analogy with the pandemic, isn’t there? So I suppose the question then is: what can broadcasters like the BBC, and publishers more widely, do to help curb that? Because obviously, if you look at that Reuters report into trust as well, people tend to believe the publications with which they feel a political alignment, regardless of whether they actually publish correct information or not. So what can publishers do, I suppose, is this huge question about misinformation, other than stand their ground and say, ‘Actually no, what we’re reporting is correct’?

Marianna: I think it’s a really difficult thing to do, and it requires a multi-faceted approach. What we’ve been doing in the BBC’s anti-disinformation team has involved a combination of exposing and investigating disinformation and humanising it, so that people are interested in the stories we have to tell and want to read them. And we’ve found that to be the case, judging by the number of people who engage with the content we’re producing.

But it’s also about social media literacy: trying to reach as many people as we can to inform them about how they can spot misinformation and stop its spread. And then obviously there’s this big question of trust.

I often get emails saying, ‘Haha, the BBC has a specialist disinformation reporter, the BBC is the home of fake news,’ all of this kind of stuff, which always intrigues me, because they must have scrolled to the bottom of the article, where my email address is, in order to send me the email. But I do think it’s a big question.

And one of the reasons I’ve done this ‘How to talk about conspiracies’ report that’s coming out is because a guy who emailed me said, ‘Maybe if the BBC did more to tackle conspiracies, more to address the actual conspiracy theories rather than dismissing them, I’d believe what you have to say; but the more you dismiss them, the more I turn elsewhere, because I feel as though you’re not tackling the things I’m worried about.’ And that’s where this report grew from.

And it’s become a discussion about how you can talk to people who believe conspiracies, who’ve perhaps fallen victim to them. And we’re very careful not to tackle things until we feel as though they’re viral enough that we need to either debunk them or investigate them or show the anatomy of how they’ve spread and how they’ve gone really, really viral. That’s obviously a big editorial question we have all the time.

The other thing is reaching into other spaces: either spaces where people are being misled, for instance the Facebook groups I’ve been talking about, or TikTok, Instagram and other platforms where people will actually come across our content. A lot of the issue we find is that we’re talking to people about disinformation, but the people we’re talking to are already the people who are more savvy, or at least more willing to become more savvy, about disinformation. So that means we’re not reaching the people who need this stuff.

So it’s a big discussion we’re having at the moment. But we do try and reach into other spaces, use different forms, use a range of different social media platforms to reach the source, I guess.

When I did the investigation about the human cost of misinformation, for instance, the guy I mentioned, Brian, who featured in it, then shared it with all his followers, many of whom were people who’d similarly fallen victim to conspiracies. That felt helpful and important: we’d reached him, and he was reaching others. And people were far more likely to listen to him sharing it than to me sharing it. So that was positive.

But it’s a massive question of trust. That bigger question is very hard to answer; the BBC is a big place, so I can really only speak for our anti-disinformation team. But I can say more broadly that everybody I know at the BBC has worked incredibly hard, especially during the pandemic, to report on everything accurately and fairly and well. People obviously do make mistakes sometimes, but that’s never the intention. And I think it’s just about trying to reach people and make them feel as though our reporting represents them.

Chris: So you said there it’s about humanising that impact and also being proactive in reaching out to these spaces. Does that necessarily mean a little bit of transparency on the part of your team, saying ‘This is who we are, this is why we believe what we do’?

Marianna: Yeah, definitely. That starts getting into the question of ethical newsgathering and closed spaces, which is something we’ve had lots of important conversations about in our team. And that’s about joining closed Facebook groups and whether we need to declare who we are or not.

And if we do then share things, we obviously have to explain who we are, and it’s quite obvious because there’ll be a byline on the piece that makes it evident. But because these closed spaces are often so massive, with thousands and thousands of members, it’s actually relatively easy to join some of them without having to declare anything.

You don’t have to lie, but you also don’t have to go into the group and say, ‘By the way, I’m a BBC journalist,’ because the editorial justification for joining the group is greater than the obligation to reveal who you are. You need to be in these spaces to monitor, analyse and investigate the content being shared. And if you said you were a BBC journalist from the get-go, you’d probably get kicked out.

That’s a complex editorial question that we often have to answer.

Chris: I suppose it’s one that’s not got a boilerplate answer as well, because it will depend on the community, and it’ll depend on the severity of the potential disinformation.

Marianna: Definitely. And one thing we find very difficult is that, when it comes to Facebook, a lot of the groups are massive, but when it comes to platforms like WhatsApp, it’s actually very, very difficult for us to gain access to those groups and to analyse and assess the information in them.

I’m incredibly reliant on audience members getting in touch with me. Although, as I said, I do get emails calling me the fake news reporter and whatever else, I also get very helpful tip-offs from audience members who say ‘This voice note’s mega viral in my WhatsApp group,’ or ‘Someone shared this in my community forum, could you look into it?’ Those kinds of tip-offs are brilliant; I can investigate things I wouldn’t otherwise have access to.

And because they’re being shared with me by someone who’s part of the group, but who usually doesn’t divulge any detail about the other members or which specific group it is, it’s quite easy to protect the identity of the individuals and respect their right to privacy, at the same time as addressing important disinformation that is circulating and gauging how viral something is.

So if we think something’s mega viral, we can say, ‘Right, there’s this WhatsApp going around, it’s really dangerous, it’s going to make loads of people really panicked about Coronavirus. We’re going to tackle it on our live page.’

Whereas otherwise, if we can’t gauge how viral it is, we have to be really careful that we’re not amplifying that disinformation to more people than would have seen it in the first place.
