In this last episode of Media Voices in 2023 we take a big-picture look at AI, based on our recent collaboration with Media Makers Meet on their Mx3 AI conference. We hear from experts at Immediate Media, Ipsos, the News Media Association and more about where they are placing their chips to take advantage of the fastest-moving area of media.

This holistic look at AI in the newsroom has been split into two parts. In this first part we set the scene for AI and its use in publishing, as experts tell us how to prepare for internal and external changes to media businesses. The second part – coming in the new year – consists of case studies from publishers already getting their hands dirty with AI tech.

Media Voices and Media Makers Meet would like to thank FT Strategies, InsurAds, Labrador CMS, Miso, Sub(x) and Zuora for sponsoring the conference.

There is a corresponding report, written by Media Makers Meet, available to download here.

Episode guests and session snippets from:

  • Tim Bond, Associate Director, Ipsos
  • Roxanne Fisher, Director of Digital Content Strategy, Immediate Media
  • Ross Sleight, Chief Strategy Officer EMEA, CI&T
  • Owen Meredith, CEO, News Media Association
  • Lucky Gunasekara, Co-founder and CEO, Miso.ai
  • Jan Thoresen, CEO, Labrador CMS
  • Julian Thorne, COO, Sub(x)
  • John Barnes, Chief Digital Officer, William Reed
  • Alan Hunter, Co-founder, HBM Advisory
  • Kevin Donnellan, Director, Explainable
  • Matt D’Cruz, Partner, Martin Tripp Associates

Here are some highlights from the episode:

The three acts of AI development

Ross Sleight: The first act, which we’re in right now, is around productivity and efficiency. So how can we use AI to augment our current processes, speed them up, take away the menial tasks, and allow greater efficiency amongst all members of that value chain?

Today, we’re barrelling quickly into act two, which is hyperpersonalisation: the ability for AI to bring multiple different modalities, multiple different pieces of information, together for you, for your particular needs, in response to a natural language question. All of the standardised ways of us getting information beforehand – the last 20 years of digital has all been about humans [searching] to find things out and going down rabbit holes – AI is going to do that work for us and hyperpersonalise around it. That means everything can be aimed around the individual. And that has wide implications, not just in terms of content consumption, but also in terms of healthcare – individual healthcare plans and dietary plans and all those issues.

And then the third act is that, because of that type of personalisation, we’re going to end up in a situation where we see massive disruption to business models.

What publishers can do when it comes to regulation

Owen Meredith: There was a bit of nervousness in the publishing community when OpenAI allowed publishers to opt out. Some publishers have, some publishers haven’t, and there are potential advantages and disadvantages to both. You’ve got to weigh those up and decide on an individual basis. Similarly with Google allowing opt-outs as well.

But there is a huge upside for publishers from getting this right. Creating a licensing model that works for developers potentially adds a new revenue stream, and we all know that we need multiple revenue streams to create sustainable business models. This is potentially a new one.

There’s also a huge opportunity in the newsroom and in general business efficiencies where we can use AI – particularly in the newsroom context – to free up journalists so that they can get out there and do more of the human work that needs to happen.

Creating a framework to allow experimentation with AI tools

Roxanne Fisher: When we’ve spoken to people, what they appreciated most about the journey we’ve been taking [them] on this year is having the permission to experiment with AI. I don’t think it’s the case at all media organisations at the moment that you’re encouraged to use these tools. But we have also given them guidelines to help them use the tools safely and carefully. So they can move forward and develop skills not only for the company but for themselves – to be able to use AI, be at the forefront of what’s going on, and know that they are doing it in a way that isn’t going to damage the reputation of the brands that we all love.

The starting point for us was really having people who are willing to get stuck in, so we were using the tools ourselves. And I wouldn’t call myself an expert in generative AI. But I know enough to know what’s reasonable to ask of people. So I think walking the walk and talking the talk, and practicing what you preach is really important.

I think it’s about giving people the space and the opportunity to experiment – so that’s not only giving them permission, but giving them access to the right tools and giving them time to actually learn. And we found value in bringing people from different departments across the business together to share learnings and work with colleagues they hadn’t worked with before.
