Have you ever shared a piece of news or information on your podcast that you couldn’t verify was true? Well, a new AI tool has just emerged that will make any of us think twice about doing that again.
NewsGuard and Barometer have just partnered to launch an AI-powered solution specifically designed to combat the spread of misinformation in podcasts. According to NewsGuard’s press release, the tool can track and detect potential misinformation shared by hosts in podcast episodes within seconds.
Let’s take a look at how this tool works and how it could force greater accountability, not just for the industry at large, but for podcast creators too.
How the AI Tool Works
The ‘Misinformation Detection Solution for Podcasts’ tool (catchy!) scrapes podcast content to find fake news. It works by combining Barometer’s AI-powered data identification algorithms with NewsGuard’s catalogue of ‘misinformation fingerprints’ to automatically detect the spread of false narratives in podcasts at the episode level.
Misinformation fingerprints are NewsGuard’s machine-readable descriptions of false narratives. They’re created manually by a team of experienced journalists, so some human input is involved. Each fingerprint contains a description of a specific false narrative the journalists have found online, plus a detailed debunk showing why the narrative is false. Debunks include contrary factual information and citations to verified sources, along with the keywords and hashtags associated with the narrative.
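NewsGuard hasn’t published the fingerprint schema, but based on the description above, a fingerprint might look something like this minimal sketch (every field name here is an assumption, not the real format):

```python
from dataclasses import dataclass

# Hypothetical sketch of a misinformation fingerprint, based only on
# NewsGuard's public description; the real schema isn't published,
# so every field name here is an assumption.
@dataclass
class MisinformationFingerprint:
    narrative: str        # the false claim, stated plainly
    debunk: str           # journalist-written factual rebuttal
    sources: list[str]    # citations to verified sources
    keywords: list[str]   # terms associated with the narrative
    hashtags: list[str]   # hashtags associated with the narrative
```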
Barometer’s algorithm then uses these ‘fingerprints’ as starting points it calls ‘data seeds’. The tool scans vast amounts of podcast content against these data seeds to identify specific episodes where a host may have repeated the false narrative.
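Barometer hasn’t disclosed its matching logic, but conceptually, the episode-level scan could look like this deliberately naive keyword sketch (reusing the hypothetical MisinformationFingerprint class above; the real system presumably runs semantic matching over speech-to-text output, not substring checks):

```python
def scan_episode(
    transcript: str,
    fingerprints: list[MisinformationFingerprint],
) -> list[MisinformationFingerprint]:
    """Flag fingerprints whose keywords show up in an episode transcript."""
    text = transcript.lower()
    flagged = []
    for fp in fingerprints:
        # Count how many of the fingerprint's keywords appear in the text
        hits = sum(kw.lower() in text for kw in fp.keywords)
        if hits >= 2:  # require multiple hits to cut down false positives
            flagged.append(fp)
    return flagged
```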
NewsGuard’s VP of Operations, Sruthi Palaniappan, has said that the aim of this new tool is to “combat the spread of misinformation, promote media literacy, and restore trust in trustworthy news through journalist-vetted data.”
A Fake False Narrative Example
So, for example, say I read something on social media about Drake marrying a giant dinosaur statue in Lake Mead.
I see people are sharing the story widely across social, so I assume it must be true. I decide to talk about the story on my podcast about Canadian R&B.
But as we all know, just because a story is everywhere online doesn’t mean it’s credible.
If NewsGuard’s journalists consider the narrative important or dangerous enough (which Drake’s wedding probably wouldn’t be; sorry, Drake), they will have created a misinformation fingerprint about this story.
Barometer’s AI algorithm will then use this fingerprint to scan swathes of podcast content and pull out any instances of the false narrative about Drake’s wedding.
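To see the pieces fit together, here’s the toy sketch from above applied to this entirely fictional Drake story (all data invented for illustration):

```python
# Entirely fictional fingerprint matching this article's made-up example
drake_fp = MisinformationFingerprint(
    narrative="Drake married a giant dinosaur statue in Lake Mead.",
    debunk="No such wedding took place; the story is fabricated.",
    sources=["https://example.com/fact-check"],  # placeholder URL
    keywords=["drake", "dinosaur statue", "lake mead"],
    hashtags=["#DrakeDinoWedding"],  # made-up hashtag
)

transcript = (
    "On today's Canadian R&B roundup: apparently Drake married that "
    "giant dinosaur statue out at Lake Mead. Everyone's sharing it!"
)

# All three keywords appear in the transcript, so the episode gets flagged
for fp in scan_episode(transcript, [drake_fp]):
    print("Flagged narrative:", fp.narrative)
    print("Debunk:", fp.debunk)
```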
What isn’t so clear at this point is whether the tool is smart enough to distinguish between peddling misinformation and simply discussing a popular false narrative. Could tools like this create discussion no-go areas for certain topics (vaccinations, for example) because they’re associated with false narratives?
Which takes us to our next point…
Enforcing Accountability in the Podcasting Industry
Any tool that seeks to make podcasting a more trustworthy medium should be good for the industry at large. The spread of false narratives has been a problem on social media for years. As a similarly unregulated medium, podcasting isn’t immune to misinformation either.
Ironically, AI is both exacerbating the fake news problem and trying to fix it. We recently reported on a company that launched an AI radio DJ solution, which works by scraping the most popular local news stories from social media and reading them out on local radio through an AI-generated voice.
Until now, there have been no tools or systems for catching misinformation spread through audio.
But accountability is important in podcasting. If no one does anything to regulate the medium, we could see an impact on advertising over time. Some advertisers are already cautious about working with podcasters because of brand-safety concerns. Tools like this could be a good way of restoring trust.
Studies have also shown that podcasts are now an even more effective medium for advertisers than TV and radio. This is something we need to protect.
But where things could get a little problematic is the idea that one (private) company like NewsGuard could have so much power in determining what qualifies as misinformation in podcasts. Sure, they’ve hired a team of ‘experienced journalists’ to pull out false narratives, but who’s holding NewsGuard to account? And what if they have a journalist on the team who creates a misinformation fingerprint about a narrative because of a conflict of interest?
And as mentioned earlier, is the tool intelligent enough to tell the difference between spreading misinformation narratives and simply having an open discussion about a (potentially divisive or controversial) topic?
Gaining diverse perspectives on complex subjects is a big part of podcasting’s magic. So while it’s great that we’re gaining a tool that promotes trust and accountability in podcasting, anything that seeks to censor discussion would be problematic. Here’s hoping NewsGuard will handle it the right way.
Building Accountability as Podcasters
As you probably guessed, this AI tool isn’t designed for podcasters per se, but for the people who work with them. That’s not to say it won’t impact your podcast, though. NewsGuard will sell the solution to podcast publishers, agencies, and advertisers so they can decide which podcasts they want to work with. So if NewsGuard flags your podcast as a misinformation spreader, it could get in the way of you securing future sponsorship opportunities.
In a nutshell, we’re reaching a stage in podcasting’s evolution where accountability matters—taking responsibility for being a trustworthy source matters.
We know that listeners consider podcasts a trusted source of news and information, even when they’re not an official news source. In fact, many listeners trust their favourite podcasts more than traditional broadcast media.
So clearly, as podcasters, we have a responsibility not to share misinformation with our listeners. Hopefully, AI tools like this can help us achieve it.