JANUARY 4, 2021

What News Teams Should Know about AI-Powered Search Tools


There's a real urgency behind finding the right footage to illustrate a news story, and news teams are leaning more and more on AI-powered search tools for media management. When news breaks, a piece of archival b-roll that once felt unimportant may suddenly be the visual centerpiece of the report—if only the editor can get their hands on it in time. AI helps to bridge that time gap.

In a report titled Media Tech Trends - Artificial Intelligence, the IABM says AI adoption in broadcast and media is growing. "Nearly one quarter of broadcasters say that they have already deployed AI, while over 60% of them are likely or somewhat likely to deploy AI in the next 2-3 years," the report says.

A newsroom's media asset management system benefits from AI-powered search features that use rich metadata to surface the correct media quickly. These search tools are evolving rapidly; technology decision-makers can keep pace by learning not only what features are available now, but also what's coming down the pike.

Facial Recognition

Computer vision, and specifically facial recognition, serves a practical purpose in the newsroom, helping producers and editors track down clips of particular people in a pinch. Some large vendors leverage their own databases, which are seeded with facial and image recognition data on public figures and public places or buildings.

Consider, for example, a news network covering a large political rally. For the day's coverage, the team has been tasked with highlighting certain public figures known to have been in attendance. Someone might have to comb through hours of footage to find the right people, unless they're using an AI-powered database seeded with facial recognition data on their subjects. Those databases can also be "trained" to recognize less public figures, but that information has to be added to the data sets.
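
As a rough sketch of what this looks like in practice, the Python below asks a cloud vision service for every timestamp where a named public figure appears in a piece of rally footage. It assumes AWS Rekognition Video's celebrity recognition as one concrete example of a vendor database seeded with public figures; the bucket, file, and name are placeholders, and the job polling is simplified.

```python
# Sketch: find every timecode where a named public figure appears in rally footage.
# Assumes AWS Rekognition Video celebrity recognition; bucket, key, and the target
# name are placeholders, and polling is simplified (production code would use SNS).
import time
import boto3

rekognition = boto3.client("rekognition")

def find_person_timecodes(bucket: str, key: str, person_name: str) -> list[float]:
    """Return timestamps (in seconds) where the named public figure is detected."""
    job = rekognition.start_celebrity_recognition(
        Video={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    job_id = job["JobId"]

    # Wait for the asynchronous video analysis to finish.
    while True:
        result = rekognition.get_celebrity_recognition(JobId=job_id)
        if result["JobStatus"] != "IN_PROGRESS":
            break
        time.sleep(5)

    timecodes = []
    while True:
        for hit in result.get("Celebrities", []):
            celeb = hit["Celebrity"]
            if celeb["Name"].lower() == person_name.lower() and celeb["Confidence"] > 90:
                timecodes.append(hit["Timestamp"] / 1000.0)  # milliseconds -> seconds
        token = result.get("NextToken")
        if not token:
            break
        result = rekognition.get_celebrity_recognition(JobId=job_id, NextToken=token)
    return timecodes

# Example (placeholder bucket, file, and name):
# print(find_person_timecodes("rally-footage", "day1/cam2.mp4", "Jane Example"))
```

An editor could jump straight to those timecodes instead of scrubbing through hours of material.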

Speech-to-Text and Phonetic Search

A similar slog can exist when tracking down the perfect sound bite. Traditionally, searching interview clips for a particular person saying a certain phrase has required someone to scour pages of transcripts, hoping a matching timecode exists. Even more challenging, up-to-the-minute coverage might require watching footage as it feeds in and hoping to catch the right moment. AI-backed speech-to-text can expedite the process by pinpointing the clips where someone said specific words. Consider the political rally example: if the same public figures spoke several times throughout the event, speech-to-text could help reporters find mentions of a particular keyword or topic for the ideal sound bite.
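
To illustrate the idea, here is a minimal sketch of the search step once a speech-to-text engine has produced word-level timings. The transcript structure is an assumption; real engines expose different formats, but most can export per-word timestamps.

```python
# Sketch: given time-stamped speech-to-text output, pinpoint where a keyword was spoken.
# The transcript format (a list of words with start/end times per clip) is an assumption.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds from the top of the clip
    end: float

def find_sound_bites(transcripts: dict[str, list[Word]], keyword: str) -> list[tuple[str, float]]:
    """Return (clip_id, timecode) pairs where the keyword was spoken."""
    keyword = keyword.lower()
    hits = []
    for clip_id, words in transcripts.items():
        for word in words:
            if word.text.lower().strip(".,!?") == keyword:
                hits.append((clip_id, word.start))
    return hits

# Example: two interview clips from the rally, searched for the word "infrastructure".
transcripts = {
    "cam1_1400": [Word("We", 0.0, 0.2), Word("need", 0.2, 0.5), Word("infrastructure", 0.5, 1.4)],
    "cam3_1630": [Word("Thank", 0.0, 0.3), Word("you", 0.3, 0.5), Word("all", 0.5, 0.8)],
}
print(find_sound_bites(transcripts, "infrastructure"))  # [('cam1_1400', 0.5)]
```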

Newsrooms can also leverage phonetic search, which matches spoken words or phrases by how they sound rather than by an exact transcript, across all of the content managed by the system. Combining speech-to-text capabilities with phonetic search is the best bet for a streamlined workflow.
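
A rough sketch of why the phonetic layer helps: the simplified Soundex code below matches words by how they sound, so a name the transcription engine spells differently still surfaces. Production phonetic search engines use far more sophisticated models, and the clip index here is a made-up stand-in for a MAM's content.

```python
# Sketch: phonetic matching as a complement to exact transcript search, so a name the
# speech-to-text engine renders as "Steven" still matches a query for "Stephen".
# Simplified Soundex; real phonetic search is considerably more sophisticated.

SOUNDEX_MAP = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
               **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(word: str) -> str:
    """Return a simplified 4-character Soundex code for a word."""
    word = "".join(ch for ch in word.lower() if ch.isalpha())
    if not word:
        return ""
    first, codes = word[0].upper(), []
    prev = SOUNDEX_MAP.get(word[0], "")
    for ch in word[1:]:
        code = SOUNDEX_MAP.get(ch, "")
        if code and code != prev:
            codes.append(code)
        if ch not in "hw":  # h and w do not reset the previous code
            prev = code
    return (first + "".join(codes) + "000")[:4]

def phonetic_hits(index: dict[str, list[str]], query: str) -> list[str]:
    """Return clip IDs whose transcript contains a word that sounds like the query."""
    target = soundex(query)
    return [clip for clip, words in index.items()
            if any(soundex(w) == target for w in words)]

index = {
    "presser_a": ["senator", "steven", "spoke", "first"],
    "presser_b": ["crowd", "estimates", "were", "revised"],
}
print(phonetic_hits(index, "Stephen"))  # ['presser_a']
```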

OCR, Pattern Recognition, and More

Newsrooms can also look to optical character recognition (OCR) to detect and extract any text that's displayed in the footage, automatically enriching it with helpful metadata. AI has another role to play in quality assurance, where it can analyze footage for any adult or otherwise inappropriate content, pick up undesirable frame patterns, or even analyze the footage by sentiment level. And don't forget it's possible to lean on AI for automated subtitling as well.
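
As one hedged example of the OCR piece, the sketch below pulls on-screen text from a single keyframe and attaches it to an asset record as searchable metadata. It assumes AWS Rekognition's image text detection as the OCR engine; the keyframe path and the asset structure are placeholders for whatever a real MAM uses.

```python
# Sketch: enrich an asset's metadata with on-screen text pulled from one keyframe.
# Assumes AWS Rekognition image text detection; paths and the asset record shape
# are placeholders for a real MAM workflow.
import boto3

rekognition = boto3.client("rekognition")

def ocr_keyframe(path: str, min_confidence: float = 80.0) -> list[str]:
    """Return lines of text that Rekognition detects in a single keyframe image."""
    with open(path, "rb") as f:
        response = rekognition.detect_text(Image={"Bytes": f.read()})
    return [d["DetectedText"]
            for d in response["TextDetections"]
            if d["Type"] == "LINE" and d["Confidence"] >= min_confidence]

def enrich_asset(asset: dict, keyframe_path: str) -> dict:
    """Attach OCR-derived on-screen text to an asset record as searchable metadata."""
    asset.setdefault("metadata", {})["onscreen_text"] = ocr_keyframe(keyframe_path)
    return asset

# Example (placeholder IDs and paths):
# asset = enrich_asset({"id": "rally_cam2_0012"}, "keyframes/rally_cam2_0012.jpg")
```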

Think back again to the political rally: throughout the coverage of the event, b-roll might be required to illustrate the sound bites found earlier, or to provide color behind a reporter's commentary. With AI-supported OCR and pattern recognition, editors can target or avoid rally signs bearing particular text and filter content more quickly, reducing the risk of inadvertently airing material that violates the network's standards.
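
Continuing the sketch, once that OCR text and a moderation flag live in the metadata, targeting or excluding b-roll becomes a simple filter. The field names used here ("onscreen_text", "flagged_content") are assumptions about how a MAM might store the enrichment.

```python
# Sketch: select clean b-roll by filtering on OCR text and moderation flags already
# stored in asset metadata. Field names and the sample assets are illustrative only.

def select_broll(assets: list[dict], want_text: str | None = None,
                 avoid_text: tuple[str, ...] = ()) -> list[dict]:
    """Keep clean clips that show the wanted sign text and none of the avoided phrases."""
    selected = []
    for asset in assets:
        text = " ".join(asset["metadata"].get("onscreen_text", [])).lower()
        if asset["metadata"].get("flagged_content"):             # failed QC/moderation
            continue
        if want_text and want_text.lower() not in text:          # must show this sign text
            continue
        if any(phrase.lower() in text for phrase in avoid_text): # must not show these
            continue
        selected.append(asset)
    return selected

assets = [
    {"id": "cam1_0003", "metadata": {"onscreen_text": ["VOTE TODAY"], "flagged_content": False}},
    {"id": "cam2_0017", "metadata": {"onscreen_text": ["VOTE TODAY"], "flagged_content": True}},
]
print([a["id"] for a in select_broll(assets, want_text="vote today")])  # ['cam1_0003']
```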

The combination of all of these tools can facilitate network coverage that is focused and targeted, all at the much faster pace required by the 24-hour news cycle.

What's in Store for AI-Powered Search Tools?

Right now, most newsrooms are using AI to automate routine tasks and set themselves up with nimbler workflows. We'll likely see more widespread adoption as more vendors embrace some of the cutting-edge features that AI can offer, such as the ability to attach metadata to files while they are still growing or being ingested.

Imagine how quickly a news story could come together if your media asset management system could seek out b-roll to match search text even as the editor types it in, all from footage that was just shot in the field. News producers and reporters could effectively cut packages to go with their stories as they write the text. In the meantime, while this aspect of the technology matures, media can still be leveraged the "old-fashioned" way: pulled on the spot and then reused as its metadata populates in the system.
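
How that might work under the hood, in very rough strokes: the toy index below accepts metadata for still-growing files chunk by chunk and answers prefix queries, so matches can appear while the editor is still typing. A production system would use a real search backend; this only illustrates the shape of the workflow.

```python
# Sketch: a toy in-memory index that ingests metadata from still-growing files as each
# chunk arrives and supports prefix search, so partial matches surface as the editor types.
from collections import defaultdict

class GrowingIndex:
    def __init__(self) -> None:
        self._terms: dict[str, set[str]] = defaultdict(set)  # term -> asset IDs

    def ingest_chunk(self, asset_id: str, metadata_terms: list[str]) -> None:
        """Add metadata extracted from one ingested chunk of a still-growing file."""
        for term in metadata_terms:
            self._terms[term.lower()].add(asset_id)

    def search_as_you_type(self, partial_query: str) -> set[str]:
        """Return asset IDs whose metadata contains a term starting with the typed text."""
        prefix = partial_query.lower()
        return {asset for term, assets in self._terms.items()
                if term.startswith(prefix) for asset in assets}

index = GrowingIndex()
index.ingest_chunk("field_cam7", ["rally", "podium", "crowd"])  # first minutes arrive
print(index.search_as_you_type("pod"))                          # {'field_cam7'}
index.ingest_chunk("field_cam7", ["confetti", "balloons"])      # the file keeps growing
print(index.search_as_you_type("conf"))                         # {'field_cam7'}
```

The same footage keeps surfacing in new searches as its metadata fills in, which is the behavior the growing-file scenario above describes.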

Whether considering the tools as they exist now or considering where the future may lead, one thing is certain—implementing AI tools into the broadcast production process can help news producers meet increasing demands for up-to-the-minute reporting and story creation.

Amy Leland is a film director and editor. Her short film, Echoes, is available on Amazon Video. She is an editor for CBS Sports Network.