Thursday, August 10, 2017

Audioburst launches a web and mobile search engine for audio news



Audio plays an increasingly important role in how consumers connect with information, thanks to the popularity of podcasts and other short-form audio programming, improvements in voice technologies, and the growing consumer adoption of smart home devices like the Amazon Echo and Google Home. Today, a company called Audioburst is unveiling a new search engine designed to connect you to the information found in audio content from podcasts and programs aired on the radio.

Tel Aviv-based Audioburst, which also has staff in New York and Palo Alto, has been developing its technology for an audio search engine and content library over the past two years.

The idea is that much of the information arising from daily news programs or topical podcasts – or even TV news – is not available in an organized, searchable fashion. It’s broadcast over the radio, and then it largely disappears; or it’s only heard by those who subscribe and then listen to a particular episode of a podcast series, for example.

The larger goal is to make this sort of audio content available across platforms – including from Audioburst’s own search engine; major search engines like Google and Bing; from smart assistant apps, like Google Assistant; and from voice platforms like the Alexa-powered Echo speakers and Google Home.

Audioburst had previously rolled out its Google Assistant integration, its “News Feed” skill for smart devices, and developer API. Now, the company is unveiling Audioburst Search, a web and mobile-optimized search engine that helps you find, discover, and listen to audio news.

The product works by ingesting audio content from a number of sources. In some cases, Audioburst is proactively scouring the web for available live streams to import. However, the company is largely focused on partnership deals with radio stations, radio programs, and podcasters. It’s also starting to venture into the TV space, with plans to index TV news, and is chatting with a small handful of auto manufacturers about integrating Audioburst into their own in-car entertainment systems.

To make audio content searchable, the company pulls in millions of audio segments daily from more than 1,000 sources. While it’s not disclosing a full list of partners, a look at its search results shows that it’s indexing the likes of Bloomberg Radio, some Fox radio programs, numerous radio stations, and many podcasts, particularly in the tech space.

After ingesting the audio, Audioburst applies AI and natural language processing to understand not just what’s being discussed, but also the context.

It doesn’t just match users’ search queries to the exact words spoken, either. For example, it knows that someone speaking about the “president” in a program about U.S. politics was referring to “Donald Trump,” even if they didn’t use his name.
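Audioburst hasn’t published how its models do this, but the idea of resolving a generic mention to a specific entity based on the program’s topic can be illustrated with a minimal sketch. The lookup table and function names here are hypothetical, not Audioburst’s actual pipeline:

```python
# Hypothetical sketch of context-dependent entity resolution.
# The mapping below is illustrative only; a real system would use
# trained NLP models rather than a hand-built table.
CONTEXT_ENTITIES = {
    ("u.s. politics", "president"): "Donald Trump",   # as of 2017
    ("u.s. politics", "white house"): "Trump administration",
}

def resolve_mention(mention, program_topic):
    """Map a generic mention to a specific entity given the program's topic.

    Falls back to the literal mention when no contextual mapping exists.
    """
    key = (program_topic.lower(), mention.lower())
    return CONTEXT_ENTITIES.get(key, mention)
```

With such a mapping, a query for “Donald Trump” could surface a segment where only “the president” was said aloud.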

The audio content is then tagged and organized in a way that computers understand, making it searchable. And it’s broken into smaller sections – clips it calls “bursts” – which Audioburst identifies by understanding when the audio changes.

It can identify when an ad break starts, when there are station breaks, when a new speaker joins, when there are pauses, and other signals that tell it when to start and end an audio clip – a process that all happens automatically.
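One of the signals mentioned above, pauses in the audio, can be sketched in a few lines: split a stream wherever its energy drops below a threshold for long enough. This is a simplified illustration under assumed parameters, not Audioburst’s actual segmentation algorithm, which combines many more cues:

```python
# Minimal sketch of silence-based audio segmentation: one of several
# signals (alongside ad breaks, speaker changes, etc.) a system like
# Audioburst's could use to decide where a "burst" starts and ends.
# Frame size and thresholds are illustrative assumptions.

def split_on_silence(samples, frame_size=100, energy_threshold=0.01,
                     min_silence_frames=3):
    """Split a 1-D sequence of audio samples into (start, end) segments.

    A frame is 'silent' when its mean squared amplitude falls below
    energy_threshold; a run of min_silence_frames silent frames closes
    the current segment.
    """
    segments = []
    current_start = None
    silent_run = 0
    n_frames = len(samples) // frame_size
    for i in range(n_frames):
        frame = samples[i * frame_size:(i + 1) * frame_size]
        energy = sum(s * s for s in frame) / len(frame)
        if energy >= energy_threshold:
            if current_start is None:
                current_start = i * frame_size  # segment begins
            silent_run = 0
        else:
            silent_run += 1
            if current_start is not None and silent_run >= min_silence_frames:
                # close the segment where the silence began
                segments.append((current_start, (i - silent_run + 1) * frame_size))
                current_start = None
    if current_start is not None:  # audio ended mid-segment
        segments.append((current_start, n_frames * frame_size))
    return segments
```

For example, a signal with speech, a long pause, then more speech would yield two segments, each of which could become its own searchable clip.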

This allows its search engine to not just point you to a program or show where a topic was discussed, but the specific segment within that show where that discussion took place. (If you choose, you can then listen to the full show, as the content is linked to the source.)

As the technology is further developed, its ability to understand consumers’ personal preferences will be improved. For example, if you’re a fan of a particular sports team, and they won their last game, you might hear audio content featuring more praise and cheering from the commentators; but if your team lost, the news returned may have a less emotional tone.

Audioburst isn’t there yet – it’s only beginning this process of understanding listener behavior. But in the long term, the company believes this would pave the way for things like personalized audio advertisements, alongside a daily news briefing, for example. It may also choose to generate revenue through more traditional methods, like sponsorships and promoted content. Revenue would be shared with the audio content’s producers.

But these are goals that are still a year or two out, we understand.

While the startup is making its technology available across platforms – web, mobile and, one day, cars – it sees particular potential in the voice-powered smart device market.

“Of course, voice assistants and smart speakers are the natural interfaces to use our library, because it’s all about voice,” explains Assaf Gad, VP Marketing and Strategic Partnerships at Audioburst. 

“It allows you to ask a question and get a result. Instead of Alexa reading it out loud to you in her voice, you can get the actual speaker,” he says, noting that audio could come directly from a public figure’s sound bite or the host of the audio program itself. “It’s a more human voice,” Gad adds.

The company recently closed on a $6.7 million round of funding led by Japanese speech recognition tech company Advanced Media to further develop its underlying technology and its consumer-facing products.

Its audio search engine, Audioburst Search, is now live on web and mobile.


Regards 

Pralhad Jadhav  

Senior Manager @ Knowledge Repository  
Khaitan & Co 

Upcoming Lecture | ACTREC - BOSLA Annual lecture series (125th birth anniversary of father of library science, Padmashree Dr. S. R. Ranganathan) on Saturday, 12th August 2017 at Advanced Centre for Treatment, Research and Education in Cancer (ACTREC), Kharghar, Navi Mumbai.  (Theme | 'MakerSpace')



Twitter Handle | @Pralhad161978
