Apple invests in making Siri smarter about sound (among other things)
Apple just acquired a machine intelligence audio analysis company called Pop Up Archive, founded by Anne Wootton and Bailey Smith.
Making sound searchable
What’s interesting about this is what the company does:
“Pop Up Archive makes sound searchable,” the company said in a LinkedIn post.
“Media on the web is opaque and hard to discover. Pop Up Archive takes sound from anywhere and automatically creates timestamped transcripts and keywords, indexed so you can play back search terms exactly where they are found in the audio.”
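The core idea described above — timestamped transcripts indexed so a search term can be played back at the exact spot it occurs — can be sketched in a few lines. This is an illustrative toy, not Pop Up Archive's actual code; it assumes a speech-to-text engine has already produced (timestamp, word) pairs, and all names are hypothetical.

```python
# Sketch of searchable, timestamped audio transcripts (illustrative only).
from collections import defaultdict

def build_index(transcript):
    """transcript: list of (start_seconds, word) pairs from speech-to-text.
    Returns a mapping from each lowercased word to the timestamps where it was spoken."""
    index = defaultdict(list)
    for start, word in transcript:
        index[word.lower()].append(start)
    return index

def search(index, term):
    """Return the playback positions (in seconds) where `term` occurs in the audio."""
    return index.get(term.lower(), [])

# A tiny fake transcript, as a speech-to-text engine might emit it:
transcript = [
    (0.4, "welcome"), (1.1, "to"), (1.3, "the"), (1.5, "archive"),
    (3.2, "search"), (3.8, "the"), (4.0, "archive"), (4.9, "anywhere"),
]
index = build_index(transcript)
print(search(index, "archive"))  # → [1.5, 4.0]
```

A real system would layer stemming, phrase queries, and keyword extraction on top, but the playback trick is just this: keep the word-level timestamps from transcription and index on them.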
Above: Company founder Anne Wootton explained a little about the tech at the Library of Congress in November 2013.
The LinkedIn post revealed plans to build smart transcription tools that could help media companies improve productivity and monetize content by adding a semantic SEO-ready layer to media. The tools it provided included an API and widgets for CMS platforms.
“Pop Up Archive will be shutting down operations and ending support for Pop Up Archive on November 28, 2017,” Pop Up Archive wrote on October 11, promising customers free November service, rebates, and an easy way to download their existing assets.
This was a real product
Before its acquisition, the company had been working with the Public Radio Exchange to create collections of sound from journalists, media organizations, and oral history archives from around the world. The solution was already in demand.
A blog post (now deleted) described this work. A more recent audio clip from the DPLA explains how the company worked on a project to expand the use and discoverability of an audio collection.
That podcast thing
Other users included ESPN, and the company’s companion Audiosear.ch service was used to transcribe podcasts during last year’s US presidential election.
Audiosear.ch provided a full-text search and intelligence engine for podcasts and radio. “We transform speech into text, then analyze and index it to create the deepest database of podcasts and radio in the English language,” the company said.
Here’s another video that shows a little about how this worked, along with a link to more information.
How could Apple use this?
There has been a lot of focus on how this tech could help improve and enhance how Apple distributes podcasts – I’m guessing because Apple’s iTunes head, Eddy Cue, promised new podcast features earlier this year.
However, I also think there are big implications for enhancing Siri’s capacity to search through and retrieve spoken audio from multiple sources. The technology could also find a home in creative apps such as Final Cut, Logic, and GarageBand, and holds promise for accessibility and dictation on Apple’s platforms. (iBooks, talking books, and even iTunes U may benefit from this tech if applied assertively across Apple’s platforms.)
Not only that, but the company’s analytics and machine intelligence teams are bound to want to get to know this tech.
I don’t suppose we’ll learn how Apple intends to deploy these technologies for a year or two, as it typically takes around that long for acquisitions to be woven into its products.
While we wait, Apple is keeping its cards close, as ever, telling TechCrunch:
“Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans.”
All the web pages relating to both the parent and spin-off company have been taken offline, but you can still find a few traces.