Voice recognition is useful. Beyond Verbal has raised $1 million to prove that emotion recognition could be too
BY NATHANIEL MOTT
ON JULY 23, 2013
Anyone could tell you that communication isn’t necessarily what you say but how you say it. We have evolved to communicate through our gestures, posture, pitch, and cadence as well as our vocabularies, allowing us to convey different emotions without necessarily changing the words we use. Humans can pick up on those signals fairly easily. Machines can’t — and that’s exactly what Beyond Verbal, an Israel-based startup, is trying to change.
The company, which previously raised a $2.8 million seed round led by Genesis Angels, is today announcing that it has raised a $1 million follow-on round led by Winnovation to continue developing its emotion recognition software. The round will allow the company to refine its product and introduce APIs that will let developers of other services incorporate its emotion recognition tech into their own products.

Beyond Verbal’s service is simple in execution, if not in origin. The software is built atop 18 years of research from physicists and neuropsychologists who have essentially taught our computers and smartphones how to divine our emotions after listening to snippets of audio. Put another way: tools like Siri and Google Voice Search understand what you’re saying. Beyond Verbal wants to teach them to understand how you’re saying it.
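To give a rough sense of what such an API might look like from a developer’s side, here is a minimal, purely illustrative sketch in Python. The endpoint URL, credential, request fields, and response keys are all hypothetical assumptions for illustration, not Beyond Verbal’s actual interface; only the general shape (upload a short audio clip, get back an emotion summary) follows the article.

```python
# Hypothetical sketch: send a short audio clip to an emotion-analysis API and
# read back a mood summary. The URL, fields, and response shape are assumptions
# for illustration only, not Beyond Verbal's real API.
import requests

API_URL = "https://api.example.com/v1/emotion-analysis"  # hypothetical endpoint
API_KEY = "your-api-key-here"                            # hypothetical credential

def analyze_clip(path: str) -> dict:
    """Upload an audio clip and return the service's emotion analysis."""
    with open(path, "rb") as audio:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": audio},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_clip("interview_snippet.wav")
    # Assumed response fields, echoing what the demo reported: how the speaker
    # feels, what they are trying to convey, and how controlled they sound.
    print(result.get("mood"), result.get("conveyance"), result.get("control"))
```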
Yuval Mor, the company’s chief executive, and Dan Emodi, its VP of marketing and strategic accounts, demonstrated Beyond Verbal’s capabilities for me last week. Clips of Barack Obama speaking, an interview with the late Princess Diana, and a clip from “Dirty Harry” were all analyzed to determine how the speaker felt, what they were trying to convey, and how much control they had over their own speech.
Mor also used his company’s software to show me what Emodi was feeling as he answered questions and demoed the product. Emodi, like all marketers, was predictably excited about his subject. He was also using forceful language, in careful control of his emotions, and doing his best to persuade me to buy into Beyond Verbal’s vision. The software, as far as I could tell, was working as promised. (You can try the company’s software for yourself — or a limited version of it, anyway — on its website.)
Emodi and Mor say that they want Beyond Verbal to become an important aspect of all kinds of technologies, whether it’s a media player that changes a playlist based on your mood, a movie recommendation service that knows what kind of film you might enjoy the most given your current state of mind, or something else entirely.
Voice recognition is becoming an increasingly important aspect of modern computing, what with all of the virtual assistants from Apple, Google, and other companies making their way to our smartphones and computers. Beyond Verbal is hoping that emotion recognition will one day become just as important to modern software. Just don’t blame it when you’d rather talk to your smartphone than your family because Siri “really gets you,” and Google totally understands where you’re coming from, no matter what you say or do.
Beyond Verbal, in some ways, is reminiscent of weather applications like Weathermob, in that it’s trying to add context to otherwise discrete data. Location apps, like voice-enabled applications, know where you are and what you’re doing; they don’t know what the weather’s like or how it’s going to change in the next few minutes. If there’s anything more capricious than the weather, it’s human emotion, and adding that context to your requests and demands could help these services better figure out what you should do, who you should do it with, and so on.
It’s not about what you’re saying. It’s about how you’re saying it.