
At MWC Shanghai, Telecom Review carried out an interview with Rana Gujral, CEO of Behavioral Signals. The Los Angeles-based tech start-up focuses on enabling emotionally intelligent conversations with artificial intelligence (AI). Their use of voice data to deduce emotions has helped them build engines that are more emotionally savvy. Their award-winning flagship product, OliverAPI, allows enterprises to track emotions and behaviors in natural language conversations to get a complete view of related key performance indicators. By recognizing emotional cues and performing behavioral prediction analytics, their technology provides enriched conversational insights for interactions with voice assistants, chatbots, robotic virtual assistants, social healthcare robotics and mobile voice assistants.
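
To make that concrete, here is a minimal, purely illustrative sketch of how an enterprise might call such a speech-analytics service over HTTP. The endpoint URL, request fields and response shape below are hypothetical assumptions for illustration, not the actual OliverAPI contract.

```python
# Hypothetical sketch of calling a speech-emotion analytics API.
# The endpoint URL and response fields are illustrative assumptions,
# NOT the actual OliverAPI interface.
import requests

API_URL = "https://api.example.com/v1/analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                        # placeholder credential

def analyze_call(audio_path: str) -> dict:
    """Upload a recorded conversation and return emotion/behavior scores."""
    with open(audio_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
        )
    response.raise_for_status()
    # Assumed response shape, e.g. {"anger": 0.12, "engagement": 0.87, ...}
    return response.json()

if __name__ == "__main__":
    print(analyze_call("support_call.wav"))
```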

Could you comment on the current tech landscape in Asia, and on where you see the tech scene going over the next few years?

It’s very impressive. We’ve seen in the last 5-10 years that the ecosystems in Asia have matured to not only replicate the use cases that have succeeded in the past, but also to come up with unique intelligent solutions that address specific local business and population challenges. That hyperfocus on targeted solutions is where the magic happens. That is how the next “unicorns” are born.

We are also seeing that there’s an awareness around certain future technologies that are focused not only on AI, but also on data and telecommunications. A convergence of these technologies will redefine the next 20, 30 and even 50 years. Asia is waking up and it’s not just following; it’s actually leading, and it’s pretty remarkable.

In your opinion, what does it mean to be emotionally intelligent as a human being and what does it mean to be emotionally intelligent as a machine?

Emotional intelligence is a complicated science; the right technical term is affect.

We, as humans, project signals in terms of how we’re feeling. Whether it’s passion, anger or sadness, there are behavioral signals that are translated from emotional cues such as “am I engaged or am I disengaged?”. Therefore, when you talk about affect and emotional signals, we project them through a variety of cues which include facial expression, body language and tone of voice.

Our particular focus as a company has been around deducing emotions from the voice aspect, and the way we do it is through our focus on tonality. We cue in on not just what is being said but how it is being said. We put emphasis on the delivery behind the actual words being said.
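
As a rough illustration of what “how it is being said” means in signal terms, the sketch below extracts simple prosodic cues (pitch and loudness statistics) from a recording using the open-source librosa library. It is a minimal approximation of the general idea, not Behavioral Signals’ actual pipeline.

```python
# Minimal sketch: extracting prosodic cues (pitch and loudness) that
# capture HOW something is said, independent of the words themselves.
# This approximates the general idea, not Behavioral Signals' pipeline.
import librosa
import numpy as np

def prosodic_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)           # mono audio at 16 kHz
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)  # pitch (fundamental frequency) track
    rms = librosa.feature.rms(y=y)[0]              # frame-level loudness (energy)
    return {
        "pitch_mean_hz": float(np.nanmean(f0)),    # raised pitch can signal arousal
        "pitch_var": float(np.nanvar(f0)),         # pitch variability ~ expressiveness
        "energy_mean": float(rms.mean()),
        "energy_var": float(rms.var()),
    }

print(prosodic_features("conversation.wav"))
```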

Can machines really be emotionally intelligent? How exactly?

Machines definitely can. One of the things that AI is focused on is getting machines to do things that humans can do, but to an even greater extent. In a sense, this allows software systems and other inanimate systems to have their own superpowers. For example, machines have an extraordinary computing power to continuously process large amounts of data, which humans simply cannot do for obvious natural reasons: we need to sleep, take breaks, eat and spend time doing things other than constantly computing data. So you have those capabilities in a machine that can be leveraged to bring intelligent use cases into the picture. However, what we’re seeing now is that we’re interacting with machines more and more, and it’s not just delegating a task for a machine to do but actually interacting with the machine and talking to it.

When it comes to virtual assistants like Google Assistant, Alexa and Cortana, we’re literally treating this inanimate entity as a human substitute. When we’re interacting with machines through voice, we need to understand how we interact with other humans. When I’m interacting with you and you’re saying something to me and I’m saying something back, I’m not just cueing in on what you’re saying; I’m also cueing in on how you’re saying it and trying to empathize with your cognitive state of mind, your feelings and the emotions behind the words you’re using.

Today, that interaction is missing between a human and a machine, and as a result, a lot of these interactions don’t really offer a superior user experience; they’re just very transactional. Our goal is to give these machines the ability to be as good as humans at processing affect and the emotional state of mind, so that they can be more relatable and offer a much more engaging experience to a fellow human.

At Behavioral Signals, we apply our technology to robotics and virtual assistants, and we enable these robots to be caregivers or retail assistants, to understand the human state of mind and respond in a more empathetic manner.

So, why voice? What differentiates Behavioral Signals from its competitors in the ecosystem of emerging tech?

Voice is a very powerful barometer for deducing the emotional state of mind.

In fact, there is a recent study done at Yale University by Professor Kraus, a leading American psychologist. Professor Kraus used a video feed of an interaction and turned the video off. He then measured the emotions based simply on the audio and benchmarked the result. He then turned the video back on and took in two data points: you’re looking at the audio but you’re also looking at facial expressions, and you would expect that the read on emotions would become more accurate. What he found was that it became less accurate, and he was really surprised by that!

That was essentially the crux of the study, which made him think, “why is it that when I’m looking at both the visual and the audio, I’m actually getting a less accurate read than when I’m looking at just the audio?”

What he found was that as humans we are very adept at masking our emotions through our facial expressions, but we’re not very good at doing the same through our tone of voice. So if you’re just cueing in on the tone of voice, which is like listening to somebody on the phone, you actually have a better read on their emotional state of mind than when you’re looking at the person and listening to them.

We send out a lot of false alerts through our facial expressions, and that throws people off: I might be sensing something in somebody’s voice while their facial expressions tell me otherwise, and that’s the interesting part.

So for us, that brings an opportunity, because with machine learning or AI, the biggest challenge is access to high-quality data. For us, the high-quality data is voice data, and a variety of the use cases we apply ourselves to provide us with audio data, for instance call centers or contact centers that involve interactions with virtual assistants but no visual feedback. Not only is it more accurate to focus on voice, but there’s also more data available for voice versus visual.

The founders at Behavioral Signals have been researching the analysis of voice interactions, as well as the emotional and behavioral state of mind behind them, for 20 years. We are confident that our focus on voice gives our team a highly competitive advantage.

 

What are you working on right now?

Our core focus has been around delivering a platform that deduces emotions and behaviors. We’ve also built some very specialized intent prediction engines. For example, we’re working with a client who is a leading player in the speech analytics business and operates heavily in the debt collection market. We built a prediction engine where, simply by analyzing a 10-15 minute voice conversation, we can predict with over 82% accuracy whether the debt holder is going to pay their debt or not. It is essentially predicting what will happen in the future based on processing that voice conversation, and that’s been a ground-breaking capability that we’re bringing to the market.
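
As a heavily simplified illustration of how such an intent prediction engine could be structured, the sketch below trains a binary classifier on per-call voice features to predict repayment, using scikit-learn and synthetic stand-in data. The features, data and model choice are assumptions for illustration; the actual engine is proprietary.

```python
# Hypothetical sketch of an intent prediction engine: a binary classifier
# over per-call voice features predicting whether a debt holder will pay.
# Features, data and model are illustrative stand-ins, not the real engine.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in data: one row of per-call features (e.g. pitch statistics,
# energy, speaking rate, engagement score) and a pay/no-pay label.
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```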

Take retail, for instance, and the propensity to buy: if you’re talking to a client and trying to sell something to them, you can actually predict whether the client is ready to buy or not, and react to that situation accordingly.

So what are some of the drawbacks of using AI or voice-powered technology in a social context?

I believe the main concern around using it in a social context is privacy. I think that’s a legitimate concern that most people in today’s connected ecosystem are worried about.

We don’t really have much privacy left as consumers, and for the most part it’s about choice: we have adopted these experiences in exchange for giving up privacy. That being said, people are concerned about not being able to keep their emotions private, and that is a legitimate concern.

The technology also offers tremendous potential, as in the case of one of our clients who is using it to predict the propensity for suicidal behavior; they’re working on a platform that caters to patients with depression.

Depression is a major epidemic; it’s one of the biggest killers in the world, and there’s not much help available. People don’t understand or really identify when someone is depressed, so the suicide epidemic is very real, especially in the developed world. If we could build something that could help that community, that would be fantastic. That’s just one example, not just from a business perspective but of generally helping humanity.

On the other hand, we need to make sure that the privacy concerns are managed properly and that there are proper disclosures in place.

From a wellbeing perspective, how would you use your AI-powered technology to tackle issues?

One of the things that we’re working on today centers on social robots. There are amazing implementations of robots coming into play: specialized robots designed for a specific purpose, such as caring for the elderly, which is a major problem as our population ages around the world.

These companion robots not only mitigate loneliness but also remind people about their medication and take care of their basic needs, such as ordering food or refilling a prescription.

So if you have a care robot, you would need it to possess the ability to understand emotions and the human state of mind, not just listen to the words being said.

Another potential implementation is building specialized toys for kids with emotional challenges, to help them with their learning, personal growth, development and wellbeing. Such a toy can understand their state of mind and react more empathetically by being a lot more patient and available.

Additionally, sometimes the question isn’t whether a human or a machine should do a task, but rather who can do it better. Humans are not always the best at processing emotions, so for the most part this is not about replacing a human; it’s about doing something which I think a human can’t do as well.

With regards to the use of care robots for the elderly, do you see digital trust being a problem amongst them?

Yes, obviously we haven’t reached that point of adoption yet - I recognize that these concepts are still very much in the experimental stage. However, the experiences are there and the products are available in the market. The same could be said of the elderly’s initial challenge in adopting earlier innovations such as smartphones and social media, yet most of the elderly population have adapted to those. There’s always a point where people feel comfortable with a technology, and obviously that depends on getting a basic level of experience in place.

Necessity is the mother of invention. We’re living in nuclear families where people don’t have time to spend with each other, and those problems can’t be solved in any other way. So if you’re a senior who prefers to stay at home for a longer period of time, you need help: either you can afford to hire a full-time caregiver, which most people can’t, or you need a solution like this to be able to live a quality life.

Are there any moral issues surrounding the prospect of giving machines an element of emotional intelligence?

I think morality is a tough question; I don’t know if I’m equipped to judge that. Personally, I would say that with any technology, the morality is around proper disclosures and not making a choice on someone else’s behalf - providing the choice to experience what one wants or not.

What we can bet on is that we are going to depend on machines more and more - that’s something I strongly believe is going to happen. The second thing is that machines are going to become more intelligent.

With those two truths in mind, would you rather have that machine that you’re interacting with and depending on - which is very intelligent - also be emotionally aware or not? That’s the question we need to ask ourselves. And how would you respond to the same question in a human context?

If you have a human whom you depend on and who’s very intelligent but not emotionally aware, that is the clinical definition of a psychopath. Would we want machine psychopaths? An emotionally intelligent machine would typically be more ethical and fairer than a very intelligent machine that you depend on but that has no ability to process emotions.

 
