Interview: what are the challenges of personal data protection on voice interfaces?

The success of voice search and voice assistants is now well-established. The trend is growing steadily, with users choosing en masse to make their daily lives easier with devices such as Alexa, Siri or Google Assistant. But this popularity also raises the question of the security of the personal data – and voiceprints – that these devices collect. Yann Lechelle is the operations director of Snips, a company that offers a voice assistant solution designed to respect personal data. We asked him a few questions on the occasion of his presentation on November 30th at the Digital Tech Conference in Rennes.

Where do we currently stand regarding voice assistant technology?

Smart speakers are beginning to train consumers to use their voices to interact with everyday content, whether media content or transactional services such as e-commerce and music streaming. These voice interfaces first emerged with Siri, and more recently with Alexa and Cortana. The story isn’t much different from that of smartphones. Ten years ago, the smartphone emerged with the iPhone, then Android, followed by various competitors, rapidly developing into the ecosystem we know today, with several billion users worldwide. Today, the human-machine interface is moving towards a model of use in which it’s no longer necessary to take your device out of your pocket to interact with it: you need only speak out loud.

Can you explain Snips’ role in this context?

Snips is an emerging player developing a voice interface that serves as the equivalent of an assistant like Siri or Alexa, offered as a white label, with one distinguishing feature: privacy by design. This means respecting users’ personal data, and therefore also their voices. Snips does not send users’ voices to the cloud to process their requests: each search or request is processed locally, on the device, by the Snips embedded system.

The voice is a biometric signature, just like a fingerprint: it is part of our identity. It would seem ludicrous for a connected object to collect your fingerprint each time you press one of its buttons, yet that is essentially what happens when a device captures your voice and sends it to the cloud. Although these practices are relatively well accepted in China and the United States, they are more problematic in European culture. Europeans are less trusting, more sensitive, and more aware of the potential for abuse.

Today, with Alexa or Google Home, when I speak, the sample of my voice is sent to Amazon’s or Google’s servers in the United States. My request is processed there, and the result is sent back to the speaker. With Snips’ technology, the voice never leaves the object: it is processed on the device itself. For example, if I ask a voice assistant built into my home to turn on the light, the light turns on because the assistant sends a message directly to the lamp. If I ask Alexa to do the same, my voice is sent to Seattle and the command is sent back to my home to turn the light on. It’s absurd!
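To make the contrast concrete, here is a minimal sketch of what on-device handling of such a request could look like. The intent name ("turnOnLight") and the set_lamp() helper are illustrative assumptions, not the actual Snips API; the point is simply that recognition and action both stay on the device, with no cloud round trip.

```python
# Minimal sketch of on-device intent handling: no audio or transcript leaves the device.
# "turnOnLight" and set_lamp() are illustrative assumptions, not the actual Snips API.

def set_lamp(on: bool) -> None:
    """Placeholder for the local call that drives the lamp (GPIO pin, Zigbee bridge, etc.)."""
    print("Lamp is now", "on" if on else "off")

# Map each intent recognized by the embedded speech/NLU pipeline to a local action.
HANDLERS = {
    "turnOnLight": lambda slots: set_lamp(True),
    "turnOffLight": lambda slots: set_lamp(False),
}

def handle_intent(intent_name: str, slots: dict) -> None:
    """Dispatch a locally recognized intent to its local handler."""
    handler = HANDLERS.get(intent_name)
    if handler is None:
        print("No local handler for intent:", intent_name)
        return
    handler(slots)

# Example: the embedded recognizer has just parsed "turn on the light in the living room".
handle_intent("turnOnLight", {"room": "living room"})
```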

The Big Four seem to be saying that they need this data to improve user experience, to know them better so they can serve them better. Are they justified in using voice assistants, and the artificial intelligences that run them, in this manner?

The Big Four place a lot of emphasis on describing their assistants as “intelligent”. The voice interface is an extremely powerful technology, and it requires artificial intelligence techniques to build algorithms so the machine can adapt to humans, rather than the other way around. But that does not make it “intelligent” in the sense we usually understand the term, meaning at a level close to that of humans. I believe it is a mistake to depict voice assistants in this anthropomorphic manner, as the Big Four are doing.

However, it is in the Big Four’s interest to refine their assistants’ anthropomorphism, and to convince consumers that this is natural and normal, so they will confide in the machines. It is important to keep in mind that the Big Four have just one goal: to protect their multi-billion-dollar valuations. They have so much to lose that they resort to unreasonable methods to profit from our data. In creating services that improve their users’ daily lives, they also create dependency on those services.

How does Snips plan to convince professionals to resist the siren song of the Big Four and their voice assistants?

Snips is aimed at professionals who are sensitive to data protection issues. Our company offers an alternative message, enabling our clients to take a different approach to privacy. Privacy is important, and it encompasses the notion of security. A large portion of the market – whether military, banking or insurance applications – is highly sensitive to these notions. Because Snips’ technology works locally, we also spark customers’ interest in embedded technology and in independence from the Big Four and the cloud. In addition, we make it possible for our integrator customers to avoid becoming Big Four satellites by default. If they want to become compatible with the Big Four, they can do so on their own, but the idea is to make a secure interface the foundation of any application, and for this to become a reflex.

There is a tendency to turn to Google or Amazon “by default”. But that default is less and less a given. It is possible to operate differently, and that is what we intend to show with Snips.