Voice Activation Security

Are You Listening? Are They? 

The rise of virtual assistants like Amazon's Alexa, Apple's Siri, Google's Assistant, and Microsoft's Cortana has led to many concerns about user privacy and security. Whether we have a standalone device like the Echo sitting on the counter or Siri or Assistant built into our iPhone or Android device, many of us now have at least some interaction with these AI assistants. To be sure, some of their features are tantalizing: activating your music with a voice command, paying your credit card bill without touching a button, easy access to podcasts. Coming soon, Alexa will even be able to listen for a fire alarm or break-in while you are away from home.

But are there potential downsides to all this convenience?

Certainly. There are some great and legitimate uses for these devices, but as with all things connected to the Internet, there are pitfalls as well. (In this brief examination, we'll assume the device is set so that the assistant can be woken by voice; this can be turned off, but doing so reduces the functionality of the assistant, especially for standalone devices.)

First off, the obvious: "If Alexa or Google Assistant or Siri is listening for my wake word, then aren't they listening to everything?"

The answer is "yes." And "no." While the device is constantly listening, nothing is recorded or saved until the wake word is spoken. Then the recording starts (including, apparently, just a bit of whatever was said before the wake word). Once the recording starts, your interaction with the assistant is transferred to the cloud and saved indefinitely. Therein lies the potential issue. These recordings can only be deleted manually and, depending on how you use your assistant, may contain sensitive banking or account details, or even just Christmas gift plans. In the event of an attack on the cloud service, whatever interactions you've had with the assistant could potentially be used against you.
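To make that flow concrete, here is a rough sketch in Python of how a wake-word device typically handles audio. This is a conceptual illustration, not any vendor's actual code; the mic, detector, and cloud objects and their methods are hypothetical stand-ins.

    from collections import deque

    PRE_ROLL_FRAMES = 25  # roughly half a second of audio kept before the wake word

    def listen_loop(mic, detector, cloud):
        # A small rolling buffer: old frames fall off the end, so nothing
        # is persisted while the device idles and "listens."
        ring = deque(maxlen=PRE_ROLL_FRAMES)
        while True:
            frame = mic.read_frame()
            ring.append(frame)
            if detector.heard_wake_word(frame):
                # Recording begins only now, seeded with the buffered
                # pre-roll, which is why a sliver of speech from just
                # before the wake word shows up in the saved clip.
                clip = list(ring)
                while not detector.end_of_utterance():
                    clip.append(mic.read_frame())
                cloud.upload(clip)  # stored remotely until you delete it manually

The rolling buffer is the privacy trade-off in miniature: the device has to hear everything in order to catch the wake word, but it is designed to retain almost none of it.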

Which leads us to another area of caution: what types of tasks and interactions are safe to perform with a virtual assistant? Obviously, we need to use greater caution when connecting our credit card information to Siri than when playing music from the device, but also take care to watch the permissions requested by any application you download. If voice activation is not absolutely necessary for the application in question, disable it.

One way to think about this is to realize that anyone outside your window can call out the wake word and access your device. "Alexa, unlock the doors." "Alexa, purchase dog food." "Alexa, set an alarm for 2 a.m." And so on. If a function could put you in a dangerous situation should a stranger trigger it, perhaps it's not a good task to assign to artificial intelligence.
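If you want to apply that idea systematically, think of it as an allow-list: voice may trigger only a short list of low-stakes actions, and everything else is routed to the companion app. The sketch below is purely illustrative; real assistants expose this as per-skill or per-app permission toggles rather than code you write, and the intent names here are made up.

    # Hypothetical intent gate: names and structure are illustrative only.
    VOICE_SAFE_INTENTS = {"play_music", "set_timer", "get_weather"}

    def handle_voice_intent(intent_name, execute_action):
        if intent_name not in VOICE_SAFE_INTENTS:
            # Door locks, alarms, purchases: keep these off the voice path.
            return "That action is disabled for voice. Please use the app."
        return execute_action()

Anything capable of opening your home or spending your money simply never appears in the allowed set, so a stranger shouting through the window gets a polite refusal.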

What about purchases? Famously, assistants have been given commands to make purchases the owners didn't approve, whether from kids taking advantage of their parents' device or the television inadvertently waking the device and ordering a product. The fix here is a bit more straightforward: if you've enabled purchases on your device, be sure to require the optional PIN code to confirm each purchase as a safeguard.
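For illustration, the PIN safeguard amounts to something like the snippet below. Again, these names are hypothetical, not a real assistant API.

    import hmac

    def confirm_purchase(order, spoken_pin, stored_pin):
        # compare_digest does a constant-time comparison; overkill for a
        # spoken code, but a good habit for any secret check.
        if not hmac.compare_digest(spoken_pin, stored_pin):
            return "Sorry, that code doesn't match. Cancelling the order."
        order.submit()
        return "Done. Your order has been placed."

The safeguard works because the television or a child can say the wake word, but only someone who knows the code can finish the transaction.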

There are many other variations on the issues that can crop up from the voice recognition connected to our virtual assistants, but a good rule of thumb is this: only use and allow voice recognition in the specific situations and applications where you most need it. Otherwise?

Turn it off!
