While Amazon Alexa skills help users with their day-to-day activities, this personal assistant can also be turned malicious. Researchers have found that malicious skills can bypass the Amazon Alexa skill vetting process.
Researchers from Ruhr University Bochum have shown how malicious skills can slip into the Amazon Alexa ecosystem. Alexa skills are third-party apps that run on top of Alexa to extend the assistant's capabilities and serve users. Although these skills make Alexa more useful, a maliciously designed skill can also exploit the voice assistant to threaten users' privacy.
While Amazon already applies a vetting process before approving these skills, the researchers discovered numerous limitations in it. These limitations could allow an adversary to enter the Alexa skill ecosystem.
Among the loopholes in Amazon's skill vetting process, the researchers highlight the duplication of skill invocation names, the ability for anyone to register skills under well-known developer names, and the absence of re-checks when a developer changes a skill's code after approval. Details about this study are available in their research paper presented at the Network and Distributed Systems Security (NDSS) Symposium 2021.
After this research surfaced online, it gained traction, as it shows how one of the most popular voice assistants can potentially threaten users' privacy. However, Amazon has denied that skills can bypass its vetting process. In a statement to Threatpost, the company said,
“The security of our devices and services is a top priority. Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers. We appreciate the work of independent researchers who help bring potential issues to our attention.”