Alexa skills are growing in popularity as users look to extend the capabilities of their Alexa devices. Researchers now believe that the rapid adoption of these skills could have implications for information security, as they could expose Alexa users to phishing or invasive data collection.
What is an Amazon Alexa skill?
An Amazon Alexa skill is an application, often built by a third party, that users interact with through their Alexa device. Some examples include Alexa Guard for home security, Easy Meal Ideas for recipes and Spotify for music.
Amazon creates its own native skills but also allows third-party apps to integrate with Alexa. There are certain requirements that these third-party skills must adhere to when they’re developed:
- Invocation names: Skills must have a name or phrase that, when said by the user, will automatically enable the application.
- Intents: These are words that trigger certain actions from skills.
- Cloud-based services: Skills must be hosted on a cloud-based service in order to accept and act on requests.
- Proper configuration: All three of the previously mentioned requirements must be configured properly in order for Alexa to route requests.
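The relationship between these pieces can be sketched in code. The following is a simplified illustration of how an invocation name, a set of intents, and a cloud-hosted handler fit together; the "easy meal ideas" model, intent names, and handler functions are hypothetical examples for this article, not Amazon's actual ASK SDK API.

```python
# Simplified sketch of an Alexa skill's moving parts. The interaction
# model mirrors the general shape of a real one, but all names here are
# hypothetical examples, not Amazon's actual schema or SDK.

INTERACTION_MODEL = {
    "invocationName": "easy meal ideas",  # phrase the user says to enable the skill
    "intents": [
        {"name": "GetRecipeIntent", "samples": ["give me a dinner idea"]},
        {"name": "AMAZON.HelpIntent", "samples": []},
    ],
}

def get_recipe_handler(request):
    # In a real skill this logic runs on a cloud-based service
    # (e.g. AWS Lambda) that Alexa sends the request to.
    return "How about a vegetable stir fry tonight?"

def help_handler(request):
    return "Ask me for a meal idea."

# Alexa routes each recognized intent to the matching handler.
HANDLERS = {
    "GetRecipeIntent": get_recipe_handler,
    "AMAZON.HelpIntent": help_handler,
}

def handle_request(request):
    """Dispatch an incoming intent request, as the skill's cloud service would."""
    handler = HANDLERS.get(request.get("intent"))
    if handler is None:
        return "Sorry, I didn't understand that."
    return handler(request)
```

If any piece is misconfigured, such as an intent with no registered handler, requests fall through to a generic response rather than the intended action, which is why Amazon requires all three components to be configured properly.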
The last step in creating a skill is to have it vetted by Amazon to ensure it meets policy guidelines. It is the stringency of this vetting process that has researchers concerned.
Amazon Alexa skills security issues
Problems with Amazon Alexa skill vetting
There are two primary issues when it comes to the Amazon skill vetting process. The first is the potential for duplicate invocation phrases. When developers register their skills with Amazon, some have found loopholes that allow them to use the same phrase as popular brand names, such as Ring and Samsung.
The issue that arises from duplicate invocation names is the increased threat of phishing attacks. When users download a skill, this usually gives a third party access to the user's email address. Using the name of a popular brand can lend false legitimacy to phishing emails sent by that third party, making users more likely to fall victim to this malicious practice.
The second major issue is that developers are able to make code alterations to their apps after they’ve already been vetted by Amazon. This means developers could go back and either accidentally or purposely make changes to the code that opens their apps up to malware and other cyber threats.
In fact, a mere 28.5% of third-party skills in the US offer valid privacy policies that clearly outline how user data is collected and used. Even more surprising, only 13.6% of skills aimed at children offer valid privacy policies.
How to improve Amazon Alexa skills security
Unfortunately, Alexa users bear the ultimate responsibility for ensuring the skills they enable are secure. Alexa owners should audit their skills to see which offer valid privacy policies, and disable any that aren't being used or are not transparent about how they manage user data. The most surefire way to secure an Alexa is to remove third-party skills altogether.
Security issues with Amazon Alexa skills should serve as a lesson for other organizations: opening a product or service to third-party integration introduces many factors to consider in keeping the organization's and its users' data protected. Businesses planning to open their products or platforms to third-party integrations should develop a comprehensive, stringent vetting process that guarantees proper security precautions are in place, and should require full transparency about how user data is collected and used.