Malicious hackers can use verbal commands to perform SQL injections on web-based applications run by virtual assistants such as Amazon's Alexa, researchers say.

"Leveraging voice-command SQL injection techniques, hackers can give simple commands utilizing voice text translations to gain access to applications and breach sensitive account information," reports Baltimore, Maryland-based Protego Labs, in a blog post this morning. (Protego shared a copy of the post with SC Media in advance of publication.)

The flaw that enables voice-based attacks doesn't lie within Alexa or, for that matter, Google Assistant, Cortana, Siri and similar technologies. Rather, the problem lies with the apps themselves, Protego explains. According to the blog post, an application can be attacked via voice-based SQL injection if three conditions are met: the Alexa function/skill uses SQL as its database, the Alexa function/skill is vulnerable to SQL injection, and one of the vulnerable SQL queries includes an integer value as a component of the query.
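Protego's post does not include sample code, but the failure mode it describes maps to a familiar pattern. The sketch below, in Python, is purely illustrative; the handler name, account table and slot value are hypothetical and not taken from the post. It shows why the integer condition matters: when a skill splices an unquoted numeric slot straight into a query, a spoken phrase such as "one or one equals one" can be transcribed as "1 OR 1=1" and turn the WHERE clause into a tautology, while a parameterized query treats the same payload as an ordinary, non-matching value.

    import sqlite3

    # Hypothetical skill backend; schema and names are illustrative only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                     [(1, "alice", 120.0), (2, "bob", 340.0)])

    def lookup_balance_vulnerable(account_number: str):
        # The skill trusts the voice-to-text slot value and splices it into the SQL.
        # Because the value is expected to be an integer, it is not quoted, so the
        # attacker never needs a quote character to break out of the statement.
        query = "SELECT owner, balance FROM accounts WHERE id = " + account_number
        return conn.execute(query).fetchall()

    def lookup_balance_safe(account_number: str):
        # Parameterized query: the driver binds the value, so injected SQL is inert.
        return conn.execute(
            "SELECT owner, balance FROM accounts WHERE id = ?", (account_number,)
        ).fetchall()

    # Legitimate request: "ask my bank for the balance of account one"
    print(lookup_balance_vulnerable("1"))         # [('alice', 120.0)]

    # Spoken payload transcribed as "1 OR 1=1" dumps every row.
    print(lookup_balance_vulnerable("1 OR 1=1"))  # [('alice', 120.0), ('bob', 340.0)]

    # The parameterized version simply finds no matching account.
    print(lookup_balance_safe("1 OR 1=1"))        # []

The same defense applies regardless of the voice platform: validate that the transcribed slot really is an integer, and bind it as a parameter rather than concatenating it into the query string.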
