Artificial Intelligence, Real Threats: Can Attackers Flip the AI Script?

Categories
Risk Management

Published on October 17, 2018 by Alexander Goodwin

Reading Time: 4 minutes

There’s big money in artificial intelligence (AI): the market for AI in security alone is expected to reach almost $12 billion over the next six years. As noted by research firm McKinsey & Company, companies are now in the process of building out the technology foundation they need for AI deployment, with 45 percent of executives already worried about not investing enough in AI to keep up with the competition. It’s not a baseless fear: The McKinsey research also suggests that AI adoption is following a standard “S-curve” model, which starts with slow adoption by a limited number of businesses, followed by rapid mass adoption as market opportunities increase, and then slows again as stragglers are left behind.

Given the wide range of potential applications for AI and the evolution of core intelligence technologies, increased business interest is no surprise. What companies may not be prepared for, however, is the uptick in hacker usage of AI tools and solutions — what happens when attackers flip the AI script?

AI Basics

Before digging into use cases and attack vectors for AI, it’s worth setting the stage: What are artificial intelligence and its oft-mentioned companion, machine learning (ML)? What’s their role in helping organizations achieve corporate goals?

AI comes with a broad definition: any task performed by a program or machine that, had a human performed it instead, would have required the application of intelligence. While the term often conjures up images of super-intelligent robots performing demanding or difficult tasks, in practice it’s typically more benign: programs capable of analyzing data and making basic decisions. According to Forbes, meanwhile, machine learning focuses on emulating human cognition by enabling devices “to learn and adapt through experience.” Familiar applications of machine learning include online search auto-completion and product recommendations from e-commerce sites.
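For a deliberately simplistic flavor of “learning through experience,” consider a toy completion model that suggests queries based on past searches. The query history below is invented for illustration; real search engines use far richer models:

```python
# Toy flavor of "learning through experience": suggest query completions
# from (invented) past searches.
from collections import Counter

history = ["weather today", "weather tomorrow", "weather today", "web mail"]
counts = Counter(history)

def complete(prefix: str, k: int = 2):
    # Rank past queries matching the prefix by how often they were seen.
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda m: -m[1])[:k]]

print(complete("we"))  # ['weather today', 'weather tomorrow']
```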

There’s a solid case for AI and ML in security: As noted by GigaBit, 75 percent of cybersecurity professionals “agreed that AI and machine learning can help with security.” Specifically, AI-driven defense tools could be used to search for and detect malicious network behavior, filter out false positives, and escalate genuine incidents to human IT staff.
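A minimal sketch of what such a defense tool might look like, using an off-the-shelf anomaly detector over hypothetical network-flow features (the feature set and sample values are illustrative assumptions, not a production pipeline):

```python
# Minimal sketch of AI-assisted network defense: an anomaly detector
# flags unusual flows for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical flow features: [bytes_sent, bytes_received, duration_s, dst_port]
baseline_flows = np.array([
    [1200, 3400, 0.8, 443],
    [900, 2100, 0.5, 443],
    [1500, 4100, 1.1, 80],
    [1100, 2800, 0.7, 443],
])  # in practice: thousands of known-good flows

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline_flows)

new_flows = np.array([[48_000_000, 512, 900.0, 4444]])  # huge upload, odd port
for flow, verdict in zip(new_flows, model.predict(new_flows)):
    if verdict == -1:  # -1 = anomaly, 1 = normal
        print("Flag for human review:", flow)
```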

When Good AI Goes Bad

Burgeoning interest in AI has led to the development of multiple open source projects, making it easier than ever for businesses — and would-be attackers — to get their hands on useful code. So what’s the worst that could happen?

According to The Register, last year’s DEF CON included a demonstration of AI-powered malware capable of modifying its own code based on the reaction of victim devices, allowing it to evade antivirus detection. The proof-of-concept used OpenAI’s shared framework to change small byte sequences in malware code and was able to bypass test security systems in one of every seven attempts.
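The underlying loop is conceptually simple: mutate, test against the scanner, repeat. The sketch below illustrates that loop; the “scanner,” the “binary” and the mutation are all invented stand-ins, with no real evasion logic:

```python
# Toy illustration of the mutate-and-test loop described above.
import random

def scanner_flags(sample: bytes) -> bool:
    """Hypothetical black-box AV verdict: flag on a toy byte signature."""
    return b"\xde\xad" in sample

def mutate(sample: bytes) -> bytes:
    """Flip one byte at random (a crude stand-in for the small,
    learned byte-level edits the DEF CON demo applied)."""
    data = bytearray(sample)
    data[random.randrange(len(data))] ^= 0xFF
    return bytes(data)

sample = b"\x4d\x5a\x00\xde\xad\x01"  # toy "binary" containing the signature
for attempt in range(1, 101):
    sample = mutate(sample)
    if not scanner_flags(sample):
        print(f"toy scanner evaded after {attempt} mutation(s)")
        break
```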

As noted by ZDNet, meanwhile, tech giant IBM has also developed a POC AI malware known as “DeepLocker”. Using a Deep Neural Network (DNN), IBM teams created malware which hid WannaCry ransomware in a video conferencing application. While the malware wasn’t detected by AV solutions, that’s not the most worrisome aspect: DeepLocker leveraged a “trigger system” that only deploys the malware when specific conditions are met. In this case, the chosen condition was facial recognition of a single individual. When that person was observed, the malware deployed behind network defenses; if not, the malware stayed “locked up” and effectively undetectable.
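The trick, as IBM described it, is that the trigger condition itself yields the key that unlocks the payload, so without the exact trigger the payload is just noise. The sketch below illustrates the idea with a hash-derived key standing in for the DNN output; the “payload” and “embedding” strings are placeholders, not IBM’s implementation:

```python
# Sketch of a DeepLocker-style trigger: the observed condition derives
# the decryption key. SHA-256 of a placeholder "embedding" stands in
# for the DNN output IBM described.
import hashlib

def derive_key(observation: bytes) -> bytes:
    return hashlib.sha256(observation).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"payload-placeholder"
trigger = b"face-embedding-of-target"  # hypothetical trigger condition

locked = xor_crypt(secret, derive_key(trigger))  # what ships inside the app

print(xor_crypt(locked, derive_key(b"someone-else")))  # gibberish
print(xor_crypt(locked, derive_key(trigger)))          # b'payload-placeholder'
```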

To combat this kind of AI-enabled attack, companies will have to marshal their own AI defenses, and independent software vendors (ISVs) will have to build applications that are more resistant to tampering and malware injection.

Phishy Business

Another avenue for AI-enabled hackers? Phishing. 

Here’s why: It’s common knowledge that humans are the weakest link in the security chain, and phishing attacks remain successful despite improving infosec education. With AI, these attacks become even more effective.

For example, security researchers from Florida have developed ML-enabled software capable of creating URLs for phishing attacks that go undetected by standard antivirus tools. According to Gizmodo, meanwhile, data scientists tested the ability of humans and AI to compose phishing Tweets and convince users to click. While the conversion rates were similar (around 40 percent in each case), the AI was six times faster at composing and posting malicious Tweets.
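How cheap is mass-produced phishing text? The toy character-level Markov generator below hints at the economics. It is far simpler than the sequence models the researchers actually used, and the three “training” URLs are invented:

```python
# Toy character-level Markov generator, a much simpler cousin of the
# models behind the research above.
import random
from collections import defaultdict

corpus = [
    "account-verify-login.example.com",
    "secure-login-update.example.net",
    "verify-account-secure.example.org",
]

# Record which character tends to follow which.
transitions = defaultdict(list)
for url in corpus:
    for a, b in zip(url, url[1:]):
        transitions[a].append(b)

def generate(seed: str, length: int = 32) -> str:
    out = seed
    while len(out) < length and transitions[out[-1]]:
        out += random.choice(transitions[out[-1]])
    return out

print(generate("s"))  # e.g. "secure-verify-login.example.co..."
```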

Despite the uptick in AI, solutions to social engineering remain rooted in human behavior: better training on avoiding suspicious links, along with tips for staying cynical online, is necessary to defeat both man and machine.

Take and Break

This might all sound like a bit much; after all, companies are just starting on the road to full AI adoption. Beyond proof-of-concept code and hacker conference wargames, there’s little evidence of full-blown AI attacks on organizations.

But here’s the thing about hackers — they’re willing to innovate.

Consider: What if, instead of using AI tools to build their own malware, hackers spend their time collecting whatever data they can about your network operations and code? They download your app, visit your website and glean any information they can. Then they reverse engineer your code and, using open source AI tools, scan it for commonly known vulnerabilities. If successful, they attack and infiltrate. If not, they wait and scan again when the next big exploit is made public. With the help of AI, they reduce the time between information availability and attack, making it harder for you to react.
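In practical terms, that scan-on-disclosure loop is little more than a lookup of extracted components against a vulnerability feed. The sketch below assumes a hypothetical advisory feed and component list (the CVE IDs themselves are real):

```python
# Sketch of the scan-on-disclosure loop: match components extracted from
# a reverse-engineered app against newly published advisories.
known_vulns = {
    ("openssl", "1.0.1"): "CVE-2014-0160 (Heartbleed)",
    ("struts", "2.3.31"): "CVE-2017-5638 (remote code execution)",
}

extracted_components = [("openssl", "1.0.1"), ("zlib", "1.2.11")]

for name, version in extracted_components:
    advisory = known_vulns.get((name, version))
    if advisory:
        print(f"{name} {version} -> {advisory}")
```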

While it’s impossible to prevent the use of AI solutions to scan application code, companies can make it harder for hackers to get what they want by obfuscating everything — including open source components — so that AI isn’t enough. Instead, attackers must fight their way through randomized, encrypted code to get what they want, making them more likely to look elsewhere.
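Even something as simple as keeping string literals out of the shipped code raises the cost of automated pattern matching. The toy below shows the idea; real obfuscators go much further, with identifier renaming, control-flow flattening and encryption of whole code sections:

```python
# Toy string-literal obfuscation: the plain value never appears in the
# shipped code, so simple pattern matching (human or AI) has nothing
# to grep for.
KEY = 0x5A

def obscure(text: str) -> bytes:
    return bytes(c ^ KEY for c in text.encode())

def reveal(blob: bytes) -> str:
    return bytes(b ^ KEY for b in blob).decode()

ENDPOINT = obscure("https://api.example.com/v1/keys")  # stored obfuscated
print(ENDPOINT)          # unreadable bytes
print(reveal(ENDPOINT))  # recovered only at runtime
```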

AI Showdown

Artificial intelligence is on the rise, and there’s a showdown coming between “good” and “bad” iterations of this new technology. For companies looking to protect their applications, assets and networks, the “winner” isn’t what really matters: Defending apps means taking steps to frustrate aggressive AI, no matter how smart it gets.

  1. https://www.businesswire.com/news/home/20181001005802/en/2018-11.95-Bn-Artificial-Intelligence-Security-Market
  2. https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/artificial-intelligence-why-a-digital-base-is-critical
  3. https://www.forbes.com/sites/quora/2018/02/15/how-will-artificial-intelligence-and-machine-learning-impact-cyber-security/#718fa64b6147
  4. https://www.gigabitmagazine.com/ai/75-cybersecurity-professionals-see-benefits-ai
  5. https://www.theregister.co.uk/2017/07/31/ai_defeats_antivirus_software/
  6. https://www.zdnet.com/article/deeplocker-when-malware-turns-artificial-intelligence-into-a-weapon/
  7. https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425
  8. /obfuscation