Four Common Misconceptions about AI Security

Kojin Oshiba

AI is the future of every business. The extension of human resourcefulness through AI-powered automation is already changing the way we shop, eat, and live in real time. From self-driving cars to Apple’s Face ID, AI is steadily trickling into industries far removed from high tech, creating entirely new categories of products and possibilities.

Although the idea of AI-enabled intelligent machines like HAL from 2001: A Space Odyssey might leave some feeling uneasy, the immediate danger of this technology is much simpler. Even the best AI systems can collapse as a result of a single rudimentary data entry error — and the results can be catastrophic. Now, imagine what could happen if these same AI systems were attacked by intelligent people with bad intentions. The integration of AI with industry-leading services and businesses has created new opportunities for fraudsters and other adversaries to exploit. Despite these very real dangers, most businesses are utterly unequipped to handle such attacks.

This is why we founded Robust Intelligence. Our AI security platform paves the way for a novel category of software that democratizes the tools needed to address these vulnerabilities. Robust Intelligence provides companies ranging from startups to Fortune 500 businesses with the expertise and technology to proactively tackle AI threats and mitigate future risks. For more on this topic, you can take a look at our first blog post.

In this second post on our company blog, I address some of the common misconceptions about AI security that we’ve often heard in discussions with our potential customers, candidates, and investors. I hope this post clarifies the real threats associated with AI so that you can take the first step towards leveraging its full potential while understanding its risks.

Misconception 1: AI security is the same as AI for security


When you think about AI security, you might imagine AI solutions for cybersecurity problems, such as automated identification of malware. Indeed, many companies employ AI in this fashion as an effective means of combatting cyber fraud. However, this isn’t what AI security means. While many other companies work on AI for Security, we work on Security for AI.

Instead of leveraging AI to protect against or detect security breaches, we protect AI itself from its inherent vulnerabilities. It’s critical to note that these threats are constantly evolving. For example, each time a financial institution or e-commerce company updates the AI service it uses for fraud detection, attackers can find new loopholes to exploit in order to pass off fraudulent transactions as legitimate. When banks and government agencies use speaker verification AI to authenticate their users, fraudsters can generate a synthetic voice to spoof the system. As I’ll describe later, there are also plenty of examples of non-malicious threats to AI. These are threats inherent to AI, not threats that AI can neutralize. Although AI security and AI for security sound similar, they refer to entirely different concepts.

Misconception 2: AI security is about adding “noise” to images

This is a popular misconception that is more common among those who are already familiar with AI and machine learning. If you’ve ever taken a course on machine learning, or if you’ve read recent news articles about the field, you might have seen pictures like the following:

[Figure: the canonical “panda → gibbon” adversarial example (left) and a stop sign covered with adversarial patches (right)]

These are canonical examples of what AI researchers call “adversarial examples.” The image on the left demonstrates how, by adding special noise imperceptible to human eyes, an image classification model can be fooled into classifying a panda as a gibbon. In the image on the right, a few physical patches placed on a real-life stop sign cause it to go unrecognized by state-of-the-art computer vision models.
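To make the “noise” concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against a toy linear classifier. Everything here (the model weights, the input, and the perturbation budget) is synthetic and hypothetical; real attacks compute gradients of trained image models, but the mechanics are the same: nudge each input value slightly in the direction that hurts the model most.

```python
# Minimal FGSM-style attack on a toy linear classifier (all values synthetic).
import numpy as np

rng = np.random.default_rng(0)

# A toy "image": 64 pixel values in [0, 1].
x = rng.uniform(0.0, 1.0, size=64)

# A hypothetical trained binary classifier: label = 1 if w.x + b > 0.
w = rng.normal(size=64)
b = 0.5 - float(w @ x)             # chosen so the clean input is classified as 1

def predict(inp):
    return int(w @ inp + b > 0)

# FGSM step: move every pixel a tiny amount in the direction that lowers the
# score. For a linear model, the gradient of the score w.r.t. the input is w.
epsilon = 0.05                     # small, visually imperceptible budget
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print("clean prediction:      ", predict(x))       # 1
print("adversarial prediction:", predict(x_adv))   # usually flips to 0
print("max pixel change:      ", np.abs(x_adv - x).max())  # <= epsilon
```

Deep image models are not linear, but the same principle of following the gradient within a tiny perturbation budget is what turns a panda into a “gibbon.”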

Although these are great “hello world” examples of AI security, they are certainly not representative of the broad range of vulnerabilities that exist in end-to-end AI systems in the real world. How so?

  1. AI has been proven to be vulnerable in various domains beyond image recognition systems. For instance, adversarial attacks can also be targeted at AI models trained on tabular data. Here at Robust Intelligence, we’ve worked with companies that receive transactions every single second from fraudsters attempting to find loopholes in their productionized fraud detection models. Although such attacks modify information like device ID, IP address, and geolocation rather than pixels in an image, they are even more widespread and threatening to businesses.
  2. There are many different kinds of vulnerabilities beyond forcing misclassifications by AI models in production. In fact, there exist significant security concerns throughout the AI development life-cycle that data science, machine learning, and security teams must be wary of. Here are a handful of examples people often overlook:
  • Model security: More and more data scientists are using pre-trained models available on the web for industry applications. This leaves them vulnerable to hackers who corrupt the model files and embed backdoors.
  • Data poisoning: Adversaries can also compromise ML models by injecting certain types of corrupted data into the training set. Such poisoning attacks are possible even when the training data is not publicly available, since models are often retrained on data collected in production (a toy poisoning sketch follows below).
  • Model privacy: Adversaries can steal sensitive information about the data used to train a model, such as inferring whether a specific person, item, or company was a member of the dataset based on model predictions.
[Figure: AI security problems exist throughout the entire AI life-cycle]
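As a concrete illustration of the data poisoning item above, here is a minimal, hypothetical sketch using scikit-learn: a toy “fraud” model is retrained after an adversary relabels half of the fraudulent examples in the training data as legitimate, and its ability to catch fraud degrades. The dataset, model, and poisoning rate are all made up for illustration.

```python
# Toy illustration of label-flipping data poisoning (all data is synthetic).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Pretend class 1 is "fraud" and class 0 is "legitimate".
X, y = make_classification(n_samples=4000, n_features=10, weights=[0.8, 0.2],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: relabel half of the training fraud cases as legitimate,
# mimicking corrupted feedback data collected in production.
rng = np.random.default_rng(0)
fraud_idx = np.where(y_train == 1)[0]
flip_idx = rng.choice(fraud_idx, size=len(fraud_idx) // 2, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Fraud recall typically drops noticeably after poisoning.
print("fraud recall, clean model:   ",
      recall_score(y_test, clean_model.predict(X_test)))
print("fraud recall, poisoned model:",
      recall_score(y_test, poisoned_model.predict(X_test)))
```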

Misconception 3: AI security is only relevant for complex models

The problems outlined above aren’t only relevant in more complex AI domains like deep learning. On the contrary, linear models, decision trees, and even rule-based systems are often just as susceptible to these attacks.

For example, consider a rule-based content moderation system for a social media website or a marketplace platform. Such a system might depend on a hard-coded rule that looks for specific keywords to flag comments like “please wire me $10,000.” You can imagine an adversary spoofing this rule by creating adversarial text that looks like “pl3@se,wire.me 10000 US DoLars.” To humans, the latter comment is also obviously something we want to flag (perhaps even more so than the first one). However, rule-based systems or other similar models can very easily overlook such tweaks. Even if you amend your model to include such patterns, adversaries will soon adapt. Even the simplest AI models have inherent weaknesses and vulnerabilities that demand a disciplined and robust security approach.
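Here is a minimal sketch of how easily such a rule is evaded. The two hard-coded patterns and the example comments are made up for illustration:

```python
# Toy keyword-based moderation rule and an obfuscated comment that evades it.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\bwire\s+me\b", re.IGNORECASE),   # "wire me" as plain words
    re.compile(r"\$\s*\d[\d,]*"),                  # a dollar amount like $10,000
]

def flag_comment(text: str) -> bool:
    """Return True if any hard-coded rule matches the comment."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

print(flag_comment("please wire me $10,000"))          # True  -- the rule fires
print(flag_comment("pl3@se,wire.me 10000 US DoLars"))  # False -- trivially evaded
```

The obfuscated comment sails straight past both patterns, and chasing every new spelling quickly becomes a losing game of whack-a-mole.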

Misconception 4: AI security is only about defending against bad actors

So far, I’ve focused on one aspect of AI security: adversaries trying to attack or find loopholes in AI systems. Although this is a serious and pressing threat, it is certainly not the only one. There are numerous other ways AI can fail catastrophically without the right security practices in place.

Let’s take the case of erroneous data inputs. At large companies, data scientists often publish models that are used by other engineers. Consider, for instance, an international company where Japanese localization engineers depend on a model trained and deployed by American data scientists. If the model input includes a price column as one of its features, it's entirely conceivable that the engineers in Japan pass in price=10,000 assuming the currency is in yen (~100 USD), but the model instead treats the input as 10,000 USD. Similar erroneous inputs are possible with image recognition models as well. Since images can be represented as matrices in two different ways (values in [0, 1] vs. values in [0, 255]), users might pass in images formatted one way without realizing that the other is actually the correct one.
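One practical mitigation is to validate inputs against the assumptions baked into the model before they ever reach it. The checks below are a hypothetical sketch, assuming a model trained on USD prices and on images scaled to [0, 1]; the thresholds and function names are invented for illustration.

```python
# Hypothetical sanity checks that catch unit and scale mismatches early.
import numpy as np

def check_price_usd(price: float) -> None:
    """Reject prices outside the (assumed) USD range seen during training."""
    if not (0.0 < price < 5_000.0):
        raise ValueError(
            f"price={price} is outside the expected USD range; "
            "was the value passed in a different currency (e.g. JPY)?"
        )

def check_image_scale(image: np.ndarray) -> None:
    """Reject images that look like [0, 255] data when the model expects [0, 1]."""
    if image.max() > 1.0:
        raise ValueError(
            "pixel values exceed 1.0; this looks like a [0, 255] image, "
            "but the model expects values in [0, 1]"
        )

check_price_usd(99.0)                           # passes
check_image_scale(np.random.rand(224, 224, 3))  # passes

try:
    check_price_usd(10_000)                     # a caller assuming yen
except ValueError as err:
    print("caught:", err)
```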

Sometimes, AI services can also contain “bugs” just like traditional software. A common example is the case of unseen values for categorical inputs. Suppose there’s a feature called device_type in your dataset. When you train and evaluate your fraud detection model, everything seems to work just fine. However, a few weeks after you deploy the model, you suddenly start to see massive spikes in model service errors. What happened? You examine the inputs that caused the model to crash and realize that they all have one thing in common: device_type=iPhone_12. The iPhone 12 wasn't released when you deployed your service, and you forgot to update the model to handle that device type when it did become available.
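Here is a minimal sketch of that failure mode using a one-hot encoder; the encoder setup and device_type values are hypothetical, and the same kind of crash can hide anywhere categorical features are processed:

```python
# Toy example of the unseen-category failure mode (values are hypothetical).
from sklearn.preprocessing import OneHotEncoder

# The encoder was fit before the iPhone 12 existed.
encoder = OneHotEncoder(handle_unknown="error")
encoder.fit([["iPhone_11"], ["Pixel_4"], ["Galaxy_S20"]])

try:
    encoder.transform([["iPhone_12"]])           # new device in production traffic
except ValueError as err:
    print("model service error:", err)           # crashes instead of predicting

# One common mitigation: tolerate unseen categories explicitly.
safe_encoder = OneHotEncoder(handle_unknown="ignore")
safe_encoder.fit([["iPhone_11"], ["Pixel_4"], ["Galaxy_S20"]])
print(safe_encoder.transform([["iPhone_12"]]).toarray())  # all-zeros row, no crash
```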

If you’re a data scientist, I’m sure you’ve encountered such a “model bug” at least once in your career. Compared to more mature software disciplines, AI development is complex and still in its relative infancy, so these types of errors occur all the time.

[Figure: Even benign actors can contaminate the AI pipeline]

Conclusion

These are just a few of the common misconceptions about AI security. Although there are many more that we hope to cover in future posts, I hope this gives you a better sense of the major risks that inevitably come with the superpowers that AI can bring to your business. If these problems resonate with you, or if you’d like to better understand AI security, you can email me anytime at kojin@robustintelligence.com.

Kojin Oshiba