Robust Intelligence: Secure Deployment of AI

We’re excited to announce that today, Robust Intelligence is coming out of stealth with $14M in Seed and Series A funding led by Sequoia Capital. We started the company about a year ago with the mission of pursuing “secure deployment of artificial intelligence.” Artificial Intelligence (AI) is becoming a ubiquitous technology in many industries. The benefits of automation, however, can easily mask the vulnerabilities inherent to AI, and current AI development practices often expose organizations to systemic risks. Robust Intelligence is building products that both defend operational AI systems and enable developers to deploy AI models in a secure manner.

The pain: AI is vulnerable


Researchers discovered the world’s first computer worm back in the 1970s, when computers and the internet were still in their infancy. Those discoveries led to the advent of cybersecurity. By the 1990s, as computers and the internet gained wide adoption, computer viruses also became widespread and increasingly sophisticated. As the awareness and severity of cyber threats grew, we saw a corresponding surge in the number of cybersecurity companies, a surge that continues to this day in 2020.

History repeats itself, and we are now witnessing the same pattern with AI and machine learning. AI today is playing the role that computers and the internet played in the 1990s. Thanks to accelerating research breakthroughs, the rapid advancement of open-source AI technologies, and the explosion in the amount of data available to “fuel” AI models, we’ve seen rapid adoption of AI across industries. And, just as with the adoption of computers and the internet, we’re now seeing a surge of threats to AI as these technologies expand.

Unfortunately, companies are largely left to mitigate these AI risks on their own. Research in AI is making giant leaps forward, but the security and reliability of AI technology are being left behind. Perhaps a handful of companies with arsenals of machine learning talent can secure their AI systems in-house. For the rest, AI vulnerabilities mean that rather than developing the core AI capabilities of the organization, data science teams spend precious development cycles devising ad hoc solutions to the myriad vulnerabilities that come with using this technology.

The increasing threats associated with AI

AI technology has been under attack for well over 20 years now, in ways that you might recognize: email spam, financial fraud, or even fake account creation. Beyond these applications however, the adoption of AI, automated attacks, and the practices of the industry in recent years make AI security an even more urgent problem. Here’s why:

  • AI is rapidly expanding into industries outside of major consumer tech companies. Securing against spam and click fraud was once a problem unique to companies like Google or Yahoo!. Today, this is no longer the case. Major banks have adopted AI for fraud detection, credit scoring, user authentication, and more. Insurance companies automate claims management with AI. Governments around the world are spending billions on AI systems to strengthen their national security. The future of every business lies in AI, and hence the future of every business also lies in AI security.
  • Methodologies for attacking AI systems are rapidly advancing. Most notably, fraudsters are now executing algorithmic attacks on AI. These are attacks that are themselves automated, enabling fraudsters to counteract defensive updates much more quickly. Such attacks are also much stronger because the algorithms determine the optimal attack strategy, instead of humans manually investigating various options. To make matters worse, these attacks can be used not only to spoof the AI models, but also to steal sensitive user data or information about your AI systems.
  • There are emerging trends in the AI industry that make AI systems increasingly vulnerable. For example, many developers and researchers are making their state-of-the-art “pre-trained” models and datasets publicly available, and many companies rely on crowdsourcing to collect and label their data. Although these practices help democratize AI across the industry, they also make it substantially easier for fraudsters and adversaries to spread “malware” models or contaminate the data used for model development.
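To make the algorithmic attacks mentioned above concrete, here is a minimal sketch of a fast-gradient-sign-style evasion attack against a toy linear “spam score” model. For a linear model, the gradient of the score with respect to the input is just the weight vector, so the attacker can compute the optimal per-feature nudge directly instead of probing manually. All of the weights, features, and the decision threshold below are invented for illustration; real attacks target far larger models but follow the same principle.

```python
import math

# Toy linear "spam score" model: sigmoid(w . x + b).
# Weights, bias, and features are illustrative, not from any real system.
W = [1.2, -0.8, 0.5]
B = -0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(x):
    """Model's probability that input x is spam."""
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def fgsm_attack(x, eps):
    """Fast-gradient-sign-style perturbation: for a linear model the
    gradient of the score w.r.t. the input is the weight vector W,
    so nudging each feature against sign(w_i) lowers the score most
    efficiently for a given per-feature budget eps."""
    return [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, W)]

x = [1.0, 0.2, 0.9]            # an input the model flags as spam
adv = fgsm_attack(x, eps=0.9)  # small automated tweak to each feature

print(score(x))    # above the 0.5 threshold: flagged
print(score(adv))  # below the threshold: slips past the model
```

Because the perturbation is computed in closed form from the model’s own parameters, an attacker can regenerate it instantly after every defensive update, which is exactly what makes automated attacks so much harder to counter than manual ones.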

The AI Security market is expected to grow to $20B in the next few years, as the AI market as a whole grows to $60B. As much as we protect servers from DDoS attacks and network communications from man-in-the-middle attacks, we must also protect AI from its own set of rising threats.

Example of an automated attack on a facial recognition system.

Robust Intelligence is the one-stop shop for AI security

We created Robust Intelligence to take away the pain of securing AI on your own. We’ve bundled the robust algorithms and machine learning technology we’ve been working on for years into an AI security and reliability platform. This platform frees machine learning engineers and data scientists from dealing with security and reliability so they can focus on core AI development.

Although we started Robust Intelligence just a little over a year ago, we’ve come a long way, moving from our Harvard Square office below Alfred’s Hair Salon to an up-and-coming San Francisco neighborhood.

We have spent the last year building our AI security platform and battle testing it for our customers in tech, finance, and government.

We have raised $14M in Seed and Series A led by Sequoia Capital with participation from Engineering Capital, Harpoon VC, Ram Shriram, and Alex Balkanski. Thanks to the resources they provide, we’re able to advance our core technology and bring our product to market.

The great progress we’ve made this year is due to the amazing people who joined us on this journey. Prior to Robust Intelligence, our team worked at companies like Google AI, Facebook, Palantir, Airbnb, Uber, and Two Sigma.

Securing AI is not easy. With the team we’ve assembled, and with all of you who will join us on this journey, we are confident that we can bring AI security technology to everyone.