NTT Data uses RIME to secure image AI

Kojin Oshiba

Kojin is a co-founder of Robust Intelligence.

NTT DATA is a top 10 global IT services provider operating in more than 50 countries. They used our flagship product, the Robust Intelligence Model Engine (RIME), to discover multiple vulnerabilities in their AI model for image similarity search. The following is a case study written from NTT DATA's point of view. The post is also available in Japanese on their website.

Many companies are investing heavily in AI, actively conducting proofs of concept (PoCs) to create new businesses. At the same time, putting AI to work in actual business operations requires an additional step: attending to its security. This post introduces NTT DATA's engagement with Robust Intelligence, a startup building AI security products and technologies.

1. AI Utilization & Risks

AI capabilities are improving at a remarkable rate, and AI is being applied in an ever broader range of fields. However, because there is no way to predict with certainty how an AI model will behave on data it was not given at training time, unexpected behaviors can pose serious security risks to the entire system. In recent years, there have also been reports of malicious users discovering vulnerabilities in AI behavior and intentionally attacking systems to cause malfunctions. It is therefore necessary to pinpoint the weaknesses that lead to such vulnerabilities (Fig. 1) during development and take preventative measures against possible future attacks.

Figure 1: Examples of AI Risks

2. Addressing Vulnerabilities

Unfortunately, identifying every weakness in an AI model is not an easy task: the space of unseen inputs is effectively infinite, so the model's behavior cannot be verified for every possible input. Today, business experts, data scientists, and other highly skilled professionals carry out multi-faceted verification of AI models one by one to identify as many weaknesses as possible. This approach, however, depends heavily on individual skill and is costly to implement. Robust Intelligence is a startup that provides solutions to exactly these problems.

3. Robust Intelligence Solution

Robust Intelligence (RI) is an American startup specializing in AI security. It was founded by machine learning professors and researchers from Harvard. RI's team of engineers previously built bleeding-edge AI systems at companies like Google and Facebook and has worked with technology companies, financial institutions, and government institutions in the US.

Building on this experience, RI developed the Robust Intelligence Model Engine (RIME), a platform for addressing the operational risks of AI systems at various stages of their development.

RIME automates the testing of AI system behaviors. Using proprietary algorithms, this automated testing identifies common pitfalls in AI systems (Fig. 1) before they are deployed to production. For example, RIME can identify input data that causes an AI system to malfunction, as well as implicit assumptions developers have made about the data. Furthermore, to keep up with the ever-changing demands on AI systems, RIME updates its tests continuously.

Testing can be applied to any AI system or data, whether the model is rule-based or state-of-the-art deep learning. Moreover, because RIME automatically infers data trends and distributions, it can test AI systems across areas such as fraud detection, CTR prediction, insurance review, and inventory control.

4. Robust Intelligence & NTT Data Initiatives 

As part of our initiative to “Strengthen AI Governance," we have been using RI’s solution to perform technical verification of AI security since 2020.

These technical verifications used RIME to detect and address vulnerabilities in our deep learning-based Image Similarity Search Model for trademark images. The results of this initiative are provided below.  

When a new image is uploaded, the Image Similarity Search Model searches for similar images in a database of more than 500,000 previously registered trademark images. The model aims to improve work efficiency by allowing users to upload a proposed logo image and confirm whether a similar trademark image already exists in the database.

Figure 2: Image Similarity Search Model Overview
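
The post does not describe the model's internals, but the general pattern of an image similarity search is an embedding model followed by nearest-neighbor lookup. The following is a minimal sketch of that pattern; the random-projection "model," the 32x32 grayscale input size, and the toy database of registered images are all hypothetical placeholders, not NTT DATA's actual system.

```python
# Minimal sketch of an embedding + nearest-neighbor similarity search.
# The "embedding model" here is a fixed random projection used purely as a
# placeholder; a production system would use a trained deep network.
import numpy as np

EMBED_DIM = 256
rng = np.random.default_rng(0)
projection = rng.normal(size=(32 * 32, EMBED_DIM))  # placeholder "model" weights

def embed(image: np.ndarray) -> np.ndarray:
    """Map a 32x32 grayscale image to a unit-length embedding vector."""
    vec = image.reshape(-1).astype(np.float64) @ projection
    return vec / (np.linalg.norm(vec) + 1e-12)

def top_k_similar(query: np.ndarray, index: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k registered images most similar to the query."""
    scores = index @ embed(query)  # cosine similarity (vectors are unit-length)
    return np.argsort(scores)[::-1][:k]

# Toy stand-in for the database of previously registered trademark images.
registered = rng.random((1000, 32, 32))
index = np.stack([embed(img) for img in registered])

# A lightly perturbed copy of a registered image should still be found.
query = registered[42] + 0.01 * rng.normal(size=(32, 32))
print(top_k_similar(query, index))
```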

By utilizing this model, we expect to reduce unnecessary trademark applications, since companies can confirm whether a similar image already exists before applying for a trademark. At the same time, to ensure the model protects the rights of previous applicants, we must account for the possibility of malicious attacks and unexpected input data. For example, inexperienced users may upload images that do not meet the system's input requirements, and malicious attackers may intentionally trigger malfunctions that degrade the reliability of search results. To examine these vulnerabilities, we tested the model's behavior when presented with various images.

Figure 3: Image Verification

This verification process used the RIME AI Stress Test function to run a variety of tests and measure the robustness of the model. Specifically, RIME applied three types of transformations (noise, geometric changes, and color changes) to the registered trademark images.

Based on the prediction results for these inputs, RIME then applied additional transformations to push the model further toward misclassification.

Figure 4: RIME Generated Image Examples
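
RIME's test algorithms are proprietary and not described in this post. Purely to illustrate the general shape of such a stress test, the sketch below (reusing the hypothetical embed, top_k_similar, registered, and index objects from the earlier sketch) applies noise, rotation, and color transformations at increasing strength and records the weakest perturbation at which the top search result no longer matches the original registered image.

```python
# Illustrative transformation-based stress test, not RIME's actual algorithm.
# Reuses embed/top_k_similar/registered/index from the previous sketch.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(1)

def add_noise(img, strength):
    return np.clip(img + strength * rng.normal(size=img.shape), 0.0, 1.0)

def rotate_image(img, strength):
    return rotate(img, angle=30 * strength, reshape=False, mode="nearest")

def shift_color(img, strength):
    return np.clip(img * (1.0 + strength), 0.0, 1.0)  # simple brightness shift

def stress_test(image_id, transforms, strengths=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Escalate each transformation until the top-1 result no longer matches."""
    failures = {}
    for name, fn in transforms.items():
        for s in strengths:
            perturbed = fn(registered[image_id], s)
            if top_k_similar(perturbed, index, k=1)[0] != image_id:
                failures[name] = s  # weakest strength that breaks retrieval
                break
    return failures

transforms = {"noise": add_noise, "rotation": rotate_image, "color": shift_color}
print(stress_test(image_id=42, transforms=transforms))
```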

The results of this verification showed that the Image Similarity Search Model was robust to color changes. However, its search performance degraded when images were rotated or when particular kinds of noise were added. We addressed these vulnerabilities at the system level, for example by removing noise from images before they reach the model.

Figure 5: Images with High Risk of Malfunction
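
The post does not say how the noise-removal step was implemented. As one illustrative possibility, the sketch below places a simple median-filter preprocessing step in front of the hypothetical search pipeline from the earlier sketches, so that uploaded images are cleaned before they ever reach the model.

```python
# Hypothetical system-level mitigation: denoise uploads before the model sees
# them. A median filter is shown only as an example of such preprocessing.
import numpy as np
from scipy.ndimage import median_filter

def preprocess(image: np.ndarray) -> np.ndarray:
    """Clamp pixel values and suppress salt-and-pepper style noise."""
    image = np.clip(image, 0.0, 1.0)
    return median_filter(image, size=3)

def search(query: np.ndarray, k: int = 5):
    """Search entry point that only ever passes cleaned images to the model."""
    return top_k_similar(preprocess(query), index, k=k)
```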

Going forward, we aim to further improve our level of security by using the RIME AI Firewall to filter out corrupted or malicious images.

Figure 6: System-Wide Security Improvements 
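
The RIME AI Firewall is a commercial product, and its API is not shown in this post. Purely to illustrate the general idea of filtering inputs before they reach the model, here is a hypothetical validation gate built around the toy pipeline above; all checks and thresholds are illustrative assumptions.

```python
# Hypothetical pre-model validation gate, not the RIME AI Firewall API.
import numpy as np

EXPECTED_SHAPE = (32, 32)  # the toy model above expects 32x32 grayscale input

def passes_validation(image) -> bool:
    """Reject inputs that are malformed or unlikely to be genuine logo uploads."""
    if not isinstance(image, np.ndarray) or image.shape != EXPECTED_SHAPE:
        return False  # wrong type or unexpected shape
    if not np.isfinite(image).all():
        return False  # corrupted pixel values (NaN/inf)
    if image.min() < 0.0 or image.max() > 1.0:
        return False  # values outside the expected range
    if image.std() < 1e-3:
        return False  # blank / near-constant image
    return True

def guarded_search(query: np.ndarray, k: int = 5):
    """Only forward inputs that pass validation to the search pipeline."""
    if not passes_validation(query):
        raise ValueError("input rejected before reaching the model")
    return search(query, k=k)  # `search` from the preprocessing sketch above
```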

5. Conclusion

Through our technical verification with Robust Intelligence, we confirmed that RIME can detect vulnerabilities in AI models. We will continue to work with Robust Intelligence to improve AI security using RIME. We will also promote the use of this technology for AI models trained on tabular data and natural language, and strive to improve the security of a broader range of AI applications.

NTT DATA will actively adopt advanced technologies and establish mechanisms for using AI safely, with the aim of bringing AI to a wide range of industries and creating new opportunities and value.

