August 15, 2022 - 4 minute read

Introducing ML:Integrity

In October, Robust Intelligence will host ML:Integrity, an event dedicated to machine learning integrity. It’s hard to overstate the societal and business impact that machine learning has had in the past decade and will continue to have in the decades to come. It is equally hard to overstate the risk: machine learning models fail frequently, and those failures can have dire consequences. Any organization that relies on machine learning for critical decision making is now building strategies and engineering paradigms to ensure machine learning integrity. This conference is a step towards developing such strategies and paradigms as a community. We’ll be joined by industry and business leaders who are creating strategies and building infrastructure for these paradigms.

REGISTER HERE

AI adoption introduces risk

Intellectually, the idea that we can collect data and synthesize it to produce predictions is not new. In fact, it’s as old as the law of large numbers, which Bernoulli proved more than 300 years ago. What is new is the massive digital data and computing power that have become available in the past decade. These advancements turned the intellectual idea of machine learning into reality, enabling an extremely powerful, practical set of techniques for automated decision making. And with this great power comes great responsibility: from creditworthiness and healthcare coverage to hiring practices and aid allocation, machine learning models are entrusted with critical decisions.

This responsibility is far from trivial. At its core, machine learning is an extremely fragile paradigm. In a nutshell, seemingly intelligent predictions rely on reducing decisions to fixed decision boundaries that are extremely sensitive to subtle changes in the underlying data. These changes may be due to data drift, broken data pipelines, bias in data collection or model training, corner-case inputs, or adversaries purposefully manipulating the data to affect prediction outcomes. The toy sketch below illustrates the point.
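To make the fragility concrete, here is a minimal sketch in which a linear classifier is trained on synthetic data and an input near the learned decision boundary flips its predicted label under a small perturbation. The dataset, model choice, and perturbation size are all illustrative assumptions, not details from this post.

# A minimal sketch of decision-boundary fragility (illustrative only:
# the synthetic dataset and perturbation size are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # two standard-normal features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # simple linear ground truth

model = LogisticRegression().fit(X, y)

x = np.array([[0.05, 0.0]])               # an input just above the boundary
x_shifted = x - np.array([[0.1, 0.0]])    # a small shift in one feature

print(model.predict(x))          # e.g. [1]
print(model.predict(x_shifted))  # e.g. [0] -- a subtle change flips the call

The same sensitivity that lets the model separate the classes also means that inputs near the boundary are one small data change away from a different decision.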

From an engineering perspective, when machine learning models fail, the failure itself is agnostic to its cause. Thus, when we design a universal paradigm, we shouldn’t look at drift, bias, corner-case inputs, or adversarial inputs in isolation, but rather as symptoms of a larger problem: the risk introduced by AI deployment.

AI risk is a probabilistic expectation

Defining AI risk as a mathematical object is instructive, because it gives us a recipe for action. AI risk can be expressed as a probabilistic expectation:

AI risk = (likelihood that an AI model makes an error) × (the potential effect of that error)

While we may not be able to control the potential effect of an error, we can reduce the likelihood of the error occurring. We accomplish this by developing appropriate engineering paradigms. This set of engineering paradigms is machine learning integrity. A back-of-the-envelope sketch of the expectation follows.
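As a concrete illustration of the expectation, the sketch below sums probability × effect over a few failure modes. The failure modes, probabilities, and dollar figures are entirely hypothetical, invented for illustration rather than taken from this post.

# A back-of-the-envelope sketch of AI risk as a probabilistic expectation.
# The failure modes, probabilities, and dollar effects are hypothetical.
failure_modes = {
    # mode: (likelihood of error, potential effect in dollars)
    "data drift":        (0.05,  200_000),
    "broken pipeline":   (0.01,  500_000),
    "adversarial input": (0.002, 2_000_000),
}

ai_risk = sum(p * effect for p, effect in failure_modes.values())
print(f"Expected AI risk: ${ai_risk:,.0f}")  # Expected AI risk: $19,000

The decomposition makes the engineering lever obvious: the effect terms are largely fixed by the business context, so reducing risk means driving down the error probabilities.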

The case for ML integrity

In traditional software engineering, the advent of the internet introduced vulnerabilities into software systems and led to the discipline of cybersecurity. Similarly, as machine learning matures into an engineering discipline, machine learning integrity is emerging.

A recent Gartner report found that 70% of enterprises have hundreds to thousands of AI models in production. Deploying complex models at that scale calls for systematic, automated solutions.

Data scientists have long recognized that, despite the reactive status quo they operate under, eliminating AI failures requires a proactive approach throughout the model lifecycle: one that builds integrity into their systems.

Introducing the first conference on ML integrity

The pursuit of integrity in machine learning is shared by the data science community and is best advanced through dialog. It’s in this spirit that we announce ML:Integrity, an annual conference hosted by Robust Intelligence that will bring together leading executives, policy makers, and academics to discuss the corporate and societal challenges of AI and how to overcome them.

This year’s ML:Integrity conference will be held virtually on Wednesday, October 19. The agenda is slated to include over a dozen talks on ML fairness, security, scale, regulation, and more. By bringing together senior leaders, we aim to create a forum that offers education, inspiration, and progress in furthering the cause of ML integrity. In other words, import ml:integrity. We hope to see you there!

To register for the conference and get more information, please visit the link below.

https://www.mlintegrityconference.com/
