October 27, 2022 - 7 minute read

Moving in the Right Direction: AI Bill of Rights

Overview of the Blueprint

On October 4th, the White House unveiled its long-awaited “AI Bill of Rights,” marking an important day for US policy and the day the US joined what now feels like a crowded field. Until now, the US has lagged behind other key players in the space in offering guidance around AI and automated systems. With this Blueprint for an AI Bill of Rights, the White House Office of Science and Technology Policy introduces five principles that should be incorporated into AI systems to protect fundamental rights and democratic values.

The US is now calling for artificial intelligence systems to be developed with built-in protections; until now, AI development and implementation has mostly been a free-for-all. This matters because AI has the power to influence and determine consequential aspects of people’s lives. At its core, the bill protects fundamental civil rights and democratic values by way of non-binding proposals without formal enforcement mechanisms, which essentially means it offers a framework that companies can opt in or out of. The challenges of AI use that this bill aims to protect against are shared concerns across all industries.

In its announcement of the blueprint, the White House shared the sentiment that many things pose challenges to democracy today, among them the use of technology, data, and automated systems. The Blueprint is a promising step, but by no means the end of what is needed from federal-level regulation in terms of legally-binding action. As I mentioned, the release of the White House’s Bill of Rights is a delayed reaction compared to many other key players in the space: the EU released its list of guidelines in 2019, and most big tech companies have had their own AI principles and practices in place for a few years now, like IBM’s set of principles from 2017 and Google’s AI principles published in 2018. Additionally, the bill addresses important high-impact areas of AI use, like healthcare, employment, and financial services, but doesn’t give sufficient attention to other important areas, like education and law enforcement.

Five Key Principles

The five principles are an effort to ensure safety and transparency, limit algorithmic discrimination, and give users control over their data. Below is a quick overview of what the blueprint states and what it involves in practice.

1. Safe and effective systems
  • The blueprint states: automated systems should be “developed with consultation from diverse communities, stakeholders, and domain experts” and should “undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring.”
  • In practice this means: people should be protected from unsafe automated systems, and companies, both model developers and implementers, should put pre-deployment testing and validation, risk mitigation strategies, and ongoing monitoring in place.
2. Algorithmic discrimination protections
  • The blueprint states: “You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”
  • In practice this means: people should not face discrimination enabled by algorithmic systems based on protected attributes, and companies should test for bias and fairness to protect against discrimination and disparate impact (a minimal sketch of one such check follows this list).
3. Data privacy
  • The blueprint states: “you should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.” It also encourages developers of automated systems to request permission regarding “collection, use, access, transfer, and deletion of your data[.]”
  • In practice this means: people should be protected from abusive data practices, and companies should implement sound privacy practices and data quality assurance.
4. Notice and explanation
  • The blueprint states: you “should know that an automated system is being used, and understand how and why it contributes to outcomes that impact you.” The principle also encourages clear documentation of how the system works, the context in which it’s used, and its limitations, and it calls for this reporting to be made public.
  • In practice this means: people should be notified when an AI system is in use and understand how it’s being used, and companies should provide sufficient explainability and transparency around the system’s use.
5. Human alternatives, consideration, and fallback
  • The blueprint states: “[y]ou should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.” It also encourages that clear information be made available about how to opt out of automated systems in favor of human alternatives.
  • In practice this means: people should be able to opt out of AI system use and, where appropriate, have access to human judgment instead. For companies, this means ensuring accountability and providing access to human intervention when necessary.
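
To make principle 2 concrete, here is a minimal sketch of the kind of bias test a company might run before deployment. It uses the “four-fifths rule” for disparate impact as a rough heuristic; the data, column names, and 0.8 threshold are illustrative assumptions, not anything the Blueprint prescribes.

```python
# Minimal disparate impact check (a sketch, not a full fairness audit).
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical model decisions: 1 = approved, 0 = denied.
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 for this toy data
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Potential disparate impact: review before deployment.")
```

A real audit would go well beyond a single ratio (confidence intervals, multiple protected attributes, intersectional groups), but even a check this simple catches the kind of disparate impact the principle describes.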

Key Considerations to Instill ML Integrity

The White House Office of Science and Technology Policy has had a clear vision for this Blueprint for an AI Bill of Rights since announcing last year that a bill was needed to respect fundamental civil rights and protect democratic values amid the widespread adoption of AI.

The blueprint is most impactful for the employment, housing, healthcare, and financial services industries, and it successfully provides thorough principles and lays the groundwork for federal agencies to bring those principles into practice. It is not, however, a legally binding document, and it doesn’t cover all of the important sectors that need attention.

Data scientists have long recognized that eliminating AI failure requires a proactive approach throughout the model lifecycle, one that builds integrity into their systems. So, what does Robust Intelligence mean by ML Integrity? In short, ML Integrity refers to models that work as intended. To expand on that a little: Machine Learning Integrity (MLI) is a set of practices that aims to eliminate risk throughout the ML lifecycle through data quality, model performance, fairness, security, and transparency. MLI needs to be considered at each stage of the lifecycle, from data preparation to model development to model operations, because failures in any one area can have critical downstream consequences for your business and customers.
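
As a rough illustration of what checks at each lifecycle stage could look like in code, here is a minimal sketch. The stage names, checks, and thresholds are illustrative assumptions on my part, not a prescribed framework.

```python
# A sketch of lifecycle-stage integrity checks; data and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable

import numpy as np
import pandas as pd

@dataclass
class IntegrityCheck:
    stage: str   # "data preparation", "model development", or "model operations"
    name: str
    passed: Callable[[], bool]

def run_checks(checks: list) -> None:
    for check in checks:
        print(f"[{check.stage}] {check.name}: {'PASS' if check.passed() else 'FAIL'}")

# Hypothetical training data and metrics for illustration.
train = pd.DataFrame({"income": [40_000, 55_000, None, 72_000], "label": [0, 1, 1, 1]})
val_accuracy = 0.91
live_preds = np.array([0, 1, 1, 0, 1])  # recent production predictions

checks = [
    IntegrityCheck("data preparation", "at most 5% missing values",
                   lambda: train.isna().mean().max() <= 0.05),
    IntegrityCheck("model development", "validation accuracy above 0.85",
                   lambda: val_accuracy > 0.85),
    IntegrityCheck("model operations", "live predictions not all one class",
                   lambda: 0 < live_preds.mean() < 1),
]
run_checks(checks)  # the data-quality check fails here, flagging the missing value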

People in industry should expect more regulations to be introduced and enforced in the future, so it’s best to be proactive and audit and manage risk across the AI system. It’s good data science practice to build pre-deployment testing and ongoing monitoring into an ML system, and to maintain proper reporting and documentation of the models and techniques used. A comprehensive, end-to-end testing framework can help instill integrity in ML models and systems right from the start.
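
For the monitoring half of that practice, one common heuristic is the population stability index (PSI), which compares the feature distribution a model saw in training with what it sees in production. Below is a minimal sketch under assumed data; the 0.2 alert threshold is a widely used rule of thumb, not a standard mandated anywhere.

```python
# Minimal drift monitor using the population stability index (PSI).
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training) sample and a live (production) sample."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    ref_pct = np.clip(ref_pct, 1e-6, None)   # avoid log(0) in empty bins
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 10_000)    # distribution at training time
production_feature = rng.normal(0.5, 1.2, 10_000)  # shifted distribution in production

score = psi(training_feature, production_feature)
print(f"PSI: {score:.3f}")
if score > 0.2:  # a common rule-of-thumb alert threshold
    print("Significant drift detected: retrain or escalate for human review.")
```

Run on a schedule against each model input, a check like this gives the ongoing monitoring that both good practice and the blueprint's first principle call for.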

The blueprint signals a significant push from the White House to guide private industry and state and local governments in developing and adopting practices for the responsible use of automated systems. All organizations using AI stand to benefit from proactively building systems with ML Integrity.
