OMB Memo on Agency Regulation of AI

Published: January 22, 2020

Federal Market Analysis | Artificial Intelligence/Machine Learning | OMB | Policy and Legislation

OMB issues a draft memo outlining the guidelines agencies must follow when forming regulatory and non-regulatory approaches to AI.

A little over a year ago, the Executive Order (EO) on Maintaining American Leadership in Artificial Intelligence was released, setting the country's direction in AI R&D and implementation. The EO called on OMB to issue a memorandum guiding agencies in regulatory and non-regulatory approaches to private-sector AI. Earlier this month, OMB delivered on that requirement.

The draft Guidance for Regulation of Artificial Intelligence Applications lays out the considerations agencies must take into account when forming regulatory and non-regulatory approaches to the design, development, and deployment of AI systems.

The memo specifically warns agencies against overly burdensome regulations that could stifle the advancement of the technology, putting at risk the security, societal, and economic benefits AI presents. NIST emphasized the same message last year in its plan for AI standards in the federal space.

Where non-regulatory approaches may be appropriate, the memo directs agencies to consider options such as existing statutory authority, pilot programs, and consensus standards in their approach to AI. For regulatory approaches, the draft memo describes the following 10 principles agencies must abide by when it comes to AI:

  1. Public Trust in AI. Implement reliable, trustworthy AI applications to help boost public trust in AI. The response to AI applications that pose privacy and other risks must depend on the nature of the risk and the available mitigations.
  2. Public Participation. To the extent allowable, provide opportunity for public participation in all stages of the rulemaking process and promote awareness of AI standards once they are developed.
  3. Scientific Integrity and Information Quality. Leverage best practices for scientific and technical information, including transparency about the AI application's strengths, weaknesses, and intended outcomes, and ensure the integrity of the data used to train the AI system.
  4. Risk Assessment and Management. Develop a risk-based approach that determines acceptable levels of risk, weighed against expected costs and benefits.
  5. Benefits and Costs. Consider the full benefits, costs, and distributional effects of implementing AI, as well as of declining to implement it.
  6. Flexibility. Pursue approaches that are adaptable to the rapid changes and updates in AI applications.
  7. Fairness and Non-Discrimination. Weigh the unlawful, unfair bias an AI application may reduce or eliminate against any unintended discrimination the system's outcomes may introduce.
  8. Disclosure and Transparency. To help increase public trust in AI, consider what extent of disclosure is appropriate when AI is in use.
  9. Safety and Security. Encourage the development of AI systems that are safe and secure, protecting the confidentiality and integrity of the system and its data and accounting for the risk that AI applications could be put to dangerous use.
  10. Interagency Coordination. Agencies must coordinate with one another to share experiences and common practices to promote consistency and optimization of AI policies and implementation.

The memo will be available for public comment through March 13. Once finalized, the memo requires agencies to submit to OMB plans for any regulatory approaches they are planning or considering for the development and use of AI applications.