The Executive Order is guided by eight principles and priorities:
- AI must be safe and secure, which requires robust, reliable, repeatable and standardized evaluations of AI systems, as well as policies, institutions and, as appropriate, other mechanisms to test, understand and mitigate risks from these systems before they are put to use.
- The US should promote responsible innovation, competition and collaboration through investments in education, training, R&D and capacity, while addressing questions of intellectual property rights and preventing unlawful collusion and monopolization of key assets and technologies.
- The responsible development and use of AI require a commitment to supporting American workers through education and job training and to understanding the impact of AI on the labor force and workers’ rights.
- AI policies must be consistent with the advancement of equity and civil rights.
- The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
- Americans’ privacy and civil liberties must be protected by ensuring that the collection, use and retention of data are lawful, secure and protective of privacy.
- It is important to manage the risks from the federal government’s own use of AI and increase its internal capacity to regulate, govern and support responsible use of AI to deliver better results for Americans.
- The federal government should lead the way to global societal, economic and technological progress including by engaging with international partners to develop a framework to manage AI risks, unlock AI’s potential for good and promote a common approach to shared challenges.
Notably, the EO uses the definition of “artificial intelligence,” or “AI,” found at 15 U.S.C. § 9401(3):
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
The scope of the EO is therefore not limited to generative AI; any machine-based system that makes predictions, recommendations or decisions falls within its reach.
As expected, NIST is tasked with a leading role in implementing many of the EO’s directives. NIST is called upon to lead the development of key AI guidelines, and the NIST AI Risk Management Framework is repeatedly referenced in the Executive Order. However, the EO adopts the “all-of-government approach” that has become a trademark of the Biden administration, tapping agencies and offices across the entire administration to address the use of AI technologies in their areas of expertise, with numerous actions specified in the near and medium term.
With Congress continuing to study the policy implications raised by AI technologies, this Executive Order and the actions that follow will be the cornerstone of the federal regulatory approach in this space for now. Of course, these actions are limited to the authorities of the executive branch, so the EO concentrates its mandates on programs administered by federal agencies, requirements for AI systems procured by the federal government, mandates related to national security and critical infrastructure, and the launch of potential rulemakings that would govern regulated entities. Like all executive orders, this EO cannot create new laws or regulations on its own, but it can trigger the beginning of such processes.
Key provisions of the EO are summarized below.