
Understanding AI Policy: Executive Order 14110 and What It Means for Federal AI Use

Artificial Intelligence Specialist on a Government Contract


Almost a year ago, in October 2023, President Biden signed a critical executive order shaping the future of AI in the United States—Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This policy highlights the government’s responsibility to manage the risks associated with AI, particularly when it is used to make decisions that affect people’s lives. The order sets the stage for federal agencies to lead by example in the responsible development and deployment of AI technologies.

One of the major outcomes of this order is a directive for the Office of Management and Budget (OMB) to release guidance on the federal government’s use of AI. To fulfill this, OMB drafted a policy titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This document outlines key actions that will guide federal agencies in using AI to enhance services and ensure equitable service delivery to the American public.

Let’s take a quick look at the top 10 questions federal technology officials and contractors need to answer to understand and implement this AI policy.


1. What’s in OMB’s Proposed AI Policy?

The draft policy is organized around three pillars, reflected in its title:

1. Strengthening AI governance
2. Advancing responsible AI innovation
3. Managing risks from the use of AI

These pillars reflect a commitment to both innovation and responsible AI use.


2. What Will Chief AI Officers (CAIOs) Do?

CAIOs will be the primary leaders within their agencies, responsible for coordinating AI efforts and ensuring the responsible management of risks. Each agency can either appoint a new official or assign the role to an existing expert in AI. For major agencies, the CAIO must be a senior-level position, ensuring they have the authority to implement the necessary changes.

CAIOs will work closely with CIOs, CDOs, and CTOs to integrate AI into broader technology strategies, fill existing gaps, and mitigate risks like algorithmic discrimination.


3. How Should AI Governance Bodies Be Engaged?

Agencies must establish AI Governance Boards, led by their Deputy Secretary and vice-chaired by the CAIO. These boards will meet quarterly to oversee AI use and risk management. However, agencies are encouraged to leverage existing governance structures to reduce the burden of creating new bodies.


4. Do AI Risk Management Requirements Apply to All AI Use Cases?

No, OMB’s requirements only apply to AI that significantly impacts safety or individual rights. Everyday AI applications like noise-canceling software don’t need the same level of oversight. The policy ensures that resources are directed toward AI use cases that pose real risks to public rights and safety.


5. How Do You Know if Your AI Use Impacts Rights or Safety?

AI is categorized into two types:

1. Safety-impacting AI, whose outputs can affect human life, well-being, or critical infrastructure
2. Rights-impacting AI, whose outputs influence decisions about individuals’ civil rights, civil liberties, privacy, or access to government benefits and services

OMB’s draft guidance offers specific examples to help agencies identify when their AI applications fall under these categories.


6. How Will AI Risk Management Align with Authorization to Operate (ATO) Processes?

Since AI is a form of software, it will still need to go through standard ATO processes for system security. However, the guidance adds a new category of AI-specific risks that must be evaluated, including transparency, fairness, and accountability. CAIOs will collaborate with CIOs to ensure that AI use is secure and complies with these additional safeguards.


7. What Does the Policy Say About Generative AI?

Generative AI, like chatbots and content creators, comes with unique risks. Agencies will need to ensure that proper safeguards are in place before deploying these technologies. The goal is to allow controlled use based on risk assessments rather than banning the technology outright.


8. What Resources Are Available to Help Agencies Implement the Policy?

EO 14110 outlines several resources to assist with policy implementation, including a government-wide surge in AI talent hiring to bring technical expertise into agencies and new guidance and standards from NIST to support risk management.


9. Will This Policy Apply to Contractors?

Yes, the guidance will also apply to contractors working on behalf of the federal government. Additional guidance for contractors is expected to be released, ensuring they meet the same standards.


10. What’s Next?

OMB is reviewing public comments and working on a revised policy. Once finalized, the guidance will be issued within 150 days, setting the framework for the federal government’s AI use in the coming years.


AI is transforming how the government serves its people, and EO 14110 ensures that this transformation happens responsibly. Agencies will play a pivotal role in both leveraging AI’s potential and managing its risks, and it’s crucial to stay informed about the latest developments in AI policy.

Stay tuned for updates and further guidance as the government continues to lead the way in the responsible use of AI.
