
PwC’s Artificial Intelligence services
Unlock the full potential of artificial intelligence at scale—in a way you can trust.
The Biden administration issued its long-awaited executive order (EO) on artificial intelligence (AI). The order is the government’s biggest step toward regulating the fast-moving technology. It calls for new standards, funding, training and enforcement to mitigate AI risks, while also paving the way for the technology's widespread adoption.
The EO builds on the administration’s 2022 Blueprint for an AI Bill of Rights, which noted the technology’s many risks — including those to the workforce, privacy, critical infrastructure, national security and democracy — but also its potential for good. The order also builds on the voluntary commitments the White House more recently secured from executives of major AI companies, who pledged, among other measures, to conduct internal and external “red-teaming” (simulated attacks) on their AI models, share safety data with third parties and develop technology to identify AI-generated content. The EO seeks to address these and many other risks.
Who needs to pay attention? While companies that create “foundation” AI models and those that serve the federal government are most immediately affected, the EO is a bellwether of how AI may be regulated in the future. This will likely affect all companies. In addition, the order may touch them in other ways, for example, by influencing how AI models and their use can drive healthcare advances and climate solutions, shape consumer expectations for transparency and accountability, and affect the workplace.
Noting AI's “extraordinary potential for both promise and peril,” the Biden administration cited the urgent need for a partnership between and among government, industry, academia and civil society to mitigate the technology’s substantial risks and harness its capacity for good.
The EO represents a coordinated approach spanning the entire federal government to lead in this frontier space. Drawing heavily on the NIST AI risk management framework, the order advances AI’s development and use according to eight guiding principles and priorities. By giving equal weight to innovation and safeguards, the EO makes the government’s stance clear: responsible AI is essential.
While targeted in many respects and lacking the force of law for private companies, the EO extends far beyond the government’s own use of the technology. It sets policy objectives for federal regulatory agencies and will likely have far-reaching consequences.
To prepare for its implementation, companies should understand the potential direct and secondary impacts, identify gaps and opportunities, and plan accordingly.
The EO on AI is a clear signal of how active the government intends to be in this space. Over the coming weeks, expect more specificity as individual agencies flesh out its policy directives. In the interim, adopting sound risk management grounded in responsible AI principles will position you to meet the requirements the order envisions.