Policymakers focus on making generative AI safer for all

Summary

  • The rapid rise of generative AI (GenAI) has prompted policymakers to move quickly to head off the large-scale, rapid spread of unintended consequences.
  • As they invest in GenAI applications, companies will have to contend with shifting AI regulations and expectations.
  • Companies that deploy GenAI in a responsible way in their closed environments can position themselves to be compliant with future regulations.

The issue

Policymakers are scrambling to set limits and increase accountability — treating generative AI with urgency because of the scale and speed of possible effects on broad swathes of society. Anyone with internet access can wield the power of GenAI, thanks to new interfaces developed for public models. And sizable corporate investments will likely accelerate its use throughout value chains and businesses.

GenAI is the subset of AI that generates text, code, images, video and other content (“outputs”) from data provided to it or retrieved from the internet (“inputs”) in response to user prompts.
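
To make the terms concrete, the prompt-to-output loop is typically a single API call. Here is a minimal sketch in Python using the OpenAI client library as one illustrative interface; the library, model name and key handling are assumptions, and any comparable provider SDK follows the same pattern.

    # Minimal illustration of the prompt ("input") to generated-content ("output") loop.
    # Assumes the OpenAI Python client (pip install openai); other provider SDKs are similar.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # keep credentials out of source code

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the underlying foundation model
        messages=[{"role": "user",
                   "content": "Draft a two-sentence summary of our AI use policy."}],
    )
    print(response.choices[0].message.content)  # the generated output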

The catalog of risks that concern regulators is long. More believable phishing emails and more convincing fake identities. Loss of control over personal data fed to models. Training data sources that are largely unknown and may include inaccurate or deliberately false content from the internet. Misuse of the models to create increasingly realistic misinformation. Biases built into models, even if inadvertent, resulting in discrimination. Displacement of people in the workforce. Concentration of enormous private power over new economic growth drivers.

The looming question is this: how can companies realize the value of GenAI in a way that aligns with responsible AI practices, even as regulations and expectations shift?

Existing regulations, such as the General Data Protection Regulation (GDPR) and equal employment opportunity laws, already apply to aspects of GenAI. Other laws, such as the EU's proposed AI Act, will need revisions to address the generative form of AI. In some cases, especially in the United States and China, new laws will be necessary to help close regulatory gaps exposed by GenAI uses.

But amid regulatory uncertainty, companies can control one thing: how they deploy GenAI in a responsible way in their closed environments, which can position them to be compliant with future regulations.

The regulators’ take

For GenAI-specific policy and regulatory proposals, here are the developments to watch. Companies building their own models or solutions based on models should note the emerging requirements specifically targeted to them. 

China: proposed government review of AI chat tools

In China, the nation’s top internet regulator — the Cyberspace Administration of China — proposed rules to require government review of AI chat tools even as some of its biggest online companies prepare to roll out, or are already offering, new GenAI features for consumers.

The proposed Chinese rules would prohibit using the technology to profile users, restrict AI-generated content, hold companies accountable for protecting personal data, and require that developers train their GenAI models using only data compliant with Chinese law.

Europe: AI Act updates, UK guidance

The European Union recently proposed broadening its Artificial Intelligence Act to regulate “general purpose artificial intelligence,” of which GenAI is a subset. New obligations for providers of foundation models are intended to ensure robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. Providers would need to assess and mitigate risks, comply with design, information and environmental requirements, and register their models in the EU database.

Generative foundation models, such as GPT-n, BERT, DALL-E and LLaMA, would face additional transparency requirements: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training. A key milestone is coming soon: the European Parliament is expected to vote on the draft AI Act during its 12-15 June session.

Meanwhile, the UK Information Commissioner’s Office has published guidance in the form of eight questions that developers and users should ask. It’s a reminder that organizations, developers and users should consider their data protection obligations from the outset, following a data protection by design and by default approach.

United States: Executive, legislative and enforcer activities

The Blueprint for an AI Bill of Rights is noteworthy for synthesizing the different views that have emerged about responsible AI. But lacking a national AI law, the United States is still moving toward setting policy for GenAI use, seeking a balance between encouraging innovation and identifying and mitigating potential harms.

Executive agenda

The White House has secured a voluntary commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles, at the AI Village at DEF CON 31. This exercise will allow thousands of community partners and AI specialists to evaluate these models against responsible AI principles and practices.

Also, the White House has invited public input on GenAI, which will be considered by the President’s Council of Advisors on Science and Technology (PCAST) working group on GenAI.

The White House’s focus is clear in a succession of announcements, the latest of which include a 2023 update of the National AI R&D Strategic Plan, a request for information from the Office of Science and Technology Policy (OSTP) and the release of a new report on AI and the future of teaching and learning.

Enforcement agencies, including the Federal Trade Commission (FTC) and the Department of Justice, recently emphasized their commitment to protecting Americans against discrimination and bias engendered by automated systems, including AI. Their joint statement says the agencies will “monitor the development and use of automated systems and promote responsible innovation” and “vigorously use our collective authorities to protect individuals’ rights.”

Meanwhile, the FTC is watching closely how companies use AI to interact with consumers, noting its dual mandate of promoting fair competition and protecting Americans from unfair or deceptive practices.

Perhaps most significant of all, the Department of Commerce recently issued a public request for comment on a proposed Accountability Policy for AI Development. It seeks input on policies that can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems. It likens providing assurance that an AI system is trustworthy to the way financial audits create trust in the accuracy of a business’ financial statements. The accountability policy would govern AI in all its forms as well as the broader systems around it. Comments are due June 12.

Legislative agenda

Members of Congress in both parties are working on a variety of bills and policies to address emerging GenAI concerns. Senate Majority Leader Chuck Schumer (D-NY) is reportedly distributing a framework among AI specialists that outlines steps toward responsible AI and has announced a legislative task force composed of senators. Sen. Mark Warner (D-VA) has sent letters to AI firms asking about their technologies’ security. Others in Congress have called for task forces, and still others have introduced legislation aimed at preventing AI from launching nuclear weapons.

Absent congressional agreement, individual states are likely to adopt their own laws, as has occurred in the data privacy realm, or they may opt to apply existing data privacy laws to GenAI. For example, the Connecticut Senate just approved a bill that creates an Office of Artificial Intelligence to oversee government use and development of AI. 

Your next move

GenAI-specific risks are uncharted territory for most organizations, but agreement on the broad principles of responsible AI has advanced in recent years.

The concept of Responsible AI has been adopted by many businesses to drive stronger AI governance and compete more effectively in the market. Financial institutions have applied model risk management standards to their AI and model use for decades. Established frameworks, such as ISO standards and the NIST AI Risk Management Framework and its companion playbook, are available. And more than 800 national AI policies have been adopted across 69 countries and territories and the EU.

1. Build on your existing programs. Organizations with strong compliance and privacy programs and AI governance should be well positioned to comply with whatever regulations emerge. Existing privacy laws and statutes, as well as the federal AI blueprint, already signal what regulators will likely want: risk-based governance, transparency and explainability. Update your AI risk management programs for GenAI considerations.
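
As one way to make this concrete, a GenAI-aware risk register can record, per use case, the attributes regulators are signaling they will want. The sketch below is a minimal illustration in Python; the field names and risk tiers are hypothetical, not a regulatory taxonomy.

    # Hypothetical sketch of one entry in a GenAI risk register.
    # Field names and risk tiers are illustrative assumptions, not a standard.
    from dataclasses import dataclass, field

    @dataclass
    class GenAIUseCase:
        name: str                       # e.g., "customer-support draft replies"
        owner: str                      # accountable executive or function
        model: str                      # foundation model or vendor service used
        risk_tier: str                  # e.g., "low", "medium", "high"
        personal_data_used: bool        # True triggers a privacy review
        human_review_required: bool     # human-in-the-loop control
        explainability_notes: str = ""  # how outputs can be explained to stakeholders
        mitigations: list = field(default_factory=list)

    registry = [
        GenAIUseCase(
            name="marketing copy drafting",
            owner="CMO",
            model="gpt-3.5-turbo",
            risk_tier="low",
            personal_data_used=False,
            human_review_required=True,
            mitigations=["disclosure label on AI-drafted content"],
        ),
    ]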

2. Assemble your risk executives to create plans to manage the risks. A risk-based approach to GenAI can start you on the right digital foot with regulators, consumers and other stakeholders. This moment calls for an enterprise-wide playbook on responsible GenAI because of the scope, novelty and breadth of risks.

Questions to put to your risk executives:

  • Can your chief data officer (CDO) improve your data hygiene, confirming that your data is vetted, protected, and visible and available to its owners?
  • Can your CDO improve your “data nutrition,” making sure to feed only verified, consistent data into your GenAI algorithms? (One such guardrail is sketched after this list.)
  • Can your CISO fortify cyberdefense to protect against large-scale, more credible phishing?
  • Is your chief legal officer prepared for legal risks that will likely be created or exacerbated by GenAI?
  • Is your CFO vigilant to “hallucination” risk, such as fabricated financial facts or errors in reasoning, when using GenAI for financial reporting?
  • Can your chief compliance officer quickly assess the compliance posture of your GenAI deployments?
  • Can the head of Internal Audit design and adopt new audit methodologies, new forms of supervision and new skill sets to create a risk-based audit plan specific to GenAI?
  • Can your risk professionals exercise much needed influence in managing the risks of powerful GenAI applications?
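
To illustrate the “data nutrition” question, here is a minimal, hedged sketch of one pre-prompt guardrail: scrubbing obvious personal identifiers from text before it reaches a GenAI model. The patterns are simplistic examples and far from exhaustive; production controls should rely on vetted PII-detection tooling.

    # Illustrative pre-prompt guardrail: redact obvious personal identifiers
    # before text reaches a GenAI model. These patterns are simplistic examples;
    # real deployments should use vetted PII-detection tooling.
    import re

    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
        (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # US SSN format
    ]

    def scrub(text: str) -> str:
        """Return text with obvious personal identifiers replaced by placeholders."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    print(scrub("Reach Jane at jane.doe@example.com or 555-867-5309."))
    # -> "Reach Jane at [EMAIL] or [PHONE]."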

3. Incorporate GenAI into your AI governance. Having an effective AI governance strategy will be vital because, beyond the risk professionals, many people inside and outside your organization can influence your ability to use GenAI responsibly. They include data scientists and engineers, data providers, specialists in the field of diversity, equity, inclusion and accessibility, user experience designers, functional leaders and product managers.

The foundation for Responsible AI is an end-to-end enterprise governance framework, focusing on the risks and controls along your organization’s AI journey—from top to bottom. PwC developed robust governance models that can be tailored to your organization. The framework enables oversight with clear roles and responsibilities, articulated requirements across three lines of defense, and mechanisms for traceability and ongoing assessment.
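
As a minimal sketch of what a traceability mechanism can look like in practice, each GenAI interaction can be written to an append-only audit log. The schema and flat-file storage below are simplifying assumptions for illustration; an enterprise deployment would use centralized, access-controlled logging.

    # Illustrative traceability mechanism: append one JSON record per GenAI
    # interaction so usage can be reviewed and assessed later.
    # The schema and flat-file storage are simplifying assumptions.
    import hashlib
    import json
    from datetime import datetime, timezone

    def log_interaction(user: str, model: str, prompt: str, output: str,
                        path: str = "genai_audit.log") -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "model": model,
            # Hash rather than store raw text, since prompts may contain personal data.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_interaction("analyst-42", "gpt-3.5-turbo",
                    "Summarize the Q2 risk report", "Q2 risks center on ...")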

4. Consider how you can participate in the regulatory process. Comments are being sought on the Accountability Policy for AI Development by June 12 and on US national AI priorities to the OSTP by July 3. The public is also invited to give input to the PCAST working group on GenAI. Make your voice heard. Get to know the regulators. They ask for input because they really want it.

