The democratization of AI is upon us as new generative AI tools like OpenAI’s ChatGPT and DALL-E put the power of AI into the hands of everyday users.
In the first five days following its release in November 2022, more than a million people logged into ChatGPT’s platform to test its capabilities. Users are eager to experiment with how these generative AI tools can write code, craft essays, create art, design blueprints, sketch package designs, create virtual worlds and avatars in the metaverse, troubleshoot production errors and more. They’re also learning to refine their prompts, the instructions they give the tool, in iterative cycles to achieve better results.
While the positive use cases for generative AI are staggering, there’s also potential for misuse and harm. As users began exploring the new tool, for instance, many discovered they could use it to generate malware, write phishing emails and spread propaganda. The same tools were also found to “hallucinate” facts and reinforce perspectives from misinformation campaigns.
As generative AI becomes increasingly popular and widespread, questions about who is responsible for mitigating the associated risks will become unavoidable.
More than 800 AI policy initiatives are pending in 69 countries, but their application to generative AI models is not yet settled. The Biden administration’s blueprint for an AI Bill of Rights, for example, can help organizations and developers manage risks for consumer-facing AI, but it doesn’t address the unique aspects of generative AI tools. The European Union has announced its intention to regulate generative AI, which it includes within general-purpose AI systems, under the EU AI Act, with a particular focus on the data used to train these models.
Existing and proposed AI regulations cover several specific use cases (e.g., data privacy, discrimination, surveillance) and specific decisions (e.g., hiring, lending, online recommendations, public contracting), and most respond to the potentially harmful effects of AI on people and societies.
As users continue to discover new applications for these tools, new risks will likely emerge. The risks associated with generative AI are unprecedented and still growing; they include:
Even without regulations to guide them, some companies are voluntarily adopting responsible AI practices, including OpenAI, the largest developer of generative AI today. When users reported that ChatGPT was generating discriminatory answers to prompts, for instance, the developers swiftly disabled the prejudicial responses. OpenAI also employs teams dedicated to tagging harmful content so that similar results can be filtered out of future outputs.
Capitalizing on opportunities while managing risks will require action from three stakeholder groups.
This article first appeared in the January 2023 edition of The Next Move, PwC’s insights on fast-moving policy and regulatory developments affecting technology.