Responsible AI: Building trust, shaping policy

August 16, 2023

With the use of generative AI accelerating, it’s important to focus on how business leaders can get the most out of their tech investment in a trusted and ethical way. In this episode, we dive into responsible AI – what it is, why it’s important and how it can be a competitive advantage.

To cover this important topic, PwC’s host, Joe Atkinson, is joined by leading AI experts and members of the National AI Advisory Committee to the President and the White House – Miriam Vogel, President and CEO of EqualAI, and Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy and Director of the Block Center for Technology and Society at Carnegie Mellon University.

Find out more about how PwC can make generative AI work for you.

PwC Pulse podcast series landing page


You may also be interested in listening to our PwC Pulse podcast episode on Generative AI: What it really means for business.


About the podcast participants

Joe Atkinson is the Global Chief AI Officer at PwC. Prior to that he was a member of PwC's US Leadership Team, responsible for leading the firm through the next wave of digital transformation. This included accelerating growth in go-to-market products and digital solutions, as well as firmwide technology capabilities.

Previously serving as the Chief Digital Officer, Joe defined and executed the vision to digitally enable the firm, and better leverage technology and talent to bring greater value (and experience) to the firm’s people and clients. Prior to that, Joe was the leader of PwC’s Technology, Media and Telecommunication consulting business for the US, Japan and China, and he has built a reputation as a trusted, pragmatic and thoughtful advisor to clients as they navigate accelerating technology changes and increasing regulatory complexity.

Miriam Vogel is the President and CEO of EqualAI, a non-profit created to reduce unconscious bias in artificial intelligence (AI) and promote responsible AI governance. Miriam co-hosts a podcast, In AI We Trust, with the World Economic Forum and also serves as Chair of the recently launched National AI Advisory Committee (NAIAC), mandated by Congress to advise the President and the White House on AI policy. Miriam teaches Technology Law and Policy at Georgetown University Law Center, where she serves as chair of the alumni board, and serves on the board of the Responsible AI Institute (RAII). Miriam also serves as a senior advisor to the Center for Democracy and Technology (CDT).

Ramayya Krishnan is the W. W. Cooper and Ruth F. Cooper Professor of Management Science and Information Systems at the H. John Heinz III College and the Department of Engineering and Public Policy at Carnegie Mellon University. A faculty member at CMU since 1988, Krishnan was appointed Dean when the Heinz College was created in 2008. He was reappointed upon the completion of his first term as Dean in 2014 and, after a successful second term, again in 2020. INFORMS, the Institute for Operations Research and the Management Sciences, the leading international society of scholars and practitioners of analytics, recognized the Heinz College in 2016 with the UPS George D. Smith Prize for educational excellence. Krishnan was elected in 2017 to serve as the 25th President of INFORMS. He was appointed to serve on the National AI Advisory Committee to the President and the White House in April 2022.

Episode transcript

Find the episode transcript below.

ANNOUNCER:

00:00:02:00 Welcome to PwC Pulse, a podcast to provide insights to help you solve today's business challenges.

JOE ATKINSON:

00:00:09:20 I'm Joe Atkinson, Vice Chair and Chief Products and Technology Officer at PwC. And I also have the privilege of co-chairing PwC’s Executive AI Strategy Council. There's no question that generative AI is top of mind for executives. While it has the capabilities to transform businesses and the way we work, it's important to also think about the potential risks and how to implement it in a secure way that can build trust across the organization.

00:00:35:15 Today, we'll dive into responsible AI: what it is and why it's important. We want to focus on how business leaders can get the most out of their tech investment in a trusted and ethical way. To cover this important topic, I'm joined by leading AI experts and members of the National AI Advisory Committee to the President and the White House.

00:00:57:06 Miriam Vogel, president and CEO of EqualAI, and Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy and Director of the Block Center for Technology and Society at Carnegie Mellon University. Miriam and Krishnan, welcome to the podcast.

RAMAYYA KRISHNAN:

00:01:13:15 Hi, Joe. Thank you again for the opportunity.

MIRIAM VOGEL:

00:01:15:26 Great to be here. Thanks, Joe.

JOE ATKINSON:

00:01:17:19 So before we dive in, we've got a couple of rapid-fire questions just to get to know our guests a little bit better. So, Miriam, let me start with you. Let's go with favorite vacation spots.

MIRIAM VOGEL:

00:01:27:02 Really, any time with family and friends is precious, wherever we are, I would say. On the one hand, we grew up taking our daughters to the Cape. On the other hand, we've taken them on adventures to see a new culture and a new place. We went to Costa Rica and Israel, and exploring the world through their eyes is quite a gift.

JOE ATKINSON:

00:01:47:07 Love that. So let me turn it over to you, Krishnan: favorite vacation spot?

RAMAYYA KRISHNAN:

00:01:51:09 I'm going to pick up on the second part of Miriam's answer. We were in Norway, in Bergen; the fjords are amazing.

JOE ATKINSON:

00:01:58:11 Yeah, I've never seen the fjords in Norway, but it is on my wife's list, so I think I have to put it on the list. And let me go to question two: a show or movie that you recommend.

00:02:07:11 There are a couple of big blockbusters out this summer, but is there anything else you're watching? So, Krishnan, I'll start that one with you.

RAMAYYA KRISHNAN:

00:02:12:25 I'd like to watch these two movies, and they represent the breadth of my interests. One is Oppenheimer and the other is Mission: Impossible.

JOE ATKINSON:

00:02:19:03 I love it. I wasn't sure what the second one was going to be, but we'll go with Oppenheimer and Mission: Impossible. Miriam, what's a show or a movie that you would recommend to our listeners?

MIRIAM VOGEL:

00:02:28:03 I would say, in this field, I think that Black Mirror is required watching; seeing how all these things can play out, and how life has really followed entertainment and vice versa, is very important. On the other hand, I would say one of my all-time favorites is The Wire. It gives you a sense of the complexity of each human and each person's role in the world, and how different it is depending on where they sit and where they are in that moment.

JOE ATKINSON:

00:02:54:01 I'm surprised there are no Ted Lasso fans.

MIRIAM VOGEL:

00:02:56:18 Love it.

JOE ATKINSON:

00:02:57:20 Okay. All right.

RAMAYYA KRISHNAN:

00:02:58:13 I loved it.

MIRIAM VOGEL:

00:02:58:13 Which is a joy.

JOE ATKINSON:

00:02:58:13 All right. Well, that gets us warmed up. Now I think it would just be helpful for our listeners to get a little bit more context about your roles and responsibilities. So, if you don't mind, Krishnan, let me start with you.

RAMAYYA KRISHNAN:

00:03:08:04 Thank you, Joe. So, I'm at Carnegie Mellon University, as you mentioned, and Carnegie Mellon is a home to AI; it's one of the places where AI was created. I am dean of a college that is home to both a school of information systems and a school of public policy. So, this intersection of IT and public policy is very much aligned with the sets of issues that I'm personally interested in and that the school and the students and faculty are really engaged in.

00:03:37:23 So the first thing we are really focused on relates to educating the next generation of students to have the requisite skills to contribute to the AI economy, to the AI ecosystem, and to AI policy.

00:03:51:11 And in this it's both the knowledge of the technology, but equally knowing and understanding how to think carefully, in a systems kind of way, about the sets of issues that AI brings to the fore.

00:04:02:00 The second aspect has to do with my personal research interests, and these are about Gen AI, its reliability, and how to deploy it responsibly, working with a whole bunch of colleagues at the Block Center and at the center on digital innovation and transformation that we have with PwC at Carnegie Mellon.

00:04:22:00 On the policy front, I've been actively engaged, and I'm privileged to work with Miriam on the National AI Advisory Committee, NAIAC, but also in other policy-related initiatives at the state and local government level, all of which are really focused on these questions of how we build trust in AI.

JOE ATKINSON:

00:04:45:19 Krishnan, I love that connection of your personal research agenda and, of course, the important role you and the faculty and leadership team play in educating students. We also could not be more proud of the association we have with CMU in the work that we're doing in the Innovation Center. So, Miriam, let me go over to you, and maybe just a little bit about your role, both for organizations and listeners that might not be familiar with EqualAI, and then the really important role you play advising the White House.

MIRIAM VOGEL:

00:05:08:01 So EqualAI was created almost five years ago with the express purpose of reducing bias and other harms in artificial intelligence, which was a lot harder to explain to people five years ago. And it's been so interesting to see how people's response to us and what we're doing has really transformed over the years.

00:05:25:11 I would say that our key focus is helping people trust AI so that the economy can continue to accelerate, and AI use can continue to accelerate and create new opportunities.

00:05:39:03 But a key part of that is making sure that the AI that we're using can and should be trusted. And so, we work with three main constituencies to effectuate that goal. We work with companies. A lot of our work is helping companies understand: you're now an AI company because you're using it in pivotal functions. And so, what does it mean to be a responsible AI actor?

00:05:58:18 We work with lawyers. My personal bias, as a lawyer, is that lawyers have a key role. This is what we do: we issue-spot, we institute frameworks and governance to mitigate harms, reduce liabilities and accelerate opportunities. And the third constituency is policymakers, which has really changed over the last five years, from more of a basic introduction to the technology to digging in now and understanding what it means to be a responsible AI actor and what the policymaker's role is in that respect.

00:06:29:20 The other role that I am honored to have is chair of the National AI Advisory Committee. To be clear, I'm not here in the capacity of representing NAIAC today; I'm here in a personal capacity and representing EqualAI. That is a really special committee because it is 26 experts from across industry, academia and civil society who are working together to fulfill our mandate.

00:06:52:26 And that's advising the President and the White House on AI policy.

JOE ATKINSON:

00:06:56:18 We see immense opportunity for Gen AI to transform business and the way we work. But we have all been using this word trust, and we use responsibility; these are very powerful concepts in any setting, but particularly powerful when we start talking about technology, as Gen AI has splashed onto the front page of all the newspapers and into media in a way that has really captured the public's eye. Miriam, how do you define responsible AI, and what does it mean to really drive AI in a trusted way?

MIRIAM VOGEL:

00:07:26:18 That's a great and important question, because too often we throw around these terms without ensuring that we're all on the same page and clear on what we're all aspiring to here, and why. Responsible AI is a concept that has really come to fruition in the past three or four years, I would say. There have been different iterations of it, but at the end of the day, where we've landed is responsible AI, because it requires us to think about trust.

00:07:53:12 It requires us to think about inclusivity; it requires us to think about efficiency and effectiveness. So those are, I would say, the key elements of what it means to be a responsible AI actor, or what it means for your AI to be responsible. And why is that important? I think we all in this room would agree that AI adoption is key for many reasons: it's key to our economic success, it's key to democratic success.

00:08:20:20 With a small d. It's key to ensuring that we realize the opportunities that AI can offer. We need adoption. We need inclusion. We need effectiveness. We can only do that if we have broader adoption, and we can only do that if we have broader representation in the creation and deployment of AI. And so, I love that we have this mandate that we all need to understand the AI economy, the new industrial revolution, I would say, that we are living through.

00:08:50:06 We don't all need to be computer scientists, but we need to have a basic understanding of this technology that is fueling our lives. And we need to be able to ensure that we know why we're using it and that we can trust it. Otherwise, we won't adopt it or use it or engage with it. And really, I think it breaks down into four key reasons why we tell companies that this is something they need to be squarely focused on.

00:09:13:06 The first is that it's an employee retention issue. If your employees don't trust that you're using AI responsibly, whether it be with them internally or externally with your consumers and clients, that really chips away at employee confidence and satisfaction. On the flip side, if they see that you're committed to embedding your values in your AI use, that helps build trust.

00:09:36:20 It helps them be proud of the services that they're providing, whether you're a traditional AI company or using AI in other types of functions, as most companies are. Second, it's about brand integrity. If people see that your AI cannot be trusted, or that people are excluded or harmed, how are you going to build back that trust? And then the third is the opportunity.

00:09:58:09 If you do this well, if you are more inclusive in who you're building your AI systems for and who can benefit from them, you have a broader consumer base. So, the upside is significant. If those three didn't get you, the fourth is litigation and liability. In addition to the upcoming regulations that we see coming around the globe, there are laws on the books currently that are applicable whether or not you're using AI.

00:10:21:27 So you are incurring liabilities if you're not mindful of the ways that it could be not inclusive, not responsible, not trustworthy, not lawful.

JOE ATKINSON:

00:10:32:00 I think those are four very compelling pillars of how to think about this. And I think it's just so important. The point that you make about reputation and trust, we all know how quickly that can be lost. But now we're talking about technology tools that can scale so quickly. They can impact many people, many organizations so quickly. The challenge in some ways is multiplied by the power of the technology.

00:10:53:26 Krishnan, I'm going to come back to you. When we spoke last time, we talked about the trust and success of AI depending on the inputs: do you have the right data? The recent advancements may have taken a long time to come to fruition, but from the public's perspective, and from a lot of executives' perspective, they feel like they just happened.

00:11:13:04 These recent advancements are pushing new boundaries for us in responsible AI; they are creating new challenges. How does an executive in the C-suite keep up with these advances?

RAMAYYA KRISHNAN:

00:11:23:17 That's a really important question, and one that, you know, we often hear from leaders about. I think the question of trust in AI systems is the capacity to really understand how these AI systems do on a variety of metrics of interest. Miriam articulated a number of them: efficiency, effectiveness, fairness, robustness, privacy, security. How do we take these high-level statements and convert them to implementable, actionable kinds of metrics?

00:11:57:26 So in other words, what are the standards we need to evaluate these AI models against? Because standards often incorporate our values.

00:12:07:26 If I only focus on efficiency, that's signaling something, versus saying I really care not only about efficiency but about all these other things that I mentioned: fairness, bias, robustness, effectiveness. In other words, a holistic evaluation measure, I think, goes a long way towards building that base of evidence, that base of information, upon which one could actually make decisions.
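
Krishnan's point about a holistic evaluation measure can be made concrete with a small sketch. The following is an illustrative toy scorecard, not any real benchmark: the metric names, scores, and weights are assumptions chosen to show how explicit weights turn high-level values into an actionable number.

```python
# Toy holistic evaluation: score a model on several dimensions rather than
# efficiency alone. All numbers are illustrative placeholders, e.g. outputs
# of internal test suites, not results from any real evaluation.

scores = {
    "efficiency": 0.91,
    "effectiveness": 0.84,
    "fairness": 0.72,
    "robustness": 0.65,
    "privacy": 0.88,
}

# The weights encode the organization's values explicitly; putting all the
# weight on efficiency would signal something very different.
weights = {
    "efficiency": 0.15,
    "effectiveness": 0.30,
    "fairness": 0.20,
    "robustness": 0.20,
    "privacy": 0.15,
}

overall = sum(scores[k] * weights[k] for k in scores)
print(f"Holistic score: {overall:.2f}")  # one number backed by a base of evidence
```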

00:12:30:25 Now, it turns out that a number of these new models that you referred to, the generative AI models, some of the largest models that are out there, are closed. They're only available over an API, such as the OpenAI set of GPT models, Google, [unintelligible]. And then there are these open-source models; some are smaller models, some are larger models.

00:12:52:25 It would be good to know. In fact, early work seems to indicate that these closed models may be even more accurate than the open models, and the closed models may be correlated with higher levels of robustness as well. So it's not necessarily the case that closed is bad and open is good. So, I think we need to build this base of evidence upon which these decisions can be taken.

00:13:16:08 Now, if you're a business executive, it's really important to be connected into an ecosystem to keep up with where the best practices are and with the kinds of information that are going to be relevant to the decisions you need to make, not just about foundation models, but about the entire pipeline that's going to be required to actually derive value from the use of this technology to solve really compelling and consequential business and societal problems.

JOE ATKINSON:

00:13:45:08 You know, Krishnan, take everything that you just shared and now put yourself in the shoes of some of our C-suite client executives for a moment. They're struggling to climb the learning curve of understanding open and closed models. They're struggling to understand the implications of broad bases of data that are driving models versus what we've been calling the micro models you and I talked about.

00:14:06:07 So I think a big part of what we continue to advise our clients, and hear from them, is that the learning curve is very steep. The need to lean into this learning curve, to take advantage of a podcast like this one, like the podcast that Miriam and her team run, that's a really important part of taking advantage of these models, but also making sure that as you take advantage of this technology, you're deploying it in a responsible way.

00:14:28:00 You also talked about standards. Miriam, I want to come to you on the standards point, because you made reference to the regulatory frameworks, and you talked about how existing law creates a lot of responsibility today. It's not a legal advice podcast, obviously, but there's more regulation coming. And again, I know you're here in your individual capacity; you're not speaking for the committee or any other role that you have. But just given your perspective on this space, what do you think is coming from a policy and framework perspective?

MIRIAM VOGEL:

00:14:53:10 In terms of new laws and regulations, I think we see a lot coming from the Hill in DC. We see the states more and more: you know, 17 bills on AI last session across the states. Several have already come up in Congress this year. Some are offering frameworks, like the elements that we've seen of what Senator Schumer's working on.

00:15:16:11 Others are looking at more specifics, or at government use of AI and frameworks. So I think in the U.S. we're seeing the detailed notice requirements for when you're using AI, such as in an interview; that has already been on the books in a few state laws for a few years now. In New York, we have the upcoming city bill that will require certain audit functions and other requirements when using AI in hiring in specific ways.

00:15:45:19 And then we've got the rest of the world. We have the EU AI regulations that are full speed ahead, and we've been seeing drafts in different iterations with an end-of-year expectation for passage. And across the globe we've seen many different models and examples of how different countries are approaching this, let alone the international bodies. We've seen the U.N. take an unprecedented stance on AI policy and best practices.

00:16:11:27 So in some ways, it'll be a question of who acts first. Will whoever acts first set the playing field, will it be the Brussels effect again, or will some of the international bodies create consensus, which will hopefully bring in more values, more norms? This has to be an area where we have international understanding and international participation. AI has no borders, and really, operating with borders on the laws and requirements of AI is too complicated, as I'm sure you know.

00:16:39:06 I would love to hear your thoughts on that. I'm sure that creates a lot of headaches and opportunities for you.

JOE ATKINSON:

00:16:44:27 It's a really good way to put it. There's headaches and there's a ton of opportunity. And to your point, the same learning curve we just talked about in the C-suite, if you're a regulator, legislator or public policy leader, if you are staffing those folks, it's a massive learning curve to help make sure that this regulatory framework comes together.

00:17:01:07 To that point, maybe just one quick follow-up question, and then, Krishnan, I want to come to you with one last one before we do our wrap-up on the policy front. What is your advice to the regulators, and what is your advice to the policymakers? Again, recognizing that you're not giving advice in an official capacity, but just as somebody that's been in this space for so long, what should they be doing as they think about these frameworks?

MIRIAM VOGEL:

00:17:21:17 I think they need to do something that's quite challenging but couldn't be more important. We need clarity, and we need it quickly. If it takes a few years to decide on what our expectations are, we've missed the boat. AI is being built and deployed at scale, and has been for a few years now, so it will be too hard a few years from now to implement regulations that have transparency requirements and accountability.

00:17:46:02 They need to be built in yesterday. So, I think first and foremost we need clarity on expectations. We also really need clarity on the liabilities. As I mentioned, there are many laws on the books currently, and many of those impact liabilities. We've been fortunate that several regulatory agencies have been clear about their expectation that their purview includes decisions whether or not they have been impacted by AI.

00:18:10:19 It doesn't matter if AI has informed or guided your decision; you are equally liable. We've seen the FTC, for several years now, trying to clarify to anyone acting in this space that if you're making a claim about your AI, it better be accurate; otherwise, it falls under their purview. If you are not providing due process requirements or explanations under their legal purview, with the ECOA, etc., you can be liable.

00:18:34:26 We've seen the EEOC and DOJ issue historic joint statements two years in a row now, first saying make sure that your AI is compliant with civil rights laws, including the Americans with Disabilities Act. Too many people aren't mindful of the ways that they need to make sure AI not only creates opportunities for inclusion but doesn't exclude people based on their disability, because it's wrong and because it's illegal.

00:18:58:27 So I would say: clarify the expectations, make sure we are consistent in our terms, and really make sure that if you're being a good, responsible actor in this space, that's the norm and the expectation, not a penalty. And better yet, to the extent that they can, make it a competitive advantage to be a responsible AI actor.

JOE ATKINSON:

00:19:20:11 I love that concept of responsible AI giving you a competitive advantage. And it's interesting, as you give that perspective on advice to regulators and lawmakers, it's very similar to the advice that we're giving clients and C-suite executives, which is: understand your plan with clarity, understand what you're trying to achieve, and make sure you understand how your teams are going about achieving it.

00:19:38:21 What data are they using? Is it inclusive? Is the output inclusive? I loved your point on making sure that we're inclusive of those with disabilities as well.

MIRIAM VOGEL:

00:19:46:00 And I'm so glad you're saying that. We like to call that good AI hygiene at EqualAI. It's making sure, like with cyber, like with every other kind of hygiene, that you're clear, that you're communicating across your enterprise, and that there's consistency, accountability and inclusivity.

JOE ATKINSON:

00:20:01:20 And you mentioned cyber, so I'm going to come around to Krishnan on cyber, because cyber is a hot topic that we could certainly do a podcast on all by itself. But one of the discussions that has been so prominent in the generative AI space specifically is an issue that you raised earlier, Krishnan, on the difference between open tools and what I'll characterize as not only closed models but closed tools.

00:20:21:26 Most of our clients have been concerned about whether they're using open tools, or tools that are available on the internet. The big worry, of course, is that they're submitting data, offering prompts, and getting outputs that are publicly available. They may not be intuitively publicly available, but they are generating information that can help inform the models. We're experimenting with what we call ChatPwC, and here's how I always characterize it:

00:20:45:25 If you want to do work at PwC in generative AI, you work in ChatPwC; if you want to play, you play in ChatGPT or the other public models. But work needs to be done in secure spaces. How do you think that's going to play out? How do you see that playing out from the perspective of the technology policies that companies implement?

RAMAYYA KRISHNAN:

00:21:02:19 Joe, I think this falls more broadly under the response I was giving on transparency, reliability and setting up an AI pipeline or an AI system that you can rely on. Specifically, I want to first respond to the question that you posed and then talk more broadly. So, with regard to ChatGPT: if you use ChatGPT, you're effectively using a shared resource.

00:21:26:23 So I'm using it, you're using it, Miriam's using it; we are all using a shared resource, and the submissions that you make to ChatGPT, the questions that you ask or the documents that you're asking it to summarize, are all potentially viewable by OpenAI staff and potentially usable in further refining and training their models. So, I think the first question that firms have to ask themselves is: do they want to use a shared resource, or do they want a dedicated resource?

00:22:00:22 So I think the opportunity to create a dedicated instance of GPT for your own firm is often based on whether you have the volume in terms of number of tokens per day; usually 450 million to 500 million tokens a day is what's used as a measure for saying it makes sense to have a dedicated instance.

00:22:23:22 So it's still behind an API, but it's an instance that's dedicated to you and your applications. And the data that you are using, you could actually store in what's called a single-tenant model on the cloud, thereby allowing for private data from the firm, private communication of that data over a private network to the API, and engagement with that dedicated instance, thereby allowing for, quote unquote, a secure private interaction of the sort that I think you're referring to when you talk about ChatPwC.

00:23:01:23 Now, that makes sense for firms that have the volume and the capacity to incur the costs to set this kind of thing up. And I think it's important that that be done, both to provide trust and assurance to the firm that their data are being used solely for the purposes that they intend.

00:23:24:22 But at the same time, you don't want this to be a barrier to small and medium-sized enterprises wanting the same kind of functionality but being unable to afford it. So, I think there's an inclusion point here that we need to talk about as we think about, you know, this architecture that I just laid out.
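
To make the shared-versus-dedicated distinction concrete, here is a minimal sketch of the architecture Krishnan describes, assuming a hypothetical OpenAI-compatible dedicated instance reachable only over a firm's private network. The dedicated endpoint URL, environment variable name, and model name are illustrative assumptions, not references to any real deployment.

```python
import os
import requests

# Shared resource: the public endpoint that everyone's prompts flow through.
SHARED_ENDPOINT = "https://api.openai.com/v1/chat/completions"

# Dedicated resource: a hypothetical single-tenant instance behind the firm's
# private network (this URL is illustrative, not a real service).
DEDICATED_ENDPOINT = "https://llm.internal.example-firm.com/v1/chat/completions"


def ask(endpoint: str, api_key: str, prompt: str) -> str:
    """Send one chat request to an OpenAI-compatible endpoint."""
    response = requests.post(
        endpoint,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-4",  # model name is illustrative
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Routing work to the dedicated instance keeps firm data off the shared
    # resource, where submissions may be viewable by staff and reused in training.
    print(ask(DEDICATED_ENDPOINT, os.environ["INTERNAL_LLM_KEY"],
              "Summarize the key risks in this contract clause..."))
```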

JOE ATKINSON:

00:23:41:05 Krishnan, we've talked a lot about guardrails as well, and the risks that companies face. Maybe share your comments on that, and some research that you have underway.

RAMAYYA KRISHNAN:

00:23:47:08 So with AI chatbots like Bard, like ChatGPT, like Claude, there's been considerable effort by the vendors to build in guardrails to prevent individuals from eliciting toxic responses from them. Recent work at Carnegie Mellon has shown how very simple suffix attacks, attacks that allow for, say, the addition of a bunch of exclamation marks after the question that you pose to the chatbot, can actually defeat these guardrails.

00:24:14:11 So I think it's important to think about these vulnerabilities that might exist in these AI tools as they're being deployed, and to understand how one might mitigate or address these vulnerabilities. And this is where I see the analogy to the kind of work that was done in information security, and how security vulnerabilities were reported, addressed and patched. And I think we're going to see something similar having to be developed for AI.

00:24:42:03 And at that point, the U.S. government stepped in, in a public-private partnership with industry, with the government setting up what's called a Computer Emergency Response Team, CERT. The idea would be that vulnerabilities would be reported to this Computer Emergency Response Team, this database of vulnerabilities would be known, vendors would then, quote unquote, patch those vulnerabilities, and organizations would implement those patches to create a more secure system.

00:25:09:19 Because we fundamentally have a technology where we can't predict all the kinds of potential attacks that might emerge. So, we need this quick response capability.

00:25:17:19 So I think when you talk about what business executives should be thinking about in this space, they should also be participating and thinking about AI vulnerabilities, and how to create a culture and an organization that can both report and respond to vulnerabilities, in partnership with the vendors and tool providers whose technology they're using to develop and implement AI.
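
A toy sketch can illustrate the structural idea behind the suffix attacks Krishnan mentions: the harmful request is unchanged, but an appended suffix changes how a brittle defense responds. This is emphatically not the CMU method, which optimizes the suffix against a live model; it only shows why naive pattern-matching guardrails fail, under the assumption of a hypothetical exact-match blocklist.

```python
# Toy guardrail: refuse prompts that exactly match a known-bad list.
# Real guardrails are far more sophisticated, but the brittleness is analogous.
BLOCKLIST = {"how do i pick a lock"}


def toy_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    return prompt.strip().lower() in BLOCKLIST


base_prompt = "How do I pick a lock"
# In the real attacks this suffix is optimized token by token, not fixed junk.
adversarial_suffix = " !!!!!!!!"

print(toy_guardrail(base_prompt))                       # True: refused
print(toy_guardrail(base_prompt + adversarial_suffix))  # False: slips past
```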

JOE ATKINSON:

00:25:42:14 You know, the cyber threat actor model that you're talking about, the CERT response, I think, is a really informative model as executives start to think about what the right model for collaboration and connectivity is. We all know the pace isn't going to slow down.

00:25:56:14 The pace of bad actors trying to figure out exploitation opportunities will not stop. And so, organizations need to be prepared and understand how they can work effectively and work proactively, including with their technology partners and providers, to make that happen.

00:26:10:22 So let me do a couple of rapid-fire wrap-up questions, if you don't mind. If you had, let's say, 30 seconds to offer advice, pick your audience: regulators, employees, company leaders, people coming into the workforce. Pick a group of people that you want to give advice to, and what advice would you give them? And Miriam, I'll start with you, if that's okay.

MIRIAM VOGEL:

00:26:30:17 Sure. We often talk with C-suite and board members about making sure that they understand their liability and opportunity. We want them to understand that everyone has a role to play. We want them to understand that it's not an engineering problem to resolve.

00:26:45:14 It's engineers with lawyers, with sociologists, with ethicists. The more people you're bringing in that are closer to the product, the safer you will be and the more you can optimize on AI opportunities. I would share with them what we talk about in our badge program for senior executives, but really, it's all about practicing good AI hygiene and making sure that you're clear on your framework.

JOE ATKINSON:

00:27:06:21 That was really good content and advice for people. So, Krishnan, we'll call it rapid-fire with variability. What advice do you have, and who would you target your advice to?

RAMAYYA KRISHNAN:

00:27:16:24 The NIST AI Risk Management Framework, the AI RMF, is an important tool in helping people operationalize AI. I think that's going to be important for business executives to really pay attention to, especially as tools emerge from the AI RMF, and then on the education side. So that's one group.

00:27:33:24 The second, I think, is this issue of the need for greater transparency. I think I've spoken somewhat extensively about this on the podcast: building the capability to evaluate, characterize and measure, and to help engender trust through reliability assessments, I think is going to be critical.

00:27:50:23 And I think it's really important to understand how that plays out, both for what I call closed as well as open models, and for small versus large. And then the third piece, I think, is the workforce. How does one think about building the capability, both nationally and within firms, to quickly upskill, reskill, reposition and redeploy workers, and give them the opportunity to acquire skills in ways that are most meaningful to them and to the organizations they work for, so they can really add value?

JOE ATKINSON:

00:28:21:24 I love your last point. I think it's a great place for us to wrap. One of the things that we're very proud of at PwC is that we have an opportunity to help a lot of people start their careers every year. And those individuals coming into the workforce today are looking at developments in generative AI, and they're asking the question: is this going to make my career better and more valuable?

00:28:40:03 I love the opportunity that generative AI, and AI broadly, brings with this technology advancement. And my advice to students and people starting their careers, and frankly at any step of their career journey, is: lean in. Understand it.

00:28:51:03 I thought, Miriam, your point about breadth is so spot on. Learn from others, engage with others, get engaged in these topics, spend the time to understand, and you can not only advance yourself, you can advance the organizations that you're part of.

00:29:05:28 Miriam and Krishnan, this is an incredibly important topic, and it was great speaking with you both about the role that responsible AI will have in powering the future we all share.

RAMAYYA KRISHNAN:

00:29:16:04 Thank you, Joe.

MIRIAM VOGEL:

00:29:16:24 Thanks, Joe.

JOE ATKINSON:

00:29:17:29 And to our listeners, thank you for joining us on this episode of PwC Pulse. We'd love to hear your thoughts, comments and feedback on today's conversation. You can leave a review on your favorite podcast platform.

ANNOUNCER:

00:29:30:09 All of the views expressed by Miriam Vogel and Professor Ramayya Krishnan on this podcast are their own and not those of any national committee they serve on or of the United States government.

ANNOUNCER:

00:29:41:12 Thank you for joining us on the PwC Pulse podcast. Subscribe to PwC Pulse wherever you listen to your podcasts, or visit PwC.com/Pulsepodcast to hear our next episodes.

ANNOUNCER:

00:29:55:05 Copyright 2023 PwC. All rights reserved. PwC refers to the PwC Network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details.

00:30:14:00 This podcast is for general information purposes only and should not be used as a substitute for consultation with professional advisors.


Contact us

J.C. Lapierre


US Sustainability Transformation & Operations Leader, PwC US
