Generative AI: What it really means for business

April 19, 2023

Companies have been using artificial intelligence (AI) technology for years. But as AI rapidly evolves, it has the power to reshape industries and transform today’s businesses. How can AI improve your business? What is generative AI? What are the risks to consider?

In this episode, Joe Atkinson, Global Chief AI Officer, is joined by Ramayya Krishnan, Dean of Heinz College of Information Systems and Public Policy at Carnegie Mellon University, to explore the evolution of AI, how it can unlock opportunities and the critical role ethics and responsibility will play in its success.

Find out more about how PwC can make generative AI work for you.



You may also be interested in listening to our PwC Pulse podcast episode on Responsible AI: Building trust, shaping policy.


About the podcast participants

Joe Atkinson is the Global Chief AI Officer at PwC. Prior to that, he was a member of PwC's US Leadership Team, responsible for leading the firm through the next wave of digital transformation. This included accelerating growth in go-to-market products and digital solutions, as well as in firmwide technology capabilities.

Previously serving as the Chief Digital Officer, Joe defined and executed the vision to digitally enable the firm and better leverage technology and talent to bring greater value, and a better experience, to the firm's people and clients. Before that, Joe led PwC's Technology, Media and Telecommunications consulting business for the US, Japan and China, and he has built a reputation as a trusted, pragmatic and thoughtful advisor to clients as they navigate accelerating technology change and increasing regulatory complexity.

Ramayya Krishnan is the W. W. Cooper and Ruth F. Cooper Professor of Management Science and Information Systems at the H. John Heinz III College and the Department of Engineering and Public Policy at Carnegie Mellon University. A faculty member at CMU since 1988, Krishnan was appointed Dean when the Heinz College was created in 2008. He was reappointed upon the completion of his first term as Dean in 2014.

Krishnan was educated at the Indian Institute of Technology and the University of Texas at Austin. He has a bachelor's degree in mechanical engineering, a master's degree in industrial engineering and operations research, and a PhD in management science and information systems. He has served as Department Editor for Information Systems at Management Science, the premier journal of the operations research and management science community. Krishnan is an INFORMS Fellow, a member of the World Economic Forum's Global Agenda Council on Data-Driven Development, and a former President of the INFORMS Information Systems Society and the INFORMS Computing Society.

Krishnan’s government service includes his current work on the IT and Services Advisory Board chaired by Gov. Tom Wolf of the State of Pennsylvania. He has served as an Information Technology and Data Science expert member of multiple US State Department delegations and briefed ICT ministers of ASEAN in October 2014 on Big Data technology and policy.


Episode transcript


ANNOUNCER:

00:00:01:00 Welcome to PwC Pulse, a podcast to provide insights to help you solve today's business challenges.

JOE ATKINSON:

00:00:09:19 Hi, I'm Joe Atkinson, Vice Chair and Chief Products and Technology Officer at PwC. I also serve as the firm's Co-Chair for our Executive AI Strategy Council, where we are exploring use cases for our clients and helping them figure out how to apply these emerging technologies to help transform their business.

00:00:26:19 In fact, today we're out here at PwC’s Emerging Tech Exchange talking about these emerging technologies, including artificial intelligence.

00:00:34:10 And that's our topic today: how to put it to work so that it really delivers value, what regulators might do next, how the technology itself might evolve, and more.

00:00:44:10 I'm excited to have with me a leading expert on AI, Ramayya Krishnan. Krishnan is the dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University.

00:00:53:00 He also serves as Faculty Director of the Block Center for Technology and Society at CMU. And additionally, he's a member of the National AI Advisory Committee to the President of the United States.

00:01:04:10 I've had the great pleasure of working with him in our ongoing collaboration between PwC and CMU, the Digital Transformation and Innovation Center. I know he has a lot of insight to share. I'm very excited to have him here with us today. Krishnan, welcome to our podcast.

RAMAYYA KRISHNAN:

00:01:18:03 Thank you, Joe. Thank you for this opportunity to chat with you about AI.

JOE ATKINSON:

00:01:21:18 Well, we're going to start with some easy stuff, Krishnan. We like to get to know you a little before we get into the topic of the podcast. So let me share a few rapid-fire questions with you, and hopefully the audience will get to know you a little better. Let's start with: what is your favorite city in the world to visit?

RAMAYYA KRISHNAN:

00:01:37:08 There's a Roman Holiday answer to this one, and I've been to many cities. I like them all, but we are in San Francisco. I like San Francisco a lot.

JOE ATKINSON:

00:01:43:18 Excellent. Excellent. It is a great city.

RAMAYYA KRISHNAN:

00:01:45:18 Yeah.

JOE ATKINSON:

00:01:46:18 And how about a book that you recommend?

RAMAYYA KRISHNAN:

00:01:48:04 The Code Breaker. This is a book that I just recently read: Walter Isaacson's book about the race to create CRISPR and the Nobel Prize awarded to Professor Doudna at Berkeley.

JOE ATKINSON:

00:01:58:16 Excellent. I'm going to put that one on my list, too. And how about an inspirational figure in your life or your work?

RAMAYYA KRISHNAN:

00:02:03:22 My parents, my dad and mom. They've been really great role models.

JOE ATKINSON:

00:02:08:23 I love that, and I appreciate you sharing that with us, Krishnan. So now let's jump into our topic for the day. You started your career studying mechanical engineering, and now here we are recognizing, with well-deserved praise, your expertise in AI. That's quite a journey. How did that all come about? And maybe tell us a little about that passion for AI that I know you have.

RAMAYYA KRISHNAN:

00:02:31:15 It was actually a serendipitous journey in many ways. I mean, I began indeed as a mechanical engineer, but when I studied mechanical engineering, I was introduced to operations research and optimization.

00:02:44:10 And as you know, optimization is one of the key pillars of AI. When I was an undergraduate, I wasn't looking that far ahead.

00:02:51:15 But after I completed my mechanical engineering degree, I went on to do a master's at the University of Texas, go Longhorns, I should say, in industrial engineering and operations research, where I also got a grounding in statistics, the other pillar of AI. I was building towards AI, I guess.

00:03:07:03 And then I did my Ph.D., which was actually in AI and optimization. It was a different generation of AI, the expert systems generation. But as the field played out, this grounding in optimization, statistics and computing, which was part of my Ph.D. work, really positioned me to contribute to, and learn from, applied AI.

00:03:33:00 And it certainly has been something that I've been passionate about over these many years.

JOE ATKINSON:

00:03:38:19 Well, I think we're going to put that grounding and that journey to good use today for the audience. Let's talk about AI and the complexity there. I think we can all agree that there's complexity to the topic of AI: not just the tech, but the surrounding procedures, skills and risk management that companies are struggling with.

00:03:55:02 In some cases, AI has been around at some companies for a long time, and some of them are getting it right. Others still have many opportunities to improve how they're using it and to think more creatively and innovatively about applying it. They may have AI work going on in one or two areas but haven't achieved any meaningful scale.

00:04:11:19 So what are your thoughts on the common mistakes, and maybe some of the barriers, that companies face in scaling these powerful technologies as they emerge?

RAMAYYA KRISHNAN:

00:04:20:07 That's a great question, Joe. First and foremost, one mistake I see is over-indexing on what this technology is capable of. Companies think it's something that can solve all kinds of problems. That's one common error. The other is failing to recognize that AI, even as it's gotten powerful and rapidly developed, is but one part of a puzzle, one piece in a collection of technologies that must be brought together.

00:04:48:13 And this is a collection of technologies that, taken together, are really going to solve a business problem or a societal problem. So it's not AI alone that's going to do it. Just to pick an example: think of how physicians engage and interact with patients to make a diagnosis. There are a number of things they do to assess you, determining how you look, how you feel, and so on.

00:05:16:13 Only a small part of that is actually collecting the data to then get a recommendation as to what the issue might be. You could replicate that in a number of contexts, such as predicting the likelihood of risk with regard to students. I mean, I'm a dean.

00:05:33:14 How do you assign tutors to students who are struggling, or who are predicted to struggle, and address things of that nature? Those small elements are really important, but they're part of a much larger system, and I think we should focus on the system rather than just the little piece that is the AI.

JOE ATKINSON:

00:05:50:06 It's such a great point, Krishnan, and you see it across so many different technology and process challenges. Having talked with clients for so many years, everybody wants to simplify everything, right?

00:06:02:06 They want to break it down into pieces and then look for that one solution. And it's typically very hard to find, and usually not very effective in the absence of the system, as you describe it.

00:06:09:20 And actually, one great example of that is probably generative AI. There's a lot of buzz around generative AI, and it's a very powerful capability, but you could argue that some people are looking at it and concluding it will solve a much wider range of problems than it's capable of solving, or that it will solve them alone.

00:06:25:20 So what are your thoughts on how the practical, emerging use of generative AI is going to impact business? And what advice do you have for companies that are thinking about it?

RAMAYYA KRISHNAN:

00:06:35:02 Again, really good question. While there's been a lot of excitement about generative AI in the last several months, it's actually been around for close to a decade in terms of research. Just to pick one example, large language models have been a focus of research and application.

00:06:53:02 And one thing that's been really remarkable is how the size of these language models has grown, to where you're now training them on 500 gigabytes of data versus 40 gigabytes, and the size of the neural networks has grown with it.

00:07:06:23 What's really interesting is that, at that scale, a model that is effectively just predicting the next token in a sequence is demonstrating a whole range of capabilities, from helping write code to summarizing text; very different kinds of applications.

00:07:29:23 So as one thinks about generative AI: at the Heinz College at CMU, we just ran a micro mini-course on it. We tend to do this when we want to take an emerging technology and think about both the upside, the opportunities, and the consequences, including the societal consequences.

00:07:45:20 And initially, you're going to find opportunities in some verticals or sectors more than others. For instance, where you have people communicating with customers or writing contracts, you want the style to be uniform and consistent across a firm.

00:08:04:20 This is something this technology can help with: augmenting, not necessarily substituting for, a human analyst, so they can do that task really well.

00:08:13:10 There are other such examples I could give, of creative work, of knowledge work, and I think those opportunities are going to continue to increase.

00:08:23:20 Now, as one thinks about these applications, one also needs to think not just in terms of the quantity or the scale at which you could do this, but about the quality of what you're going to produce. How good is it? So take the example of writing code.

00:08:35:18 There are really interesting products and services here. To take one example, the data from productivity studies demonstrate an increase in productivity on the order of 30% to 40%. The interesting question is whether the quality of the code being written is, on average, better with this kind of technology.

00:09:00:18 Because this code is also going to feed into the training corpus of these large language models over time. Are you going to get regression toward the mean, or are you going to get not only higher productivity but also improved code quality?

00:09:16:01 I don't think we have a definitive answer, but I think it focuses attention on what firms should be thinking about when deploying the technology. They have to deploy it with intentional care, asking the question: how should this technology be deployed in ways that not only improve productivity but also don't give up on that quality dimension I mentioned?

JOE ATKINSON:

00:09:37:16 It's a really powerful point about where the combination of that human factor and the technology comes together, and whether it improves outcomes over time. Does it actually improve quality over time, or does it just standardize along a less effective average, perhaps?

00:09:54:16 Some people have characterized generative AI in particular as a very sophisticated guessing machine. With the depth of your knowledge, maybe react to that a bit: how close is that to the reality of the prediction inside generative AI, and how close is it to the hype some are putting around these topics?

RAMAYYA KRISHNAN:

00:10:12:16 Well, generative AI covers a number of different types of technologies. I think we're talking here about large language models in particular, which are built on a transformer network.

00:10:25:16 That's a deep learning neural network which, in effect, is trained on a very large corpus and then has an adaptive tuning component. The first part, the transformer part, is done without any kind of supervised labels; it's called self-supervised learning.

00:10:42:15 And then the second part is the adaptive tuning, with human input, using reinforcement learning to get the kinds of outcomes you want the large language model to produce.

00:10:53:15 Now, to your point about whether it's guessing: at one level, it is a stochastic system that's working with the training data it has, and it's predicting the next token, which could be the next letter or the next word in a sequence.

00:11:08:18 That's what it's being optimized for. But that said, the most recent results, from as recently as a few months ago, demonstrate that at this size, the point I made about scaling, with a large number of parameters and large corpora of data, we are seeing some emergent behavior: the model is able not just to repeat or replicate, but to recombine content in new ways.

00:11:37:07 And so indeed, it is using this predictive capability over the next token in the sequence.

00:11:44:07 But the recent results demonstrate some really interesting properties as the scale grows. And this is an interesting point: very few companies have the access to the data and/or the compute power to build these models.

00:11:57:08 So the leading technology companies are going to have versions of these large language models, and we'll be able to see, as they grow in size and scale, what kinds of properties they're actually able to demonstrate in practice.
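
For readers who want to see the mechanics, here is a minimal Python sketch of next-token prediction as Krishnan describes it. Everything in it (the bigram table, the vocabulary, the probabilities) is a hypothetical toy for illustration; a real large language model learns its next-token distributions with a transformer network trained on very large corpora.

```python
# Toy illustration of next-token prediction. The probability table is
# hand-written and hypothetical; a real LLM learns these distributions
# from massive corpora with a transformer, then samples from them.
import random

# P(next token | current token) for a tiny made-up vocabulary.
bigram_probs = {
    "the": {"model": 0.5, "data": 0.3, "code": 0.2},
    "model": {"predicts": 0.6, "learns": 0.4},
    "predicts": {"the": 0.7, "tokens": 0.3},
    "learns": {"the": 1.0},
}

def sample_next(token: str) -> str:
    """Draw the next token from the conditional distribution."""
    dist = bigram_probs[token]
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(start: str, max_len: int = 8) -> str:
    """Generate text by repeatedly sampling the next token."""
    tokens = [start]
    while len(tokens) < max_len and tokens[-1] in bigram_probs:
        tokens.append(sample_next(tokens[-1]))
    return " ".join(tokens)

print(generate("the"))  # e.g. "the model predicts the data"
```

Sampling rather than always taking the most likely token is what makes the system stochastic: run it twice and you can get different, equally plausible continuations. The emergent behavior Krishnan mentions shows up only when this same mechanism is scaled up by many orders of magnitude.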

JOE ATKINSON:

00:12:10:23 It's a great example around the scale of the deployment as well as, to your point, the quality and volume of the data feeding the models. That leads to all kinds of questions about responsible use.

00:12:23:08 And I know that's a topic that's near and dear to us, but maybe for the audience, talk a little bit about the risks of AI and how they come into play in the way we apply these models, develop these models and continue to use these capabilities.

RAMAYYA KRISHNAN:

00:12:37:03 Since we're talking about generative AI, we can start there and then segue to what today we think of as the more traditional machine learning based AI, though as you know, AI itself is an umbrella of technologies.

00:12:50:04 Oftentimes we tend to anchor on machine learning among that umbrella of technologies. With generative AI, and I'd say especially large language models, one of the issues I'm perhaps most concerned about is the question of trust.

00:13:05:07 When you think about what it effectively enables us to do, it allows us to produce content that looks very professional but may not be founded in fact. It might actually be disseminating incorrect information, either unintentionally, which is misinformation, or deliberately by malicious actors, which is disinformation.

00:13:28:08 And it's challenging to determine whether content was written by an AI bot. For instance, one of the large companies released a classifier to try to determine whether content was generated by an AI bot or not.

00:13:45:09 Its true positive rates are in the twenties, 25%, 26%, and its false positive rates are high too, meaning that you, Joe, wrote this content but it thought it was produced by an AI bot. I think the technology to do that classification will potentially get better.
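
To make the cited rates concrete, here is a short Python sketch of how a detector's true and false positive rates are measured. The weak_detector below is a hypothetical stand-in that flags text at random, not any company's actual classifier; it simply mimics the roughly 25% figures mentioned above.

```python
# Measuring an AI-text detector. The detector here is a hypothetical
# stand-in that flags text at random; real classifiers are models, but
# their true/false positive rates are computed exactly this way.
import random

def detection_rates(examples, detector):
    """examples: (text, is_ai_written) pairs with ground-truth labels.
    detector: function text -> True if flagged as AI-written.
    Returns (true_positive_rate, false_positive_rate)."""
    tp = fp = n_ai = n_human = 0
    for text, is_ai in examples:
        flagged = detector(text)
        if is_ai:
            n_ai += 1
            tp += flagged
        else:
            n_human += 1
            fp += flagged
    return tp / n_ai, fp / n_human

def weak_detector(text: str) -> bool:
    # Flags ~25% of all text, regardless of who actually wrote it.
    return random.random() < 0.25

examples = ([(f"ai text {i}", True) for i in range(1000)]
            + [(f"human text {i}", False) for i in range(1000)])
tpr, fpr = detection_rates(examples, weak_detector)
print(f"TPR ~ {tpr:.0%}, FPR ~ {fpr:.0%}")  # both near 25%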

00:14:04:04 So this issue of authorship, of who wrote this, is one issue. What about the content itself? Can we identify whether the content is factually correct? That's probably checkable, but are there ways in which content could be created to get around it? This goes to the point about adaptive tuning.

00:14:23:01 Typically, the model is tuned to remove bias and the toxic things the generative AI LLM would otherwise say, but I could also use that same technique to get it to produce content that manipulates people, that nudges people to do things that, societally, we don't think are the right things to do.

00:14:40:08 So the same adaptive tuning technology, that second phase, could be used in ways that are societally detrimental. I think a big policy question has to be how we address these trust questions. Take an analogy: I was drinking a glass of wine yesterday. Do you know where that wine came from? Is it from California?

00:15:03:12 Is it from Italy? You have that labeling, right? So should there be some kind of labeling for content? If 20% of the content comes from an AI bot, do we want to say it's from an AI bot, or does it have to be over some threshold percentage before you label it that way?

00:15:20:01 Or do we need other mechanisms? When we choose a restaurant, we look at multiple raters. Should we have the same for content online? Because I think this changes the economics of producing content.

00:15:34:00 One concern is that generative AI could produce content at such scale that it generates so much noise that it becomes very hard for people to discern what's signal and what's noise. Now, that was a long answer just on generative AI. There's a lot more we could talk about on responsible AI.

JOE ATKINSON:

00:15:51:03 But it was such a powerful answer, honestly, when you look at the scale of the challenge we all collectively face, in companies and markets as well as in society. Let me dig into the societal question for a moment, if you don't mind, because given the role you play in public policy, no doubt you have a perspective. Where do you think regulation goes over time, given the complexity you just articulated?

RAMAYYA KRISHNAN:

00:16:14:09 I think this is an important policy question. At the same time, we want to balance public policy in ways that harness and encourage innovation while protecting society from different kinds of potential harm, so we need to think about both.

00:16:34:01 Look at what NIST, the National Institute of Standards and Technology, has done: it has produced an AI Risk Management Framework. It's not required, it's not regulation, it's not law, but it's recommended practice, with the objective of getting firms to think about how they measure things like accuracy.

00:16:54:08 Issues related to their implementation of ethics; how do they think about robustness? Because a lot of AI is very brittle: when you move away from the training data, it starts producing answers that are not very stable. Is it explainable? Is it interpretable?

00:17:12:10 All of these are nice phrases to use, but how do you actually measure them and then set, perhaps, firm-level standards for deployment? My sense is that measurement is a first step. Once we think about trade between the US and Europe, or the US and Asia, inevitably there will be requirements, say from the EU, on what it would take for American firms to make their AI products and services available.

00:17:41:06 Some of these questions, like trust, which I just talked about, might require regulation. But even before we get to regulation, there are the other points I made: when you deploy AI, how do you do that responsibly, with an understanding that runs from problem formulation all the way down to AI creation, deployment and monitoring?

00:18:05:01 Because I think the traditional software engineering approaches of validating and verifying software need to be expanded, since the AI continues to evolve and grow. So there are things we need to do thoughtfully on the methods side; we need to think carefully about measurement.

00:18:21:01 We need to think about how you measure risks of various sorts, but also the benefits, and then how you govern these systems in ways that let you reason about tradeoffs. Suppose I asked you: are you willing to give up a little bit of accuracy to get a lot more equity?

00:18:39:04 First, do we have a framework for doing that? That's some of the work we are doing at CMU. But then, who's empowered? Even if you have a framework like this, who within the firm is empowered to actually make those tradeoffs?

00:18:52:01 So there are a number of dimensions to your question about responsible use of AI that will involve both firm-level policy and public policy. The two, I think, will have to work in concert to get the most out of this technology in the most responsible way.
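
To ground the measurement point, here is a minimal Python sketch of how a firm might score a model on both accuracy and an equity metric and make the tradeoff explicit. The demographic-parity gap and the single equity_weight knob are simplifying assumptions for illustration, not a framework Krishnan or NIST prescribes.

```python
# Scoring a model on accuracy and equity, with the tradeoff made
# explicit. The equity metric (a demographic-parity gap) and the single
# weighting knob are illustrative simplifications, not a standard.

def accuracy(preds, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def equity_gap(preds, groups):
    """Gap in positive-prediction rates between groups A and B
    (0.0 means both groups receive positive predictions equally)."""
    def rate(g):
        in_g = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(in_g) / len(in_g)
    return abs(rate("A") - rate("B"))

def tradeoff_score(preds, labels, groups, equity_weight=0.5):
    """Higher is better. equity_weight is the governance decision:
    how much accuracy are we willing to give up for more equity?"""
    return accuracy(preds, labels) - equity_weight * equity_gap(preds, groups)

preds  = [1, 0, 1, 1, 0, 0]             # model's yes/no decisions
labels = [1, 0, 1, 0, 0, 1]             # ground truth
groups = ["A", "A", "A", "B", "B", "B"] # which group each case belongs to
print(tradeoff_score(preds, labels, groups))  # 4/6 accuracy - 0.5 * (1/3) gap = 0.5
```

The arithmetic is trivial; the governance question, as Krishnan notes, is who within the firm is empowered to set equity_weight.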

JOE ATKINSON:

00:19:07:18 Krishnan, that's actually a great place to wrap up. We've looked at both the risks and the opportunities, but also how much work there is to be done by regulators, company leaders, innovators, technologists and, of course, in the public policy arena, government.

00:19:22:05 And we know from history that there will be no shortage of regulatory policy, governance oversight and government oversight, and we think that as it comes together, it will create all kinds of opportunity for organizations to both adopt and adapt as the journey continues.

00:19:37:08 So, with an eye to the future, we'll close with what you're most excited about. What do you see as the largest promise of these technologies for people, society and business?

RAMAYYA KRISHNAN:

00:19:46:05 I'm an optimist, in the sense that I think this technology has the opportunity to really improve people's lives and provide pathways to economic opportunity, giving people skills that allow them to find opportunities in newly growing sectors of the economy.

00:20:08:01 So, for instance, as you know, there's been a very significant push to bring critical technologies back to the United States: the CHIPS and Science Act, the Inflation Reduction Act. Where are we going to find workers with the right skills to contribute to the new fab and foundry outside of Columbus, Ohio, or the new EV companies coming up outside of Syracuse, New York?

00:20:28:08 I think there are opportunities for workers, not all of whom may be college educated, to acquire skills using technologies like AI. So AI in education, AI in health, in ways that would allow them to have not only healthier lives but also more prosperous lives, by finding pathways to economic opportunity they otherwise would not have had.

00:20:50:22 But to do all of that, to get those benefits, we have to do this in a responsible way.

JOE ATKINSON:

00:20:55:04 I love that. You know, you and I share that passion for upskilling people and helping them find those pathways to unlock new opportunities. And I, like you, see a ton of opportunity for these tools to help people do that and unlock new economic opportunity for lots of people. So we'll end on that note: a good, positive vision of the future and of AI.

00:21:12:11 Krishnan, thank you so much for joining us today. It has been absolutely a pleasure to talk with you about this fast-growing technology. I know you gave me plenty to think about, and I'm sure you gave our listeners just as much.

RAMAYYA KRISHNAN:

00:21:24:02 Thank you so much, Joe, for the opportunity. It's always fun talking to you.

JOE ATKINSON:

00:21:27:07 Same here. And to our listeners, thank you for joining us on this episode of PwC Pulse. We'd love to hear your thoughts about today's conversation. Feel free to leave a review or your comments on your favorite podcast platform.

ANNOUNCER:

00:21:40:01 All of the views expressed by Professor Ramayya Krishnan on this podcast are his own and not those of any national committee he serves on or of the United States government. Thank you for joining us on the PwC Pulse podcast. Subscribe to PwC Pulse wherever you listen to podcasts, or visit PwC.com/Pulsepodcast to hear our next episodes.

ANNOUNCER:

00:22:03:09 This podcast is brought to you by PwC. All rights reserved. PwC refers to the U.S. member firm or one of its subsidiaries or affiliates, and may sometimes refer to the PwC network. Each member firm is a separate legal entity. Please see www.pwc.com/structure for further details.

00:22:24:00 This podcast is for general information purposes only and should not be used as a substitute for consultation with professional advisors.

Contact us


J.C. Lapierre

US Sustainability Transformation & Operations Leader, PwC US
