How do we ensure humanity stays ahead of technology?

Take on Tomorrow podcast series

Take on Tomorrow, the podcast from our management publication strategy+business, brings you Episode 9: "How do we ensure humanity stays ahead of technology?"

From artificial intelligence to alternative energy sources, technology is changing how we live and work. Some of these advances are happening so fast that it’s hard to keep up. But if businesses and governments fail to understand these technologies, what happens to society? In this episode, we’ll discuss what CEOs need to know to be able to harness these fast-moving technologies, how businesses should use AI and other technologies responsibly, and what’s at risk if they don’t adapt to the pace of change.


Hosts

Lizzie O’Leary
Podcaster and journalist

Ayesha Hazarika
Columnist and former senior political advisor

Guests

Azeem Azhar
Founder, Exponential View

Annie Veillet
National Data and Advanced Analytics Lead Partner, PwC Canada

2023 Webby Award nominee for "Best Podcast" series



Transcript

Azeem Azhar: Ultimately, we’re living beings who’ve lived in a world that hasn’t moved at exponential rates, and so we get caught out by the speed with which these technologies improve.

Annie Veillet: Is it too late to start putting in the right frameworks and controls? Absolutely not.

Azeem: Society was really disengaged. It looked at technology as manna from heaven that bright and brilliant people produced as gifts from the gods—and far be it from us to ever ask a critical question of it. And we need to stop doing that, right? We need to be there and ask those questions.

Lizzie O’Leary: From PwC’s management publication strategy+business, this is Take on Tomorrow, the podcast that brings together experts from around the globe to figure out what business could and should be doing to tackle some of the biggest issues we face.

I’m Lizzie O’Leary, a journalist in New York.

Ayesha Hazarika: And I’m Ayesha Hazarika, a columnist and, in a former life, a senior political advisor in London.

Today, we’re talking about technology. Developments such as AI are changing the way we live. But what happens when those changes happen too quickly for business to deal with? How can companies make sure progress is beneficial—and minimize the risk of leaving people behind?

Lizzie: To answer those questions, we talk to Azeem Azhar. Azeem is a technology analyst and author who has founded several tech companies. He talks about how businesses can adapt to these dizzying changes—and why it’s so important to keep people and governments engaged in technological issues.

Ayesha: But first, we’re joined by Annie Veillet, the national data and advanced analytics lead partner with PwC Canada, to talk to us about how companies can use artificial intelligence—and how to use it ethically. Annie, hello. Welcome to the show.

Annie: Hi. Thanks for having me.

Ayesha: Annie, what does responsible AI mean? Because that’s an expression we hear a lot. What does it actually mean in real terms?

Defining responsible AI

Annie: There’s the aspect of ethics, right, of making sure that an AI model is not acting in a way that’s considered unethical for a society. But, for me, it goes beyond that. It’s also about truly governing and making sure that the intent of the machine that’s being used and deployed is respected. So that also means being responsible in aligning the machine to the organization’s strategy. It means making sure that we’re monitoring the machine going forward so it doesn’t have the opportunity to do things like go rogue, et cetera.
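To make that monitoring point concrete, here is a minimal sketch in Python of one simple post-deployment control: comparing a model’s live decision rate against the rate observed when it was approved, and flagging any drift for human review. The baseline rate, the tolerance, and the function names are illustrative assumptions for this sketch, not a prescribed framework.

```python
# A minimal sketch of post-deployment model monitoring, as described above.
# The baseline rate and tolerance below are invented for illustration.
BASELINE_APPROVAL_RATE = 0.62  # approval rate measured when the model shipped
MAX_DRIFT = 0.10               # tolerated absolute deviation before alerting

def has_drifted(recent_decisions: list[bool]) -> bool:
    """Return True if the live approval rate has drifted past tolerance."""
    live_rate = sum(recent_decisions) / len(recent_decisions)
    return abs(live_rate - BASELINE_APPROVAL_RATE) > MAX_DRIFT

# Example: a week in which only 40% of decisions were approvals.
week = [True] * 40 + [False] * 60
if has_drifted(week):
    print("Alert: model behavior has drifted; escalate for human review.")
```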

Ayesha: Now, Annie, we want to talk more about the challenges of working with data and AI as a business. But, first off, we are going to hear from Azeem Azhar. Now, he’s the founder of Exponential View, a platform for tech analysis, and he’s the author of The Exponential Age.

Lizzie: Azeem is basically a technologist, but his view is pretty broad. He thinks and writes about computing but also advances in biology, manufacturing, and energy. This is sort of how he explained it to me.

What is exponential technology?

Azeem: I’m an analyst. And so, I spend my day talking to smart people and reading interesting reports and trying to synthesize a view of the future. And that view of the future is one that is grounded in technology, but also economics and geopolitics. And I wrote a book that summed up my thinking so far, and that book is called The Exponential Age, in the US and Canada, or Exponential if you’re in other parts of the world.

Lizzie: Could you explain what this idea of the exponential age is?

Azeem: I mean, I think that we are going through a really fundamental transition that is driven by some remarkable general-purpose technologies—and those technologies have broad applicability across our economies. And because of that, they change the nature and the shape of companies, of industries, of labor relations. And also the dynamics between countries.

The one that we’re most familiar with is computing. Everything from silicon chips up to AI. There are three others. There is what’s happening in manufacturing, the world of 3D printing. There’s what’s happening within renewables and the transition away from fossil fuels in terms of solar and things like battery storage. And, finally, there is what we’re able to do with the stuff of life, with biology itself. Since we’ve been able to decode genomes and understand how to engineer proteins, we can start to harness nature’s beauty to create new types of industrial processes. So, four families: computing, biology, energy, and manufacturing.

Lizzie: I wonder if we could pull apart this idea of exponential. Why did you choose this term, and what is it about right now that feels exponential?

Azeem: I chose the term because that is really the effect that we are seeing in these technologies. When I say a technology is an exponential technology, I mean that it improves, on a price-performance basis, by at least 10% every year.

Lizzie: Wow!

Azeem: And that compounds, and it grows very quickly. So, if you take a look at computing, the amount of computing power that you can buy for a dollar, roughly speaking, increases by 50 to 60%, on average, every single year. And it’s been doing that for decades. And it’s one reason why the Apple smartwatch that I have that costs a couple of hundred dollars is more than twice as powerful in computing terms as the Cray-2 supercomputer that cost several million dollars when I was a 13-year-old boy, back in 1985. And that’s happened because computing is an exponential technology.
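To see how quickly that compounding adds up, here is a quick back-of-the-envelope calculation in Python. The 50% annual rate and the 37-year span (1985 to 2022) are rough figures drawn from Azeem’s example, not precise measurements.

```python
# Back-of-the-envelope compounding of price-performance improvements.
def price_performance_multiple(annual_rate: float, years: int) -> float:
    """How much more computing a dollar buys after compounding growth."""
    return (1 + annual_rate) ** years

# Computing improving ~50% per year from 1985 to 2022 (rough figures):
print(f"{price_performance_multiple(0.50, 37):,.0f}x")  # about 3.3 million x
# Even the 10%-per-year floor that defines an exponential technology
# compounds to a large multiple over the same period:
print(f"{price_performance_multiple(0.10, 37):,.0f}x")  # about 34x
```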

Lizzie: But I think what I find so fascinating about your work is this gap, right? The technology is doing that, but I, Lizzie, or you, Azeem, or just the general public, we’re not doing that in our common understanding of these technologies.

Humans are not exponential

Azeem: Yeah, that’s absolutely right, because, ultimately, we’re living beings who’ve lived in a world that hasn’t moved at exponential rates.

And so, we get caught out by the speed with which these technologies improve. Because they improve, essentially, they get cheaper. And because they get cheaper, companies and consumers buy them and use them more frequently. The natural world does not expose us to these sorts of rapidly changing trends. Evolutionarily, we never needed to adapt to them.

Lizzie: Well, where does that leave us? I mean, because, it seems like, certainly from where I sit in the United States, you have these massive technology companies and social media companies trying to make policy around, at least here, a law that was last updated in 1996. And so, there is this big space between the technology and then the policies and laws that dictate how they are applied in the real world.

Azeem: Yeah, there is a huge space, and I call that space the exponential gap. Because you have the technologies racing away on that familiar curve that we’ve seen. We have the companies and the organizations that can make sense of it and the adjustment of industry towards it. But there are so many of these customs and habits that are informal institutions, as well as the formal ones, that have to adapt, and they just don’t adapt as quickly as the technology changes during this helter-skelter moment we’re witnessing.

Lizzie: I think there is maybe an argument that some of these human institutions that we have built, whether they are formal or informal, exist to push back on our inventions, right? Exist to pump the brakes a little. So, should our laws and policies kind of move ahead exponentially, or should they sit there and say, “Well, wait a minute. Is that good for society?”

Moving from monopolies to unlimited companies

Azeem: I mean, it’s a real balancing act. So, when we look at the really big technology companies, whether it’s the Amazons or the Googles or the Metas, you know, they feel really, really large. They feel a bit monopoly-like. But when you test them against the rules we use to decide what makes a monopoly, based on 19th- and 20th-century industrial policy, they generally fail that test. In other words, they don’t come across as monopolies, because the assumption of a monopoly was that you would ultimately drive up consumer prices, right? With these new companies, their advantage comes from something that’s much more intangible, right? It comes from their data. It comes from their network effect with their customers. It comes from their IP. And it doesn’t necessarily come from the fact that they have locked down the supply of oil or cotton or whatever it happens to be.

Lizzie: Is this what you mean by an unlimited company? I wonder if you could explain that a bit?

Azeem: You know, an unlimited company is not constrained the way traditional industrial firms were. Traditional industrial firms were constrained in a number of ways. The first is that you had this sort of declining profitability at the margin, right? As you started to sell more and more, you became less profitable, because your input materials were getting more expensive.

You also had issues of managerial complexity. Companies would just get too big to manage, and it would just become expensive to do it well. Well, exponential-age firms don’t seem to face those problems. They’ve demonstrated through their hiring and their use of new technologies that they can expand in many, many, many different areas, and they can still maintain that control. But the more important thing is that they have these network effects, which means that the more customers they get, the more valuable and more profitable they can become. And that network effect is at the heart of Google’s business around its data. It’s at the heart of Facebook, but it’s also at the heart of Apple. Because, when you sell iPhones, you make the iPhone a more attractive platform for developers to develop really cool apps. And that flywheel spins time and time again.

Checks and balances to technology

Lizzie: Well, you’re sort of bringing me to where I wanted to go, which is this idea of trying to put some checks in place—to make sure that this exponential leap, as you keep calling it, benefits society.

Azeem: So, I should just say that the starting point is we have to have that discussion, and we have to have it as a creative and constructive discussion. I also talk a little bit about three things that we should bring up in that conversation. One is this value of resilience, even outside of technology, right?

A lot of the focus for chief executives since the 1970s has been efficiency, right? You pare back everything in your business, your inventories, your staffing, in the name of efficiency. And I think we learned through COVID-19 that resilience is going to be important, because during times of change, you need that.

The second one is the idea of flexibility. Because, in a sense, we don’t really know what’s going to happen. There are going to be many shocks to the system, and we don’t know what strategies are going to work. So, organizations need to think about flexibility. And the final value is commonality—starting to think about what things can be shared—which really helps in getting a societal focus.

I think it’s ultimately, unfortunately, going to require a sort of value-driven behavioral change, but I think that can happen. What I hope we can do as we go through this process, and the way we narrow that exponential gap, is to get more engagement from society, because society was really disengaged in how it looked at technology in the ’70s, ’80s, and ’90s. It looked at technology as manna from heaven that bright and brilliant people produced as gifts from the gods—and far be it from us to ever ask a critical question of it. And we need to stop doing that, right? We need to be there and ask those questions.

I mean, if anything, because of the power of these technologies, we need to do more of that, more of the time. And what I’m worried about is that it becomes so convenient to just press a smartphone button and get a pizza and watch the thing you want to watch, that we’ll just sit on our backsides and not worry while the power slowly and then rapidly ebbs away from us, possibly for good.

Who should have the power?

Lizzie: So, who needs to take that power? Who needs to hear that message? Is it leadership? Is it citizens? And how do you get it to that person?

Azeem: Political leadership needs to hear it. I mean, I certainly do spend a bit of time talking to senior policymakers. So, if you think about chief execs and board chairs, they’re often thinking about these questions, because they can reflect on their business career and say, “We’ve never seen a time like this.” So, they’re really trying to connect back to those issues in ways, I think, that I hadn’t noticed a few years ago.

And I do think that at this moment, where there is sort of some weakness within the political class in coming up with perspectives and views, and there are no answers, getting the brightest brains around the table—which is everyone’s brain, frankly—becomes important. And so, yes, we have to reach lots and lots of people. I do think that it’s going to require quite a lot of activity up and down and across society.

Lizzie: Azeem Azhar, thank you so much for your time.

Azeem: Thank you, Lizzie.

Ayesha: Some really interesting thoughts there from Azeem. Let’s bring back Annie Veillet, from PwC Canada. Annie is experienced in AI, particularly the ethics of AI—one of the areas where there’s a gap in corporate and societal understanding of the technology. Annie, what stood out to you from that conversation with Azeem?

Annie: So, for me, the concept that technology moves faster than people, and that we need to invest in people if we want to start seeing the productivity that the technology offers, is absolutely in line with what I’ve been seeing in the market. But I don’t think it’s something that is impossible for us to surmount.

We need to remember that it is still humans who created that technology. So, we have the capacity to train and learn and make sure that organizations are actually using it. That means taking the time to put in place the mechanisms, from a human perspective, that are required for the technology to be leveraged in a way that drives value.

Taking control of AI

Ayesha: And, Annie, we heard Azeem speak about how fast some of these technologies are developing and improving. That of course includes AI. So, when it comes to the topic of responsible and ethical AI, what are the kinds of conversations you’re having with your clients right now? And what do they tend to ask for help with?

Annie: So, on the first front, when it comes to responsible AI, there’s the question of the ethical frameworks that they may have internally, all the regulations that they have, the guardrails that they’ve put in place within their organization. We are often asked to review and understand: is it all still valid? Or do they need to update it based on the new reality that AI brings?

The good news is that the leaders and the folks in these different organizations realize that how they’re trying to work with and interact with their different clients doesn’t change from the ethical-framework perspective. It simply changes what techniques we can use and what technologies we can leverage.

The second part is to understand what can go wrong. What could an AI machine truly do for it to become a danger, a risk, for our organization? So, we have been asked for a few years now to do a lot more of that upskilling and knowledge-building, both of the leaders and of the teams on the ground, as to how you make sure you put the controls in place for it all to come together in a responsible manner.

Ayesha: And, Annie, just outline for us: what are those scenarios? What could go wrong?

When AI goes bad

Annie: So, one of the clients I worked with was a bank in Europe, and they were using an AI machine to decide whether somebody would be approved or declined for a loan.

And it’s a great way, in a way, to do it mathematically and not let the human emotions come in. But what ended up happening is that the folks who were declined for a loan would call the bank’s call centers and ask for explanations and information about that decision. And the organization didn’t have an answer, because they hadn’t set up the AI machine in a way that could explain how it had come to that decision.

So, they ended up in the news, basically accused of making decisions in an unethical manner, which wasn’t necessarily the truth. But the way it was set up gave that impression. So, there is, again, the very obviously negative press that you might see, but also some of that indirect impact, where the clients didn’t necessarily know it was an AI machine causing that stress. We then helped them fix the situation by adding interpretability within the model, so that the call center agents could explain to somebody calling them: this is why your application was declined, because of X, Y, Z factors. Obviously, the factors were not unethical ones in those situations.
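As a purely hypothetical illustration of what “adding interpretability” can look like, the sketch below scores a loan application with a transparent linear model and surfaces the factors that pulled the score down, in the form an agent could read back to a caller. The feature names, weights, and threshold are invented for this example; a production system would apply an attribution technique (such as SHAP values) to the bank’s real model.

```python
# Hypothetical sketch of an interpretable loan decision. All feature names,
# weights, and the threshold are invented for illustration only.
WEIGHTS = {
    "debt_to_income_ratio": -3.0,       # higher ratio lowers the score
    "missed_payments_last_year": -2.0,  # each missed payment lowers the score
    "years_at_current_job": 0.5,
    "credit_history_years": 0.3,
}
APPROVAL_THRESHOLD = 1.0

def score_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    """Score an application and list the factors that hurt it most."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    approved = sum(contributions.values()) >= APPROVAL_THRESHOLD
    # Negative contributions, worst first, become the plain-language
    # explanation a call-center agent can read back to the applicant.
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return approved, [f"{f} (impact {c:.1f})" for c, f in negatives]

approved, reasons = score_and_explain({
    "debt_to_income_ratio": 0.6,
    "missed_payments_last_year": 1,
    "years_at_current_job": 2,
    "credit_history_years": 4,
})
print("approved" if approved else "declined", "| top factors:", reasons)
```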

Is it too late to take control?

Lizzie: When you’re working with clients and they are either new to AI or new to incorporating ideas about ethical and responsible frameworks, are they feeling like they’re behind the curve? Is it too late for them to take on a kind of responsible AI framework in what they do?

Annie: Is it too late to start putting in the right frameworks and controls? Absolutely not. I think AI is being adopted today. It’s certainly a technology that’s mature enough to use very broadly. But most organizations, frankly, are still at the early stages of realizing the full potential of AI.

Lizzie: Are they thinking about the ethics part, or are they just sort of “gee whiz” about the technology?

Annie: It’s a little bit of both, right? About three years ago, I have to say, we were getting all kinds of requests for the cool proof of concept. “Hey, I just want to check the box saying that I’m an organization that uses AI.” But in the past two years, it started to shift.

So, last year, the majority of the requests that we were getting were about the foundational elements required for AI to be successful: the data itself, data management, making sure all the controls and the quality of the data were there and that we were protecting the data, et cetera. And now, we’re getting more into, “How do I make sure that when I put this into production, I can manage it long-term?” Like, “How do I tie it back to a use case that’s really meant to drive value?” So, far fewer conversations about the cool aspect of AI and a lot more about: how do I do this properly?

The rewards of doing it right

Lizzie: What are the rewards for a business in embedding those kinds of ethics and responsibility conversations from the start?

Annie: I think managing the risk up front can, first of all, save a lot of money in the event that something goes wrong. Think about reputational impacts. Think about what happens if wrong decisions are made by a machine and then you have to go back and change or fix them, or if the machine is working in a way that’s not even aligned to your business’s strategy. That can’t drive the type of value that you really, truly want. So, for me, it’s not a should-you-or-should-you-not. You should absolutely do it. And there’s value in doing it from the start of that design, wherever you can possibly do that.

Introducing diversity into the technology

Ayesha: Traditionally, technology, computing, artificial intelligence—these have not been hugely diverse areas. How can companies help address that issue and improve diversity in these fields?

Annie: Globally, we’re definitely not seeing the diversity that is required. So how do we start moving the needle there? Within the workplace, we’ve found ways to still bring in that diverse perspective, whether it’s hiring data scientists who come out of university with that knowledge already, or upskilling some of the technical women already on our teams to learn more about machine learning.

But the sooner we start influencing early in the educational process, the more that diversity will get out into the marketplace. Maybe I’ll use us as an example. Three years ago, we had, I think, about 14% women on our team of about a hundred. We’re now up to 37%. So, we’re doing everything, and we’re hitting all the angles that we can to start changing that; and the performance of our models, of our projects, and of our team has been increasing exponentially. And it’s really fascinating how important it is to have that diversity, not only to reduce the bias of the models but to make them drive more value.

Ayesha: Well, Annie, it’s been such a fascinating conversation. Thank you so much for speaking to us on Take on Tomorrow.

Annie: Thank you so, so much for having me, and I look forward to hearing many more of these podcasts.

Lizzie: That was really fascinating. And, yet again, I feel like in many of our conversations we are learning that the train is moving down the tracks and people need to get on board, and that it’s time to think the big-picture ethical thoughts now, as opposed to after something has blown up.

Ayesha: I completely agree. I feel like I had a lot of my own biases about AI challenged during that conversation, and my big takeaway is that the sweet spot is using AI in collaboration with human beings. I feel like that was a definite “Agreed, agreed.”

Lizzie: And that is it for this episode. Join us next week, when we’ll be asking: how can we ensure the workers of tomorrow get the skills they need?

Guest: The private sector has to be at the table. If they’re not at the table, you cannot give young people the kinds of skills they need to take up the jobs the private sector is creating.

Ayesha: Take on Tomorrow is brought to you by PwC’s strategy+business.

© 2022 PwC. All rights reserved. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details.

This content is for general information purposes only, and should not be used as a substitute for consultation with professional advisors.
