What did the recent AI Summit reveal about the state of AI, including the energy problem?
The recent AI Summit in Paris raised key issues about AI regulation—and saw the US and UK refuse to sign on to the declaration.
Generative AI is becoming autonomous—not only performing assigned tasks but figuring out how best to achieve goals. How likely is a “Terminator” scenario, and how should the world address important issues such as energy use, regulation, and deepfakes?
Join Steve Odland and guest Ivan Pollard, Center Leader for Marketing & Communications at The Conference Board, to learn the difference between generative AI and autonomous AI, the potential dangers of unchecked AI, and why the US and UK refused to sign on to the recent Paris declaration.
C-Suite Perspectives is a series hosted by our President & CEO, Steve Odland. This weekly conversation takes an objective, data-driven look at a range of business topics aimed at executives. Listeners will come away with what The Conference Board does best: Trusted Insights for What’s Ahead®.
C-Suite Perspectives provides unique insights for C-Suite executives on timely topics that matter most to businesses as selected by The Conference Board. If you would like to suggest a guest for the podcast series, please email csuite.perspectives@conference-board.org. Note: As a non-profit organization under 501(c)(3) of the IRS Code, The Conference Board cannot promote or offer marketing opportunities to for-profit entities.
Steve Odland: Welcome to C-Suite Perspectives, a signature series by The Conference Board. I'm Steve Odland from The Conference Board and the host of this podcast series. And in today's conversation, we're going to discuss AI. How do you spell it? What's the innovation going on? What's happened in the US, and what happened recently at the Paris conference?
Joining me today is Ivan Pollard, who's the head of our Marketing & Communications Center and also covers all things AI here at The Conference Board. Ivan, welcome.
Ivan Pollard: Thank you for having me, Steve.
Steve Odland: So Ivan, you've been doing this a long time, and you've been involved with AI in your marketing career, and obviously you're running the center now. And we've done a lot of work on AI. But just help our listeners understand the various definitions of AI, as we consider them.
Ivan Pollard: OK, well, let's start with the thing that really was the transformer for everybody getting into AI, which was large language models. These were transformative because they made us able to talk to computers and essentially code with our voices.
So large language models are simply used to understand, interpret, and generate human language that is contextually relevant. That's the thing that triggered it. They are a component, a subset, of what is called machine learning, where machines look at and analyze millions, billions, trillions of pieces of data and spot patterns and connections between them—for example, connections between words and what comes next in a sentence. They spot patterns and use them to make predictions or decisions, leveraging the data and algorithms that enable systems to learn without explicit programming.
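To make that pattern-spotting idea concrete, here is a toy sketch in Python, nothing like a real large language model, just word counts, that "predicts" the next word from patterns in data:

```python
from collections import Counter, defaultdict

# A toy "model": count which word follows which in a tiny corpus,
# then predict the most common continuation. Real LLMs learn these
# patterns over trillions of tokens with neural networks, not counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    # The most frequent word seen after `word` in the data
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice)
```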
So they can answer a question when you say to ChatGPT, "What's the best recipe for tonight?" It will tell you if you tell it what the ingredients are. No explicit programming, but that inference and interaction with language. Those are the two basic things that I think we should talk about. Then, Steve, obviously, things started to pop a couple of years ago with generative AI.
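For readers who want to see what that interaction looks like in code, here is a minimal sketch using the OpenAI Python client; the model name and prompt are illustrative choices, not anything specified in the episode:

```python
# A minimal sketch of the recipe example, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "I have bacon, eggs, and rice. "
                   "What's the best recipe for tonight?",
    }],
)
print(response.choices[0].message.content)
```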
Steve Odland: Yeah. And then there's autonomous AI, as well.
Ivan Pollard: Yeah, so generative AI went to that next level, which is that you could get it to do a task for you. Draw me a picture of a kitten playing soccer. Make me a recipe with just bacon and eggs and rice. Those were generative. You could also say, "Get me some code" or "Find me a new compound," using language. Generative AI was used to create outputs such as text, images, computer code, designs, and music, and all sorts of other things that resemble content created by humans. Humans were still involved in that.
What's next? We already have some parts of it, but what's coming next is autonomous AI, which is used to perform tasks without human intervention or direct control. So let's go back to what I just said: "Hey, ChatGPT, make me a recipe with these ingredients." What you do with autonomous AI is say, "Hey, make dinner, and make sure my kids will eat it."
So one is a task, the other is a goal, and you can go even deeper with autonomous. Where AI agents and things like that will come into play is the idea of autonomous, multistep programming, with the ability to reason and to analyze data. And so you'd be able to say, "I've got $28. Go off and buy me the food I need tonight to make the most nutritious dinner that you know my kids will eat. I want you to make sure it all lands at 5 o'clock, and then I want you to talk me through preparing it. Then I want you to take control of the oven and make sure it's all perfectly cooked at 7 o'clock."
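Here is a hedged sketch of that task-versus-goal distinction: an agent loop that plans a step, acts, observes, and repeats until the goal is met. The planner and tools are hypothetical stand-ins, not a real framework's API:

```python
# Toy agent loop: plan a step, act, observe, repeat until done.
# The "planner" is a canned list of steps; in a real agent it would
# be a model reasoning over the goal and the history so far.
GOAL = "nutritious dinner the kids will eat, ready at 7pm, under $28"

def act(tool: str, args: tuple):
    print(f"-> {tool}{args}")      # stand-in for a real API call
    return "ok"

def plan_next_step(goal: str, history: list):
    steps = [
        ("find_recipe", (goal,)),
        ("order_groceries", ("eggs, rice, broccoli", "$24")),
        ("schedule_delivery", ("5:00 pm",)),
        ("set_oven", (375, "7:00 pm")),
    ]
    return steps[len(history)] if len(history) < len(steps) else None

history = []
while (step := plan_next_step(GOAL, history)) is not None:
    result = act(*step)             # act in the world
    history.append((step, result))  # observe; informs the next plan
```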
Steve Odland: Yeah, this sounds almost like sentience. In other words, human-like thinking, but it's not because—
Ivan Pollard: It's not.
Steve Odland: —it's really responding to just a fulsome set of directives.
Ivan Pollard: As I say, I think it's a little bit tenuous, but the official definition people will tell you is task-oriented versus goal-oriented. When it's a goal, make me the best dinner possible for $28, it can react in real time to somebody putting a deal on a website at Walmart and sending you the eggs for $4 cheaper. It's goal-oriented. It's going to try to achieve what you want. And in theory, it learns as it goes.
Steve Odland: OK, so those are the basic levels of AI as people describe them today. Now, there was this conference recently in Paris, and it was a multi-nation conference regarding AI. Tell us about that conference. What countries were represented, and what were the objectives?
Ivan Pollard: Actually, it kicked off, Steve, with an AI safety summit in the UK under then-British Prime Minister Rishi Sunak in November 2023. All the countries went, including the USA under the previous administration, and they signed, essentially, an agreement that they would try to make sure AI was regulated, safe, and good for humanity.
Shift forward to early February 2025: Macron has a second summit, slightly different, called the AI Action Summit. This one, again, put out the same sorts of things. Making AI useful for people and useful for the planet was one of the missions. The second mission was making AI available and accessible to everybody. And the third was to make sure it was, again, safe, regulated, and harmonized across the world.
So you had countries there like China and France, obviously, and India. Those all signed the declaration. It was a little bit woolly, but it was a declaration. But two countries said they wouldn't sign: the USA and the UK.
Steve Odland: And so what did the declaration say?
Ivan Pollard: The declaration essentially said, as I mentioned, those three things: We'll make sure that we use energy efficiently, we'll make sure AI is accessible to everybody, and we will make sure it is developed safely and with controls in place.
Steve Odland: OK, so it built off of the November summit, which focused on safety, and it added the other two: some climate aspects, and that it be good for humanity. Gosh, that sounds pretty good. I mean, if all the nations could do that, then it seems like a home run. Why the holdouts of the US and the UK?
Ivan Pollard: Now, of course, we don't know exactly why. But the things that have been reported, and the things that we have found out, have to do with the innovative nature of AI and not wanting to put any barriers in the way of the speed with which that innovation happens.
Steve Odland: So it was basically a concern by the UK and the US that adding too many regulations would throttle the innovation level and the development level, and thereby not be as good for humanity as it could be. Whereas you had another group, the rest of the countries, saying, no, we need to regulate it to make sure it's safe, it doesn't use a lot of energy, and it's good for humanity. So some of the objectives were the same. It's just a matter of how you go about it. Is that a fair description?
Ivan Pollard: I think so. And the speed with which you go about it.
Steve Odland: The speed. Yeah. Well, the thing is, you've got a lot of development going on in the private sector and the public sector, both in the UK and the US, and if they had signed it, it would really throttle that back. And so this feels like any other frontier that we've approached. You and I, in our careers and lifetimes, have seen many of these where you've got to let it run a little bit until you figure it out and let it mature, right?
Ivan Pollard: That's definitely an agreed route that has been used in the past. Think about the internet, though. If the internet had developed with four or five different protocols, it wouldn't work the way it does now. So I think there is a little bit of getting the balance right between allowing for innovation and allowing for harmonization, a word I used earlier on. Regulation is good, but making sure this thing ends up as optimized as it can be for humanity is, I think, better than regulation.
Steve Odland: Well, how could it be unsafe? I mean, give us some scenarios where AI would really be unsafe for people.
Ivan Pollard: Well, everybody's looked at what they call the Cyberdyne scenario, the thing that happened in "Terminator." Remember I said autonomous AI makes decisions on its own and turns its own learning into actions. That is one of the extreme cases of where it's not safe.
Steve Odland: But is that real? I mean, that's a movie, that's fiction. Are we reacting to somebody's imagination versus reality?
Ivan Pollard: I think we probably are, but I remember watching "1984," as well, and it said that one day people would be carrying phones around with them, and that seemed crazy at the time.
I think it is a possible outcome that some people still talk about. I think it's very unlikely, Steve, to your point. I think it's humans and machines working together that is really where this thing goes.
So what's the step back from that, if that's slightly fictitious, something that could happen but that we don't know necessarily will? The things behind that are the ability for AI to run amok, probably in three areas. One is crime. How do we make sure that the underpinnings for everybody are used appropriately? Crime would be one. The second is destabilizing power in the world. Who can make what happen, and how will that be? And the third, actually, it is a fair thing to say, is the risk around energy. If we all go after this at the same time in different ways, we may burn more energy than we think is appropriate.
Steve Odland: So your point on power under number two was political power. The point under number three is electrical power, which means that when you do this stuff, it consumes a lot of energy, and therefore you've got carbon output and so forth. So that's the environmental piece of it that they talked about in Paris, right?
Ivan Pollard: Yeah. And just to put it into ordinary language for people: you make that recipe I talked about earlier using ChatGPT, and that uses about the same amount of electricity as keeping a light bulb on for five to 10 minutes.
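That comparison holds up as rough arithmetic if you assume a commonly cited, unofficial estimate of about 3 watt-hours per query:

```python
# Back-of-envelope check of the light-bulb comparison. The per-query
# figure is a commonly cited estimate, not an official number, and
# varies a lot by model and prompt.
query_wh = 3.0       # assumed watt-hours per ChatGPT-style query
bulb_watts = 25.0    # a small incandescent-class bulb

minutes = query_wh / bulb_watts * 60
print(f"One query ~ that bulb running for {minutes:.0f} minutes")  # ~7
```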
Steve Odland: Wow. Yeah, it's interesting. The same thing with crypto, the amount of energy that it's consuming. So with all of these new technologies, you have to be thinking about them in terms of the broader goal.
OK. But let's go back here. Because, you know, imagination's a wonderful thing, because imagination drives new and different services and products and really advances the state of humanity and the quality of living and all of that. So you can imagine good things. You can imagine bad things, which is your point on the crime. If you've got bad people imagining bad things, it can be detrimental to society. But you could also imagine very scary things that may never happen. And maybe that's where the governments can help allay some of the concerns.
Is that a fair way to bracket what's going on here?
Ivan Pollard: I think it is, Steve. I think everything, every advance that humanity has ever made, comes with risk. You need good actors to sit there and make sure that the risk is controlled. On the other hand, you do have a point of view that says: that's true, but the more you control me, the less I can do that's different, and therefore innovation doesn't happen.
Steve Odland: Yeah. The Conference Board's been very clear on regulation. We are not against regulation. We think that the benefits should outweigh the costs, number one; and number two, that it should level the playing field and protect the participants, but that we ought to let people compete on that level playing field in order to drive innovation and results. And so I think this is no different than our position on any regulation.
Ivan Pollard: No, I think you're correct. I think that's very nicely said.
Steve Odland: OK, so let's go back here. We talked about generative AI doing output that resembled human content. That doesn't seem all that dangerous, does it?
Ivan Pollard: Well, it does if you use it the wrong way. It doesn't seem at all dangerous when what I'm doing is using it to create an agenda for a meeting or a slide for PowerPoint, or my daughter's making pictures of her favorite rock stars winning the FIFA World Cup. It can be dangerous if it's a deepfake video of you, Steve, which they can now pull from just one video of you online on CNBC, phoning me up and saying you want to talk to me about AI and regulation. So deepfakes, getting into people's bank accounts, hacking people's code and passwords: there's a lot that generative AI can do that could be used for harm.
Steve Odland: OK, that's really helpful to understand. Now, how can machine learning hurt you?
Ivan Pollard: Well, machine learning is a part of generative AI; it's just a way of looking at data. You could argue that you can still put machine learning to bad use. Massive amounts of data, computers spotting patterns: "Hey, how can I find a way of taking Ivan's DNA and suddenly reengineering him to become a Manchester United supporter?" It could happen.
Steve Odland: Oh, I don't think so. But it is a risk that we should really consider, don't you think?
Ivan Pollard: Yeah. And I think some of the bioethics that are going to come into play when it comes to AI are going to be one very big stream of consideration, but also biowarfare.
Steve Odland: Biowarfare. Well, you also have trading: if these models make predictions and start manipulating trades, then, again, you're skewing the playing field. That's another whole area.
We're talking about AI, the risks of AI, and what's happening with the various conferences, most recently Paris. We're going to take a short break and be right back.
Welcome back to C-Suite Perspectives. I'm your host, Steve Odland, from The Conference Board, and I'm joined today by Ivan Pollard, who is the leader of our Center for Marketing and Communications and AI at The Conference Board.
OK, Ivan, so we were talking about a number of the dangers of the various kinds of AI, machine learning and generative AI, and you gave us some great examples. It seems like autonomous AI is really the scariest one.
Ivan Pollard: It is, but just like all of these things, Steve, it's also the most exciting one.
Steve Odland: Aha. OK. Tell us why.
Ivan Pollard: Well, the promise of autonomous AI is that it can take away not just tasks but entire work streams that the human being is not best optimized to deliver, and then free those human beings up to do the things that they are better suited to deliver.
A super example would be sitting by a hospital bed. The human being will have the empathy and the emotional connection with the patient to understand their problem and communicate with them. The autonomous AI in the background can be listening to everything, taking the notes, helping you spot new conditions arising before they happen, and also helping to generate solutions for whatever the pain or the problem is.
Steve Odland: Yeah. So that's pretty good stuff. So I guess, when I listened to you run through all these forms of AI and the upsides and the downsides, you can see why countries feel like they've got to be on the same page in terms of what they're allowing and what they're trying to regulate—again, to try to protect the participants on the playing field, right?
So how did they think about balancing that? Because as you said before, if you overregulate, you squish innovation, and of course, that's the US and the UK's concern. Where is the right balance?
Ivan Pollard: So I think that's still being worked out, and maybe what we should do is ask AI to solve it for us, because it might be able to do it in a better way.
Though I think, where we are, it's very important to keep the transferability of these things open, so that if something massive and good happens, we can instantly spread it around the world. That's where you sit there and go, OK, we've got to make sure these things are harmonized and regulated.
But on the other hand, we in the US didn't see DeepSeek coming. DeepSeek, for those of you who don't know, is an equivalent to ChatGPT, but it was made with fewer chips and less money. It's open source, which is one of the other things they talked about at the Paris Summit: How do we get more open-source things that people can build on top of, rather than just buy? It came out on January 20, and it shook, essentially, America's belief that it had the lead in AI. And the administration reacted very quickly to say, OK, if we want to keep the lead, we've got to keep going.
And I guess that fed in. Two and a half weeks later, you get the Paris Summit, and that fed into America's response and, to a degree, the UK's, which is in a completely different position: they were so far behind that if they don't find a way to slingshot, they're going to get left on the starting line.
Steve Odland: The other area that we haven't talked about, but they did at these conferences, is IP, intellectual property. And it's interesting that China signed this. You don't want to be cynical about that, but the concerns have been with China's respect for other people's IP, so it's interesting that China was in favor of this. Another area of risk really is related to intellectual property, Ivan. Talk about how that comes into play with AI.
Ivan Pollard: Yeah. Remember we talked about machine learning? The machine learns by looking at billions of sets of data. GPT-4, apparently, was trained on 13 trillion pieces of data: text, computer code, mathematical formulas, whatever it was. It absorbed it all and found patterns in it.
Now, inside those 13 trillion bits of data is probably today's copy of The New York Times, or an old Bob Dylan song, or maybe a piece of chemical engineering that is patented. So when it starts to pull things apart and then put them together again in new and unforeseen ways, how is it using the things that other people have patented or hold IP on? How is it using that to create something new?
Now, the problem with most of the big models is we don't know. And that's one of the questions I think AI has raised for the entire law of intellectual property: if the picture behind me has a little windmill that somebody copied from a da Vinci painting but interpreted themselves, how would we ever know?
That's the same thing that's happening with AI, and it's causing problems.
Steve Odland: It's interesting. I heard an interview with Paul McCartney recently, one of the former Beatles, and he referred to a YouTube video that has him singing one of the Beach Boys' songs, and it's video and audio. And he said, I never sang that song, never in my life. I have never sung that Beach Boys song.
That is a deepfake, and that is all AI. And he was upset because he said, if someone can take not only my name but my likeness, my voice, everything about me, and create content that I never created, and it's outside of my control, what do I have? What is there of me? What's left? So this intellectual property issue is pretty broad, but that's scary.
Ivan Pollard: And in that case, do the Beach Boys sue McCartney or does McCartney sue the Beach Boys? And how do we ever find out who actually made it?
Steve Odland: Maybe one of their bots sues the other bot, and the other bot defends. I don't know. I mean, this is the Wild West.
Ivan Pollard: So there are companies and services springing up now that will help track down, to the individual pixel or the individual piece of code, where something was created. But you sit there and think: a three-minute video, with every individual pixel, that's billions and billions of iterations of things they have to find the provenance of. And then you sit there and go, yeah, it's going to be super-difficult. So I think this is a big question.
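As a naive illustration of provenance tracking, hashing a file's exact bytes gives a fingerprint that matches perfect copies, which also shows why derived content is so much harder: change a single pixel and the hash changes completely. Real provenance standards such as C2PA go much further, signing edit history into the file itself:

```python
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 of a file's exact bytes: a naive content fingerprint."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Two byte-identical copies match; change one pixel and the hash is
# completely different -- which is why tracing *derived* content is
# so much harder than matching exact copies.
```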
At the AI Summit, they didn't really get deep into how they were going to solve intellectual property. They did talk about the need to tackle that problem.
Steve Odland: Well, as we sit here, there are probably thousands of examples of abuse being created right now, with people thinking about all these ways. Some of it is just people doing it for fun, but unfortunately, there are a lot of nefarious goals in here, too.
Ivan Pollard: Yeah. Like we said, the deepfake of somebody saying, "Hey, I've got your son, send me a payment right now," and here he is speaking on the phone, and you can hear his voice, or you can see his face. That's going to be tough to spot. So I think all companies should be thinking about how to train people to look at these. In a deepfake video, there's often not as much emotion; sometimes, as people move, the light doesn't move with them; sometimes the language is a little unusual for the context. Companies are starting to put training out there now that teaches you this. And of course, we'll have AI defense mechanisms that stop the AI bots.
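As a crude illustration of the lighting heuristic (real detectors are trained models, not hand-written rules), here is a sketch that flags abrupt brightness jumps between video frames:

```python
import numpy as np

def brightness_jumps(frames, threshold=15.0):
    """Flag frame indices where mean brightness jumps abruptly."""
    means = [float(f.mean()) for f in frames]
    return [i for i, (a, b) in enumerate(zip(means, means[1:]))
            if abs(b - a) > threshold]

# frames would normally be grayscale arrays decoded from a video;
# here, three synthetic frames with one abrupt lighting change
frames = [np.full((4, 4), v) for v in (100.0, 102.0, 140.0)]
print(brightness_jumps(frames))  # -> [1]
```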
Steve Odland: I think that's what it's going to have to come down to, because they're getting better and better, and it's learning. So at some point, just as you need spam filters, you're going to need AI filters to filter out bad AI, essentially down to, as you said, the pixel level, or whatever the equivalent is in the audio track.
Ivan Pollard: When you think about the working space, I mean, it's a bit tenuous, but this is an argument for why we need to come back to the office.
Right now, if we're on a computer, everything is digitized. This interface that you and I are on right now, and our listeners are on, is digital. And there is going to be a little bit of value ascribed to in-person reaction and interaction.
Steve Odland: Yeah. You actually raise a good point. Is that really you, Ivan? And how do I know?
Ivan Pollard: Send me some money and I'll tell you.
Steve Odland: Exactly. Or when you start rooting for Man U.
Ivan Pollard: Yeah.
Steve Odland: The last thing that they talked about that we have time for is the sustainability piece. And again, it's the energy that's required to run all of this AI. I mean, you just talked about one little example, running a light bulb for five to 10 minutes. But it's enormous. I mean, there isn't enough electrical generation right now to provide for the planet. And then you start thinking about putting automobiles and other forms of transportation on the grid. That doesn't work. And then you overlay it with crypto, and then you overlay it with AI.
Just in this country alone, but also worldwide, we don't have a grid for this; we don't have the power generation. And if the only solution is to add coal-fired plants, like China and India, then we're going to create a worse environment than we have today. How do you handle all that?
Ivan Pollard: I honestly don't know. And it's concerning, I think, for humanity, which is why it's great that the governments are starting to think about this. But the energy drain of AI alone—just from the data centers—has been mapped out over the next five years to be somewhere around the energy demand of a small country in Europe.
So we're having to add that. It doesn't sound like a lot, does it? Two to three percent. But that's a lot that we have to add on, and where it's going to come from, I don't know. But just to play the positive side: right now, we're getting closer and closer to getting fusion to work. In France, one of their fusion reactors ran for 22 minutes 10 days ago, which is much longer than anybody has managed before.
And they're using AI to keep the machinery, I don't really understand the technology, but to keep the machinery stable at temperatures in the hundreds of millions of degrees. AI makes the adjustments for them. So at the same time that AI is draining energy, it might actually be helping us create it, as well.
And that goes back to that balance, Steve. What do we need to hold back, and what do we need to let go forward? That's where you need the really good brains, the scientists, and the diplomats to work this out together.
Steve Odland: Yeah, and fusion is the holy grail of electricity production. I mean, you saw it going all the way back to the '60s in "Star Trek." It's safer, there's no residual waste, and it doesn't pollute the environment. Good progress on that.
But you're right. I mean, you have to think about the balance here, all the pros of AI and whether it outweighs the cons. And I think the conclusion has been yes, that still is the case, but you have to think about the downsides and manage it.
Ivan Pollard: That's good. And just one thought about fusion: One gram of hydrogen produces the same amount of energy in a fusion reaction as 11 tons of coal.
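That figure checks out as back-of-envelope physics if you assume deuterium-tritium fuel, where roughly 0.38% of the fuel mass is converted to energy:

```python
# D-T fusion releases 17.6 MeV per reaction of ~5 atomic mass units,
# i.e. about 0.38% of the fuel mass becomes energy (E = mc^2).
c = 3.0e8                          # speed of light, m/s
mass_to_energy = 0.001 * 0.0038    # kg converted from 1 g of fuel
energy_joules = mass_to_energy * c**2   # ~3.4e11 J

coal_joules_per_ton = 29.3e9       # ~29 GJ per metric ton of coal
print(energy_joules / coal_joules_per_ton)  # ~11.7 tons of coal
```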
Steve Odland: Wow. Ivan Pollard, thanks for being with us today and sharing with us the latest on AI and the Paris conference.
Ivan Pollard: Thank you for having me, Steve.
Steve Odland: And thanks to all of you for listening to C-Suite Perspectives. I'm Steve Odland, and this series has been brought to you by The Conference Board.