Think Out Loud

As AI becomes more powerful, Portland tech expert urges federal intervention

By Elizabeth Castillo (OPB)
Sept. 1, 2023 5:09 p.m.

Broadcast: Friday, Sept. 1


Artificial intelligence is becoming more sophisticated, and the U.S. is a leader in the technology. Policymakers will need an innovative approach to managing the tech, and safety should be top of mind for government officials, according to Charles Jennings. He’s a Portland-based former AI executive and the author of the book “Artificial Intelligence: Rise of the Lightspeed Learners.” He joins us with details of what U.S. leaders can learn from history and why Oregonians should be urgently learning more about AI.

The following transcript was created by a computer and edited by a volunteer:

Dave Miller: This is Think Out Loud on OPB. I’m Dave Miller. Oregonian Charles Jennings is the former CEO of an AI company. In 2019, he published a book called “Artificial Intelligence: Rise of the Lightspeed Learners.” He then spent three years focused on federal AI policy. Now, he has published an urgent message in Politico. He says that because of its infinite risk, AI’s underlying technology has to be nationalized the way atomic energy was after WWII. He’s calling for the immediate creation of the Humane AI Commission. He joins us now to talk about this. Welcome to the show.

Charles Jennings: Good afternoon.

Miller: We asked on Facebook how AI is impacting people. Randy Lauer said, “It hasn’t. Next question.” Matthew Hall wrote, “It’s made it more necessary for me to filter out articles and podcasts focused on it.” So there’s a good chance that Matthew Hall is not listening right now. But if you had a minute to convince him and others who say that this is overblown, that there’s too much talk about it, that it’s not important or real or a big deal, what would you tell them?

Jennings: Yes, there’s a lot of hype, but that does not mean this is unimportant technology. It gets lumped into the GPT bucket a lot.

Miller: ChatGPT?

Jennings: Yes, ChatGPT, the large language model. The use of AI to write a term paper…

Miller: Or to make fun pictures?

Jennings: …or make fun pictures or just to do research on the web. But AI is much more extensive than that. It’s in agriculture, it’s in medicine, it’s in transportation, it flies airplanes, it’s in movies, music, many, many walks of life, and it’s accelerating. It’s getting smarter, and it’s clearly the biggest technology disruption since electricity in terms of its breadth.

Miller: Bigger than the internet?

Jennings: Bigger than the internet, because it is also disrupting the internet - deepfakes, misinformation - but it really goes into all these nooks and crannies at a level that most people are not fully aware of.

Miller: Ryan Hooper on Facebook added this, “I think the prognosticator should point out that AI is like an oncoming hurricane. You see it on the horizon, but until it hits, you have no idea how devastating it will be.” What are the different categories of risks that you’re thinking about?

Jennings: Accidents, bias, greater wealth inequality, the whole disinformation [or] misinformation phenomenon, deepfakes. But ultimately the greatest risk is what we call runaway AI, which comes out of the fact that nobody really understands completely how this stuff works.

Miller: I want to turn to some efforts, at least by the federal government, to think about regulation. Last year, the White House released what it called a blueprint for an AI bill of rights, a very non-binding, very preliminary, big-picture take on the regulation of this technology. What did you think of it?

Jennings: At the 100,000-foot level, fine. Totally irrelevant, however, in terms of having any real impact. It’s not the rights that we need to be concerned about. It’s the technology itself, it’s the proliferation and the need for safeguards, the need for understanding how it’s going to impact society. I think the concern about AI transparency is legitimate but sort of overblown. I was head of an AI company; I had access to every element of an AI, from the algorithm to the data to the test results. There were parts of it where the engineer who built it couldn’t tell me how it worked. And so we need to get our arms around a new type of governance that we don’t have yet.

Miller: And so this question of transparency or legibility is a key one, and it’s one that the blueprint has talked about, but lawmakers as well. The idea is that humans should be able to understand what’s under the hood. We built this thing, and like a car or nuclear energy or all kinds of things, we should know how it works, [and] in this case, how it came up with an answer. You have said that that’s a pipe dream, and you’re not alone in saying that there’s something ultimately not magic but unknowable on the human level. When you give something a million pictures to look at and it figures out different ways to see something, we may never actually know how it came up with some answer. What are the repercussions of that?

Jennings: Well, first of all, I think that was a good way of putting it. There is nothing magical, necessarily, about AI, but these systems deal with such massive amounts of data that our puny little brains really can’t see it all. We can’t draw the inferences that an AI can; it’s working from a huge Himalayan mountain while we’re working from a molehill. So given that, my belief is that the first thing we need to do is to have some new AI labs that begin to test and develop AI in a publicly available way. That’s what we did in nuclear times with Livermore and Los Alamos. These were labs that were controlled by the government, so there was transparency to different elements of government, and we need to do that in partnership with the big AI companies.

Miller: So you’ve started to get into your big pitch here, so let’s turn to it. It’s based loosely, or maybe not so loosely, on the postwar, post-Hiroshima-and-Nagasaki era, when, in 1947, there was the Atomic Energy Commission. What’s the connection for you between that commission and what you’re calling for now?

Jennings: The initial notion came after spending three years working on AI policy in Washington, D.C., remotely (this was COVID times), working with a think tank called the Atlantic Council. I began to despair that we were ever going to get an AI policy. We just seemed to be falling further and further behind. And so I felt that we needed to come up with an entirely fresh, bold approach. And when I looked back in history, I saw Truman in 1947 snapping his fingers, taking nuclear reactors away from the military, which was not a popular move right after WWII and the victory there, and putting the nuclear reactors under the control of five civilians almost overnight. And if you’ve seen the film Oppenheimer, you know the AEC had some problems, but it also played a crucial role for several decades in the first period of nuclear weapons, helping avoid global nuclear war.

I don’t think we’re going to get there with congressional regulation. It’s just too slow. AI is evolving too fast, filling too many dimensions of society. And Congress, if there’s one thing I think everyone would agree about, is not an agile, quick institution. It’s the opposite. So we need something new. And so I proposed this nationalization idea as a prompt to my fellow citizens, to say we’ve got to do something. It’s either this, or let a handful of tech bros continue to run this stuff. They’re carrying the nuclear codes of AI right now. And a couple of the most powerful of these guys want to have a cage fight.

Miller: Literally?

Jennings: Come on, we need a better way of governing this stuff.

Miller: What would it take legally to wrest control of the world’s most advanced AI models from this handful of companies, tech giants, that have spent billions of dollars of their own money, and also a lot of federal funding, to create these models? Legally, what would it take? Because it’s worth saying that this is different from the atomic model, where the federal government, with the Manhattan Project, was behind this to begin with. There, you didn’t have to wrest control of the underlying technology from private corporations. It was the federal government, at Los Alamos and other places, that invented it to begin with. To a great extent, that’s different with AI. How do you envision that happening?

Jennings: Well, first of all, there [has] been a much greater investment by American citizens and American taxpayers than most people realize. There would be no AI without the US government. Over the last six decades, we went through two AI winters, when the only AI research in the world was at US government labs like Jet Propulsion and Bell Labs.


Miller: But you don’t disagree with the notion that Google and Microsoft are also spending their own money to make their own products?

Jennings: Billions, $25 billion a year. And…

Miller: But you’re saying not alone there.

Jennings: A, there has been federal money on top of a foundation that we citizens built for them. And B, they’re working in an economy that we have helped keep free for them. Tech has not been very regulated, and the move-fast-and-break-things culture of tech has worked pretty well for them in building value and creating the internet and social media and so forth. My thesis is that it’s not going to work in AI. We need a new model, and the US government has skin in the game and needs to be involved.

Miller: In your article in Politico where you lay this out, you mentioned the wartime nationalization of the Kaiser Shipyards, which are not very far from where we are right now, up and down the West Coast, including Vanport, and also General Motors and others. That happened, obviously, at a time when the Nazis were on the march [and] were trying to take over the world. The Atomic Energy Commission happened after the world had seen mushroom clouds and had seen what was at stake.

How do you build the kind of public support that I think would be necessary to give a president the sense that they could actually do this? How do you do that when there is not that kind of hyper-visible, imminent danger, but instead what we’re talking about is smart people telling us that a potentially world-changing danger is right around the corner?

Jennings: Well, there’s the risk and there’s the AI dividend, two sides of one coin, and even Sam Altman, the brilliant CEO of OpenAI, is saying we need a new model for governance. And I happen to believe, as someone in the tech field who’s had more experience with Congress than most of these tech AI leaders, that we need a positive program in the federal government. It wouldn’t start out by taking control and nationalizing AI. It would start out with the lab collaboration and ownership of IP going forward. That is sort of under the hood of the Humane AI Commission, how it would operate and…

Miller: Let me make sure I understand what you’ve just described, because with the Politico op-ed there was a word limit, so you couldn’t get into all the details of how you envision this working. And in my mind, it actually was kind of overnight, with the stroke of a president’s pen, saying that this is an emergency. WWII was an emergency, right? That’s why they were able to nationalize industrial production. And I thought that that’s, in a sense, what you were saying we need to do right now: tomorrow, we need to take this over. That’s not what you’re arguing?

Jennings: Parts of it. Yeah, literally in the article I say parts of AI. They only give you so many words and…

Miller: You have a little bit more time right now.

Jennings: OK.

Miller: So what are the parts?

Jennings: OK. So the first part is the research; that’s where the first money goes. These new labs would house the largest-scale AI model research in the world. And they would do it by having the power to take over some of the IP of Microsoft, of OpenAI and so forth, hopefully with as much collaboration as possible.

Now let me insert an idea here, going back to the Atomic Energy Commission. One of the most successful players in the era of the AEC was Westinghouse. They didn’t own the nuclear reactors. They licensed them from the Atomic Energy Commission. They had a very successful business. That model can work. And one of the reasons it’s in the interest of the tech companies is that it removes a lot of the liability that they currently carry; there are positive reasons why tech should be interested. And you would be surprised how much support I’ve heard on this. Now, there are definitely a lot of people who think this is the nuttiest idea they’ve heard all year, but there have been quite a few people who believe that this is really important technology, potentially dangerous, with a huge upside, and we’re starting to look at different ways that we could move forward with democracy at the table, with the United States government at the table.

Miller: I’m glad you mentioned the United States government. How do you think about the rest of the world? China especially, maybe Russia as well, but China especially, in the context of your proposal?

Jennings: We have to stay ahead of China in my view. China is very committed to AI. President Xi has said whoever controls AI will rule the world. Do I have time for a short China AI story?

Miller: Yeah, go ahead. Go ahead.

Jennings: OK. So in Hangzhou, a city of 15 million people, an AI called City Brain is controlling all the traffic and emergency response today, and it has been for the last four or five years. It’s greatly reduced traffic, it’s reduced pollution there. As a result, there’s less carbon in the atmosphere, and it saves lives, because City Brain literally controls all the traffic lights and will turn them off and on dynamically, and it’ll shut down a corridor if a fire truck or ambulance needs to go through. So it’s great. It’s good for everybody. But City Brain is also used for face recognition, identifying Uyghurs on the streets of Hangzhou, and political dissidents, who often as a result are sent to concentration camps. A single AI, City Brain, doing both: either a benefit to the city or an instrument of tyranny. And China has a tendency to use AI in the latter mode way more than we do in the United States.

The United States is still the world leader in AI, as I said in the article, by a light year or two, but that could change. And if we were to do the AI pause that a number of my colleagues have recommended, even if that’s possible, which I don’t think it is…

Miller: Meaning, everybody unilaterally lays down their AIs until we figure this out.

Jennings: Yeah, if you think nationalization is not a good idea, that one just won’t work. And if it did work, the result would be that China would race beyond us. And I don’t think we want a world where the Chinese Communist Party is controlling the most powerful technology on earth.

Miller: Isn’t there a counterargument, though, that by letting Mark Zuckerberg and Elon Musk and OpenAI and everybody else go as fast as they can, we have a better shot of staying ahead of China? In a sense, that is the American enterprise model that’s gotten us where we are right now, right? And you’re talking about, on some level, slowing us down to be a little bit safer. Are you at all concerned that that means we would lose our international edge?

Jennings: It’s a concern, and I’m very convinced that we need to stay ahead of China. I’m not concerned about Russia or anyone else. Toronto is a very progressive place; they have good AI. But outside of that, it’s really the West Coast of the United States. And I think that we’ve got to make sure we stay ahead internationally, but we need to slow down enough, take enough care, to be able to make sure that AI itself doesn’t get out of control.

Essentially, we have three buckets of risk, three potential problems: one is bad actors taking over AI and using it against us; one is that AI itself will go off the rails; and the third is that China becomes the globally dominant force. I think all of those are concerns, and a national Humane AI Commission would deal with those threats. But there’s also the flip side, which Senator Cantwell from Washington is working on actively: creating an AI economy, job training for AI. There’s going to be a lot of jobs, a lot of social disruption. We need to make sure we get that going as well.

Miller: Charles Jennings, thanks very much.

Jennings: Thank you.

Miller: Charles Jennings is the author of “Artificial Intelligence: Rise of the Lightspeed Learners.” He’s a former AI executive who is now calling for the nationalization of big chunks of AI technology.

Contact “Think Out Loud®”

If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983. The call-in phone number during the noon hour is 888-665-5865.
