Podcast: The Promise of Artificial Intelligence, With Steven Shwartz

Artificial intelligence (AI) is a branch of computer science devoted to creating computer systems that perform tasks characteristic of human intelligence, such as learning and decision-making. AI overlaps with other areas of study, including robotics, natural language processing, and computer vision. Understanding what AI can do—and perhaps more importantly what it cannot—is critical for understanding the substantial benefits AI can bring to many sectors of the economy and society. Rob and Jackie talk to veteran AI researcher, statistician, and investor Steve Shwartz about the mechanics of AI and how to spur further development and adoption of the technology.

Auto-Transcript

Rob Atkinson: Welcome to the Innovation Files. I’m Rob Atkinson, founder and president of the Information Technology and Innovation Foundation. We’re a DC-based think tank that works on technology policy.

Jackie Whisman: I’m Jackie Whisman. I handle outreach at ITIF, which I’m proud to say is the world’s top ranked think tank for science and technology policy.

Rob Atkinson: This podcast is about the issues we cover at ITIF, from the broad economics of innovation to specific policy and regulatory questions about new technologies. Today we’re going to talk about AI, or artificial intelligence, and specifically about the culture of fear surrounding this technology. Unfortunately, in the minds of many, including frankly some policymakers, AI is magical and all-powerful. It’s poised to do all sorts of transformative things, including matching or even surpassing human capabilities and eliminating most jobs, or, if you believe Elon Musk, actually imprisoning humanity in gorilla-like enclaves in the mountains. We are here to talk about why all of that’s nonsense.

Jackie Whisman: We’re going to talk to veteran AI researcher, statistician, and AI investor Steve Shwartz. He is the author of the upcoming book Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, which is out in February. Welcome, Steve.

Steve Shwartz: Thank you. Thanks for inviting me.

Jackie Whisman: Can you start off small by explaining how AI works in simple terms? You have a lot of experience with this.

Steve Shwartz: Sure. So there are two major types of AI, machine learning and natural language processing. Machine learning is really an evolved form of what we used to call statistics, but it’s responsible for most of the major AI inventions like facial recognition and machine translation. Natural language processing enables us to talk to our smartphones, and in some cases it makes use of that machine learning technology.

So let me spend a minute and just talk about machine learning itself. Most machine learning starts with a table of data in which one column contains the correct label. So for example, in a facial recognition application, each row in the table will contain the individual dots of a facial image that are known as pixels, plus the name of the person in the image, and there’ll be millions of rows of images in the table. The job of the machine learning system is to learn a function that relates the pixels in each row to the correct name. And once that function is trained, that function can be used to correctly name images that weren’t in the database and that’s how facial recognition works. And that’s how pretty much most of the AI systems work. But it’s important to note that that function that can put names to faces, can’t do anything else. It can’t distinguish dogs from cats, it can’t translate languages. And more importantly, there’s absolutely no reason to characterize that function as human-like thinking or human-like intelligence.
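To make the idea concrete, here is a minimal sketch, in Python with scikit-learn, of the “learn a function from a labeled table” approach Steve describes. The data is randomly generated stand-in “pixel” values with made-up labels, not a real facial-recognition dataset, and the simple classifier is only an illustration of supervised learning generally, not the deep networks that production systems use.

```python
# A minimal sketch of the supervised-learning idea described above:
# each row of the table holds pixel values plus the correct label, and the
# system learns a function mapping pixels to names. The data here is
# synthetic stand-in "images"; real facial recognition uses deep networks
# trained on millions of labeled photos.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake "table": 600 rows of 64 pixel values each, labeled 0, 1, or 2
# (standing in for three people's names).
X = rng.normal(size=(600, 64))
y = rng.integers(0, 3, size=600)
# Give each "person" a distinct pixel pattern so there is something to learn.
X += y[:, None] * 0.75

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn the function relating pixels to labels...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then apply it to rows that were not in the training table.
print("accuracy on unseen images:", model.score(X_test, y_test))
```

The key point mirrors Steve’s: the learned function can only map these pixels to these particular labels; it cannot distinguish dogs from cats or translate languages.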

Rob Atkinson: So Steve, one of the things I really enjoyed about your book, and I really encourage all the listeners to pick up a copy as soon as it comes out, is that, first of all, it’s really a good primer. Me being not a computer scientist but knowing somewhat about this, I found it a very nice primer. You go through how it actually works and the various applications, and then how it’s used in things like driving and other areas. But I thought one of your key points in the book is what you just said: it’s not human intelligence. We released a report a few years ago called It’s Going to Kill Us and Other Myths About AI.

And you have these pundits like Ray Kurzweil talking about the singularity. In other words, all of a sudden these computers are smarter than humans, and you get the rise of what they call AGI, artificial general intelligence, or artificial superintelligence, when the Terminator comes and kills us all. And you know, this is good for peddling TED Talks, but ultimately it’s computer science nonsense. It’s sort of become what people think about it, though. And you do a really good job of explaining what AI is and what it isn’t. Could you say a little bit more about that?

Steve Shwartz: AI is producing amazing things that are having big impacts on society, from facial recognition to translating languages in foreign countries. We can talk to our smartphones to some extent. But as I mentioned before, each one of those things is a single function that can only do one thing, and it really is nothing like human-level intelligence, which can do a lot of things.

Every AI researcher agrees that the current forms of machine learning are a dead end, not when it comes to producing more and more great inventions, but a dead end when it comes to producing the kind of human-like intelligence that Elon Musk or Stephen Hawking or Ray Kurzweil are worried about. AI will never produce machines with that level of intelligence that way. That said, there are many AI researchers out there, brilliant, brilliant researchers, who have brand new ideas for how to build intelligent machines like that.

And I have a little bit of a historical perspective on this. I’ve got enough gray hair to be able to make this comment. I moved to Connecticut in the late 1970s to join one of the major artificial intelligence laboratories, at Yale University. Back then we really thought our ideas would result in intelligent machines. So did researchers at dozens of universities around the world who had their own ideas, but none of them panned out. It is a really hard problem. And there’s no reason to believe that today’s ideas are any more likely to work than the ones we had in the 1980s.

Today’s technology, today’s AI capabilities are impressive engineering feats, but there’s no reason to take them as the slightest bit of evidence that the new ideas have a chance or are any more likely to succeed than the old ones. So we don’t need to worry about the Terminator exterminating humankind, and we don’t need to worry about robots that can read books and take courses and learn all our jobs.

Rob Atkinson: It goes back to the original AI conference, back in the fifties, the Dartmouth one. And even back then a number of the folks there were saying it’s really only about a year or two from now and we’ll get this. It’s a little bit like nuclear fusion, where everybody says another 10 years, another 20 years, and it just has never happened. Part of the problem here, I think, is the terminology. Calling it AI makes it sound like it has human intelligence as opposed to machine capabilities. I was struck, I think it was last week over the holidays, you might’ve seen the Boston Dynamics dancing robots. Anyway, they’re really great robots and they now can dance. And a lot of people on Twitter were like, holy Jesus, the Terminators are close now, if they can dance they can kill us.

Jackie Whisman: Just because they had a couple more joints, a couple more arm joints than other robots.

Steve Shwartz: Yeah. And it’s amazing to me, because five years ago if you saw videos of the Boston Dynamics robots, I think that was Boston Dynamics, you’d see robots try to walk down a path and open a door, and they’d take a step and fall over and hit their head on the door. And now they can dance. That’s really amazing. I mean, that’s just a great application of technology. They’ve just found better ways of learning functions that enable them to dance. It’s impressive technology, but there is really no intelligence there.

Rob Atkinson: There’s nobody home, as they would say.

Steve Shwartz: Nobody home, right? No ghost. The original name of my book was going to be There’s No Ghost in the Machine, but the editors rejected that because you wouldn’t be able to find it on Amazon. There was a famous book written in, I want to say 1970, by Arthur Koestler named The Ghost in the Machine.

Rob Atkinson: Yes, of course, but very few people would get that allusion. So probably a good point for your editors.

Steve Shwartz: Yeah.

Jackie Whisman: But people really have a hard time with this concept, and I think more education may not always be for the best. I say that because we hear a lot of talk about the need for AI transparency, kind of based on the notion that unless we know how the AI engine or the algorithm actually works, then we shouldn’t trust it. I think that is a policymaker stance too. But in fact, studies show that the more people know about how AI works, the less they trust it. You talk about this, and about the fact that some predictive algorithms must remain secret; you give the example of the IRS in your book. Can you say more about that?

Steve Shwartz: Sure. There’s a lot of visibility around the need for transparency, and the European Union has led the way in regulation towards transparency. The US and other countries are following fast. And I don’t think it will be very long before there are some pretty strict regulations in place, and then we’re going to start worrying about whether they’re doing as much harm as good.

There was an interesting study that looked at discrimination. The researchers had people evaluating resumes for job openings, and they looked at the difference between evaluators looking at resumes with names that were clearly African-American versus white names. And they found there was a big bias against the African-American names on basically the same resumes.

So there is really no question that people are often biased. Some are prejudiced, some may have unconscious biases, and AI systems are a possible solution to those human biases. And if we throw out all AI systems because they can be based on biased data and make biased decisions, then we leave ourselves back in the situation of relying on people not to be biased, and we know that that’s a problem. So it’s a very difficult issue, and I think that’s where we’re headed, and I think that’s a bit unfortunate.

Rob Atkinson: Yeah. Oftentimes in these debates people compare a technology to perfection rather than comparing it to what came before it. And in many cases, decisions are made by people with unconscious or sometimes even conscious biases. And the advantage of an AI system, especially as you point out in your book, is that if you work hard on the data set to make sure it is representative of reality, you can really have systems that are less biased than a person might be.

Steve Shwartz: Exactly, and I would prefer to see the regulation focused on requirements around unbiased data sets, because I think that’s where we’d really find the sweet spot.
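As a rough illustration of the kind of dataset requirement Steve has in mind, here is a small, hypothetical sketch in Python using pandas. It checks how well each demographic group is represented in a labeled table and compares per-group accuracy of a model’s predictions; the column names and numbers are invented for the example, not drawn from any real system.

```python
# A hypothetical sketch of a dataset/bias check: before training, compare
# how well each group is represented; after training, compare error rates
# per group. Column names ("group", "label", "prediction") are illustrative.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "C", "C", "C", "C", "C"],
    "label":      [1, 0, 1, 1, 0, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 0, 1, 1, 0, 1, 1, 0, 1],
})

# Representation: is any group badly under-sampled relative to the population?
print(df["group"].value_counts(normalize=True))

# Per-group accuracy: a large gap between groups is the kind of red flag
# a data-quality requirement would target.
per_group = (df["prediction"] == df["label"]).groupby(df["group"]).mean()
print(per_group)
```

Large gaps in representation or per-group accuracy are exactly the kind of signal a regulation focused on unbiased data sets would ask developers to measure and close.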

Rob Atkinson: Yeah. Our framework on this was accountability rather than mandating algorithmic transparency. In some countries a policy gets advocated that you actually have to turn over your source code, which obviously has big problems around IP, and also, what does it tell you? It doesn’t really tell you anything, because of the way that deep learning works. But our view is these systems need to be accountable: if they say they’re going to do X, they should do X, and that’s really what we should be focused on.

Steve Shwartz: Absolutely, yeah. I think accountability is a great regulatory framework for a lot of the AI issues. One of my big soapboxes is self-driving cars. There’s so much enthusiasm about self-driving cars that I think governments are actually starting to consider removing liability from the automakers. There’s a recent article about the UK starting to think about that. And it’s really very scary, because in my view, if you have a car that’s driving itself, if I can sit in the car and read a book and that car gets in an accident, I should be able to blame the manufacturer. And if I can’t blame the manufacturer, what’s to prevent the manufacturer from putting these cars out before they’re really safe?

Rob Atkinson: I know what you’re saying, but on the other hand, again, it goes back to this question of what we’re comparing it to, because I wrote a report on... Actually it’s funny, I used to be at the Congressional Office of Technology Assessment, and 25 years ago we did a report on self-driving cars that Congress asked us to do, and we said minimum 25 years from now. Everybody at the time said, “No, no, it’ll happen very quickly.” No, no. It’s a long, long way away.

I mean, I don’t even know that we’ll ever get to what’s called level five, where you just fall asleep in the backseat even in a snowstorm, but we could get to level four, where the car itself helps avoid accidents. And if we get to that level, we will save a lot of lives. I guess in our view, you’d want a regulatory posture that would enable and promote that, with the assumption that there will be some accidents. And then the question would be, is the accident caused by a faulty algorithm in the car, or simply because you can’t avoid all accidents?

Steve Shwartz: Yeah. In my view, there is a fundamental technology weakness in self-driving cars. Let me explain. Most of us have encountered unexpected phenomena while driving: a flood makes the road difficult or impossible to navigate, a section of new asphalt has no lines, you notice or suspect black ice, drivers are fishtailing trying to get up an icy hill. We all have stories about unique, let’s call them edge cases, and we don’t learn how to handle these unexpected situations in driving school. Instead, we use our common sense. We use our common sense to anticipate: if we hear an ice cream truck in the neighborhood, we know to look out for children running towards the truck. The problem with self-driving vehicles is nobody knows how to build common sense into computers. So what happens when a self-driving vehicle encounters a situation that it hasn’t been programmed for?

It’s either going to crash or stop and cause a traffic jam. A few years ago in Moscow they had a self-driving car competition, and a car conked out at a stoplight and all the cars just stopped and waited. A human driver would say, “Oh yeah, that car must be conked out,” and go around it. But it caused a three-hour traffic jam. So these edge cases are really problematic for self-driving cars. We can have self-driving cars on a corporate campus, where you’ve got a self-driving shuttle that goes from point A to point B. You can identify all the edge cases and program them in, either using machine learning or conventional programming. Maybe in a small area of a city you might be able to do it with taxis or delivery vehicles, but how are you going to do it with a consumer vehicle that can encounter so many different edge cases?

If every driver in the world has had a unique experience, and let’s say there are, I forget how many drivers in the world there are, two or three billion, or maybe it was six billion, I forget the number. If there are six billion different edge cases, how are these self-driving car companies going to handle that? They’ll have to identify every one of those six billion edge cases and individually program them in, [inaudible 00:16:05] machine learning for them, and that’s going to be very difficult. And if we can’t do that, the result is going to be crashes and traffic jams. So I don’t think the answer is that it’s necessarily going to be better than what we have now. I think it could be a lot worse.

Rob Atkinson: Well, when I used to go to work pre-COVID, I worked downtown and lived in the suburbs of DC, and I would ride my bike to work; I’d been doing it for 25 years. And knock on wood, I was doing pretty well until last summer, or two summers ago, when I was riding almost in front of the White House, on Pennsylvania Avenue, and a cab driver wasn’t looking and made a U-turn and hit me. I had a concussion, and luckily I survived with no long-term damage. But if he had had level four, in other words, where the car sensed he was going to hit me and braked automatically, he wouldn’t have hit me. So I think how I would look at it is, level five, where we start taking our hands off the wheel and go to sleep, would probably be a big problem. But getting to that other point, where you have automatic reactions in the car to assist and complement the driver, would probably be a good thing.

Steve Shwartz: And that’s actually a level two capability. Level three is where you can watch a movie or read a book, although it’s only in limited areas. So it’s starting with level three that I have that problem. In the UK, what they’re talking about doing is saying, well, we’re going to make sure that if somebody is reading a book, they have to be able to take over within 10 to 15 seconds or else it’s their fault. I mean, 10 to 15 seconds? The car realizes it’s in trouble, it says you have to take over the wheel right now, and you’re going to wait 10 seconds? Come on. I have a Tesla, I love the Tesla, and I probably run it in Autopilot maybe 90% of the time that I’m driving, but that other 10% of the time it would get in an accident. And if I’m going by somebody on a bicycle, I don’t trust it; I’ll take over.

Jackie Whisman: So you’ve forgiven Elon Musk for his slander against your industry?

Rob Atkinson: I think we agree on that, Steve. It’s interesting. Let me jump into maybe a final or second-to-last question. There’s a struggle in the US now; I think Europe has crossed the Rubicon on where they are on AI. They don’t like it, they want to constrain it; potentially you’d have to get permission from the government to even come out with an algorithm. And again, that’s not to say that the alternative is just libertarianism and nothing.

I liked how you addressed this point in your book. For example, on facial recognition, and I won’t quote you directly, but facial recognition is not an inherently bad technology. It can be used in bad ways, as the Chinese are doing, or it could be used with a set of rules and guidelines and protections: rules about data, how long you keep it, when you can use it for an arrest, et cetera. Maybe you can just say a few more words on that, how you see it.

Steve Shwartz: Yeah, I think that’s absolutely right. Facial recognition technology can have great benefits for society. It’s been used to catch child molesters. It’s been used to catch terrorists. But then on the downside you have the discrimination aspects that we talked about. If it’s used to detain terrorists at airports and it makes mistakes, and it makes a lot more mistakes on minorities, that’s unacceptable. But the answer to that, I think, is better technology and better datasets, not banning it altogether. And then you have what they’re doing in China. The Chinese government is putting together a Big Brother, 1984 society by linking up virtually all the security cameras in the country and using AI to monitor people. So with dissidents, they know everything that a dissident does. They’ve even used it to catch toilet paper thieves. That’s a level of invasion of privacy that absolutely won’t be acceptable in the United States.

And I’m pretty sure we’ll see some regulation against that level of invasion of privacy. We’re already seeing moratoriums on the use of facial recognition in law enforcement and government. Most of the major players won’t sell their technology to law enforcement. We’re seeing lots and lots of regulation, from the FTC to, I forget, I have a list somewhere; it’s probably at least a dozen governmental agencies that have put out regulations about facial recognition.

So I think what we’re seeing in this country is that the government is responding to people’s concerns in a fairly effective way. They may go overboard in some cases, but I suspect we’ll land where we need to land, where we’ll be able to use the technology but the people who use it will be responsible for correct usage, as you mentioned, Rob. And I think the government will hopefully enact some rules and regulations that will prevent itself from doing what China is doing.

Rob Atkinson: I mean, I’m maybe a little more optimistic than you are. If we wanted to be like China, we wouldn’t need technology to do it. We’re not China, whether we have the technology or not; they have guns and might kill people, we don’t. I guess we’ve always sort of had the framework of don’t ban the technology, put a set of rules around it, and where there could be abuses, put the rules there. But one of the challenges, I think, is how the media covers this.

For example, NIST has this facial recognition challenge every year, and the last time they did it, they had over a hundred facial recognition systems that they evaluated. And it turns out that the top 10 basically had zero bias on gender or race, zero. In some cases they were actually a little more accurate for minorities. But the other 90%, let’s say, had bias. And so the story from the media was “facial recognition systems are biased.” What the story should have been was, don’t buy those other ones; only buy the ones that don’t have any biases. It’s like CES testing cars, two of them are bad, and you’re like, oh, cars aren’t safe. So I think that’s part of the challenge, that we should say you’ve got to use the right systems. You’ve got to use effective systems, not shoddy ones.

Steve Shwartz: Yeah. I’m a hundred percent in agreement. I think what that study showed was we know how to make data sets that are unbiased and not everybody did it, but we know how.

Rob Atkinson: Exactly, yeah.

Jackie Whisman: Steve can you tell our listeners where they can find you and follow your work?

Steve Shwartz: Absolutely. So I maintain a website that has a lot of information on AI, including a 15-chapter, 400-page AI 101 book online.

Jackie Whisman: We can include that in our show notes too.

Steve Shwartz: Great, yeah. That’s at aiperspectives.com.

Rob Atkinson: Steve, thank you so much for being here. Really great conversation. I look forward to reading the book in full when it comes out; I’ve sort of skimmed it and I really enjoyed it. I look forward to reading the whole thing, and I encourage our listeners to do the same.

Steve Shwartz: Yeah, likewise. I really enjoyed the conversation and it’s great meeting you both Rob and Jackie.

Jackie Whisman: Thank you, and that is it for this week. If you like it, please be sure to rate us and subscribe. Feel free to email show ideas or questions to [email protected]. You can find the show notes and sign up for our weekly email newsletter on our website, itif.org, and follow us on Twitter, Facebook, and LinkedIn, @ITIFdc.

Rob Atkinson: We’ll have more episodes and great guests lined up. New episodes drop every other Monday, so we hope you’ll continue to tune in.
