Podcast: The Case for Killer Robots, With Robert Marks

August 10, 2020

There’s a lot of doomsday hype around artificial intelligence in general, and the idea of so-called “killer robots” has been especially controversial. But when it comes to the ethics of these technologies, one can argue that robots actually could be more ethical than human operators. Humans can commit war crimes. They can deliberately kill innocent people or enemies that have surrendered. Humans get stressed and tired and bring any number of biases to the table. But robots just follow their code. Moreover, U.S. adversaries are deploying these technologies quickly, and the stakes are high if we don’t keep up. Rob and Jackie discuss these technologies—and the risks of sitting out the AI arms race—with Robert J. Marks, Distinguished Professor of Electrical and Computer Engineering at Baylor University, and Director of the Walter Bradley Center for Natural and Artificial Intelligence.

Auto-Transcript

Rob Atkinson: Welcome to Innovation Files. I’m Rob Atkinson, founder and president of the Information Technology and Innovation Foundation. We’re a D.C.-based think tank that works on technology policy.

Jackie Whisman: And I’m Jackie Whisman. I handle outreach at ITIF, which I’m proud to say is the world’s top-ranked think tank for science and technology policy.

Rob Atkinson: And this podcast is about the kinds of issues we cover at ITIF, from the broad economics of innovation to specific policy and regulatory questions about new technologies. In this episode, we’re talking about what’s known as “lethal AI”: military technologies based on artificial intelligence. There’s a lot of doomsday hype around AI in general. And the idea of so-called killer robots has been especially controversial.

Jackie Whisman: When people hear killer robots, they think Terminator, they think science fiction. They think of something that’s far away. But while AI will, in all likelihood, never be self-aware, in reality autonomous weapons systems are advancing and are probably much easier to implement than even self-driving cars.

Rob Atkinson: And when it comes to the ethical hype around these technologies, I’d argue that robots can actually be more ethical than human operators. Humans can commit war crimes. They can deliberately kill innocent people. Humans get stressed and tired and bring any number of biases and mistakes to the table. Robots follow their code. So I want to get our guest’s reaction to all of this. So let’s get to it.

Jackie Whisman: Dr. Robert Marks directs the Walter Bradley Center for Natural and Artificial Intelligence at Discovery Institute. And he is a distinguished professor of electrical and computer engineering at Baylor University. He has consulted for Microsoft, DARPA and Boeing. His latest book is The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI, which we’re excited to talk about today. Welcome, professor.

Robert Marks: Thank you. It’s great to be here.

Jackie Whisman: Before we get started, I wanted you to tell us a little more about your professional background, to put your research into context for everybody.

Robert Marks: Well, I have been doing work in algorithms and artificial intelligence for 30 years, so I’ve been around for a while. I spent 26 years at the University of Washington, part of that before the AI work, and I’ve been here at Baylor University since 2003. I have had research funded by the National Institutes of Health, the National Science Foundation, EPRI, and, more related to what we’re going to talk about today, the Army, the Navy, and the Air Force, which have all been sponsors of my research. My research is primarily motivated by application. I’ve found that applications are the great spearhead of new ideas for theoretical development. And since 2018, as you mentioned in the introduction, I have been the director of the Walter Bradley Center for Natural and Artificial Intelligence.

Jackie Whisman: Well, can you explain to us and our listeners what AI is in the context of military applications?

Robert Marks: Well, I think part of this is not engineering; it’s history. I think to remain competitive, the US military needs to respond and adapt to new warfare technology. Military superiority wins and shortens wars. More importantly, it gives pause to potential adversaries. We see this all over the place in the history of warfare. We don’t have to go back further than World War II to see that technology helped win the war: our cracking of the Nazi Enigma code; the Norden bombsight, which increased the accuracy of dropping bombs; radar, which at the time was a highly protected invention; and of course, the thing that ended the war, the atomic bomb. So military superiority helps win and shorten wars and gives pause to those who would be an adversary, and I believe this relates now to the use of artificial intelligence as the latest technology to be incorporated into warfare.

Rob Atkinson: Robert, you also mentioned World War II, and then obviously the Cold War. The military has always stayed ahead of our adversaries by what they call the offsets. So the first offset was nuclear weapons. The second offset was things like stealth and precision weapons. And now with the rise of Russia and China, the military is talking about a so-called third offset. In other words, how do we keep staying ahead? Because if we’re competing, if we’re fighting or battling with another nation and it’s peer to peer on technology, we might lose because they have more people. And in your book, you go into lethal AI and the practical and ethical challenges. You argue that the development of lethal AI is not only appropriate, but it’s unavoidable if we want to survive and thrive in the future.

Can you say a little bit more on that? Why do we have to do this? Why can’t we decide we’re not going to take the lead in AI weapons?

Robert Marks: Well, unfortunately I think we’re a little bit behind on AI technology development, at least as far as I can see from what’s available in the media. The New York Times says the autonomous weapons arms race is already taking place. It turns out that this idea of an arms race, which you talked about, is something which continues, and it’s typically technological. Andrew Yang, who ran for the Democratic nomination for president, said that the US was losing the AI arms race. This was a few years ago. I think we’re making steps in the right direction right now. So we do have challenges in terms of the AI arms race. We need to catch up on it.

Rob Atkinson: A lot of countries, in fact, as I think you talk about in your book, it’s not just China, the US, and Russia that are investing; a number of different countries’ militaries are focusing on this. What do you see as the big application areas for AI in the military?

Robert Marks: I think that asking that question is like asking, where can we use electricity, or where can we use computers? Artificial intelligence can be applied across the spectrum of military hardware. And I believe that’s the current focus of the generals who are directing the development of artificial intelligence: see where artificial intelligence will enhance older technology, and figure out how to perform the enhancement. There are also new areas coming into focus. The one that gives me the most pause is drone swarms. I don’t know if that has been addressed as much as it should be, but this is really scary stuff, because swarm intelligence actually has a history in warfare; the RAND Corporation put out a whole book on swarming and its use in the military.

But the use of swarming drones is really chilling to me because you are able to take out a few of the elements in the drone swarm and it will still be effective. It’s like kicking over an anthill and stomping on the ants. You can do that a lot, but if you come back in a week, the anthill is reconstructed, and it’s the same thing with the drones. So they’re very robust. Israel has taken, as I see it, some steps toward anti-drone defense, but this is something we need to pay more attention to. So I guess there are some specific areas where we can apply artificial intelligence, and I believe the drone swarm is one of them, but across the spectrum of all applications, artificial intelligence is applicable.

Rob Atkinson: Just to your point there, you make an important point that, first of all, AI is going to be used for much more than just weapons systems. It’ll be used for intelligence gathering. It’ll be used to make sure soldiers have the right sensors on them on the battlefield, a whole set of things. But in the weapons part, your point is an important one, which is that it’s not just used for offensive weapons; it’ll be used for defensive weapons as well.

Robert Marks: Oh, absolutely. Absolutely. You mentioned the different technology that different countries are using. Israel has developed something called an anti-radiation missile. The name of theirs is the Harpy. They launch this Harpy, and it can be done in a totally autonomous scenario: it flies around a predefined kill zone waiting to be illuminated by radar. And as soon as it’s illuminated by radar, it traces the radar down to its source and goes down and takes out the radar installation. So we have autonomous things happening. In fact, the Israeli Harpy has been around for, I believe, about 15 years. So we do have an inroad into some of these technologies.

Jackie Whisman: Well, opponents of so-called killer robots, of course argue that the technologies can’t be trusted and will turn into the Terminator and kill innocent people. What’s your view there?

Robert Marks: Well, I think that here we need to separate science fiction from science fact. As was mentioned in the introduction, artificial intelligence will never be sentient. It will never be creative. It will never understand. And currently, it has no common sense. It can’t even parse simple flubbed headlines. One of my favorites is “Seven Foot Doctors Sue Hospital.” You see there that we have an ambiguity: either these are doctors who are seven feet tall, or they are seven doctors who specialize in the foot. And there’s a whole list of these. There’s a yearly competition, something called the Winograd Schema Challenge, where you look at ambiguous sentences and artificial intelligence tries to parse them. And according to the economics professor Gary Smith at Pomona College, it turns out that these Winograd schemas can only be cracked about 50 percent of the time. So ambiguity is really, really a difficult problem in artificial intelligence.

In fact, you remember IBM Watson, which won at Jeopardy. IBM thought it would be a great idea to go into the medical field and mine data from the medical literature to help physicians. They were commissioned by MD Anderson to do that, but they weren’t able to, and MD Anderson ended up firing them. Basically, the bottom line, condensed into a sentence, is that the AI, IBM Watson, had no common sense to do the word crafting that was needed for this mining.

Rob Atkinson: We couldn’t agree more. AI is almost seen as magic pixie dust now: put AI on something and it’ll do these amazing things. I get frustrated by just simple things. When I go on Amazon and I click on an order, it doesn’t know that it’s me. It thinks it’s my wife, even when I’m on my own phone, and I have to redirect the order to come to me. How simple is that to do? And here we are living in a world of AI. In your book, which I really, really enjoyed and which we’ll talk about at the end of the show, you noted that the Secretary-General of the UN has said that autonomous weapons are “morally repugnant.” Why do you think that’s wrong? I think that’s wrong, but why do you think that’s wrong?

Robert Marks: You have to go back to definitions here. What is the definition of repugnant? I think a lot of people would say that war is repugnant. And yeah, I guess in one sense they’re right, but is it avoidable? I don’t think it is. Humanity has fallen, and there will always be the Adolf Hitlers and the al-Assads, the president of Syria, who are going to go against treaties and use these weapons of artificial intelligence to do what they want. That is a problem. I don’t believe that using artificial intelligence in a so-called “just war” is repugnant. I think it’s necessary. And the fighting of evil and the resistance of evil, as in the case of just wars, is something that we need to participate in.

Rob Atkinson: One of the things, as you know quite well, is that there’s a global movement against “killer robots.” It’s interesting how the other side, what I would call the “Neo-Luddite” side...

Robert Marks: Luddites. Good, good.

Rob Atkinson: They come up with these clever phrases. I don’t want killer robots either. And what I mean by that is I don’t want a robot going around and killing people randomly. No one does. And they use these phrases to scare people. What’s troubling about that, though, Robert, is, as I think you know, there are a number of computer scientists in the US who have signed on to this global petition. What’s your take on that, as a leading AI scientist?

Robert Marks: The banning of these things simply doesn’t work. We could go around and say, “We’d like to ban atomic bombs,” but that isn’t going to work. The best we can do is get together with people of good will and hopefully generate treaties that, as Ronald Reagan said, we can “trust, but verify.” Unfortunately, artificial intelligence is in a new arena. Artificial intelligence, unlike nuclear weapons, is cheap. There was a story about a young lad somewhere in New England, I believe it was Connecticut, who put together an artificially intelligent drone that he armed and was able to control by remote control.

And this drone went into the woods and the guy was able to fire it remotely. And he put this together for 500 bucks. So artificial intelligence is going to be incredibly cheap. It’s going to be difficult to monitor. There’s going to be no purification of uranium or anything like that to watch for. It’s something which is available online to people who are technically savvy. So this to me is also a little bit chilling in terms of verifying artificial intelligence weapons development. I don’t think it can be done. And as a result of that, the only other option is to stay ahead of it: make sure you know what the technology is and come up with the appropriate countermeasures.

Jackie Whisman: So it seems like there are lots of benefits that we should talk about, both to our national security and in keeping our troops out of harm’s way. Can you go into that a little more?

Robert Marks: Yeah. I think that artificial intelligence will assist initially in keeping troops out of harm’s way. In the drone strike that took out Soleimani, the drone was controlled remotely, and it was done totally without American lives being placed in jeopardy. But a big area of the future is going to be electronic warfare, and this is something that needs to be addressed and researched. Electronic warfare has to do with control of the spectrum and what we do with the spectrum. Suppose that the link between the home base and the drone in the Soleimani strike had been jammed. What would we do then? Would we go to a totally autonomous sort of scenario? I don’t think that’s probably a wise thing to do. But do we scrub the mission? Yeah, possibly. That would have been the wise thing to do in the case of Soleimani. But scenarios like this have to be addressed in the future.

And this is part of the arms race, as technology leapfrogs along. This is what the military does. They look at weapons and they look at countermeasures. Then they look at new weapons, and hopefully those weapons will be effective until countermeasures are generated by the enemy. So to answer your question: yes, immediately the consequence is going to be more fighters kept out of harm’s way, but this is only a temporary thing, like most things in the arms race, and it needs to be addressed in the future.

Jackie Whisman: And even though you support these technologies, you also argue for guidelines and guardrails, which I’d love for you to talk about a little bit too.

Robert Marks: Yes. Well, one of the things that I think needs to be talked about is the definition of things like ethics and performance. There are questions of ethics, but ethics never belongs to the AI. Never. It belongs to humans. It belongs to the people who do the computer programming. It belongs to the people who test the artificial intelligence system in the field. They are the ones who are responsible for the ethics, along with the operator, the one who chooses what to do with the artificial intelligence. So it’s important to point to the source of inappropriate use of artificial intelligence in warfare. To my mind, AI in any weapon system should perform the way it was designed to. This is at odds with some of the people who are against artificial intelligence. They believe, for example, that we should draw the line at autonomy.

No, that’s not the question. Autonomy has been shown to work and will work in the future. The question is whether the designed artificial intelligence will do what it’s designed to do. That question belongs in the arena of the computer programmer and the people who test the system. And once that is done, that’s the engineering part, it goes to command, who decides how to use the artificial intelligence, recognizing its performance capabilities and limitations. And therein we have the three components that define the ethics: the programmers, the testers, and the end users.

Rob Atkinson: You mentioned earlier World War II and all the amazing innovations that stemmed from it, radar being one. Later on came semiconductors: Intel, for example, initially made most of its semiconductor sales, which were super expensive, to the Air Force for guided missiles. GPS, aviation, the Internet were really DOD-funded. And obviously we relied then on great entrepreneurs and the commercial marketplace to take these innovations and move them into the economy. Today is a little bit different, in the sense that there used to be what was called spinoff: DOD was so important that it had technology it would spin off into the commercial sector.

Now with AI, a lot of work is being done in the commercial sector, but still, when DOD funds these technologies, whether at universities or other places or startups, they do support AI innovation. I think that’s one of our concerns: if we say that the whole defense world, and maybe even the intelligence world, can’t be involved in AI, it’s going to, at one level, deter and slow innovation, particularly the more radical innovation that universities tend to engage in. What are your thoughts on that?

Robert Marks: Well, the US, as you know, has had recent policies which give a lot of money to universities in order to develop artificial intelligence-based systems and new technology. Unfortunately, as a many-year member of academia, I see a lot of this wasted. I see the money going to high-profile places, places with big brands, and that is probably not a good place to put the money being used in artificial intelligence research today. Unfortunately, that’s the way it’s been done and that’s the way it’s being done currently. We had a visit to Baylor by General Murray. He’s a four-star general who is interested in the development of technology in general for the Army.

And as he went around to the different labs to look at the research being done here at Baylor University, he said, “I don’t want to see your papers.” And unfortunately, as we discuss in my upcoming book, Supply Side Academics, that’s the currency of universities today: funding and the publication of papers. There is little interest in figuring out what to do for the military or the private sector. Not totally, but the emphasis is on those two things, publication of papers and generating money. And unfortunately, I think that’s terrible economics for universities and is something which should hopefully be reversed or mitigated.

Jackie Whisman: One last fun question. If you were a presidential advisor on tech policy, what would be your top priority?

Robert Marks: First of all, I would gather intelligence. I think that we don’t know as much as we need to in order to make policy. I always believe in rewarding youth, innovation, and success. These are the things which I believe the money should go toward. I would stay away from the universities and look at people who have a history of innovation and success. Unfortunately, that’s not the way it works. Every university plays in this game of pork, where lobbyists are hired in order to get their special program funded or something of that sort. I don’t know how to get rid of that. But as president, I would try to get rid of that and try to appeal more directly to people with successful entrepreneurial records and people with innovative ideas, instead of just dumping it on universities, where they’re interested primarily in publishing papers.

Jackie Whisman: You’re going to get a call from Baylor’s lobbyist after that.

Robert Marks: Oh, I’m sure. I’m sure. And Baylor plays the game, but it’s one of those things that everybody has to play until the policy is repealed. It’s like term limits in government. We have a lot of senators who say they want term limits, but they’re not going to limit their own terms until the legislation is passed. It’s the same thing, I think, with universities. The whole broad spectrum needs to be addressed before people play the game properly.

Rob Atkinson: Robert, there’s a group that we really like a lot in the Defense Department now called the Defense Innovation Unit, used to be called DIUx, and it’s an effort by the Defense Department to work more closely with entrepreneurs, the types of folks you’re talking about, and they have a proposal. I think it might’ve made it into the Defense Authorization Act this year, but it certainly hasn’t been funded yet.

And that’s to be able to fund entrepreneurs directly who are developing technologies that the private sector may not fund as much. A lot of venture money goes purely into software; it doesn’t go into some of the military areas. And that sounds like the kind of thing you’re talking about: more money to these really interesting and innovative entrepreneurs to get them focused more on defense applications.

Robert Marks: Absolutely. And I think there needs to be a greater emphasis on reduction to practice. There needs to be a focus on what we are going to do with this artificial intelligence. It doesn’t need to be immediate; it can be in the future, but we need an end goal in sight, not just pure research to publish more papers in some theoretical journal.

Jackie Whisman: Thanks for being here. Can you tell our listeners where they can find you and follow your work?

Robert Marks: Yes, you can. As mentioned in the introduction, I’m the director of the Walter Bradley Center for Natural and Artificial Intelligence, and we have a website called mindmatters.ai, which is a great suffix if you deal with artificial intelligence, right? Mindmatters.ai. And there, for a while, we are offering the book that we’ve been talking about, The Case for Killer Robots, as a free ebook download. Mindmatters.ai is the place where you can see the work of the Center.

Jackie Whisman: Great. Thank you. We will link to that in the show notes.

Robert Marks: Okay. Thank you.

Jackie Whisman: Well, that is it for this week. If you liked it, please be sure to rate us and subscribe. You can find the show notes and sign up for our weekly email newsletter at ITIF.org. Feel free to email show ideas or questions to [email protected] and follow us on Twitter, Facebook and LinkedIn, @ITIFdc.

Rob Atkinson: Thanks for listening. We have more episodes and great guests lined up. New episodes will drop every Monday morning, so we hope you’ll tune in next week.
