Human Robot Interaction

A Society and Culture podcast

Episodes of Human Robot Interaction

We discuss the three laws of robotics by Isaac Asimov.
I talked about HRI methods with Tony Belpaeme.
How can a robot act ethically?
We discuss the conception, rise and fall of ATR.
We talk about the people at ATR and the robots they created.
In this episode we discuss the evolving landscape of academic publishing.
It is not a question of whether sex robots will become available, but when. Many companies are working on their development with moderate success. While empirical research on the topic is rare, the ethical implications of sex robots have been discussed passionately. Some want to ban them before they even enter the market, while others defend the right of people to own sex toys. For this episode I interviewed David Levy, Robert Sparrow and Eleanor Hancock. Important links:
Intimate relationships with artificial partners by David Levy
Robots, rape, and representation by Robert Sparrow
Campaign Against Sex Robots
Interacting with Anatomically Complete Robots by Christoph Bartneck and Matthew McMullen
This episode tells the story of the infamous ACE2018 conference.
What advice would you give to your younger self?
What and how does art contribute to HRI?
Racism is a major problem in our society, and it is an issue from which the HRI community cannot shy away. Several studies have shown that people transfer their racial biases onto robots. In this episode we talk about the difficult topics of racism and sexism. My guests are Kumar Yogeeswaran, Friederike Eyssel and Megan Strait. They all work on racism among humans and towards robots. Besides identifying the biases, we also talk about whether and how robots might be able to help reduce them. ISSN 2703-4054 Relevant links:
Social categorization of social robots: Anthropomorphism as a function of robot group membership by Friederike Eyssel and Dieta Kuchenbrandt
Robots Racialized in the Likeness of Marginalized Social Identities are Subject to Greater Dehumanization than those Racialized as White by Megan Strait, Ana Sánchez Ramos, Virginia Contreras and Noemi Garcia
Robots and racism by Christoph Bartneck, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang and Friederike Eyssel

Addendum: In an earlier version of this post the work of Megan Strait on racism among humans and towards robots was described as “extensive”. Upon Megan’s request the qualifier “extensive” was removed. Megan Strait would like to also note that: I do not endorse the idea of using robots to reduce bias, as I do not find the premise to respect existing understanding and literature. If racism were readily solvable via intergroup exposure, there would not be such movement on “AI ethics” as is readily apparent in the discourse of mainstream media. With respect to combatting racism, my perspective is that robots have potential value in the role they could play in moderating social dynamics (see, for example, Campos, Martinho, & Paiva 2018; Hoffman, Zuckerman, Hirschberger, & Shani-Sherman 2015; and Martelaro, Jung, & Hinds 2015).
Applications in this manner have particular potential to address social inequities (e.g., the placement of responsibility largely on people of color to combat manifestations of racism). But that does not specifically serve toward attenuating individual and institutional bias. 
Most research at universities is done by PhD students. They are an essential part of the progress of science. The relationship between a PhD student and his or her supervisor is not unlike a marriage, as Jorge Cham pointed out. It lasts for a couple of years, starts with a proposal, and culminates in a ceremony where you walk down the aisle dressed in a gown. Naïve young people enter, and around 50% end up in bitter remorse. In today’s special episode I talk to three of my former PhD students, Omar Mubin, Eduardo Sandoval and Jakub Zlotowski, whom I met at a conference in Sydney. We talk about life during and after a PhD. ISSN 2703-4054 Relevant links:
The Scientific Century: securing our future prosperity by The Royal Society
The future of the postdoc by Kendall Powell
Marriage vs. the PhD by Jorge Cham
The business of creating and selling social robots is risky. Many companies have tried to sell social robots, and only a few have succeeded in sustaining their business over a longer period of time. Aldebaran, or SoftBank Robotics as it is known today, is one of the biggest companies selling social robots to human-robot interaction researchers. Their Nao and Pepper robots are widely used in the research community. I had the chance to talk with Amit Kumar Pandey, head principal scientist at SoftBank Robotics, and Franziska Kirstein from Blue Ocean Robotics about how they develop robots. ISSN 2703-4054 Relevant links:
Pepper Robot
Blue Ocean Robotics
Jibo
We continue our discussion on autonomous vehicles and focus in this episode on the legal aspects. In particular, the issues around liability are a major obstacle to the wide adoption of autonomous vehicles. Who will be responsible for damage caused by autopilots? Will it be the driver, the manufacturer of the car, or maybe even the car itself? I talked with Professor Tracy Hresko Pearl, Professor Ryan Calo and Professor Alan R. Wagner about what needs to happen in our legal system to be able to deal with the liability around autonomous vehicles. ISSN 2703-4054 Relevant links:
Compensation at the Crossroads: Autonomous Vehicles & Alternative Victim Compensation Schemes by Tracy Pearl
Loyal Wingman Project
One Flew Over the Cuckoo’s Nest by Ken Kesey
National Vaccine Injury Compensation Program

Transcript: [00:00:00] Tracy: I’m convinced that the U.S. legal system is not currently prepared to handle autonomous vehicle crash cases. So crashes that are caused by some sort of hardware or software malfunction rather than by driver error. [00:00:15] Christoph: It is alarming that the leading country in testing autonomous vehicles is legally unprepared for this disruptive technology. Last week we already started talking about how autonomous vehicles might change our society. This week, we will try to better understand the legal and political implications that these new forms of transportation might introduce. [00:00:48] Christoph: So what exactly is the problem in the legal system of the U.S.? [00:01:02] Tracy: We have no case law. We have very minimal statutory law in the United States right now, and so I’m concerned that the judges are going to mishandle these cases and hand down rulings that are very unfavorable, either to industry or to consumers, and both of those would be very, very problematic. My name is Tracy Pearl, and I am a professor of law at Texas Tech University School of Law in Lubbock, Texas.
[00:01:27] Ryan: I don’t think we’re quite in a position where we should be making general laws about machine learning, because I don’t think we completely understand its full ramifications. Well, my name is Ryan Calo. I am a law professor at the University of Washington in Seattle. I think that government should officially accrue greater expertise, for example by refunding the Office of Technology Assessment. I’ve even argued for a standalone agency devoted to better understanding cyber-physical systems. [00:01:59] Christoph: The Office of Technology Assessment was an office of the United States Congress from 1972 to 1995. Its purpose was, as the name suggests, to assess what impact technology might have on society. For this purpose, it provided objective analysis of the complex scientific and technical issues of the late twentieth century. Despite protests, it was defunded. [00:02:22] Tracy: I watched the Mark Zuckerberg hearings in the United States with great interest last year, and I think the dynamic that you saw at play in the Senate hearings is exactly what you see in courts, which is you have younger, smugger people explaining to older people in authority technology that the older people really struggle to understand. I clerked for two federal judges when I graduated from law school, one at the trial level and one at the appellate level. And, you know, my trial court judge was still having his emails printed out and given to him by his assistant, and for as incredibly wise and as incredibly good a jurist as he was, he was not somebody, I think, who was maybe the right person to be judging really sophisticated matters of technology. And I would say that about a large number of judges; judges in the United States tend to be older. That’s sort of the nature of that career path. And so I think that we’ve seen that struggle, not with autonomous vehicles yet,
But in other areas of technology, judges really struggle to understand even just the nature of the dispute, much less reach a decision that makes a lot of sense. There’s less information available about jury pools, you know. Certainly there are young people serving on juries, but I have witnessed really complex patent cases involving really dense technological issues, you know, the composition of microprocessors, and I have just seen juries with completely glazed-over expressions for the entirety of those trials, and really questioned whether or not they were the right people to make the decision, whether the jury system is at all the correct milieu in which those issues should be dealt with. [00:04:04] Christoph: Pointing the finger at the incompetence of others is easy. I myself am not a computer scientist. Do you have any experience in programming? [00:04:14] Tracy: I have programmed a computer in the most minimal way possible. I mean, I took computer programming classes when I was in school, and I did basic things like HTML. But no, I am not a computer scientist by any stretch of the imagination. [00:04:28] Christoph: If you have little experience in programming, where do you get the information that enables you to form your legal opinions? [00:04:35] Tracy: Yeah, so the great thing about law and technology is that there are a lot of interdisciplinary conferences. The reason why over the last couple of years I have become less interested in attending law conferences is because I think it’s a lot of people who have the same knowledge base talking amongst themselves. And I’ve really become much more in favor of attending conferences like these, where there are computer scientists and engineers, and having those conversations, because I think this is sort of an existing problem in all areas of law, which is that you have judges, lawyers and lawmakers, and lawmakers tend to be lawyers, just like law professors.
And we’re all talking to each other and not talking as much to industry as we need to. And then you have industry that is lobbying lawmakers in a really self-interested way. And so I think what is sort of missing there is those kinds of honest, in-depth conversations about the nature of the issues that we have to confront. And that’s my hope. The legal world is really siloed, and the legal academic world is even more siloed. And so I hope to see more of those conversations happening and more collaborations happening across disciplines in this area. [00:05:43] Ryan: There has been a growing number of applications of machine learning to contexts like translation and so on. It feels to me, at least at the application layer, not at the development of the core techniques behind it, but at the level of what we can deploy today because we have the algorithms, the data and the computational power, that it is something that is becoming part of the mainstream, right? So what I’m thinking about is, for example, liability regimes for driverless cars, drone navigation, things like that. [00:06:22] Christoph: Liability refers to who will have to pay in case of an accident. The manufacturers of autonomous vehicles also struggle with liability implications. They could be held responsible when their vehicles misbehave or underperform. The problem is that the manufacturers cannot completely predict how their cars will perform in the real world. Naturally, they would not like to be liable for the damage their cars may cause. Last week, we already discussed several accidents in which autonomous vehicles failed and even killed a pedestrian. [00:07:01] Alan: These are really challenging problems. I mean, that is probably, I think, one of the most challenging problems in artificial intelligence.
This fact that these systems can become so complex and so experiential, right, the robot’s behavior can change with its experiences out there in the world. The Loyal Wingman Project, which was a project looked at by the United States Air Force, had these systems that wouldn’t learn on the fly but would learn through missions: it would run a mission, it would be debriefed on the mission, and it would include that in its future missions and rationale for behavior. It can certainly get far further out than the programmer ever intended. [00:07:47] Christoph: The real problem seems to be the complex environment. The reality on the road is far more complicated than current autonomous vehicles can handle. Just this week, Waymo announced that its car can now understand gestures from police officers managing traffic manually. Again, this development was announced only this week, which means that so far it would not have known what to do. While this form of traffic regulation is rare in most developed countries, it is far more common in developing countries. Moreover, in some countries, no traffic signs hold any truly regulatory power. They are merely suggestions. How could an autonomous vehicle operate in such a lively and organic flow of traffic? [00:08:35] Tracy: Yeah, I think just exactly what you said, which is that as good as programmers can be, they can’t possibly predict the complexity of the driving environment, right? Additionally, there are driving situations in which there are two bad options rather than two good ones. This is the famous trolley problem. Do you program the car to run into the mom with a stroller or to run into the group of six people? There is not a good choice there. [00:09:01] Christoph: If the legal system is unprepared, what could a solution be?
[00:09:05] Tracy: So my proposal is a victim compensation fund, and this is, I don’t want to say, a uniquely American phenomenon, but one that America, I think, has utilized more than other countries. A victim compensation fund is an alternative to the legal system: it creates a pool of money and makes it available to victims of a particular type of injury, and it tells those victims that it will compensate them for their injuries in exchange for them waiving their right to sue. We’ve used these in the United States to compensate, among other groups, the 9/11 victims, victims of the Deepwater Horizon oil spill, coal miners who suffer from black lung, and the victims of the Virginia Tech shooting. And I think that these kinds of funds offer a really interesting alternative for AV crash cases. So I’ve proposed creating a fund that would compensate people who were injured by fully autonomous vehicles and that would, in exchange, offer manufacturers some protection from liability exposure. And I think, additionally, we have to come to grips with the fact that autonomous vehicles are not going to have a zero percent risk, right? There is going to be some predictable amount of injuries or fatalities caused by autonomous vehicles. And importantly, that doesn’t mean that they’re a bad product or that somebody should be held liable. But it does create real issues in terms of liability exposure. So the analogy that I give here, and I give it in my paper, is childhood vaccines. Childhood vaccines are great. I’m a believer in childhood vaccines. They save tens of thousands of lives in the United States every year. But childhood vaccines also injure a predictable, unfortunately small number of children every year. And there’s nothing that we can really do. We can make them incrementally safer, but there will always be some number of children who have a bad reaction to a vaccine.
In the United States, the vaccine manufacturers were almost sued out of existence in the nineteen eighties and nineties, because every time a child was injured, even if it was one of those predictable injuries that we can’t eliminate from the vaccine protocol, the parents would turn around and sue the vaccine manufacturers. And for small companies in particular, the liability exposure of producing childhood vaccines simply became too high, and they stopped producing them. And we almost ran out of vaccines in the United States. And so we created a victim compensation fund that eliminated that problem. So now in the US you pay a tax on every childhood vaccine, and if your child is injured, you’re compensated by a fund. This may seem completely obvious to all of my European, Australian and apparently Kiwi counterparts, right? You’ve been doing this much better than we have in the United States. But I think for autonomous vehicles, we’re going to have the exact same issue. There will be a risk profile. People will be injured, but I think the net positive is going to be very significant. [00:12:00] Christoph: This sounds like an interesting idea, although it does seem more like a workaround. Tracy’s argument is that in the absence of any legal guidelines or laws we need to settle outside of the legal system. There is still another option available. Saudi Arabia gave the Sophia robot citizenship, and the European Union is also debating whether to grant personhood to robots. Could autonomous vehicles not become a legal entity that would be responsible for its own actions? [00:12:32] Alan: I don’t see how you could have a robot have legal status. I mean, the European Union has sort of hinted at this with the electronic personhood, right? But I don’t understand it, because you cannot punish a robot, right? You can only punish the individual that is behind the robot, the human.
But holding the system itself responsible makes no sense, because the system has no sense of punishment, no sense of loss. [00:13:02] Tracy: I know that there has been a lot of discussion about giving machines legal rights. There’s a book that I read called Robots Are People Too that was about just this issue, and some people have talked about this in terms of insurance: instead of making humans own insurance, why not make the car itself the insurable entity? Gosh, I don’t know that I have better answers than anybody else here in terms of how we define autonomy and whether and how it’s possible to make machines accountable for their own actions. It’s always presented as a sort of either-or: either machines have the full panoply of human rights, and they can, you know, go get a beer from the local store as long as they have a license, and do everything else that a human would do. And certainly I think that that reaches a level of absurdity. I think there are ways in which we could consider machines, from a legal perspective, a legal entity, such that the machine itself can be sued, or we could force the machine itself to carry insurance. And I think that actually creates a lot of interesting possibilities when we’re talking about compensating people for injuries. I also don’t know that rights rhetoric is maybe the right paradigm to use to address some of these questions. I mean, I think that when we talk about rights, it’s so squishy, and it’s subject to so many assumptions and so much existing law, that talking about rights in this context may actually create more problems than it solves. [00:14:32] Christoph: Personhood itself is a difficult term. From a legal perspective, what are the requirements for an entity to be considered a person? [00:14:39] Tracy: Oh goodness, that is like the hardest question. I mean, here’s the thing: we don’t have a great answer legally either.
If you look at the abortion jurisprudence that we have, you know, one of the most basic questions to ask about personhood, which is when it starts, is something that’s subject to great political debate and one that is still, I would argue, at least a sort of legally open question. So I don’t know that we have a legal answer. And it’s funny that we just sort of don’t even address that issue when we enter into a case until we’re compelled to, because we have some sort of super difficult question. [00:15:17] Christoph: As so often, the situation is more complex, since humans can also lose their citizenship and civil rights and can even be declared incompetent. At some point in our lives, we are also required to stop driving. Even Prince Philip surrendered his driving license after an accident at the age of ninety-seven. [00:15:39] Tracy: You know, one thing that states have done, I know in Hawaii, for instance: you renew your license every eight years, but after the age of seventy-two you have to renew it every two years, meaning that you have to undergo eye testing, that you have to be eyeballed by somebody in a position of authority who can then make that determination. You know, I think that should be standard in all states. I think that, you know, maybe past the age of sixty-five, you should have to go for more routine testing. And the presumption should increasingly be that you actually are not a safe driver rather than that you are. Now, you know, I’m going to get a huge amount of pushback from elderly people, and I certainly don’t want to insinuate that elderly people carte blanche should be taken off the roads, because there are people well into their eighties and nineties who are very safe drivers. But I think that with the growing elderly population in the U.S., this is going to be a really difficult question that we’re going to have to ask ourselves, not only legally but personally as well, when we’re dealing with our elderly parents. [00:16:37] Christoph: Besides age, there are other factors that might compromise our ability to drive. Certain medical conditions disqualify us from driving. What does it take to be declared an incompetent driver? [00:16:50] Tracy: So it actually varies from jurisdiction to jurisdiction; that’s competency law. And in most states, it’s actually surprisingly difficult to have an adult declared incompetent. We’ve encountered this actually in my own family. I have a relative who has frontal lobe dementia and who has very clear symptoms of it, and has changes in his personality, and we cannot yet get him declared incompetent, because he’s still able to function fairly normally in that he can dress himself, he can bathe himself. His decision-making abilities are very compromised, but that’s actually not enough under the law of the state in which he’s living to have him declared incompetent. He also drove a car, and that’s been a real issue. We had to step in and actually take his car away, not so much because he had lost his driving abilities, actually, but because his relationship to drinking changed as a result of the dementia. And so he went from somebody who, you know, didn’t drink much at all to somebody who drank quite copious amounts and then was making the decision to get behind the wheel. You know, competency laws are really tricky, because if you make it too easy to have somebody declared incompetent, it could become a form of coercion, right? And you can have people who are fully functional institutionalized against their will. And we went through that in the nineteen sixties and seventies in the United States. If you’ve read Ken Kesey’s... To Kill a Mockingbird... no, that’s not it. Oh, my goodness, what’s the name of the book I’m looking for?
One Flew Over the Cuckoo’s Nest... it involved a bird. On the other hand, if you make it too difficult to have people declared incompetent, you have situations exactly like the one in which my family finds itself, where you have people who really should have supervision, who really should lose their driver’s license and any number of other things, but who just don’t reach that level. The driver’s test then doesn’t become just, you know, can you parallel park? Do you know how to handle a four-way stop? But: can you drive better than an autonomous vehicle? And I think the answer there is one hundred percent of the time going to be no, because an autonomous vehicle, I know that Waymo’s prototype scans I think one point seven million spots per second, and so they’ve already far exceeded the capabilities of human perception. And so I just see humans losing or failing that test every single time, no matter how good of a driver they are. For as high a number of accidents as we have per year, and motor vehicle deaths continue to be a leading cause of death for most age groups, we’re actually pretty good given the number of miles that we drive every year. We’re pretty sophisticated at handling machinery, and most of us, you know, in our day-to-day lives encounter what could otherwise be tricky driving problems with really no problem at all. But that doesn’t mean that we deserve a chance to be on the road when there’s a vehicle that’s ninety-four percent safer than we are. You know, at that point, I think that the right result, legally and morally, is to hand our keys over to the robots. [00:20:03] Ryan: I believe that we should be in a process now that culminates in the near term in legislation, because I don’t think that AI is truly transformative unless it leads to changes to our law and legal institutions. But I think it may be premature to pass general rules about machine learning at this time.
[00:20:23] Christoph: The opinions on when we should hand our keys over to autonomous vehicles, and when we should engage in the process of creating the legal framework for their operation, do seem to differ. Researchers are more or less optimistic about when autonomous vehicles will be safer than human drivers. In the meantime, we need to be careful not to trust autonomous vehicles too much. Here is Alan Wagner from the Pennsylvania State University. [00:20:51] Alan: There’s a wide variety of different types of social interactions. And so some of the things we’ve tried to do is have robots communicate that they shouldn’t be trusted at some point. We found this extremely difficult. To give you an example, we had a sort of maze-following experiment where we would put a person in a maze and we would say, you know, you can follow the robot or even try to find your own way out, right? This was an early experiment, and we just wanted to see how the robot could communicate that it was broken. And so initially we had the robot move randomly around, and the experiment was supposed to take about ninety seconds. But we found that some people would follow the robot for fifteen minutes. And so we looked at another way: what if the robot just goes around an obstacle, like in a loop? But we still found that people would follow it for fifteen minutes. Then, finally, we would have the robot run into a wall, fall over and just stay there. And then we found some people who would stand by the robot, waiting, believing that the robot was going to tell them something important. So we’ve looked at a variety of different ways that robots can try to communicate “at this moment, you shouldn’t trust me”. It’s very difficult to do, but it comes back to this issue that people are nuanced. And so even as an engineer, you’re looking for that one solution.
And there isn’t that one solution. You need to have a variety of different solutions that depend on the actions the person is taking at the moment. With a lot of the first electric appliances, people would literally unplug them every night just to plug them back in in the morning, because they were afraid that fires would start from these devices. And in fact, there were a number of fires with the first electric devices; it wasn’t totally unfounded. So the point being that as we develop better models of the robots we interact with, without question our trust will probably become more calibrated to the actual system and its behaviour. But the challenge becomes that there will always be people that haven’t interacted with the robot. And so while they’re trying to develop that relationship, that understanding of the system, what are the risks that they’re taking on that are maybe not well founded? The best examples are the ones we’ve recently seen: watching Harry Potter while your autonomous vehicle is flying along. Now, these are rare events, right, but it’s this belief, and we see it in our data a lot, that people believe that the systems are flawless, that they cannot fail. Or they believe that the programmers or the people or the companies that created them are largely flawless, because they’ve interacted, for example, with Google, or had cell phones that have worked in these ways. [00:23:46] Christoph: It is not only the humans that need to be careful when trusting machines; the machines also need to be careful about when to trust humans. [00:23:57] Alan: There are many situations where an autonomous system should potentially not trust a human. My adviser was Ron Arkin. He has looked at a lot of military situations, and in these situations, people can get very emotional in war situations.
And so you wouldn’t want a human soldier to tell a robot to commit a massacre. In that situation, the autonomous system should say no; it should have the right to say no and not accept the command to commit that atrocity. Now, I’m assuming a lot, I mean a lot of perception, a lot of action, a lot of behavior there, but we afford this to our soldiers in the West, right? They have the right to refuse commands that are immoral. And so if we’re going to field autonomous robotic systems that are soldiers, they have to have that same right. But it brings up these moral issues that are so critical and that we have no answer to. And if people like us, who really know the technology well, can’t even begin to answer these questions, who can? [00:25:04] Christoph: Autonomous weapon systems: another upcoming topic at the Human-Robot Interaction podcast. Join me again next week when we discuss humans, robots and everything in between.
Autonomous vehicles are one of the most interesting topics in human-robot interaction right now. While they are not humanoid in shape, they are among the biggest, deadliest and most promising robots available to the general public. They have the potential to dramatically change how we get from point A to B and what infrastructure we need. But they also raise many ethical issues, besides a long list of technical challenges. Driving a car is inherently dangerous once you drive at a practical speed. An autonomous vehicle needs to be able to deal with all sorts of traffic situations, weather conditions and even unpredictable human operators. This is an enormous challenge. I talked with Professor Robert Sparrow from Monash University, Professor Tracy Hresko Pearl and Professor Alan R. Wagner about the future of transportation with autonomous vehicles. The core question is: when will it become illegal for humans to drive cars? ISSN 2703-4054 Relevant links:
When human beings are like drunk robots: driverless vehicles, ethics, and the future of transport by Robert Sparrow
Compensation at the Crossroads: Autonomous Vehicles & Alternative Victim Compensation Schemes by Tracy Pearl
The Moral Machine Experiment by Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon and Iyad Rahwan

Transcript: Welcome to the Human-Robot Interaction Podcast. I’m your host, Christoph Bartneck, and I’m fascinated with how humans and robots interact with each other. In this series we will explore this relationship with philosophers, engineers, psychologists and artists. We will look into the ethical implications of this new technology and how we can make it work for humans. We will shine light on the technologies that make robots intelligent and useful. We will also look at the people behind the science. Why are they interested in human-robot interaction? What career path did they take?
And what does it mean to be a postgraduate student in this area? This is a lot of ground to cover. So let’s get started with one of the most interesting topics in human-robot interaction right now: autonomous vehicles. While they are not humanoid in shape, they are one of the biggest, deadliest and most promising robots available to the general public. They have the potential to dramatically change how we get from point A to B and what infrastructure we need. But they also raise many ethical issues besides a long list of technical challenges. In France, Germany and England, terrorists used vehicles as weapons and drove them into crowds of people, killing and injuring many. At the same time, hackers were able to remotely control a Jeep Cherokee. This does raise considerable security concerns. Driving a car is inherently dangerous once you drive at a practical speed. The autonomous vehicle needs to be able to deal with all sorts of traffic situations, weather conditions and even unpredictable human operators. This is an enormous challenge. Despite the recent advances that Waymo and practically every major car manufacturer have made, there have already been several fatalities involving autonomous vehicles. In 2018 an Uber car driving in autonomous mode even killed a pedestrian crossing the street. Professor Rob Sparrow from Monash University in Australia wrote an article called “When Human Beings Are like Drunk Robots: Driverless Vehicles, Ethics and the Future of Transport”. I talked to Robert in Sydney during a symposium on ethics in robotics and AI. Rob, how do you see people interacting with autonomous vehicles? [00:03:02] Robert: So what’s of interest here is how the system as a whole, the sort of human plus robot or human plus AI system, will function when the machines are less than perfect. You know, when one is trying to reproduce human performance at some task with an AI or a robot.
Usually people can get a fair way, but they can’t always produce perfect task performance. In fact, in some contexts, you might wonder whether there’s any such thing as perfect driving performance. How does the system as a whole operate when the machine only works part of the time? [00:03:49] Christoph: Part of the time? Are we not talking about autonomous vehicles? Are they not supposed to drive all the time? The sad news is that today’s autonomous vehicles are unable to drive all the time. Governments have specified levels of autonomy, ranging from zero (no driving automation) to five (full driving automation), to better manage the legal aspects of today’s autonomous vehicles. Level zero represents traditional vehicles without any automatic functionality. At level one, the vehicle has one type of automatic functionality, for example, braking automatically when encountering an obstacle. At level two, the vehicle can perform both braking and accelerating functions as well as changing lanes. However, the driver has to monitor the system at all times and be ready to take control whenever necessary. For example, all Tesla vehicles are officially considered level two automation. At level three, the driver does not need to monitor the system at all times. Under certain circumstances, the system can work autonomously. The system gives the driver time, for example, ten seconds, before handing back control. In 2018 the Audi A8 claimed to be the first car capable of level three automation. At level four, the vehicle can perform all driving functions under standard circumstances. Non-standard conditions would include bad weather, such as snow or heavy rain. Finally, at level five, the vehicle can perform all driving functions in all circumstances. But now back to Rob Sparrow. [00:05:37] Robert: And this is actually a really hard problem in systems design, because one thing we know is that human beings come to rely on machines very quickly.
They over-rely on machines. And so the sort of naive solution, which is for the human being to take over the task when the machine is failing, actually works much less effectively than people might think. Because when the driverless vehicle, for instance, says, beep, beep, driving context exceeds specifications, please take control, the person who is nominally in charge of the vehicle may not be paying attention, and that is likely to generate accidents. [00:06:25] Alan: The problem is that humans are just not cognitively built to be able to re-engage the autonomous vehicle on very short notice in very dynamic situations. We are not meant for that; we shouldn’t be asked to. And simply because we’ve bought a Tesla or whatever car and have agreed to the terms that we probably never read should not imperil us. [00:06:45] Robert: I mean, this sort of problem of what’s called automation bias is quite well known, has been studied for a long time and has really, I guess, now led some of the people doing driverless car research to think that these systems won’t be safe if they rely on human beings at all. That you simply cannot expect human drivers to take control in a short amount of time in a dangerous situation. And so really, the performance you need from a machine needs to be at least as good as the performance you’d get out of a human driver. [00:07:28] Christoph: In 2018 we had several accidents with autonomous vehicles that demonstrated that their performance is not yet where we want it to be. First, in March, an autonomous Uber car in Tempe, Arizona killed Elaine Herzberg as she was crossing the road at night. The Uber taxi drove in autonomous mode and the backup human driver, Rafaela Vasquez, failed to take back control in time. She was distracted watching television on her phone. Still, the vehicle had detected Elaine around six seconds prior to impact but only decided to trigger the emergency brakes around one second prior to impact.
Unfortunately, the emergency braking system was disabled that day, as “emergency braking manoeuvres are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior”, according to Uber. The second accident, on October 19th, had a happier but still very enlightening result. The driver of a Waymo car took back control to avoid a car that had cut into its lane. The driver changed lanes, unaware that a motorcyclist was driving in the other lane. The motorcyclist was hurt but survived the crash. Waymo claimed that its simulation, based on the data gathered from the vehicle, showed that the car would have avoided the accident if it had been left in control. [00:08:58] Christoph: What are your thoughts on these accidents, Rob? [00:09:01] Robert: Well again, I think that’s entirely predictable. Systems involving human beings and machines are quite complicated, and you can get both circumstances. You can get cases where people over-rely on the machine to the detriment of the performance of the system. But you can also get the situation where people have an exaggerated sense of their own abilities and take over performance in ways that degrade the performance of the system as a whole. I think that’s actually an early example of a scenario that we predict in our paper called “When human beings are like drunk robots”. Which is that you can imagine the motorcyclist in that case actually taking legal action against the human being who took control of the vehicle, pointing out that they wouldn’t have been injured except for that person’s actions. And that is, I think, the sort of scenario that is likely to eventually render driving illegal: once the machines outperform human beings in the driving task. And I think they need to be able to do that in order for it to be ethical to put them on the roads. Once they do achieve that level of performance, then human beings taking control will generate accidents.
And where those accidents involve third parties, eventually people are going to say, Look, you’ve got to stop taking control of these vehicles; when you drive, you kill people. And so I think the future of driverless vehicles is actually cars with no steering wheels. And then I think that has some really interesting implications for things like the psychology of vehicle ownership, how people relate to other people in vehicles and, indeed, the entire nature of our urban infrastructure. [00:10:58] Christoph: When we slowly stop driving ourselves and let the machines take care of us, we are likely to also become less skilled drivers. How will this transition period work out? [00:11:11] Robert: So it is a really hard problem. And, you know, in some ways, the easiest solution would be to move very rapidly to a fully driverless automated driving system. That is actually a much easier performance scenario for driverless vehicles, if they didn’t have to worry about the unpredictable behaviour of human beings. I think politically, that’s unlikely to happen. I guess I think that if the machines aren’t safer, there really are three possibilities here. One is that we introduce driving systems that do fail, and we hope that human beings can take control. For the reasons we’ve already discussed, I think that’s actually quite dangerous: expecting human beings to continue to concentrate and pay attention to the road situation whilst not actually being in control of the vehicle is really unlikely to work. So if you’re not going to be able to rely upon people in most of the contexts in which accidents might occur, I think the performance of the driverless vehicle has to be better than the average human driver before you should be allowed to put it on the road. I mean, if you said, Look, the average human driver has so many accidents, my car has more accidents than that, we shouldn’t accept that vehicle on the road.
If you can show that your vehicle outperforms the average human driver, then it seems to me that it’s unethical for people to drive in that circumstance. It’s actually slightly more complicated than that, because driver performance varies quite a bit. Most accidents are caused by a small minority of drivers, young men, sometimes people in advanced old age, so most drivers are better than the average driver. Which means that actually, you might have a situation where replacing the entire vehicle fleet with driverless vehicles would reduce the rate of accidents but would still expose most drivers to a slightly higher risk of death. That’s quite a classical philosophical problem in how we make trade-offs between the value of the total aggregate consequences, or utility, and our treatment of individuals. My suspicion is that once vehicles can outperform the average driver, they will very quickly move past that to outperform any human driver, and then I think it will be unethical for human beings to drive. I think politically it’ll be quite hard for governments to legislate to prohibit driving, not least because it will require everybody to buy new cars. But I think it is quite plausible to insist that from some particular date, you’re not allowed to have a steering wheel in a car that’s intended for use on the public roads. When that date will occur, I’m really not sure. It’s still bizarrely hard for someone not at the heart of this research programme of driverless vehicles to get a real sense of the capacity of these systems. I’ve been to lots of driverless vehicle conferences now. I’ve seen multiple people with PhDs from Stanford and MIT, and some of them say, Look, we’re just two years away from having a vehicle that is safer than the average human driver. Others say it’ll be another fifty years before these cars are on the road.
As a philosopher, I’m not especially well qualified to make that judgement, but I am confident that if they’re not safer than human beings, they shouldn’t be on the road. Once they are safer than human beings, it’ll be unethical for us to drive. [00:15:20] Christoph: Not everybody agrees that driving a car should become illegal. Here’s Alan Wagner from the Pennsylvania State University. [00:15:29] Alan: So I sort of feel that we should have a right to be able to drive. We should have a right to take certain risks, understanding that there are risks. Autonomous vehicles, or any type of robot, shouldn’t be used as a method to prevent us from taking on these risks. You could imagine, and this has been written about in science fiction, right, these robots that sort of lock you into your home to prevent you from any kind of fall or getting sick or any kind of danger that comes to you. As human beings we should have our own autonomy, and driving may be one of those things which we should retain the right to do. [00:16:09] Tracy: As a law professor, I would push back on that and say that you don’t have a legal right to drive. You have a desire to drive. But at least in the United States, you have no legal right to do so. The state retains its right to revoke your licence for any number of reasons. You know, my hope is that private driving tracks will become a very popular recreational destination in short order. So anybody looking for new investments, I would put you there. [00:16:41] Robert: It’s important to acknowledge that the moment you get on the roads in a car, you place the lives of pedestrians at risk, and the faster you drive, the more risk you place people under. So, as a society, when we’ve accepted that people should be allowed to drive at one hundred kilometres an hour rather than thirty kilometres an hour, we’ve already said we are willing to accept a number of fatalities.
You could reduce the road toll to almost zero by simply speed-limiting all cars to thirty kilometres an hour. And then we would live in a society with a very, very, very low road toll. But it would take a long time to get to work. I mean, it’s not just engineers building driverless vehicles that are trading off human lives for the performance of the system. That’s absolutely part of the road industry. [00:17:43] Alan: Many people die on the roads every year. Certainly in America, I think it’s sixty-five to seventy thousand a year. We could eliminate all those deaths, right? We could force everyone to wear a helmet. We could have roll cages in every car, and we could set the speed limits to five miles an hour. But we choose not to. We choose to sacrifice sixty-five to seventy thousand people a year for the sake of expediency, but also because we feel like we should have the autonomy to take some of these risks. And in many ways this comes down to a public safety versus individual rights debate. It seems like a slippery slope, arguing that for safety’s sake, you should take away rights. You could also argue that for safety’s sake, you should not be able to eat cheeseburgers, because they’re fattening, they clog your arteries and they cause greater health care risks. They might even be more dangerous than driving in a car to a certain place. But I would argue that you should have the right to choose the type of food that you want, even if it’s bad for you, just as you should have the right to choose how you arrive at a location. [00:18:53] Tracy: I think we will get to the point. Not soon, not in the next ten years. But I think, you know, thirty years from now, this is going to be a very interesting political debate.
I think that the risk introduced by a single driver onto a public road is going to be so comparatively high that, from a regulatory perspective, I think we’re gonna be left with little choice but to ban humans from the road. That’s the right thing to do from a risk profile perspective. Politically, I think this might be a very unpopular thing to do, and the joke that I make in America is that people who have their keys might be like gun owners, right, saying, You’re gonna have to pry my human-driven car from my cold, dead hands. I’m optimistic, though, that by the time we reach that point, the nature of driving and of car ownership will have changed so dramatically that it won’t be the kind of horrible scenario that people currently envision. Look, I think that my children are going to have a very different relationship to cars and driving than people my age do. I think that they’ll be raised in a world with driverless cars and won’t be so attached to their keys. My name is Tracy Pearl, and I am a professor of law at Texas Tech University School of Law in Lubbock, Texas. [00:20:11] Robert: As for the future of transport, there’s a vision that I would like to see realised, and then there’s a vision that I, in some ways, suspect might be realised, and they’re quite different visions. So I think the best possible outcome for this technology is to use it to solve what’s called the last mile transport problem. Rail systems are very efficient movers of people into a central location, so most people’s commute takes them into a city or another place where a lot of people are working, and there’s often a train for part of the way. But getting from your house to the station and getting from the station in the city to your office block, those require walking, and, you know, maybe it’s raining, maybe you can’t afford the time, so people tend to drive, and you see these cities where everybody is doing the same commute along the freeway.
It would be much more efficient if people were in buses or on trains. Driverless vehicle technology would enable a scenario where you had essentially fleets of driverless minibuses that people interacted with through an app on their phone. They’d say, Look, I need to be at the railway station by nine AM, here’s where I live, and an autonomous vehicle would come and pick them up, along with, I don’t know, four other people in their neighbourhood along the route, take them to the railway station, they’d commute on the train into town and, if necessary, repeat the same process at the other end. And that would essentially reduce the number of vehicles on the road. It would really change urban infrastructure massively, because you wouldn’t need private motor vehicles, for instance. All houses could lose their garages; shopping malls and shopping centres could lose all the car parking space around them. There’s a very attractive future where people spend less time in vehicles and there are fewer cars on the road. Unfortunately, I believe that scenario will only come about if governments regulate, because the other scenario is, you might still lose the private motor vehicle and have people doing most of their travel in autonomous vehicles, so something like Uber, but with no driver in the car. Drop a pin on your phone, a car comes and picks you up, but drives you by yourself to your workplace. And so we’ve got the same number of vehicle miles being driven, though there might be fewer vehicles on the roads, but environmentally, that scenario is much less attractive. And indeed, there are likely to be many more trips occurring as well, because, for instance, there are a lot of people who can’t drive at the moment. And part of the attraction of driverless vehicle technology is that you could be blind, or you could be a young child, and safely travel in an automobile.
All the people who don’t have driver’s licences could take car trips, so you expand the population of people who can potentially travel. You also make things possible like, you know, I want a pizza and I can have it delivered by an autonomous vehicle. Unfortunately, I think that scenario is in some ways more likely if there’s a policy vacuum in this area. [00:24:26] Christoph: In New Zealand, we have the problem that many tourists are unfamiliar with driving on the left side of the road, and this does cause some accidents. Autonomous vehicles could solve this problem and at the same time enable the tourists to enjoy the scenery. This could be a great business model for rental companies. Lyft and Uber are working in this direction, but it seems odd that the major rental car companies do not yet seem to have engaged with this new technology. But this is only a small proportion of the road traffic. Autonomous vehicles have the potential to dramatically change how we relate to cars, ownership, and transportation. [00:25:13] Tracy: You know, there’s some really interesting data out of the United States showing that young people, teenagers, are now actually getting their licences at much lower rates than they did before. I grew up in Hawaii, and fifteen was the age at which you could get your licence. I remember, for my friends and I, the day we turned fifteen, we went down to our department of motor vehicles and got our licences. We don’t see that now with teenagers, because with the availability of things like Uber and Lyft, it’s just so much easier now to secure transportation for yourself. You can do it from an app on your phone. So I think that that’s likely to be the case. I think it’s going to become economically inefficient to own your own vehicle. I think that’s one of the many societal changes that’s going to be ushered in by an era of autonomous vehicles. I think we won’t need parking lots anymore.
We won’t need to structure our urban centres in the way that we do. I think that’s really exciting, and I think people don’t realise the full spectrum of changes that are coming. [00:26:13] Christoph: It seems absolutely necessary for us as a society to have reliable and valid data on the progress that autonomous vehicles make. Tesla and others report statistics that could be interpreted as showing that their cars already drive more safely than an average human being. But we have to keep in mind that the test drives only happen under conditions in which the companies feel safe for their cars and other traffic participants. These are usually country roads or freeways; inner cities and bad weather conditions are often avoided, and hence the available statistics can be misleading. For a country to pass legislation that would dramatically alter how we relate to cars and traffic, it does require considerable societal discussion and consensus building. Recently, the results of a large-scale study on the moral dilemmas around autonomous vehicles were published in Nature. Several million participants made choices between two unfavourable options. The autonomous car would, for example, either run into a wall and kill its driver, or run into a group of children. Such scenarios are often referred to as trolley problems. By systematically varying factors, the researchers were able to shed light on the moral preferences of people around the world. They do, for example, tend to spare the young over the old and the many over the few. But the study also showed differences in how participants treated men and women. [00:27:53] Robert: I guess one thing to keep in mind about the Moral Machine study is that, from a philosophical perspective, and actually the authors are conscious of this, ethics both shouldn’t be and isn’t a popularity contest. And you can’t really settle ethical questions by polling people.
So, for instance, you can imagine that if you had performed a similar exercise, I don’t know, one hundred and fifty years ago, then no number of African Americans would have outweighed the life of one white person, and no one would say, Well, that’s the behaviour we should instantiate in our machines. I mean, it’s useful, even vital, to see what people do think, and philosophers are increasingly interested in doing empirical work on people’s intuitions. But I would really resist the suggestion that we can settle ethical questions by simply polling people. You know, philosophy and ethics are about more than that. They’re about thinking deeply about matters and considering arguments that mightn’t occur to most people. Now, having said that, you are absolutely right that in public policy, people do make decisions about how much to spend, for instance, to ensure that people don’t die on construction sites. Every skyscraper you see in a city, people have been killed building it. You can reduce the number of deaths through policy choices. It’s just that those policy choices are either contrary to the interests of wealthy and powerful people, or expensive, or both. So yes, at one level we do already place a value on human life. There are good reasons, though, for not wanting to extend that and make it explicit. There is something about the fundamental moral equality of every human being that I think we need to hold on to. And we need to resist the idea that some lives are worth more than others. So I would be very hostile to suggestions that we should allow people to pay their way out of accidents or that we should be privileging certain classes of citizens. Those judgments are notoriously unreliable and bigoted. For instance, in lots of societies, it’s pretty clear that people value the lives of women less than the lives of men. And that’s not something I think we should be building into our automated systems.
[00:30:32] Tracy: Yes. So, I mean, look, I’m with you. I’m a torts professor, so I am very risk averse, and so I would be in favour of taking humans off the road next year. But politically, in the United States, I just can’t imagine the electorate being willing to put down their keys. I think ten years from now it’s going to be slightly different. Twenty years from now, maybe a lot different. It will happen, and I agree with you that it should happen sooner rather than later. But that’s going to be a decision not for professors to make, unfortunately. [00:31:07] Christoph: Autonomous vehicles will play an important role in our society, and there’s so much more to know and to talk about. So join me again next week for the next episode of the Human-Robot Interaction podcast, focusing on autonomous vehicles. Thank you for listening.
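The six levels of driving automation described in this episode can be summarised as a small lookup table. The sketch below paraphrases the episode's descriptions, loosely following the SAE J3016 taxonomy; the dictionary wording and the `driver_must_monitor` helper are illustrative, not part of any official standard.

```python
# Driving automation levels as described in the episode
# (loosely following the SAE J3016 taxonomy; wording paraphrased).
AUTOMATION_LEVELS = {
    0: "No driving automation: a traditional vehicle.",
    1: "Driver assistance: one automatic function, e.g. braking "
       "automatically when encountering an obstacle.",
    2: "Partial automation: braking, accelerating and lane changes, "
       "but the driver must monitor the system at all times.",
    3: "Conditional automation: autonomous under certain circumstances; "
       "the driver is given time, e.g. ten seconds, before handover.",
    4: "High automation: all driving functions under standard conditions, "
       "but not in bad weather such as snow or heavy rain.",
    5: "Full automation: all driving functions in all circumstances.",
}


def driver_must_monitor(level: int) -> bool:
    """At levels 0-2 the human must watch the road at all times;
    from level 3 upward the system takes over that responsibility."""
    if level not in AUTOMATION_LEVELS:
        raise ValueError(f"unknown automation level: {level}")
    return level <= 2
```

On this sketch, a level-two car such as the Teslas mentioned in the episode still requires constant human monitoring, while the handover obligation only relaxes from level three onward.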
Podcast Details

Created by
Christoph Bartneck
Podcast Status
Potentially Inactive
Mar 6th, 2019
Latest Episode
Feb 18th, 2021
Release Period
Avg. Episode Length
39 minutes
