Episode from the podcast Human Robot Interaction

Autonomous Vehicles – Legal Concerns

Released Monday, 11th March 2019
We continue our discussion on autonomous vehicles and focus in this episode on the legal aspects. In particular, the issues around liability are a major obstacle for the wide adoption of autonomous vehicles. Who will be responsible for damages caused by autopilots? Will it be the driver, the manufacturer of the car, or maybe even the car itself? I talked with Professor Tracy Hresko Pearl, Professor Ryan Calo and Professor Alan R. Wagner about what needs to happen in our legal system to be able to deal with the liability around autonomous vehicles.

ISSN 2703-4054

[00:00:00] Tracy: I’m convinced that the U.S. legal system is not currently prepared to handle autonomous vehicle crash cases. So, crashes that are caused by some sort of hardware or software malfunction rather than by driver error.

[00:00:15] Christoph: It is alarming that the leading country in testing autonomous vehicles is legally unprepared for this disruptive technology. Last week we already started talking about how autonomous vehicles might change our society. This week, we will try to better understand the legal and political implications that these new forms of transportation might introduce.

[00:00:48] Christoph: So what exactly is the problem in the U.S. legal system?

[00:01:02] Tracy: We have no case law. We have very minimal statutory law in the United States right now, and so I’m concerned that judges are going to mishandle these cases and hand down rulings that are very unfavorable, either to industry or to consumers, and both of those would be very, very problematic. My name is Tracy Pearl, and I am a professor of law at Texas Tech University School of Law in Lubbock, Texas.

[00:01:27] Ryan: I don’t think we’re quite in a position where we should be making general laws about machine learning, because I don’t think we completely understand its full ramifications. My name is Ryan Calo. I am a law professor at the University of Washington in Seattle. I think that government should officially accrue greater expertise, for example by re-funding the Office of Technology Assessment. I’ve even argued for a standalone agency devoted to better understanding cyber-physical systems.

[00:01:59] Christoph: The Office of Technology Assessment was an office of the United States Congress from 1972 to 1995. Its purpose was, as the name suggests, to assess what impact technology might have on society. For this purpose, it provided objective analysis of the complex scientific and technical issues of the late twentieth century. Despite protests, it was defunded.

[00:02:22] Tracy: I watched the Mark Zuckerberg hearings in the United States with great interest last year, and I think the dynamic that you saw at play in the Senate hearings is exactly what you see in courts, which is that you have younger, smugger people explaining technology to older people in positions of authority who really struggle to understand it. I clerked for two federal judges when I graduated from law school, one at the trial level and one at the appellate level. And, you know, my trial court judge was still having his emails printed out and given to him by his assistant, and for as incredibly wise and as incredibly good a jurist as he was, he was not somebody, I think, who was maybe the right person to be judging really sophisticated matters of technology. And I would say that about a large number of judges. Judges in the United States tend to be older; that’s sort of the nature of that career path. And so I think that we’ve seen that struggle, not with autonomous vehicles yet, but in other areas of technology: judges really struggling to understand even just the nature of the dispute, much less reach a decision that makes a lot of sense. There’s less information available about jury pools. Certainly there are young people serving on juries, but I have witnessed really complex patent cases involving really dense technological issues, you know, the composition of microprocessors, and I have seen juries with completely glazed-over expressions for the entirety of those trials, and really questioned whether or not they were the right people to make the decision, or whether the jury system at all is the correct milieu in which those issues should be dealt with.

[00:04:04] Christoph: Pointing the finger at the incompetence of others is easy. I myself am not a computer scientist. Do you have any experience in programming?

[00:04:14] Tracy: I have programmed a computer in the most minimal way possible. I mean, I took computer programming classes when I was in school, and I did basic like HTML and things like that. But no, I am not a computer scientist by any stretch of the imagination.

[00:04:28] Christoph: If you have little experience in programming, where do you get the information that enables you to form your legal opinions?

[00:04:35] Tracy: Yeah, so the great thing about law and technology is that there are a lot of interdisciplinary conferences. The reason why, over the last couple of years, I have become less interested in attending law conferences is because I think it’s a lot of people who have the same knowledge base talking amongst themselves. And I’ve really become much more in favor of attending conferences like these, where there are computer scientists and engineers, and having those conversations. Because I think this is sort of an existing problem in all areas of law, which is that you have judges, lawyers and lawmakers, and lawmakers tend to be lawyers, just like law professors. And we’re all talking to each other and not talking as much to industry as we need to. And then you have industry that is sort of lobbying lawmakers in a really self-interested way. And so I think what is missing there is those kinds of honest, in-depth conversations about the nature of the issues that we have to confront. And that’s my hope. The legal world is really siloed, and the legal academic world is even more siloed. And so I hope to see more of those conversations happening and more collaborations happening across disciplines in this area.

[00:05:43] Ryan: There has been a growing number of applications of machine learning to contexts like translation and so on, and it feels to me that, at least at the application layer — not the development of the core techniques behind it, but at the level of what we can deploy today, because we have the algorithms and the data and the computational power — it is something that is becoming part of the mainstream. So what I’m thinking about is, for example, liability regimes for driverless cars, drone navigation, things like that.

[00:06:22] Christoph: Liability refers to who will have to pay in case of an accident. The manufacturers of autonomous vehicles also struggle with liability implications. They could be held responsible when their vehicles misbehave or underperform. The problem is that the manufacturers cannot completely predict how their cars will perform in the real world. Naturally, they would not like to be liable for the damage their cars may cause. Last week, we already discussed several accidents in which autonomous vehicles failed and even killed a pedestrian.

[00:07:01] Alan: These are really challenging problems. That is probably, I think, one of the most challenging problems in artificial intelligence: the fact that these systems can become so complex and so experiential, right, that the robot’s behavior could change with its experiences out there in the world. The Loyal Wingman project, which was a project looked at by the United States Air Force, had these systems that wouldn’t learn on the fly, but would learn through missions: it would run a mission, it would be debriefed on the mission, and it would include that in its future missions and rationale for behavior. It can certainly get far further out than the programmer ever intended.

[00:07:47] Christoph: The real problem seems to be the complex environment. The reality on the road is far more complicated than current autonomous vehicles can handle. Just this week, Waymo announced that its car can now understand gestures from police officers managing traffic manually. Again, this development was announced only this week, which means that so far it would not have known what to do. While this form of traffic regulation is rare in most developed countries, it is far more common in developing countries. Moreover, in some countries, traffic signs hold no truly regulatory power. They are merely suggestions. How could an autonomous vehicle operate in such a lively and organic flow of traffic?

[00:08:35] Tracy: Yeah, I think just exactly what you said, which is that as good as programmers can be, they can’t possibly predict the complexity of the driving environment, right? Additionally, there are driving situations in which there are two bad options rather than two good ones. This is the sort of famous trolley problem. Do you program the car to run into the mom with a stroller or to run into the group of six people? There is not a good choice there.

[00:09:01] Christoph: If the legal system is unprepared, what could a solution be?

[00:09:05] Tracy: So my proposal is a victim compensation fund, and this is, I don’t want to say a uniquely American phenomenon, but one that America, I think, has utilized more than other countries. A victim compensation fund is an alternative to the legal system: it creates a pool of money and makes it available to victims of a particular type of injury, and it tells those victims that it will compensate them for their injuries in exchange for them waiving their right to sue. We’ve used these in the United States to compensate, among other groups, 9/11 victims, victims of the Deepwater Horizon oil spill, coal miners who suffer from black lung, and the victims of the Virginia Tech shooting. And I think that these kinds of funds offer a really interesting alternative for AV crash cases. So I’ve proposed creating a fund that would compensate people who were injured by fully autonomous vehicles and that would, in exchange, offer manufacturers some protection from liability exposure. And I think, additionally, we have to come to grips with the fact that autonomous vehicles are not going to have a zero percent risk, right? There is going to be some predictable amount of injuries or fatalities caused by autonomous vehicles. And importantly, that doesn’t mean that they’re a bad product or that somebody should be held liable. But it does create real issues in terms of liability exposure. So the analogy that I give here, and that I give in my paper, is childhood vaccines. Childhood vaccines are great. I’m a believer in childhood vaccines. They save tens of thousands of lives in the United States every year. But childhood vaccines also injure a predictable, fortunately small, number of children every year. And there’s nothing that we can really do. We can make them sort of incrementally safer, but there will always be some number of children who have a bad reaction to a vaccine.
In the United States, the vaccine manufacturers were almost sued out of existence in the nineteen eighties and nineties, because every time a child was injured, even if it was one of those sort of predictable injuries that we can’t eliminate from the vaccine protocol, their parents would turn around and sue the vaccine manufacturers. And for small companies in particular, the liability exposure of producing childhood vaccines simply became too high, and they stopped producing them. And we almost ran out of vaccines in the United States. And so we created a victim compensation fund that eliminated that problem. So now in the U.S., you pay a tax on every childhood vaccine, and if your child is injured, you’re compensated by a fund. This may seem completely obvious to all of my European, Australian and apparently Kiwi counterparts, right? You’ve been doing this much better than we have in the United States. But I think for autonomous vehicles, we’re going to have the same exact issue. There will be a risk profile. People will be injured, but I think the net positive is going to be very significant.

[00:12:00] Christoph: This sounds like an interesting idea, although it does seem more like a workaround. Tracy’s argument is that, in the absence of any legal guidelines or laws, we need to settle outside of the legal system. There is still another option available. Saudi Arabia gave the Sophia robot citizenship, and the European Union is also debating whether to grant personhood to robots. Could autonomous vehicles not become legal entities that would be responsible for their own actions?

[00:12:32] Alan: I don’t see how you could have a robot have a legal status. I mean, the European Union has sort of hinted at this with electronic personhood, right? But I don’t understand it, because you cannot punish a robot, right? You can only sort of punish the individual that is behind the robot, the human. But holding the system itself responsible makes no sense, because the system has no sense of punishment, no sense of loss.

[00:13:02] Tracy: I know that there has been a lot of discussion about giving machines legal rights. There’s a book that I read called Robots Are People Too that was about just this issue. Some people have talked about this in terms of insurance: instead of making humans carry insurance, why not make the car itself the insurable entity? Gosh, I don’t know that I have better answers than anybody else here in terms of how we define autonomy and whether and how it’s possible to make machines accountable for their own actions. I think it’s always presented as a sort of either-or: either machines have the full panoply of human rights, and they can, you know, go get a beer from the local store as long as they have a license and do everything else that a human would do, and certainly I think that reaches a level of absurdity. But I think there are ways in which we could consider machines, from a legal perspective, a legal entity, such that the machine itself can be sued, or the machine itself can be forced to carry insurance. And I think that actually creates a lot of interesting possibilities when we’re talking about compensating people for injuries. I also don’t know that rights rhetoric is the right paradigm to use to address some of these questions. I mean, when we talk about rights, it’s so squishy, and it’s subject to so many assumptions and so much existing law, that talking about rights in this context may actually create more problems than it solves.

[00:14:32] Christoph: Personhood itself is a difficult term. From a legal perspective, what are the requirements for an entity to be considered a person?

[00:14:39] Tracy: Oh goodness, that is like the hardest question. I mean, here’s the thing: we don’t have a great answer legally either. If you look at the abortion jurisprudence that we have, one of the most basic questions to ask about personhood, which is when it starts, is something that’s subject to great political debate, and one that is still, I would argue, at least a sort of legally open question. So I don’t know that we have a legal answer, and it’s funny that we just sort of don’t even address that issue when we enter into a case until we’re compelled to, because we have some sort of super difficult question.

[00:15:17] Christoph: As so often, the situation is more complex, since humans can also lose their citizenship and civil rights, and can even be declared incompetent. At some point in our lives, we are also required to stop driving. Even Prince Philip surrendered his driving license after an accident at the age of ninety-seven.

[00:15:39] Tracy: You know, one thing that states have done, in Hawaii for instance: you renew your license every eight years, but after the age of seventy-two, you have to renew it every two years, meaning that you have to undergo eye testing, that you have to be eyeballed by somebody in a position of authority who can then make that determination. I think that should be standard in all states. I think that maybe past the age of sixty-five, you should have to go for more routine testing, and the presumption should increasingly be that you actually are not a safe driver rather than that you are. Now, I’m going to get a huge amount of pushback from elderly people, and I certainly don’t want to insinuate that elderly people carte blanche should be taken off the roads, because there are people well into their eighties and nineties who are very safe drivers. But I think that with the growing elderly population in the U.S., this is going to be a really difficult question that we’re going to have to ask ourselves, not only legally, but personally as well, when we’re dealing with our elderly parents.

[00:16:37] Christoph: Besides age, there are other factors that might compromise our ability to drive. Certain medical conditions disqualify us from driving. What does it take to be declared an incompetent driver?

[00:16:50] Tracy: So it actually varies from jurisdiction to jurisdiction; that’s competency law. And in most states, it’s actually surprisingly difficult to have an adult declared incompetent. We’ve encountered this in my own family, actually. I have a relative who has frontal lobe dementia and who has very clear symptoms of it, and has changes in his personality, and we cannot yet get him declared incompetent, because he’s still able to function fairly normally, in that he can dress himself, he can bathe himself. His decision-making abilities are very compromised, but that’s actually not enough under the law of the state in which he’s living to have him declared incompetent. He is still able to drive a car, and that’s been a real issue. We had to step in and actually take his car away, not so much because he had lost his driving abilities, actually, but because his relationship to drinking changed as a result of the dementia. And so he went from somebody who didn’t drink much at all to somebody who drank quite copious amounts and then was making the decision to get behind the wheel. You know, competency laws are really tricky, because if you make it too easy to have somebody declared incompetent, it could become a form of coercion, right? And you can have people who are fully functional institutionalized against their will. And we went through that in sort of the nineteen sixties and seventies in the United States. If you’ve read Ken Kesey’s… To Kill a Mockingbird? No, oh my goodness, what’s the name of the book I’m looking for? One Flew Over the Cuckoo’s Nest, it involved a bird. On the other hand, if you make it too difficult to have people declared incompetent, you have situations exactly like the one in which my family finds itself, where you have people who really should have supervision and who really should lose their driver’s license, and any number of things they’re able to do.
But, you know, they just don’t reach that level. The driver’s test then doesn’t just become, you know, can you parallel park, do you know how to handle a four-way stop, but can you drive better than an autonomous vehicle? And I think the answer there is, one hundred percent of the time, going to be no, because an autonomous vehicle — I know that Waymo’s prototype scans, I think, one point seven million spots per second — has already far exceeded the capabilities of human perception. And so I just see humans losing, or failing, that test every single time, no matter how good of a driver they are. For as high a number of accidents as we have per year, and motor vehicle deaths continue to be a leading cause of death for most age groups, we’re actually pretty good given the number of miles that we drive every year. We’re pretty sophisticated at handling machinery, and most of us, in our day-to-day lives, encounter what could otherwise be tricky driving problems with really no problem at all. But that doesn’t mean that we deserve a chance to be on the road when there’s a vehicle that’s ninety-four percent safer than we are. At that point, I think that the right result, legally and morally, is to hand our keys over to the robots.

[00:20:03] Ryan: I believe that we should be in a process now that culminates in the near term in legislation, because I don’t think that AI is truly transformative unless it leads to changes to our law and legal institutions. But I think it may be premature to pass general rules about machine learning at this time.

[00:20:23] Christoph: The opinions on when we should hand our keys over to autonomous vehicles, and when we should engage in the process of creating the legal framework for their operation, do seem to differ. Researchers are more or less optimistic about when autonomous vehicles will be safer than human drivers. In the meantime, we need to be careful not to trust autonomous vehicles too much. Here is Alan Wagner from the Pennsylvania State University.

[00:20:51] Alan: There’s a wide variety of different types of social interactions. And so some of the things we’ve tried to do is have robots communicate that they shouldn’t be trusted at some point. We found this extremely difficult. To give you an example, we had a sort of maze-following experiment where we would put a person in a maze and we would say: you can follow the robot, or even try to find your own way out. This was an early experiment, and we just wanted to see how the robot could communicate that it was broken. And so initially, we had the robot move randomly around, and the experiment was supposed to take about ninety seconds. But we found that some people would follow the robot for fifteen minutes. And so we looked at another way: what if the robot just goes around an obstacle, like in a loop? But we still found that people would follow it for fifteen minutes. Then finally, we would have the robot run into a wall, fall over, and just stay there. And then we found some people who would stand by the robot, waiting, believing that the robot was going to tell them something important. So we’ve looked at a variety of different ways that robots can try to communicate “at this moment, you shouldn’t trust me.” It’s very difficult to do, but it comes back to this issue that people are nuanced. And so even as an engineer, you’re looking for that one solution, and there isn’t that one solution; you need to have a variety of different solutions that depend on the actions the person is making in the moment. With a lot of the first electric appliances, people would literally unplug them every night just to plug them back in in the morning, because they were afraid that fires would start from these devices. And in fact, there were a number of fires with the first electric devices, so it wasn’t totally unfounded.
So the point being that as we develop better models of the robots we interact with a lot, without question our trust will probably become more calibrated to the actual system and its behaviour. But the challenge becomes: there will always be people that haven’t interacted with the robot. And so while they’re trying to develop that relationship, that understanding of the system, what are the risks that they’re taking on that are maybe not well founded? The best examples are the ones we’ve already recently seen: watching Harry Potter while your autonomous vehicle is flying along. Now, these are rare events, right, but it’s this belief — and we see it in our data a lot — that people believe that the systems are flawless, that they cannot fail. Or they believe that the programmers or the people or the companies that created them are largely flawless, because they’ve interacted, for example, with Google, or had cell phones that have worked in these ways.

[00:23:46] Christoph: It is not only humans that need to be careful when trusting machines; machines also need to be careful about when to trust humans.

[00:23:57] Alan: There are many situations where an autonomous system should potentially not trust a human. My adviser was Ron Arkin. He’s looked at a lot of military situations, and in these situations people can get very emotional. And so you wouldn’t want a human soldier to be able to tell a robot to commit a massacre; in that situation, the autonomous system should say no, and it should have the right to say no and not accept the command to commit that atrocity. Now, I’m assuming a lot — I mean, a lot of perception, a lot of action, a lot of behavior there — but we afford this to our soldiers in the West, right? They have the right to refuse commands that are immoral. And so if we’re going to field autonomous robotic systems that are there as soldiers, they have to have that same right. But it brings up these moral issues that are so critical and that we have no answer to. And if people like us, who really know the technology well, can’t even begin to answer these questions, who can?

[00:25:04] Christoph: Autonomous weapon systems: another upcoming topic at the Human Robot Interaction podcast. Join me again next week when we discuss humans, robots and everything in between.