October 31, 2022
Guest Kurt Gray, UNC-Chapel Hill
Hosted by Michael Dello-Iacovo, Sentience Institute
Kurt Gray on human-robot interaction and mind perception
“And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.”
What is mind perception? What do we know about mind perception of AI/robots? Why do people like to use AI for some decisions but not moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?
Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.
Topics discussed in the episode:
- Introduction (0:00)
- How did a geophysicist come to be doing social psychology? (0:51)
- What do the Deepest Beliefs Lab and the Center for the Science of Moral Understanding do? (3:11)
- What is mind perception? (4:45)
- What is a mind? (7:45)
- Agency vs experience, or thinking vs feeling (9:40)
- Why do people see moral exemplars as being insensitive to pain? (10:45)
- How will people perceive minds in robots/AI? (18:50)
- Perspective taking as a tool to reduce substratism towards AI (29:30)
- Why don’t people like using AI to make moral decisions? (32:25)
- What would be the moral status of AI if they are not sentient? (38:00)
- The presence of robots can make people seem more similar (44:10)
- What can we expect about discrimination towards digital minds in the future? (48:30)
Resources for using this podcast for a discussion group:
Transcript (Automated, imperfect)
Michael Dello-Iacovo (00:11):
Welcome to the Sentience Institute podcast, and to our 19th episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. On the Sentience Institute podcast, we interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. Our guest for today is Kurt Gray. Kurt is a professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides. He also has a dark past as a geophysicist just like myself.
Michael Dello-Iacovo (00:54):
Alright, so I'm joined now by Kurt Gray. Kurt, thanks so much for joining us on the Sentience Institute podcast.
Kurt Gray (00:58):
Happy to be here.
Michael Dello-Iacovo (01:00):
So one thing I noticed when I was doing some background for this interview is that we both actually have a geophysics background. I'm not sure if you know, but I did my PhD on the use of seismic methods for space exploration, and I'm doing something very different now. And same for yourself, I saw that you also worked in geophysics and are doing something very different. So what made you switch from geophysics to social psychology? Those seem like two very different things.
Kurt Gray (01:25):
Yeah. I think you're the first person I've met in social psychology land that cares about geophysics, or that cared, I guess. I did resistivity imaging, that was primarily what I did, here on Earth, looking for natural gas. I guess what made me switch is that I didn't really care about rocks, and I didn't care about natural gas or oil. Not only because I didn't really want to, you know, dig up beautiful wilderness to find oil, but mostly I just cared more about people than rocks. And so I switched without ever taking a class in psychology, but never looked back.
Michael Dello-Iacovo (02:08):
Yeah. Well, how did you find the switch given you didn't have a psychology background?
Kurt Gray (02:12):
Well, I thought it was great. I think I was always interested in psychology, but I thought it was too soft, you know? I thought I needed to do science, not realizing that science takes all different kinds. My first class was in organizational behavior, and it was interesting, but it just seemed like it was psychology in organizations, which is, you know, kind of what it is, and that's okay. But I wanted to study more things, more weird things, and so that's what I'm studying now. More weird things. Yeah.
Michael Dello-Iacovo (02:49):
Yeah, cool. Excited to talk about the weird stuff. But, yeah, just so you know as well, I also worked in oil and gas. I did seismic exploration for oil and gas, and then went to do my PhD. And now I'm doing something very different.
Kurt Gray (03:00):
Okay. Are you from out west, Western Australia?
Michael Dello-Iacovo (03:03):
No, South Australia originally. But I worked in Central Australia for a little bit. Alright. So, it looks like you do quite a few different things. I know that you work with the Center for the Science of Moral Understanding and also the Deepest Beliefs Lab. So I thought we could start by you giving a quick pitch on what both of those organizations do.
Kurt Gray (03:26):
Right. So I run a lab called the Deepest Beliefs Lab, and it used to be called the Mind Perception and Morality Lab, because we studied how people make sense of the minds of others and how people make their moral judgments. And my early research, and ongoing research too, argues that those things are kind of the same: when we make judgments about the minds of others, we also use those perceptions to make our moral judgments. But then as I did more work and got more connected with things like politics and religion, and even how we understand the rise of AI, I realized that the lab was a little broader, and so we rebranded, in a corporate sense, to be the Deepest Beliefs Lab, so we could study whatever we felt was interesting, as long as people cared a lot about it. And so that's what we study. And then the Center for the Science of Moral Understanding is a more applied endeavor: to explore how best we can bridge moral divides in a divided world, and especially in a divided America.
Michael Dello-Iacovo (04:38):
Great, thanks for that. So I've got lots of follow-up questions. We'll start with mind perception, I guess. First, what is mind perception? How would you describe it?
Kurt Gray (04:50):
Yeah, so I think the easiest way to get into what mind perception is, is to take a step back and think about this longstanding philosophical problem called the problem of other minds, right? And the problem of other minds is great when we're doing remote conversations like we are now, right? Because I'm looking at you on a screen and you're talking to me, but all I see is this face on a screen, and I can't even see the back of your head, right? So I don't know if you are a human being like I am, or you're just some robot, right? With some fleshy face on top of some, you know, mechanical skeleton that's just parroting those words. And what applies on, you know, Zoom calls also applies if a lover tells you that they love you, but how do you really know, right?
Kurt Gray (05:45):
How do you know that blue looks the same to me, or that strawberries taste the same, right? These are questions that, you know, maybe when you're a teenager and getting stoned and talking with your friends, kind of come up naturally. But there are deep philosophical issues in the idea that other minds are ultimately inaccessible. And because they're inaccessible, we're just left to infer the presence of other minds. We're left to perceive other minds, right? And you can see this maybe best when people think about dogs, you know? So, my parents have two dogs. They're purebred Akitas. They're very pretty dogs. I don't think they're particularly smart dogs, right? But when I look at the dogs, I just see two dogs, just plain dogs, you know? They're looking at the wall, they're thinking about nothing as far as I can tell, right? I don't see a lot of mind. But my parents look at these dogs and think that they're, you know, nostalgic about the way things were, or thinking about the state of the government, right? They infer huge amounts of mind in these animals. And so we perceive different amounts of mind in the same thing. And that's really what mind perception is: the idea that two people can look at the same thing and perceive different amounts of the ability to think and to feel.
Michael Dello-Iacovo (07:08):
Is what your parents are doing in that example anthropomorphizing the dogs, to an extent?
Kurt Gray (07:13):
Yeah, exactly. So anthropomorphism is kind of inflating mind perception above and beyond what most people would guess, you know, about what people or things can actually think and feel. And then of course, the other way is typically called dehumanization, right? So you take people that do have minds that can think and feel, like other people, and you deny them their mind, right? You treat them like they're animals or machines or something like that.
Michael Dello-Iacovo (07:43):
So, okay, what exactly is a mind then? How would you describe a mind?
Kurt Gray (07:48):
Yeah, that's a question I get all the time, and I never have a good answer. I mean, if you look at the dictionary, there are lots of definitions of what a mind is. And I wrote a book with the word mind in the title, and I don't even know what a mind is. I mean, I think you can think of a mind in the sense that it takes incoming information, it takes sensations, it takes perceptions, it takes things in the outside world, and it performs some set of computations, cognitions. There are beliefs in there, right? There are these internal states, and then you transform the input using those internal states of the mind into an output, right? Behaviors, actions, things like that. So I don't know if this is too computer-focused, but I think it's really like: there are inputs, there are computations, and there are outputs. But those computations are not just, you know, that sounds very behaviorist, right?
Kurt Gray (08:55):
Like there's some black box and it's going beep boop boop, right? And then spitting out some behavior. But there's something that it's like to be a mind too, right? And those are feelings and sensations, you know, the feeling of love or the sensation of the color red, right? These rich conscious experiences. And so I think you need to emphasize, in that chain from input to output, that there's that thing in the middle that's really this person, typically, right? These feelings and sensations. So, I think that's what a mind is.
Michael Dello-Iacovo (09:31):
Yeah. And you just mentioned output and input, and I think I've heard you say those can be thought of as agency, which is the doing and thinking, and experience, which is the feeling and sensing. Is that right?
Kurt Gray (09:45):
Yeah, that's right. So experience is the capacity to feel and to sense, and agency is the capacity to do and to act. And then, you know, something like beliefs, what is that? I'm not sure where that fits in. But at least when people perceive the minds of others, they perceive them along these two broad dimensions of thinking and feeling, and they can come apart, right? So usually you think of dogs as being more about feeling and less about thinking, and you think of corporations like Google as engaging more in thinking than feeling. And then you or I, you know, everyday people, can think and feel, and things like rocks can do neither. But of course, right, these things are a matter of perception.
Michael Dello-Iacovo (10:30):
Yeah. And that brings me to another question. So, you've also said that, as an interesting example, people often think of moral exemplars as less susceptible to pain. Could you talk about this a little bit? And would I be right in saying this is related to people seeing moral exemplars as perhaps more agential than experiential?
Kurt Gray (10:53):
Yeah. So the phenomenon you're talking about is something that we call moral typecasting. And moral typecasting is the idea that when we perceive those in the moral world, we kind of bin them into one of two categories. One of those categories is those who do moral deeds, like good or evil people, right? They're moral agents. And the other category is those who receive moral deeds, and we call those moral patients, right? So a victim is a moral patient. And on the other side, heroes and villains are moral agents. And I guess if we take a step back, when the average person thinks about morality, they think of good and evil as the big dimension of morality, right? On one pole, there's Hitler and Satan, you know, the evil people. And then there are the good people, like, you know, Martin Luther King Jr.
Kurt Gray (11:49):
And Mother Teresa and all these great people. But those people, like MLK Jr. and Mother Teresa, are still doers, right? They're still moral agents. They still, you know, impact others, whether through help or harm. And on the other side of the moral world, opposite our agents, we've got our patients, like victims, right? Victims of crime, people who are struck by natural disasters and need help. They're more about experience, they suffer, and agents are more about doing, right? They're agency. And so when we think of the moral world, we put people into either an agent bin or a patient bin. And this means it's hard to see them the other way. So if you think of heroes, you think of action films, right? There's Arnold Schwarzenegger, and he's on a mission to save someone.
Kurt Gray (12:40):
And, you know, he gets stabbed, he gets shot, he gets punched. You're not really worried. I mean, he's super ripped, obviously he's buff, but you're not really worried about Arnold Schwarzenegger because he's such an agent. He's a doer. And then you see the victim of the movie, and, you know, the villain just slaps the victim lightly, and you think, oh my God, what a monster, right? So evil, right? Because you're worried about the victim's experience and suffering, and you only think about the hero's agency, and the same with the villain. So that's how we divide the moral world. And I think the most interesting thing there is, if people think of you as a hero, you know, you help out a lot around the house, or you do a lot at your workplace, then it turns out that they think you're generally insensitive to pain, and that means they ignore you when you're suffering, right? So we have this other paper showing that with leaders in workplaces where they're heroes, firefighters, let's say, people don't think those leaders ever need help, because you just think of them as heroes, right? But in fact, everyone needs help. We just need to change our perceptions.
Michael Dello-Iacovo (13:48):
Yeah, it's interesting. So you mentioned movies, for example, heroes and villains. And I've heard you say this before, but I hadn't really thought about it in terms of villains as well, that people also see them as more agents who don't feel as much. I guess I'd only thought about that in terms of moral exemplars on the good side of things. So it's interesting to hear that it seems to apply to villains as well. But for the movie examples, do you think any of this feeling is driven by depictions in media and fiction? Or is it more an innate thing that is expressed in media?
Kurt Gray (14:31):
Yeah, that's a good question. I think the answer is almost always both with these things. I mean, there's a reason why we depict them in these ways, I think, but these depictions also reinforce it a little bit. In our studies, we have a paper with, you know, seven studies, and we show this effect even without musclebound pictures of Austrian bodybuilders. So the very fact that someone is committed to good or committed to evil makes you think of them as being able to, let's say, hold their hand in a bucket of ice water longer, or feel less pain if they step on a piece of glass. I mean, I think a good example is Gandhi, right? If you want to talk about physical types, Gandhi is about as far away from Arnold Schwarzenegger as you can get. And yet, right? People think that Gandhi can endure all sorts of things in the service of helping India achieve independence, right? In the service of heroism. Same with Mother Teresa. People think that Mother Teresa is relatively insensitive to pain, and I don't think she could bench press much. I don't know what Mother Teresa could bench.
Michael Dello-Iacovo (15:48):
Yeah. Do you think there's an extent to which that's actually true? Let's take the example of Gandhi. He would often go on hunger strikes, and it seems like, to an extent, maybe he was less susceptible to pain, or maybe his moral convictions were so strong that he was more willing to endure that pain. So is there an extent to which it's actually a true thing, and not a kind of mirage that people experience when they think about moral exemplars?
Kurt Gray (16:25):
Yeah, I mean, I wouldn't say it's a mirage, right? I think a lot of our perceptions are grounded in some kind of truth, and I think you bring up a good point that there's some truth here too. So amazing heroes are generally more agentic, right? That's how they can do such heroic deeds. But I think the way to think about it is maybe more as a stereotype of heroes, which people then apply to other cases where it doesn't really fit. You know? So if you think of your mom or dad as a hero, as kids often do, and then you see them cry, maybe the first time you see them cry, you think, what is going on? Right? How can you cry? You are a sheer agent, you're a hero.
Kurt Gray (17:13):
Like, how can you even, you know, feel pain or sadness? So I think we generalize this beyond the limits of where we should, and just totally neglect the suffering of heroes, like doctors as well. I mean, doctors do endure a lot to get through medical school. They never sleep, they work long shifts. But there's also, you know, chronic burnout among doctors; they work way too many hours and they make avoidable mistakes, because people think that they can do more than they can, right? So I think there's a little bit of both here.
Michael Dello-Iacovo (17:51):
Mm. Are there any examples of groups of people, or types of people, that we might see as both? Or not even necessarily people, any other entities that we might see as both agential and experiential? Or is it very much one or the other, on this inverse relationship?
Kurt Gray (18:09):
We can see ourselves as both, as agents and patients, and we can see our very close others as agents and patients. But even then, right, if you're speaking to your spouse or your best friend, you can recognize in the abstract that they are both agents and patients. But when your friend is telling you about this terrible thing that happened to them, or about this amazing thing they did to help others, you still get that typecasting. So, you know, it's not a physical law, but I think it's a pretty powerful tendency.
Michael Dello-Iacovo (18:48):
Mm. Okay. Let's switch a little bit to talk about AI and robots, but keeping the same topic for now: what does this tell us about how people might perceive AI and robots in the future? Whether or not they're actually sentient, if there are AI and robots that feel and look sentient, what does this imply for how we might interact with robots and what we might think about them? Because I guess you might see them as agential. Yeah, I'm curious to hear you talk about that.
Kurt Gray (19:25):
Yeah, it's super interesting to think about how people interact with robots. I'm actually writing a big chapter on it now for this handbook, and it's made me think about things in a lot of ways. And, you know, robots and AI are unique in that they are the only agents in the world, in a sense, that humans have created just to replace other agents, right? Yeah, there are dogs, but they come from nature, you know, maybe we selected them or whatever, and other people, right? They're born. But people make machines, and they make machines to replace other people, typically, right? To replace people at work. There's actually this Australian guy, I'm sure you're not friends with him, but maybe you are. He wants to marry a robot, you know, he wants to be the first person to marry a robot.
Kurt Gray (20:23):
And, you know, this is obviously replacing a human spouse, right? And so as we're trying to replace people, we make those robots more people-like, and then we come to think of them as having human thoughts and human emotions, and all sorts of weird and wonderful things happen. So the first thing that happens, and the thing I've studied the most, is something called the uncanny valley, which folks may have heard of. It's this old idea from 1970, when robots were far from being human, from a Japanese roboticist whose last name was Mori. And he thought that people would like robots more the more human they looked. So if they're cute and anthropomorphic, we like them more. But then they get so human that you're not sure: are they human?
Kurt Gray (21:19):
Are they robots? Are they zombies? Are they dead? And then people don't like them, right? They get wigged out. And so this is called the uncanny valley, because that's the sudden drop in liking when something becomes too human. And this has been discussed a ton in human-robot interaction. It's important in movies. So there are movies, like The Polar Express or the animated Beowulf movie, that just creep people out. They kind of tanked because people didn't like them. And it's why Pixar, if you've ever seen a Pixar movie, refuses to do realistic human beings. That's literally their policy. They only do fun, cute, you know, cartoony people like in The Incredibles, never, you know, the dead gray eyes or whatever happens when people try to make realistic people.
Kurt Gray (22:11):
And the explanation for why the uncanny valley is there, at least before we started looking into it, was just that something about the face is super creepy, right? Like, no one likes the human-like face. But we thought it was really about perceptions of experience, right? Perceptions of mind. When you see a human-like robot, you feel that it can sense and feel, it can love, it can be afraid, it can feel pain. And we have this fundamental intuition that machines don't get to do that, right? That that's off limits. And so it's that mismatch, between machines being made of silicon, made just to, you know, vacuum my floor, and this thing that looks like it can feel, and feel deeply. And that's what's creepy about it.
Kurt Gray (23:01):
And so, yeah, we've shown in a lot of studies that people get creeped out when you tell them that robots can feel. And yet I think people get used to a feeling robot, like in this movie Her, where he falls in love with his smartphone, who happens, by the way, to be Scarlett Johansson, right? Which everyone can appreciate is a pretty attractive woman. And I should say they actually had a different actress voice her, and then it didn't work, so they redid it with Scarlett Johansson, because everyone can appreciate that Scarlett Johansson has this ability for experience. But I think people get used to robots with experience, right? It's creepy at first, it's uncanny, and then you think, yeah, you know, I have a robot wife. That's cool.
Kurt Gray (23:50):
And so then the question is, and I think this is mostly an unanswered question, what happens when we are living with robots that we really think can feel? And we're doing one set of studies on this right now, led by my graduate student, Danica Willbanks. Basically, what we're finding is that people are afraid, and this is not published, so, you know, don't scoop me here. People are afraid that robots with experience will resent people for mistreating them, treating them as slaves in some sense, and that they're going to rise up and destroy us, right? Because, you know, they're going to realize that we're having them vacuum our floors, and that's not a great thing to do. And then they're going to, you know, fast forward from Roomba to Skynet and the Terminator, and then we're all running for our lives, away from machine guns.
Michael Dello-Iacovo (24:42):
Cool. Lots to unpack there, thanks for that. So first, it's interesting, I have seen Beowulf and The Polar Express, and I don't think I was particularly creeped out by them. But I hadn't really thought about the uncanny valley in terms of animation; I guess I've only thought about it in relation to robots and how they look, not animations becoming closer and closer to lifelike. And I guess the uncanny valley sort of implies that once you get past that valley of being really close to human, but not quite, and they actually are humanlike, people's reactions to them would improve. Is that the idea?
Kurt Gray (25:23):
That's the idea of the uncanny valley, yeah. So the idea is you can climb out of it. I don't know if you can ever climb out of it. So I've called the uncanny valley the experience gap, in the sense that there's always a gap. And that's because you have a fundamental expectation that machines should lack experience, the capacity to feel, right? Like, if we're talking right now and you look fully human, and then you open your head and there's a circuit board and you're like, surprise, I'm a robot, I wouldn't be like, cool, you know, that's great. I'd be like, that's real creepy, you know?
Michael Dello-Iacovo (26:01):
Yeah. So the fact that you just know they are robots means you can't fully climb out of it. But I guess, unless maybe they're so lifelike that you can't even tell, and you just don't know, and you think you're interacting with a human because the robot is so lifelike, then I guess you could climb out like that.
Kurt Gray (26:21):
Yeah, totally. If to you they're a human, right, if you're actually a robot and I can't tell, then for all intents and purposes, you're a human. I mean, yeah, to illustrate the experience gap, it also goes the other way. So if you're told about a human being who fundamentally lacks the capacity to feel pain or pleasure or love, like a psychopath, right? American Psycho, speaking of movies. He's super creepy, right? I mean, he murders people, obviously, but there's also something about him, just being devoid of that capacity for emotion, that is so unnerving. So people don't like humanlike robots, and they don't like robot-like humans.
Michael Dello-Iacovo (27:05):
Yeah. Just on that, I'm not sure if you've seen No Country for Old Men, but I think the villain in that was really effective and quite disturbing, for similar reasons.
Kurt Gray (27:15):
Yeah, yeah. He had dead eyes, right? Javier Bardem, I think his name is. Yeah. He just always was so impassive. I think that's a great example.
Michael Dello-Iacovo (27:23):
Yeah. So, what can we expect about the mind perception of AI and robots? We talked about mind perception in the human context and even for non-human animals, but what can we expect about how people might perceive minds in AI and robots? And I want to talk about perspective taking as well, which I think is maybe a little bit different, but let's start with mind perception of AI and robots.
Kurt Gray (27:45):
Yeah. I think generally when people perceive the minds of robots, they perceive them as having agency but not experience, right? So the capacity for planning and thinking. If you're a robot, you can plan out the best route for me to take on my flights, you can crunch a hundred numbers when doing insurance adjusting or something like that. But people generally, again, don't expect robots to be able to feel. These perceptions are kind of squishy, though, and they depend on the appearance of the robot, as we've discussed. They also depend on the person perceiving, right? So some work by Adam Waytz shows that if a person's very lonely, you know, they'll perceive more mind in all sorts of things, including AI and robots. I mean, again, right? Like the Australian guy who wants to marry a robot, or the movie Her, right?
Kurt Gray (28:37):
The reason that he's in love with his phone, ostensibly, is because he's lonely, right? There's this movie on BBC, you can find it on YouTube, called Guys and Dolls, and it's about men who, I don't know what you'd call it, date, are with, hang out with RealDolls. These are very lifelike sex dolls. And you or I might look at those dolls and think that they're just lifeless silicone, but these men perceive a rich amount of mind and experience in the dolls. So, you know, you get lonely enough, Tom Hanks sees a mind in a volleyball. And so with AI and robots, I think it's fair game for us to project our own desires and needs upon them.
Michael Dello-Iacovo (29:25):
So sort of related to mind perception, I think, is the concept of perspective taking. This is where we're not just perceiving a mind, but trying to take the perspective of that mind and really put ourselves in their shoes, so to speak. So how might this work with AI? There's some research on taking the perspective of other humans or non-humans as a tool to improve our perception of them and maybe be less prejudiced towards them. Could this potentially work with AI as well?
Kurt Gray (30:04):
Yeah, yeah. That's interesting. I hadn't thought about that. I mean, I guess there are a couple of ways to think about it. One, are you just trying to project your human consciousness into an AI? So that film, I think it was just called A.I., right? The Haley Joel Osment one, right? The kid.
Michael Dello-Iacovo (30:23):
I haven't seen it. Sure.
Kurt Gray (30:25):
Yeah. He's an AI, but he just looks like a kid. And so you perceive him, and bad things are happening to him—I think he's on a quest to become a real boy, see the blue fairy like Pinocchio. And so you feel terrible for him, and you sympathize with an AI, but what you're really doing is sympathizing with a child. It's the same thing. You take Scarlett Johansson's perspective in Her, but you're really just taking Scarlett Johansson's perspective, right? So are we trying to get people to project and just see AI as more human, or are we trying to get people to feel what it's actually like to be AI? And if that's the case, one, I don't know if we could ever do that. There's this famous philosophy paper that gets people to think about what it's like to be a bat, and they're like, what's it like to be a bat?
Kurt Gray (31:18):
And people are like, I don't know, it's fun. I'm flying around eating mosquitoes, squeaking. It's like, no, no, you're imagining if you were a human in a bat costume, like you're Batman, you know, who eats some mosquitoes. But now imagine you're echolocating, so you're clicking, you can't see—all these weird things. And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think it could make us less sympathetic to them in some sense because it's like, I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.
Michael Dello-Iacovo (32:11):
Sure. Yeah. That's interesting. So to jump topic a little bit, but still in the realm of human-robot interaction: you've also talked about how people like to use AI for decisions like what stocks to invest in, for example, but they don't like to use them to make moral decisions. Why is that?
Kurt Gray (32:32):
Yeah, moral decisions are increasingly being done by AI. So medical triage decisions, right? Like who gets a ventilator if there's only one ventilator but a number of patients. Or parole decisions—which inmates are most eligible for parole—algorithms kind of make recommendations there. And they're biased, I just want to say. People use them because they're supposed to be unbiased, but they're still biased, because we program them with biased information.
Michael Dello-Iacovo (33:05):
Like how we've seen AI that are racist because they're trained on the internet or something like that.
Kurt Gray (33:10):
Exactly, right. Or take something very minor, like the sensor technology where you put your hand under a tap and it turns on. Well, it turns out that technology doesn't work very well if you have black skin, because those technologies were trained by the engineers who made them, who were predominantly white, right? And so whatever your model is, the output's going to be biased towards that model. But there is some promise for AI being less biased: if you program them with unbiased data, assuming that exists, they'll be less biased. And so I don't think we should scrap AI decision-making systems because they're biased. I think they have a lot of promise, and we should take that promise seriously. But here's the thing.
Kurt Gray (33:57):
People generally don't like AI making moral decisions because they think that AI, again, doesn't have the capacity for experience. They can't feel. And we like to think that to be a good moral decision maker, you have to have compassion, right? So say you're sitting in the waiting room without your pants on, in one of those gowns that's open at the back, feeling vulnerable and exposed, and robo-doctor comes in and says, this is your treatment, you need to do this, it's the best because it's got the highest chance of working. I think you'd be like, I don't feel great about it. Robo-doctor is making a good call, I'm sure, but what I want is someone who's like, oh, you must be so afraid; I want you to do well, and so I care about you and I'm choosing this treatment. And so people want those making moral decisions to care about the value of other human lives, and robots, at least now, do not. And that's medicine, that's self-driving cars, that's drones, right? You want people to be on the other end of those decisions because you want something that cares about other people.
Michael Dello-Iacovo (35:11):
Yeah. So to sum it up in simple terms, it sounds like it's because they can't feel, or we perceive them to not be able to feel or have emotion or empathy. But even if they can't actually feel, if an AI could be made to act like it feels, and become realistic in expressing emotion, is that enough? Does it actually need to feel? From our perspective, what's the difference between an AI that's actually sentient and one that just really seems like it? Maybe we still have the same problem, where people just might not perceive AI or robots as being able to feel. But if we could have, say, a robot doctor that really seems emotional and empathetic, could that be enough?
Kurt Gray (36:00):
I think that'll be enough, yeah. I mean, mind perception can be explicit. So you can ask people, like I do in my studies: how much mind does a robot have, rated on a scale of one to five? How much can they feel? But in everyday life, it's much more implicit, right? You're just hanging out with people, you're talking with them, you're seeing them on the street. And in those cases, I think the sheer appearance is enough to convince people. Most people aren't philosophers pondering the problem of other minds; they're just trying to go to the doctor and get some food and go about their lives. So if they're convincing enough, I think that'll work for 99% of cases. Maybe an analogy: in Japan there are these girlfriend clubs, if you've heard of these. You're a businessman,
Kurt Gray (36:59):
you go to these clubs, you pay money, you buy drinks, you tip these women who act like they're your girlfriend in some sense—they talk, they listen to you, right? And if you think about it explicitly, you're like, I am paying these people to listen to me and to act like my girlfriend, to be kind and considerate. But implicitly, in the moment, you're just like, wow, someone's listening to me and it's great, and we're connecting on this deep level. And is it fake? Probably a lot of the time; maybe sometimes it isn't. But in the moment you can convince yourself that it's real because it feels that way. And I think AI could be the same way. And I think this is how this guy can marry his robot, or wants to, right? Because in the moment, it feels real, and that's enough.
Michael Dello-Iacovo (37:47):
Yeah. So even if an AI seems like it's sentient, but we can't prove it's sentient, what would be its moral status, or how should we see its moral status? Just for a bit of context, we spoke to David Gunkel recently, who suggested that even if AI isn't sentient, and even if it could never be sentient, how we treat them might actually have consequences for how we treat other humans or animals—treating them badly might make us more likely to treat humans or animals badly. And so we might want to grant them rights regardless, in the legal system, in the same way that we grant corporations and rivers legal rights. It's not that they're sentient; it's kind of a tool in our legal system. So do you have any thoughts on the granting of legal rights to AI, and their moral status?
Kurt Gray (38:36):
Yeah, that's interesting. I guess regarding the question of whether AI deserves rights because they have moral status as AI per se—I'm not super convinced, but again, with the problem of other minds it's hard to tell. The old experiment for determining if an AI was human enough was the Turing test, right? And the idea there is, if you have a conversation with an AI and you can't tell if it's a computer or a human, then it's human enough. But, you know, I've talked a lot with chatbots, and sometimes they seem pretty human. I'm not ready to grant them rights. I don't think they get to vote, I don't think they get to marry who they want, or work wherever they want—all the rights we grant human beings.
Kurt Gray (39:31):
But I could imagine a time at which AI becomes sophisticated enough that it does deserve some rights. Part of that's going to be perception, and it's going to be harder than with other humans. I mean, we had the same thing with humans, right? Humans of different races, of different nationalities, of different religions. As a society we struggled with, well, shouldn't rights only be held by rich white landowners? And then it turns out, no, because these people can feel as we feel, right? And there it's easy, because literally they're people like us—the exact same species. With AI, I think it's harder to gauge when they're the same. I don't know—is it when it can write a stirring opera? But then there's GPT-3, right, that can write beautiful things.
Kurt Gray (40:20):
Or when it paints paintings? But then you've got DALL-E. Or WALL-E—DALL-E, that's right; I'm confusing it with Disney movies, I've got some kids. So it's doing all the things that humans who can experience can do, but I still don't think DALL-E or GPT-3 can feel. So I don't know what it would take. As to the argument that it could be good for society if we grant robots rights—maybe. You know, there's that classic study with kids seeing the adult punch the Bobo doll, and then the kid gets in there and knocks the stuffing out of the Bobo doll. Yeah, I don't want my kids being an asshole to Alexa, because I think kids just shouldn't be assholes. But do I think we need to change the legal system to enshrine AI rights in it? I don't know. That seems a little premature, but maybe one day. Sure.
Michael Dello-Iacovo (41:25):
So I've also heard you talk about a study where someone found that people would rather give up hundreds of hospital beds than let AI make moral decisions. I didn't get any more context on that, but it just sounds kind of wild. So could you talk a little bit about what they actually looked at in that study? What happened there?
Kurt Gray (41:44):
Yeah, so that's a study that I love that didn't really make it into the paper, but I always like talking about it in talks because it's, in my mind, the most interesting. It was just a trade-off, right? You can have a human doctor, like we do now, or you can have an AI who can make recommendations about life-or-death decisions. And it turns out the AI's cheaper—it's free once you get it. You don't have to pay for it; it's never going to try to get a raise or complain about vacation. And so you can save more hospital beds with an AI doctor. And the question is, well, how many more hospital beds do you have to save to want an AI doctor in a hospital instead of a human doctor?
Kurt Gray (42:27):
And so we started at one. The rare person is like, yeah, it's better—okay, one. But I think the transition point—and I should look at the data again—was something like 50 beds. That's the break-even point where people are like, fine. And 50 beds is a lot, right? Some small hospitals don't even have 50 beds. And some people said never—you say a thousand, and they say no. They were just fundamentally opposed to the idea of AI doctors.
Michael Dello-Iacovo (43:02):
I'm guessing in that study it was probably stipulated that the AI doctor would be as good as a human doctor, or something like that. Does that sound right? So is what's happening here just that people really don't like AI making moral decisions? Even if it's as good and as effective at its job, they think it's not going to have empathy, and therefore it's not going to be as ideal a doctor as a human.
Kurt Gray (43:30):
That's right. Yeah. People will say, well, is it as good as an expert? And so we ran studies where, if you present people with a doctor who's terrible—he's going to kill you; you're feeling sick and he's going to recommend hemlock and you'll die; okay, not that extreme, but the doctor's not great and the AI's better—and imagine that you're sick, people are like, okay, give me the AI, I don't want to die. But if you don't make it such a stark comparison—here's an AI that's 90% good, or a human doctor that's 90% good—people want the AI way less, even if they're equally good.
Michael Dello-Iacovo (44:09):
Hmm. Another interesting example I heard you mention is that the presence of robots can actually make people seem more similar and reduce discrimination. I'll let you describe this, but just to prime you: I think this was from a blog post where you talked about humans as being red squares in slightly different shades of red, and then you introduce robots, which are blue squares. And then there's the example of participants imagining they're the treasurer of a post-apocalyptic community. But I'd love to hear you talk about all that.
Kurt Gray (44:40):
Yeah, that's a lot to unpack. If you're listening: there's colored squares, and there's a post-apocalyptic community where you're a treasurer. So this study was run by Josh Jackson, and he had this intuition—well, kind of a counter-intuition. Most people expect that the rise of robots will turn people against each other, right? Robots steal the jobs, there aren't so many jobs left, people turn against each other, they turn against immigrants, whatever—there's lots of hate. But we wondered whether robots could bring people together, if people realize—and I think this is the key point—that robots are different. They're not people, right? If you look at the immigrant who's competing for the same job that you are, you think, well, at least we're human beings, right? At least we feel the same things and we eat the same food, and a robot doesn't eat anything but, I don't know, motor oil and electricity, right?
Kurt Gray (45:42):
The robot's different. And so maybe recognizing the fundamental unhumanness of robots can bring people together. And we found that in a number of studies. But I think the funnest one, the one you mentioned, is where we ask people to imagine that they're living in this post-apocalyptic community where there are either just humans of different races—white, black, and Asian, roughly mirroring America—or a community with, in a sense, four races: white, black, Asian, and robots. And we ask people to divide up salary money across folks in the commune based on their job. So you could be the blacksmith, you could be, I don't know, the cook—whatever you need to keep the community running. So you're the treasurer, you're in the treasurer's shoes, and how do you give out money?
Kurt Gray (46:37):
And typically what you find in these cases is that white participants are racist with these outcomes, right? They give more money to the white folks and less money to the black folks, even if they have the same job. And it turned out that when there were no robots in the community, the white treasurers were indeed racist, giving less money to the black people for the same job. Not great, but expected. But it turns out that if there are robots in the community, then this gap gets way smaller. I forget the data now—I'm not sure if it totally disappears, but it definitely gets way smaller—because you're like, I'm not giving any money to the robot, because it's a robot, and we're all people here. It doesn't matter what your skin tone is; we're all people. Let's screw the robots over and keep the money for the humans. And so it can bring us closer together, because at least robots aren't humans.
Michael Dello-Iacovo (47:34):
Yeah. Do we see this effect within humans as well? Say, people in a country banding together despite their differences because of immigrants or something like that, because the immigrants are even more different than the people within the country. Does that effect still hold in other cases like that?
Kurt Gray (47:53):
Oh yeah, it holds. I mean, this is a super robust effect. It holds with every social category. So in the movie Independence Day, aliens come and we put aside all our differences to blow up the aliens. And then as soon as the aliens are gone, it's like, wait a second, you're a Muslim, I'm a Christian. But then even Muslims and Christians can agree that atheists are evil, right? So it just matters how you slice the pie up, but it's always there.
Michael Dello-Iacovo (48:23):
Sure. Yeah. So the last topic I'd like to talk about before we wrap up is moral circle expansion. That's this idea of a hopefully ever-growing, expanding circle of moral concern, where at the center of the circle is yourself and the people closest to you, but we'd like to expand that circle to include other species—and maybe, if robots and AI are sentient, then hopefully them as well. And there's this idea of substratism, which you can think of as like racism or speciesism: if an entity is sentient, it shouldn't matter what substrate its mind is in—whether it's a biological mind or the synthetic mind of an AI—as long as everything else is the same. So, we've talked about this a little bit already, but do you have any thoughts about what we might be able to expect in the future regarding discrimination against digital minds?
Kurt Gray (49:26):
Yeah, it's a good question. I think these are super important questions and really interesting to think about, right? But if you have to worry about prejudice, prejudice against AI would be so far down the list in terms of all the prejudices between people. And, you know, based on the paper we just talked about, if AIs lack full human sentience, maybe being total jerks to AI is the way we can bring people together, right? Like that's the solution to all the racism and discrimination based on religion and creed and politics: we all get together and we all hunt robots down and hurt them, as much as robots can feel pain. I'm not saying we should do that, but if we're talking about thought experiments here, I think we need to think about whose sentience we're most concerned about.
Kurt Gray (50:21):
And I guess the other general point about expanding moral circles is—I'm all for expanding moral circles, but I wonder if it's costless, right? So we've expanded our moral circles to animals more. People care a lot about their pets, right? Pets used to be something that we abused, or just kicked or put in dog houses, but now we buy them sweaters and get their DNA tested. But does it come with a cost? Are you more likely to walk by the homeless person who needs a warm meal and a bed, and just ignore them, because your dog, Mr. McGillicutty, needs a new cute sweater? So I think it's always useful to think about, well, what's the trade-off?
Kurt Gray (51:10):
Because there's nothing free in the world. But yeah, as our senses of morality expand, I think it's possible and reasonable to think that we could care more about AI. Again, I don't know if it's a good thing, right? If you care more about whether your phone's upset—like some kind of new-age Tamagotchi—while there are people dying in developing nations... If we had to choose between Her and a family who's got something terrible like malaria in sub-Saharan Africa, I'd be like, let's pick the human beings, you know? And then I guess there's the pragmatic question of whether you could make a mind out of silicon. It's an interesting question, right?
Kurt Gray (52:00):
If we hit the singularity—if silicon minds can improve themselves—I think there's a way that it's possible. But the more we learn about biology, the more I'm astounded at the fact that humans are conscious and other animals have consciousness, and computers still kind of suck, you know? They can do things, but the orders of magnitude involved—a neuron, and all the things that happen in the neuron, and all the things that happen within the organelles within the neuron—it's just mind-boggling. So maybe it's fair to be prejudiced against silicon substrates. But I think if the day comes when humans—sorry, when robots—can convince me that they're fully human, then I think they deserve moral rights. But I don't know when that day will come, if ever.
Michael Dello-Iacovo (52:55):
Sure. Yeah. Just to get back to moral circle expansion—it's interesting, you can easily think of both a positive potential effect and a negative potential effect. The positive might be that if we expand the moral circle, it kind of brings everything else along with it in our moral concern. The opposite effect is, as you said, if expanding the moral circle somehow makes it more diffuse, so the things at the center of the circle—we still care about them, but maybe care about them less, as if there's a cap on the amount of moral concern we can have. Maybe a practical example is that you almost get burnt out from trying to care about everything: the more things you try to care about, the more you burn out to an extent. But I'm curious—do you know of anyone who's tried to study or test this before? It doesn't seem like it would be too hard to test, I guess. What do you think?
Kurt Gray (53:56):
Yeah, I'm trying to think. There is some work showing the kind of collapse of compassion, or empathy fatigue, right? You get tired, you get burned out—this certainly happens to people in the caring professions. And there is some work by Adam Waytz, again, who shows that liberals and conservatives vary in the moral circle they emphasize. Conservatives emphasize the moral circle closest to them: their community, their tight-knit church community, their family. And liberals are more likely to be universalists. So I'm assuming most folks in your institute are more liberal, right? Because it's the uber-expanded moral circle: animals, and possibly AI. And to be honest, I think in a lot of ways the world would be better off if we cared in general more about the distant moral circle, right?
Kurt Gray (54:54):
At the same time, I think it's not costless to not care as much about your family. So let's go back to Gandhi, who's not Arnold Schwarzenegger, but still a moral hero. He was not a good father, he was not a good husband, right? Martin Luther King Jr., also not, right? In fact, I think they did a lot of things that, if you think about what it means to be a good father or a good husband, make them terrible examples. And yet they were amazing heroes who effected incredible social change for so many people. And so if you had to pick—should Gandhi be a national hero or a good dad?—I think the world would pick hero. But if you asked his kids, I don't know, right?
Kurt Gray (55:43):
I don't know what they would say. And I'm a dad, you know, and I feel this trade-off a lot, right? Effective altruism says give all your money away where it does the most good, but also my kids need to go to college, and it's expensive in America. So what do I do with my money? Well, I think I need to protect my kids in some sense. And so it's a struggle, and there's a tension, and I don't know the best way to resolve it. But I think the tension is there.
Michael Dello-Iacovo (56:11):
Yeah. I thought of some follow-up questions, but we could almost have another podcast interview about that, so I might leave that there. But I'll leave you with one last question, which will be a little bit general, just for us to finish up on. I'm interested in whether you have any thoughts on what social psychology can tell us that might be useful for people interested in getting others to care about more sentient beings. You could think of that as expanding the moral circle, but just in general: how do we get people to care more about others? What does social psychology tell us? Any learnings you'd like to share?
Kurt Gray (56:45):
Yeah, that's interesting. I think the work on empathy is really applicable here, right? It's just caring about suffering. So you have to do two things. One, you have to recognize that people are suffering—or animals, or AI—and often that's just enough: once you recognize it, then you care about it. But if people don't spontaneously care, then you have to get them to care, right? And there's lots of work on empathy, on how you can get people to simulate it so they feel it in their own hearts. You know, we're trying to raise our kids as moral kids. And so what do we do? Say my younger daughter punches another kid. I'm like, look, that causes harm to the kid, okay?
Kurt Gray (57:35):
This makes the other kid cry, and now she appreciates it. But that's not enough, because—yeah, yeah, that kid's crying—I mean, sometimes it is enough. And then you have to say, well, what would it feel like for you to get punched? Wouldn't that feel bad? And—oh yeah. So I think you need to—we talked about perspective taking, right?—you need to get people in the shoes of these other people, of these animals. This is why people turn vegetarian after they see those videos of the horrors of factory farming, right? Because they're like, oh my god, they're suffering, and I can project my own self in there, and that feels terrible. So if you want to expand the moral circle, get people to recognize suffering and then care about it.
Michael Dello-Iacovo (58:20):
Great. Well thanks so much for your time today, Kurt. And where can people go to see more about your work if they're interested? Anything you'd like to plug?
Kurt Gray (58:28):
Yeah, sure. They can see me on Twitter, they can see me at the Deepest Beliefs Lab site or the Center for the Science of Moral Understanding, or if you Google me, you'll find some stuff, and if you're interested you can watch that stuff. So that's all I'll say.
Michael Dello-Iacovo (58:45):
Great. And what's your Twitter handle?
Kurt Gray (58:47):
Michael Dello-Iacovo (58:49):
Cool. All right, well, we'll have links to those websites and all the other studies we discussed in the show notes. So thank you again, Kurt, really appreciate your time.
Kurt Gray (58:57):
Great. Thanks for having me.
Michael Dello-Iacovo (59:00):
Thanks for listening. I hope you enjoyed the episode. You can subscribe to The Sentience Institute podcast on iTunes, Stitcher, or any podcast app.