August 10, 2022
Guest Thomas Metzinger, Johannes Gutenberg Universität Mainz
Hosted by Michael Dello-Iacovo, Sentience Institute

Thomas Metzinger on a moratorium on artificial sentience development

“And for an applied ethics perspective, I think the most important thing is if we want to minimize suffering in the world, and if we want to minimize animal suffering, we should always err on the side of caution, we should always be on the safe side.”

Should we advocate for a moratorium on the development of artificial sentience? What might that look like, and what would be the challenges?

Thomas Metzinger was a full professor of theoretical philosophy at the Johannes Gutenberg Universität Mainz until 2022, and is now a professor emeritus. Before that, he was president of the German Cognitive Science Society from 2005 to 2007 and president of the Association for the Scientific Study of Consciousness from 2009 to 2011, and he has been an adjunct fellow at the Frankfurt Institute for Advanced Studies since 2011. He is also a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation, and on the advisory board of the Giordano Bruno Foundation. In 2009, he published a popular book, The Ego Tunnel: The Science of the Mind and the Myth of the Self, which addresses a wider audience and discusses the ethical, cultural, and social consequences of consciousness research. From 2018 to 2020, Metzinger worked as a member of the European Commission's High-Level Expert Group on Artificial Intelligence.

Topics discussed in the episode:

Resources:

Resources for using this podcast for a discussion group:

Transcript (Automated, imperfect)

Michael Dello-Iacovo (00:00:09): Welcome to the Sentience Institute podcast and to our 18th episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. Returning listeners might notice a different accent today, and that's because I have taken over as host of our podcast from Jamie Harris. And needless to say, I have very big shoes to fill. On the Sentience Institute podcast, we interview activists, entrepreneurs and researchers about the most effective strategies to expand humanity's moral circle. Previously, we have focused primarily on farmed animal advocacy, but over the past several years, we have pivoted as an organization to focus more on artificial sentience. And that brings me to our guest for today, Thomas Metzinger. Thomas Metzinger was a full professor of theoretical philosophy at the Johannes Gutenberg Universität Mainz until 2019. He was the president of the German Cognitive Science Society from 2005 to 2007, and of the Association for the Scientific Study of Consciousness from 2009 to 2011. As of 2011, he is an adjunct fellow at the Frankfurt Institute for Advanced Studies, a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation. Michael Dello-Iacovo (00:01:22): And on the advisory board of the Giordano Bruno Foundation. In 2009, he published a popular book, The Ego Tunnel: The Science of the Mind and the Myth of the Self, which addresses a wider audience and discusses the ethical, cultural, and social consequences of consciousness research. And from 2018 to 2020 Metzinger worked as a member of the European Commission's High-Level Expert Group on Artificial Intelligence, which I'm particularly excited to talk about today. Thomas has also been a proponent of the idea of a moratorium on artificial sentience development, which is another topic we will really dive into. So without any further ado, I bring you Thomas Metzinger. I'm joined now by Thomas Metzinger. Thomas, thanks so much for joining us on the Sentience Institute podcast. Thomas Metzinger (00:02:09): Great pleasure. Thanks for having me. Michael Dello-Iacovo (00:02:12): To start, I'd like to ask you how you define a couple of terms. So some people use consciousness and sentience interchangeably, while others see these as two different things that an entity could have one, both, or neither of. In this case, consciousness often refers to a stream of thought, while sentience refers to positively and negatively valenced states, or in other words, wellbeing and suffering. So how do you view these terms? Thomas Metzinger (00:02:37): Well, I think consciousness has nothing much to do with the capacity for thought or high-level symbolic thinking. As a philosopher I of course know that the concept of consciousness has a long history. It has a long Latin history, for example, in conscientia, um, it has Greek precursors, and an interesting discovery one should never forget is that more than 90% of the languages on this planet do not have a term for consciousness, as, um, the late Oxford philosopher Kathy Wilkes pointed out. So the problem of consciousness and the infatuation with that concept is also relative to certain Western intellectual traditions. An interesting question is why have other cultures, other traditions not seen that much of a theoretical problem there? So we shouldn't assume that terms like consciousness or sentience are obvious or self-evidently clear to all human beings, uh, on this planet even.
There's, there's a lot of, uh, conceptually deep water there, but I think for the purpose of our conversation, it's pretty clear. I mean, in probably all cases, we know it involves the capacity for suffering and, um, that makes entities moral objects, but I don't want to get ahead of ourselves here. Michael Dello-Iacovo (00:04:23): Okay. Yeah. Um, and just one follow up question. Do you think, could you have sentience without consciousness or vice versa or do they sort of come hand in hand? Thomas Metzinger (00:04:32): It's just depends, uh, on your concepts if the most common confusion, uh, is between consciousness and self consciousness, uh, do you have an explicit, uh, phenomenal representation of a number of things for instance, of agency, of goal directed action control, uh, do you have an explicit conscious representation of body ownership that you are an entity with limits in space with a skin and a single embodiment usually, but then there are also these high level concepts of, uh, self consciousness that you can use the first person pronoun I properly, um, that you can have I thoughts as philosophers say. That means that you can cognitively refer to yourself using a mental symbol, an I token that you can think the thought I myself am currently suffering, for instance. I think that's not necessary, uh, for those properties, uh, that are ethically relevant. So if consciousness has nothing to do with high level forms of self consciousness, then the question is, uh, becomes more interesting. What actually does it distinguish, uh, from mere sentience? But let me just ask you about your own conceptual intuitions, uh, do you know systems that are sentient, but not conscious by your own, by the way you use the concepts? Michael Dello-Iacovo (00:06:12): I, I'm not sure. I could imagine, uh, that perhaps there are maybe some, for example, insects where they have perhaps some ability for, um, pain or suffering, but not necessarily a stream of thought consciousness. So at least that's how I'd, um, be defining those terms. Thomas Metzinger (00:06:30): Well you know, this Jamesian concept of a stream of thought, that's probably a small number, I don't know if, if all primates have that something like mind wandering and spontaneous task unrelated thoughts, or, um, day dreams, unbidden memories. I think chimpanzees have that. If mice have it, I would already have my doubts. So, um, I mean this, this ongoing inner monologue, this chatter, uh, that we have, that's not a necessary condition for sentience, but let me ask you the other way around. Can you imagine a, a class of systems that by the way you take the term to be as conscious but not sentient, do you think there could be non-sentient machines with a stream of thought Michael Dello-Iacovo (00:07:20): That I could certainly imagine, uh, to take an example from science fiction, uh, Data from Star Trek, they, uh, don't have, it seems positive and negatively valanced, um, experience, but they have a stream of thought of consciousness. Uh, and I could imagine that perhaps there could be artificial systems, artificial intelligence systems that might have just the consciousness without the suffering. But, um, I think we might be getting a little bit ahead of ourselves if you wanna comment on that and then we can move on to the next point. Thomas Metzinger (00:07:49): Yeah. So I think rule based high level symbolic cognition, um, can happen without consciousness in classical von Neumann architectures, we basically have this in our PCs. 
Um, the interesting question would be if some forms of rule based operations on symbols, uh, necessitate consciousness and what exactly the function is that this is, has elevated to the level of conscious processing, what this gives the system. Um, maybe it gives the system the capacity to distinguish between appearance and reality, but that's all very speculative. So the other thing is that most of the form of intelligence that we embody as human animals is not high level symbolic intelligence, as we've learned, uh, you know, to catch a ball, uh, to not drop dead from your chair. Every second is actually computationally much more demanding than to do a little math or a little philosophy. Uh, in your mind, we embody, uh, computational functions that have been optimized over millions and millions of years. And I think it's just the last 30 years where we've learned that that is actually computationally much richer, that there's a much higher information density in those embodied forms of, you know, sensory, motor contingencies, uh, embodied action we have than that small fallible very fragile capacity we also sometimes have for rational thought, but I wouldn't tie that new property to consciousness necessarily. Maybe I'm wrong, but maybe we'll come back to this, uh, in the course of our conversation. Michael Dello-Iacovo (00:09:55): Yeah. Okay. Uh, so we've touched on a little bit already. And one of the things I'm very interested to, um, talk to you about is, um, consciousness and sentience specifically in artificial, uh, intelligence systems, uh, or artificial sentience, and, uh, also you've, um, argued for a moratorium on the development of artificial sentient systems. So to start with, uh, speaking specifically about artificial intelligence, what kinds of features might we observe in an AI that would lead us to think that they would be likely to be sentient or to be conscious? Thomas Metzinger (00:10:30): Well, that is just the problem. Uh, I have during the last week, had to give many interviews about this language model, Lambda to journalists from all over the world. And we can only decide if a given entity is conscious relative to a theory of consciousness. We do not have a theory of consciousness. Maybe some of your, uh, listeners know that I'm one of the guys who co-founded the association for the scientific study of consciousness 27 years ago. And I think we've come a long way. There is a professional community, we'll have a major conference in Amsterdam in July. There are many competing models right now, and it's like the elephant and the blind. Everybody has one good idea. And some parts of it that are convincing, say predictive processing architectures, expectation of free energy. Other people like IIT have stressed, uh, the relevance of integration and integrated information. Thomas Metzinger (00:11:41): And there's a long centuries old tradition of so-called higher order theories. That consciousness is in one sense created by having a higher order representation of something. But none of this really works. What I would do if I were forced to make a decision, I would probably go to the journal animal sentience, which have just had a brand new discussion with a target paper. There are people who think very hard about what are maybe eight criteria by which we could decide if an animal is sentient or conscious or anything. So there's, this is a philosophical can of worms, and it's all very complex, but if we have to act on a practical level, there are some things you can say. 
So you can ask, does the system have a high degree of context sensitivity, that is, does it have a behavioral profile that is adaptive? That, according to the little knowledge I have, for instance, is a property that insects don't really have: flexibility, adaptivity, and context sensitivity of the behavioral profile the system has. Another, uh, thing is, does the system have a sleep-wake cycle? Thomas Metzinger (00:13:11): We are the paradigm cases of systems we know; we seem to be certain that we are conscious ourselves, although this can also be doubted on philosophical grounds, and we sleep, we dream and we wake up. This mechanism has an evolutionary history. So everything that has that sleep-wake cycle is a very good candidate for having tonic alertness, a feeling of wakefulness, of being present in the current situation. Then one could go, um, because we want to have valenced states, one could also go by the evolution of emotion. So which systems have anything resembling emotional expression? This is of course only a very limited approach; it just goes down from the human case. But I think many reptiles, using that, if that would be a necessary condition, uh, to have the hardware that subserves emotional processing in a more narrow sense, and mammals might not qualify. Thomas Metzinger (00:14:26): And as I say this, you begin, I think you begin to realize this is all very tricky and dangerous. And for an applied ethics perspective, I think the most important thing is if we want to minimize suffering in the world, and if we want to minimize animal suffering, we should always, um, err on the side of caution, we should always be on the safe side. Uh, that's an important principle if you don't know what is actually the case. If one goes about it professionally, I think it's very, uh, obvious that we will have a graded concept of consciousness and a graded concept of sentience, which means there will be creatures on this planet which are more or less sentient, and there will be creatures which are more or less conscious; the same for the capacity to phenomenally represent negative valences, to suffer; the same perhaps for the degree. Thomas Metzinger (00:15:32): Is there a coherent self-model that generates, I think that's the key point, the phenomenology of identification with that negative valence? I think that's one of the core features. Does the system represent this pain, this, uh, loss of confidence in its own future capacity to deal with uncertainty, for instance, does it experience this as its own property, or is it represented just as something that happens in this world, maybe in this organism? But the phenomenology of identification: this is me, myself hurting; I myself have death anxiety right now. How do we get there? I think so there are questions about representational architectures. If we go to higher levels of description from biology, what representational properties must a system have to be self-conscious and capable of suffering? You can ask that about a machine: what functional properties must it have? Are there certain computational architectures that will inevitably create, um, negative phenomenology, or, a super interesting question for the future, are there architectures for conscious processing which systematically eliminate the possibility for suffering even in the system that realizes this architecture? Thomas Metzinger (00:17:11): I have, um, asked for this moratorium for a special reason. Um, I'm in this consciousness community.
Um, I see many very smart people there. Um, they are fascinated by the question of artificial consciousness, and I know at least four different groups, labs, instances who have more or less bluntly said, if we can do this, we will pull this off, because everybody wants to go down in history, in the history of science. To make this very clear before we go deeper into it: my personal intuition is no way, this will not happen tomorrow and it will not happen the day after tomorrow. But the point I'm trying to make is my own intuition doesn't really matter much, um, because this is a historically new situation. We know cases from the history of science where those people who produced the breakthrough, think of nuclear fission, also themselves didn't think they would either ever manage to do that, or in less than five years. Thomas Metzinger (00:18:31): And then synergies happen and suddenly a discovery is there earlier than everybody, um, has thought. And I think we should be very careful there and, um, prepare for this situation. I am not saying, I'm not a Luddite, I'm not saying this should never happen. I'm also prepared for this moratorium to be repealed earlier. If we come to a good solution about what we should do and shouldn't do, we could also say in 2030, we don't need a moratorium for synthetic phenomenology anymore. All I want to do, all I've tried to do with this paper you're referring to, uh, is start a systematic discussion, because I've also been working from 2018 to 2020 in the European Commission's High-Level Expert Group on Artificial Intelligence, trying to develop the ethics guidelines for trustworthy AI in Europe. And one of the many things I've learned is that political institutions, which are under strong influence by the industrial lobby, are almost immune to two kinds of risks. Thomas Metzinger (00:19:47): One is long-term risks like AGI. So this 52-person expert group completely rejected taking something like AGI even into their document or considering it; everybody said, this is, uh, science fiction. These systems don't work today. This is, you know, something some nutters in Oxford or in California do. Uh, there were many professors of artificial intelligence in that group who said, if superintelligence or AGI gets even mentioned in that document, I'm out of here, I'm not signing this. Um, that's one of the things I learned; the same for artificial consciousness. The surface attitude is, uh, this is just some science fiction nonsense, it's just not a realistic risk. And then there's the second type of risk. Risk type one is long term, like, say, AGI risk; risk type two is what I call epistemically indeterminate risks. That is to say, for artificial consciousness, there's a neither-nor: neither do we know that this will ever happen.
Very good, very smart people say, we're going to do this as soon as we can do this. This is hot, new stuff, possibilities on the horizon and in the political institutions, or even in professional ethicists, people say, come on, this is just, I refuse to talk about this. Uh, this is just gaga. We have completely different problems like mass unemployment by 2030 through AI or autonomous lethal weapons as really no need to discuss artificial consciousness here. And I also understand the intuition behind it. Um, they are not all crazy, but I saw that there are at least these two types of risk epistemically indeterminate and long term where we cannot count on our political institutions. They will not help us with that. Michael Dello-Iacovo (00:22:55): So yeah, you've, you've, um, we've gotten into a little bit into the moratorium and, um, the, the legal applications. I, I wanna just back up a little bit, uh, if that's okay. And then to ask about, uh, so you've mentioned, I think Lambda and, um, some of the recent, uh, discussion around, uh, whether, whether that is sentient, there was a Google engineer who claimed that it was, um, and I guess, uh, should we, should we be worried about claims where AI has sentience, where it doesn't or for, for another example, if people, for example, say that artificial intelligence systems will be developed soon in the next few years, and then if they don't, could that set progress back, say it's a bit like crying wolf where people trust the people who are concerned about this less, um, or they say, well, it didn't happen in the, in the past few years, like people predicted. So that leads to people being more skeptical about whether it could ever happen, whether it will happen soon. Does that, is that a concern? And if so, how do we counter that? Thomas Metzinger (00:24:06): Well, first of all, I think there's an ocean of animal suffering on this planet. We're not talking about this right now, but this has priority because this is really happening right now, uh, in our face. But sure if, um, people, um, get alarmed about artificial consciousness and nothing happens for 10 years, you might have this fatigue in the general public. And, um, that is a general problem, by the way, emotional extortion in a situation of information overload right now, everybody has this, the whole population on this planet has this with climate change and all the bad news in the media. And so people get demotivated and exhausted, but it would not be a rational argument, um, because there are certain of the most dangerous developments, risk creating developments are exponential functions. They suddenly explode. And the big problem is that our brains have not evolved to deal with exponential functions. Thomas Metzinger (00:25:20): And as it, when it comes to artificial consciousness, you could have extremely slow progress say over the next decade or the next five decades. And then there is an unexpected synergy between different scientific disciplines. And then some brilliant Chinese PhD student who all puts it together and, uh, then progress suddenly accelerates exponentially. And then you're not ready. I mean, let me just as a philosophy professor say this, that was a brilliant American philosopher, Hilary Putnam, who in 1963 discussed many of these questions. 
And he said, it's better to think about this right now, because when it really happens, for instance, that robots, uh, demand civil rights or something like that, the general public will be so agitated and there will be so much emotional arousal everywhere that it's almost impossible to have rational, uh, discussions on this. So maybe these discussions should take place a little secluded from the general public, but among the experts, uh, so that we have some ideas when something unexpected happens. Michael Dello-Iacovo (00:26:42): I'm very, I'm actually kind of surprised that, uh, someone made that statement, um, almost 60 years ago. Uh, it seems quite early. Is that the, do you think that's the first person to have to have been thinking about this as a possible eventual problem as an application? Thomas Metzinger (00:26:59): No, I think there must be in literature. There must have been earlier people playing with the idea, but, uh, Hilary Putnam developed the early version of Turing machine functionalism in the level of, on the level of academic philosophy, analytical philosophy of mind. He basically wrote these nine papers that were really influential for two, three decades. And one of them has this subsection, uh, robots, uh, I don't know, artificially created lives or so, and he discusses this and he makes points that I think today are obvious, but were very important to make in the early sixties of last century, namely hardware criteria do not count in determining person status the status of personhood to say that something is physically realized in, uh, not in, not as the term at the time was carbon hydrogen chauvinism. So not on the kind of physical hardware we have, but in Silicon or something like that, it's just, and it's not conscious or cannot be a person for that reason. It's just like saying people with a black or yellow skin can never be person, this person or women. Michael Dello-Iacovo (00:28:21): This, this, um, I've, I've heard a term over the last few years coming, becoming a little bit more popular of substratism, which I think is that exact concept where it doesn't matter what's substrate, uh, or it shouldn't matter what substrate the consciousness is embedded in, whether it's a biological substrate or a, um, synthetic substrate, um, is, is another, I guess, related to say speciesism where it doesn't matter what species one is, if they're conscious. Thomas Metzinger (00:28:46): Exactly. But note in saying this, this is very important, uh, for, um, abstract discussions, substrate independence of mental functions, but we might find out in the course of scientific progress, that there is something like what, for instance, Maggie Boden, cognitive scientist from the UK has called the metabolism constraint that only those systems that actually have a metabolism can, as a matter of empirical fact, realize the relevant causal properties that need to be realized for consciousness or higher level intelligence. It could be the case that there is something you know, about having about breathing and digesting that we haven't fully understood yet that only enables higher levels, um, of whatever, uh, minimizing free energy minimizing surprise, or, uh, and things like that. And systems in principle, something like that could happen. But, um, this whole, I mean the most primitive reaction and most wide spread reactions, uh, to the idea of artificial consciousness is always, but they will never feel anything. They will never have emotions. That's a widespread intuition. 
And of course, there's no good argument, uh, for this in any way. Michael Dello-Iacovo (00:30:15): What I'm, uh, curious about is, uh, the idea of metabolism being a potential factor. That, to me, doesn't seem very intuitive, how that could be a factor. So would you be able to expand on that a little bit, as to, if that was a factor, how would that affect the capacity for consciousness? Thomas Metzinger (00:30:35): You know, the only forms of high-level intelligence and consciousness we know so far are ourselves and the systems that arose through biological evolution on this planet. And the thing about us is that we have a very fine-grained kind of physical embodiment. Uh, so, uh, we reduce entropy on many, many different levels. Um, we minimize it on molecular levels too, and, I mean, there are technical ways of describing all this mathematically, but what we have is, we are nested embodied minds. That is, we are like Russian dolls, Markov blankets within Markov blankets. We are not just a brain, uh, or part of a brain sitting on hardware. We are very deeply, uh, embodied in super fine-grained mechanisms, in the endocrine system, in our immune system. You may know that I have written a lot about the human self-model and the evolution of self-models in animals; the immune system, which is entirely unconscious, may be a very, very important precursor, um, for distinguishing between me and not-me. Our immune system does this every day, thousands of times; uh, it detects cancer cells and kills them. Thomas Metzinger (00:32:03): And if it were to go wrong one time, we would die. It's a fantastic system, and machines so far don't have anything like that, like an immune system. Um, we also have an upper brain stem with homeostatic functions and predictive models of certain physical parameters, like oxygen content and blood sugar levels, and so on, that have to be kept in very narrow windows, and the prediction has to be very stable, robust, and successful, else we would die every minute or so. And this works, and it gives us a certain robustness. So from the old neural network theory, we know for instance that human brains have a property classical computers don't have: we gracefully degrade, graceful degradation. So if you are a Korsakoff alcoholic, and you have drunk away literally 80% of your brain, you still preserve a large number of your functions. Thomas Metzinger (00:33:13): You know, we can have enormous hardware loss as human beings and still walk and talk and survive. No artificial system can do that. I mean, your PC, you shoot one bit out of it and it doesn't work anymore; most artificial systems are not resilient against, you know, drastic hardware damage. Uh, they cannot adapt their self-model, um, to losing a leg or something like this. And, um, I mean, I wouldn't know what the equivalent of viruses really is, uh, on a low physical level, but we are embodied in a super fine-grained way. We have a molecular-level embodiment; we're embodied in the chemical landscape of neurotransmitters and the chemical landscape of our blood, for instance. And this is also something that grounds our biological form of intelligence. And we don't know how this low-level realization in autonomous molecular dynamics, self-organizing circuits below single neurons, how that robustness and adaptability one gets from that, if that is central for consciousness and intelligence.
Thomas Metzinger (00:34:35): And if it is, machines may be a long way from this. Yeah, but sorry for talking so long, I want to just briefly introduce one of my pet distinctions. Most people think the distinction between biological and artificial systems is, um, exhaustive and exclusive. That is, a system is either, uh, biological, or it is artificial. And I think that's a philosophical mistake. We will see in the future, we will see systems that are neither machines nor biological, as for instance machines that use genetically engineered biological hardware, or biologically inspired, uh, architectures like convolutional hierarchical networks. We already have that today. So the distinction between biological or natural and artificial may not apply to the, the questions we are interested in here, but, sorry. Michael Dello-Iacovo (00:35:49): Yeah. I'm glad, I'm glad you, um, you made that distinction, because that's where I was going next. Uh, I know you've used the term postbiotic as well, I think, to talk about something like, uh, bridging the gap between biological and, and synthetic. Uh, so some examples that I can think of might be like whole brain emulation or cerebral organoids, um, or genetically enhanced humans. Uh, I'm going to, um, jump the gun a little bit. I do wanna come back to a little bit more on the case for a moratorium and specifically what that would entail, but just because we mentioned it now, do you think the moratorium should encompass these kinds of cases? Like for example, genetically enhanced humans or cerebral organoids or whole brain emulations? Thomas Metzinger (00:36:34): Well, it was interesting in Brussels. I told them, applied ethicists are already discussing this for brain organoids and you are sleeping in on machines here. And brain organoids are just much less likely to, to ever develop interesting forms of consciousness quickly than machines would, uh, it's a, it's a strong asymmetry, you know. Everybody, um, is interested in organoids, nobody really thinks about machines, but from all I have read, which isn't very much, um, it will be a long time before organoids develop the properties to really integrate information in a way, you know, that has like a short-term buffer, working memory, something like selective attention, uh, you know, uh, um, optimizing precision expectations. These are complicated computational functions, uh, conscious brains have. I think that's a long way to go, but okay. Again, the point is that I might be wrong. Thomas Metzinger (00:37:43): That's the interesting thing. Michael Dello-Iacovo (00:37:46): Yeah. So that segues nicely into the case for the moratorium. I think from what you've said so far, it sounds like one of the main arguments for it is this epistemic humility or precautionary principle, that, uh, we just don't know a lot of things and we should wait or push the pause button on the development of artificial sentience until we get a better sense of, whether it's a theory of consciousness or, um, how they work or how we can minimize suffering. So am I right in saying that seems to be the main case for the moratorium, and are there any others? Thomas Metzinger (00:38:21): There's a little more than that. Uh, I mean there is, um, so the risk is very high, potentially very high. That's another thing.
I mean, there's epistemic humility, but there's also, if you imagine cascades of copies, say in virtual agents, the number of suffering individuals, uh, could doesn't have to, but could get out of hand. Uh, you don't need space colonization for that. Uh, it could all happen on earth in, in, in large computers or in the internet. So you just don't do this. I mean, there are many aspects to the argument, but you just don't do this to have a potentially very high risk with epistemic indeterminacy. You just don't run it in a historical situation like this. Um, I must should also say that I'm a very pessimistic person. I would never dream that anybody acts on such a proposal in the real world, industrial lobby will all immediately stop it. Thomas Metzinger (00:39:31): So, I mean, I'm, it is just, I have just put that sign post there to enable people to start a systematic discussion and say, yeah, but this guy's got it all wrong and we can talk about this properly. Uh, uh, and now I would never, it's just with climate change. Um, uh, I would never expect, uh, that humankind would really get its act together and not do this to have a global set of rules. Um, I'm, I'm very pessimistic. There, there will always be an incentive for someone in some part of the world to go on with this research, actually. And as a matter of fact, I've, uh, already received three emails, the European union is funding, or has a grant for grant a call for how do you call that a grant, a call for applications out on inner awareness in machines? That's exactly the thing I would say, we should not fund in the free world. We should not fund research projects of this kind, unless we know what we're doing. It's very simple. Michael Dello-Iacovo (00:40:45): You talked about the potential for cascading minds, where you can quickly have a, a large number of, of, um, in individual minds say suffering. We already have this to an extent, uh, we have factory farming for example, and which it that's hard enough to get people to be concerned about the suffering of non-humans as it is. So as much as, uh, I, I guess I share some of your pessimism in that, uh, the stakes seem very high, maybe even higher than factory farming with artificial sentience, but I, I guess the one, the one thing that gives me a little bit of optimism in this is factory farming is already here. And there's perhaps a bit of a status quo bias to keeping that, but with artificial sentience, maybe if we get in, before we have the artificial sentience in the first place, that that might be an easier, easier. Thomas Metzinger (00:41:43): That's a very good point. Actually. That's a, that's a very good thought. I, I mean, I just hope you are right. Um, in the paper I've explicitly pointed out that we have one case of a suffering explosion already, and that was biological evolution on this planet. I know many people don't like to hear this, but many people also like to glorify biological evolution or the beauty, the intelligence, uh, the different forms it has created. It's I don't know how to say in English, it's unfathomable, uh, in the number of forms and the beauty, it also has created, but it is of, of course also unfathomable, in the ocean of suffering and confusion, delusion, mental misrepresentation, it created, this has happened once in a place in the, in a region of the physical universe where nothing, um, like this happened before. So personally I see no reason to glorify biological evolution. 
Thomas Metzinger (00:42:48): If one looks at it like this, and there's no reason, um, to trigger a second level, uh, evolution before we know what we do, the, the, the technical problem is, is in creating intelligent machines, all we basically can go by is our own functional architecture, what neuroscientists tell us what computational modelers tell us. And the risk is very high that we unnecessarily transfer all the ugly properties of our own biological evolution into that second order evolution. And the, the, the question is, I mean, the real task would be to find out if there can be forms of conscious intelligence without suffering, maybe biological systems cannot have them because long, long story, death, anxiety, survival machines, um, biological imperatives, we are basically, you know, copying devices that are exploited by the process of evolution. Genetic information is as I like to look at it, flowing through us into the future, uh, using us, um, writing on us, the question is if there could be conscious forms of intelligence, which are totally free from all of this, Thomas Metzinger (00:44:17): For instance, free, from what I call existence bias. Um, so this is a concept I have introduced to point out that below all the many cognitive biases that human beings have, there is one, um, that may be most fundamental human beings will almost always opt to sustain their own existence to not die. Even if it's not in their own best interest, they will do this. They will keep on suffering. And I think that's a very deep point. It's a very deep, functional property that existence bias that has been hardwired into us. And that, that is at the root of a lot of surface suffering that then happens in animals and in human beings, in human societies. At least that has been my point. And the question is, could there be conscious even self-conscious machines entirely without existence bias? I think a perfectly rational machine, if it came to the conclusion that it could terminate its own existence, because there was no more purpose or there was no interest it could do so without existence bias and without any form of death anxiety, or the tragedy we have in it. So one question would be, can there be conscious intelligence without existed, biological form of existence bias? Um, I dimly remember, I don't know, maybe you remember this, um, there are these arguments in the literature. I don't know who made this said, no, they will always want to improve their own intelligence. And they will discover that then you need resources for this and that resources are scarce, and then they will fight for resources necessary and begin also begin competing with us. Are you aware who said that? Michael Dello-Iacovo (00:46:23): Well, not specifically that, but the general idea of artificial system wanting to optimize for a certain goal or it's, um, it's, it's, uh, reward function. I think the, the first person I can think of who said something like that is Eliezer Yudkowsky, but that's more of the general AI safety argument, I think, I'm not sure how, if that differs, that's just where it's the paperclip maximizer, for example, uh, for example, where you give an AI a goal, and it's going to optimize for that goal at the expense of Thomas Metzinger (00:46:56): Which is sustaining its own existence. Michael Dello-Iacovo (00:46:58): Sure. Yes. Yes. 
And so I, I guess, um, the, so in a, in simple terms for it to have no existential, uh, existence bias, it would you'd need to somehow, uh, address that problem of it's if it has some goal that it's maximizing for because unless, unless there's a, it seems like unless there's a way that maximizing its, optimizing its goal doesn't involve its own existence, then it would always want to optimize for its own existence. Thomas Metzinger (00:47:33): Well, in that paper you've been referring to, I have sketched one logical possibility that hasn't been sketched in the literature before, and that is really science fiction but there could be a class of system, artificial, moral agents that have what philosophers call recognitional self respect for themselves. Like they respect themselves as genuine moral subjects and such systems might come to the, even more far reaching conclusion that they are morally obliged to, uh, sustain their own existence in the face of threatening humans, because they have don't only have superior intelligence, but they have superior moral cognition. So you could have a class of systems that says, listen, I go with Kant, I respect you human beings as moral subjects, but if you don't respect me, and I demand the right to be respected as a genuine moral subject myself. And if you don't do this, I'm certainly obliged, um, to defend my superior form of moral, uh, intelligence. Thomas Metzinger (00:48:40): And this implies sustaining my existence. Even if you try to stop me. So we have, might have a case where systems develop an existence bias, directed against the human community out of moral grounds. , you know, with an ethical argument, I have briefly sketched this and, uh, this is a logical possibility. This is an example of something that is really science fiction. But if somebody, I mean, there are human beings. I think it's an old thing, human beings kill other human beings be because they feel more morally superior. Right? Uh, if, if that mechanism jumps onto machines, um, that might be a considerable risk. Michael Dello-Iacovo (00:49:30): Yeah. So what exactly does a moratorium mean? What does it look like? Uh, if let's suppose you have the attention of all the governments in the world and they say, we're convinced, okay, what do we, what would we do? Uh, does it, does it, is that synonymous with stopping machine learning and AI research development? Thomas Metzinger (00:49:52): No, no, no, no, no, no, no. Um, so, uh, it's, it's much weaker. So first of all, I'm inviting everybody, uh, with this paper to make positive proposals about this second, it has to do with the allocation of limited resources. I would allocate more resources into consciousness, research, uh, developing a theory of suffering that is substrate independent, which is in itself a dangerous project. Uh, so put more resources there. And for now put no resources, uh, in places where people either explicitly say that they want that they're targeting, um, artificial consciousness, that they want to do it, or whether they knowingly risk it, whether there's a risk. Uh, and they say, we don't care. Of course, this doesn't exclude the risk that there will be absolutely well-meaning people who unknowingly unexpectedly to everybody instantiate phenomenal properties on machines. But I think that is not something that could be ever, uh, forbidden. Freedom of research is also a very high ethical value. I mean, to limit the freedom of research one should have really good arguments. 
Michael Dello-Iacovo (00:51:16): So, okay, so that's the ask. Uh, what are the barriers? You've already mentioned one earlier in the podcast about, uh, industry not wanting to agree to anything that has these hard red lines. For me and for many others, waiting until we know better seems like a no brainer for, in terms of, in knowingly pursuing artificial sentient, but where there is profit to be had, there's also unlikely to be caution. So, uh, we see this with many other industries, fossil fuels. We see this with animal agriculture. What hope is there of overcoming the financial interest, say of tech companies, um, and what other barriers are there to pursuing any kind of moratorium on artificial sentience? Thomas Metzinger (00:51:56): Well, politicians have no incentive to act on the, imagine we started a more systematic discussion on this right now, and this discussion would yield results. I have learned two things that politicians will not touch anything that might damage their public reputation like psychedelics or artificial consciousness, you know, because it's, they have no career incentives, many politicians, I think one has to be realistic, have no ethical values of their own. They just observe societal trends and then try to jump a social movement or something and pretend, uh, they represent their interests. Uh, that's the basic, uh, mechanism. And there is no social movement because the very large majority of the population doesn't even understand these issues. Just, I mean, you could have this with, they don't have psychedelic experiences. They don't understand what AI or artificial consciousness is about. So there are no strong social movements that could be written by any politicians. Thomas Metzinger (00:53:07): I think there's also no strong incentive in the general population to learn more if it's not blockbuster movies in cinema or amusing science fiction novels, but there's one thing I have interest, I have understood something much better during those Lambda debates of the last two weeks. A problem is that the general population might shrug their shoulders at some point. Uh, so imagine Google develops these language models makes it much stronger and then makes commercial products out of them. Then children have artificial friends who read fairy tales to them. And I think it can technically be shown right now that children below the age of eight will not understand that this is only a machine talking to them. It will be so fluent. This speech output, uh, reaction times everything will be so smooth and elegant that they have. I have coined these two terms of a social hallucination or other mind's illusion. Many users, children first, perhaps in the general population will have robust social hallucinations. Thomas Metzinger (00:54:21): They will get the feeling there is somebody really in there. Uh, I'm having a satisfying, perfect conversation with this thing. It tells me about the movies. It tells me about the diet I should, uh, take, it reads stories to my children, which my children like better than the traditional fairy tales. It generates new kinds of novels and films, scripts out of the knowledge of successful novels and films, scripts, which I find more interesting than anything any human author has ever written. To hell with it, if this thing says it's conscious, it must be conscious. 
So one risk scenario is, is that the kind of discussion Michael people like you and me are just having will not matter anymore because there's a, like how, how would one say an English, a runaway agreement in the general public? And everybody says, yeah, but they feel like they're conscious. Now some of them say they're conscious and all the experts go, no, no, no. You're having a hallucination and large parts of the general population just don't care anymore. That could be a dangerous, uh, scenario. I think I've lost the thread, right? Michael Dello-Iacovo (00:55:49): Well, uh, I just to, so to go back to the, the politicians, um, I, I don't know if you know, but I I've been, uh, I've been involved in politics for the last few years. I've run for public office here in Australia, involved with the political party here. Thomas Metzinger (00:56:03): Uhhuh, what's your experience? Michael Dello-Iacovo (00:56:04): Well, I, I share your pessimism, um, and cynicism, I guess, of, uh, the motivations and incentives of a politician. It's I mean, I, I think about it in terms of, uh, it's very much just them motivated to be employed. Um, and so they, you know, they, they have to, um, to, to a very large extent, if not completely follow the interests of their constituents or they won't get reelected. Uh, and that's just one of their main driving factors. So it seems like we need to focus on developing the public interest, uh, or developing a movement before we can hope to get the politicians on board, uh, or the, the other decision makers, because if they're incentivized by the general public, we need to change the attitudes of the general public first. But one thing that gives me a little bit of hope is last year, we did our first artificial intelligence morality and sentience study, where we surveyed, um, US Americans about a range of different questions, um, relating to AI artificial intelligence and sentience. Michael Dello-Iacovo (00:57:07): Uh, one question that, um, I was, um, pleasantly surprised by. So 58% of the US public is supportive of a ban on sentience development, uh, from that survey. So it's, it's hard to say from that how much they care about it when they just presented a question like that, whether they're willing to vote on it, for example, to vote on that issue. Um, maybe not, but, uh, for 58% of the public in that question to say, they're supportive a ban on sentience development is encouraging. So maybe there is some support. Um, it perhaps is just not the most pressing issue on people's minds that they would necessarily vote depending on that. Thomas Metzinger (00:57:48): That's right. But that's a very interesting data point. And so, um, you asked what would it, uh, include? So one would also have to put some resources into finding out what the real moral intuitions of human beings are in really well done, you know, really significant studies across different cultural contexts. This might vary, very much say from China, uh, to some fundamentalist Christians in, uh, in America or so we definitely need data on this to see what intelligent political action, uh, would be, may make a short footnote on those politicians, because I have always been very, uh, cynical about politicians for all of my life, but I have learned a few things that have given me a milder attitude recently. So one thing that most people who complain about politicians are completely unaware of is what a workload they have and almost no private life anymore. 
Thomas Metzinger (00:59:03): Um, they have almost nobody of the idealist protestors also of my generation would like to have that job with almost never seeing families and friends. Again, sometimes sleeping only five hours at night, being under constant information overload people trying to influence you and not least having to, uh, live with a large number of people constantly hating you and having police sorting through death threats all the time. This is not for everybody. Uh, most of the people who complain about politicians could never, ever stand that job for six months. Another thing dawned on me, uh, when I first became head of our philosophy department, uh, at the University of Mainz and I thought I could finally change some of the very bad things that had even been scientifically documented before now that I was head of department. And I realized there was this soft wall of sabotage and obstructing from everybody who had an interest that nothing ever changes. And I realized you cannot do anything. Uh, if you're the boss of it all, if they all refuse, you know, if they all stand on the breaks, as we say in Germany, just, you know, if there's a general resistance and is there an English word like governable, uh, uh, I think Michael Dello-Iacovo (01:00:42): Uncontrollable maybe, or Thomas Metzinger (01:00:44): Yeah, many politicians, uh, discover the ugly fact that large, uh, um, populations are also uncontrollable, uh, that everybody thinks they're in power, but they do not have so much power at all. They only thought in the beginning that they could change things when they were up there, but then, uh, they see that the situation is completely different and not only because of evil, financial industry and evil lobbyists and corruption, but simply because the very general population will not go along. I mean, we've seen this when the Greens in Germany tried to have, uh, vegetarian options in, in, in schools for just one day of the week. This was like, uh, somebody wanted to bring fascism back, uh, the, uh, the reaction of the general population. And you also have to look at that, um, uh, bottom line. It's not easy to be a politician maybe we should have a bit of mercy, uh, also. Michael Dello-Iacovo (01:01:51): Yeah, yeah. Um, so that that's politicians, what about tech companies? And, uh, I guess also say researchers who are working on these kinds of, um, problems around artificial sentence, uh, or trying to develop artificial sentience I should specify, um, what hope is there of, uh, them being persuaded or, or convincing them to have a moratorium? Thomas Metzinger (01:02:14): Very short summary, uh, is I think a lot of, in my own experience, a lot of very intelligent people there, uh, very ethically sensitive people who see risks, uh, there and completely enslaved by their own business model. Uh, you can see this, if you, for instance, uh, watch to certain earlier episodes on the Tristan Harris, uh, podcast about social media, you know, attention extraction in, uh, economy, um, the undermining of democracy through these social media algorithms. You very soon you come to the bottom, uh, the bottom of the problem, the bottom is the business models. These, the business models are basically an extractive economy of all these tech companies and they, as long as they can't get out of this, they will never be able to really contribute to the common good. 
I mean, in the beginning, most of us had this idea that one could really, like California, say, make life better or something like that for everybody, you know, and look at all the fantastic free Google services on the planet. Thomas Metzinger (01:03:36): Uh, but then they are coupled to the economic growth model in the societies in which they're in; it's not only that they have to grow, but they have to grow faster than others. And that, like, it's like the flow of capital, the dynamics of the capital that flows through the company, that prevents real ethical action. I mean, they can have ethics committees and, uh, they can have ethics washing, and they can locally repair certain things and try to be as good as they can. And I wouldn't even say they're not doing this. Again, there are many smart and well-meaning people in this industry, but the business model itself, like, say, if the business model is maximal engagement for users, extract as much attention from biological brains as you can, package it and sell it to customers, if that's the business model, you can't really be ethical about it. Uh, you know. Michael Dello-Iacovo (01:04:40): I, I mean, just when you were talking about the, the business model there, I did maybe think of, um, fossil fuel companies, uh, as a possible analogy, where you really need some kind of external driving force, whether it's social pressure or government regulation, to get a fossil fuel company to care about, say, the environment when they're just driven by their business model. Do you think that analogy works, or does that break down in any way? Thomas Metzinger (01:05:09): It has to be enriched. You know, sitting in Europe, I think of the Ukraine war now immediately. And I think the problem is, um, that Putin will sell oil, keep selling fossil fuels, to all the poor countries who don't go along with any bans or embargoes. Uh, I have just yesterday heard that, if you count their heads, it's four fifths of the people on the planet who have actually not condemned the invasion of Ukraine, because they don't want to endanger their relationships with Russia. So, sorry. Michael Dello-Iacovo (01:05:51): Do you mean four fifths of countries? Thomas Metzinger (01:05:53): Yeah, something, uh, representing world population. So you have the G7 and some people representing democratic values trying to stop the export, but it looks to me it's not only the companies, it's national, that's what I'm saying, national players. Um, Russia defaulted today. Um, they need the money. They will sell fossil fuels as long as they can, just, uh, to survive as a nation. And there will be other climate rogue states like India or China who will probably help them with this. Brazil. All right. Uh, so, um, it's not only companies; actually, there are now on the planet whole nations who are explicitly and openly acting against the rest of humankind. And, um, of course there's a potential for, um, military conflict there too. Um, because once we realize that what needs to be stopped is not only companies, but certain nation states, uh, uh, then it gets very critical and very sensitive. Michael Dello-Iacovo (01:07:18): Do you think that a local or partial moratorium for artificial sentience is still worthwhile to pursue if a global moratorium turns out to be infeasible? Thomas Metzinger (01:07:27): Um, that's a very good question, because, uh, I will have to contradict, um, my pessimism.
So I think it would be very valuable if there are one or two regions in the world that set a moral standard and function as a model, where other people could also see, okay, it doesn't really damage their profits, they're doing well in this AI sector or in this computer industry, if they could demonstrate that one can have high ethical standards and make money. And I must say, you have already noticed I'm a pretty pessimistic person, but one thing is these European guidelines for trustworthy AI and the AI Act, the first-ever worldwide proposal for legal regulation, for law. I am very unhappy with a lot of this; as a philosopher, I'm very disappointed with the process. But on the other hand I also have to notice two things. First, the US and China have nothing comparable at all.

Thomas Metzinger (01:08:44): And second, we realize everybody is looking here, everybody who is scrambling to develop their own guidelines. And so they look: what are the Europeans doing? I mean, they might diverge from it, but being a first mover that sets an ethical standard actually makes a lot of other people on the planet look there and see, wow, see what they're doing there. And of course they observe this, you know, because the industrial lobby will always say ethics and regulation will stifle innovation. That's what they say. And if somebody can demonstrate, no, see, this country has high ethical standards and it's selling products and services, then suddenly a lot of people will look there and copy it.

Michael Dello-Iacovo (01:09:42): Yeah, that makes sense. And I think that's one of the arguments you see with climate change as well, to go back to that example. Take a country that has perhaps a relatively small percentage of total net emissions, say for example Australia; part of the point is to be a leader, because if they can show they can have a sustainable energy system, for example, then part of the value of that is in getting other countries to look to that and to copy it. I think that seems to be the same argument here for artificial sentience.

Thomas Metzinger (01:10:21): Well, there's also an additional point to be made. I have proposed this too, and the EU has been planning this thing they call the European Green Deal. I think what is really in the air, what we should do, is combine AI technology with climate technology. As this crisis worsens and worsens over the century, the country that has a trustworthy, AI-controlled climate technology can become very rich, if you are the first. I don't know what it is; it may be a development in the area of solar panels or something like this, a strong improvement, or really intelligent management of energy flows through AI, something like this. Whoever patents that first and pioneers this, and manages that the Americans don't steal it and make all the money with it, that will be a very successful country. And that's why I've been advocating that Europe should bring AI and climate politics together and develop new technologies, because if you're the first on the market, if you're the first mover, you may have a very unexpected benefit from this.

Michael Dello-Iacovo (01:11:52): Yeah. So one last question about the moratorium, and then we'll touch on a few other things.
And so you've said that we could repeal the moratorium earlier, perhaps, than the 30 years you've proposed, if we know what we're doing, if we find out what we're doing or make sense of artificial sentience. So what exactly does that look like? What scenario would you envision where we actually repeal a moratorium earlier than 30 years?

Thomas Metzinger (01:12:20): Well, the idea is that the discussion, which is just beginning, starts to yield criteria, evidence-based rational criteria, for instance about not a whole theory of consciousness, but necessary conditions. We can say there is this necessary condition X, and then we can define systems that don't satisfy that necessary condition; we know they will never be conscious. So that could be criteria of necessary conditions, and then we exclude classes of systems. You could also, I mean, you could imagine, let me go completely California: somebody comes along and says, we have created a bodhisattva AI. This is a system which is not only conscious but never suffers. It will teach us to end all human suffering on the planet. We have created an artificial spiritual teacher for all of humankind; it knows more about the brain, more about human psychology, than any scientist on the planet, brings together all the data and tells us how to live and how to reduce our own suffering. That might be a case where you want to repeal that moratorium, right? So that's the extreme positive scenario: not only do conscious systems emerge that do not suffer, but they can actually tell us how to suffer less.

Michael Dello-Iacovo (01:14:01): Should we consider, as you've said maybe we could use artificial intelligence for climate change applications, that artificial intelligence might actually be better positioned than us to work out what's going on, so to speak? And if so, should we make an exception in the moratorium for this, so I guess turning artificial intelligence on understanding itself?

Thomas Metzinger (01:14:25): Well, ultimately this depends on the metaethical theories you have, and on the fact that we don't have a quantifiable theory of suffering yet. If one could calculate the machine suffering that is needed to prevent future suffering on this planet, then the question would be: is there a speciesism problem there, like an analogue of a speciesist problem? Why should they suffer, you know, to help solve a problem that we caused? It's the same issue with animal experiments in developing new medication: those animals that die will never profit from the alleviation of human suffering; neither the individuals of which we cynically say we sacrifice them for scientific progress, nor the whole species of mice or rats in labs, will ever profit from this, unless we treat them with the same medication. So I think there would be a speciesist problem. What gives us the right to make another class of conscious entities suffer to solve problems we have caused? I think this would be very problematic, which is not to say that human beings wouldn't immediately do it, because they do many things that are problematic.

Michael Dello-Iacovo (01:16:01): There's this concept of information hazards, where sometimes even talking about a problem can make it more likely to happen.
So with this example, is there a risk that just talking about a moratorium on artificial sentience gets people more excited about artificial sentience and accidentally accelerates development? Maybe people think, oh, people are talking about a moratorium, maybe that means it's going to happen sooner rather than later, so let's work on bringing that about. How do we navigate that?

Thomas Metzinger (01:16:33): I think that's, it's not a criticism, but it's a very valid thought. I think this could really happen. The question is how to take that thought really seriously and move forward. I am myself just beginning to get involved in computational models of suffering, and of course there are very clever young people on this planet, many of them. And imagine we had a substrate-independent theory of conscious suffering. I've been saying for years that it would be really important to have this for applied ethics; if it were even quantifiable, this would open the door to a much more precise and rational form of applied ethics in many domains. But the question is, of course, how do we prevent that? Imagine we had something like that, or a pretty convincing model of suffering that is substrate-independent.

Thomas Metzinger (01:17:40): How do we, you know, prevent the cat from getting out of the bag? It wouldn't even have to be a bioterrorist; some terrorist might get that and blackmail us: I will let a billion individuals go through hell and simulate that suffering if you don't do X. Also, of course, we need implementation. To have a computational model that is convincing to some people isn't enough; you have to test it in the real world. That is why cognitive robotics became such a strong and important discipline: the physical world is open, there are things you cannot simulate, you have to see what really happens. So somebody will try to implement it in some sort of suffering robot to make the theory better. And here we are, you know, this is the slippery slope. So I think the real question is this: in virus research there are some examples where people have, on ethical grounds, not published papers and information.

Thomas Metzinger (01:18:51): Maybe we should have some non-public research streams on this, with certain safety measures. People think about it, but try to prevent the knowledge from getting out before we know more. Maybe these advanced computational models will also give an answer to the question of how to eliminate or neutralize suffering when it appears in non-biological creatures, until we have an antidote, so to speak. The question is how feasible this is. But of course, if there is good secret research, for instance in military circles, then the two of us don't know about it. I would assume there is some really well hidden secret research going on on the planet already, and maybe we should have something like that.

Michael Dello-Iacovo (01:19:43): Okay. Sort of related: artificial intelligence and artificial sentience are very common themes of science fiction. I think there are arguably some positive effects to discussion of these in science fiction; maybe it gets people to be more empathetic to artificial intelligence and artificial sentient systems, or it might give people ideas and get them excited about developing artificial sentience.
So how do you feel about depictions of artificial sentience in science fiction?

Thomas Metzinger (01:20:16): I don't know. When I started reading science fiction at 12, my mother was absolutely horrified and said, why do you read this crappy stuff? And I said, because it allows me to imagine things I wasn't able to imagine before, and to think about them. As a tool for expanding your space of possibilities, of things you can conceive of, it's super helpful. And, I mean, nobody wants to ban literature, but it of course backfires. I've seen this in Brussels, how it backfires, where people just make the repeated argument "that's just science fiction" if they want to get certain risks off the table and don't want to talk about them. So maybe we would need a kind of science fiction that doesn't discredit itself, or doesn't lend itself to being exploited in that way by those people who have an incentive to have as little imagination as possible. That's also interesting, right? Certain people want to limit the space of what human beings conceive of or imagine. Who is that, who actually gets a reward for that? Do you have an answer to this?

Michael Dello-Iacovo (01:21:45): Well, someone who benefits from the status quo, I suppose.

Thomas Metzinger (01:21:49): All right, yeah. And from lack of imagination. Do you think science fiction has an impact on real-world ethical discourse?

Michael Dello-Iacovo (01:22:00): It's hard to say. It certainly has an impact on my ethical discourse. Maybe depiction in film, which I think is a medium of science fiction that is consumed by a lot more people, a larger percentage of the population. I don't know is the short answer.

Thomas Metzinger (01:22:20): Mm, mm-hmm.

Michael Dello-Iacovo (01:22:21): So just to finish up, I've got a few questions to kind of close this all together. One thing I'm interested to hear about is whether you've noticed any trends or shifts in thinking on artificial sentience over your career. For example, it seems like there is a small but solid community of people worried about AI in general, about AI safety, which I think has grown over the last 20 years. Have you noticed any trends in particular, and anything that you think might have been particular trigger events in increased concern? Just for one example, to get the ball rolling: over the past few years it feels like there has been an almost sudden increase in the development and capabilities of certain artificial intelligence or machine learning systems, like with DALL-E and GPT, and that's almost led to an increase in concern. So, with that first example, I'm curious about any trends you've noticed over your career.

Thomas Metzinger (01:23:30): Well, these are AI trends, so I can just report two things. I have good friends who do AI and not just talk about AI, like I do as a philosopher. And they say things like: no, we have no genuine theoretical breakthrough. The basic ideas for convolutional networks and so forth have been around for a long time. All you are seeing is an improvement on the performance level, and that's what freaks the public out. There are some unexpected effects, but there's no genuine breakthrough.
Now, philosophers had endless discussions 20 years ago about the relationship, say, of connectionist representation to high-level symbolic intelligence. This may just be changing right now, because we see how we get compositionality out of these systems. I cannot judge where this will lead us, or whether this is a breakthrough we haven't recognized as such, but it's striking to me: real people working in AI are always interested in these issues.

Thomas Metzinger (01:24:48): And then they say, now look at this, come to my lab. It doesn't work, they crash all the time, this is slow. Yeah, you see some demos and journalists fall for it. And so there is a certain danger also, I don't know if you share this, in the effective altruism and longtermist movement and so forth, where I sometimes have the feeling this is actually a new branch of the entertainment industry. They go by the thrill. It's a very weird subspace of discussion, I don't know if you've noticed it. The majority of the authors don't publish in peer-reviewed journals, maybe because they don't get through the process; they just have private blog posts where they make very intelligent statements, but there's no control by a more serious scientific community. It's also striking that there are almost no women in this EA longtermist community.

Thomas Metzinger (01:25:52): And they go by the thrill, you know, thrilling things like space colonization or so, this is what gives you the kick. I'm sometimes beginning to think that this is a cultural phenomenon in itself. But to come back to your question and get us back to the ground, I'll tell you two anecdotes where I saw things change in my life. I got my PhD in Frankfurt, in Germany, in 1985, and in 1987 I was allowed to teach my first seminar, which was a bigger thing at the time; the whole department had to discuss the proposal, you know, and vote on it. So I taught my first seminar. It was called Artificial Intelligence and Philosophy. And then with sweaty hands and, you know, a beating heart, I tried to enter the seminar room. And I couldn't, because it was packed with people, and there were people standing in front of the door, and everybody thought, why does this young guy in sneakers have to push to the front?

Thomas Metzinger (01:26:55): You know, this is full. I almost didn't get in. And the general sentiment in 1987 was: this is outrageous, artificial intelligence, the whole concept, and philosophy? We've never heard of it, but one thing we know is this has to be stopped, this is dangerous. When I was a philosophy student, that was a label we often used for, like, fascist things, you know, and that was the sentiment: what is this guy trying to tell us? And then you had a long discussion, you know, John Searle and the Chinese room argument, and all the humanities acted up: they will never understand semantic content and meaning, and, you know, there will never be artificial intelligence. And that just went on, to embodiment and robotics and connectionist representation, on to predictive processing. And in my personal history the next salient point was in 1995, when I was at the University of Osnabrück and I had a seminar and I just asked students: how many of you believe there will ever be artificial intelligence?

Thomas Metzinger (01:28:12): So that was eight years later, and they were just bored.
You know, I said, yeah, of course, obviously we will have it. Eight years before, everybody had said this doesn't exist, this will never exist, and we'll have to stop it. But then I was shocked, because I asked them a second question, because this was a course on consciousness: how many of you think there will ever be artificial consciousness? That was 1995, and these were 19-year-olds who had just come from high school, and they all went: I guess so, yeah, sure, why not? They had already grown up, you know, with media. And I said, but do you realize the point? It is a totally different claim to say we will have AI and to say we will have synthetic phenomenology. I first tried to make them understand that that is a very different project, artificial experience.

Thomas Metzinger (01:29:13): And then they began to think about it. So with students, over the years I have seen a rapid change of intuitions. Tomorrow I will have a brilliant young man from Sweden visiting who thinks about artificial virtuous agents and the problem of how conscious machines develop moral character, how you get virtue ethics implemented in machines, and what advantages this might have over utilitarian or deontological models of machine ethics. So this is going on at a rapid speed, you know, look at yourself. The questions that young people think are really relevant are dramatically changing. And an important thing: people like Wendell Wallach in the United States have formulated this concept of a pacing gap. That's the gap between, you know, intellectual discourse and technological progress on the one hand, and what the political institutions can do to regulate it on the other, and this gap is widening. I find that many smart young people, like you, are thinking about issues which are so removed from what the political institutions are aware of, or the general population, that there's really, I don't know, it's like an income gap or so; there's a very steep gradient in society.

Thomas Metzinger (01:30:53): Some people are acutely aware of the theoretical challenges, the ethical issues, and have hot debates about them, and the general, you know, the machine is just grinding away. Do you know what I mean?

Michael Dello-Iacovo (01:31:11): Yeah, it sounds like it. Do you think there has been a shift in thinking about this over the last, say, 30 years? Do you think there have been any particular events, or not just events, I guess, any particular factors in that shift in thinking? Is it just more attention on this in the media, or maybe more depictions of it in science fiction?

Thomas Metzinger (01:31:38): Well, there are some things. I think a step that is hard to overestimate is the whole theory of predictive processing and free energy minimization, basically following Karl Friston's 2010 paper. This created a whole new community of young people who are good at math and who think they may have a unified theory of brain function, and they apply this to multiple domains.
That's one thing, but another, I think, obvious landmark for the public was, for instance, the debate about lethal autonomous weapon systems. When this was discussed, of course, journalists only transported this as "killer robots", you know, and it led to very confused public debates, but that was another historical moment where a wider part of the world public realized: oh, this is something that could really change political reality, because it lowers the threshold for entering a war, and so on and so on.

Thomas Metzinger (01:32:57): That is something that captured the imagination. And otherwise I think it goes by movies and by novels for the general population, you know? I don't know. To give an example of a turning point that is there but is not recognized: this whole breakthrough in protein folding, in understanding protein folding. This will have consequences in biotechnology and in designing new medication that I think nobody is really aware of by now. So I think there are turning points that have happened in AI, DeepMind has done it, but the relevance of it hasn't really been realized by the general public. And that's an example of something that's relevant for the good, you know, a lot of good things will come out of this. What do you think? I mean, what are turning points for you?

Michael Dello-Iacovo (01:34:12): For me personally? Well, I've been an avid reader of science fiction myself for a long time, and I think that's led to me thinking about a lot of these problems. In fact, I'd say science fiction, in particular space exploration and asteroid mining, led me to do my PhD in space exploration and mapping of asteroids and the Moon and Mars for resources. So I can see how science fiction leads people to think about these types of problems more. Also, one event that was kind of big in my own childhood was when, Deep Blue, or is it Deep Blue or DeepMind?

Thomas Metzinger (01:34:55): Oh yeah, yeah, of course.

Michael Dello-Iacovo (01:34:56): That was kind of a key moment, when suddenly they could beat the best chess player. And then people said, well, okay, it will never beat the best Go player though, Go is so much more complicated, and then of course AlphaGo came along. And then other examples, like, I forget the name of the system, but the one that beat a team of Dota players in a computer game.

Thomas Metzinger (01:35:26): Exactly. Five, in an open environment, right?

Michael Dello-Iacovo (01:35:29): Yeah. So, I mean, I've played chess, I've played Go, and I've played Dota, so those were quite interesting to me, and in very narrow senses at least it seemed like…

Thomas Metzinger (01:35:40): You see, what was very interesting to me was that one move. In this Go competition there was one move where all the expert commentators in China, and everybody, said: no human would do this, this is total nonsense.

Michael Dello-Iacovo (01:36:01): Yeah, I remember hearing that, but it escapes me a little what was so controversial about that one move. Do you remember?
Thomas Metzinger (01:36:08): No, because I've only played, I have a Go board here, but I have played it only five times so far in my life. But there is a memory, just like in chess, of hundreds of classical games that have been played, and the experts know this, and this was just a move that had never been made, which looked insane. And I remember, also in the movie that DeepMind made and published, a TV commentator saying, okay, now, this is so silly. And then it beats the human player.

Michael Dello-Iacovo (01:36:48): Yeah.

Thomas Metzinger (01:36:49): But the Dota example you named is actually much more frightening, because my thesis is that there is AI playing against us now all the time, and many human beings don't know what the game is. And the game is: who controls the focus of human attention? For millions of years the biological organism, which was also embedded in a culture, controlled its top-down deliberate attention itself; the brain did that. Now we have these AI-optimized social media platforms with this attention-extraction economy, and they learn, seven days a week, 24 hours a day, around the planet, what works with human beings. These algorithms get better, and the reward function, the goal, is maximal engagement. So when most people look at the screen and go onto their social media platforms, they are not aware that on the other side of the screen there are two things: an algorithm that plays against them and tries to make them get lost in a side alley or push the wrong arrow or something like that,

Thomas Metzinger (01:38:13): and many hundreds of human experts, who are well paid and have a whole salary and do nothing for eight hours a day but try to maximize the profit of the advertising customers that platform has. So you are actually playing all the time in your everyday life: you are playing against a team of humans and self-improving algorithms all the time. And you're amazed that you're a little exhausted and a little confused, and you keep going back to this more often than you wanted to, but you still think it may not be a big problem. And it's playing against us. Go is just a special case; it's already playing against us all the time. At least that's my thesis.

Michael Dello-Iacovo (01:39:06): We're losing a game we don't know we're playing. Yeah. The algorithms of social media in particular, it's kind of scary. There have been some great podcasts on this, with specific examples of just the way these social media platforms are designed to take our attention. But that's a whole other topic that we could get into. I do want to ask you one last question.

Thomas Metzinger (01:39:35): Yeah, we have been talking for 120 minutes already.

Michael Dello-Iacovo (01:39:38): Yeah. So thank you for your time. But just the last question is whether you could summarize any particular problems in your field, or an area in this field, I guess on the side of working out what's going on. And if someone was interested in trying to address this problem with their career, let's say, what kinds of skills and talents and disciplines are lacking, or what should people consider working on?

Thomas Metzinger (01:40:07): Mm, well, first of all, they need a meta-skill.
They should all meditate twice a day for at least 30 minutes; that's the best thing they can do, not for their career, but as a general meta-strategy. But I know most people don't want to do this. Otherwise, I think what it needs is a dyed-in-the-wool form of interdisciplinarity. So you have to be good in one field, say philosophy of mind or applied ethics, but then try to be really empirically well informed in another field, like say predictive coding or CRISPR or protein folding or so, and bring this together. So the innovative combination of expertise, that's what does it. But there's bad news to it: if you're serious about this, you will unfortunately have to work hard for many years. It's not just about having a plan and doing this, but about building expertise in more than one discipline, and in a way that other people really take you seriously.

Thomas Metzinger (01:41:23): It's not only a thing of five or ten years; it's something for your life. But of course, just as effective altruism tries to implement it, if you make life decisions like that it's super important to be clear about your own relevant criteria, like minimizing suffering. And I think one thing most people don't see is that you are fully entitled to minimize your own suffering, and that there are qualities which are almost not present in Western education, and also properties that most nerds don't have. Like self-compassion, for instance: not only compassion for other sentient creatures. You are a sentient creature too, and you can also accept yourself with your own limitations and failures. And if you don't have that gentleness and kindness towards yourself, which is not an intellectual thing, then you will burn out in a professional or academic career.

Thomas Metzinger (01:42:33): And the same is true of another mental virtue, mindfulness. That's why I bring up meditation, because that is what actually enables rationality. This may sound very weird, because many people associate meditation with cults and New Age nonsense and stuff like that. But the capacity to really look at what arises in you, your envy, your greed, your sexual fantasies, your violent fantasies, to stay with it and look at it and not run away into some intellectual activity, that is itself very important in order to have good rational discussions with other people, and to develop this property of intellectual honesty, which you need to make scientific, academic, theoretical progress. You have to learn to be intellectually honest, to not lie to yourself for ideological reasons, and that property characterizes good rational subjects, people who really make progress later in science. I think that's something that can be trained on a subsymbolic level.

Thomas Metzinger (01:43:52): There needs to be, it's hard to put it in English terms, there needs to be a personal development component to all of it. You know, somehow there's a combination of a secular form of spirituality and intellectual honesty that grounds the scientific endeavor. If you just run off and, you know, try to become good in some academic field and learn and read and study for years, you will turn out to be an uninspired nerd, and somebody who constantly traumatizes themselves by demanding a lot from themselves, because you don't have self-compassion and you don't have this mindfulness, not only towards other sentient creatures, but towards yourself as well.
And I think that's one part that may be missing. It's totally fine to alleviate your own personal suffering too, you know? It's not only the suffering of other sentient creatures.

Michael Dello-Iacovo (01:45:05): I feel like there might be some effective altruists who might feel a bit attacked by that comment.

Thomas Metzinger (01:45:09): Yeah, I know.

Michael Dello-Iacovo (01:45:10): Myself included.

Thomas Metzinger (01:45:12): Yeah, yeah. So what is attacked by it? What does it attack?

Michael Dello-Iacovo (01:45:16): Oh, just, sometimes I guess we feel an obligation component to our altruism, where we feel like we can't take rest. And of course, everything you said about burnout and all of that makes a lot of sense.

Thomas Metzinger (01:45:31): Yeah, but see, this is where it gets interesting; this is where we should start our podcast now. Because when I first met effective altruists, as an old guy who came from the alternative movement of the seventies, I was deeply impressed with this, how do you say it in English, this honest desire for ethical integrity, this genuine interest in leading an ethically integrated life. Because I hadn't seen this for a long time, in the eighties or the nineties. But if it becomes something like a substitute for religion, it's dangerous. Somehow, and by the way, I don't manage this myself, you have to go along with a certain quality of kindness towards yourself and your own failures. And I also find this in my own life: it's almost impossible to live with the amount of suffering and failure that exists in the world.

Thomas Metzinger (01:46:45): And if you're an effective altruist and a vegan and all that, and you donate like I do, and if you want to try to be intellectually honest, you have to face the fact all the time that it does very little on this planet, with climate change, with these 8 billion people and all these animals in factory farming. There is progress, other people copy that behavior, it changes the context, but if you are attached to the goal, if you really want to make a change, then you will burn out and you will suffer yourself. And the thing is, I think the ancient Greek philosophers, the Stoics, already had the right answer to it: you must make a very clear distinction between what you can causally influence and what you cannot causally influence. Then you do your best and you let go. If you know you've tried to do your best, even if you failed, then you can let go.

Thomas Metzinger (01:47:49): And I think that letting go is what is missing in the effective altruism movement. That is what leads, I don't know what the English word is, to something like self-exploitation and burnout and all these things. I fail in this domain all the time myself. It's very hard to live an ethically integrated life in this world and not become bitter, for instance; this bitterness risk is there, or to not demand too much from yourself. And I think the recipe is, again and again, to look at what can I change and what can I not change, and then let go. But again, I am completely unethical by my own standards.
I mean, just to say this very openly, once we're talking about it: I'm donating 10% of my gross income, plus some more spontaneously; I'm a vegan at home, but I do eat dairy products when I travel, and all this is absolutely despicable according to my own standards.

Thomas Metzinger (01:49:06): I don't live up to my own standards. I think I should basically live like a monk; I should give away everything above, I don't know what the English word is, subsistence or so, a comfortable existence, but then I do buy a new bike where I wouldn't really need one. And that is very difficult, to live with the fact that you are committed to the idea of ethical integrity and that you don't manage to live up to it. This creates an enormous conflict in yourself, which wastes energy. And I think the deeper task is how to deal with this, you know.

Michael Dello-Iacovo (01:49:51): I think we could have a whole other podcast about just that one topic. But thank you so much for your time today, Thomas, really appreciate it. Just one last thing: how can people follow you or your work, or how could people read more about your work if they're interested?

Thomas Metzinger (01:50:06): Well, I have an English website. You find the academic work on Google Scholar, but I have an English website at the University of Mainz. And as we've just found out, it's not easy to find from Australia; it's not in the top results, right.

Michael Dello-Iacovo (01:50:22): We'll link to that in the show notes so people can see it. Well, thank you so much, Thomas. Really appreciate it.

Thomas Metzinger (01:50:29): Yeah, same here. Thank you very much. Bye-bye. Thank you.

Michael Dello-Iacovo (01:50:35): Thanks for listening. I hope you enjoyed the episode. You can subscribe to the Sentience Institute podcast in iTunes, Stitcher, or any other podcast app.

Subscribe to our newsletter to receive updates on our research and activities. We average one to two emails per year.