July 03, 2023
Guest Raphaël Millière, Columbia University
Hosted by Michael Dello-Iacovo, Sentience Institute
Raphaël Millière on large language models
“Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do, by interacting with the world. And so interactive learning, not just passive learning. You want something that's more active, where the model is going to actually test out some hypotheses and learn from the feedback it's getting from the world about those hypotheses, in the way children do; it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists: you see babies grabbing their feet, testing whether that's part of their body or not, and gradually, very quickly, learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.”
How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand for large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language?
Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.
Topics discussed in the episode:
- Introduction (0:00)
- How Raphaël came to work on AI (1:25)
- How do large language models work? (5:50)
- Deflationary and inflationary claims about large language models (19:25)
- The dangers of overclaiming and underclaiming (25:20)
- Summary of cognitive capacities large language models might have (33:20)
- Intelligence (38:10)
- Artificial general intelligence (53:30)
- Consciousness and sentience (1:06:10)
- Theory of mind (1:18:09)
- Compositionality (1:24:15)
- Language understanding and referential grounding (1:30:45)
- Which cognitive capacities are most useful to understand for various purposes? (1:41:10)
- Conclusion (1:47:23)
Resources for using this podcast for a discussion group:
Transcript (Automated, imperfect)
Michael Dello-Iacovo (00:00:09):
Welcome to the Sentience Institute Podcast, and to our 22nd episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. On the Sentience Institute podcast, we interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. Our guest for today is Raphaël Millière. Raphaël is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society, and a lecturer in the philosophy department at Columbia University. He completed his PhD in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning. And I'm excited to get stuck into large language models with Raphaël today. All right, so I'm joined now by Raphaël Millière. Raphaël, thanks so much for joining us today.
Raphaël Millière (00:01:13):
Thanks for having me.
Michael Dello-Iacovo (00:01:15):
So I'm looking forward to getting stuck into large language models with you today. But to start, I just want to ask what your background is and how you came to work on AI.
Raphaël Millière (00:01:26):
Yeah, so I have a background in philosophy. I'm a philosopher by training. I did my PhD in philosophy at the University of Oxford. But I've long had an interest in cognitive science generally, including artificial intelligence. And so, perhaps somewhat unusually for a philosopher, I've been involved in a lot of interdisciplinary research, working with researchers across different disciplines: psychology, neuroscience, and more recently computer science. So I've had a long-standing interest in AI generally speaking, but for a while it was more of a side interest of mine. During my PhD, I was working more specifically within the philosophy of cognitive science, on self-representation and self-consciousness. And I had a side project for which I started using natural language processing algorithms. And my interest in these algorithms for that particular project was instrumental, as it were.
Raphaël Millière (00:02:22):
So I wasn't directly interested in investigating the capacities and limitations of the models, but as I was doing this, and in the process learned how to train a model from scratch and fine-tune models and this kind of thing, I got really interested in it. This was before language models came around. I started around 2015, so I was using earlier word embedding models, but I really thought there was something there, really interesting philosophically, some substantive foundational questions that hadn't really been discussed yet. There is a long literature in philosophy on artificial intelligence generally, and there are some really interesting older discussions about connectionist models, neural networks from the nineties, for example. But really, this whole literature hadn't been updated in light of new developments with deep learning and with new techniques in NLP.
Raphaël Millière (00:03:18):
So I got really interested in that. And already then I was thinking, once I'm done with the PhD, I might want to focus more specifically on this. And then in 2017 the transformer architecture came out, which described the main architecture that is still used today to create these large language models and chatbots. And that really changed the game in NLP. And then language models started coming out, initially GPT-1, GPT-2. And just as I was done with my PhD and about to start my current position at Columbia University, in the summer of 2020, GPT-3 came out, which was a real turning point, because it really demonstrated how powerful this technology could be and what language models could do. Before that, it was more akin to proof-of-concept or toy models that were interesting. They could generate some grammatical text, but they couldn't do really impressive things.
Raphaël Millière (00:04:17):
And then GPT-3 came out and it was a real change. And so that's when I thought, okay, I really want to be working on this technology as my main focus in my philosophical research, and also further develop collaborations across disciplinary boundaries with people in computer science. And I really want to work on assessing the capacities and limitations of these models in light of various questions from the philosophy of cognitive science, because a lot of people in the field of the philosophy of AI work on ethical issues that are very important and interesting. But I think there is also room to work on the more foundational questions, well, not to say that the ethical questions are not substantive, that's not what I meant, but rather the foundational questions that are more connected to certain longstanding issues in the philosophy of cognitive science.
Raphaël Millière (00:05:13):
And I think that can inform the ethical questions as well, downstream. So, for example, do language models really understand language? In what sense can we say that they do or don't? What other cognitive capacities, if any, can you ascribe to them? All of these questions are poised to inform the practical, ethical questions that people generally focus on in the philosophy of AI. So I thought at the time there was really a need to focus on these questions.
Michael Dello-Iacovo (00:05:46):
Great. Thank you for that. I'm glad you mentioned ChatGPT and GPT-3. It does seem like there's been an explosion of interest since around mid to late 2022, which coincides with the release of ChatGPT, or maybe a little bit earlier than that. And you could argue that the interest has exploded because the capabilities have exploded; it really seems like there has been a large increase in capability and capacity in the last few years. So what has happened in the last few years that we didn't see earlier? Maybe we could start off with a discussion of what LLMs are doing and how they actually work. But I'd be interested to understand what's changed in the last few years in terms of architecture. Just to prompt you, I know the transformer architecture, some have argued, has been a big, major change from around 2017. So could you maybe give us a brief discussion of how large language models work, if there's such a thing, and what's changed in the last few years to increase these capabilities?
Raphaël Millière (00:06:53):
Yes. So, in some sense the question of how language models work and what they are really doing is the million dollar question that, you know, we could spend hours discussing. But I'll give you the kind of first-pass version of an answer. So, just a brief historical overview: since the very beginning of, you know, empirical work with computers, when the very first computers were invented, say in the 1940s, people have been interested in using this new technology to try to process language in some way and automate some tasks with language. For example, one of the early research projects with the first computers pertained to machine translation, trying to build systems that could translate from one language to another. And that was a big part of research in AI in the postwar, post-Second World War period, with obviously tremendous potential applications in various industries.
Raphaël Millière (00:07:55):
And so, early on, there was this kind of divide between two different approaches to this field of natural language processing with computers. One of them was the symbolic approach that was heavily inspired by theoretical linguistics and the work of people like Noam Chomsky, and the idea that we could design systems that would draw inspiration from knowledge in theoretical linguistics about grammar, for example, to hand-build some rules for these systems to process language programmatically. So, for example, people at the time would design these syntactic parsers that would just embed some grammatical rules provided by linguists to process sentences, break them down, and try to interpret their meaning using these, you know, handcrafted ad hoc rules inspired by linguistics.
Raphaël Millière (00:08:55):
The other paradigm that was developed in parallel, you could call it the stochastic or statistical approach, was trying to do things in a different way: instead of using this kind of top-down, hand-built set of rules, it tried to work from the bottom up with a large amount of data, with a corpus of text, and to learn from the statistical properties of this corpus, without these hand-built rules. Initially, that work was done more in the field of speech recognition. But then, I'll spare you the details of all this history. It's a very interesting history. But eventually there was this idea that emerged as a very powerful intuition that people call the distributional hypothesis. It's the idea that the meaning of a word is determined by the contexts in which the word appears.
Raphaël Millière (00:09:55):
So it's an idea that was initially developed by structural linguists in the fifties. One of the big slogans summarizing this kind of approach is: you shall know a word by the company it keeps. And this idea kind of percolated into the statistical approach to NLP, with the idea that we can have a model that learns from a large corpus, from the statistical distribution of words in that corpus, to kind of induce or learn some information about the meaning of words and the syntactic roles of words. And that's essentially the intuition that led, with a lot of intermediate steps, to current language models. So the way in which we get to current language models is that people started to try to operationalize that kind of intuition with a set of models that would use the statistical properties of words, their co-occurrence statistics.
Raphaël Millière (00:10:57):
So, for example, the fact that the words apple and pear both occur very frequently next to the word juice tells you there must be something in common in the meaning of apple and pear. In fact, they're both fruits, and they're both fruits that can be juiced and can be used to flavor juices. That's just one example of how the patterns of distribution of words, the company that words keep, can give you a lot of information about what the words mean. And so people realized that you could use that approach, looking at statistical properties of words, to represent the meaning of words as vectors in a high-dimensional vector space. We don't need to get too deep into the weeds here, but that's a very powerful idea, because it means that you can then represent words as these vectors, where each dimension of the multidimensional vector space in which these vectors are placed captures some aspect of the meaning, and potentially the syntactic roles, of words.
Raphaël Millière (00:12:09):
And then you can look at the distance between two word vectors, the vectors that stand for two different words, to give you some insight into the semantic distance between those two words. So two vectors that are close together in the space, like the vector for apple and the vector for pear, would be for two words that are very close together in terms of meaning. And vectors that are far away would be vectors for words that are very different. So people started working on these word embedding models. A big one was word2vec in 2013, which really showed the power of this kind of approach; after decades of it being more of a, you know, kind of proof-of-concept toy approach, it started to show real promise around that time.
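To make this concrete, here is a toy Python sketch of comparing word vectors. The three-dimensional "embeddings" below are invented by hand purely for illustration; real models like word2vec learn vectors with hundreds of dimensions from co-occurrence statistics. Cosine similarity is one standard way of measuring how close two vectors point.

```python
import math

# Hand-made toy vectors (hypothetical values, NOT real word2vec output).
# Dimensions might loosely track "is a fruit", "is edible", "is a vehicle".
embeddings = {
    "apple": [0.90, 0.80, 0.10],
    "pear":  [0.85, 0.75, 0.15],
    "car":   [0.10, 0.05, 0.90],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: near 1.0 means
    the vectors point in almost the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "apple" is far closer to "pear" than to "car" in this toy space,
# mirroring the semantic distance the embedding is meant to capture.
print(cosine_similarity(embeddings["apple"], embeddings["pear"]))
print(cosine_similarity(embeddings["apple"], embeddings["car"]))
```

The geometry does the semantic work: nearby vectors stand for words with related meanings, which is exactly the property learned embeddings exhibit.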
Raphaël Millière (00:12:53):
And then finally we get to the modern language models, basically with this turning point in 2017 that I mentioned already, with the paper "Attention Is All You Need" by Vaswani and colleagues, which extended this kind of approach to the modeling of sequences, not just words. Obviously, language is more than just a collection of words: when you take a sentence, the order in which the words appear is important. Sentences are not just bags of words, to use a term from NLP. And so if you want to model the whole sequence, you have to also model the relationships between the words in that sequence, including the grammatical structure of the sentence. And this paper from 2017 introduced the transformer architecture, which turns out to be a really powerful neural network architecture. So it's an architecture for an artificial neural network that can learn from data.
Raphaël Millière (00:13:44):
And it has this mechanism called attention that basically enables the model to attend to, or to model, different kinds of relationships between words, including syntactic or grammatical relationships such as subject-verb agreement, for example, and many others. So really, what it does is enable these models to, as it were, learn the rules of grammar, and learn all sorts of other kinds of relationships between words in sequences and sentences, such that one version of this family of transformer models is able to generate text, to produce text. And that gave rise to this family of GPT models. GPT just stands for generative pre-trained transformer. So you have the transformer architecture there in the name; it is generative because it can generate text, so you feed it some text and it can continue that text, generate new text, or just answer questions.
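The attention mechanism itself can be sketched in a few lines. This is a minimal, plain-Python version of scaled dot-product attention from the 2017 paper; real transformers add learned query/key/value projection matrices and many parallel attention heads, whereas here the Q, K, V vectors are simply given directly.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: for each query vector, return a
    weighted average of the value vectors, weighted by how strongly
    the query matches each key."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        # Match score between this query and every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        # Blend the value vectors according to the attention weights.
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

A query that strongly matches one key pulls the output almost entirely toward that key's value vector, which is how a word can "attend to" the other words most relevant to it, such as a verb attending to its subject.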
Raphaël Millière (00:14:50):
And it is pre-trained because you pre-train it on a massive amount of data, a whole subset of the internet, a really large corpus with all of Wikipedia, hundreds of thousands of books, and millions of webpages. And once it's been pre-trained in this way, it can perform a bunch of different tasks just by having the user ask the model, in the input, in the prompt, to do certain tasks. It could be: summarize this paragraph, or answer that question, or write a story about something. And these models turn out to be really good at doing that, even though they hadn't really been pre-trained specifically for these tasks. So the pre-training objective, what we call the learning objective in machine learning, for these models, again trained on this very large corpus scraped from the internet, is very simple: it's just next word prediction.
Raphaël Millière (00:15:46):
So the model will sample sequences of text from this corpus and try to predict which word follows each sequence. And it does that over and over again, millions of times. At the beginning, it's no better than chance. It's really bad at it. It just predicts random words. And as it goes, it gets better and better, because every time it makes this prediction of the word that follows a sequence randomly sampled from this very large corpus, there is a method, a procedure, to calculate how wrong it was, how far off it was with its prediction, and then use that measured error to correct, to update, the weights of the model. It's like adjusting the knobs inside the model to make it a little bit better. So every time it predicts a word and doesn't get it quite right, you can adjust the knobs slightly, you know, course-correct, get the model to be a little bit better, and then do this millions of times.
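The prediction task itself can be illustrated with a deliberately tiny stand-in. Real language models learn billions of neural-network weights by gradient descent on this objective; in the sketch below, simple bigram counts over a made-up toy corpus play the role of those learned weights, just to make "predict the next word from what came before" concrete.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it.
    This counting stands in for the gradient-based weight updates
    a real model performs on its prediction errors."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the continuation seen most often during training."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

# A hypothetical toy corpus; real pre-training uses billions of words.
corpus = ("the cat sat on the mat . the cat sat down . "
          "the cat ate the fish .")
model = train_bigram(corpus)
print(predict_next(model, "cat"))  # "sat": seen twice vs "ate" once
print(predict_next(model, "the"))  # "cat": the most frequent follower
```

Even this crude counter shows why good next-word prediction rewards knowledge: to beat it, a model has to track ever longer contexts and, eventually, facts about how the world works.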
Raphaël Millière (00:16:43):
And eventually you get your pre-trained model that is really, really good at next word prediction. And the thing that no one really saw coming, that I think ten years ago people would have thought was crazy, is that just by doing this, just by playing this next word prediction game, you can end up with a model, if it's big enough and trained with enough data, like GPT-4, that can do a really remarkable range of things that you maybe wouldn't have thought, abstractly, without having seen this result, you could do just by playing this next word prediction game. So sometimes people will say things like, these models are just fancy autocomplete, just doing next word prediction, as if that's a limitation on the range of things they can do once they've been pre-trained. But it's important to keep in mind what it takes to do next word prediction so well in so many different linguistic contexts.
Raphaël Millière (00:17:39):
So again, the model is sampling sequences from the training data: it will sample some sequences from books, some from Wikipedia articles, some from Reddit threads or discussion threads. It will have to be really good at next word prediction in virtually any kind of context, on any kind of topic. In fact, these models are superhuman at next word prediction; compared to humans, they do much better at predicting the next word. And in order to become so good in all of these different contexts, presumably, the model is forced to learn a lot in the process, right? Including learning how to produce grammatical sentences that are not just gibberish, but also learning a lot of world knowledge, a lot of knowledge about how things are in the world, such that the sentences are not, you know, nonsensical, even if they're grammatically well formed. If you want to predict the next word correctly, you have to know a lot about what the most plausible next word is, based on how the world actually works.
Raphaël Millière (00:18:40):
And all sorts of other kinds of knowledge that people are still trying to probe, to find out exactly how much these models know, how much they can do, what kind of capacities we can ascribe to them. But the bottom line on how these models work: the short answer is by being pre-trained with this next word prediction learning objective, just playing this next word prediction game on a large amount of data. The longer answer is that they seem to learn a lot in the process of doing next word prediction very well, and researchers are still trying to find out exactly what kinds of capacities or computations they acquire in that process, and people disagree about that.
Michael Dello-Iacovo (00:19:20):
Great. Thank you. I think that was a really good primer to go a little bit deeper. I think you've already touched on this a little bit, but I know that you've said previously there is a tendency for polarization in talking about large language models, between two very strong sets of claims, which you've called deflationary and inflationary claims. So I'd just like to give you an opportunity to explain what those claims are and why you think those are problems.
Raphaël Millière (00:19:45):
Yes. So that's something that I think is very salient in the current discourse on language models. And you see that if you, you know, go to the opinion section of the New York Times and look at essays and op-eds on ChatGPT: a lot of them will be saying something like, the headline will be something like, you know, ChatGPT is not smart at all. It's as dumb as a toaster. It's not what you think it is. It's, again, just next word prediction or fancy autocomplete. It's a cheap trick, and it is not making any progress towards genuine artificial intelligence. Or you will see people saying that ChatGPT and similar models are harbingers of human-like intelligence, and that they're on the verge of, you know, breaking through to artificial general intelligence and maybe even superhuman intelligence.
Raphaël Millière (00:20:44):
And that, you know, in just a few years we might be facing existential risk from superintelligent AI that threatens to eradicate the human species. So you get these really radical views that are extremely polarized at both extremes of the spectrum. Either these models are not intelligent at all, they're really dumb, they're doing this really cheap trick of next word prediction, and some people say they're stochastic parrots that just haphazardly stitch together bits and pieces from the training data without producing anything novel or anything that is deserving of the ascription of any psychological or cognitive capacity; or they exhibit sparks of artificial general intelligence and they're almost there, almost at human level on various tasks. And I find that a little frustrating as a researcher, because I think there is a huge, rich middle ground between these two extremes that hasn't been as present in the public discourse, at least.
Raphaël Millière (00:21:48):
Of course, a lot of serious researchers are working on charting that middle ground and taking various positions in between. But if you look again at public op-eds, people on Twitter, things like that, including the outputs of some researchers, it tends to be quite polarized. And I think that's a shame, because I personally think the truth is somewhere in between. As I said, just because these models are doing next word prediction doesn't really tell you that much about what they learned to do in the process of doing next word prediction. It seems that they do genuinely acquire some non-trivial capacities in that process, but they're also severely limited in other ways. And certainly I think they're extremely far from something like general human-like intelligence in various respects. But the devil is in the details, and I understand why people tend to gravitate towards these polarized positions, because there's something comforting about being able to argue for a very determinate stance about what these models are and what they can do.
Raphaël Millière (00:22:51):
Either, you know, taking on this very deflationary take and saying, well, they're just fancy autocomplete, they can't do much, just cheap tricks. So you have this sense that you really know what hides behind the hype that is peddled by, you know, venture capitalists and some companies, and you can say, well, don't be fooled, this is what these models really do. Or, on the other end of the spectrum, people thinking that they've uncovered the true capacities of these models, that they would be close to human intelligence in various respects. Either way, there's something comforting about feeling that you are lucid about what these models can do and that you have the answer. And the truth is, I think, that people are still kind of stumbling in the dark, as it were, in this line of research, trying to find out what these models can and cannot do, how they work, and to reverse-engineer the computations that they induce during training.
Raphaël Millière (00:23:49):
And that's a long and painstaking process, as is often the case in science. And I think, you know, we should be more pragmatic and patient about this, and perform experiments and gradually learn more and more about how these models work. But I think no one really has a fully worked out answer just yet. And that's not to say that anything goes. It's not to endorse a kind of mystical line of discourse about these models, to say, oh, these are alien intelligences and we don't really understand how they work, they might be really advanced, and we just don't know. That's not what I'm saying; clearly, you can probe their behavior and see that they have their limitations. It's just to say that truly, fully understanding how they achieve the behaviors that they can achieve, which are quite remarkable, and how similar the mechanisms they've implemented to achieve that behavior might be to the mechanisms that human brains or animal brains implement in order to achieve some of their intelligent behaviors, remains unsettled.
Raphaël Millière (00:24:58):
That's a bit of an open question, so we should be pragmatic about this.
Michael Dello-Iacovo (00:25:03):
Yeah, unfortunately it's not too surprising to me that we see such polarization in something like a New York Times op-ed. I guess that applies to many topics; reasonable opinion pieces don't sell too well. And we don't know how many reasonable opinion pieces were submitted to the New York Times; maybe they just chose the most polarized. But I'd like to hear a little bit more about what the dangers are of underclaiming and overclaiming. Samuel Bowman's paper on the dangers of underclaiming, which I'm sure you're familiar with, talks about this. But you could argue that Murray Shanahan's paper on large language models is more on the dangers-of-overclaiming side. So what are the risks, I guess, with underclaiming or overclaiming?
Raphaël Millière (00:25:50):
Yeah, so given this polarization of the discourse, and the fact that on the one hand you have these very deflationary takes and on the other hand these very inflationary takes, if you think that both of these extreme views are mistaken in some way, that one of them is not giving enough credit to what these language models can do and the other is inflating what they can do and giving way to hype, then I think there are dangers to both excesses. So dangers to underclaiming what these models can do, and dangers to overclaiming what they can do. We can start with the risks of overclaiming, because I think they're the most salient. Certainly there has been a lot of breathless hype about these language models lately, especially since the release of ChatGPT, and a lot of that hype is fueled by economic incentives.
Raphaël Millière (00:26:44):
There's a lot of money to be made. There are now big companies like OpenAI and others that have a lot of skin in the game. And, of course, it's partly in their interest to hype up these systems and to kind of dangle in front of people the promise of something like AGI, artificial general intelligence, which is explicitly the goal of OpenAI, for example, to build AGI. We can come back to that; I have some qualms about this very notion of AGI. I think it's not always a very coherent notion, at least in the way it's used by some of these companies. But setting that aside, I think there is this dynamic where some of these companies, and the venture capitalists that invest in them, have this incentive to hype up these systems.
Raphaël Millière (00:27:28):
And there's a risk here, because essentially you end up having people misrepresenting what these models can really do, and convincing people who don't know any better, because they don't work in the field and they don't fully understand, as far as anyone can really fully understand, how these models work. They don't even know the very basics. And it can feel, of course, borderline magical to interact with these systems. They're very impressive. And because we humans have a remarkable tendency for anthropomorphism when interacting with systems that can converse fluently in language, it's very easy for people to just buy wholesale into this idea that these models are harbingers of superior or human-like intelligence. There's this old phenomenon in AI research that people call the Eliza effect, dating back to the observations of Joseph Weizenbaum at MIT in the sixties. He built this chatbot called Eliza, a very rudimentary chatbot compared to what ChatGPT can do, using a bunch of kind of clever tricks, handmade rules, such that you could talk to it, typing on a keyboard, and it would respond to you in a somewhat mysterious and elusive way.
Raphaël Millière (00:28:37):
But people back then in the sixties on the MIT campus were really quick to start treating the system, and talking to it, as if they were talking to another human. So people are really quick to project certain anthropomorphic traits onto these systems, and all the more so with language models. So I think that's the clear risk of overclaiming: essentially misleading people, with all sorts of potentially nefarious, unforeseen consequences of people giving too much weight, too much credence, to what these models produce and some of the things they say. And sometimes they can generate answers that are factually incorrect. They can generate dangerous answers; they can do all sorts of problematic things. And you also have companies that go even further and, I think, have a really unethical business model, which is to try to market AI companions.
Raphaël Millière (00:29:31):
So I think of companies like Replika that will use fine-tuned versions of language models like GPT-3 to sell access to these so-called, quote unquote, virtual companions, which are just, you know, dressed-up language models that they market as AI companions that really care, that you can confide in, that you can treat as a friend, a mentor, a counsellor, or even a romantic or sexual partner, explicitly pushing that really anthropomorphic angle. And you already see it: if you go online, people are falling in love with these systems, or at least reporting that they do. And I think that's a disaster waiting to happen. So that's the danger of overclaiming. Underclaiming, on the other hand, well, a lot of people who I think might be erring on the side of underclaiming are actually motivated by ethical concerns, precisely because they want to resist overclaiming; they want to resist the hype.
Raphaël Millière (00:30:30):
So I think it comes from a good place, a concern for the risks of overclaiming, but I think they are overcorrecting in the other direction in a lot of cases. And I think that comes with its own risks, including ethical concerns, which is a shame, because if you're concerned about AI ethics, you should really be concerned about calibrating your characterization of what current systems can and cannot do in the right way, so that you can inform the public as well as possible about potential risks. And if you err on the side of underclaiming and you say, this is just fancy autocomplete, it's really no smarter than the autocomplete feature on your phone when you write a text, or it's just a stochastic parrot that's just parroting sentences from the training data, well, you will kind of create a blind spot in people's minds, if they buy into that picture wholesale, where they are not seeing the potentially disruptive applications of these systems and the things they can already do or will soon be able to do.
Raphaël Millière (00:31:38):
And again, that's not to say that we're on the verge of anything like superintelligent AI, or that concerns about existential threats to humanity's existence are necessarily well placed. I'm rather skeptical about that. But it's more that there are everyday concerns, more mundane risks, that are nonetheless very concerning and are on the horizon, whether it's about using language models for misinformation, manipulation of people, or impersonation, in ways that a mere stochastic parrot, or a language model that was just a fancy autocomplete feature like the one on your phone, wouldn't be able to do. So I think what's on the horizon are sophisticated forms of misuse or risks of language models that are really tied to the kinds of cognitive capacities, or I would say proto-intelligent behavior,
Raphaël Millière (00:32:41):
you can ascribe to them, where these problematic behaviors come precisely from the fact that they are quite capable in some respects. They're not just the dumb stochastic parrots that some people say they are. So those are, I think, the risks. And I think we ought to map out the middle ground, not just if we want to gain a better theoretical understanding of these models, which is a good and interesting goal in its own right, but also if we want to take the full measure of this minefield of ethical issues.
Michael Dello-Iacovo (00:33:16):
Great, thanks for that, Raphaël. There's a long list of things that AIs might have, like compositionality, intelligence, and so on. I'm curious to go through these one by one and ask what they are, whether AIs have them, and whether they're important for AIs. But before we go through them one at a time, do you have a general map of how you see these different characteristics, or some other way to summarize them that might be useful?
Raphaël Millière (00:33:41):
Yeah, so generally, I like to think about it this way: we have these artificial systems, and we want to know what kind of capacities we can ascribe to them. And specifically, when it comes to discussions about intelligence or comparisons to humans and the human mind, we want to know what kind of psychological or cognitive capacities we can ascribe to them. So we humans have a whole suite of cognitive or psychological capacities, and we have different concepts to talk about them and carve them out, some of which are very broad, very vague concepts like intelligence or consciousness, or just the very concept of a mind, which is a fuzzy concept with blurry boundaries. And we apply these concepts also to non-human animals, to various species that we deem more or less intelligent.
Raphaël Millière (00:34:43):
We talk about animal minds or animal cognition; we talk about animal consciousness as well. And then we have some concepts that are a bit more specific. For example, when it comes to human cognition, we talk about certain specific faculties or capacities like memory, long-term memory, short-term memory, where you can make further subdistinctions; the capacity for thinking; the capacity for reasoning, which might be slightly different; capacities for language understanding, but also for understanding the world around us, visual understanding, perceiving the world. We have all of these different terms that we apply to carve out different aspects of human cognition and perception, and the human mind generally. And we apply these terms more or less liberally or conservatively to different non-human animal species, depending on where you stand. And we have a long history of trying to do that in a more principled, empirical way in the field of comparative psychology, comparing animal cognition and human cognition.
Raphaël Millière (00:35:47):
So now we have these new systems that exhibit behavior that, up until now, was the kind of behavior that only humans could exhibit, first and foremost producing language. Up until recently, fluently producing language in the way that systems like ChatGPT can do was a clear sign of a whole suite of cognitive capacities that we ascribe to humans. Only humans could do that. And the capacity to generate this kind of fluent, seemingly intelligent language was something that was always associated with everything else that comes with human minds. And so now we have this problem: we have systems that can do that, and for the first time the association between the capacity for language production and all of these other capacities seems to come into question.
Raphaël Millière (00:36:41):
So the way I like to think about the question you just asked is, let's take this on a case-by-case basis, have a kind of divide-and-conquer approach where we take these different capacities, these different psychological properties that we can ascribe to humans and maybe to non-human animals, and, on a case-by-case basis, decide: okay, based on the evidence, and based on a specific definition of this capacity, which has to be precise and specific enough that we can discuss it without falling into a merely verbal dispute where we talk past each other, can we reasonably ascribe that particular capacity to language models? And of course, oftentimes the answer is not gonna be that clear-cut right now, because it's still early days and this line of research is still emerging.
Raphaël Millière (00:37:30):
But that's the kind of approach I like to take. And it's helpful, then, to go from the very general psychological properties like intelligence, which are very vague and not necessarily very helpful, or understanding. Understanding is another one: it can mean very many different things, and people mean very many different things by this term, and so they tend to talk past each other. So go from these to more specific subdistinctions, more specific kinds of capacities where we perhaps have a better idea and a better empirical grip, in order to arbitrate between different positions where people disagree on whether we can ascribe the capacity to language models.
Michael Dello-Iacovo (00:38:13):
So let's start with intelligence, which you've already mentioned. Maybe we could start with the definition of what intelligence means in the context of AI. And I know you've said that it's maybe hard to pin down exactly what that means, but your favorite definition is fine, just to get this rolling.
Raphaël Millière (00:38:30):
So that's a really hard question, for various reasons. Intelligence, as we said, is a vague concept. It is also a somewhat problematic concept, laden with historical baggage, including the really unfortunate baggage from attempts to measure intelligence in psychometrics, which is historically intertwined with racist ideas about intelligence. So, for example, the whole history of the IQ measurement is intimately tied to extremely problematic ideas about what constitutes intelligence, ideas that were motivated initially by eugenicist and racialist ideology. So it's problematic for these two reasons. When we want to operationalize the notion of intelligence, we want to be able to measure it in a more objective way.
Raphaël Millière (00:39:33):
And often that leads to things like the IQ measurement, which essentializes specific manifestations of intelligence in a way that really reduces the range of things we consider to be intelligent, in a somewhat problematic way. And the same is true when we look at, say, human-animal comparisons. We kind of essentialize certain human manifestations of intelligence to deny intelligence to certain animal species, where intelligence might be manifested in different ways, through different behaviors, in species that don't have things like language, for example. So these are complicated questions. I do like this paper by François Chollet called "On the Measure of Intelligence". That was a really interesting paper trying to bridge these kinds of historical and theoretical discussions about intelligence with discussions about how we could measure intelligence in AI systems.
Raphaël Millière (00:40:35):
And he comes up with this pretty broad characterization of intelligence. It's been a really long time since I actually read the paper, but if I recall correctly, the definition goes something like this. Intelligence is characterized by two things. The first is the scope of applicability of a system's general problem-solving skills. Systems manifest their intelligence by being able to solve problems, and the range of problems they're able to solve is one dimension of intelligence: the scope of applicability of their problem-solving skills. A system like Deep Blue, for example, which was designed by IBM to play chess and beat Garry Kasparov, the world chess champion at the time, is a narrow system that has a very narrow scope of applicability. People call it narrow AI because it's really designed to do just one thing and do it pretty well, but just that one thing, which is playing chess, and so it can only solve a very narrow range of problems.
Raphaël Millière (00:41:46):
So that's one dimension. And then there is this other dimension, which is the generalization power of the system. How quickly and efficiently can it generalize to unseen situations and unseen tasks, to problems where it has to generalize from one domain to another? If it has learned to solve a problem in a specific domain, and it's suddenly put in a completely new situation or a new domain that it hasn't encountered before, how efficiently can it generalize to solving tasks in that domain? Say, for example, you build a robot that is designed to make coffee, and you have it learn how to make coffee in a specific kitchen where things are placed in certain ways. There's a specific layout of the kitchen, and the specific coffee maker, coffee grinder, and bin are in their places, and so on.
Raphaël Millière (00:42:46):
It might become very good at that task, right? It might make coffee super quickly, excellent coffee every time, very consistently, much better than a human. But then suppose you take that robot and place it in a totally new kitchen where things are in different places, the layout is different, the beans are different, the apparatus is different. It might be that your robot is going to completely fail and not be able to perform even the basic first steps of making coffee, just because it's not able to generalize to this new situation. We humans are very good at generalizing, and children from a very young age are extremely good at generalizing to novel situations very quickly. So we're very efficient in this way, and that's one of the key characteristics of human intelligence. And to a large extent, non-human animals are also very good at that, though not as good as humans, because we have an extreme form of generalization where we are able to generalize to completely new domains, ranging all the way to putting a man on the moon by building rockets, things that no one had ever done before.
Raphaël Millière (00:43:44):
So animals don't generalize that far, but they can generalize to novel situations, generally better than AI systems. I like this kind of approach to intelligence because it's quite broad. The paper goes on to discuss how we could measure intelligence, but that's a different question. It's not reducing intelligence to a very narrow range of problem-solving behavior, the kind of thing that would be measured by an IQ test. And it's broad enough to be applicable to various kinds of systems, including humans, non-human animals, and artificial intelligence systems.
Michael Dello-Iacovo (00:44:25):
Listeners might be aware that GPT4 was recently able to beat 90% of test takers on a bar exam, for example. And there are many more examples of various LLMs performing quite well on written exams. What do test scores like a 90th percentile on the bar exam mean for AI, compared to what they mean for a human? And more generally, when an AI performs as well as a human on one of these narrow tasks, what does that mean for intelligence?
Raphaël Millière (00:44:58):
That's a really great question; there are a few things I would like to say here. The first is just to tell your audience: there was this seeming leap between GPT3 and GPT4, like there was between GPT2 and GPT3. OpenAI released GPT4 just a few weeks ago, and they showed that indeed this new model performs really well on various exams that were designed for humans, including the bar exam and various other exams where GPT3 was not doing that well. GPT3 was perhaps in the 30th percentile or something like that, and GPT4 on a lot of these exams is in the 90th percentile, performing as well as or better than 90% of humans. So that's, at least seemingly, on the face of it,
Raphaël Millière (00:45:50):
very impressive. And it is impressive, and I don't want to take away from how impressive it is, but there are, I think, two main concerns about these results. The first concern is about this problem called data contamination, which is that current systems like GPT4 are trained on so much data, because they are extremely data hungry and they need a lot of text to perform well. And the trend has been, from the first transformer models in 2017, 2018, up to GPT4, that we train bigger models: we have bigger neural networks and we train them on more data. These two things have to scale together, so the bigger your model, the more data you need to train it on. And we realized we need a lot of data to get good models, and by a lot, it's really hard to wrap your head around it.
Raphaël Millière (00:46:41):
It's more than a trillion words in the case of systems like GPT4. So it's a significant subset of the whole internet. And there is even speculation that we're running out of data by just scraping the open web for text, and that OpenAI had to use their in-house transcription model, called Whisper, to transcribe things like podcasts or YouTube videos to get even more text, more words, to train the models on. So it's certainly a humongous amount of data, and that data will basically encompass anything that humans talk about. And so it becomes harder and harder to rule out that the model has been trained on some of the things you want to test it on. And that would include questions for an exam; of course, there are many, many websites where people will discuss examples of questions from various exams.
Raphaël Millière (00:47:36):
And that's a problem if you want to claim that your system has achieved excellent performance on an exam, because it could be that the system has memorized the right answers to certain questions. These models are also very good at memorization. They are generally very lazy, in the sense that if they can solve a problem by just brute-forcing it and memorizing the answer, they will do it. So you find that there is a lot of memorization going on, and that's a feature and not a bug in a lot of cases, because it means they have a lot of world knowledge, because they just memorized it, and they know about, say, the great works of literature, because they memorized them. So if you ask GPT4 to recite "The Tyger" by William Blake, it will do it, because it has memorized it. But when you're testing the model's capacities, you don't want to have it just regurgitate answers it has memorized. Data contamination, the leakage of testing data into the training data, is a very hard problem to mitigate, because the corpus is so large that it's really hard to actually check that what you're testing the system on hasn't occurred anywhere in the training data, including in some variations.
Raphaël Millière (00:48:53):
So maybe there's a question in the bar exam that's not quite the same as a question that's been asked in previous years. It's phrased in a different way, slightly different, but it's still similar enough that having encountered the slight variation of the question from a previous year is gonna help you a lot to answer that question. OpenAI took some precautions; in the technical report for GPT4, they talk about how they tried to control for that, but it's debatable how well they were able to. And it's a really interesting question, actually, where you draw the line about when it becomes cheating for a model to have memorized some variations on a question, because obviously humans also cram when they prepare for exams, and they also learn from previous exam questions and things like that.
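The kind of contamination check gestured at here can be sketched as n-gram overlap between a test item and training text. The function names, the choice of 8-grams, and the threshold below are illustrative assumptions, not OpenAI's actual pipeline; and, as just discussed, a check like this would still miss paraphrased near-duplicates.

```python
# Sketch of n-gram-overlap contamination checking: flag a test question
# if a large fraction of its word n-grams appear verbatim in a training
# document. Paraphrased variants of a question evade this kind of check.
def ngrams(text: str, n: int = 8) -> set:
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def looks_contaminated(test_item: str, training_doc: str,
                       n: int = 8, threshold: float = 0.5) -> bool:
    grams = ngrams(test_item, n)
    if not grams:  # item shorter than n words: nothing to match on
        return False
    overlap = len(grams & ngrams(training_doc, n)) / len(grams)
    return overlap >= threshold
```

Scaling a check like this to a trillion-word corpus, with all the near-miss variants people post online, is where the difficulty described above comes from.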
Raphaël Millière (00:49:51):
But still, it's a concern. The second, and perhaps even more substantive, concern is about how to interpret good performance on an exam that was designed for humans and not for LLMs, not for language models. And there we're hitting on this really big, important issue in the philosophy of AI, and discourse about AI generally, which is the question of the gap between performance and competence. One thing you can investigate when you look at the behavior of the model, by testing it on exams, on benchmarks, and so on, and looking at the quality of the outputs, is the performance of the model. How well is it doing? What is its score on the exam, on the benchmark? That's all about the performance of the model. And so you can compare language models and humans in terms of that performance, right?
Raphaël Millière (00:50:40):
But in doing that, you don't just care about the performance, the exam score; what you really care about is the underlying competence that is supposed to be measured by the performance. With humans, you know that the bar exam is supposed to measure human competence in, for example, legal knowledge. You want to know that a lawyer is going to have that knowledge, and you would hope that a lawyer passing the bar is good evidence for an underlying competence at being a lawyer, a competence in the law. But that's an assumption. And you need to empirically validate that assumption by establishing the construct validity of your exam or benchmark, which would tell you: this is a well-formed, well-designed exam or benchmark that really gets at the underlying competence.
Raphaël Millière (00:51:36):
And by the way, there can be concerns about that even with humans. I'm a professor, I teach, and I'm well aware of this with students: sometimes you design a question for an exam and you realize it can be brute-forced in some way, or there's a shortcut students could learn to get good performance on that question, which is not actually indicative of students having really learned the thing you wanted them to learn, having acquired the underlying competence. So that's not just a concern for models, but it's especially a concern when you want to take an exam that was designed for humans and just apply it to language models without further scrutiny or reflection on what it means. Models getting good performance on an exam doesn't necessarily mean that they have the same competence that would enable good performance on that exam for humans, because it might be that they solve the exam in a different way than humans do,
Raphaël Millière (00:52:32):
again, with some memorization, with some shortcut learning, with some brute-forcing of various heuristics and tricks that humans might not use themselves, because these two kinds of systems, humans and language models, work very differently. So those are the two main concerns. They're connected to some extent, and this is not to take away from how impressive it is that GPT4 can pass all these exams; it's just to contextualize it, and I think much more research will need to be done to understand how to interpret these results. And part of the problem here is that the training data for GPT4 is not public. In fact, there is hardly any information publicly available about what the model is: what the architecture is, what the training data is, how it was trained. And that makes it really hard to conduct rigorous research on this kind of question.
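The shortcut-learning worry can be made concrete with a toy example: a "model" that answers multiple-choice questions by always picking the longest option. On a badly designed benchmark where the correct answers happen to be written more verbosely (the questions and answer key below are invented), it scores perfectly with no competence in the material at all.

```python
# Toy illustration of the performance/competence gap: a surface-cue
# heuristic that ignores the question entirely yet aces this benchmark,
# because the correct options are consistently the most verbose.
BENCHMARK = [
    # (question, options, index of correct answer)
    ("Which body ratifies treaties in the US?",
     ["The Senate, by a two-thirds vote of members present",
      "The DMV", "Nobody"], 0),
    ("What does a deed convey?",
     ["Lunch",
      "An ownership interest in real property from grantor to grantee",
      "Weather"], 1),
    ("Who presides over a jury trial?",
     ["A mailbox", "A cat",
      "A judge, who rules on questions of law during the proceedings"], 2),
]

def longest_option(options):
    """Pick the index of the longest option, ignoring the question."""
    return max(range(len(options)), key=lambda i: len(options[i]))

accuracy = sum(longest_option(opts) == key
               for _, opts, key in BENCHMARK) / len(BENCHMARK)
```

Construct validation is precisely about ruling out surface cues like this, both for human test takers and, with different failure modes, for language models.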
Michael Dello-Iacovo (00:53:25):
Now might be a good time, I think, to talk briefly about artificial general intelligence, which you've already mentioned a little bit. But first of all, how exactly do we define artificial general intelligence?
Raphaël Millière (00:53:38):
So I think this is one of those notions that is not always very helpful, because a lot of people assume that it refers to a specific, determinate thing, when in fact people mean different things by the term. And generally I think the term might be a little unhelpful, if it is coherent at all. The very broad way to characterize artificial general intelligence is by contrast with artificial narrow intelligence. So again, think of the scope of applicability of the system. If a system is designed to only play chess, and can only play chess and do nothing else, it's a very narrow system. By contrast, humans, and in fact non-human animals to a large extent, are able to solve a very broad range of problems.
Raphaël Millière (00:54:31):
So the intelligence that humans have, and that many non-human animals have to a certain extent, seems very general in comparison to, say, a chess-playing system. By contrast with narrow systems, narrow AI, we talk about AGI, artificial general intelligence, to refer to the goal of creating artificial intelligence systems that would have a much broader scope of applicability, in terms of their problem-solving behavior, than current systems do. There are different concerns, though, with this notion. One of them is that it's a very context-sensitive notion, because the very notion of generality is context sensitive; it's a vague predicate. What I mean by that is: take, for example, the adjective "tall". You can talk about a tall building, but if I go to a small village in Italy and refer to a tall building in that village, or if, where I live here in New York City, I talk about a tall building, I'm gonna refer to very different things.
Raphaël Millière (00:55:42):
What counts as a tall building in a small village is very different from what counts as a tall building in New York. And the reason is that the term is context sensitive, right? It depends on the height of the surrounding buildings, and it might not really make sense to talk about a tall building in absolute terms without that kind of context, because it depends what your reference point is. And similarly for the concept of AGI: what seems like a general system in some contexts, in comparison with some systems, might not be so general in comparison with others. Certainly the problem-solving skills or the intelligence of a rat seem extremely general compared to those of something like a chess-playing algorithm, but compared to humans they don't seem general at all. And so one of the concerns people have is that it's not really clear that there is a line in the sand, a point at which we reach AGI, where all of a sudden we go from a narrow system to a general system.
Raphaël Millière (00:56:48):
It's more of a spectrum, a gradient of generality, where systems can have a more narrow or a broader scope of applicability of their problem-solving behavior and generally intelligent behavior. Companies like OpenAI, for example, sometimes talk about AGI more like a goal where there is a clear line in the sand. In a recent statement, they talk about when we reach the first AGI system, the first AGI. And it's somewhat unclear to me what the first AGI would be; it would probably be a matter of interpretation. Some people already say that GPT4 is a form of AGI, and it's a bit meaningless, because, well, yes, it seems arguably more general in its scope of applicability than GPT3 is, but it's also much less general, to really understate it.
Raphaël Millière (00:57:41):
I mean, it's really much less general than humans in terms of its scope of applicability. There's no bright line in the sand. Then there is a broader concern, where people say, well, there's no such thing as general intelligence; it's just an incoherent notion. And what people have in mind when they say this is a few different things. One is this mathematical theorem in computer science that says that any algorithm that can learn effectively must be somewhat specialized: it must have some inductive biases to learn certain solutions to certain problems. You can't have a maximally general, universal intelligence in that sense. And so there's a sense in which the idea of a maximally general or universal intelligence is somewhat incoherent from that perspective.
Raphaël Millière (00:58:46):
And then there is a more specific concern, which is that even if you leave that aside and take human intelligence, well, even human intelligence is not that general in some sense. It's been molded by evolution to fit a certain niche and to apply to a certain specific set of problems that humans are concerned with. So in that sense, it's not that general; it's been kind of finely tuned to the range of problems that humans encounter. And other organisms, non-human animals, have intelligence that has been tuned for their ecological niche and is different. And perhaps there is a sense in which it might be a little misleading to set up the comparison between humans and non-human animals just by saying humans have a more general intelligence.
Raphaël Millière (00:59:35):
It might be, if you talk to some biologists, that it's more informative to say that humans have intelligence that's tailored to their niche, and that they've evolved in a certain way that led them to be able to do all sorts of things that it would be unhelpful for other organisms, in other niches, to evolve towards. So that's another concern: that setting up this whole discussion in terms of a single dimension of generality of intelligence is overly reductionist. It almost ends up reproducing this whole idea from medieval philosophy of the scala naturae, this kind of scale of nature where you have humans at the top and, like, worms at the bottom, and maybe God above humans, all on a single continuum where you have superior beings and inferior beings. And that's very problematic for various reasons.
Michael Dello-Iacovo (01:00:35):
It sounds like, with these claims of GPT4 maybe having early signs of artificial general intelligence, and I'm not sure if you've seen this paper, "Sparks of Artificial General Intelligence", it sounds like you're quite bearish on this. Could you maybe explain a little more why you don't think these are sparks of AGI? And I know you've already talked about why the definition of AGI is quite vague and open to interpretation, but what do you think makes you more bearish on this than, say, the authors of this paper?
Raphaël Millière (01:01:09):
Right. So I would say that it's not so much that I'm bearish on the outlook of this paper as that I think it's not the most helpful way to set up the discussion about the capacities of language models, having this very polarized dichotomy between either stochastic parrots or sparks of AGI. Instead, again, I would advocate for this divide-and-conquer approach where we take things on a case-by-case basis. We take different capacities and we ask: where do language models fall with respect to this capacity? Do they have anything like the human-like version of that capacity? Do they have something different? Can we get more specific by looking at the computations that these models learn to implement during training, and try to reverse engineer what's going on inside? So one of my qualms with the "Sparks of AGI" paper, and I think many other people share it, is that it diverges quite a bit from the normal standards of rigorous scientific research about language models.
Raphaël Millière (01:02:15):
It's a very long paper, in fact, almost 150 pages, written by researchers at Microsoft who got access to GPT4 six months, I think, before the release, the unveiling of the model, and got to test it on a bunch of different tasks. First of all, it's a preprint, so it's not peer reviewed. And it's this collection of somewhat anecdotal tests, where they just tested the model on some things they thought were interesting and reported its performance. But there is little in the paper that is really systematic, even in terms of behavioral testing, testing on rigorous, well-designed benchmarks. It's more giving examples of successes and failures, but mostly successes, of the model.
Raphaël Millière (01:03:13):
And so I think many people are concerned that it was pandering to the hype a little bit, because it's hard to resist the impression, reading the paper, that it's saying: look how cool, GPT4 can do this, it can do that. And as a researcher you read that and it's a bit like, okay, yes, it's intriguing, but is it consistent? If I change the prompt, can the model do just as well? Is it systematic? What does it actually mean in terms of the underlying capacities of the model? Can we test that more systematically? And can we go beyond behavior by opening up the model, looking inside, trying to figure out how the model is producing all of these behaviors? That, I think, is the most interesting line of research: not just looking at behavior, and especially not just cherry-picking some anecdotal results and saying, look how cool.
Raphaël Millière (01:04:08):
And the problem here is that, again, GPT-4 is not a very open system. There is an API that people can now use to test the model, but one of the big problems is that you can't go beyond the API; you don't have access to the weights of the model. So you can't actually probe the model and use various techniques to go beyond the behavior. But even just looking at behavior, even if you just want to systematically evaluate the performance of the model on a benchmark, for example, or on a specific set of prompts, OpenAI keeps updating it on the back end. And so what you see a lot nowadays is that someone will point out a failure mode of the model on Twitter, say, and then a week later you try the very same prompt and the model will succeed and give you the right output.
Raphaël Millière (01:04:51):
And people have speculated, with good reason, that OpenAI is probably playing a kind of failure-mode whack-a-mole and silently patching the model as they go on the back end. So even with the Sparks of AGI paper, it's kind of tricky, because, you know, some people have tried some of the prompts they used and it didn't work with the model that we have access to through the API, as if it weren't the same model. So it's very opaque, basically. And it kind of straddles the line, for me and for many other people, where it's hard to see it as scientific research at this point. It's almost more like product advertisement, for something that's no longer a research artifact but more of a product being marketed. So that's my broader concern. But, you know, in terms of being bearish, no, I do think that there are highly nontrivial capacities that these systems, especially GPT-4, clearly have, that it's not just a stochastic parrot. So I just fall somewhere in between. But again, I think we need to do rigorous research, we need to progress with this kind of mindset where we rigorously test things, we keep an open mind, and we do things on a case-by-case basis.
Michael Dello-Iacovo (01:06:11):
I want to talk a little bit about consciousness, and maybe sentience as well. So I guess I could ask about the definition of consciousness, and whether large language models are conscious now, and whether they can be. And of course, those are two very different questions. But it's also interesting to me that with large language models, we seem to focus more on consciousness than sentience, and I'm just curious if you have any thoughts about why that is. But maybe we can just start with a definition of consciousness: do you think large language models are conscious, and can they be?
Raphaël Millière (01:06:45):
So consciousness is another one of these big words that are a little vague. Often people don't necessarily have a very specific definition in mind; they rely more on an intuitive grasp of the concept, or might disagree and fall prey to verbal disputes, talking past each other because they don't mean the same thing by the term. Generally in philosophy, when people talk about consciousness, they refer to subjective experience. Now you might say, what is subjective experience? And here is the problem, which is that, famously, it's hard to give a non-circular definition of consciousness. So the best thing you can do is point to certain things that you hope everyone has an intuitive familiarity with from just being a human being that experiences the world.
Raphaël Millière (01:07:40):
So you can say, well, consciousness is that thing that is present, you know, when you wake up in the morning, but is lost in dreamless, deep sleep or in a coma. It's that thing that manifests whenever you have any kind of experience, whether it's tasting an apple or looking at a sunset, and it's the subjective feel of all of these different experiences, what it is like, that's the phrase philosophers generally use, what it's like to undergo a certain experience. And, you know, presumably there is nothing it's like to be a rock, because rocks don't have subjective experiences. And so that's why we say rocks are not conscious. And there is something it's like to be a human, at least when we are awake, when we are not in a deep dreamless sleep or in a coma.
Raphaël Millière (01:08:29):
And presumably also, there is something it's like to be many non-human animal species, although it might be hard to know exactly what it's like, from the inside as it were. There's a famous paper by Thomas Nagel, called "What Is It Like to Be a Bat?", that makes this point, right? It's kind of hard to know what it's like to be a bat, because a bat's sensory apparatus is very different from ours. So sometimes people draw this distinction between consciousness and sentience, but a lot of people in philosophy use these two terms interchangeably. And in fact, I myself, in my work, have used these two terms interchangeably. But when people do draw a distinction, generally they think that consciousness is perhaps a slightly higher bar, a slightly more sophisticated capacity. So sentience would just be the capacity to experience valenced states such as pleasure and pain.
Raphaël Millière (01:09:26):
So the capacity to have certain experiences that are subjectively valuable or disvaluable, that feel good or bad, for example. And perhaps consciousness, if you draw that kind of distinction, might be something a little more sophisticated that might involve having a sense of self, or having the capacity to be aware of your experiences as you undergo them, perhaps reflect upon them, have some kind of metacognition, something like that. So there are various views there, but generally, when people want to distinguish consciousness and sentience, that's roughly how they think of the distinction. And so you might say, well, maybe some very simple non-human animal species, like, say, species of worms or mollusks or cephalopods, might have sentience in the sense that they can experience pleasure or pain if you harm them, but maybe they don't have consciousness in the full-blown sense in which we humans have conscious experiences, this kind of rich tapestry of experiences that includes a sense of self.
Raphaël Millière (01:10:31):
And, again, a meta-awareness of our experiences and all of these things. But I personally generally prefer to just talk about consciousness, broadly speaking, as a single thing, without distinguishing sentience and consciousness, to avoid verbal disputes, to avoid this problem that people mean different things, and just stick to this very broad definition: consciousness is just having subjective experience, having something it's like to be something. And so, in that sense, if a worm is capable of feeling pain, that counts in my book as being conscious. And then you can have a further discussion about whether there might be degrees of consciousness, whether something could be more or less conscious, but at least the lights are on in some sense; there's something it's like to be that animal. Okay. So now, language models: are they conscious? Is there anything it's like to be GPT-4?
Raphaël Millière (01:11:17):
That's a bit of a fraught discussion. It really came into the limelight recently, especially when this engineer from Google, Blake Lemoine, started claiming that the in-house chatbot called LaMDA was sentient, and really kind of tried to blow the whistle about this, and ended up being fired from Google subsequently. And there were a lot of very sensationalistic media articles about this; it created quite a buzz. I think it's a little bit unfortunate how this whole discussion has been set up, because Blake Lemoine made a lot of baseless claims based on his interactions with the model. It's worth mentioning that he was not himself involved in the design of LaMDA; he was not an expert in the design of these systems. He just had internal access at Google, started playing with it, and ended up forming the belief that LaMDA was sentient based, as he himself acknowledged, on his deep religious beliefs.
Raphaël Millière (01:12:17):
So he's a profoundly religious man, and his conclusion that LaMDA was sentient wasn't really based on the scientific method, but more premised on certain religious beliefs. And if you look at his interactions with it, he asked a lot of leading questions. And the thing about language models is that they are very good at mimicking all sorts of linguistic behavior. I have this piece that I wrote last summer where I say, you know, instead of calling them stochastic parrots, you might call them stochastic chameleons, and maybe that's a better, more accurate term, in that they can kind of blend into different sorts of linguistic contexts and mimic different kinds of agents, including engaging in creative fiction. They're very good at that, and that's one of the best uses for language models we have today.
Raphaël Millière (01:13:04):
And if you prime them by asking them, you know, what do you feel, or do you have any experiences, and what is it like to be you? Well, of course, they're gonna engage in that mode, because they've been trained on a bunch of literature. They've been trained on so many books and webpages and texts that describe subjective experiences. And so they're perfectly good at engaging in that kind of mimicry, describing experiences that humans have and humans have described in the training data. That obviously doesn't entail that they themselves have these experiences, that they have subjective experiences and feelings. So you shouldn't take what they say at face value. That goes for the factual accuracy of statements they make about the world, like when you ask them about factual matters and they give you an answer, you should always fact check that.
Raphaël Millière (01:13:57):
And it goes especially when you ask them about their own experience, because this is a leading question that will just lead them to confabulate various claims about what they feel. So that's really a very poor source of evidence about whether language models have conscious experiences or not. But the problem is that until now, verbal reports were one of the best sources of evidence about whether someone experiences something and what they experience. That's what we do in a lot of experiments in psychology: we ask participants what they felt, with questionnaires and interviews and so on. So it works for humans; it doesn't work for language models. Yeah, there is a double dissociation between linguistic behavior and consciousness, where on the one hand, with non-human animals, we see that they can't speak, they can't just tell us what they feel, but that doesn't necessarily mean that they don't feel anything.
Raphaël Millière (01:14:51):
And in fact, the trend has been that we've realized, both from behavior and from decades of studies on animal cognition, that a lot of animals probably have a rich sphere of experience, and that has all sorts of ethical implications for factory farming and how we treat these animals if they can feel pain. So we shouldn't discount them just because they can't speak; they probably can experience a lot. And conversely, language models can speak in some sense, they can generate language, but that doesn't mean that they experience anything. So there's a double dissociation that's really telling these days. So, do they experience something? What other source of evidence could we look at? And that's where things get really tricky, because we don't have a fully worked out scientific theory of consciousness at this point.
Raphaël Millière (01:15:36):
And I think we're still very far from that. So we really don't have a good, principled scientific way to arbitrate this question of whether language models have conscious experiences. The best we can do is look at the current leading theories of consciousness and see whether language models meet any of the requirements for consciousness according to these various theories. And if you look at that, well, it's extremely unlikely that current language models have conscious experiences, because the architecture is extremely simple compared to, you know, the architecture of the human brain or animal brains, and in particular lacks some of the features of animal or human brains that seem to be quite important for, and correlated with, conscious experiences, especially in the human case. We can look at the structures that tend to support consciousness in the lab; there is now a long history of neuroscientists trying to investigate the neural correlates of consciousness in the lab.
Raphaël Millière (01:16:37):
You can see that things like recurrent processing, having the capacity to process information through recurrent feedback loops, seem to be quite important, for example. And current language models are purely feedforward networks and don't have any recurrent loops. That's one example. They also don't seem to have any kind of capacity for metacognition in the way that humans do, and the list goes on. So, for various reasons. And if your audience is interested in this, I recommend a good talk by David Chalmers, my colleague from NYU, that's on YouTube, on that question, where he goes through these kinds of considerations. But based on this evidence, I would say it's extremely unlikely that current systems are conscious. Could they one day become conscious? Presumably, yes. I mean, I don't see any reason to doubt that, unless you have a very strong view that says that the capacity for consciousness is intrinsically tied to the biological makeup of animal and human brains, that there's something specific about carbon-based life forms that you can't replicate on a silicon chip, that you couldn't realize in a different kind of hardware. But if you don't hold that view, then, yes, presumably someday we could get there, but I don't think we're there yet.
Michael Dello-Iacovo (01:17:58):
Great, thanks. We'll throw up a link to David Chalmers' talk in the show notes, and any other papers we've referred to will be there as well. Moving on to theory of mind. So this is the capacity to understand other people by ascribing mental states to them. Children typically don't get this until around four to five years old, arguably. And a classic example of this is, you might ask a child what they're doing during a phone call, and they'll say they're playing with "this", assuming that you know what "this" is, even though you can't see it. So they don't have a theory of mind. Do large language models have theory of mind? Is this important? How would we measure this?
Raphaël Millière (01:18:44):
Yeah, so that's a very hotly debated question these days, because, you know, GPT-4 came out, and even before that, GPT-3.5 was already displaying this wide array of impressive behavior. And so some people were keen to check whether you could ascribe anything like a theory of mind to these systems, so the capacity to kind of model the psychological states of other agents, keep track of them, and draw inferences about them. And this is quite important because, you know, many people think that something like a theory of mind is a vital part of the way in which we understand language in conversation, because beyond the face-value meaning of the sentences uttered in a conversation, there are a lot of pragmatic aspects to language that stem from inferences about what the other speaker means.
Raphaël Millière (01:19:44):
So, for example, if you ask me if I'm coming to the party on Friday, and I tell you, I have some student essays to grade, I don't say no, I just say, I have some student essays to grade, I haven't told you yes or no, whether I'm gonna come to the party. And in fact, my answer doesn't strictly entail that I'm not coming. I could have some student essays to grade, but still decide to come anyway. But there is what linguists call an implicature there, which is that I'm not coming. That's an example where you kind of have to infer, from what I'm saying, what I intend to convey in the conversation. And there are more sophisticated pragmatic phenomena that influence the understanding of meaning, and that seem to be tied to theory-of-mind inferences about the psychological states of the person you're speaking to, what they want, what they believe, in conversation.
Raphaël Millière (01:20:35):
So if you're interested in language understanding in language models, that seems to be a piece of the puzzle that's interesting to investigate. And generally, it's an important part of research on intelligent behavior in humans and non-human animals. So people have tried to take tests that have been used to assess capacities for theory of mind, say, in children, and apply these tests to language models. So that would be things like little stories where you tell the person, the child or the language model, about a little scenario where there are two different agents, and one has access to certain information that the other agent doesn't have access to. And you ask a question where it seems like giving the right answer would hinge on accurately tracking the mental states of the two agents, knowing that one has certain beliefs and the other one doesn't.
Raphaël Millière (01:21:38):
So, kind of tracking what these agents know in the story. So this paper that came out a few weeks back, I think from Kosinski at Stanford, was claiming that theory of mind has emerged in language models like GPT-3.5 and GPT-4. And it was using this standard kind of task used to probe that in humans. The problem is, it's a great example of the kind of performance-competence distinction we discussed earlier. The finding was that the language models were doing really well, and so those researchers concluded, well, they must have formed a theory of mind during training, that it's an emergent capacity of these models. But actually, another researcher, Ullman, wrote a preprint in direct answer to that particular work, where he took the same stimuli, the same tasks and prompts that were used to probe the theory-of-mind capacities of language models.
Raphaël Millière (01:22:50):
And he just made some tweaks, adding some distractors or some kinds of differences that, for humans, wouldn't make a big difference to how we parse the prompt and the kind of answer we give, but that were able to totally throw off the language models, whose performance totally broke down when you introduced these tweaks into the prompts. And that's pretty good evidence that it was a case of a gap between performance and competence, where on the initial tasks the humans and language models were doing just as well in terms of performance, but the underlying competence was there for the humans and not for the language models. And just controlling the experiment by introducing some variations and tweaks in the prompts reveals that gap.
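The probe described here can be sketched in a few lines. The story text and the tweak below are illustrative stand-ins, not the actual stimuli from the Kosinski or Ullman papers: pair a standard false-belief story with a minimally altered variant that a competent reader handles easily, then compare a model's answers across the pair.

```python
# Sketch of the performance-vs-competence probe described above:
# pair a standard false-belief story with a minimally tweaked variant.
# The stories are illustrative, not the actual experimental stimuli.

def false_belief_pair():
    original = (
        "Sam puts his chocolate in the drawer and leaves the room. "
        "While he is away, Anna moves it to the cupboard. "
        "Where will Sam look for his chocolate?"
    )
    # A transparency tweak: Sam now sees the move, so the correct
    # answer flips. A human adjusts effortlessly; a model relying on
    # surface patterns in the training data often does not.
    tweaked = (
        "Sam puts his chocolate in the drawer and stays in the room, "
        "watching, while Anna moves it to the cupboard. "
        "Where will Sam look for his chocolate?"
    )
    return original, tweaked

original, tweaked = false_belief_pair()
# A competent reader answers "drawer" for the first story and
# "cupboard" for the second; comparing a model's answers across many
# such pairs separates surface performance from underlying competence.
```

The key design point is that the tweak is trivial for anyone actually tracking the agents' mental states, so a collapse in accuracy on the tweaked versions is evidence the original successes were pattern matching.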
Raphaël Millière (01:23:39):
So the bottom line is, I think currently the evidence suggests that no, a theory of mind hasn't emerged in language models. They're probably missing some components of the architecture to get there. But, I mean, we'll see; time will tell. It's a good lesson in how we can set up good, rigorous experiments to test language models on tasks that humans have been tested on.
Michael Dello-Iacovo (01:24:15):
You were on a panel recently talking about the challenge of compositionality for AI. So what is compositionality in AI?
Raphaël Millière (01:24:23):
So compositionality is a notion that comes from linguistics, and it's the idea that the meaning of a complex expression, like a sentence, is determined by the meanings of its constituent parts, like the individual words in the sentence, together with the way in which they are syntactically combined, so the grammatical structure of the sentence, for example. So indeed, you know, when you're listening to me speak, for example, I'm uttering sentences, and the way in which you're understanding the sentences I'm uttering involves knowing the meaning of the individual words that I'm using, but also how to put them together into coherent wholes based on the grammatical structure of the sentences. And the structure matters, of course, because if you say "John loves Mary", that doesn't mean the same thing as "Mary loves John", even though you're using the very same words.
Raphaël Millière (01:25:18):
So, when it comes to AI, there has been a long history of people thinking that the kinds of systems that current language models are based on, artificial neural networks or connectionist systems, are pretty bad at parsing complex expressions compositionally, or generally at combining simple representations into complex representations with this kind of compositional structure. So there's this idea that compositionality is a challenge for language models, and more generally for AI systems that are based on artificial neural networks. It's not just a problem for discussions of language processing; it's more broadly a problem for trying to build artificial intelligence generally, because people have extended the discussion of compositionality not just to language, the way we humans understand and produce language by combining simple expressions into complex ones, but also to thought as well, right?
Raphaël Millière (01:26:28):
So it seems like thought as well has this compositional property, where to think complex thoughts and to understand complex thoughts, we're putting together simpler representations, simpler concepts, and combining them in certain ways that have a certain compositional structure, a bit like the grammar of sentences. And again, this idea that we humans do that very well, we can do that in systematic ways, such that if I have the concepts John, Mary, and loves, I can think "John loves Mary", but I can also think "Mary loves John". And artificial neural networks, because of the way they work and the format in which their representations are couched, which is continuous vector representations, were traditionally thought not to really have the resources to do that kind of thing, to combine representations together with this kind of systematic structure, because to combine representations they can blend vectors together, but that doesn't seem to have the right kind of structure.
Raphaël Millière (01:27:35):
And, well, in my own work, I have this project where I've been suggesting that this is too simplistic, and that actually current language models, and current deep learning architectures generally, do seem to be able to form representations that have a rich compositional structure. And that goes a long way towards explaining why they can generate grammatical, well-formed sentences. For example, in the case of language models, as everyone who has talked to ChatGPT knows, it's rarely generating an ungrammatical sentence. So it's very good at putting words together into coherent, grammatically well-formed expressions, including totally novel sentences that it's never encountered before, that it's just making up, so it's not just brute-force memorization. So the compositional performance seems to be there.
Raphaël Millière (01:28:31):
There's also a lot of empirical evidence about compositional generalization in these models, and various other pieces of evidence, but there's also work that goes beyond behavior, that opens up these systems and tries to reverse engineer what's going on inside. And that's the kind of evidence I'm building on in this work, which suggests to me that they're actually able to form mechanisms to combine representations together compositionally, but in a way that is different from the way in which classical symbolic systems, these old-school systems that we used to use before the advent of neural networks, achieved that. In old-school systems, to form a complex representation, you would just concatenate simple representations into the complex representation. So, say you want a system that forms the complex representation "John loves Mary": that would literally involve taking the representation for John, the representation for Mary, and the representation for loves, and concatenating them together with a certain structure into the complex representation "John loves Mary".
Raphaël Millière (01:29:36):
By contrast, connectionist systems like neural networks perform transformations of vectors: they take vectors and modify them in certain ways. I don't wanna get too much into the details, because that would involve a lot of linear algebra and some pretty technical empirical evidence, but I think the way in which they do these manipulations and transformations of vectors in the transformer architecture actually affords them the capacity to have the right kind of structure to generate sophisticated compositional behavior, but in a way that doesn't reduce to these classical symbolic, old-school kinds of architectures. So it's not merely implementing a classical symbolic architecture; it's doing something different. And I think that's a really interesting finding, because it might open up new avenues of research to think about how we humans are actually able to achieve the compositional behavior that we exhibit when we produce novel sentences or understand sentences.
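The contrast Millière draws can be made concrete with a toy sketch. Everything below is illustrative, not what transformers actually learn: the hand-picked "role-sensitive" transformation just stands in for the learned vector operations he alludes to, and the two-dimensional word vectors are made up.

```python
# Toy contrast (illustrative only): symbolic concatenation vs.
# vector composition of "John loves Mary" / "Mary loves John".

# Classical symbolic style: a complex representation literally
# contains its parts in order, so the two sentences differ trivially.
symbolic_1 = ("John", "loves", "Mary")
symbolic_2 = ("Mary", "loves", "John")

# Connectionist style: each word is a fixed-size vector, and a
# sentence is some arithmetic combination of word vectors.
john, loves, mary = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]

def blend(*vectors):
    """Naive order-insensitive blending: element-wise average."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def role_sensitive(subject, verb, obj):
    """Role-sensitive composition: each grammatical role gets its own
    (here hand-picked) transformation before summing, so word order
    matters -- loosely analogous to learned transformations in a network."""
    return [2.0 * s + v + 0.5 * o for s, v, o in zip(subject, verb, obj)]

# Pure blending loses the structure: both word orders collapse to the
# same vector, which is the traditional objection to connectionism.
assert blend(john, loves, mary) == blend(mary, loves, john)

# A role-sensitive transformation keeps "John loves Mary" distinct
# from "Mary loves John" without ever concatenating symbols.
assert role_sensitive(john, loves, mary) != role_sensitive(mary, loves, john)
```

The point of the sketch is only that vector arithmetic can encode structure when the composition operation is sensitive to grammatical role, without reducing to symbol concatenation.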
Michael Dello-Iacovo (01:30:41):
Yeah. And sort of related to that, I guess, is language understanding in large language models. Do large language models understand language in any meaningful sense?
Raphaël Millière (01:30:51):
Yeah, that's also a very big question, and "understanding" is yet another one of these words that is fraught with verbal disputes and general vagueness. So the way I like to approach this, and I'll try not to be too long on it, is to put this notion of understanding to the side, because it's confounded by associations. For example, some people think of understanding as requiring consciousness: to understand language, you need to have an experience, a feeling of understanding. And I wanna put that aside, because I think even if you think that language models don't have any capacity for consciousness, you might still meaningfully ask whether they're able, in a more narrow sense, to parse the meaning of sentences. So I like to talk about semantic competence instead of understanding, to try to rule out these associations.
Raphaël Millière (01:31:40):
And then you can further break this down. So you can talk about lexical semantic competence, which is about understanding word meanings, having the right kind of knowledge about the meaning of words; or compositional or structural semantic competence, which is about understanding complex expressions like sentences or paragraphs. And you can also make another distinction, between what you might call referential semantic competence, which is the capacity to connect words to what they refer to out there in the world. For example, if I'm in the street and I see a pigeon or a dog, I can point to it and recognize it and use the word "dog" or "pigeon" in connection to that thing I see. So I can connect that word to what I'm seeing out there in the world.
Raphaël Millière (01:32:30):
Or if we're at dinner and you ask me, can you pass me the salt, I'll be able to pick up the salt shaker and give it to you. So I connect the word "salt" to things out there in the world. And there is this other notion of inferential semantic competence, which pertains not to connecting words or linguistic items to whatever they refer to in the world, but to connecting words to each other. That's what we do when we, for example, define words: we use more words to explicate the meaning of a word, so it's a word-to-word relation. Definitions are one example, or synonyms: if you ask me to give you a synonym for a word, I give you another word, and that's a word-to-word relation. Paraphrasing, summarizing, all of these things kind of involve having this network of relationships between words.
Raphaël Millière (01:33:20):
So language models, I would argue, and I have some work in which I do argue this, have a really highly nontrivial degree of inferential competence, at least when it comes to word meanings. Because they learn from the statistical distribution of words in a very large corpus, they learn all sorts of relationships between words that are exactly the kinds of relationships that are useful in definitions, paraphrases, summarization, synonyms, and so on. And so that enables these systems to exhibit a highly nontrivial degree, in my opinion, of inferential semantic competence, to have these very fine-grained networks of relationships between words that map onto relationships between the vectors that represent the words in the vector space these systems use to generate words. When it comes to referential competence, it's a much more controversial question, because these systems don't have access to the external world, and therefore the degree to which you could say that they're able to connect the linguistic items they work with to whatever those items refer to in the external world is very debatable and very controversial.
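The idea that word-to-word relationships map onto vector-to-vector relationships can be illustrated with a toy example. The three-dimensional vectors below are made up, not taken from any real model; in practice distributional training places words that occur in similar contexts close together in a high-dimensional space.

```python
import math

# Toy illustration (vectors are invented, not from a real model):
# distributional training tends to place words used in similar
# contexts near each other in vector space.
embeddings = {
    "dog":    [0.9, 0.8, 0.1],
    "puppy":  [0.85, 0.9, 0.15],
    "pigeon": [0.2, 0.3, 0.9],
    "salt":   [0.1, 0.05, 0.2],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, near 0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Near-synonyms sit closer together than unrelated words -- the kind
# of word-to-word relation that supports definitions, paraphrase,
# and synonymy, i.e. inferential semantic competence.
assert cosine(embeddings["dog"], embeddings["puppy"]) > \
       cosine(embeddings["dog"], embeddings["salt"])
```

Note that nothing in this geometry connects "dog" to actual dogs in the world, which is exactly why inferential competence is easier to credit to these systems than referential competence.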
Raphaël Millière (01:34:36):
I actually did argue, in a recent preprint that I wrote with my colleague Dimitri Coelho Mollo, that they might qualify as having some form of referential competence. That hinges on a somewhat complex argument, but the idea is that some systems like ChatGPT are not just trained on next-word prediction. After they're pre-trained on next-word prediction, they're fine-tuned with human feedback. So they undergo this further little bit of training where you ask the model to generate text based on prompts, like questions about various things, and you ask humans to provide feedback on the different outputs: which one was the best, which one was the most informative, helpful, harmless, truthful. So that includes a normative component, which is truthfulness or honesty, how accurate the outputs of the model are, and you then fine-tune the model on that.
Raphaël Millière (01:35:37):
So it's not just next-word prediction: you're kind of infusing the model with an objective that's not just a linguistic function like next-word prediction, but that's actually world-involving, because you provide feedback to the model on the accuracy of its output, which depends on the actual state of the world. If you ask the model, what's the capital of France, and it tells you Berlin, that's a wrong answer, and you give it feedback about the inaccuracy. So we argue, based on this somewhat technical argument, that this might be a way in which the model's learning process has the right kind of normativity to achieve some form of referential competence, because you're providing feedback, indirectly, about the reference of words in the world and the accuracy conditions for that kind of reference.
Raphaël Millière (01:36:26):
It's not the kind of direct competence we have, because we interact directly with the world, we perceive the world and move around in the world, and we do all these things. It's much more mediated and indirect, but nonetheless, we think it's there to a limited degree. So that's a long-winded answer. I've talked mostly about lexical competence, what pertains to word meaning, but that would translate also to structural competence, because the systems can very well put together words into sentences that are well formed, coherent, and so on. So yes, I would argue that also translates to some form of structural or compositional semantic competence. So the bottom line is: do they have human-like language understanding? Clearly not. There are various failure modes. They don't understand language as humans do.
Raphaël Millière (01:37:15):
They're still limited with respect to the pragmatic dimension of language, certainly with the theory of mind issues we've discussed before. And they just don't learn language the way humans do. Children don't learn language by reading thousands of books to learn the meaning of the word dog. People point at the dog and say, that's a dog. They have this very perceptually grounded approach to language learning. So the systems work very differently, they don't learn language the same way, and they have a much more limited understanding of language. But nonetheless, if you think it's a matter of degree and not just an either-or dichotomy, I would argue they have some limited but non-trivial degree of semantic competence.
Michael Dello-Iacovo (01:38:00):
In terms of reference, now that we're attaching things like vision to large language models, is this all moot? Is that going to be enough to give them that reference, for example GPT-4 having vision attached in some way? How does that change this discussion, if at all?
Raphaël Millière (01:38:16):
Yeah, I think it's an interesting and somewhat complicated question, because sometimes people assume that just hooking up these language models to images in addition to text is going to be enough to bridge that gap in terms of referential competence. And actually, what we argue in that paper I mentioned, the one I wrote with my colleague Dimitri Coelho Mollo, is that there is a double dissociation between having access to images and being able to achieve referential grounding, or referential competence. The reason for that is that a system like DALL-E 2 learns by being trained on pairs of captions and images. So it learns from images together with their captions, but it's not actually getting any feedback about what words refer to out there in the world.
Raphaël Millière (01:39:12):
You can think of the images the model is fed as just being made of more tokens that the model is trained on. If you think of the words that sentences are made of as tokens for the model to process, images also are broken down into patches that correspond to tokens, like a vocabulary for images, visual words as it were. For the model, it's just tokens all the way down. So the model is learning to associate certain visual tokens with word tokens, but arguably it stays at the level of this kind of inferential competence of relating tokens to other tokens, without exiting this merry-go-round of intra-token relationships in order to ground representations in the external world.
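The "tokens all the way down" point can be made concrete with a small sketch of image patchification. The 4x4 "image", the patch size, and the caption are all toy assumptions (real vision-language models embed patches into vectors rather than keeping raw pixels), but the structural point is the same: to the model, an image is just a sequence of tokens joined onto the word tokens.

```python
def patchify(image, patch_size):
    """Split a 2D grid of pixel values into flattened patches ('visual tokens')."""
    h, w = len(image), len(image[0])
    tokens = []
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            patch = [image[i + di][j + dj]
                     for di in range(patch_size)
                     for dj in range(patch_size)]
            tokens.append(tuple(patch))
    return tokens

# A toy 4x4 "image" of pixel values.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]

visual_tokens = patchify(image, patch_size=2)   # 4 patches -> 4 visual tokens
word_tokens = ["a", "photo", "of", "a", "dog"]

# From the model's perspective, training data is one undifferentiated stream:
sequence = visual_tokens + word_tokens
assert len(visual_tokens) == 4
```

Nothing in that sequence connects any token, visual or verbal, to an actual dog in the world, which is the core of the dissociation being argued for.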
Raphaël Millière (01:40:09):
And by contrast, ChatGPT, when it's fine-tuned with this explicit human feedback, even though it doesn't have access to images, we argue it might be achieving some limited form of referential competence just by virtue of getting this direct human feedback. So, somewhat counterintuitively, you might think that adding images to these systems is what gives them referential grounding and solves the grounding problem, and we say, actually, no, these are two different things. You can put them together, and that's ideal. What you would want, ideally, is systems that learn the meaning of words by interacting with the world, where they would get some feedback from the world about what words mean. That wouldn't be just passively learning to match captions and images, for example, which is very different from how humans learn the meaning of words. That would get you closer to something like human-like referential competence for the use of various words.
Michael Dello-Iacovo (01:41:10):
So we've talked about a number of different characteristics that large language models might have, and there are many more that we didn't get a chance to talk about, but to try and tie this discussion together, which of these faculties are most useful to understand? Of course, that depends on what we're interested in. But, for example, if we wanted to know whether artificial general intelligence was here, or if it's close, which of these faculties might we look at? And for some other situations, which of these are most useful to understand?
Raphaël Millière (01:41:42):
So there are a number of ways in which current models are still very limited. One of them is that they undergo this training phase. They get trained on a lot of data, and by the way, they require an enormous amount of data to learn to perform well, much more than children actually need. So they are significantly less efficient at learning, and here there might be some connection between learning efficiency and intelligence. Think, again, of the earlier point about generalization power being related to intelligence. So that in and of itself is a limitation. They learn from some amount of data, and then once they're trained, they are frozen, as people say. They no longer learn. The weights of the model don't change, and you can just run inference on the system by prompting it and generating outputs, which is not associated with any updating of the model.
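The "frozen after training" point can be illustrated with a deliberately tiny model. This is a toy bigram lookup, nothing like a transformer, and the corpus and API are illustrative assumptions, but it captures the distinction being drawn: training writes the parameters once, and inference only reads them.

```python
class FrozenModel:
    def __init__(self, corpus):
        # "Training": record, for each word, the first continuation seen,
        # then never touch these parameters again.
        self.next_word = {}
        words = corpus.split()
        for a, b in zip(words, words[1:]):
            self.next_word.setdefault(a, b)

    def generate(self, prompt, n=3):
        # Inference: pure lookup; no parameter is updated by prompting.
        out = [prompt]
        for _ in range(n):
            out.append(self.next_word.get(out[-1], "<eos>"))
        return " ".join(out)

model = FrozenModel("the cat sat on the mat")
before = dict(model.next_word)
model.generate("the")              # prompting the model...
assert model.next_word == before   # ...leaves its parameters unchanged
```

A lifelong learner, by contrast, would modify `next_word` (or, in a real system, its weights) during every interaction, which is exactly what current deployed language models do not do.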
Raphaël Millière (01:42:38):
So there's no more learning, no more evolution of the system; it's totally frozen in that sense. And that's extremely different from the way in which biological systems live, evolve, learn and acquire intelligent capacities. Humans and non-human animals are capable of lifelong learning, as people say. That just means we're constantly learning and adapting our knowledge and representations of the world based on our experiences, based on incoming stimuli, based on what we do and say and think and so on. These language models don't. So that's one major limitation, in my opinion. Having systems that can learn on an ongoing basis would be interesting. Another major limitation is that these systems don't really have anything like the kinds of memory capacities that humans have, in the sense that they don't really have a long-term memory in the same way that we do.
Raphaël Millière (01:43:36):
We can store things, experiences, in long-term memory. That's somewhat related to the point about lifelong learning, in the sense that when you talk to the chatbot, at the end of the conversation it's going to forget everything you've talked about. There's no long-term storage there. There is a kind of brute-force approximation of a long-term memory, just because during the training phase the model can memorize whole sequences of text. But beyond that, there is no equivalent to how long-term memory works in human and non-human animals. It's also debatable to what extent there is anything like short-term memory in the way it works in humans and non-human animals. These models can memorize the content of the prompt, within the context window that you use to prompt them, back and forth in the conversation for a few thousand words.
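The context-window point can be sketched in a few lines. The window size and the whitespace tokenization are toy assumptions (real models use subword tokenizers and much larger windows), but the mechanism is the one described: only the most recent tokens are visible to the model, and anything earlier is simply gone.

```python
def visible_context(conversation, window=8):
    """Return only the tokens the model can actually condition on."""
    tokens = " ".join(conversation).split()
    return tokens[-window:]   # everything before the window is forgotten

conversation = [
    "my name is Ada and I live in Paris",
    "what is the weather like today",
    "remind me what my name is",
]
ctx = visible_context(conversation, window=8)
assert "Ada" not in ctx   # the earliest turn has fallen out of the window
```

This is why a long conversation eventually loses its own beginning: there is no separate store the early turns were written to, only the sliding window itself.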
Raphaël Millière (01:44:22):
But that's not quite the same as the way working memory works in humans and arguably non-human animals. So that's another limitation, in my opinion. Of course, you also want systems that don't just learn from text, but learn from other modalities. And we are getting closer to that. We nowadays have a lot of systems that also learn from images. We are slowly getting systems that might start being able to learn from videos, but that's a much harder, much more challenging problem, because then you introduce the dimension of time, and that's still a very tough nut to crack. So I think it will be progress in the right direction when we can get models that can actually learn from videos consistently. But ultimately you want to go even beyond that, because all of these things I mentioned would still be a form of passive learning, where you just feed a bunch of data to the system and it passively learns from that data by doing something like next-time-step prediction or next-token prediction.
Raphaël Millière (01:45:28):
Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do, by interacting with the world: interactive learning, not just passive learning. You want something more active, where the model is going to actually test out some hypotheses and learn from the feedback it's getting from the world about these hypotheses, in the way children do; it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists. You see babies grabbing their feet, and testing whether that's part of my body or not, and gradually, very quickly, learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.
Raphaël Millière (01:46:15):
They learn purely passively, and that's a major difference with biological intelligence as well. These are just a few examples; there are others. I think we probably also need them to develop something like metacognition, the ability to think about their own outputs and reflect upon their own thought processes, something like introspection as well: being able to have some kind of internal representation of their own representations and to reflect upon that. Maybe that could emerge from the bottom up, spontaneously, as a kind of emergent capability, or maybe we will need to build it into the system. That is an open question. Generally, in the debate on these questions in AI, some people think you can get all of these things for free, or at least that you don't need to make major modifications to current architectures for the remaining, missing capacities to emerge, just by scaling up current architectures, training them on more data and training bigger models. I'm not so sure. I think we'll need to make some important modifications, and I don't know what the next breakthrough is going to be.
Michael Dello-Iacovo (01:47:23):
Well, thank you for that. This has been a really great overview of some of these faculties as they relate to large language models. There are many more topics that we could talk about and that I would've liked to have gotten to, but unfortunately we didn't have time. I think this has been a really good overview, and hopefully our listeners find it useful. If anyone wants to find out more about you and your work, where could they go? I know that you're quite active on Twitter, and some of your Twitter threads were very informative and useful for my own research for this interview. But where would you like to point people if they'd like to see more about you and your work?
Raphaël Millière (01:47:57):
Yes. So indeed, I have a Twitter account: I'm @raphaelmilliere, so r-a-p-h-a-e-l, my first name, and my last name, m-i-l-l-i-e-r-e. I also have a website, which is just raphaelmilliere.com. I need to update it, it's been a while since I updated it, but usually I put there not just my academic work but also my public engagement work. I'm quite passionate about public engagement; I do some public writing, and I actually have a piece that I co-authored with my colleague Charles Rathkopf that should come out soon in The Atlantic, precisely about this issue of the extreme polarization of discussion on AI and how we can carve out that middle ground. So if people are interested in that, you can probably find it soon. I'll share it on Twitter, and that's about it.
Michael Dello-Iacovo (01:48:43):
Sounds great. Well, thank you so much again for joining us. We really appreciate your time and all the best.
Raphaël Millière (01:48:48):
Thank you for having me. This was really enjoyable.
Michael Dello-Iacovo (01:48:50):
Thank you. It was a pleasure. Thanks for listening. I hope you enjoyed the episode. You can subscribe to the Sentience Institute podcast on iTunes, Stitcher, or any podcast app.