Ground Truths
Melanie Mitchell: Straight Talk on A.I. Large Language Models

Transcript with Links

Eric Topol (00:00):

This is Eric Topol, and I'm so excited to have the chance to speak to Melanie Mitchell. Melanie is the Davis Professor of Complexity at the Santa Fe Institute in New Mexico. And I look to her as not just one of the real leaders, but one with balance and thoughtfulness in the high-velocity AI world of large language models that we live in. And just by way of introduction, the way I first got to meet Professor Mitchell was through her book, Artificial Intelligence: A Guide for Thinking Humans. And it sure got me thinking, back about four years ago. So welcome, Melanie.

Melanie Mitchell (00:41):

Thanks Eric. It's great to be here.

The Lead Up to ChatGPT via Transformer Models

Eric Topol (00:43):

Yeah. There's so much to talk about, and you've been right in the middle of many of these things, so that's what makes it especially fun. I thought we'd start off with a little bit of history, because when we both were writing books about AI back in 2019, the world has kind of changed since then <laugh>. And in November, when ChatGPT got out there, it signaled there was this big thing called a transformer model. And I don't think many people really know the difference between a transformer model, which had been around for a while but maybe hadn't come to the surface, versus the deep neural networks that ushered in deep learning, which you had so systematically addressed in your book.

Melanie Mitchell (01:29):

Right. Yeah. Transformers were kind of a new thing. I can't remember exactly when they came out, maybe 2018, something like that, from Google. They were an architecture that showed that you didn't really need to have a recurrent neural network in order to deal with language. So that was one of the earlier things, you know; in Google Translate and other language processing systems, people were using recurrent neural networks, networks that sort of had feedback from one time step to the next. But now we have the transformers, which instead use what they call an attention mechanism, where the entire text that the system is dealing with is available all at once. The name of the paper, in fact, was "Attention Is All You Need," and by "attention is all you need" they meant this particular attention mechanism in the neural network. That was really a revolution and enabled this new era of large language models.
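
To make the attention idea concrete, here is a minimal sketch in NumPy of single-head scaled dot-product attention, the core operation the "Attention Is All You Need" paper introduced. The function name, toy dimensions, and random "token" vectors are illustrative choices, not anything from the conversation, and the full transformer adds learned projections, multiple heads, and more on top of this.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: every position attends to every
    other position in the sequence at once (no recurrence)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sequence
    return weights @ V                               # weighted mix of all positions' values

# Toy "sequence" of 4 token vectors of dimension 8.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)  # (4, 8): each position now carries context from all positions
```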

Eric Topol (02:34):

Yeah. And as you aptly pointed out, that was five years ago. And then it took, like, oh, five years for it to reach the public domain with ChatGPT. So what was going on in the background?

Melanie Mitchell (02:49):

Well, you know, the idea of language models (LLMs), that is, neural network language models that learn by trying to predict the next word in a text, had been around for a long time. You know, we now have GPT-4, which is what's underlying at least some of ChatGPT, but there was GPT-1 and GPT-2, you probably remember that. And all of this was going on over those many years. And I think that those of us in the field have seen more of a progression, with the increase in abilities of these increasingly large language models; it has really been an evolution. But I think the general public didn't have access to them, and ChatGPT was the first one that was generally available, and that's why it sort of seemed to appear out of nothing.
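
To illustrate the "predict the next word" objective Mitchell describes, here is a deliberately tiny sketch. A bigram counter stands in for the neural network; the corpus and function names are made up, and real LLMs use transformers trained on vastly more text, but the prediction target is the same idea.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the web-scale text real models are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model; real LLMs learn this
# kind of next-token prediction with a transformer instead of counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat': the most frequent continuation in this toy corpus
```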

Sparks of Artificial General Intelligence

Sentience vs Intelligence

Eric Topol (03:50):

Alright. So the inside world of computer science kind of saw a more natural progression, but people didn't know that LLMs were on the move. They were kind of stunned that, oh, look at these conversations I can have and how humanoid it seemed. Yeah. And you'll recall there was a fairly well-publicized event where a Google employee, back I think last fall, was put on suspension and ultimately left Google because he felt that the AI was sentient. Maybe you'd want to comment on that, because that's kind of a precursor to some of the other things we're going to discuss.

Melanie Mitchell (04:35):

Right. So yeah, one of the engineers who was working with their version of ChatGPT, which I think at the time was called LaMDA, was having conversations with it and came to the conclusion that it was sentient, whatever that means <laugh>, you know, that it was aware, that it had feelings, that it experienced emotions and all of that. He was so worried about this, and I think he made it public by releasing some transcripts of his conversations with it. And I don't think he was allowed to do that under his Google contract, and that was the issue. That made a lot of news, and Google pushed back and said, no, no, of course it's not sentient. And then there was a lot of debate in the philosophy sphere of what sentient actually means and how you would know if something is sentient. Yeah <laugh>, and it's kind of gone from there.

Eric Topol (05:43):

Yeah. And then what was interesting is that in March, based upon GPT-4, the Microsoft Research group published this Sparks paper where they said it seems like it has some artificial general intelligence, AGI, qualities, kind of making the same claim to some extent. Right?

Melanie Mitchell (06:05):

Well, that's a good question. I mean, you know, intelligence is one thing, sentience is another. There's a question of how they're related, right? Or if they're related at all, you know, and what they actually mean. And this is one of the problems: these terms are not well-defined. But I think most people in AI would say that intelligence and sentience are different. You know, something can be intelligent or act intelligently without having any sort of awareness or sense of self or, you know, feelings or whatever sentience might mean. So I think that the Sparks of AGI paper from Microsoft was more about saying that they thought GPT-4, the system they were experimenting with, showed some kind of generality in its ability to deal with different kinds of tasks. You know, and this contrasts with older-fashioned AI, which typically was narrow, could only do one task, you know, could play chess, could play Go, could do speech recognition, or could, you know, generate translations, but couldn't do all of those things. And now we have these language models, which seem to have some degree of generality.

The Persistent Gap Between Humans and LLMs

Eric Topol (07:33):

Now that gets us perfectly to an important Nature feature last week, which was called the “Easy Intelligence Test that AI chatbots fail.” And it made reference to an important study you did. First, I guess the term ARC, the Abstraction and Reasoning Corpus, was introduced a few years back by Francois Chollet. And then you did a ConceptARC test. So maybe you can tell us about this, because that seemed to show a pretty substantial gap between humans and GPT-4.

Melanie Mitchell (08:16):

Right. So Francois Chollet is a researcher at Google who put together this set of sort of intelligence-test-like puzzles, visual reasoning puzzles, that tested for abstraction abilities or analogy abilities. And he put it out there as a challenge. A whole bunch of people participated in a competition to get AI programs to solve the problems, and none of them were very successful. And so what our group did, we thought that the original challenge was fantastic, but one of the problems was that it was too hard; it was even hard for people. And also it didn't really systematically explore concepts, whether a system understood a particular concept. So, as an example, think about, you know, the concept of two things being the same, or two things being different. Okay?

(09:25):

So I can show you two things and say, are these the same or are they different? Well, it turns out that's actually a very subtle question, 'cause when we say the same, we can mean the same size, the same shape, the same color, you know, and there are all kinds of attributes in which things can be the same. And so what our system did was it took concepts like same versus different and tried to create lots of different challenges, puzzles, that required understanding of that concept. So these are very basic spatial and semantic concepts that were similar to the ones that Chollet had proposed, but much more systematic. 'Cause, you know, this is one of the big issues in evaluating AI systems: people evaluate them on particular problems.
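
A toy sketch of the ambiguity she is pointing to, that "same" depends on which attribute you compare. The objects and attributes below are invented for illustration and are not taken from ConceptARC.

```python
# Two made-up objects described by attributes.
a = {"shape": "square", "color": "red",  "size": 3}
b = {"shape": "square", "color": "blue", "size": 3}

# "Are these the same?" depends entirely on which attribute you mean.
for attr in ("shape", "color", "size"):
    print(attr, "same" if a[attr] == b[attr] else "different")
# shape same / color different / size same
```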

(10:24):

For example, you know, I think a lot of people know that ChatGPT was able to answer many questions from the bar exam. But if you take a single question from the bar exam and think about what concept it's testing, it may be that ChatGPT could answer that particular question but can't answer variations that involve the same concept. So we tried to take, inside of this ARC, Abstraction and Reasoning Corpus, domain, particular concepts and ask systematically: can the system understand different variations of the same concept? And then we tested these problems on humans, we tested them on the programs that were designed to solve the ARC challenges, and we tested them on GPT-4, and we found that humans way outperformed all the machines. But there's a caveat, though: these are visual puzzles, and we're giving them to GPT-4, which is a language model, a text system. Now, GPT-4 has been trained on images, but we're not using the version that can deal with images, 'cause that hasn't been released yet. So we're giving the system our problems in a text-based format, rather than, like the humans, actually seeing the pictures. So this can make a difference. I would say our results are preliminary <laugh>.

Eric Topol (11:57):

Well, what do you think will happen when you can use inputs with images? Do you think that it will equilibrate, that there'll be parity, or will there still be a gap in that particular measure of intelligence?

Melanie Mitchell (12:11):

I would predict there will still be a big gap. Mm-hmm <affirmative>. But, you know, I guess we'll see.

The Biggest Question: Stochastic Parrot or Real Advance in Machine Intelligence?

Eric Topol (12:17):

Well, that's what we want to get into more. We want to drill down on the biggest question about large language models, and that is, what is their level of intelligence? Is it something that is beyond the so-called stochastic parrot, or the statistical ability to adjudicate language and words? So there was a paper this week in Nature Human Behaviour, not a journal that normally publishes these kinds of papers. And as you know, it was by Taylor Webb and colleagues at UCLA. And it was basically saying that for analogical reasoning, making analogies, which would be more of a language task, I guess, but also some image capabilities, it could do as well as or better than humans. And these were college students. So <laugh>, just to qualify, they're maybe not fully representative of the species, but they're at least some learned folks. So what did you think of that study?

Melanie Mitchell (13:20):

Yeah, I found it really fascinating, and kind of provocative. And, you know, it kind of goes along with many studies that have been applying tests that were designed for humans, psychological tests, to large language models. And this one was applying sort of analogy tests that psychologists have done on humans to large language models. But there's always kind of an issue of interpreting the results, because we know these large language models most likely do not think like we do. Hmm. And so one question is, how are they performing these analogies? How are they making these analogies? So this brings up some issues with evaluation, when we try to evaluate large language models using tests that were designed for humans. One question is, were these tests actually in the training data of the large language model? You know, these language models are trained on enormous amounts of text that humans have produced, and some of the tests that that paper was using were things that had been published in the psychology literature.

(14:41):

So one question is, you know, to what extent were those in the training data? It's hard to tell, because we don't know exactly what the training data is. So that's one question. Another question is, are these systems actually using analogical reasoning the way that we humans use it, or are they using some other way of solving the problems? Hmm. And that's also hard to tell, 'cause these systems are black boxes. But it might actually matter, because it might affect how well they're able to generalize. You know, if I can make an analogy, usually you would assume that I could actually use that analogy to understand some new situation by analogy to some old situation. But it's not totally clear that these systems are able to do that in any general way. And so, you know, I do think these results, these analogy results, are really provocative and interesting.

(15:48):

But they will require a lot of further study to really make sense of what they mean. Like, when ChatGPT passes a bar exam, and let's say it does better than most humans, can you say, well, can it now be a lawyer? Can it go out and replace human lawyers? I mean, a human who passed the bar exam can do that. But I don't know if you can make the same assumption for a language model, because the way that it's answering the questions, its reasoning, might be quite different and not imply the same kinds of more general abilities.

Eric Topol (16:32):

Yeah. That's really vital. And something else that you just brought up, in multiple dimensions, is the problem of transparency. We don't even know the specs, the actual training data, you know, so many of the components that led to the model. And by not knowing this, we're kind of stuck trying to interpret it. And I guess if you could comment on that: transparency seems to be a really big issue. And then, how are we ever going to understand when there are certain aspects or components of intelligence where, you know, there does appear to be something that's surprising, something that you wouldn't have anticipated, and how could that be? Or, on the other hand, you know, why is it failing? So is transparency the key to this, or is there something more to be unraveled?

Melanie Mitchell (17:29):

I think transparency is a big part of it. Transparency meaning, you know, knowing what data the system was trained on, what the architecture of the system is, you know, what other aspects go into designing the system. Those are important for us to understand how these systems actually work and to assess them. There are some methods that people are using to try and kind of tease out the extent to which these systems have actually developed the kind of intelligence that people have. So there was a paper that came out also last week, I think from a group at MIT, where they looked at several tasks that GPT-4 did very well on, things like computer programming, code generation, mathematics, some other tasks.

(18:42):

And they said, well, if a human was able to do these kinds of tasks, some small change in the task probably shouldn't matter; the human would still be able to do it. So as an example in programming, generating code, there's this notion that an array is indexed from zero: the first element is indexed as zero, the second element is indexed as one, and so on. But some programming languages start at one instead of zero. So what if you just said, now change to starting at one? Probably a human programmer could adapt to that very quickly, but they found that GPT-4 was not able to adapt very well.
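
A hypothetical illustration of the kind of counterfactual variation described here (not the MIT paper's actual test harness): standard Python indexes from zero, and the "small change" is asking for one-based behavior instead, the sort of tweak a human programmer adapts to easily.

```python
# Standard Python: lists are 0-indexed.
xs = ["alpha", "beta", "gamma"]
assert xs[0] == "alpha"

# The counterfactual variant: "now assume indexing starts at 1."
# A human adapts to this quickly; the probe asks whether a model can too.
def get_one_based(items, i):
    if i < 1 or i > len(items):
        raise IndexError("1-based index out of range")
    return items[i - 1]

assert get_one_based(xs, 1) == "alpha"
assert get_one_based(xs, 3) == "gamma"
```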

Melanie Mitchell (19:33):

So the question was, is it able to write the program by sort of picking things that it has already seen in its training data, or is it actually developing some kind of human-like understanding of the program? And they were finding that, to some extent, it was more the former than the latter.

Eric Topol (19:57):

So when you process all this, do you lean more towards the pre-training and the stochastic parrot side, or do you think there is this enhanced, human-like understanding, that we're seeing a level of machine intelligence, not broad intelligence, but at least some parts of what we would consider intelligence, that we've never seen before? Where do you find yourself?

Melanie Mitchell (20:23):

Yeah, I think I'm sort of in the center <laugh>.

Eric Topol (20:27):

Okay. That's good.

Melanie Mitchell (20:28):

Everybody has to describe themselves as a centrist, right? I don't think these systems are, you know, stochastic parrots. They're not just parroting the data that they've been trained on, although they do that sometimes. But I do think there is some reasoning ability there. Mm-hmm <affirmative>. There is some, you know, what you might call intelligence. But the question is how do you characterize it, and, for me the most important thing is, you know, how do you decide that these systems have a general enough understanding to trust them?

Eric Topol (21:15):

Right? Right. You know,

Melanie Mitchell (21:18):

You know, in your field, in medicine, I think that's a super important question. Maybe they can outperform radiologists on some kind of diagnostic task, but the question is, you know, is that because they understand the data like radiologists do, or even better, and will therefore be much more trustworthy in the future? Or are they doing something completely different, which means that they're going to make some very unhuman-like mistakes? Yeah. And I think we just don't know.

End of the Turing Test

Eric Topol (21:50):

Well, that's an important admission, if you will. That is, we don't know. And as you're, again, I think, really zooming in on medical applications, some of them, of course, are not so critical for accuracy because, for example, if you have a conversation in a clinic and that's made into a note and all the other downstream tasks, you still can go right to the transcript and see exactly if there was a potential miscue. But if you're talking about making a diagnosis in a complex patient, and we see hallucination, confabulation, or whatever your favorite word is to characterize the false outputs, that's a big issue. But I actually really love your Professor of Complexity title, because if there's anything complex, this would fulfill it. And also, would you say it's time to stop talking about the Turing test, to retire it? Is it over with the Turing test, because it's so much more complex than that <laugh>?

Melanie Mitchell (22:55):

Yeah. I mean, one problem with the Turing test is there never was a Turing test. Turing never really gave the details of how this test should work, right? And so we've had Turing tests with chatbots, you know, since the 2000s, where people have been fooled. It's not that hard to fool people into thinking that they're talking to a human. So I do think that the Turing test is not adequate for the question of, like, are these things thinking? Are they robustly intelligent?

Eric Topol (23:33):

Yeah. One of my favorite stories you told in your book was about Clever Hans and, you know, basically faking people out that there was this intelligence there. And yeah, I think this is so apropos. A term that is used a lot, that I don't think a lot of people fully understand, is zero-shot or one-shot. Can you just help explain that to the non-computer-science community?

Melanie Mitchell (24:01):

Yeah. So in the context of large language models, what that means is: zero-shot means I just ask you a question and expect you to answer it. One-shot means I give you an example of a question and an answer, and now I ask you a new question that you should answer, but you already had an example. Two-shot is you give two examples. So it's just a matter of how many examples am I going to give you in order for you to get the idea of what I'm asking.
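
A minimal sketch of what zero-shot, one-shot, and two-shot prompts look like in practice; the task and wording are invented for illustration and are just strings you would send to a model.

```python
# Zero-shot: just the question, no worked examples.
zero_shot = "Q: What is the capital of France?\nA:"

# One-shot: a single worked example, then the new question.
one_shot = (
    "Q: What is the capital of Italy?\nA: Rome\n\n"
    "Q: What is the capital of France?\nA:"
)

# Two-shot (and beyond): more worked examples before the new question.
two_shot = (
    "Q: What is the capital of Italy?\nA: Rome\n\n"
    "Q: What is the capital of Japan?\nA: Tokyo\n\n"
    "Q: What is the capital of France?\nA:"
)

for name, prompt in [("zero-shot", zero_shot), ("one-shot", one_shot), ("two-shot", two_shot)]:
    print(f"--- {name} ---\n{prompt}\n")
```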

Eric Topol (24:41):

Well, and in a sense, if it were pre-trained unknowingly, it might not be zero-shot. That is, if the model was pre-trained with all the stuff that was really loaded into that first question or prompt, it might not really qualify as zero-shot in a way. Right?

Melanie Mitchell (24:59):

Yeah, right. If it's already seen that, if it's learned it, it's seen that in its training data.

The Great LLM (Doomsday?) Debate: An Existential Threat

Eric Topol (25:06):

Right. Exactly. Now, another topic that is related to all this is that you participated in what I would say is a historic debate, you and Yann LeCun, who I would not have necessarily put together <laugh>. I don't know that Yann is a centrist; I would say he's more, you know, on one end of the spectrum, versus Max Tegmark and Yoshua Bengio.

Eric Topol (25:37):

Yoshua Bengio, who was one of the three notables who shared a Turing Award with Geoffrey Hinton. So you were in this debate, I think called a Musk debate.

Melanie Mitchell (25:52):

Munk debate. Munk.

Eric Topol (25:54):

Munk. I was gonna say, not right. Munk debate. Yeah, the Munk Debates, which is a classic debate series out of, I think, the University of Toronto.

Melanie Mitchell (26:03):

That's right

Eric Topol (26:03):

And it was debating, you know, is it all over <laugh>? And obviously there's been a lot of this in recent weeks and months since ChatGPT surfaced. I tried to access that debate, but since I'm not a member or subscriber, I couldn't watch it, and I'd love to. But can you give us the skinny of what was discussed and your position there?

Melanie Mitchell (26:29):

Yeah. So actually you can, you can access it on YouTube.

Eric Topol (26:32):

Oh, good. Okay. Good. I'll put the link in for this. Okay, great.

Melanie Mitchell (26:37):

Yeah. So the resolution was, you know, is AI an existential threat? By existential, meaning human extinction. So pretty dramatic, right? And this debate actually has been going on for a long time, you know, since the beginning of the talk about the “singularity,” right? And there are many people in the sort of AI world who fear that once AI becomes, quote unquote, smarter than people, we'll lose control of it.

(27:33):

We'll give it some task, like, you know, solve the problem of carbon emissions, and it will then misinterpret it, or sort of not care about the consequences; it will just maniacally try and achieve that goal, and in the process of that, accidentally kill us all. So that's one of the scenarios. There are many different scenarios for this, you know. And the debate, a debate is kind of an artificial, weird, structured discussion where you have rebuttals, you know. But I think the debate really was about whether we should right now be focusing our attention on what's called existential risk, that is, that some future AI is going to become smarter than humans and then somehow destroy us, or should we be more focused on more immediate risks, the ones that we have right now, like AI creating disinformation, fooling people into thinking it's a human, magnifying biases in society, all the risks that people are experiencing immediately, or will be very soon. The debate was more about sort of what should be the focus.

Eric Topol (29:12):

Hmm.

Melanie Mitchell (29:13):

And whether we can focus on the shorter, immediate risks and also focus on the very long-term, speculative risks, and sort of what is the likelihood of those speculative risks and how would we, you know, even estimate that. So that was kind of the topic of the debate.

Eric Topol (29:35):

Did you all wind up agreeing then that...

Melanie Mitchell (29:38):

<laugh> No.

Eric Topol (29:38):

Were you scared? Or where did it land?

Melanie Mitchell (29:41):

Well, I don't know. Interestingly, what they do is take a vote of the audience at the beginning. Mm-hmm. And they ask, you know, how many people agree with the resolution, and 67 percent of people agreed that AI was an existential threat. So it was two-thirds. And then at the end, they also take a vote and ask what percent of minds were changed, and that's the side that wins. But ironically, the voting mechanism broke at the end <laugh>. So technology, you know, for the win <laugh>.

Eric Topol (30:18):

Because it wasn't a post-debate vote?

Melanie Mitchell (30:21):

But they did do an email survey, which is, I think, not very, you know,

Eric Topol (30:26):

No, not very good. No, you can't compare that. No.

Melanie Mitchell (30:28):

Yeah. So, you know, technically our side won. Okay. But I don't actually take it as a win <laugh>.

Are You Afraid? Are You Scared?

Eric Topol (30:38):

Well, I guess another way to put it: are you afraid? Are you scared?

Melanie Mitchell (30:44):

So I'm not scared of, like, superintelligent AI getting out of control and destroying humanity, right? I think there are a lot of reasons why that's extremely unlikely.

Eric Topol (31:00):

Right.

Melanie Mitchell (31:01):

But I do fear a lot of things about AI. You know, some of the things I mentioned, yes, I think are real threats, you know, real dire threats to democracy.

Eric Topol (31:15):

Absolutely.

Melanie Mitchell (31:15):

Threats to our information ecosystem, to how much we can trust the information that we have. And also just, you know, people losing jobs to AI, I've already seen that happening, right? And the sort of disruption to our whole economic system. So I am worried about those things.

What About Open-Source LLMs, Like Meta’s Llama 2?

Eric Topol (31:37):

Yeah. No, I think the inability to determine whether something's true or fake in so many different spheres is putting us in a lot of jeopardy, highly vulnerable, but perhaps not the broad existential threat to the species. Yeah. But serious stuff, for sure. Now, another thing that's been of interest of late is the willingness of at least one of these companies, Meta, to put out their model, Llama 2, as open source, I guess to make it open for everyone so that they can do whatever specialized fine-tuning and whatnot. Is that a good thing? Is that a game changer for the field? Because obviously the computer resources, which we understand, for example, the GPUs [graphic processing units] used, over 25,000 for GPT-4, not many groups or entities have that many GPUs on hand to build the base models. But is having an open model like Meta’s available, is that good? Or is that potentially going to be a problem?

Melanie Mitchell (32:55):

Yeah, I think probably I would say yes to both <laugh>.

Eric Topol (32:59):

Okay. Okay.

Melanie Mitchell (33:01):

No, 'cause it is a mixed bag. I think ultimately, you know, we talked about transparency, and open-source models are transparent. I mean, I don't think they actually have released information on the data they used to train it, right? Right. So it lacks that transparency. But at least, you know, if you are doing research and trying to understand how this model works, you have access to a lot of the model. You know, it would be nice to know more about the data it was trained on, but there are a lot of big positives there. And it also means that the data that you then use to continue training it or fine-tuning it is not being given to a big company; you're not doing it through some closed API, like you do for OpenAI.

(33:58):

On the other hand, as we just talked about, these models can be used for a lot of negative things, like, you know, spreading disinformation and so on. Right. And making them generally available and tunable by anyone presents that risk. Yeah. So I think there's an analogy, you know, with, like, genetics, for example, or disease research, where scientists had sequenced the genome of the smallpox virus, right? And there was, like, a big debate over whether they should publish that, because it could be used to, like, create a new smallpox, right? But on the other hand, it also could be used to develop better vaccines and better treatments and so on. And so with any technology like that, there's always the sort of balance between transparency, making it open, and keeping it closed. And then the question is, who gets to control it?

The Next Phase of LLMs and the Plateau of Human-Derived Input Content

Eric Topol (35:11):

Yeah. Who gets to control it? And to understand the potential for nefarious use cases, yeah, the worst-case scenario. Sure. Well, you know, I look to you, Melanie, as a leading light because you are so balanced. And, you know, the thing about you is that I have the highest level of respect, and that's why I like to read anything you write or wherever you're making comments about other people's work. Are you going to write another book?

Melanie Mitchell (35:44):

Yeah, I'm thinking about it now. I mean, I think kind of a follow-up to my book, which, as you mentioned, like your book, was before large language models came on the scene and before transformers and all of that stuff. And I think that there really is a need for some non-technical explanation of all of this. But of course, you know, every time you write a book about AI, it becomes obsolete by the time it's published.

Eric Topol (36:11):

That's what I worry about, you know? And that was actually going to be my last question to you, which is, where are we headed? Like, GPT-5 and on, the velocity is so high. Where can you get a steady state to write about and try to, you know, pull it all together? Or are we just going to be in some crazed zone here for some time, where things are moving too fast to be able to get your arms around it?

Melanie Mitchell (36:43):

Yeah, I mean, I don't know. I think there's a question of, like, can AI keep moving so fast? You know, obviously it's moved extremely fast in the last few years, but the way that it's moved fast is by having huge amounts of training data and scaling up these models. But the problem now is, it's almost like the field has run out of training data generated by people. And if people start using language models all the time for generating text, the internet is going to be full of generated text, right? Right. Human

Eric Topol (37:24):

Written

Melanie Mitchell (37:24):

Text. And it's been shown that if these models are trained on the text that they generate themselves, they start behaving very poorly. So that's a question: where's the new data going to come from?

Eric Topol (37:39):

<laugh>, and there's lots of upset among people whose data are being used.

Melanie Mitchell (37:44):

Oh, sure.

Eric Topol (37:45):

Understandably. And as you get to, is there a limit? You know, there are only so many Wikipedias and Internets and hundreds of thousands of books and whatnot to put in that are of human-sourced content. So do we reach a plateau of human-derived inputs? That's a really fascinating question. Perhaps things will not continue at such a crazed pace. I mean, the way you put together A Guide for Thinking Humans was so prototypic because it was so thoughtful, and it brought along those of us who were not trained in computer science to really understand where the state of the field was and where deep neural networks were. We need another one of those, and I nominate you to help give us the right perspective. So Melanie, Professor Mitchell, I'm so grateful to you. All of us who follow your work remain indebted to you for keeping it straight. You know, you don't ever get carried away, and we learn from that, all of us. It's really important, 'cause, you know, there are so many people on one end of the spectrum here, whether it's doomsday, or whether this is just a stochastic parrot, or open source and whatnot. It's really good to have you as a reference anchor to help us along.

Melanie Mitchell (39:13):

Well, thanks so much, Eric. That's really kind of you.
