Ground Truths

Peter Lee and the Impact of GPT-4 + Large Language AI Models in Medicine


Link to the book: The AI Revolution in Medicine

Link to my review of the book

Link to the Sparks of Artificial General Intelligence preprint we discussed

Link to Peter’s paper on GPT-4 in NEJM

Transcript (with a few highlights in bold of many parts that could be bolded!)

Eric Topol (00:00):

Hello, I'm Eric Topol, and I'm really delighted to have with me Peter Lee, who's the director of Microsoft Research and who is the author, along with a couple of colleagues, of an incredible book called The AI Revolution in Medicine: GPT-4 and Beyond. Welcome, Peter.

Peter Lee (00:20):

Hello Eric. And thanks so much for having me on. This is a real honor to be here.

Eric Topol (00:24):

Well, I think you are in the enviable position of having spent now more than seven months looking at GPT-4's capabilities, particularly in the health and medicine space. And it was great that you recorded that in a book for everyone else to learn from, because you had such a nice head start. I guess what I wanted to start with is, I mean, it's a phenomenal book. I [holding the book up], this prop, I can't resist.

Peter Lee (00:52):

<laugh>

Eric Topol (00:53):

When I got it, I stayed up most of the night because I couldn't put it down. It is so engrossing. But when you first got your hands on this and started testing it, what were your initial thoughts?

Peter Lee (01:09):

Yeah. Let me first start by saying thank you for the nice words about the book, but really, so much of the credit goes to the co-authors, Carey Goldberg and Zak Kohane. Carey in particular took my overly academic writing (I suspect you have the same kind of writing style) as well as Zak's pretty academic writing, and helped turn it into something that would be approachable to non-computer scientists and, as she put it, as much as possible a page turner. So I'm glad that her work helped make the book an easy read.

Eric Topol (01:54):

I want to just say you're very humble, because the first three chapters that you wrote yourself were clearly the best ones for me, anyway. I don't mean to interrupt, but it is an exceptional book, really.

Peter Lee (02:06):

Oh, thank you very much. It means a lot hearing that from you. You know, my own view is that the best writing and the best analyses and the best ideas for applications, or not, of this type of technology in medicine are yet to come. But you're right that I did benefit from this seven-month head start, and so, you know, I think the timing is very good. But I'm hoping that much better books and much better writings and ideas will come. You know, when you start with something like this, and I suspect, Eric, you had the same thing, you start off with a lot of skepticism. In fact, I've sort of now made light of this; I talk about the nine stages of grief that you have to go through.

(02:55):

I was extremely skeptical. Of course, I was very aware of GPT-2, GPT-3, and GPT-3.5. I understand, you know, what goes into those models really deeply, and so some of the claims, when I was exposed to the early development of GPT-4, just seemed outlandish and impossible. So I was, you know, skeptical, somewhat quietly skeptical. We've all been around the block before, and we've heard lots of AI claims. I was in that state for maybe more than two weeks. And then within those two weeks I started to become annoyed, because I saw some of my colleagues falling into what I felt was the trap of getting fooled by this technology. And then that turned into frustration and fear. I actually got angry, and to one colleague, who I won't name, I've since had to apologize, because then I moved into the phase of amazement, because you start to encounter things that you can't explain that this thing seems to be doing. And that turns into joy.

(04:04):

I remember the exhilaration of thinking, wow, I did not think I would live long enough to see a technology like this. And then intensity: there was a period of about three days when I didn't sleep; I was just experimenting. Then you run into some limits and some areas of puzzlement, and that's a phase of chagrin. And then there are real dangerous missteps and mistakes that this system can make, which you realize might end up really hurting people. And then, you know, ChatGPT gets released, and to our surprise it catches fire with people, and we learn directly through communications that some clinicians are using it in clinical settings, and that heightens the concern. I can't say I'm in the ninth stage of enlightenment <laugh> yet, but you do become very committed to wanting to help the medical community get up to speed and to be in a position to take ownership of the question of whether, when, and how a technology like this should be used. And that was really the motivating force behind the book. It was really that journey. And that journey also has given me patience with everyone else in the world, because I realize everyone else has to go through those same nine stages.

Eric Topol (05:35):

Well, those stages that you went through are actually a great way to articulate this pluripotent technology. I mean, you touched on it: ChatGPT was released November 30th and within 90 days had a hundred million distinct users, which is beyond anything in history. And then of course GPT-4 transcended that quite a bit, as you showed in the book, coming out, you know, just a very short time later in March. And I think a lot of people want access to GPT-4 because they know that there is this jump in its capabilities. But the book starts off, after Sam Altman's foreword, which was also nice, because he said, you know, this is just an early stage; as you pointed out, there's a lot more to come in the large language model space.

(06:30):

But the grabber to me was this futuristic opening: a second-year medical resident who's using an app on her phone to get the latest GPT to help manage her patient, and then all the other things it's doing to check on her patients, handling the tasks that clinicians don't really want to do and need help with. That just grabs you as to the futuristic potential, which may not be so far away. And I think then you get into the nuts and bolts. But one of the things that I think is a misnomer, which you really nailed, is how you say it isn't just that it generates; it really is great at editing and analyzing. And yet it's called generative AI. Can you expound on that, and on its unbelievable conversational capability?

Peter Lee (07:23):

Yeah. You know, the term generative AI, I tried for a while to push back on this, but I think it's just caught on, and I've given up on that. And I get it. I think especially with ChatGPT, it's of course reasonable for the public to be, you know, infatuated with a thing that can write love letters and write poetry, and with that generative capability, and of course with school children writing their essays this way, and so on. But as you say, one thing we have discovered through a lot of experimentation is that it's actually somewhat of a marginal generator of text. It is not as good a poet as good human poets. People have programmed GPT-4 to try to write whole novels, and it can do that,

(08:24):

but they aren't great. And it's a challenge. You know, within Microsoft, our Nuance division has been integrating GPT-4 to help write clinical encounter notes, and you can tell it's hitting the very limits of the capabilities and the intelligence of GPT-4 to be able to do that well. But one area where it really excels is in evaluating or judging or reviewing things, and we've seen that over and over again. In chapter three, I have this example of its analysis of some contemporary poetry, which is just stunning in its insights and its reading of metaphor and allegory. But then in other situations, in interactions with the New England Journal of Medicine, in experimentation with the use of GPT-4 as an adjunct to the review process for papers, it is just incredibly insightful in spotting inconsistencies, missing citations to precursor studies, and lack of inclusivity and diversity, you know, in approach or in terminology.

(09:49):

And these sorts of review capabilities end up being especially intriguing to me when we think about the whole problem of medical errors and the possibility of using GPT-4 to look over the work of doctors, of nurses, of insurance adjudicators and others, just as a second set of eyes, to check for errors and to check for missed possibilities. If there's a differential diagnosis, is there a possibility that's been missed? If there's a calculation for an IV medication administration, is the calculation done correctly or not? It's in those types of applications of GPT-4, as a reviewer, as a second set of eyes, that I've been especially impressed, and we try to highlight that in the book.
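To make that "second set of eyes" pattern concrete, here is a minimal sketch of asking a model to review a draft note. It assumes the OpenAI Python client; the model name, prompts, and the note itself are invented for illustration, not drawn from the book.

```python
# A minimal sketch of the "second set of eyes" pattern: asking a model to
# review a draft clinical note for errors. The model name, prompts, and the
# note are illustrative assumptions, not anything from the book.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft_note = """
Patient: 72 kg adult. Order: vancomycin 15 mg/kg IV q12h.
Calculated dose: 1,250 mg per administration.
"""

review = client.chat.completions.create(
    model="gpt-4",  # illustrative; any capable chat model
    messages=[
        {"role": "system",
         "content": "You are a careful clinical reviewer. Check this note for "
                    "calculation errors, missed differential diagnoses, and "
                    "inconsistencies. Flag anything a human should verify."},
        {"role": "user", "content": draft_note},
    ],
)
print(review.choices[0].message.content)
# A good review should flag that 15 mg/kg x 72 kg is 1,080 mg, not 1,250 mg.
```

The point of the sketch is the role reversal Peter describes: the model is not generating the note, it is auditing one.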

Eric Topol (10:43):

Yeah. That's one of the very illuminating things about the book, going well beyond the assumed utilities. In a little bit we'll talk about the liabilities, but certainly these review functions are part of that pluripotent spectrum that I think a lot of people are not aware of. One of particular interest in the medical space is something I had not anticipated. You know, when I wrote the Deep Medicine chapter "Deep Empathy," I said, well, we've got to rely totally on humans for that. But here you had examples that were quite stunning of coaching physicians by going through their communication, their note, and saying, you know, you could have been more sensitive with this, you could have done this, you could be more empathic. And as you know, since the book was published, there was an interesting study that compared a couple hundred questions directed to physicians and then to ChatGPT, which of course we wouldn't say is state of the art at this point, right? But what was seen was that the chatbot exhibited more empathy and more sensitive, higher-quality responses. So do you think, ultimately, that this will be a way we can actually use technology to foster better communication between clinicians and patients?

Peter Lee (12:10):

Well, I'll try to answer that, but then I want to turn the question to you, because I'm just dying to understand how others, especially leading thinkers like you, think about this. Because as a human being and as a patient, there's something about this that doesn't quite sit right. You know, I want the empathy to come from my doctor, my human doctor. That's in my heart, the way that I feel. And yet there's just no getting around the fact that GPT-4, and even weaker versions like GPT-3.5 and ChatGPT, can be remarkably empathetic. And as you say, there was that study that came out of UC San Diego Medicine and Johns Hopkins Medicine that, you know, was just another fairly significant piece of evidence to that point.

Here's another example. You know, my colleague Greg Moore was assisting a patient who had late-stage pancreatic cancer.

(13:10):

And there was a real struggle, for both the specialists and for Greg, to know what to say to this desperate patient, how to support this patient. And the thing that was remarkable: Greg decided to use GPT-4 to get advice, and they had a conversation, and there was very detailed advice to Greg on what to say and how to support this patient. And at the end, when Greg said thank you, GPT-4 said, you're welcome, Greg, but what about you? Do you have all the support that you need? This must be very difficult for you. So the empathy goes remarkably deep. And, you know, if you just look at how busy good doctors and especially nurses are, you can start to realize that people don't necessarily have the time to think about that.

(14:02):

And also, what GPT-4 is suggesting ends up being a prompt to the human doctor or the human nurse to actually take the time to reflect on what the patient might need to hear, right, what might be going through their minds. And so there is some empathy aid going on here. At the same time, I think as a society we have to understand how comfortable we are with the idea of empathetic care being assisted by a machine. This is something that I'm very keen and curious about within the medical community, and that's why I wanted to turn the question back around to you. How do you see this?

Eric Topol (14:46):

Yeah, I didn't foresee this, but I also recognize that we're talking about a machine version of it. I mean, it's a pseudo-empathy of sorts. But the fact that it can assess where communication can be improved, and can help foster it, is a feature that I think is extraordinary. I wouldn't have predicted that, and I've seen many good examples now, in the book and even beyond. So it's a welcome thing, and it adds another capability. It isn't that physicians and nurses are lacking empathy; their biggest issue, I think, is lacking time. Yes. And hopefully someday there's a rescue in the works, such that a lot of the tasks that are, you know, data clerk functions and other burdens will be alleviated; the keyboard liberation that has been a fantasy of mine for some years maybe ultimately will be achieved.

(15:52):

And the other thing I think that's really special in the book that I wanted to comment on: there is a chapter, by Carey Goldberg I think, about the patient side, right? All the talk is about doctors and clinicians, but it's the patients who could derive the most benefit. Out of those first hundred million people that used ChatGPT, many conversations were of course health and medical questions. But these are patients; we're all patients. And there's the idea that you could have a personal health advisor, a concept which was developed in that chapter, and the whole idea that, as opposed to a search today, you could get citations, and it would be at the literacy level of the person making the prompts. Could you comment about that? Because that seems to be very much underemphasized, this democratization of a high-level capability for getting, you know, very useful information and conversation.

Peter Lee (16:56):

Yeah. And I think this is also where some of the most difficult societal and regulatory questions might come, because while the medical community knows how to abide by regulations, and there is a regulatory framework, the same is much less true for a doctor in your pocket, which is what GPT-4 and, you know, other large language models that are emerging can become. And I think for me personally, I have come to depend on GPT-4. I use it through the Bing search engine. Sometimes it's simple things that previously were mysterious. Like, I received an explanation of benefits notice from my insurance company, and this notice has some dollar figures in it, it has some CPT codes, and I have no idea what they mean. And sometimes it's things that my son or my wife got treated for.

(17:55):

It's just mysterious. It's great to have an AI that can decode these things and can answer questions. Similarly, when I go for a medical checkup and I get my blood test results, just decoding those CBC lab test numbers is, again, an incredible convenience. But then even more: you know, my father recently passed away. He was 90 years old, but he was very ill for the last year or so of his life, seeing various specialists. My two sisters and I all lived far away from him, and so we were struggling to take care of him and to understand his medical care. It's a situation that I found all too common in our world right now, and it actually creates stress and frays relationships amongst siblings and so on.

(18:56):

And so just having an AI that can take all of the data from the three different specialists and, you know, have it all summed up, be able to answer questions, be able to summarize and communicate efficiently from one specialist to the next, to really provide some sound advice, ends up being a godsend. Not so much for my father's health, because he was on a trajectory that was really not going to be changed, but just for the peace of mind and the relationships between me and my two sisters and my mother-in-law. And so it's that kind of empowerment. You know, in corporate speak at Microsoft, we would say that's empowerment of a consumer, but it is truly empowerment. I mean, it's for real. And that kind of use of these technologies, I think, is spreading very, very rapidly, and it is incredibly empowering.
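As a hedged illustration of the consumer "decoder" use Peter describes, the interaction could look like the few lines below, again assuming the OpenAI Python client; the lab values, model name, and prompt wording are invented for the example.

```python
# A minimal sketch of decoding lab results in plain language. The lab
# values, model name, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

cbc_results = "WBC 11.2 x10^9/L, hemoglobin 10.9 g/dL, platelets 420 x10^9/L"

explanation = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[
        {"role": "system",
         "content": "Explain these lab results at an eighth-grade reading "
                    "level. Note which values fall outside typical reference "
                    "ranges, and advise discussing them with a clinician."},
        {"role": "user", "content": cbc_results},
    ],
)
print(explanation.choices[0].message.content)
```

The system prompt is where the "literacy level of the person making the prompts" idea from the previous exchange would live; it can be adjusted per user.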

(19:57):

Now, the big question is, can the medical community really harness that kind of empowered patient? I think there's a desire to do that; it's always been one of the big dreams in medicine. And then the other question is, these assistants are fallible. They make mistakes. And so what is the regulatory or legal or, you know, ethical disposition of that? These are still big questions I think we have to answer. But the overall big picture is that there's an incredible potential to empower patients with a new tool and also to kind of democratize access to really expert medical information. And you're absolutely right, it doesn't get enough attention; even in our book, we only devoted one chapter to this, right?

Eric Topol (21:00):

Right. But at least it was in there; that's good. At least you had it, because I think it's so critical to figure that out. And as you say, the ability to discriminate bad information, confabulation, hallucination, among people without medical training is much more challenging. But I also liked in the book how you could go back to another conversation to audit the first one, or a third one, so that if you are ever suspicious that you might not be getting the best information, you could do something like double or triple data entry. I thought that was really interesting. Now, Microsoft made a humongous investment in OpenAI. Yesterday Sam Altman was getting grilled, well, not really grilled, it was in a much more friendly sense, I'm sure, about what should we do. We have this two-edged sword the likes of which we've never seen.
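The "audit one conversation with another" idea Eric mentions can be sketched as two independent sessions, the second asked only to check the first, in the spirit of double data entry. A minimal sketch, with the model name and prompts as assumptions:

```python
# A minimal sketch of auditing one model conversation with a second,
# independent one. Prompts and the model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    """Run a single-turn chat completion and return the reply text."""
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return reply.choices[0].message.content

question = "What is a typical adult dose range of amoxicillin for strep throat?"

# First conversation: answer the question.
first_answer = ask("You are a careful medical information assistant.", question)

# Second, independent conversation: audit the first answer.
audit = ask(
    "You are auditing another assistant's answer for accuracy. "
    "List any errors or unsupported claims, and explain why.",
    f"Question: {question}\n\nAnswer to audit: {first_answer}",
)
print(audit)
```

Because the two sessions share no conversation history, an error has to survive two independent passes before it reaches the reader, which is the same logic as double data entry.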

(21:59):

Of course, you get into, in the book, whether it really matters if it's AGI or some advanced intelligence if it's working well; it's kind of like the explainability, black-box story. But of course it can get off the tracks. We know that. And there isn't that much difference, perhaps, between ChatGPT and GPT-4 established so far. So in that discussion, he said, well, we've got to have regulatory oversight and licensing, and it's very complex. What are your thoughts as to how to deal with the potential limitations that are still there, that may be difficult to eradicate, that are the worries?

Peter Lee (22:43):

Right. You know, at least when it comes to medicine and healthcare, I personally can't imagine that this should not be regulated. And it seems more approachable to think about regulation here, because the whole practice of medicine has grown up in this regulated space. If there's any part of life and of our society that knows how to deal with regulation, and can actually make regulations work, it is medicine. Now, having said that, I do understand that coming from Microsoft, and even more so for Sam Altman coming from OpenAI, this can sometimes be interpreted as being self-serving, wanting to set up regulatory barriers against others. I would say in Sam Altman's defense that back in 2019, just prior to the release of GPT-2, Sam Altman made public calls for thinking about regulation, for the need for external audit, and, you know, for the world to prepare for the possibility of AI technologies that would be approaching AGI.

(24:05):

And in fact, just a month before the release of GPT-4, he made a very public call, at even greater length, asking the world to do the same things. And so I think one thing that's misunderstood about Sam is that he's been saying the same thing for years. It isn't new. And I think that should give pause to people who are suspicious of Sam's motives in calling for regulation, because he basically has not changed his tune, at least going back to 2019. But if we just put that aside, you know, what I hope for most of all is that the medical community, and I really look to leading thinkers like you, particularly in our best medical research institutions, would quickly move to take assertive ownership of the fundamental questions of whether, when, and how a technology like this should be used, and would engage in the research to create the foundations for, you know, sensible regulations, with an understanding that this isn't about GPT-4; this is about the next three or four or five even more powerful models.

(25:31):

And so, you know, ideally, I think it's going to take some real research, some real inventiveness. What we explain in chapter nine of the book is that I don't believe we have a workable regulatory framework right now; we need to develop it. But the foundations for that, I think, have to be a product of research, and ideally research from our best thinkers in the medical research field. The race that we have in front of us is that regulators will rightfully feel very bad if large numbers of people start to get injured, or worse, because of a lack of regulation. And you can't blame them for wanting to intervene if that starts to happen. So we do have a kind of urgency here, whereas normally our medical research on, say, methods for clinical validation of large language models might take, you know, several years to really come to fruition. So there's a problem there. But I think the medical field can very quickly come up with codes of conduct, guidelines, expectations, and the education so that people can start to understand the technology as well as possible.

Eric Topol (26:58):

Yeah. And I think the tricky part here is that, as you know, there are a lot of doomsayers and existential threats that have been laid out by people who I respect, and I know you do as well, like Geoffrey Hinton, who is concerned. But let's say you have a multimodal AI like GPT-4, and you want to show it your skin rash or skin lesion. I mean, how can you regulate everything? And if you just go to Bing and you go to creative mode, you're going to get all kinds of responses. So this is a new animal, this is a new alien, and the question is, as you say, we don't have a framework, and we should move to get one. To me, the biggest question is the one you really got to in the book, and I know you continue to; of course, within two days of your book's publishing, the famous preprint came out, the Sparks preprint from your team at Microsoft Research, which is incredible.

(27:54):

A 169-page preprint, downloaded I don't know how many millions of times already. It is a rich preprint; we'll put in the link, of course. But there, the question is, what are we seeing here? Is this really just a stochastic parrot, a JPEG with, you know, loose juxtapositions of word linguistics, or is this a form of intelligence that we haven't seen from machines ever before? You get at that in so many ways, and you point out: does it matter? I wonder if you could just expound on this, because to me this really is the fundamental question.

Peter Lee (28:42):

Yeah. I get into that in the book in chapter three, and chapter three is my expression of frustration on this. Because it's just a machine, right? And in that sense, yes, it is just a stochastic parrot; you know, it's a big probabilistic machine that's making guesses on the next word that it should spit out, or that you will spit out, and it's making a projection for a whole conversation. The first example I use in chapter three is the analysis of a poem. The poem talks about being splashed with cold water and feeling fever, and the machine hasn't felt any of those things. And so when it's opining about those lines in the poem, it can't possibly be authentic. And so, you know, we can't say it understands these things.
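For readers who want the "probabilistic machine guessing the next word" idea pinned down, here is a toy illustration. The vocabulary and probabilities are invented; a real model scores tens of thousands of tokens with a neural network rather than a hand-written table.

```python
# Toy next-word sampling: a language model assigns a probability to every
# candidate next token and samples one. These probabilities are invented
# for illustration only.
import random

def sample_next_word(distribution: dict[str, float]) -> str:
    """Sample one word in proportion to its probability."""
    words = list(distribution)
    weights = [distribution[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Made-up model guesses for the word after "splashed with cold ..."
next_word_probs = {"water": 0.82, "rain": 0.09, "air": 0.05, "fear": 0.04}

for _ in range(3):
    print(sample_next_word(next_word_probs))
# Usually prints "water"; occasionally a lower-probability word appears,
# which is one source of the variability (and errors) in generated text.
```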

(29:39):

It hasn't experienced these things. But the frustration I have, as a scientist, and here's where I have to be very disciplined to be a scientist, is the inability to prove that. Now, there has been some very, very good research by researchers who I really respect and admire. I mean, there is Josh Tenenbaum's whole team, and his colleagues at MIT, at Harvard, the University of Washington, and the Allen Institute, and many, many others, who have done some really remarkable research, research that's directly relevant to this question of whether the large language model, quote unquote, understands what it's hearing and what it's saying. And oftentimes they provide tests that are grounded in foundational theories about why these things can't possibly understand what they're saying; the tests are designed to expose those shortcomings in large language models. But what's been frustrating, but also kind of amazing, is that GPT-4 tends to pass most, if not all, of these tests!

(31:01):

And so, if we're really honest as scientists, even though we know this thing, you know, is not sentient, it leaves us in this place where we're without definitive proof of that. And the arguments from some of the naysayers, who I also deeply respect, and I've read so much of their work, don't strike me as convincing proof either. Because if you say, well, here's a problem that I can use to cause GPT-4 to get tripped up, I have no shortage of problems; I think I could get you tripped up, Eric <laugh>, and yet that does not prove that you are not intelligent. So I think we're left with a set of two mysteries. One is, we see GPT-4 doing things that we can't explain given our current understanding of how a neural transformer operates.

(32:09):

And then, secondly, we're lacking a test, derived from theory and reason, that consistently shows a limitation of GPT-4's understanding abilities. And so, in my heart, of course, I understand these things as machines, and I actively resist anthropomorphizing these machines. But, and maybe I'm fooling myself, as a disciplined scientist I'm trying to stay grounded in proof and evidence, and right at the moment I don't believe the world has that. We'll get there; we're understanding more and more every day, but at the moment we don't have it.

Eric Topol (32:55):

I think, hopefully, everyone who's listening is getting some experience now with these large language models and realizing how much fun they are, and how we're in a new era in our lives. This is a turning point.

Peter Lee (33:13):

Yeah, that's stage four: amazement and joy.

Eric Topol (33:16):

Yeah. No, there's no question. And, you know, I think about you, Peter, because at one point you were in a high-level academic post at Carnegie Mellon, one of our leading computer science institutions in the country and in the world, and now you're in this enviable spot of having helped Microsoft get engaged with a risk, I mean a big, big bet, and one that's fascinating, and that is obviously just an iteration, with many things to come. So I wonder if you could give us your sense of where you think we'll be headed over the next few years, given the velocity at which this is moving. Not only is this new technology so different from anything previous, but it's gone, in just a few months, to where things are now, and we know that this road still stretches a long way in front of us. What's your sense? Are we going to get hallucinations under control? Are we going to start to see this pluripotency roll out, particularly in the health and medicine arena?

Peter Lee (34:35):

Yeah. You know, first off, I can't say enough good things about the team at OpenAI. I think their dedication and their focus, and, I think it'll come out eventually, the care that they've taken in understanding the potential risks and really trying to create a model for how to cope with those things; as those stories come out, I think it'll be quite impressive. At the same time, it's also incredibly disruptive, even for us as researchers <laugh>. It just disrupts everything, right? You know, I was thinking about this after I read Sid Mukherjee's new book, The Song of the Cell, because in that book on cellular biology, one of the prime characters historically is Rudolf Virchow, who confirmed cell mitosis. And, you know, the thing that was disruptive about Virchow is that, well, first off, the whole prior theory of how cells arise was debunked.

(35:44):

That didn't invalidate the scientists who had been working under the earlier theory, but it certainly debunked many of their scientific legacies. And the other thing is that after Virchow, to call yourself a biology researcher, you had to have a microscope and you had to know how to use it. And in a way, there's a similar scientific disruption here, where there are now new tools and new computing infrastructure that you need if you want to call yourself a computer science researcher. That's really incredibly disruptive. So I see a kind of bifurcation that I think is likely to happen. I think the team at OpenAI, with Microsoft's support and collaboration, will continue to push the boundaries and the frontiers, with the idea of seeing how close to AGI can truly be achieved, largely through scale. And there will be a tremendous focus of attention on improving its abilities in mathematics, in planning, and in being able to use tools, and so on. And in that, there's a strong suspicion and belief that as greater and greater levels of general cognitive intelligence are achieved, issues around things like hallucination will become much more manageable, or at least manageable to the same extent that they're manageable in human beings.

(37:25):

But then I think there's going to be an explosion of activity in much smaller, more specialized models as well. I think there's going to be a gigantic explosion in, say, open-source smaller models. Those models probably will not be as steerable and alignable, so they might have more uncontrollable hallucination and might go off the rails more easily; but for the right applications, integrated into the right settings, that might not matter. And so exactly how these models will get used, and also what dangers they might pose, what negative consequences they might bring, is hard to predict. But I do think we're going to see those two different flavors of these large AI systems coming really very, very quickly, maybe even within the next year.

Eric Topol (38:23):

Well, that's an interesting perspective, and an important one. In the book you wrote a sentence that I thought was particularly notable: "the neural network here is so large that only a handful of organizations have enough computing power to train it." We're talking about 20,000 or 30,000 GPUs, something like that; we're lucky to have two or four here. This is something that, again, would look very different if you were sitting at Carnegie Mellon right now versus sitting at Microsoft or one of the tech titan companies that have these capabilities. Can you comment about this? Because this sets up a very distinct situation we've not seen before.

Peter Lee (39:08):

Right. First off, you know, I can't really comment on the size of the compute infrastructure for training these things, but it is, as we wrote in the book, at a size that very, very few organizations can manage at this point. This has got to change at some point in the future. And even on the inference side, forgetting about training, GPT-4 is much more power-hungry than the human brain. The human brain is an existence proof that there must be much more efficient architectures for accomplishing the same tasks. So I think there's really a lot yet to discover and a lot of headroom for improvement. But, you know, the kind of challenge that I ultimately see here is that a technology like this could become as essential to the infrastructure of life as the mobile phone in your pocket.

Peter Lee (40:18):

And so then the question is, if it should become as necessary to modern life as the technology in your pocket, how quickly can the cost of this technology get to a point where that can be reasonably accomplished? If we don't accomplish that, then we risk creating new digital divides that would be extremely destructive to society. And what we want to do here is to really empower everybody, if it does turn out that this technology becomes as empowering as we think it could be.

Eric Topol (41:04):

Right. I think your point about efficiency, the drain on electricity, and no less water for cooling: these are big-ticket things. And, you know, hopefully simulating the human brain in its less power-hungry state will become part of the future as well.

Peter Lee (41:24):

Well, and hopefully these technologies will help solve problems like clean energy, right? Fusion containment, better lower-energy production of fertilizers, better nanoparticles for more efficient lubricants, new catalysts for carbon capture. If you think about it in terms of making a bet to kind of invent our way out of climate disaster, this is one of the tools that you would consider betting on.

Eric Topol (42:01):

Oh, absolutely. You know, I'm going to be talking soon with Al Gore about that, and I know he's quite enthusiastic about the potential. Having this conversation is engrossing, and I would like to talk to you for many hours, but I know you have to go. I just want to say, as I wrote in my review of the book, talking with you is very different from talking with somebody with bravado. You have great humility, and you're so balanced that when I hear something from you or read something that you've written, it's a very different perspective, because I don't know anybody who's more balanced, who is more trying to say it like it is. Not everybody listening knows you, though a lot of people do, so I just want to add that, and say thank you for taking the effort, not just to experiment with GPT-4, but to put it together in a great package so others can learn from it and, of course, expand on it as we move ahead in this new era.

(43:06):

So, Peter, thank you. It's really a privilege to have this conversation.

Peter Lee (43:11):

Oh, thank you, Eric. You're really too kind. But it means a lot to me to hear that from you. So thank you.


Thanks for listening and/or reading Ground Truths. If you found it as interesting a conversation as I did, please share it.

Much appreciation to paid subscribers—you’ve already helped fund many high school and college students at our summer intern program at Scripps Research and all proceeds from Ground Truths go to Scripps Research.