Ground Truths

Andrew Ng: On OpenAI's stormy times, AI regulation, education, and where we are headed for healthcare and beyond

“A.I. is not the problem; it’s the solution.”—Andrew Ng at TED, 17 October 2023

Recorded 21 November 2023

Transcript with relevant links and a link to the audio file

Eric Topol (00:00):

Hello, it's Eric Topol with Ground Truths, and I'm really delighted to have with me Andrew Ng, a giant in AI whom I've gotten to know over the years and hold in the highest regard. So Andrew, welcome.

Andrew Ng (00:14):

Hey, thanks Eric. It's always a pleasure to see you.

Eric Topol (00:16):

Yeah, we've had some intersections in multiple areas of AI. The one I wanted to start with is that you've directly nurtured healthcare AI, and we've had the pleasure of working with Woebot Health, particularly with Alison Darcy, where the AI chatbot has been tested in randomized trials to help people with depression and anxiety. And, of course, that was a chatbot in the pre-transformer, pre-LLM era. I wonder if you could comment about that, as well as your outlook for current AI models in healthcare.

Andrew Ng (01:05):

So Alison Darcy is brilliant. It's been such a privilege to work with her over the years. One of the exciting things about AI is that it's a general purpose technology. It's not useful for just one thing. And I think in healthcare, and more broadly across the world, we're seeing many creative people use AI for many different applications. I was in Singapore a couple months ago, chatting with some folks, Dean Chang and one of his doctors, Dr. M, about how they're using AI to read EHRs in a hospital in Singapore to try to estimate how long a patient's going to be in the hospital because of pneumonia or something. And it was actually triggering helpful conversations, where a doctor would say, oh, I think this patient will be in for three days, but the AI says, no, I'm guessing 15 days. And that triggers a conversation where the doctor takes a more careful look. I thought that was incredible. So all around the world, many innovators everywhere are finding very creative ways to apply AI to lots of different problems. I think that's super exciting.

Eric Topol (02:06):

Oh, it's extraordinary to me. I think Geoff Hinton has said that the most important application of current AI is in the healthcare/medical sphere. But I think the range here is quite extraordinary. And one of the other things you've been into for all these years, with starting Coursera and all the courses for DeepLearning.AI, is the democratization of knowledge and education in AI. This relates, since just about all patients will want to look up their symptoms on whatever GPT-X, which is, of course, different from a current Google search. What's your sense of the ability to use generative AI in this way?

Andrew Ng (02:59):

I think that instead of seeing a doctor, people are asking a large language model, what's up with my symptoms? People are definitely doing it, and there have been anecdotes of this maybe even saving a few people's lives. In the United States, we're privileged to have a healthcare system that some would say is terrible, but is certainly better than many other countries'. And I feel like a lot of the early go-to-market for AI-enabled healthcare may end up being in countries or places with less access to doctors. There are definitely countries where, if someone falls sick, you can either send your kid to a doctor or you can have your family eat for the next two weeks, pick one. So families are being made to make these impossible decisions. I wish we could give everyone in the world access to a great doctor, and sometimes the alternatives that people face are pretty harsh. I think any help, even the very imperfect help of an LLM, and I know it sounds terrible, it will hallucinate, it will give bad medical advice sometimes, but is that better than no medical advice? There are really some tough ethical questions being debated around the world right now.

Eric Topol (04:18):

Those hallucinations, or confabulations, won't they get better over time?

Andrew Ng (04:24):

Yes, LLM technology has advanced rapidly. They still do hallucinate, they do still mix stuff up, but I think people still have an impression of LLM technology from six months ago, and so much has changed in the last six months. Even in the last six months, it is actually much harder now, compared to six months ago, to get an LLM, at least many of the public ones offered by the large companies, to give you deliberately harmful advice, or detailed instructions on how to commit a crime. Six months ago it was actually pretty easy, so that was not good. But now it's actually pretty hard. It's not impossible. I actually ask LLMs for strange things all the time just to test them, and yes, sometimes I can get them to do something inappropriate when I really try, but it's actually pretty difficult.

(05:13):

But hallucination is just a different thing, where LLMs do mix stuff up, and you definitely don't want that when it comes to medical advice. So it'll be an interesting balance, I think, of when we should use web search for trusted, authoritative sources. If I have a sprained ankle, hey, let me just find a webpage from a trusted medical authority on how to deal with a sprained ankle. But there are also a lot of things where there is no one webpage that just gives me an answer, and then an LLM is an alternative for generating a novel answer that's suited to my situation. In non-healthcare cases, this has clearly been very valuable. In healthcare, given the criticality of human health and human life, I think people are wrestling with some challenging questions, but hallucinations are slowly going down.
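
[Editor's note: a toy sketch of the balance Andrew describes, preferring a trusted page when one covers the question and falling back to LLM generation otherwise. The source list, URL, and llm_generate helper are hypothetical placeholders for illustration, not any real product's API.]

```python
# Toy router: prefer trusted medical pages; fall back to a (fallible) LLM.
# TRUSTED_PAGES and llm_generate are hypothetical stand-ins for illustration.
TRUSTED_PAGES = {
    "sprained ankle": "https://example.org/trusted/sprained-ankle",  # placeholder URL
}

def llm_generate(question: str) -> str:
    # Stand-in for a real LLM call (e.g., a locally run open model).
    return f"[LLM-generated answer to: {question}]"

def answer(question: str) -> str:
    q = question.lower()
    for topic, url in TRUSTED_PAGES.items():
        if topic in q:
            # An authoritative page exists; point the user there.
            return f"See trusted guidance: {url}"
    # No single page answers this; generate a tailored (and fallible) answer.
    return llm_generate(question)

print(answer("How do I deal with a sprained ankle?"))
```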

Eric Topol (05:59):

Well, hopefully they'll continue to improve on that, and as you pointed out, the other guardrails will help. Now that gets me to a little over a month ago, when we were at the TED AI program and you gave the opening talk, which was very inspirational. You basically challenged the critics and the negativism on AI over three basic issues: amplifying our worst impulses, taking our jobs, and wiping out humanity. It was very compelling, and I hope it will be posted soon; of course we'll link it. But can you give us the skinny on your antidote to the doomerism about AI?

Andrew Ng (06:46):

Yeah, so I think AI is a very beneficial technology on average. I think it comes down to, do we think the world is better off or worse off with more intelligence in it, be it human intelligence or artificial intelligence? And yes, intelligence can be used for nefarious purposes, and it has been in history. But a lot of humanity's progress has come through humans getting smarter and better trained and more educated. So I think on average the world is better off with more intelligence in it. And as for AI wiping out humanity, I just don't get it. I've spoken with some of the people with this concern, but their arguments for how AI could wipe out humanity are so vague that they boil down to: it could happen. And I can't prove it won't happen, any more than I can prove a negative like that. I can't prove that radio waves being emitted from Earth won't cause space aliens to find us and wipe us out. But I'm not very alarmed about space aliens. Maybe I should be, I don't know. And I find that there are real harms being created by the alarmist narrative on AI. One thing that's quite sad was chatting with high school students who are now reluctant to enter AI because they heard it could lead to human extinction, and they don't want any of that. It's just tragic that we're causing high school students to make a decision that's bad for themselves and bad for humanity because of really unmerited alarms about human extinction.

Eric Topol (08:24):

Yeah, no question about that. You had, I think, a very important quote during that talk: “AI is not the problem, it's the solution.” And I think that gets us to the recent flap, if you will, with OpenAI that's happened in recent days, whereby it appears to be the same tension between the techno-optimists, like you and, I would say, me, versus the effective altruism (EA) camp. I wonder what your thoughts are. Obviously we don't know all the inside dynamics of this, probably the most publicized interaction in AI that I can remember in terms of its intensity, and it's not over yet. But what were your thoughts as this has been unfolding, which is, of course, still in process?

Andrew Ng (09:19):

Yeah, honestly, a lot of my thoughts have been with all the employees of OpenAI. These are hundreds of hardworking, well-meaning people. They want to build tech, make it available to others, make the world better off, and out of the blue, overnight, their jobs, their livelihoods, and their levers to make a very positive impact on the world were disrupted, for reasons that seem vague; at least from the silence of the board, I'm not aware of any good reasons for all these wonderful people's work and livelihoods being disrupted. So I feel sad that that just happened. And OpenAI is not perfect, no organization in the world is, but frankly they're really moving AI forward, and a lot of people have benefited from the work of OpenAI. The disruption of that is also quite tragic. We will see if this turns out to be one of the most dramatic impacts of unwarranted doomsaying narratives causing a lot of harm to a lot of people. But we'll see what emerges from the situation.

Eric Topol (10:43):

Yeah, I mean, I think this whole concept of AGI, artificial general intelligence, gets down to the fundamental assertion that we're at or approximating AGI, the digital brain, or that machine understanding is at unprecedented levels. I wonder about your thoughts, because obviously there still is the camp that says this is a stochastic parrot: that anything suggesting understanding is basically an artifact of pre-training or other matters, and that to assign any real intelligence at the level of a human, even for a particular task, no less beyond human, is unfounded. What is your sense of this tension and this ongoing debate, which seemed to be part of the OpenAI board issues?

Andrew Ng (11:50):

So I'm not sure what happened on the OpenAI board, but the most widely accepted definition of AGI is AI that can do any intellectual task that a human can. And I do see many companies redefining AGI with other definitions. For the original definition, I think we're decades away. We're very clearly not there. But many companies use, let's say, alternative definitions, and yeah, with an alternative definition, maybe we're there already. One of my eCommerce friends looked at one of the alternative definitions and said, well, by that definition, I think we got AGI 30 years ago.

(12:29):

And looking on the more positive side, I think one of the signs that a company has reached AGI, frankly, would be that, as a rational economic player, it should maybe let go of all of its employees that do intellectual work. So until that happens... not to joke about it, that would be a serious thing, but I think we're still many decades away from that original definition of AGI. On the more positive side, in healthcare and other sectors, I feel like there's a recipe for using AI that I find fruitful and exciting. It turns out that jobs are made out of tasks, and I think of AI as automating tasks rather than jobs. A few years ago, Geoff Hinton made some strong statements about AI replacing radiologists. I think those predictions have really not come true today. And Eric, I enjoyed your book, which is very thoughtful about AI as well.

(13:34):

If you look at, say, the job of radiologists, they do many, many different things, one of which is reading x-rays, but they also do patient intakes and operate x-ray machines. And I find that when we look at the healthcare sector or other sectors, look at what people are doing, and break jobs down into tasks, there can often be a subset of tasks that are amenable to AI automation. That recipe is helping a lot of businesses create value and also, in some cases, make healthcare better. So I'm actually excited, because in healthcare there are so many people doing such a diverse range of tasks. I would love to see more organizations do this type of analysis.

(14:22):

The interesting thing about that is we can often automate, I'm going to make up a number, 20% or 30% or whatever, of the tasks in a lot of different jobs. So one, that's a strong sign we're far from AGI, because we can't automate a hundred percent of the intellectual tasks. But second, many people's jobs are safe, because when we automate 20% of someone's job, they can focus on the other 80% and maybe even be more productive, which causes the marginal value of labor, and therefore maybe even salaries, to go up rather than down. Actually, a few weeks ago I released a new course on Coursera, “Generative AI for Everyone,” where I go deeper into this recipe for finding opportunities, and I'm really excited about working with partners to go find these opportunities and build them.
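
[Editor's note: a toy illustration of the task-based analysis Andrew describes: break a job into tasks, score each task's automation potential, and see what fraction of the job is amenable. The task list and all numbers are invented for illustration.]

```python
# Hypothetical task inventory for one job, with made-up automation-potential
# scores between 0 (not amenable to AI) and 1 (highly amenable).
radiologist_tasks = {
    "read x-rays": 0.7,
    "patient intake": 0.2,
    "operate x-ray machine": 0.1,
    "consult with colleagues": 0.05,
}

# Keep only tasks above an (arbitrary) amenability threshold.
amenable = {task: s for task, s in radiologist_tasks.items() if s >= 0.5}
share = len(amenable) / len(radiologist_tasks)
print(f"Tasks amenable to AI automation: {list(amenable)} "
      f"({share:.0%} of listed tasks)")
```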

Eric Topol (15:15):

Yeah, I commend you for that, because you have been democratizing the knowledge of AI throughout your career, and this is so important; that new course is just one more example, and everyone could benefit from it. Getting back to your earlier point, in the clinician world there's the burdensome data clerk function: being a slave to keyboards, entering the visit data, and then all the post-visit work. Now, of course, we're seeing synthetic notes, and all of this can be driven through an automated note that doesn't involve any keyboard work. And just as you say, that comprises maybe 20 to 30% of a typical doctor's day, if not more. That change could then bring the patient and doctor together again, a relationship that has suffered because of electronic records and all of the data clerk functions. It's a great example of what you just pointed out. I love “Letters from Andrew,” which you publish; as you mentioned, one of your recent posts was about Generative AI for Everyone. And in those letters you recently addressed loneliness, which is associated with all sorts of bad health outcomes. I wonder if you could talk about how AI could help loneliness.

Andrew Ng (16:48):

So this is a fascinating case study. At AI Fund, we had wanted to do something on AI and relationships, kind of romantic relationships. And I'm an AI guy; I feel like, what do I know about romance? If you don't believe me, you can ask my wife, she'll confirm I know nothing about romance. But we were privileged to partner with the former CEO of Tinder, Renata Nyborg, who knows about relationships in a very systematic way, far more than anyone I know. Working with her deep expertise about relationships, and it turns out she actually knows a lot about AI too, plus my team's knowledge about AI, we were able to build something very unique that she announced, called Meeno. I've been playing around with it on my phone, and it's actually an interesting, remarkably good relationship mentor, frankly. I wish I had had Meeno back when I was single, to ask my dumb questions to. And I'm excited that maybe AI can help here. I feel like tech has maybe contributed to loneliness; I know the data is mixed on whether social media contributes to social isolation, there are different opinions and different types of data. But this is one case where hopefully AI can clearly not be the problem, but be part of the solution, helping people gain the skills to build better relationships.

Eric Topol (18:17):

Yeah, it's really interesting, here again, the counterintuitive idea that technology could enhance human bonds, which are in all too short supply and which we want to enhance. Of course, you've had an incredible multi-dimensional career. We talked a little bit about your role in education with the founding of the massive open online courses (MOOCs), but there's also your time with Baidu and Google. And then of course at Stanford you've seen the academic side; you've seen the leading tech titan side; and the entrepreneurial side, with the various ventures of getting behind companies that have promise. You have the whole package of experience and portfolio. How do you use that going forward? You're still so young, and the field is so exciting. Do you try to just cover all the bases, or do you see yourself changing gears in some way? You've had a foot in every aspect.

Andrew Ng (19:28):

Oh, I really like what I do. These days I spend a lot of time at AI Fund, which builds new companies using AI, and DeepLearning.AI, which is an educational arm. One of the companies that AI Fund helped incubate, Landing AI, does computer vision work, and we actually have a lot of healthcare users as well. With the recent advances in AI at the technology layer, things like large language models, I feel like a lot of the work that lies ahead for the entire field is to build applications on top of that. In fact, a lot of the media buzz has been on the technology layer, and this happens with every technology change. When the iPhone came out, when we shifted to the cloud, it's interesting for the media to talk about the technology. But it turns out the only way for the technology suppliers to be successful is if the application builders are even more successful.

(20:26):

They've got to generate enough revenue to pay the technology suppliers. So I've been spending a lot of my time thinking about the application layer and how to either build more applications myself or support others in building them. And the annoying and exciting thing about AI as a general purpose technology is there's just so much to do, so many applications to build. It's kind of like asking, what is electricity good for? Or what is the cloud good for? It's just so many different things. So it is going to take us, frankly, longer than we wish, but it will be exciting and meaningful work to go to all the corners of healthcare, education, finance, and industry, and go find these applications and help them.

Eric Topol (21:14):

Well, you have such broad and diverse experience, and you predicted much of this. I mean, you somehow knew what might happen when graphics processing units (GPUs) would go from a very low number to tens of thousands of them, and you were there, I think, before perhaps anyone else. One of the things this whole field now gets us to is potential tech dominance. By that I mean you've got a limited number of companies, like Microsoft and Google and Meta and maybe Inflection AI and a few others, that have capabilities of 30,000, 40,000, whatever number of GPUs. And then you have academic centers, like your adjunct appointment at Stanford, which maybe has a few hundred, or here at Scripps Research, which has 150. So we don't have the computing power to build base models, and what can we do? How do you see the struggle between the entities that have what appears to be, if not unlimited, then massive computing power, versus academics who want to advance the field? They have different interests, of course, but they don't have that power base. Where is this headed?

Andrew Ng (22:46):

Yeah, so I think the biggest danger of that concentration is regulatory capture. I've been quite alarmed over moves that various entities, some companies but also governments, here in the US and in Europe especially, more than other places, have been making: contemplating regulations that place a very high regulatory compliance burden that big tech companies have the capacity to satisfy, but that smaller players will not. And in particular, some companies would definitely rather not have to compete with open source. When you take a smaller model, say a 7 billion parameter model, and fine-tune it for a specific task, it works remarkably well for many specific tasks. So for a lot of applications, you don't need a giant model. I actually routinely run a 7 or 13 billion parameter model on my laptop, more for inference than fine-tuning. But it's within the realm of what a lot of players can do.

(23:51):

But if inconvenient laws are passed, and they've certainly been proposed in Europe under the EU AI Act and also in the White House Executive Order, I think we will have taken some dangerous steps toward putting in place very burdensome compliance requirements that would make it very difficult for small startups, and potentially very difficult for smaller organizations, to even release open source software. Open source software has been one of the most important building blocks for everyone in tech. If you use a computer or a smartphone, that's built on top of open source software; TCP/IP, just how the internet works, a lot of that is built on top of open source software. So regulations that hamper people just wanting to release open source would be very destructive for innovation.
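
[Editor's note: a minimal sketch of the kind of local inference Andrew mentions above, loading an open roughly 7-billion-parameter model on a laptop so no data leaves the machine. The model name and settings are illustrative assumptions, not anything he specifies.]

```python
# Hypothetical example: run a small open LLM locally with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # one example of an open ~7B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the weights fit in laptop memory
    device_map="auto",          # use a local GPU/Apple silicon if present, else CPU
)

# Inference happens entirely on the local machine; nothing is sent to a cloud API.
prompt = "Proofread this sentence: 'Their going to the store tomorrow.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```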

Eric Topol (24:48):

Right. In keeping with what we've been talking about, the doomsday prophecies and regulations and things that would slow up progress in the field, we're obviously in touch with both sides and the tension there, but the potential hazards of overregulation are perhaps not adequately emphasized. Another one of your letters (Letters from Andrew), which you just got to there, was about AI at the edge, and the fact that, in contrast to the centralized computing power at a limited number of entities, as I think we were just getting at, there's increasing potential for being able to do things on a phone or a laptop. Can you comment about that?

Andrew Ng (25:43):

Yeah, I feel like I'm going against many trends, and it sounds like I'm off in a very weird direction, but I'm bullish about AI at the edge. If I want to do grammar checking using a large language model, why do I need to send all my data to a cloud provider when a small language model can do it just fine on my laptop? Or, one of my collaborators at Stanford was training a large language model to work with electronic health records. This was work done at Stanford by one of the PhD students I've been working with. Yseem wound up fine-tuning a large language model at Stanford so that he could run inference there, and not have to ship EHR data, not have to ship private medical records, to a cloud provider. I think that was important. And if open source were shut down, I think someone like Yseem would have had a much harder time doing this type of work.
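
[Editor's note: a hedged sketch of on-premises fine-tuning in the spirit Andrew describes, adapting an open model with LoRA so sensitive records never leave the local machine. The base model, dataset, and hyperparameters are illustrative placeholders, not details of the Stanford work.]

```python
# Hypothetical example: LoRA fine-tuning of an open model on local, de-identified notes.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"  # example open base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Low-rank adapters: only these small added matrices are trained, which is
# what makes fine-tuning feasible on modest local hardware.
model = get_peft_model(model, LoraConfig(
    task_type="CAUSAL_LM", r=8, lora_alpha=16,
    target_modules=["q_proj", "v_proj"]))

# Placeholder records that live only on local disk and are never uploaded.
records = Dataset.from_dict({"text": [
    "Admission note: patient presents with ...",
    "Discharge summary: ...",
]})
tokenized = records.map(
    lambda r: tokenizer(r["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="local_ehr_model",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # Causal-LM collator copies input_ids to labels for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```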

Eric Topol (27:04):

I totally follow your point there. Now, the last thing I wanted to get to was multimodal AI in healthcare. When we spoke five years ago, when I was working on the Deep Medicine book, multimodal AI wasn't really possible, and the idea was that someday we'd have the models to do it. The idea here is that each of us has all these layers of data: our various electronic health records, our genome, our gut microbiome, our sensors and environmental data, social determinants of health, our immunome, and it just goes on and on. And there's also the corpus of medical knowledge. Right now, no one has really done multimodal. They've done bimodal AI in healthcare, where they take the electronic health records and the genome, or usually the electronic health records and a medical scan. No one has done more than a couple of layers yet.

(28:07):

And the question I have is, it seems like that's imminently going to be accomplished. So let's get to whether there will be a virtual health coach. Unlike today's virtual coaches, like Woebot and the diabetes coaches and the hypertension coaches, will we ultimately have, with multimodal AI, the ability to give feedback to any given individual to promote their health, to prevent conditions they might be at risk for later in life, or to help manage all of the conditions they already have? What's your sense of where we are with multimodal AI?

Andrew Ng (28:56):

I think there's a lot of work to be done still at unimodal: a lot of work to be done in text, and a lot of work on images. And maybe not to talk about Chang's work all the time, but just this morning I was chatting with him about trying to train a large transformer on some time series other than text or images. And some collaborators at Stanford, Jeremy Irvin and Jose, are kind of poking at the corners of this. So I think a lot of people feel, appropriately, that there's a lot of work to be done still in unimodal, and I'm cheering that on. But then there's also a lot of work to be done in multimodal, and I see work beyond text and images: maybe genome, maybe some of the time series things, maybe some of the EHR-specific things, which are maybe kind of text and kind of not. I think it was just about a year ago that ChatGPT was announced. So who knows? Just one more year of progress, and who knows where it will be.

Eric Topol (29:55):

Yeah. Well, we know there will be continued progress, that's for sure, and hopefully, as we've been discussing, there won't be significant obstacles to it. And hopefully there will be a truce between the two camps of doomerism and optimism, or somehow we'll meet in the middle. But Andrew, it's been a delight to get your views on all this. I don't know how the OpenAI affair will settle out, but it does seem to be representative of the times we live in, because at the same TED AI event that you and I spoke at, Ilya spoke about AGI, and that was followed only a matter of days later by Sam Altman talking about AGI and how OpenAI was approaching AGI capabilities. Even though, as you said, there are a lot of different definitions of AGI, the progress being made right now is extraordinary.

(30:57):

And grappling with the idea that there are certain tasks, at least certain understandings, certain intelligence, where machines may be superhuman is more than provocative. I know you are asked to comment about this all the time, and it's great, because in many respects you're an expert, neutral observer. You're not in one of these companies trying to assert that they have sparks of AGI or actual AGI or whatever. So in closing, I think we look to you as not just an expert, but one who has had such broad experience in this field, who has predicted so much of its progress, and who has warned about the things that could keep us from continuing that type of extraordinary progress. I want to thank you for that. I'll keep reading Letters from Andrew, and I hope everybody does; and as many people as possible should take your “Generative AI for Everyone” course. Thank you for what you've done for the field, Andrew. We're all indebted to you.

Andrew Ng (32:17):

Thank you, Eric. You're always so gracious. It's always such a pleasure to see you and collaborate with you.

Thanks for listening and reading Ground Truths. Please share this podcast if you found it informative.
