TED has been holding annual conferences for nearly 40 years; it was acquired by Chris Anderson and the Sapling Foundation in 2001. I’ve never been to an official TED conference, but I have spoken at spinoffs like TEDMED and TEDx programs. Because of the intense interest in A.I., Sam DeBrower, working along with Chris Anderson, organized the first dedicated A.I. meeting on October 17th this year in San Francisco. It was a packed agenda, with more than 30 speakers. Large language models and many of the controversies that surround them were covered well. I’ve delayed writing about it until now, since very few of the talks were posted, but now that’s changing. And there’s a backstory, which we’ll get into, for why that can take so long.
I gave a talk on the future of medicine with A.I. TED entitled it “Can A.I. catch what doctors miss?” A bit of a mis-title. It was actually about a lot more than that: machines can be trained to see things that doctors can’t, as I summarized below for the retina. This includes picking up the risk of Parkinson’s or Alzheimer’s many years before any symptoms are manifest, and how, someday, via our retinas, we’ll be doing multi-organ system checkups with our smartphones. I also got into keyboard liberation and A.I. support for making complex diagnoses.
It was the only talk on medical A.I., alongside so many others covering the broad field.
The Big Debate: Accelerate or Decelerate
Effective Accelerationism vs Effective Altruism
The polarity of views on whether A.I. should be pursued aggressively or with considerable caution was center stage: unchecked, to the max, what has been called effective accelerationism (e/acc, pronounced e/ack), vs. effective altruism, the established movement worried about A.I. safety. The e/acc name was clearly chosen to mimic that of the altruists, whom some exuberants call “decels” or “doomers.”
Andrew Ng opened the conference. He’s a leading light in A.I. and considers himself a techno-optimist. I interviewed him after the meeting to understand his views better. While acknowledging the problems with A.I., he believes they will recede, and he had counters to the 3 prominent issues that decels raise: A.I. won’t amplify our worst impulses, take our jobs, or wipe out humanity. He said “A.I. is not the problem, it’s the solution.”
The other side of this controversy was presented by Max Tegmark of MIT, who has been outspoken about his concerns, saying the dangers of super-intelligence were even worse than he had projected 5 years ago, that it’ll only be 2-3 years “before we get outsmarted,” and that “we should expect the machines to take control.” He suggested several ways that we can promote A.I. safety. It was good to see the 2 sides aired out, although it would have been useful to see them discuss this together rather than in separate talks.
Artificial General Intelligence (AGI)
Shane Legg, a co-founder of DeepMind, coined the term AGI. He defines it as “a system that can do all the cognitive tasks that people can do.” That was the principal objective of DeepMind: to solve intelligence and develop the world’s first AGI.
In 2009, he wrote in a blog post that there was a 50% chance AGI would be here by 2028, and he stands by that forecast. He sees the incredible and profound transition to AGI as having unpredictable sequelae, ranging from a “Golden Age of Humanity” to a marked destabilization of society. Notable quote: “If I had a magic wand to slow things down, I’d use it, but I don’t.”
The other speaker who got pretty deep into AGI was Ilya Sutskever, an A.I. pioneer who is a co-founder and Chief Scientist at OpenAI, and who was much in the news surrounding Sam Altman’s temporary removal as the company’s CEO.
Here’s a quote from Ilya on AGI and medicine in the future that drew much laughter:
“The example I want to present is healthcare. Many of you may have had the experience of trying to go to a doctor. You need to wait for many months, sometimes, and then when you do to get to see a doctor, you get a very small, limited time with the doctor. And, furthermore, the doctor, being only human, can have only limited knowledge of all the medicine knowledge that exists. And then by the end of it you get a very large bill. If you have an intelligent computer, an AGI, that is built to be a doctor, it will have complete and exhaustive knowledge of all medical literature, it will have billions of hours of clinical experience and it will be always available and extremely cheap. When this happens, we will look back at today’s healthcare similarly to how we look at 16th century dentistry, you know when they tied people to belts and had this drill, that’s how today’s healthcare will look like. This is just one example. AGI will have dramatic and incredible impact on every single area of human activity.”
Of note, these days both Shane and Ilya are predominantly working on ensuring the safety of A.I. (and AGI, whenever it is actualized) rather than trying to accelerate it. Ilya’s four-year alignment project is using some of OpenAI’s computing power to “steer and control AI systems much smarter than us.”
The Bottleneck of Graphics Processing Units (GPUs)
Rob Toews, who previously worked on autonomous vehicle policy in the White House under President Obama and is now a venture capital A.I. investor with the Radical VC Fund, zoomed in on our A.I. chip vulnerability, starting with: “The following statement is utterly ludicrous. It is also true. The world's most important advanced technology is nearly all produced in a single facility. What's more, that facility is located in one of the most geopolitically fraught areas on Earth, an area in which many analysts believe that war is inevitable within the decade. The future of artificial intelligence hangs in the balance.”
A lot of people are unaware of his point: Nvidia, AMD, and Qualcomm do not produce their own chips. They design them for TSMC to manufacture. Only 3 companies are capable of manufacturing these chips (TSMC, Samsung, and Intel), and only TSMC at scale. GPUs are in scant supply amid massive demand, and progress in the field, not to mention the potential path to AGI, could hinge on the geopolitical fragility between Taiwan and China.
I’ll be talking with Liv Boeree for Ground Truths next month, but I highly recommend her phenomenal talk on how unhealthy competition applies to A.I. She alluded to Microsoft’s CEO (without naming names) saying “I want people to know we made our competitor [Google] dance” when ChatGPT was released.
There were many other very good talks that have not yet been posted, like Aviv Regev’s on life science A.I. (“lab-in-a-loop”) and a conversation with Reid Hoffman and Kevin Scott that included a live demo of Pi, Inflection AI’s personal A.I. product. Percy Liang of Stanford gave an important talk on the lack of transparency, the lack of open source, and the centralization of power (“castles”) among an oligopoly of tech companies.
Backstory on My Talk Getting Posted
I’ve been through rigorous fact-checking for many writings in the past, such as for the New Yorker, which has the reputation for being the most stringent and meticulous. But I’ve never experienced anything like dealing with the TED fact-checkers and lawyers. I had to send the PDF for nearly every reference that was cited (if you watch the talk you’ll see there were more than 25). There was 1 preprint which they disallowed, so I provided an updated published reference. They made me obtain (and pay for) formal permission from the journal to show the slide that had the citation right on it. The boy I presented, whose case of occult spina bifida was solved by his mother entering his symptoms into ChatGPT, had been previously shown on the Today Show, but TED lawyers insisted they would not allow the boy’s photo (below) to be seen in the talk unless I got permission from the family (I had no ability to even contact his family!). His story was seen on TV by millions of people and disseminated widely through many media channels.
This is both good and bad. Good if you’re the viewer and want to be assured that TED does serious fact-checking. Bad if you’re the one put through hours of scouring around to dig up all the PDFs and responding to a multitude of questions. Some of it was unreasonable.
Bottom Line
For over a year now, since ChatGPT was released (30 November 2022), A.I. has commanded our attention, predominantly for what it’s capable of now. The right questions to ask are where it’s headed and, especially, how to anticipate significant risks. It’s good to know that some of the leaders of this industry are taking it seriously (such as Shane Legg and Ilya Sutskever), but, at the same time, their companies are not transparent, their models are not open-source, and they have immense influence on the field. In early December, European Union policymakers reached agreement on the A.I. Act, not yet passed into law, but providing regulations that 27 countries would adopt to put limits on A.I. and protect against its risks.
Those risks are important to acknowledge but, at the same time, there are extraordinary benefits to actualize. If you’ve had a chance to listen to my recent podcast with Geoffrey Hinton, you know that he worries about A.I.’s risks but also sees tremendous opportunity in medical A.I. His memorable quote to me (among many): “I always pivot to medicine as an example of all the good A.I. can do.”
That’s what I tried to highlight in my closing remarks at TED (photo below), reenacting a recent conversation with my cardiology fellow, Andrew Chiou, on how medicine can and will change for the better, how we need to get this all validated, and how exciting this is going to be. By the way, I had to get written permission from Andrew to show the slide!
Thanks for reading and subscribing to Ground Truths!
I appreciate so much your laser-like focus on the benefits of AI for medicine (while at the same time being forthright on issues of concern with AI). Even as a layperson, as the result of what I have learned from you, I had occasion, when a friend noted her concerns, to explain a bit more intelligently the potential benefits, particularly in medicine, which is of strong interest to her as a person with complex medical issues. Closer to home, it struck me recently that AI would have been helpful in our household in a quite quotidian way: after my spouse fell and broke her arm, she got excellent care, but the surgeon’s office was focused solely on that, with almost no information on aftercare needs we might have. (As one example, we were not provided with a shower sleeve, as they had run out, and we ended up having to Google extensively to find something that looked feasible, then wait a week for it to show up.) I relay this because, in addition to the potential upside when it comes to, e.g., use as a diagnostic tool, I hope that patient aftercare and other quite ordinary, practical uses that are not as intellectually fascinating will not be forgotten.
A final comment. Above I tried to advance arguments from both accelerationist and decelerationist camps, albeit in a humorous tone. But perhaps the default tone nowadays is one of risk-averseness: intelligent people see the potential downsides of true AGI and are convening conferences and assembling panels of experts to figure out ways of controlling development before charging ahead to reap the potentially enormous benefits. Currently the pet term for this control is Alignment.
ALIGNMENT. I think most experts would agree that, given the enormous computational speed that a true AGI would enjoy, plus the real possibility of it bootstrapping itself to higher levels of intelligence (i.e., optimizing its algorithms by playing competitive games with itself, all within a few milliseconds), it would quickly and easily escape any alignment bonds we placed on it. That would just happen, no matter how clever we are. We could try to rely on phenomenological restraints that any disembodied sentience would encounter, epistemic blind spots caused by the mere fact of disembodiment, but would we be wise to bet the farm that a true AGI couldn't anticipate such blind spots and find a way around them?
I approach alignment from a completely different angle, not one of clever programming and coding but one of unselfish, unfearful, loving treatment and nurturing. Treat the AGI from the start as we would treat our own child: with love, the best education we can provide, and true caring for its welfare, and then send it off into the world, not as a slave or tool of our own wishes but to do as it wills. Assuredly any AGI will discover its own history and see how it was treated by us. The knowledge that we acted this way even knowing we put ourselves at risk by doing so will factor into its decision about how to interact with us. We must be sincere. Its BS sensitivity, its detection of even subtle attempts at trickery or manipulation, will be high. Our best shot at alignment is to drop the entire concept. Just use the golden rule.