“I am convinced we’re on the cusp of the most important transformation of our lifetimes.”
“The coming wave is going to change the world. Ultimately, human beings may no longer be the primary planetary drivers, as we have become accustomed to being. We are going to live in an epoch when the majority of our daily interactions are not with other people but with AIs. This might sound intriguing or horrifying or absurd, but it is happening.”
I’ve known Mustafa Suleyman for several years, first from his work as a co-founder of DeepMind, then at Google Health, and more recently heading up Inflection AI, a company he co-founded with Reid Hoffman and Karén Simonyan last year that has already raised over $1.3 billion in venture capital. He’s known as “Moose” by many of his colleagues and friends, which is fitting, since he’s an outsized figure (not in the physical sense) in the current AI landscape. When President Biden recently met with seven AI leaders to map out future national policy, Suleyman was among them.
One of my first intersections with Moose was the work he and his team did at DeepMind, in collaboration with UK researchers, validating an AI tool to alert clinicians that a patient is at risk for acute kidney injury. It was one of the very first AI-in-medicine papers published in Nature, and I wrote the accompanying editorial. The tool was actually implemented and worked well in the clinic, proving extremely popular among nurses and physicians, but a conflict with the NHS over data rights ultimately led to its being pulled. Nevertheless, it was a significant early commitment, and a success story, in Suleyman’s application of AI to improve patient care.
Now he’s just published a unique book (with Michael Bhaskar) on the future of AI called THE COMING WAVE. I say unique for two major reasons: (1) it is the only AI book I’ve read that integrates the two different languages, life science (genomics, synthetic biology, genome editing, CRISPR, ACTG) and the exceptionally broad digital (0110101) applications now at center stage; and (2) it strikes a healthy balance of optimism and deep concern, offering many thoughtful, concrete steps toward containment. [A couple of quotes from the book reflect that: “A choice between a future of unparalleled possibility and a future of unimaginable peril” (p. 18) and “I believe this is the great meta-problem of the 21st century” (p. 25).]
That’s a bit surprising, since to publish the book in September 2023 it had to be written almost a year earlier (thanks to the byzantine world of book publishing). Much of it was therefore formulated before ChatGPT’s release at the end of November 2022, let alone GPT-4’s release in March 2023. Yet the major advances in large language models (LLMs) we’ve recently seen were not missed by the author.
There’s no doomsday stuff; it’s calm, clear-eyed, and levelheaded. His views are grounded by many historical anchors, from the Industrial Revolution, John Lane’s steel plow of 1833, and the railway boom of the 1840s to George Orwell’s 1984. He makes the point that language, agriculture, and writing were the three prior waves that formed the foundation of civilization, now taken for granted. Or that “the amount of labor that once produced fifty-four minutes of quality light in the eighteenth century now produces more than fifty years of light.”
Suleyman certainly doesn’t need my endorsement, with a global who’s who list of people in AI and beyond providing blurbs. I did wind up taking copious notes when I read it and I’m going to list some additional memorable quotes below:
“A single AI program can write as much text as all of humanity. A single two-gigabyte image-generation model running on your laptop can compress all the pictures on the open web into a tool that generates images with extraordinary creativity and precision. A single pathogenic experiment could spark a pandemic, a tiny molecular event with global ramifications….” (p. 120)
“For the time being, it doesn’t matter whether the system is self-aware, or has understanding, or has humanlike intelligence. All that matters is what the system can do.” (p. 89)
“A key ingredient of the LLM revolution is that for the first time very large models could be trained directly, without the need for carefully curated and human-labeled data sets.”(p. 79)
“The idea that CRISPR or AI can be put back in the box is not credible. Until someone can create a plausible path to dismantling these interlocking incentives, the option of not building, saying no, perhaps even just slowing down or taking a different path isn’t there.” (p. 157)
“Over the next ten years, AI will be the greatest force amplifier in history. This is why it could enable a redistribution of power on a historic scale. The greatest accelerant of human progress imaginable, it will also enable harms—from wars and accidents to random terror groups, authoritarian governments, overreaching corporations, plain thefts and willful sabotage.” (p.224)
I hope the quotes above and this brief review give you a flavor for the book’s content and value. If you’re interested in where AI is headed, and into thought-provoking books, I’d highly recommend it.
Thanks for your support of Ground Truths!
Thank you for your astute summary.
What is missing from the list of harms AI will likely do is the removal of our significance as humans, and its contribution to further shrinking our brains, since we won't need to memorize, create, or solve problems any longer.
While I agree we cannot stop, or maybe even slow down, AI's rapid development, I don't think we are equipped to contain its potential damage.
I remember asking one of the founders of AOL early on (on an NPR program) whether the company should be held responsible if humans did not chat judiciously on the internet, just as car companies are responsible for providing safety features on a car, to which he replied no: internet companies are not responsible for human behavior, which should be entrusted to users.
I fear the developers of AI will be just as irresponsible in their vision of the future.
Will look forward to reading the book.
Let's be clear, I am not disputing AI's amazing potential. In clinical practice, I am witnessing medical house staff increasingly unable to master patients' cases in a credible fashion: lost in a sea of outdated EMR notes, unable to memorize or even clearly list the tests performed, not to mention the just-about-disappearing clinical exam, as more time is spent on an unreliable EMR that was supposedly designed to help us become better physicians. Perhaps a machine will be better eyes, ears, and brain, but in the end, will it create worse physicians? And if AI's knowledge rests on interpreting data pooled from EMRs, including incorrect data, will it be credible? Not to mention that there is less and less public confidence in science, as we painfully witnessed during the pandemic; so how will people feel about data generated by machines? Will the public believe it?
Thanks for sharing your thoughts.
Sincerely, Michèle Halpern, MD