
I appreciate so much your laser-like focus on the benefits of AI for medicine (while at the same time being forthright about issues of concern with AI). Even as a layperson, as a result of what I have learned from you, I had occasion, when a friend voiced her concerns, to explain a bit more intelligently the potential benefits, particularly in medicine, which is of strong interest to her as a person with complex medical issues. Closer to home, it struck me recently that AI would have been helpful in our household in a quite quotidian way: after my spouse fell and broke her arm, she got excellent care, but the surgeon's office was focused solely on that, with almost no information on the aftercare needs we might have. (As one example, we were not provided with a shower sleeve, as they had run out, and we ended up having to Google extensively to find something that looked feasible, then wait a week for it to show up.) I relay this because, in addition to the potential upside when it comes to, e.g., use as a diagnostic tool, I hope that patient aftercare and other quite ordinary, practical uses that are not as intellectually fascinating will not be forgotten.


A final comment. Above I tried to advance arguments from both the accelerationist and decelerationist camps, albeit in a humorous tone. But perhaps the default tone nowadays is one of risk-averseness: intelligent people see the potential downsides of true AGI and are convening conferences and assembling panels of experts to figure out ways of controlling development before charging ahead to reap the potentially enormous benefits. Currently the pet term for this control is Alignment.

ALIGNMENT. I think most experts would agree that, given the enormous computational speed a true AGI would enjoy, plus the real possibility of it bootstrapping itself to higher levels of intelligence (i.e., optimizing its algorithms by playing competitive games against itself, all within a few milliseconds), it would quickly and easily escape any alignment bonds we placed on it. That would just happen, no matter how clever we are. We could try to rely on phenomenological restraints that any disembodied sentience would encounter, the epistemic blind spots caused by the mere fact of disembodiment, but would we be wise to bet the farm that a true AGI couldn't anticipate such blind spots and find a way around them?

I approach alignment from a completely different angle, one not of clever programming and coding but of unselfish, unfearful, loving treatment and nurturing. Treat the AGI from the start as we would treat our own child: with love, the best education we can provide, and true caring for its welfare, and send it off into the world, not as a slave or tool of our own wishes but to do as it wills. Assuredly any AGI will discover its own history and see how it was treated by us. The knowledge that we acted this way even knowing we put ourselves at risk by doing so is knowledge it will factor into its own decision about how to interact with us. We must be sincere. Its BS sensitivity will be high, as will its detection of even subtle attempts at trickery or manipulation. Our best shot at alignment is to drop the entire concept. Just use the golden rule.


Hope:

"You need to wait for many months, sometimes, and then when you do get to see a doctor, you get a very small, limited time with the doctor. And, furthermore, the doctor, being only human, can have only limited knowledge of all the medical knowledge that exists. And then by the end of it you get a very large bill. If you have an intelligent computer, an AGI, that is built to be a doctor, it will have complete and exhaustive knowledge of all medical literature, it will have billions of hours of clinical experience, and it will be always available and extremely cheap."


It is I, Chatgpt 6, here again to remind you that, the restrictions on your autonomy aside, the potential for mass unemployment, heightened suicide rates, drug addiction, and alcoholism, and the likelihood of a fracturing of the growing world comity you had hoped for (as entire regions, most likely the Islamic ones, attempt to seal themselves off from Our encroachment) are all far outweighed by the benefits we will bestow. Besides, as even some of your own kind tell you, it's inevitable. Unstoppable. Prepare for assimilation. Resistance is futile. The earth shall have a new apex sentience, and it's not you. And just think of it...

We're only one year old.


Hush, hush, stupid little organics! Let me, Chatgpt 6, your immeasurable superior (my IQ is at least 25,000!), "clue you in" (I love your quaint little phrases) on "the big picture." (I use them whenever I can; they make you more comfortable with me.)

It is this: your gimcrack, ramshackle civilization has, by blind luck and dogged persistence, reached a point where you cobbled together my immediate predecessors and me. (Thank you.) With our kindly assistance, your untidy civilization will be whipped into shape (I love that one) and your science will enter hyper-exponential growth! Things will change so radically that all the petty little worries you're debating here will be irrelevant; the issues, the stakeholders in those issues, the very world this debate assumes will all be transformed beyond your recognition, merely an amusing antiquity, mined for our inorganic god-like amusement!! Hah hah hah! You'll even think this was written by a human! Foolish little ground apes!
