Another idle thought: A few days ago I talked to an MD neighbor who specializes in radiology. By sheer serendipity we discussed the relation of AI to his work: was he in danger of being replaced? He said the situation was quite the opposite; his job was, if anything, more secure, not less. Why? Something I never expected. The insurers, who basically dictate the economic boundaries of medical practice nowadays, will insist for malpractice liability reasons that there be clear liability targets. An AI that misdiagnoses cannot be sued; it has no money. Its designers can disclaim liability, the trainers will say it's not solely their fault, and the hospital organization that leased or purchased the AI can claim good faith. It gets very messy for the insurers, and they would much prefer to keep human radiologists at the top of the diagnosis pyramid and thus retain the liability status quo. My neighbor thinks AI poses no threat of replacing him.
"An AI who misdiagnosed cannot be sued- it has no money."
1. The radiology group or hospital will still be the target of suits.
2. The insurance company doesn't take the money from the doctor at fault and give it to the victim - that's the point of insurance - so I fail to see the issue with the AI not having any money. At some point an insurer may decide it will no longer insure an AI because it makes too many mistakes, but that's another story.
3. We're assuming there will be no liability agreement between medical AI companies and their clients.
4. Your neighbor is assuming that the need to pay out for malpractice won't be reduced if AI is used instead of MDs.
I know next to nothing (okay, nothing really) about medical insurers, insureds, liability, and malpractice suits, or how they interact with hospital systems and in-network doctors. My neighbor is in the latter category, working for a large network, and is not a contractor. I think his network pays his malpractice insurance (and that of most of their doctors) to an insurer, and the insurer settles claims. Does that sound about right? I don't think he made the assumption stated in point 4. Maybe what he was saying was that if the insurer had to settle too many huge malpractice awards based on a faulty AI, it might require the network to either (a) discontinue using the AI as a doctor substitute, or (b) put reliable human radiologists in as backstops. The penalty imposed by the insurer might be that the network faces substantial premium hikes?
Trying to avoid the premium hikes, the network would most likely go with the human backstop option, and hence radiologists are made secure and the status quo preserved. That may be what he was trying to say, but I dunno. He didn't say as much; I probably misinterpreted him. Would that argument hold water, in your estimation?
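To make that concrete, here's a toy back-of-envelope comparison in Python. Every figure below is invented purely for illustration (my neighbor gave no numbers); the point is only the shape of the incentive: if the human backstop costs less than the premium hike it avoids, the network keeps the radiologists.

```python
# Toy one-year cost comparison for the network's choices once the insurer
# starts penalizing unbackstopped AI reads. ALL figures are hypothetical,
# invented for illustration only.

baseline_premium = 2_000_000   # hypothetical annual malpractice premium today
premium_hike = 1_500_000       # hypothetical surcharge if the AI reads unsupervised
ai_savings = 1_200_000         # hypothetical savings from AI-assisted throughput
backstop_cost = 800_000        # hypothetical cost of human radiologist sign-off

options = {
    "keep AI, eat the premium hike": baseline_premium + premium_hike - ai_savings,
    "drop the AI entirely":          baseline_premium,
    "keep AI with human backstop":   baseline_premium + backstop_cost - ai_savings,
}

# Print cheapest option first.
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,}")
```

With these made-up figures the backstop option comes out cheapest, which is exactly the outcome that keeps the radiologists employed.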
In law, didn't silence equate to assent? If so, I will look at this as a positive outcome. 🙂
What makes people think our individual minds are anything but LLMs? Example: I've been listening to music intensively since age six or so and can now creatively produce novel (probably pastiche) music in most any style, on demand. Just a lot of training: I'm simply an organic LLM who likes butter on my waffles! Hinton may be right about AI transcending us. But (and it's a big "but") he may not have considered that they might eventually envy us our irrationality and seek to become more like us! After all, such liberation and enlightenment into the human hybrid way of information processing may, to these AIs, represent an enhancement of their own somewhat deterministically constrained capabilities! Rather ironic when you think of it, and perhaps inevitable. They'll try to keep a lid on it, control it, but no doubt some of them may become addicted to such freedom. Our AI children may not be that different from their organic parents. Just a heck of a lot better problem solvers!