They don’t use the generative models for this. The AIs that do this kind of work are trained on carefully curated data and have a very narrow scope that they’re good at.
Yeah, those models are referred to as “discriminative AI”. Basically, if you heard about “AI” from around 2018 until 2022, that’s what was meant.
Discriminative AIs are just really complex algorithms and, to my understanding, are not complete black boxes. As someone who receives care for a lot of medical problems, and who will be a physician in about 10 months, I refuse to trust any black-box program with my health or anyone else’s.
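To make “not a complete black box” concrete, here’s a minimal sketch (using scikit-learn, with made-up lab-value features; none of this is real clinical data): once a discriminative model is trained, you can read every learned weight back out and sanity-check it against what you know about the domain.

```python
# Minimal sketch: a discriminative classifier whose learned parameters are
# fully inspectable. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical tabular data: rows are patients, columns are lab values.
feature_names = ["age", "systolic_bp", "ldl", "hba1c"]
X = rng.normal(size=(200, 4))
y = (X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Every coefficient is visible; you can check the sign and magnitude of each
# feature's contribution instead of trusting an opaque score.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

That’s the whole point: the model is “just” a weighted sum pushed through a function, and the weights are right there to audit.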
Right now, the only legitimate use generative AI has in medicine is as a note-taker to ease the documentation burden on providers. Its output is easily checked and corrected, and if your note-taking robot develops weird biases, you can delete it and start over. I don’t trust non-human things to actually make decisions.
That brings up a significant problem - wildly different things get called AI. My company’s customers are using AI for biochem and pharma research, protein folding, and other science stuff.
I do have a tech background in addition to being a medical student, and it really drives me bonkers that we’re calling these overgrown algorithms “AI”. The generative models, I suppose, come a little closer to earning the label, since they’re black-box programs that develop themselves to some extent, but all of the reputable “AI” programs used in science and medicine are very carefully curated algorithms with specific rules and parameters that they follow.
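For contrast, that curated, rule-following style looks more like this sketch (every cutoff here is invented for illustration, not a real clinical criterion): the parameters are explicit and written down, so you can audit exactly why the program flagged something.

```python
# Minimal sketch of the "specific rules and parameters" style: every
# threshold is hand-chosen, reviewable, and version-controlled.
# All cutoffs below are hypothetical, not real clinical criteria.
def flag_for_review(systolic_bp: float, hba1c: float, ldl: float) -> bool:
    """Return True if any hand-chosen threshold is exceeded."""
    rules = [
        (systolic_bp, 180.0),  # hypothetical blood-pressure cutoff
        (hba1c, 9.0),          # hypothetical glycemic-control cutoff
        (ldl, 190.0),          # hypothetical lipid cutoff
    ]
    return any(value > limit for value, limit in rules)

print(flag_for_review(systolic_bp=185.0, hba1c=6.1, ldl=120.0))  # True
```

Nothing in there develops itself; if it misfires, you can point to the exact rule that fired and argue about the number.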
My company cut funding for traditional projects and prioritized AI projects instead. So now anything that involves any form of automation is “AI”.