• DarkSirrush@lemmy.ca · 63 points · 1 day ago

    IIRC the reason it still isn’t used is that even though it was trained by highly skilled professionals, it had some pretty bad racial and gender biases, and its accuracy only held up for white, male patients.

    Plus the publicly released results were fairly cherry-picked for their quality.
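
    A per-group accuracy breakdown is what surfaces that kind of gap. Here is a minimal sketch of the idea in Python; the labels, predictions, and demographic tags are all hypothetical stand-ins, not data from the actual system:

    ```python
    import numpy as np

    # Hypothetical evaluation data: true diagnoses, model predictions,
    # and a demographic tag for each patient.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])
    group = np.array(["white_m", "white_m", "white_m", "black_f",
                      "black_f", "black_f", "white_m", "black_f"])

    # A single overall accuracy number hides subgroup gaps,
    # so report accuracy separately for each group.
    for g in np.unique(group):
        mask = group == g
        acc = (y_true[mask] == y_pred[mask]).mean()
        print(f"{g}: accuracy {acc:.2f} (n={mask.sum()})")
    ```

    In this toy split the model is perfect on one group and useless on the other; the single overall number (0.50) hides that completely.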

    • Ephera@lemmy.ml · 19 points · 1 day ago

      Yeah, there were also several stories where the AI just detected that all the pictures of the illness had, e.g., a ruler in them, whereas the control pictures did not. It’s easy to produce impressive results when your methodology sucks. And unfortunately, those results get reported on before peer review is in and before anyone has attempted to reproduce them.
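
      That failure mode (shortcut learning) is easy to reproduce on synthetic data. A minimal sketch, assuming scikit-learn is available; the "ruler" column is a made-up stand-in for any artifact that co-occurs with the diagnosis:

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 1000

      # Ten genuinely weak signal features, plus one "ruler" column
      # that, in this data, perfectly tracks the label (the artifact).
      y = rng.integers(0, 2, n)
      signal = rng.normal(0.0, 1.0, (n, 10)) + 0.2 * y[:, None]
      ruler = y[:, None].astype(float)
      X = np.hstack([signal, ruler])

      clf = LogisticRegression().fit(X, y)
      print("accuracy with the artifact:", clf.score(X, y))  # ~1.0

      # Same patients with the rulers "cropped out": the shortcut is
      # gone and the model falls back to the weak genuine signal.
      X_clean = np.hstack([signal, np.zeros((n, 1))])
      print("accuracy without it:", clf.score(X_clean, y))  # far lower
      ```

      The model never needed to learn anything about the illness; the artifact alone was enough to ace the flawed benchmark.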

      • DarkSirrush@lemmy.ca · 9 points · 1 day ago

        That reminds me: I’m pretty sure at least one of these AI medical tests was reading metadata that included the diagnosis on the input image.
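
        That kind of leak is at least cheap to audit for: dump the metadata and rebuild each image from raw pixels before training. A minimal sketch using Pillow, assuming plain JPEG/PNG inputs and made-up file names (real medical imaging is usually DICOM, where pydicom’s tags would play the same role):

        ```python
        from PIL import Image
        from PIL.ExifTags import TAGS

        def audit_and_strip(src_path, dst_path):
            img = Image.open(src_path)

            # Print every EXIF tag so a stray "diagnosis" string
            # can't hide there.
            for tag_id, value in img.getexif().items():
                print(TAGS.get(tag_id, tag_id), "=", value)

            # Rebuild the image from raw pixel values only,
            # dropping all metadata.
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst_path)

        audit_and_strip("scan_0001.jpg", "scan_0001_clean.jpg")
        ```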

    • yes_this_time@lemmy.world · 28 points · edited · 1 day ago

      Medical science in general has terrible gender and racial biases. My basic understanding is that things have gotten better in the past 10 years or so, but the older scientific literature is littered with inaccuracies that we are still going along with. I’m thinking of drugs specifically, but I suspect it generalizes.