• Nalivai@discuss.tchncs.de · 12 hours ago

    My favourite story about it was the time a neural network was trained on x-rays to recognise tumours, I think, and performed amazingly in the study, better than any human could.
    Later it turned out that the network had been trained on real-life x-rays from confirmed cases, and what it was actually looking for was pen marks. Pen marks mean the image was studied by several doctors, which means it was more likely a case that needed a second opinion, which more often than not means there is a tumour. Which obviously means that on cases no human had looked at before, the machine performed worse than random chance.
    That’s the problem with neural networks: it’s incredibly hard to figure out what exactly is happening under the hood, and you can never be sure about anything.
    And I’m not even talking about LLMs, those are a completely different level of bullshit.
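
    To make that failure mode concrete, here’s a toy sketch (entirely my own, nothing to do with the actual study): a classifier that is allowed to see a spurious “pen mark” flag learns to lean on it, and its accuracy collapses on cases where nobody has marked the image.

    ```python
    # Toy demonstration of a spurious shortcut (all numbers made up).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Training set: a weak genuine signal plus a pen-mark flag that is present
    # for 90% of tumour cases but only 10% of healthy ones.
    y_train = rng.integers(0, 2, n)
    signal = y_train + rng.normal(0, 3.0, n)                    # weak, noisy "real" feature
    penmark = rng.random(n) < np.where(y_train == 1, 0.9, 0.1)  # spurious flag
    X_train = np.column_stack([signal, penmark])

    model = LogisticRegression().fit(X_train, y_train)

    # Test set: same weak signal, but nobody has drawn pen marks on these images.
    y_test = rng.integers(0, 2, n)
    X_test = np.column_stack([y_test + rng.normal(0, 3.0, n), np.zeros(n)])

    print("train accuracy:", model.score(X_train, y_train))  # looks impressive
    print("test accuracy :", model.score(X_test, y_test))    # collapses toward guessing
    ```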

    • lets_get_off_lemmy@reddthat.com · 11 hours ago

      That’s why too high a level of accuracy in ML always makes me squint… I don’t trust it. As an AI researcher and engineer, you have to do the due diligence of understanding your data well before you start training.
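
      One cheap piece of that due diligence, sketched below with made-up file and column names: before training on the images at all, check whether any single non-clinical metadata field can already predict the label on its own. If a scanner ID, hospital, or “has annotations” flag scores far above the majority-class baseline, something is leaking the answer.

      ```python
      # Rough leakage screen (file name and columns are hypothetical).
      import pandas as pd
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      df = pd.read_csv("scans_metadata.csv")
      label = df["tumour"]
      suspects = ["scanner_id", "hospital", "has_annotations", "year"]

      for col in suspects:
          X = pd.get_dummies(df[[col]], columns=[col])  # one-hot encode a single field
          score = cross_val_score(DecisionTreeClassifier(max_depth=3), X, label, cv=5).mean()
          print(f"{col:16s} alone predicts the label with accuracy {score:.2f}")
      # Anything far above the majority-class baseline deserves a hard look before training.
      ```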

    • logicbomb@lemmy.world · 11 hours ago

      Neural networks work very similarly to human brains, so when somebody points out a problem with a NN, I immediately think about whether a human would do the same thing. A human could also easily fake expertise by looking at pen marks, for example.

      And human brains themselves are also usually inscrutable. People generally come to conclusions without much conscious effort first. We call it “intuition”, but it’s really the brain subconsciously looking at the evidence and coming to a conclusion. Because it’s subconscious, even the person who reached the conclusion often can’t truly explain it, and if they’re forced to explain, they’ll suddenly switch to their conscious mind and different criteria, yet thanks to confirmation bias they’ll basically always land on the same conclusion their intuition gave them.

      But the point is that all of your listed complaints about neural networks are not exclusively problems of neural networks. They are also problems of human brains. And not just rare problems, but common problems.

      Only a human who is very deliberate and conscious about their work avoids those problems, but that limits which parts of the brain they can use, and it takes a lot longer and a lot of very deliberate training to be able to work that way. Intuition is a very important part of our minds, and can be especially useful for very high-level performance.

      Modern neural networks have their training data manipulated and scrubbed to avoid issues like the one you brought up. It can be done by hand for additional assurance, but it is also done automatically by the training pipeline. If your training data is images, each image gets reused many times: in its original form, rotated, cropped, transformed by standard algorithms, or in combinations of those.
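
      For what it’s worth, a minimal sketch of that kind of augmentation using torchvision (the library choice is mine; the path is made up):

      ```python
      # Minimal data-augmentation sketch with torchvision: each training image is
      # reused in randomly rotated, cropped, flipped, and colour-jittered variants.
      from torchvision import datasets, transforms

      augment = transforms.Compose([
          transforms.RandomRotation(degrees=15),          # small random rotations
          transforms.RandomResizedCrop(224),              # random crop, resized to 224x224
          transforms.ColorJitter(brightness=0.2, contrast=0.2),
          transforms.RandomHorizontalFlip(),
          transforms.ToTensor(),
      ])

      # Applied on the fly, so every epoch sees a different variant of each image.
      train_set = datasets.ImageFolder("xrays/train", transform=augment)  # hypothetical path
      ```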

      Pen marks wouldn’t even be an issue today, because images generally start off digital, and those raw digital images can be used. Just like any other medical tool, it wouldn’t be deployed unless it could be trusted. It gets trained and validated like any NN, and random radiologists don’t just start relying on it right after that: it is first used by expert radiologists, simulating actual diagnosis, who understand the system well enough to report problems. There is no technological or practical reason to think that humans will always have better outcomes than even today’s AI technology.

      • Nalivai@discuss.tchncs.de · 8 hours ago

        very similarly to human brains

        While the model of a unit in a neural network is somewhat reminiscent of a very simplified behaviouristic model of a neuron, the idea that an NN is similar to a brain is just plain wrong.
        And I’m afraid that, based on what you wrote, you didn’t understand what this story means or why I told it.