• TeddE@lemmy.world · 15 hours ago

      Just as a tangent:

      This is one reason why I’ll never trust AI.

      I imagine we might wrangle the hallucination thing (or at least get it to be more verbose about its uncertainty), but I doubt it will ever identify a poorly chosen question.

      • marcos@lemmy.world · 14 hours ago

        Making an LLM warn you when you ask a known bad question is just a matter of training it differently. It’s a perfectly doable thing, with a known solution.
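
        Roughly, a minimal sketch of what “training it differently” could mean, assuming plain supervised fine-tuning on examples that flag bad premises (the prompts and target responses below are invented for illustration):

        ```python
        # Assumed illustration: fine-tuning data where ill-posed questions get a
        # premise check instead of a confident guess. All examples are invented.
        from dataclasses import dataclass

        @dataclass
        class ChatExample:
            prompt: str
            target: str  # response the model is trained to imitate

        fine_tune_set = [
            # Well-posed question: answer normally.
            ChatExample(
                prompt="What is the boiling point of water at sea level?",
                target="About 100 °C at standard atmospheric pressure.",
            ),
            # Known-bad question: train the model to warn about the false premise.
            ChatExample(
                prompt="In which year did the Eiffel Tower in London open?",
                target="That question has a false premise: the Eiffel Tower is in "
                       "Paris, not London. Did you mean the Paris tower, opened in 1889?",
            ),
        ]

        for ex in fine_tune_set:
            print(f"PROMPT: {ex.prompt}\nTARGET: {ex.target}\n")
        ```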

        Solving the hallucinations in LLMs is impossible.

        • Leon@pawb.social · 14 hours ago

          That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text and output something that’s legible and reads as human-written. There’s no training for correctness; how do you even define that?

          • ikt@aussie.zone (OP) · 5 hours ago

            “There’s no training for correctness; how do you even define that?”

            I guess you can chat to these guys who are trying:

            “By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year”

            https://huggingface.co/deepseek-ai/DeepSeek-Math-V2
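
            Roughly, “rewards correct final answers” boils down to a verifiable reward signal. A minimal sketch of the idea (the \boxed{} answer convention and exact-match grading are assumptions for illustration, not DeepSeek’s actual grader):

            ```python
            # Assumed sketch of a verifiable reward: 1.0 if the completion's final
            # answer matches the reference, else 0.0. Not DeepSeek's real pipeline.
            import re

            def extract_final_answer(completion: str) -> str | None:
                """Pull the last \\boxed{...} expression out of a completion."""
                matches = re.findall(r"\\boxed\{([^{}]*)\}", completion)
                return matches[-1].strip() if matches else None

            def correctness_reward(completion: str, reference: str) -> float:
                """Binary reward used to score sampled solutions during RL."""
                answer = extract_final_answer(completion)
                return 1.0 if answer is not None and answer == reference.strip() else 0.0

            # Example: a competition-style problem with known answer "42".
            print(correctness_reward(r"... so the result is \boxed{42}", "42"))  # 1.0
            print(correctness_reward(r"... therefore \boxed{41}", "42"))         # 0.0
            ```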

            • Leon@pawb.social · 4 hours ago

              Sure, when it comes to mathematics you can do that, with severe limits on its success, but what about cases where correctness is less clear-cut? Two opposing statements can both be correct if the situation changes, for example.

              The problems language models are expected to solve go beyond the scope of what they’re good for. They’ll never be good at solving such problems.

              • ikt@aussie.zone (OP) · 4 hours ago

                I dunno, you’re in the wrong forum; you want Hacker News or Reddit, no one here knows much about AI.

                Although you do seem to be making the same mistake others have made before, where you point to research happening right now and then extrapolate it out into the future.

                AI has progressed so fast that I wouldn’t be making any “they’ll never be good at” type statements.