• xavier666OP · 16 days ago

      Looks like the person using it has at least some idea of what they're doing.

      This is something I've been saying for about a year now, albeit in a different form – "I only ask questions to LLMs if I already know the answer".

      They are not supposed to replace coders, but to boost their productivity.

      This use case is also quite good:

      • It’s not critical; not many people are using QICs
      • It’s not hard, but just boring
      • No existing human solution
      • voracitude@lemmy.world · 16 days ago

        “I only ask questions to LLMs if I already know the answer”

        Not a developer here. I’ve been thinking about this because I hoped LLMs would be able to help me learn things at first, like a patient tutor I can ask all my stupid questions and it’ll never get annoyed with me. Since it can’t do that though, because it lies all the time, I don’t think I have a use case for it at all… About all it can do for me is rewrite or summarise English, and it doesn’t even do a particularly good job of that most of the time so I end up saving time by doing that work myself anyway. I suppose it’s pretty good at translating, but I haven’t tried it for that as I don’t have a lot of call to speak foreign languages.

        • xavier666OP · 16 days ago

          I hoped LLMs would be able to help me learn things at first, like a patient tutor I can ask all my stupid questions and it’ll never get annoyed with me

          It can do that for school-level material, because that material is redundantly represented in its training data. For anything niche or domain-specific, it will hallucinate or fail.

          I believe that when the bubble bursts, education will be one genuine use case for LLMs.

          • voracitude@lemmy.world · 16 days ago

            It can do that for school-level material, because that material is redundantly represented in its training data. For anything niche or domain-specific, it will hallucinate or fail.

            I typically don’t have an issue grasping fundamentals, so most of what I want to ask it about is probably beyond school level. My main way of learning is asking questions to make sure I understand the material – which means more potential hallucination points, and maybe worse impact, because I’ll think I get it when I’ve actually just been confidently lied to.

            For example, I’ve wondered for a while if patches of space with less gravitational curvature “age” faster than patches that are more heavily distorted by gravity wells, and what the implications of that might be. Makes sense, we know that gravity slows down subjective time. But I can’t get a productive answer out of an LLM because I can’t trust it, and it’s not worth bothering my physicist friends about.
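            For what it's worth, the effect described here is standard gravitational time dilation, which can be checked numerically without an LLM. A minimal sketch of the Schwarzschild factor for a static clock (constants are approximate; this is an illustrative calculation, not anyone's code from the thread):

            ```python
            import math

            G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2 (approximate)
            C = 2.998e8    # speed of light, m/s (approximate)

            def time_dilation_factor(mass_kg: float, radius_m: float) -> float:
                """Schwarzschild factor d(tau)/dt: proper time elapsed per unit
                coordinate time for a static clock at the given radius outside
                a spherical mass. Values below 1 mean the clock runs slow
                relative to one far from the gravity well."""
                return math.sqrt(1 - 2 * G * mass_kg / (radius_m * C ** 2))

            # Clock on Earth's surface vs. one far from Earth: the factor is
            # just under 1, i.e. the distant, flatter region "ages" faster.
            f = time_dilation_factor(5.972e24, 6.371e6)
            ```

            For Earth the deviation from 1 is only about 7 parts in 10^10, but the sign confirms the intuition above: less curvature, faster subjective time.
            
            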

          • non_burglar@lemmy.world · 15 days ago

            I believe that when the bubble bursts, education will be one genuine use case for LLMs.

            When the bubble bursts, what will survive is what makes money. Education doesn’t make money unless it’s in the national interest.