• meliante@lemmy.world · 2 months ago

    Ok, if you got it, you got it. If you don’t, I can’t be bothered to spend more time trying to explain what is a very simple concept that people just don’t want to entertain.

    I’m out, see you in 10 years.

    • Nibodhika@lemmy.world · 2 months ago

      He got it; you’re the one who’s not getting it. It is impossible to prompt an entire program out of an LLM. I mean, you can do it, but even a perfect LLM will give you back a steaming pile of shit, and the reason is that you asked for a steaming pile of shit without realizing it.

      I’ll make you the same challenge again: write me a prompt, as you would give this “AI”, for it to do something complex, and I’ll point out several of the assumptions a non-AGI piece of software could wrongly make.

      • meliante@lemmy.world · 2 months ago

        You really are limited, must be a pRoGrAmMeR… We’re talking about tech 10 years off; it doesn’t exist, it’s all hypothetical. And you’re asking me how to use it? I’d be fucking rich if I knew that, wouldn’t I?

        • Nibodhika@lemmy.world · 2 months ago

          No, you would be a coherent person. You can’t even fathom how you would interact with a tech, yet you believe it’s possible; that’s either literally crazy or purposefully avoiding the question. Plus, that answer proves you have absolutely no idea what an LLM is: you’re just astonished by the term “AI” and truly believe ChatGPT is intelligent. It also proves you have absolutely no idea how computers work in general; to you they’re just magic boxes that do magic and show you things on a screen, therefore anything can happen in your mind.

          You talked about the evolution of current AIs, so logically you would prompt them in natural language (since they’re LLMs). Your refusal to give me a prompt for how you would use these LLMs is obviously because you know I will find issues with it, thus confirming this is not possible for an LLM.