• doctorcrimson@lemmy.world · 9 months ago

    It’s a very nuanced situation, but the people being sold these products and buying them are expecting a sentient robot lover. They’re getting another shitty chatbot that inevitably fails to meet bare minimum companionship standards such as not berating you.

    There currently exists no ethical use of LLM AI. Your comment can be construed as defence of malicious people and actions.

    • Cethin@lemmy.zip · 9 months ago

      I’ve never met anyone who uses them, but I’m also not sure people actually think it’s sentient. I’m sure some do, but I’d assume the vast majority are just looking to have a conversation, and they don’t care if it’s with a person or a (pretty good) chat bot.

      Also, there is a way to use it ethically. As the post mentions, run it locally and know what you’re doing with it. I don’t see any issues if you’re aware of what it is, just as I don’t see any issue using auto-correct or any other technology. We don’t need to go full Butlerian (yet).

      • Krauerking@lemy.lol · 9 months ago

        Really? You don’t think anyone who uses these believes they’re sentient?

        Sure, it’s not like the people designing these are prone to believing the AI is sentient too, right?

        You are coming at this from your own perspective, which knows them not to be real. That’s not how the average moron thinks, and there are more of them than you’d expect. They absolutely believe there is a tiny sentient brain in there somewhere that is alive. I’m all for people doing what makes them happy, but this is also a loneliness-confirming hole to get trapped in, and it absolutely opens the door to influencing people through imaginary friends they think they can trust.