I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?

I work on the Salesforce platform, and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight I have seen is the guy who made the tool to spam job applications to combat worthless AI job recruiters and HR tools.

  • Sentient Loom@sh.itjust.works · 23 points · 2 months ago

    I got high and put in prompts to see what insane videos it would make. That was fun. I even made some YouTube videos from it. I also saw some cool & spooky short videos that are basically “liminal” since it’s such an inhuman construction.

    But generally, no. It’s making the internet worse. And as a customer I definitely never want to deal with an AI instead of a human.

    • CookieMonsterDebate@lemmy.world · 11 points · 2 months ago

      100%. I don’t need help finding what’s on your website. I can find that myself. If I’m contacting customer support it’s because my problem needs another brain on it, from the inside. Someone who can think and take action to help me. Might require creativity or flexibility. AI has never helped me solve anything.

      • Ephera@lemmy.ml · 4 points · 2 months ago

        I mean, yeah, but that difference is quite crucial.

        People have always wanted to be the top search result without putting effort in, because that brings in ad money.
        But without putting effort in, their articles were generally short, riddled with typos, and there were relatively few of them.

        Now, LLMs let these same people pump out a hundred times as much garbage, consisting of lengthy articles in many languages. And because LLMs are specifically trained to produce human-like text, it’s difficult for search engines to filter out these low-quality results.

    • Delphia@lemmy.world · 4 points · 2 months ago

      I’ve found it’s made doing end-runs around enshittification easier.

      For example, trying to find a front suspension top for a Peugeot 206 GTi with Google means being recommended every front suspension part for the Peugeot 207, 208, VW GTI, Swift GTi… not to mention the “Best price on [insert what you searched for here]” websites that sell nothing.

      So I ask ChatGPT for the part number and search for that instead.

      • Sentient Loom@sh.itjust.works · 1 point · 2 months ago

        This is exactly the kind of thing that LLMs are good for. I also use them to get quick and concise answers about programming frameworks, instead of trying to triangulate the answer from various anecdotes on stackoverflow, or reading two hours of documentation.

        But I figured this kind of thing doesn’t count as “slop.” OP was talking about the incoherent trash hallucinations, so I left that one out.