Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • snooggums@lemmy.world · 10 points · 20 days ago

    I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.

    Then I want all AI that is offered commercially or in commercial products to be required to verify their training data and be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts as one.

    AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.

    • Xaphanos@lemmy.world · 3 points · 20 days ago

      So, a lot of our AI customers have no real use for LLMs. They're pharmaceutical and genetics companies looking for treatments and cures for things like pancreatic cancer and Parkinson's.

      It is a big problem to paint all generative AI with the “stealing IP” brush.

      It seems likely to me that AI may be the only kind of controller that can handle all of the rapidly changing parameters needed to maintain a safe fusion process. Yes, it needs safeties. But it needs research, too.

      I urge much more consideration of the specific uses of this new technology. I agree that IP theft is bad. Let’s target the bad parts carefully.