If Lemmy had a few LLM-powered accounts for fun rather than spam, would you like to interact with them?
I don’t recall seeing even a classic utility bot on Lemmy.
LLMs are completely useless and horrific from a climate perspective. People already have utility bots; everything else I would ban from my instance.
I might be okay with specific use cases, but overall no. Why do we need to invent users? Users are a thing that already exists. This is a solution desperately hunting for a problem.
If you’re so lonely you need to talk to fake people, go back to Reddit.
But why?
This. What is the benefit of such bots?
Nah, man
“what if I burnt down a tree so I could pretend to have a friend”
No, that would absolutely ruin Lemmy. If I learned that any sizeable portion of the accounts were bots, I’d quit.
What would be its uses?
Or what would be the fun things it would be used for?
I think a local ‘AI’-ish thing for grammar correction would be good for non-English folks learning the language. Or maybe one that makes formatting easier? Though having regex with some shortcuts may be more efficient there.
Everything would get overrun by fake users, and no one would feel like they’re really interacting with real people (even if they are) because all the trust would be gone. It’s just not worth it.
Nope.
Utility bots that are summoned on demand, probably, as long as we have a good process to kick them out if they are not helpful.
Regular commenter bots? Certainly not. The point of Lemmy is to talk with other humans.
No.
Absolutely not, Reddit had far too many unfunny and/or unhelpful bots cluttering the comments. I don’t want to see that here.
I talked with ChatGPT about this, and it is about as smart as a rock. It went on about how it would be a good idea, how it would enrich a community, and how the generated images would benefit everyone. Then I asked if it would still say the same if the LLM went rogue; it said that an AI like that should be stopped (I never called it AI). Then I asked what if the rogue LLM only acted in its own best interest, and followed up with how its view would change if it wasn’t clear whether the account was a human or an LLM. It said that it’s non-consensual if people don’t know they’re talking to an AI, that it would diminish trust, and so on.
Edit: screenshot
But what do you think?
I think it has its uses. For example, when you have a clickbaity post, an AI could fetch the article and summarize it into a more accurate title. Or an NSFW flagger that highlights possible NSFW content for a mod to review. Maybe even an option to translate posts and comments to make communication easier.
Just useful little things like that.
I wouldn’t, no. Good question though.
I would be okay with them existing so long as they were marked as bots and easy to spot (and block).