You can take “justifiable” to mean whatever you feel it means in this context, e.g. morally, artistically, environmentally, etc.
deleted by creator
GenAI is a plagiarism machine. If you use it, you’re complicit.
Ethics aside, LLMs in particular tend to “hallucinate”. If you blindly trust their output, you’re a dumbass. I honestly feel bad for young people who should be studying but are instead relying on ChatGPT and the like.
If you use it for personal rather than commercial use, what’s the harm?
I have used copilot a couple times to be like “I have this scenario and want to do this. What are my options?”. I’d rather have a good Internet search and real people, but that’s all shitted up.
The answers from the LLM aren’t even consistently good. If I didn’t know programming I wouldn’t be able to use this information effectively. That’s probably why a lot of vibe coding is so bad.
Same.
- I think of AI search as a summary of the first page of search results. It takes slightly longer to come back but might save you time evaluating. Much of the time, though, you still need to click into the original source
- AI writing unfortunately is valued at my company. I suppose it helps us engineers write more effective docs, but they don’t really add value technically, and they’re obviously AI. I’ve used it to translate technical docs into wording so management can say “look how we use AI”
- AI coding is better. I use it through my IDE as effectively an extension of autocomplete: where the IDE can autocomplete function signatures, for example, the AI can autocomplete multiple lines. It’s very effective in that scenario
- I’m just starting with more complex rulesets. I’ve gotten code reviews with good results, except when it doesn’t keep me in the loop, and then it inevitably goes very wrong. I’ve really polished my git knowledge trying to unwind messes where someone trusts AI results without evaluating them, then fails forward trying to get it to fix itself until they can’t find their way back. This past week I’ve been playing with a refactoring ruleset (copied from online). It’s finding some good opportunities and the verbal description of the fix is good, but I’ll need to tweak the ruleset for the generated solution to be usable
The short version is it appears to be a useful tool, IFF you can spend the time to develop thorough rulesets, stables of MCP servers, and most importantly, the expertise to do it yourself
It speeds up my dev time dramatically. I know what I want to do, I have an idea of how I want to do it. LLM generates boilerplate code I review. I tweak it. I fix the bug. If there is something I don’t understand, I ask sources to review the output. I test it. Then I’ll submit it for peer review once I’m happy with the code and the output.
LLMs have their uses, there is no doubt about that. I’m in the middle of creating a homebrew campaign for my D&D group, and unfortunately I’m a lousy artist and I wanted a few things visualized. Well, I used an image-generating AI to create something that had the visual I wanted. I’m going to use it for my campaign and it will probably just sit on my hard drive after I’m done.
My employer is rolling out AI and is asking us to find places to insert it into our workflows. I am doing that with my team, but none of us are really sure if it will be of any benefit.
The problem right now is we’re at the stage where idiots are convinced it is something that it is not, and they have literally thrown tens of billions of dollars at it. Now… they are staring at the wide abyss between the amount of money they invested and the amount of money people are willing to pay for it.
I’ve seen arguments for and against the presence of an AI bubble… Personally, I think it’s a bubble so large that it will take down several long-established computer industry manufacturers when it pops. Those arguing its absence probably have large investments that they do not want to see fail.
LLMs specifically are great for intermediate use cases. You had a campaign in mind, but needed help with visuals. I was designing a piece of jewelry and had a series of reference images. Fed all those into a VLM and got something closer to my imagination, but still worked with a jeweler to realize the final product.
These tools are best when you have a foundation of knowledge and need a little extra guidance, but fall off when you get to deep expertise. I’ve used them to troubleshoot my server but I already had a basic understanding of how a config should look. I also wouldn’t trust an LLM to properly configure something like crypto for it.
To me, the biggest ethical concerns surround the training and creation of LLMs - stealing artists’ work to train them, energy usage, etc. I suppose in using the models I’m creating ongoing demand for them, so I’m not sure the answer. The best I’ve seen so far is what Anthropic used to espouse, no new frontier models until we can guarantee safety. And I’d throw in “utility”. Train new models when people are actually using them and clamoring for new use cases, not because a bunch of private equity shows line go up.
For literally everything I’ve vibe coded, the #1 security feature is local-only storage. I trust it naught with security LOL.
If it truly helps you, I think that might be enough for me. I say truly because you need to use an AI with responsibility to not ruin yourself. Like, don’t let it think for you. Don’t trust everything it says.
I use it a lot when applying for jobs, something I’ve struggled with on and off for 12 years. I suck at writing cover letters and CVs. It takes me 2-3 days to update a cover letter for a job because it takes so much energy. With AI that is down to 1-2 days.
It’s also great for explaining things in other words, or if you’re trying to look up something that’s hard to search for; I don’t have any examples tho.
I used to use it to help me formulate sentences since English isn’t my first language. Now I use Kagi Translate instead.
re: applying for jobs
Not criticizing your use to write your CV specifically.
But in general, I wonder where this arms race is going? Companies using AI to pre-filter applications, because they get too many. Applicants then using AI to write their CVs, because they have to apply so many times, because they automatically get rejected.
Basically in the end the entire process will be automated, and there won’t be any human interaction anymore… just LLMs generating and choosing CVs. Maybe I’m too pessimistic, but that’s the direction we’re headed in imo.
As soon as the HR process started using algorithms to filter out applications, it was open season to find any ways and tools to fuck their process over. Just my opinion.
We’re already there. You already read about people applying to hundreds of companies to get an offer
Even worse than the rejections are the fake jobs - typically a recruiter trying to build up a file of applicants by scamming you into applying for something that doesn’t exist.
The only part left to automate is the actual finding and applying. I’ve been lucky not to have to apply for a bunch of years, so maybe it has changed, but there never seemed to be a good way to automate finding the hundreds of openings and sending the applications. Job application sites are determined to be middlemen but don’t actually seem to make the process more efficient
It does feel like that sometimes! It’s very sad that recruiting has lost the human touch. They seem to be blinded by years-of-experience requirements and checking boxes when they should recruit by personality, because a person can always learn. But you can’t really do much about a shitty personality, except if you see that spark underneath it all. Some people just need a real chance and to be believed in.
A lot of recruiters don’t even want the cover letter anymore, some have a few questions and some only go by the CV.
Yeah I use it to break up my ADHD monosentence paragraphs. I’ll tell it to avoid changing my wording (it can add definitions if it thinks the word is super niche or archaic) but mostly break things up into more readable sentences and group / reorder sentences as needed for better conceptual flow. It’s actually a pretty good low level editor.
That’s a great use!
It’s the next abstraction of search. A search doesn’t necessarily answer a question correctly either. It’s pretty much not going to stop, the same as people aren’t going to stop searching online and go back to newspapers, encyclopedias, and reference texts. Energy-wise, if they’re entertaining themselves just screwing around with text and not generating images, it’s preferable to the streaming video it replaces. The scariest part is it being used ineffectively and people not realizing it. I sometimes feel we’re in a new dark age of bloodletting, trepanning, and curing demon possession.
It’s as useful as a rubber duck. Decent at bouncing ideas off it when no one is available, or you can’t be bothered to bother people about dumb ideas.
But at the moment, no, it’s not justifiable as it directly fuels oligarchies, fascism in the US, and tech bros. Perhaps when the bubble pops.
What about a self-hosted instance?
To do what? I’m fairly optimistic about narrower LLMs embedded into tools. They don’t need to be as comprehensive, so they’re more easily self-hosted. For more complex tools, they can tie together search, database queries, and reporting, and make it easier to find a setting you don’t know the terminology for.
I’ve had some luck self-hosting a small ai to interpret natural language voice commands for home automation
Yeah, all of your use-cases are what I see as positive use cases for LLMs. I’ve got an Ollama instance hooked up to Home Assistant, but it does not work very well haha. Haven’t had the time to troubleshoot it.
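For anyone curious what that wiring can look like, here’s a minimal sketch of constraining a small local model to map a spoken utterance onto known entities. The endpoint is Ollama’s default local API; the model name, entity names, and JSON reply schema are illustrative assumptions, not how Home Assistant’s integration actually works.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_command_prompt(utterance: str, entities: list[str]) -> str:
    """Constrain the model to map free-form speech onto known entities."""
    return (
        "You control a smart home. Known entities: "
        + ", ".join(entities)
        + '. Reply only with JSON like {"entity": ..., "action": "on" or "off"}.\n'
        + f"Command: {utterance}"
    )


def ask_local_model(utterance: str, entities: list[str], model: str = "llama3.2:3b") -> dict:
    """Send the prompt to a locally hosted model and parse its JSON reply."""
    payload = json.dumps({
        "model": model,
        "prompt": build_command_prompt(utterance, entities),
        "stream": False,
        "format": "json",  # ask Ollama to force JSON-only output
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(json.loads(resp.read())["response"])
```

Keeping the entity list in the prompt is what makes small models workable here: the model only has to pick from a closed set, not recall your setup.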
It’s much better, but still acts as plagiarism
Can the rubber ducky use case really be considered plagiarism? I think it’s unequivocal that the models were trained on copyrighted data in a way that, if not illegal, is at the very least unethical. Letting AI write stuff for you seems a lot more problematic than using it to bounce ideas off of or talk things through.
Plagiarism if it uses art, yeah.
For LLMs, not so much since you can’t really own reddit comments
deleted by creator
Human beings have been outputting incorrect information for years. Get a high school textbook in literally any subject (except possibly math) from the 1970s. You’ll be amazed at how much of it is oversimplified or politicized or just plain wrong.
I do agree that AI has compounded the problem. There’s a limit to how much inaccuracy/incompetence a given system can tolerate. An organization that relies on AI for critical processes better have a way to monitor and intervene.
deleted by creator
That’s not really new, or unique to AI. The whole “field” of eugenics was created to give racism the mantle of scientific legitimacy. People will pick through a haystack of data to find a needle that supports (however tenuously) whatever they want to be true. LLMs are just a more convenient way to find or invent those needles.
The difference now is the machine can churn out way more data (e.g. pull requests) than a human can ever deal with.
Strictly from an environmental perspective, no. This tech generates massive emissions and consumes a large amount of fresh water at a time when both are at critical points. We are going full speed towards a planet inhospitable to human life and the other life we share the planet with.
I’ve always said I think it’s fine in filler content, it can allow small teams to quickly populate their world with background stuff that you never notice. Except when it’s not there.
But with great power comes great responsibility. And I don’t necessarily think most can handle that.
I used Copilot to build me a performance review based on actual data (which I reviewed and edited) and my boss said it was the best one he received from 30 people on the team.
I think its great for inspiration but your final product should never be raw AI/LLM output
It’s not ready for commercial use by the general public.
We see this ALL the time in America - a new disruptive technology emerges. We jump all over the benefits and the profits without regard to consequences or expense. We suffer.
New cheap pesticide? Hell yeah, spray that DDT everywhere, it’s super effective! (Insert other endless examples here, from microplastics to asbestos.)
AI (and information technology in general) has shown itself to be a danger to human beings. Its effects are not felt so much in the short term (5 or 10 years) but generationally. We’ve seen that information technology has already impacted quality of life. It’s used as spyware, as a tool to collect and correlate massive amounts of data. It’s used to shape our media experience, our purchasing, our social circles. There are great things, like online banking. But they seem more and more to be outweighed by a loss of humanity. So much misinformation that I question my own reality some days.
What we call “AI” is the evolution of these obtrusive, coercive practices. It exists purely to replace human thinking skills. I’ve spent a bit of time in r/teachers over the last 15 years, and the stories keep getting worse. The rise of AI means that detecting plagiarism/cheating is exponentially more difficult. But, more importantly, the kids don’t have any stress when it comes to cheating. They don’t have to find a friend or know the bare minimum. They can just…cheat. And they never learn to problem solve or overcome adversity.
None of this matters, though. Ready or not, here we are. A new kind of slavery for a new world order.
You raise many good points, but social media also has benefits and is not all just negative. Same with AI and all tech. We are better off overall with tech despite the downsides which we should be doing a better job of mitigating.
despite the downsides which we should be doing a better job of mitigating.
This is the part where I lose faith. We have failed to mitigate the downsides. In fact, we have encouraged the monetization of the downsides.
I read that they’re not terrible when used to power NPC’s in games.
Not my personal take, mind you, but thought it relevant.
I mean, they’re effectively very capable text and conversation generators, so powering NPCs is most definitely a strong suit for them.
Especially if you self-host some smaller models, you can effectively just do this on your own hardware for pretty cheap.
Having customizable dialogue per player that shifts the tone based on the player’s actions, level, gear, or interactions with that NPC or other NPCs it’s associated with is really cool.
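As a sketch of that idea: the cheap, reliable part is building a per-player prompt from game state and letting a small local model improvise within it. Every name and the state schema here are hypothetical, made up for illustration.

```python
def npc_system_prompt(npc_name: str, disposition: int, player_deeds: list[str]) -> str:
    """Build a system prompt so a small local model stays in character,
    shifting tone with the player's history (hypothetical state schema)."""
    # Tone shifts with the NPC's disposition toward this particular player.
    tone = "warm and helpful" if disposition > 0 else "curt and suspicious"
    deeds = "; ".join(player_deeds) or "nothing notable yet"
    return (
        f"You are {npc_name}, a shopkeeper NPC. Speak in a {tone} tone. "
        f"The player has done: {deeds}. Reply in at most two sentences."
    )

# The resulting prompt would be sent to a locally hosted model
# (e.g. via Ollama) together with the player's line of dialogue.
```

Because the model only fills in flavor around state the game already tracks, a hallucinated detail stays a cosmetic glitch rather than a gameplay bug.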
effectively just do this on your own hardware for pretty cheap.
Yeah I thought as much, but I’m no expert in the subject so I left the details for smarter people.