• 2 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 29th, 2023





  • I don’t really subscribe to the idea that if civilization collapses, nobody alive will ever see technology again. You aren’t going to go back to ordering shit off Amazon from your smartphone or anything. However, the knowledge that things like refrigeration, radio transmission, internal combustion engines, and water treatment are possible is going to drive people to eventually figure out how to get them back by any means necessary.

    How quickly this happens is a question of whether the majority of people adopt a “technology is too dangerous/a sin against God” ideology or not.










  • If you get just the right GGUF model (read the description when you download it so you pick the right K-quantization variant, e.g. Q4_K_M) and actually use multithreading (llama.cpp supports multithreading, so in theory GPT4All should too), then it’s reasonably fast. I’ve achieved roughly half the speed of ChatGPT on an 8-core AMD FX with DDR3 RAM. Even 20B models can be usably fast.
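    For reference, a minimal llama.cpp invocation along those lines might look like this. This is a sketch, not gospel: the model path is a placeholder, and newer llama.cpp builds ship the binary as `llama-cli` instead of `main`. The `-t` flag pins the CPU thread count (match it to your physical cores), and `-n` caps the number of generated tokens:

    ```shell
    # Placeholder model file; pick a K-quant (e.g. Q4_K_M) that fits your RAM.
    ./main -m ./models/mistral-7b-instruct.Q4_K_M.gguf \
      -t 8 \
      -n 256 \
      -p "Explain GGUF quantization in one paragraph."
    ```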


  • This probably isn’t very helpful, but the best way I’ve found to make an AI write an entire book is still a lot of work. You have to make it write the book in sections, adjust the prompts based on what’s happening in the story, and spend a lot of time copy-pasting the good sentences into a better-quality section, then use those blocks of text to build chapters. You’re basically plagiarizing a document from AI-written documents rather than making the AI shit it out in one continuous stream.

    If you could come up with a way to make an AI produce a document using only complete sentences from other AI-generated documents, maybe you could achieve a higher level of automation and still get similar quality. Otherwise it’s just as difficult as writing the book yourself.
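    The “only complete sentences from other AI-generated documents” step could be sketched roughly like this. Everything here is hypothetical scaffolding: the drafts are stand-ins for AI output, and the keyword filter is a toy stand-in for whatever quality/relevance check you’d actually use:

    ```python
    import re

    def split_sentences(text):
        """Naive sentence splitter; fine for a sketch, not for production."""
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def assemble_section(drafts, keyword):
        """Keep only complete sentences (ending in punctuation) that mention
        the keyword, pulled from several AI-generated drafts of one section."""
        keep = []
        for draft in drafts:
            for sentence in split_sentences(draft):
                if keyword.lower() in sentence.lower() and sentence[-1] in ".!?":
                    keep.append(sentence)
        return " ".join(keep)

    # Stand-ins for two AI drafts of the same section; note the second
    # sentence of the first draft is incomplete and gets dropped.
    drafts = [
        "The castle stood on a hill. Rain fell all night",
        "Rain hammered the castle roof. The king slept.",
    ]
    print(assemble_section(drafts, "castle"))
    # → The castle stood on a hill. Rain hammered the castle roof.
    ```

    Swapping the keyword filter for a real scoring step (even another model ranking candidate sentences) is where the actual automation win would come from.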

    As for software, use llama.cpp. It runs on the CPU and can utilize multiple cores. You probably aren’t getting an Nvidia GPU running on any ARM board unless you have a really long white neckbeard and a degree in computer engineering. Download GGUF-compatible models for llama.cpp from Hugging Face.
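    Grabbing a model from Hugging Face can be done with the `huggingface-cli download` command; the repo and filename below are just examples, so substitute whatever GGUF quant actually fits your RAM:

    ```shell
    # Example repo/file only: pick the quant you actually want.
    huggingface-cli download \
      TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
      mistral-7b-instruct-v0.2.Q4_K_M.gguf \
      --local-dir ./models
    ```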






  • Any self-hosted AI thing. I recommend llama.cpp because it’s the easiest to set up. GPT4All is a UI that runs on top of llama.cpp and would also be an excellent choice. All you have to do is download any GGUF-compatible model from Hugging Face and figure out what command-line options you need for multithreaded operation and for loading your custom model. If you want one that’s actually loyal and less likely to push some agenda, make sure it says it’s uncensored. Even if you’re not doing porn or AI-girlfriend stuff, you still want it uncensored or else it will be as disloyal as ChatGPT. Note that uncensored and NSFW are two different things, so if you actually want an AI girlfriend or a shitpost generator, it needs to be both uncensored and NSFW.

    Same story with image-generation AI. Use Easy Diffusion for self-hosting and download whatever models you want, NSFW or otherwise, from Civitai.

    Don’t bother with online ai services, they are trash.