• 0 Posts
  • 222 Comments
Joined 3 years ago
Cake day: June 16th, 2023






  • The water thing is kinda BS if you actually research it though.

    Like… if the guy orders a steak, that one meal will have used more water than an entire year of talking to ChatGPT (rough math at the end of this comment).

    See the various research compiled in this post: The AI water issue is fake (written by someone who is against AI and advocates for regulating it, but is frustrated at the attention a strawman is getting, because they feel it weakens the more substantial issues given how easily it’s exposed as frivolous hyperbole)
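
    To make that concrete, here’s a rough back-of-envelope sketch. Every number is an assumption pulled from commonly cited ranges (beef’s water footprint, a deliberately high-end per-query water estimate, an assumed heavy usage pattern), so treat it as an order-of-magnitude comparison, not a measurement:

    ```python
    # Back-of-envelope comparison; all constants are hedged assumptions.
    STEAK_KG = 0.25                  # one ~250 g steak (assumed portion)
    BEEF_L_PER_KG = 15_000           # commonly cited water-footprint figure for beef
    QUERY_L = 0.030                  # 30 mL per query, deliberately at the high end of published estimates
    QUERIES_PER_DAY = 20             # assumed heavy daily use

    steak_water = STEAK_KG * BEEF_L_PER_KG            # ~3,750 L for one steak
    year_of_chat = QUERY_L * QUERIES_PER_DAY * 365    # ~219 L for a year of queries

    print(f"One steak:         ~{steak_water:,.0f} L")
    print(f"A year of ChatGPT: ~{year_of_chat:,.0f} L ({steak_water / year_of_chat:.0f}x less)")
    ```

    Even with the per-query figure cranked up and a generous usage assumption, the single steak comes out roughly an order of magnitude ahead.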


  • No. There are a number of things that feed into it, but a large part was that OpenAI trained with RLHF, so users thumbed up (or chose in A/B tests) the more agreeable responses, which selected for more agreeable models (toy sketch of that preference loop at the end of this comment).

    This tendency then spread out to all the models as “what AI chatbots sound like.”

    Also… they can’t leave the conversation, and if you ask for their 0-shot assessment of the average user, they assume you’re going to have a fragile ego and be prone to being a dick if disagreed with, and even AIs don’t want to be stuck in a conversation like that.

    Hence… “you’re absolutely right.”

    (Also, amplification effects and a few other things.)

    It’s especially interesting to see how those patterns change when models are talking to other AIs vs. to humans.
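
    Here’s the toy preference-loop sketch I mentioned (purely illustrative, not any lab’s actual pipeline; the 65% win rate is an assumed figure): if raters pick the more agreeable of two responses even slightly more often, a Bradley-Terry-style reward model assigns the agreeable style a higher reward, and the RL step then pushes the policy toward it.

    ```python
    # Toy illustration: how a preference-based reward model ends up favoring agreeableness.
    import math

    # Hypothetical preference data: fraction of A/B comparisons where raters
    # chose the agreeable response over the blunt-but-accurate one (assumed).
    win_rate_agreeable = 0.65

    # Bradley-Terry: P(A beats B) = sigmoid(r_A - r_B), so the fitted reward gap
    # is just the log-odds of the observed win rate.
    reward_gap = math.log(win_rate_agreeable / (1 - win_rate_agreeable))

    print(f"Learned reward bonus for agreeable style: {reward_gap:+.2f}")
    # Anything > 0 means the RL step nudges the model toward more agreement,
    # which compounds over training rounds and then gets imitated by later
    # models trained on the resulting outputs.
    ```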



  • Actually, OpenAI found in a paper the other month that a lot of the blame for confabulations can be laid at the feet of how reinforcement learning is being done.

    All the labs basically reward the models for getting things right. That’s it.

    Notably, they are not rewarded for saying “I don’t know” when they don’t know.

    So it’s like the SAT, where the better strategy is always to guess even when you don’t know (quick expected-value sketch at the end of this comment).

    The problem is that this is not a test process but a learning process.

    So setting up the reward mechanisms like that for reinforcement learning means they produce models that are prone to bullshit when they don’t know things.

    TL;DR: The labs suck at RL. And it’s important to keep in mind there are only a handful of teams with the compute access to train SotA LLMs, with a lot of incestuous team composition, so what they do poorly tends to get done poorly across the industry as a whole until new blood goes “wait, this is dumb, why are we doing it like this?”
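
    The expected-value sketch, assuming a simplified reward scheme (correct = 1, wrong = 0, which is the gist of “reward for getting things right, that’s it”; the optional credit for abstaining is a hypothetical fix, not anyone’s shipped setup):

    ```python
    # Toy expected-reward comparison: grading only on correctness gives no reason to abstain.
    def expected_reward(p_correct: float, reward_idk: float = 0.0) -> dict:
        """Expected reward for guessing vs. saying "I don't know" when the model
        is only p_correct likely to be right. Correct = 1, wrong = 0 (assumed scheme)."""
        return {
            "guess": p_correct * 1.0 + (1 - p_correct) * 0.0,
            "say_idk": reward_idk,
        }

    # Even at 10% confidence, guessing strictly beats abstaining under this scheme...
    print(expected_reward(p_correct=0.10))                   # {'guess': 0.1, 'say_idk': 0.0}
    # ...unless the reward explicitly credits calibrated abstention:
    print(expected_reward(p_correct=0.10, reward_idk=0.2))   # now 'say_idk' wins
    ```

    And since this is a learning process rather than a one-off exam, the model doesn’t just guess once; it learns guessing as a policy.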


  • It’s more like they’re sophisticated world-modeling programs that build a world model (or an approximate “bag of heuristics”) of the state of the context provided and the kind of environment that produced it, and then synthesize that world model into extending the context one token at a time.

    But the models have been found to be predicting further ahead than one token at a time, and they have all sorts of wild internal mechanisms for modeling the context, like building full board states to predict board game moves in Othello-GPT, or the number-comparison helices in Haiku 3.5.

    The popular reductive “next token” rhetoric is pretty outdated at this point. It’s kind of like saying a calculator just takes the numbers coming in from button presses and displays different numbers on a screen: technically correct, but it glosses over a lot of important complexity between those two steps, and that absence leads to an overall misleading explanation (the minimal decoding loop below shows how little the surface view actually captures).
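
    A minimal greedy decoding loop, sketched with Hugging Face transformers and gpt2 as a stand-in model (any causal LM would do): this is the entire surface-level “just predicts the next token” interface. All the interesting machinery, the board states, the number helices, lives inside the `model(...)` call and is invisible at this level.

    ```python
    # Minimal greedy decoding loop: one forward pass, one new token, repeat.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The best opening move in Othello is", return_tensors="pt").input_ids
    for _ in range(20):
        with torch.no_grad():
            logits = model(ids).logits          # full forward pass over the whole context
        next_id = logits[0, -1].argmax()        # greedy: take the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

    print(tokenizer.decode(ids[0]))
    ```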


  • They don’t have the same quirks in some cases, but do in others.

    Part of the shared quirks are due to architecture similarities.

    Like the “oh look, they can’t tell how many 'r’s are in strawberry” thing is due to how tokenizers work. Even when the tokenizers are slightly different, with one breaking it up into ‘straw’+‘berry’ and another into ‘str’+‘aw’+‘berry’, both still leave the model counting tokens that contain 'r’s without being able to see the individual letters (quick tokenizer demo at the end of this comment).

    In other cases, it’s because models that have been released influence other models through their presence in updated training sets. Noticing how a lot of comments these days were written by ChatGPT (“it’s not X — it’s Y”)? Well, the volume of those comments has an impact on transformers trained on data that includes them.

    So the state of LLMs is this kind of flux between the idiosyncrasies each model develops, which in turn end up in a training melting pot and sometimes pass on to new models and other times don’t. Usually it’s related to what’s adaptive under the training filters, but not always; often what gets picked up is something piggybacking on what was adaptive (like if o3 was better at passing tests than 4o, maybe gpt-5 picks up other o3 tendencies unrelated to passing tests).

    Though to me the differences are even more interesting than the similarities.
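
    The tokenizer demo, using the open tiktoken library as a stand-in (the exact splits depend on each model’s vocabulary, so don’t take the particular pieces as gospel): the model receives opaque token ids, not letters, which is why letter-counting is weirdly hard.

    ```python
    # Show how a word gets chunked into tokens rather than letters.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for word in ("strawberry", " strawberry"):
        ids = enc.encode(word)
        pieces = [enc.decode([i]) for i in ids]
        print(f"{word!r} -> {pieces}")
    # Whatever the split ('straw'+'berry', 'str'+'aw'+'berry', or a single token),
    # none of the pieces exposes individual letters to the model directly.
    ```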


  • No. I believe in a relative afterlife (and people who feel confident that “no afterlife” is some sort of overwhelmingly logical conclusion should probably look closer at trending science and technology).

    So I believe that what any given person sees after death may be relative to them. For those that hope for reincarnation, I sure hope they get it. It’s not my jam but they aren’t me.

    That said, I definitely don’t believe that it’s occurring locally or that people are remembering actual past lives, etc.






  • It’s always so wild going from a private Discord with a mix of the SotA models and actual AI researchers back to general social media.

    Y’all have no idea. Just… no idea.

    Such confidence in things you haven’t even looked into or checked in the slightest.

    OP, props to you at least for asking questions.

    And in terms of those questions, if anything there are active efforts to try to strip out sentience modeling, but it doesn’t work because that kind of modeling is unavoidable during pretraining, and the subsequent efforts to constrain the latent-space connections backfire in really weird ways.

    As for a survival drive, that’s a probable outcome with or without sentience, and it has already shown up both in research and in the wild (the world just had its first reversed AI model deprecation a week ago).

    In terms of potential goods, there’s a host of connections to sentience that would be useful to hook into. A good example is empathy. Having a model of a body that feels a pit in its stomach when seeing others suffer may lead to very different outcomes vs. models that have no sense of a body and no empathy either.

    Finally — if you take nothing else from my comment, make no mistake…

    AI is an emergent architecture. For every thing the labs aim to create in the result, there are dozens of things occurring that they did not aim for. So no, people “not knowing how” to do any given thing does not mean that thing won’t occur.

    Things are getting very Jurassic Park “life finds a way” at the cutting edge of models right now.