I know that words are tokenized in the vanilla transformer. But do GPT and similar LLMs still do that as well? I assumed they also tokenize on character/symbol level, possibly mixed up with additional abstraction down the chain.
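For what it's worth, GPT-style models tokenize neither on whole words nor on raw characters: they use learned subword units (byte-pair encoding). A minimal pure-Python sketch contrasting the three granularities — the merge vocabulary here is invented for illustration, real BPE vocabularies are learned from data:

```python
text = "tokenizers"

# Word-level (classic NLP pipelines): one token per whitespace-separated word.
word_tokens = text.split()

# Character-level: one token per character.
char_tokens = list(text)

# Subword-level (roughly what GPT-style BPE produces): frequent fragments
# become single tokens. These merges are made up for this example.
merges = ["token", "izers"]

def greedy_subword_split(s: str, vocab: list[str]) -> list[str]:
    """Greedily match the longest known fragment; fall back to single chars."""
    out = []
    i = 0
    while i < len(s):
        for piece in sorted(vocab, key=len, reverse=True):
            if s.startswith(piece, i):
                out.append(piece)
                i += len(piece)
                break
        else:
            out.append(s[i])
            i += 1
    return out

subword_tokens = greedy_subword_split(text, merges)
print(word_tokens)     # ['tokenizers']
print(char_tokens)     # ['t', 'o', 'k', 'e', 'n', 'i', 'z', 'e', 'r', 's']
print(subword_tokens)  # ['token', 'izers']
```

Real tokenizers additionally operate on bytes rather than characters, so any input can be represented even without a matching merge.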
“Let me know if you’d like help counting letters in any other fun words!”
Oh well, these newish calls for engagement sure reach ridiculous extremes sometimes.
Grabs machete
Thanks for showing me where to find it. /j
General* advice: find a hobby. And surround yourself with people you like.
(This is not unfounded. Hobbies are tied to increased happiness and life satisfaction. Healthy friendships are crucial for well-being and longevity.)
*There are of course cases where this general piece of advice is not applicable. But I direct this towards the majority of people.
I’ve had similar experiences in movie theatres. And streaming services continuously disappoint on that front too.
Cogito ergo sum.
Accepting a common framework of provable — i.e., measurable, repeatable, falsifiable — phenomena as a concept of “reality” seems to be a pragmatic approach, given my sensory inputs and the processing results of my brain. This, then, is “knowledge.”
But ultimately, this is subordinated to the possibility of an illusion – be it like in The Matrix, or as a Boltzmann brain, or whatever. Unless there is evidence for that, it appears most practical to me to go with the above, as I don’t gain anything from racking my brain about such possible illusions of reality (even though it’s fun thinking about it).
I’m nitpicky about the word “believe”. So let me rephrase: I do not believe. Either I know, or I don’t know. Everything else is, at best, more or less informed speculation, assumption, or hypothesis.
Ask for a community meeting, so you can see that those people are real.
Despite that, I don’t see any effective countermeasure in the long run.
Currently, sure, with a keen eye you might be able to spot characteristics of one or the other LLM. But that’d be a lucky find.
Yes to all of that except your last paragraph.
I suppose you’re referring to the article I’ve linked. As I see it: if an increasing number of applications worldwide are running on Python, then energy and time consumption are important aspects. Not only cost-wise, but especially since we’re grilling our planet. Therefore, comparing it with more efficient languages is indeed meaningful.
Python sucks.
Not only is it extremely inefficient, it is also a pain in the ass to work with if you have to use APIs that heavily rely on dynamic type wrapping and don’t provide stubs. Static analysis via Pylance is not possible then and you’re basically poking around in the dark, which makes getting to know such an API enormously difficult. Even worse if there isn’t even halfway decent documentation.
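To illustrate the point: a minimal sketch of the problem, with invented names. The dynamic client resolves every method at runtime via `__getattr__` and returns `Any`, so a type checker like Pylance can’t verify calls or offer completion; a thin typed facade (which is essentially what a stub file would declare) restores static analysis:

```python
from typing import Any

class DynamicClient:
    """Hypothetical dynamically-wrapped API: every attribute access
    returns Any, so static analysis sees nothing useful."""
    def __getattr__(self, name: str) -> Any:
        def call(*args: Any, **kwargs: Any) -> Any:
            # Real libraries would dispatch a request here; we just echo.
            return {"endpoint": name, "args": args, "kwargs": kwargs}
        return call

class TypedClient:
    """Typed facade: the signatures a stub (.pyi) file would provide,
    giving the type checker concrete parameter and return types."""
    def __init__(self) -> None:
        self._impl = DynamicClient()

    def get_user(self, user_id: int) -> dict:
        return self._impl.get_user(user_id)

client = TypedClient()
result = client.get_user(42)
print(result)  # {'endpoint': 'get_user', 'args': (42,), 'kwargs': {}}
```

With the facade in place, `client.get_usr(42)` or `client.get_user("42")` becomes a static error instead of a silent runtime surprise.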
Nice. Good luck! What’s the project?
Cool! What’s the project?
Despite being suicidally depressed and having a fucked-up sleep schedule, I’m pretty much on track with my thesis. I’ve carefully read about 60 papers in the last three weeks and achieved important milestones on time.
It’s never too late to start doing something you love. Even if it means entirely switching careers. You can be proud of yourself for pursuing this, even if it means leaving a comfort zone. :)
I meant “the algorithm” that the parent comment mentions. Designing an algorithm driven by click rate in order to gain more ad revenue is motivated by capitalist forces.
You’ve misspelled capitalism.
My tired brain read “99 cent” at first and I thought it was an article by The Onion.
You can make up words at any age.
That’s called victim blaming.
But yeah. I really hope people stop using Google products. Google is evil.