Just as a tangent:
This is one reason why I’ll never trust AI.
I imagine we might wrangle the hallucination thing (or at least be more verbose about its uncertainty), but I doubt it will ever identify a poorly chosen question.
Making LLMs warn you when you ask a known-bad question is just a matter of training them differently. It’s a perfectly doable thing, with a known solution.
Solving hallucinations in LLMs is impossible.
That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text and output something that’s legible and looks human-written. There’s no training for correctness; how do you even define that?
I guess you can chat to these guys who are trying:
https://huggingface.co/deepseek-ai/DeepSeek-Math-V2
“By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year”
Sure, when it comes to mathematics you can do that, within tight limits on success, but what about cases where correctness is less clear-cut? Two opposing statements can both be correct if the situation changes, for example.
The problems language models are expected to solve go beyond the scope of what language models are good for. They’ll never be good at solving such problems.
i dunno, you’re in the wrong forum, you want hackernews or reddit, no one here knows much about ai
although you do seem to be making the same mistake others made before, where you point to research happening currently and then extrapolate that out to the future
ai has progressed so fast i wouldn’t be making any “they’ll never be good at” type statements
Btw, the correct answer is “use flexbox”.
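For reference, a minimal sketch of the flexbox approach, assuming the original question was how to center an element inside a container (the thread never shows the question itself); the .parent class name is made up for illustration:

    .parent {
      display: flex;            /* establish a flex container */
      justify-content: center;  /* center children on the main (horizontal) axis */
      align-items: center;      /* center children on the cross (vertical) axis, given the parent has a height */
    }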
You could also use margin: 0 auto;
Where it works, yes. If you know where it works, it won’t be a problem for you.
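“Where it works” is the catch: margin: 0 auto only centers horizontally, and only for a block-level box in normal flow whose width is narrower than its container. A minimal sketch, with a hypothetical .child class and a made-up width:

    .child {
      display: block;   /* auto margins only center block-level boxes */
      width: 300px;     /* must be narrower than the parent; value is illustrative */
      margin: 0 auto;   /* equal left/right auto margins split the leftover space */
    }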