Why OpenAI’s solution to AI hallucinations would kill ChatGPT tomorrow
Submitted 3 weeks ago by sabreW4K3@lazysoci.al to technology@beehaw.org
Comments
PunkRockSportsFan@fanaticus.social 3 weeks ago
Remember a year ago, when LLMs started getting good, then they had to be reprogrammed to only answer the way the fascists want? They intentionally retarded AI to protect fascist interests, because fascism is anti-intellectualism.
C4d@beehaw.org 3 weeks ago
I don’t follow.
TehPers@beehaw.org 3 weeks ago
They lost me on LLMs getting good.
Una@europe.pub 3 weeks ago
Don’t trust AI, ask your cat instead, cats know everything :3
Powderhorn@beehaw.org 3 weeks ago
Sure, but cats also refuse to answer way more than 30% of the time!
Quexotic@beehaw.org 3 weeks ago
This is… Well, not entirely convincing.
So, say the computational cost triples. One intelligent mitigation would be purpose-built hardware to optimize these processes. That’s a big lift, but the ROI would be calculable and significant enough that there’s no way they won’t pursue it. I think it’s a realistically conquerable problem.
And so what if it doesn’t know? Existing solutions can already scour the Internet on command, and that lookup could be triggered automatically whenever uncertainty runs sufficiently high.
Combine that Internet access with a certainty calculation, assume the hardware optimization arrives, and these problems, while truly significant, seem solvable.
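Roughly what I’m picturing, as a toy sketch in Python (every function, number, and threshold here is invented for illustration, not any real API):
```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    confidence: float  # model's self-assessed certainty, 0..1

# Stub standing in for a real model call; hypothetical, just to make
# the control flow concrete.
def model_generate(question: str, context: str = "") -> Reply:
    return Reply(text="(draft answer)", confidence=0.9 if context else 0.4)

# Stub standing in for a real search tool; also hypothetical.
def search_web(question: str) -> str:
    return "(retrieved snippets)"

CONFIDENCE_THRESHOLD = 0.7  # arbitrary cutoff for illustration

def answer(question: str) -> str:
    draft = model_generate(question)
    if draft.confidence >= CONFIDENCE_THRESHOLD:
        return draft.text  # confident enough: answer directly
    # Too uncertain: automatically scour the Internet and retry with sources.
    grounded = model_generate(question, context=search_web(question))
    if grounded.confidence >= CONFIDENCE_THRESHOLD:
        return grounded.text
    return "I don't know."  # still unsure: abstain rather than guess

print(answer("When was the treaty signed?"))  # falls back to search, then answers
```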
That said, the solution will most likely make our world uninhabitable, so that’s neat.
My concern on top of this is that they won’t run out of funding even if private investment dries up. The states (US, China) won’t stop funding until they reach total dominance.
We’re so screwed, guys.
Dave@lemmy.nz 3 weeks ago
I think we would just be more careful with how we used the technology, e.g. don’t autocomplete code unless the completion clears a reasonable certainty threshold (rough sketch below).
I would argue that it’s more useful to have a system that says it doesn’t know half the time than a system that’s confidently wrong half the time.
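Something like this rough sketch (the cutoff and the logprob values are made up; in a real editor they’d come from the completion API):
```python
import math

# Hypothetical sketch: only surface a code completion when the model's
# average per-token probability clears a threshold.
MIN_MEAN_PROB = 0.8  # arbitrary cutoff for illustration

def should_show_completion(token_logprobs: list[float]) -> bool:
    # Geometric mean of per-token probabilities = exp(mean of logprobs).
    mean_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    return mean_prob >= MIN_MEAN_PROB

confident = [-0.05, -0.02, -0.10]  # high-probability tokens
shaky = [-0.9, -1.5, -0.3]         # low-probability tokens

print(should_show_completion(confident))  # True: show the suggestion
print(should_show_completion(shaky))      # False: stay silent instead of guessing
```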
30p87@feddit.org 3 weeks ago
Obviously. But more useful ≠ more money. So the fascocapitalists will of course not implement that.
Rhaedas@fedia.io 3 weeks ago
Depends on the product. From an original AI research point of view, this is what you want: a model that can realize it is missing information and decline to give a result. But once profit became involved, marketing required fully confident output to get everyone to buy in. So we get what we get, and not something more reliable.
WalnutLum@lemmy.ml 3 weeks ago
It’s not just that; they also scored responses based on user feedback, and users tend to rate more confident responses higher, even when they’re wrong.