Comment on AGI achieved 🤖
UnderpantsWeevil@lemmy.world 3 days ago
When people refer to agents, is this what they are supposed to be doing?
That's not how LLMs operate, no. They aggregate raw text and sift for popular answers to common queries.
ChatGPT is one step removed from posting your question to Quora.
Knock_Knock_Lemmy_In@lemmy.world 3 days ago
But an LLM as a node in a framework that can call a Python library should be able to count the number of Rs in "strawberry".
It doesn't scale to AGI, but it does reduce hallucinations.
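A minimal sketch of the pattern this comment describes: the LLM doesn't do the counting itself, it emits a structured tool call, and a deterministic Python function produces the answer. The function and dispatch names here are invented for illustration, not any particular framework's API.

```python
def count_letter(word: str, letter: str) -> int:
    """Deterministic tool: count occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

# Registry of tools the framework exposes to the model.
TOOLS = {"count_letter": count_letter}

# Pretend the LLM translated "how many Rs are in strawberry?" into this
# structured call instead of guessing the answer from training data:
tool_call = {"name": "count_letter", "args": {"word": "strawberry", "letter": "r"}}

result = TOOLS[tool_call["name"]](**tool_call["args"])
print(result)  # 3
```

The point is that the model's only job is producing the call; the arithmetic never touches the LLM, so it can't hallucinate the count.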
outhouseperilous@lemmy.dbzer0.com 3 days ago
You'd still be better off starting with a '50s language processor, then grafting on some API calls.
jsomae@lemmy.ml 3 days ago
In what context? LLMs are extremely good at bridging from natural language to API calls. I dare say it's one of the few use cases that have decisively landed on "yes, this is something LLMs are actually good at." Maybe not five nines of reliability, but language itself doesn't have five nines of reliability.
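The "bridging" claim above can be sketched concretely. Because the model's output is below five-nines reliability, real systems treat it as untrusted and validate it against a whitelist before executing anything. Everything here (the `get_weather` API, the schema) is a hypothetical stand-in, not a real service.

```python
import json

# Whitelist of callable APIs and exactly which parameters each accepts.
ALLOWED = {"get_weather": {"city"}}

def weather_api(city: str) -> str:
    # Stand-in for a real HTTP call to some weather service.
    return f"Forecast for {city}: sunny"

def dispatch(llm_output: str) -> str:
    """Validate an LLM-produced call, then execute it deterministically."""
    call = json.loads(llm_output)  # non-JSON output fails here outright
    name, args = call["name"], call["args"]
    if name not in ALLOWED or set(args) != ALLOWED[name]:
        raise ValueError("LLM produced a call outside the whitelist")
    return {"get_weather": weather_api}[name](**args)

# Suppose the LLM turned "what's the weather in Oslo?" into:
print(dispatch('{"name": "get_weather", "args": {"city": "Oslo"}}'))
```

The validation layer is what makes sub-perfect model reliability tolerable: a malformed or out-of-schema call raises instead of hitting the API.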
UnderpantsWeevil@lemmy.world 3 days ago
That isn't how these systems are configured. They're just not that sophisticated.
So much of what Sam Altman is doing is brute force, which is why he thinks he needs a $1T investment in new power to build his next iteration of the model.
DeepSeek gets at the edges of this through their partitioned model. But you're still asking a lot for a machine to intuit whether a query can be solved by some existing Python routine the system has yet to identify.
It has to scale to AGI, because a central premise of AGI is a system that can improve itself.
It just doesn't match the OpenAI development model, which is to just scrape and sort data, hoping the Internet already has the solution to every problem.
KeenFlame@feddit.nu 3 days ago
The only thing worse than the AI shills are the tech-bro mansplanations of how "AI works" from people utterly uninformed of the actual science. Please stop making educated guesses for others and typing them out in a teacher's voice. It's extremely aggravating.
jsomae@lemmy.ml 3 days ago
The claim is not that all LLMs are agents, but rather that agents (which incorporate an LLM as one of their key components) are more powerful than an LLM on its own.
We don't know how far away we are from recursive self-improvement. We might already be there, to be honest; how much of the job of an LLM researcher can already be automated? It's unclear if there's some ceiling to what a recursively-improved GPT4.x-w/e can do, though; maybe there's a key hypothesis it will never formulate on the quest for self-improvement.