Comment on AGI achieved
jsomae@lemmy.ml 2 days ago
The Rowan Atkinson thing isn't misunderstanding; it's understanding but having been misled. I've literally done this exact thing myself: said something was a hoax (because in the past it was), only to learn there was newer info I didn't know about. I'm not convinced LLMs as they exist today don't prioritize sources. If trained naively, sure, but these days they can, for instance, integrate search results and update on new information. If the LLM can answer correctly only after checking a web search, and I can do the same only after checking a web search, that's a score of 1-1.
because we know what "understanding" is
Really? Who claims to know what understanding is? Do you think it's possible there can ever be an AI (even if different from an LLM) which is capable of "understanding"? How can you tell?
The_Decryptor@aussie.zone 2 days ago
Well, it includes the text from the search results in the prompt; it's not actually updating any internal state (the network weights). A new "conversation" starts from scratch.
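To make that concrete, here's a minimal sketch of how search integration typically works; `search` and `generate` are hypothetical stand-ins (not any particular vendor's API). The point is that the retrieved text is just concatenated into the prompt, and the weights never change:

```python
def search(query: str) -> list[str]:
    # Pretend web-search API: returns snippets of retrieved text.
    return ["Rowan Atkinson is alive as of the latest reports."]

def generate(prompt: str) -> str:
    # Stand-in for a forward pass through a frozen model.
    # The weights are fixed; only the prompt varies between calls.
    return f"[model output conditioned on {len(prompt)} chars of prompt]"

def answer(question: str) -> str:
    snippets = search(question)
    # "Updating on new information" is literally string concatenation:
    # the snippets become part of the context window for this one call.
    prompt = ("Search results:\n" + "\n".join(snippets)
              + f"\n\nQuestion: {question}\nAnswer:")
    return generate(prompt)

# Each call rebuilds the prompt from scratch; nothing persists between
# conversations, because no state outlives the function call.
print(answer("Is Rowan Atkinson dead?"))
```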
KeenFlame@feddit.nu 2 days ago
That's not true for the commercial AIs. We don't know what they are doing.
jsomae@lemmy.ml 2 days ago
Yes, that's right: LLMs are stateless. They don't have internal state that persists between calls. When I say "update on new information" I really mean "when new information is available in its context window, its response takes that into account."
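A sketch of what that statelessness means in practice (again with a hypothetical `generate` stand-in): the apparent "memory" of a chat comes entirely from the caller replaying the transcript into the context window on every turn.

```python
def generate(prompt: str) -> str:
    # Stand-in for a frozen LLM; keeps no memory between calls.
    return f"[reply conditioned on {len(prompt)} chars]"

transcript: list[str] = []

def chat_turn(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    # The entire history is serialized into the context window each time;
    # drop these lines and the model has "forgotten" everything.
    reply = generate("\n".join(transcript) + "\nAssistant:")
    transcript.append(f"Assistant: {reply}")
    return reply

chat_turn("Is Rowan Atkinson dead?")
chat_turn("What did I just ask you?")  # answerable only via the replayed transcript
```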