Comment on AGI achieved
jsomae@lemmy.ml 1 month ago
The Rowan Atkinson thing isn't misunderstanding; it's understanding, but having been misled. I've literally done this exact thing myself: said something was a hoax (because in the past it was), but then it turned out there was newer info I didn't know about. I'm not convinced LLMs as they exist today don't prioritize sources. If trained naively, sure, but these days they can, for instance, integrate search results, and can update on new information. If the LLM can answer correctly only after checking a web search, and I can do the same only after checking a web search, that's a score of 1-1.
because we know what "understanding" is
Really? Who claims to know what understanding is? Do you think it's possible there can ever be an AI (even if different from an LLM) which is capable of "understanding"? How can you tell?
The_Decryptor@aussie.zone 1 month ago
Well, it includes the text from the search results in the prompt; it's not actually updating any internal state (the network weights), and a new "conversation" starts from scratch.
jsomae@lemmy.ml 1 month ago
Yes, that's right: LLMs are stateless; they don't carry internal state between conversations. When I say "update on new information" I really mean "when new information is available in its context window, its response takes that into account."
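A toy sketch of the distinction being made here (the function names and the trivial stand-in "model" are all hypothetical, purely for illustration): retrieved search results are just text concatenated into the prompt, while the model itself stays frozen, so each fresh conversation starts from scratch unless the context window supplies the new information.

```python
def build_prompt(search_results, question):
    """Prepend retrieved text to the question; this is the entire 'update'."""
    context = "\n".join(f"- {r}" for r in search_results)
    return f"Context:\n{context}\n\nQuestion: {question}"

def toy_model(prompt):
    """Stateless stand-in for an LLM: it can only react to the prompt text.

    No weights change between calls; nothing persists across conversations.
    """
    if "newer info" in prompt:
        return "updated answer"
    return "stale answer"  # without retrieved context, it falls back on old info

# Fresh 'conversation', no search results in the context window:
print(toy_model(build_prompt([], "Was it a hoax?")))            # stale answer
# Same frozen model, but the search result now sits in the prompt:
print(toy_model(build_prompt(["newer info"], "Was it a hoax?")))  # updated answer
```

The point the comment is making maps onto the sketch directly: `toy_model` never changes, so "updating on new information" happens only through `build_prompt`.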
KeenFlame@feddit.nu 1 month ago
That's not true for the commercial AIs. We don't know what they are doing.