Comment on AGI achieved šŸ¤–

jsomae@lemmy.ml 1 month ago

The Rowan Atkinson thing isn’t a failure of understanding; it’s understanding while having been misled. I’ve literally done this exact thing myself: said something was a hoax (because in the past it was), only for it to turn out there was newer info I didn’t know about. I’m not convinced that LLMs as they exist today don’t prioritize sources. If trained naively, sure, but these days they can, for instance, integrate search results and update on new information. If the LLM can answer correctly only after checking a web search, and I can do the same only after checking a web search, that’s a score of 1-1.

because we know what ā€œunderstandingā€ is

Really? Who claims to know what understanding is? Do you think it’s possible there could ever be an AI (even one different from an LLM) that is capable of ā€œunderstandingā€? How can you tell?
