You’re not wrong, but I don’t think you’re 100% correct either. The human mind synthesizes reasoning with a biological neural network: neurons making connections that amount to a profoundly complex statistical model. LLMs do essentially the same thing, and they do it poorly in comparison. They don’t have the natural optimizations we have, so they kinda suck at it right now, but dismissing their current capabilities entirely is probably a mistake.
I’m not an apologist, to be clear. There is a ton of ethical and moral baggage tied up with the way they were made and how they’re used, and it needs to be addressed. I also think we’re only a few clever optimizations away from a threat.
jarfil@beehaw.org 1 week ago
I doubt it’s been fed text about “bergro”, “parava”, and “rortx”; this looks like basic reasoning to me:
[screenshot of the chatbot answering the bergro/parava/rortx syllogism]
Dark_Arc@social.packetloss.gg 1 week ago
Yeah, it looks like basic reasoning, but it isn’t. These things are based on pattern recognition. “Assume all x are y, all z are y, are all z x?” is a known formulation … I’ve seen it a fair number of times in my life. Swap in nonsense words and the shape of the question is still the same one it’s seen a thousand times in training data; see the sketch below.
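Here’s a rough sketch of what I mean (my own toy illustration, not anything the model actually does internally): strip out the made-up nouns and you’re left with a template that’s all over the training data. The word lists and template string here are just assumptions for the example.

```python
import re

# The template the nonsense prompt collapses into once the content words are gone.
KNOWN_TEMPLATE = "all X are Y. all Z are Y. are all Z X?"

def to_template(prompt: str, nonsense_words: list[str]) -> str:
    """Replace each nonsense word (and its plural) with a placeholder variable."""
    placeholders = ["X", "Y", "Z"]
    out = prompt.lower()
    for word, var in zip(nonsense_words, placeholders):
        out = re.sub(rf"\b{word}s?\b", var, out)
    return out

prompt = "All bergros are paravas. All rortxs are paravas. Are all rortxs bergros?"
print(to_template(prompt, ["bergro", "parava", "rortx"]) == KNOWN_TEMPLATE)  # True
```

The point isn’t that the model literally runs a regex, just that matching the well-worn shape of the question gets you the right-looking answer without anything you’d call reasoning.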
Recent developments have added this whole “make it prompt itself about the question” phase to try to make things more accurate … but that also only works sometimes.
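For anyone who hasn’t seen it, the “prompt itself” trick looks roughly like this. This is a hypothetical sketch, assuming some `llm(prompt)` helper that returns a text completion; it’s not any real library’s API, just the shape of the loop.

```python
def llm(prompt: str) -> str:
    """Stand-in for a call to some chat model; replace with a real client."""
    raise NotImplementedError

def answer_with_self_check(question: str) -> str:
    # Pass 1: have the model spell out its own reasoning first.
    reasoning = llm(f"Think step by step about this question:\n{question}")
    # Pass 2: feed that reasoning back in and ask for a checked final answer.
    return llm(
        f"Question: {question}\n"
        f"Your earlier reasoning: {reasoning}\n"
        "Check that reasoning for mistakes, then give a final answer."
    )
```

It helps on some problems, but it’s still the same pattern matcher grading its own homework, which is why it only works sometimes.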
AI in LLM form is just a sick joke. It’s like watching a magic trick where half the audience expects the magician to ACTUALLY levitate next year because … “they’re almost there!!”
Maybe I’ll be proven wrong, but I don’t see it…