Comment on Coinbase CEO explains why he fired engineers who didn’t try AI immediately
MotoAsh@lemmy.world 1 week ago
“…leans too heavily on its training data…” No, it IS its training data. Full stop. It doesn’t know the documentation as a separate entity. It doesn’t reason whatsoever about where to get its data from. It just shits out the closest approximation of an “acceptable” answer from the training data. Period. It doesn’t think. It doesn’t reason. It doesn’t decide where to pull an answer from. It just shits it out verbatim.
I swear… so many people anthropomorphize “AI” it’s ridiculous. It does not think and it does not reason. Ever. Thinking it does is projecting human attributes onto it, which is anthropomorphizing it, which is lying to yourself about it.
okwhateverdude@lemmy.world 1 week ago
Ackually 🤓, Gemini Pro and other similar models are basically a loop over some metaprompts with tool usage, including search (rough sketch below). It will actually reference/cite documentation if given explicit instructions. You’re right, the anthropomorphization is troubling. That said, the simulacrum presented DOES follow directions, and its behavior (meaning the complete system of LLM + looped prompts) can be interpreted as having some kind of agency. We’re on the same side, but you’re sorely misinformed, friend.
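For the curious, here’s a minimal sketch of the kind of agent loop I mean. Everything in it is a hypothetical stand-in — `call_llm` and `web_search` are placeholder functions, not any vendor’s real API — but the shape (model call, check for a tool request, run the tool, feed the result back, repeat) is the point:

```python
# Minimal sketch of an LLM agent loop with one tool (search).
# call_llm() and web_search() are hypothetical stand-ins, not a real API.

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a model call. Returns either a final answer like
    {"content": "..."} or a tool request like {"tool": "search", "query": "..."}."""
    raise NotImplementedError  # wire up a real model here

def web_search(query: str) -> str:
    """Stand-in for a search tool that returns text snippets."""
    raise NotImplementedError  # wire up a real search backend here

def run_agent(user_question: str, max_steps: int = 5) -> str:
    # The "metaprompt": instructions telling the model it may request a tool
    # and must cite whatever documentation it retrieves.
    messages = [
        {"role": "system", "content": "You may request the 'search' tool. "
                                      "Cite any documentation you use."},
        {"role": "user", "content": user_question},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool" not in reply:  # no tool request: this is the final answer
            return reply["content"]
        # Run the requested tool and loop the result back into the context.
        result = web_search(reply["query"])
        messages.append({"role": "assistant", "content": str(reply)})
        messages.append({"role": "tool", "content": result})
    return "Gave up after max_steps tool calls."
```

Whether you call that “agency” or just a while-loop around a text predictor is basically the argument we’re having.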
MotoAsh@lemmy.world 1 week ago
I’m not misinformed. You’re still trying to call a groomed LLM something that reasons, when it literally is not doing that in any meaningful capacity.