Comments on “Leak confirms OpenAI’s ChatGPT will integrate MCP”
Dark_Arc@social.packetloss.gg 1 week ago
I really think we just need to move on from this AI craze.
We don’t have a general intelligence. We may never have a general intelligence.
Keep using AI for what it’s good for: statistics-based decision making. Stop trying to use AI to design solutions; it’s not built for that, because that requires reasoning, which is something AI cannot do no matter how much snake oil society has been sold.
jarfil@beehaw.org 1 week ago
“AI” has been a buzzword basically forever; it’s a moving target of “simulates some human behavior”. Every time something does that, we call it an “algorithm” and move the goalposts for “true AI”.
I don’t know if we’ll ever get AGI, or even want to, or whether we’d be able to tell if we got a post-AGI. Right now, “AI” stands for something between LLMs and Agents with an LLM core. Agents benefit from MCP, so that’s good for AI Agents.
We can offload some basic reasoning tasks to an LLM Agent, and MCP connectors allow them to interact with other services, even other agents.
A lot of knowledge is locked away in the deep web and in corporate knowledge bases. The way to access it safely will be through agents deciding which knowledge to reveal. MCP is aiming to become the new web protocol for “AI”s, no more, no less.
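To make “agents deciding which knowledge to reveal” concrete, here’s a minimal sketch of such a connector, following the quickstart shape of the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, tool name, and stubbed lookup are invented for illustration:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical MCP server fronting a corporate knowledge base.
const server = new McpServer({ name: "kb-gateway", version: "0.1.0" });

// Expose one tool: the agent decides when to call it, the server
// decides what it is willing to reveal.
server.tool(
  "search_kb",
  { query: z.string() }, // input schema, validated by the SDK
  async ({ query }) => ({
    // A real implementation would query the knowledge base and
    // filter results by access policy before returning them.
    content: [{ type: "text", text: `Stub: no results for "${query}".` }],
  })
);

// The host application spawns this process and speaks JSON-RPC
// with it over stdin/stdout. (Top-level await: run as an ES module.)
await server.connect(new StdioServerTransport());
```

stdio is the simplest transport; the spec also defines an HTTP-based transport for servers that live on the network rather than on the user’s machine.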
Dark_Arc@social.packetloss.gg 1 week ago
No, you can’t. It cannot reason. It’s just been fed so much existing text that it appears like it can in some cases. That’s an extremely dangerous foundation on which to build anything.
jarfil@beehaw.org 1 week ago
I doubt it’s been fed text about “bergro”, “parava”, and “rortx”; this looks like basic reasoning to me:
[Image: chat screenshot of an LLM working through a syllogism built from those nonsense words]
Dark_Arc@social.packetloss.gg 1 week ago
Yeah, it looks like basic reasoning, but it isn’t. These things are based on pattern recognition. “Assume all x are y, all z are y; are all z x?” is a well-known formulation … I’ve seen it a fair number of times in my life.
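For what it’s worth, that formulation is the textbook “undistributed middle” syllogism; written out, the inference it probes is invalid:

```latex
% The probed inference; "all Z are X" does not follow from the premises.
% Counterexample: X = cats, Z = dogs, Y = mammals.
\[
\frac{\forall a\,\bigl(X(a)\rightarrow Y(a)\bigr)
      \qquad
      \forall a\,\bigl(Z(a)\rightarrow Y(a)\bigr)}
     {\forall a\,\bigl(Z(a)\rightarrow X(a)\bigr)}
\]
```

The expected answer, “not necessarily”, appears in countless introductory logic texts, which is exactly why it’s plausible the model has seen the pattern before.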
Recent developments have added this whole “make it prompt itself about the question” phase to try to make things more accurate … but that, too, only works sometimes.
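That phase is easy to approximate from the outside. Below is a naive two-pass sketch against the OpenAI Node SDK: draft an answer, then have the model prompt itself about its own draft. The model name and prompt wording are placeholders, and production “reasoning” models train this behavior in rather than bolting it on externally like this:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function ask(prompt: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content ?? "";
}

// Pass 1: draft an answer. Pass 2: make the model interrogate its
// own draft before committing to a final answer.
async function answerWithSelfCheck(question: string): Promise<string> {
  const draft = await ask(question);
  return ask(
    `Question: ${question}\n` +
    `Draft answer: ${draft}\n` +
    `Check the draft step by step for mistakes, then give a corrected final answer.`
  );
}
```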
AI in LLM form is just a sick joke. It’s like watching a magic trick where half the audience expects the magician to ACTUALLY levitate next year because … “they’re almost there!!”
Maybe I’ll be proven wrong, but I don’t see it…
Quexotic@beehaw.org 1 week ago
You’re not wrong, but I don’t think you’re 100% correct either. The human mind is able to synthesize reason by using a neural network to make connections, building a profoundly complex statistical model out of neurons. LLMs do essentially the same thing, and they do it poorly in comparison. They don’t have the natural optimizations we have, so they kinda suck at it right now, but dismissing their current capabilities entirely is probably a mistake.
I’m not an apologist, to be clear. There is a ton of ethical and moral baggage tied up with the way they were made and how they’re used, and that needs to be addressed. I also think we’re only a few clever optimizations away from a threat.
Dark_Arc@social.packetloss.gg 1 week ago
I don’t buy the “it’s a neural network” argument. We don’t really understand consciousness or thinking … and consciousness is possibly a requirement for actual thinking.
Frankly, I don’t think thinking in humans is based anywhere near statistical probabilities.
Maybe everything reduces to “neural networks” in the same way LLM-based AI models them … but that seems like an exceptionally bold claim for humanity to make.