Comment on Leak confirms OpenAI's ChatGPT will integrate MCP
jarfil@beehaw.org 3 weeks ago
The connectors are still optional.
Haphazard code is not a new thing. Some statistics claim that almost 50% of “vibe coded” websites have security flaws. It’s not much different from the old “12345” password, or the “qwerty” one (not naming names, but I’ve known people using it on government infrastructure), or the “who’d want to hack us?” attitude.
MCP is the right step forward; there’s nothing wrong with it in itself.
People disregarding basic security practices… will suffer, as always… and I don’t really see anything wrong with that either. Too bad for those forced to rely on them, but that’s a legislative and regulatory issue, vote accordingly.
Dark_Arc@social.packetloss.gg 3 weeks ago
I really think we just need to move on from this AI craze.
We don’t have a general intelligence. We may never have a general intelligence.
Keep using AI for what it’s good for: statistics-based decision making. Stop trying to use AI for designing solutions; it’s not built for that, because designing requires reasoning, which is something AI cannot do no matter how much snake oil society has been sold.
jarfil@beehaw.org 3 weeks ago
“AI” has been a buzzword basically forever; it’s a moving target of “simulates some human behavior”. Every time something clears that bar, we call it an “algorithm” and move the goalposts for “true AI”.
I don’t know if we’ll ever get AGI, or even want to, or be able to tell if we get a post-AGI. Right now, “AI” stands for something between LLMs and Agents with an LLM core. Agents benefit from MCP, so that’s good for AI Agents. We can offload some basic reasoning tasks to an LLM Agent, and MCP connectors allow them to interact with other services, even other agents.
A lot of knowledge is locked in the deep web and in corporate knowledge bases. The way to access it safely will be through agents deciding which knowledge to reveal. MCP is aiming to become the new web protocol for “AI”s, no more, no less.
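For the curious, here’s roughly what one of those connectors looks like on the server side. This is a minimal sketch assuming the official MCP Python SDK’s FastMCP helper; the server name, the `search_kb` tool, and the toy knowledge base are made up for illustration:

```python
# Minimal sketch of an MCP server exposing one tool to an agent.
# Assumes the official MCP Python SDK (pip install mcp); the tool
# name and the fake knowledge base below are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("knowledge-base")

# Hypothetical stand-in for a corporate knowledge base.
_DOCS = {
    "onboarding": "New hires get VPN access on day one.",
    "passwords": "Minimum 16 characters; no 'qwerty', please.",
}

@mcp.tool()
def search_kb(query: str) -> str:
    """Return knowledge-base entries whose topic matches the query."""
    hits = [text for key, text in _DOCS.items() if query.lower() in key]
    return "\n".join(hits) or "No matching entries."

if __name__ == "__main__":
    # Serves over stdio by default; an agent attaches this as an MCP
    # connector and decides when (and whether) to call search_kb.
    mcp.run()
```

The point being: the tool, not the client, sits in front of the data, so the agent side gets to decide which knowledge is revealed, which is exactly where the basic security practices from upthread matter.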
Dark_Arc@social.packetloss.gg 3 weeks ago
No, you can’t. It cannot reason. It’s just been fed so much existing text that it appears like it can in some cases. That’s an extremely dangerous foundation on which to build anything.
jarfil@beehaw.org 3 weeks ago
I doubt it’s been fed text about “bergro”, “parava”, and “rortx”; this looks like basic reasoning to me:
[Image: chat screenshot of the made-up-words reasoning example]
Quexotic@beehaw.org 3 weeks ago
You’re not wrong, but I don’t think you’re 100% correct either. The human mind is able to synthesize reasoning by using a neural network of neurons to make connections and develop a profoundly complex statistical model. LLMs do essentially the same thing, and they do it poorly in comparison. They don’t have the natural optimizations we have, so they kinda suck at it for now, but dismissing their current capabilities entirely is probably a mistake.
I’m not an apologist, to be clear. There is a ton of ethical and moral baggage tied up with the way they were made and how they’re used, and that needs to be addressed. I think we’re only a few clever optimizations away from a threat.