Comment on Leak confirms OpenAI's ChatGPT will integrate MCP
Quexotic@beehaw.org 2 weeks ago
You’re not wrong, but I don’t think you’re 100% correct either. The human mind is able to synthesize reason by using a neural network to make connections and develop a profoundly complex statistical model using neurons. LLMs do the same thing, essentially, and they do it poorly in comparison. They don’t have the natural optimizations we have, so they kinda suck at it now, but to dismiss the capabilities they currently have entirely is probably a mistake.
I’m not an apologist, to be clear. There is a ton of ethical and moral baggage tied up with the way they were made and how they’re used, and it needs to be addressed. And I think that we’re only a few clever optimizations away from a threat.
Dark_Arc@social.packetloss.gg 2 weeks ago
I don’t buy the “it’s a neural network” argument. We don’t really understand consciousness or thinking … and consciousness is possibly a requirement for actual thinking.
Frankly, I don’t think thinking in humans is based anywhere near statistical probabilities.
Maybe everything reduces to “neural networks” in the same way LLM AI models them … but that seems like an exceptionally bold claim for humanity to make.
Quexotic@beehaw.org 2 weeks ago
It makes sense that you don’t buy it. LLMs are built on simplified renditions of neural structure. They’re totally rudimentary.
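For context on just how rudimentary that simplification is: the "neuron" in an artificial neural network is nothing more than a weighted sum passed through a nonlinearity. A minimal sketch (plain Python, no framework; the function name and numbers are illustrative, not from any real model):

```python
import math

def neuron(inputs, weights, bias):
    # An artificial "neuron": weighted sum of inputs plus a bias,
    # squashed by a sigmoid into the range (0, 1).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# One activation; a biological neuron is vastly more complex than this.
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

Everything an LLM does is stacks of units like this, which is why calling them "simplified renditions of neural structure" is, if anything, generous.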