Comment on Why can't people make AIs by making a neuron sim and then scaling it up with a supercomputer to the point where it has a human's number of neurons and then raise it like a human?

rtfm_modular@lemmy.world 7 months ago

All fair points, and I don't deny predictive text generation is at the core of what's happening. I think it's fair to say that most people hear "predictive text" and picture the suggested words in a text message, when it's much more than that.
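
To make the distinction concrete, here's a minimal sketch of the contrast. The corpus is a toy and `toy_logits` is a made-up stand-in for a real model, not an actual transformer; the point is only that phone-style autocomplete looks at the last word, while LLM-style prediction scores the entire vocabulary against the whole context:

```python
# Toy contrast: phone-style autocomplete vs. LLM-style next-token prediction.
# Corpus, vocabulary, and scoring function are all made up for illustration.
from collections import Counter, defaultdict
import math

corpus = "the cat sat on the mat the cat ate the food".split()

# 1) Phone-style autocomplete: a bigram table keyed on ONLY the last word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(last_word):
    """Suggest the most frequent words that followed `last_word`."""
    return [w for w, _ in bigrams[last_word].most_common(3)]

# 2) LLM-style prediction: score EVERY vocabulary word against the WHOLE
# context, then turn the scores into a probability distribution (softmax).
vocab = sorted(set(corpus))

def toy_logits(context):
    """Stand-in for a transformer: here, just how often each vocab word
    appears in the context. A real model computes these scores with
    billions of learned parameters attending over all context tokens."""
    return [sum(1.0 for c in context if c == w) for w in vocab]

def next_token_distribution(context):
    logits = toy_logits(context)
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return {w: e / total for w, e in zip(vocab, exps)}

print(autocomplete("the"))                                # sees only "the"
print(next_token_distribution("the cat sat on the".split()))  # sees it all
```

The real gap is scale: an actual LLM replaces `toy_logits` with billions of learned parameters conditioned on thousands of context tokens, but the output is still a probability distribution over the next token.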

I also don't think Turing tests are particularly useful long term, because humans are so fallible. We hallucinate all the time too, holding convictions built on false memories. Getting an AI to show what looks like an emotional response, or uncertainty and confusion, in a Turing test is a great way to trick people.

The algorithm is already a black box, as are the mechanics of our own intelligence. We have no idea where the ceiling is for this technology yet. This debate quickly turns into the ontological and epistemological question of what it means to be intelligent: if the AI's predictive text generation is complex enough that you simply cannot tell the difference, is there a meaningful difference? What if we are just insanely complex algorithms?

I also don’t trust that what the market sees in AI products is indicative of the current limits. AGI isn’t here yet, but LLMs are a scary big step in that direction.

Pragmatically, I will maintain that AI is a different form of intelligence, because I think that framing gets us more quickly to better discussions about policy and how we want this tech in our lives. I would gladly welcome news that tells me I'm wrong.
