manualoverride@lemmy.world 3 days ago
I’ve said this on Lemmy a few times before, but 25+ years ago my AI dissertation was on a mushroom-identification algorithm. It concluded that even with all the computing power in the world it would not be possible to create an infallible system, and that it was therefore wholly unethical to create one when the cost of failure is death.
25 years later, AI is still the same; we’ve just decided to give it all that computing power.
Just simulate an actual brain on a computer, forget AI.
We are a few years away from that.
The real challenge is simulating a human brain at 10-million-times real-time speed.
Username checks out
I like your username 😀
ignotum@lemmy.world 3 days ago
By that logic it would be unethical for an expert to give advice, or to even teach others to identify mushrooms, since they too are fallible and it could lead to death?
Or saying it was unethical to invent cars because they can (and most certainly do) cause deaths.
Almost everything would be unethical really
floquant@lemmy.dbzer0.com 2 days ago
What makes an expert is the ability to say “this is unequivocally safe to eat, because I can positively identify it based on this and this feature”, as well as “it is not possible/I am not able to confidently identify this mushroom as safe”
ignotum@lemmy.world 2 days ago
So an AI that can identify mushrooms, and also tell the user when a mushroom is too similar to a dangerous one to be identified with high enough certainty to be safe, would be ethical?
Then how can anyone claim that no such system can ever be created? That makes no sense
floquant@lemmy.dbzer0.com 2 days ago
A 2D visual representation is not the same as the real thing
lIlIlIlIlIlIl@lemmy.world 2 days ago
It’s just anti-AI hate. They’re like flat-earthers
manualoverride@lemmy.world 2 days ago
Now I don’t profess to remember the entire paper, but one section was certainly on “human factors”: the difference with an expert is that a human can place emphasis on the dangers above all else, which an AI is often incapable of conveying, and a car still has a human driver.
The whole point was that this was a very limited and narrow model, with AI image recognition and the built-in assumption that the thing the human was describing and picturing is a mushroom, and it’s still fallible. Specifically, a mushroom-identification program is a really bad idea and absolutely unethical to create; a system that answers any question you ask it, where you sort out the guardrails as you go… that’s dangerous.
ignotum@lemmy.world 2 days ago
So the argument is that you tried an AI once and it couldn’t do the thing, therefore it’s impossible to ever create an AI that can?
Let’s say we reach the point where we can scan and then simulate the entire brain of a mushroom expert. Then you’d have an AI that gives the same responses a human expert would. Is it ethical now? (Ignoring the ethics of simulating a person like that.)
Simple classification problems are relatively trivial: train an image classifier to take in a picture of a mushroom and predict the species, as well as whether the mushroom resembles a dangerous one, and, for good measure, whether the picture is good enough to give reliable results. Train it on feedback from experts and it should end up as reliable as the experts it was based on.
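The architecture that comment describes could be sketched as a single network with three output heads: one for the species, one for a “resembles a dangerous species” flag, and one for an image-quality score. This is a hypothetical minimal sketch in PyTorch, not anyone’s actual system; the tiny backbone, class count, and head names are all illustrative assumptions.

```python
# Hypothetical sketch of the multi-output classifier described above.
# A real system would use a pretrained backbone, far more data, and
# expert-labelled feedback -- and would still be fallible.
import torch
import torch.nn as nn

class MushroomClassifier(nn.Module):
    def __init__(self, num_species: int = 50):
        super().__init__()
        # Tiny convolutional feature extractor (illustrative only).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.species_head = nn.Linear(32, num_species)  # which mushroom
        self.danger_head = nn.Linear(32, 1)    # lookalike-of-dangerous risk
        self.quality_head = nn.Linear(32, 1)   # is the photo good enough?

    def forward(self, x):
        h = self.backbone(x)
        # Risk and quality are squashed to [0, 1] so thresholds can be
        # applied: "too similar to a dangerous species" or "photo unusable"
        # would both mean refusing to give an identification.
        return (self.species_head(h),
                torch.sigmoid(self.danger_head(h)),
                torch.sigmoid(self.quality_head(h)))

model = MushroomClassifier()
species, danger, quality = model(torch.randn(1, 3, 64, 64))
```

The refusal behaviour floquant asked for lives outside the network: the caller checks `danger` and `quality` against thresholds and declines to answer when either is out of bounds.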
manualoverride@lemmy.world 1 day ago
Well, I did study for 5 years, code the AI myself, and spend 4 months training it using spare screensaver-style processing on ~800 computers. Not like I downloaded an AI from the Play Store and declared it to be rubbish. 😀
Even with reinforcement learning from human feedback, this is still a neural network where not every pathway leads to the correct outcome.
Regardless of all the complexities, people are still far more accepting of human error than of AI error in extreme situations.