It’s called the AI alignment problem, it’s fascinating, if you want to dig deeper in the subject I highly recommend ‘Robert miles AI safety’ channel on YouTube
Comment on Long Cow is coming
AdolfSchmitler@lemmy.world 5 months agoThere’s an idea about “autistic ai” or something where you give ai an objective like “get a person from point a to b as fast as you can” and the ai goes so fast the g force kills the person but the ai thinks it was a success because you never told it to keep the person alive.
Though I suppose that’s more human error. Something we take as a given but a machine will not.
BlueMagma@sh.itjust.works 5 months ago
nevemsenki@lemmy.world 5 months ago
Computers do what people tell them to do, not what people want.
Buddahriffic@lemmy.world 5 months ago
I read about a military AI that would put its objectives before anything else (like casualties) and do things like select nuclear strikes for all missions that involved destruction of targets. So they adjusted it to allow a human operator to veto strategies, in the simulation this was done via a communications tower. The AI apparently figured out that it could pick the strategy it wanted without veto if it just destroyed the communications tower before it made that selection.
Though take it with a grain of salt because the military denied the story was accurate. Which could mean it wasn’t true or it could mean they didn’t want the public to believe it was true. Though it does sound a bit too human-like for it to pass my sniff test (an AI wouldn’t really care that its strategies get vetoed), but it’s an amusing anecdote.
DragonTypeWyvern@midwest.social 5 months ago
The military: it didn’t destroy the tower, it jammed the comms!
Chakravanti@lemmy.ml 5 months ago
ai thinks
AI’s are Mathematic’s calculations. If you ordered that execution, are you responsible for the death? It happened because you didn’t write instructions well enough; test check against that which doesn’t throw life on the scale; or maybe that’s just the cheeky excuse to be used when people start dying before enough haven’t done so that no one is left A.S. may do it, if your lucky. Doesn’t matter. It’ll just bump over from any of its thousand T-ultiverses.
Cognitive_Dissident@lemm.ee 5 months ago
Here’s the thing: what they keep calling ‘AI’ isn’t really ‘artificial intelligence’ at all. It’s just language processing on a large scale. This type of software has no actual cognitive capability; it can’t ‘think’, it has no capacity to ‘think’ at all, but they’ve written it so it gives the appearance of ‘thinking’; it’s a trick, it’s fake.
Amir@lemmy.ml 5 months ago
That’s specifically LLMs. Image recognition like OP has nothing to do with language processing. Then there’s generative AI which needs some kind of mapping between prompts and weights, but is also a completely different type of “AI”
That doesn’t mean any of these “AI” products can think, but don’t conflate LLMs and AI as being the same
Cognitive_Dissident@lemm.ee 5 months ago
It’s all garbage and I consider all of it to be a fad and I just can’t wait until the world wakes up and realizes what utter crap it is and it just goes away.
Amir@lemmy.ml 5 months ago
Neural networks aren’t going anywhere because they can be genuinely useful, just not to solve every problem
Revan343@lemmy.ca 5 months ago
Exactly. LLMs are just a Chinese room
wischi@programming.dev 5 months ago
Your brain is also “just a Chinese room”. It’s just physic, chemistry and biology. There is no magic inside your brain. If a “Chinese room” is fast enough and can fool everyone into “believing” that it’s fluent in chinese, than the room speaks chinese.
kogasa@programming.dev 5 months ago
This fails to engage with the thought experiment. The question isn’t if “the room is fluent in Chinese.” It is whether the machine learning model is actually comparable to the person in the room, executing program instructions to turn input into output without ever understanding anything about the input or output.
MindTraveller@lemmy.ca 5 months ago
The problem here is that intelligence is a beetle
BlueMagma@sh.itjust.works 5 months ago
How can you know the system has no cognitive capability ? We haven’t solved the problem for our own minds, we have no definition of what consciousness is. For all we know we might be a multimodal LLM ourselves.
Cognitive_Dissident@lemm.ee 5 months ago
If we can’t even begin to understand how a biological brain like ours produces the phenomenon of ‘thought’ and ‘consciousness’, then how the fuck can you build machines and write software that does those things? Rhetorical question, we can’t, full stop. All we’ve got is fakery, the illusion of ‘thinking’, ersatz, not the real thing.
For fuck’s sake, I go round and round with people on this shit every fucking time because everyone believes the hype and are never told the facts. They watch TV shows and movies and think someone made that real. They take for granted what their brains can do naturally and effortlessly (…well, not so effortlessly in too many peoples case) and knowing nothing about software or hardware think it’s trivial to make machines that can do what their own brain can do. It. Is. Not.
MindTraveller@lemmy.ca 5 months ago
Language processing is a cognitive capability. You’re just saying it’s not AI because it isn’t as smart as HAL 9000 and Cortana.
Cognitive_Dissident@lemm.ee 5 months ago
No, moron, I’m NOT. Go talk to neuroscientists; that’s what I did. They’ll tell you: an amoeba has more cognitive capability than the best of this crapware.
YOU get your “”“AI”“” information from media hype, who gets it from AI company marketing departments, who are told: “Sell this crap we created so we can get paid”.
You’re dumb. You’re so dumb that you can’t understand when someone who is actually smart tells you something, so you think they’re dumb. Get yourself a dog, name it ‘Clue’, so you’ll always have one.