I totally agree with Linus Torvalds that AIs are just overhyped autocorrect on steroids
Did he say that? I hope he didn’t mean all kinds of AI. While “overhyped autocorrect on steroids” might be a funny way to describe sequence predictors/generators like transformer models, recurrent neural networks, or some reinforcement-learning-type AIs, it’s not so true for classifiers, like the classic feed-forward network (which is part of the building blocks of transformers, btw), or convolutional neural networks, or unsupervised learning methods like clustering algorithms or principal component analysis. Then there are reasoning AIs like Bayesian nets, and many, many more kinds of ML/AI models and algorithms besides.
It would just show a vast lack of understanding to judge an entire discipline that simply.
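To make the point concrete: an unsupervised method like k-means clustering doesn’t predict the next token of anything, it just groups data. Here’s a minimal pure-Python sketch (toy data and all names are illustrative, not from any particular library):

```python
# Minimal k-means clustering sketch: groups points into k clusters.
# No sequence prediction involved -- nothing "autocorrect" about it.

def kmeans(points, k, iters=20):
    # Initialise centroids with the first k points (fine for a toy example).
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious groups of 2D points:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, cls = kmeans(pts, 2)
```

After a few iterations the two clusters separate the points near the origin from the points near (10, 10): a learned structure with no generated text in sight.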
Zworf@beehaw.org 7 months ago
The LLMs for text are also based on “theft”. They’re just much better at hiding it because they draw on vastly more source material. Still, it does sometimes happen that they quote a source article verbatim.
Rozauhtuno@lemmy.blahaj.zone 7 months ago
If I had the patience, I’d try to explain the Chinese Room thought experiment to the people who misunderstand AIs. But I don’t, so I usually just shut up 🙂
onlinepersona@programming.dev 7 months ago
I’m hoping it’ll quote the license I put in my comments (should my text ever be included in the training set) and get somebody in trouble. But yeah, once anything has been transformed it’s difficult to undo and see what the source material was, so commercial LLMs can mostly just get away with it.
Anti Commercial-AI license