Comment on Turing completeness
edinbruh@feddit.it 4 days ago
If Turing were alive, he would say that LLMs waste computing power doing something humans should be able to do on their own, and that we therefore shouldn't waste time studying them.
Which is what he said about compilers and high-level languages (here, "high level" means something like Fortran, not Python).
UnrepentantAlgebra@lemmy.world 4 days ago
Where did he say that about compilers and high-level languages? He died before Fortran was released and probably programmed on punch cards or tape.
edinbruh@feddit.it 4 days ago
I'll try to find it later; I read that he said it in a book by Martin Davis.
UnrepentantAlgebra@lemmy.world 2 days ago
Fair, I'm mostly just curious what high-level languages were around at the time, given how early this was in the history of programming. A quick search did not turn up helpful results.
edinbruh@feddit.it 2 days ago
Oh, it probably wasn't about an existing language, but about someone studying what would become high-level languages, such as linkers and symbolic representations of programs.
fermuch@lemmy.ml 4 days ago
Wasn't his goal to simulate a brain?
edinbruh@feddit.it 4 days ago
Neural networks don't simulate a brain; that's a misconception caused by their name. They have nothing to do with biological neurons.
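As an aside on the naming: what an artificial "neuron" actually computes can be shown in a few lines. This is a generic illustration; the inputs, weights, and bias below are made-up numbers, not from any real network.

```python
import math

# An artificial "neuron" is just a weighted sum pushed through a squashing
# function: plain arithmetic, not a simulation of a biological cell.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic activation

# Made-up inputs and weights, purely illustrative.
y = neuron([1.0, 0.0], [2.0, -1.0], -1.0)
print(round(y, 4))  # 0.7311
```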
fermuch@lemmy.ml 3 days ago
Not what I meant. What I mean is: this could be the path he would take, since his desire was to make a simulated person (an AI).
edinbruh@feddit.it 3 days ago
LLMs are not the path forward to simulating a person; that is a fact. By design they cannot reason. It's not a matter of advancement; it's literally how they work in principle: a statistical trick that generates random text resembling thought-out phrases, with no reasoning involved.
If someone tells you they might be the way forward to simulate a human, they are scamming you.
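The "statistical trick" framing can be made concrete: at its core, generation means repeatedly picking the next token from a probability table conditioned on what came before. The toy bigram table below is invented for illustration; a real LLM learns a vastly larger conditional distribution with a neural network, but the generation loop is the same idea.

```python
# Toy bigram "language model": each token maps to a probability
# distribution over possible next tokens.
bigram = {
    "<s>": {"the": 0.7, "a": 0.3},
    "the": {"cat": 0.5, "dog": 0.4, "the": 0.1},
    "a":   {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.8, "ran": 0.2},
    "dog": {"ran": 0.9, "sat": 0.1},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def generate(start="<s>", max_len=10):
    """Repeatedly pick the most probable next token: no reasoning, only lookup."""
    out, tok = [], start
    while tok != "</s>" and len(out) < max_len:
        tok = max(bigram[tok], key=bigram[tok].get)
        if tok != "</s>":
            out.append(tok)
    return " ".join(out)

print(generate())  # the cat sat
```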
1984@lemmy.today 4 days ago
Humans are able to do it, but it takes us weeks instead of seconds.
edinbruh@feddit.it 4 days ago
I don’t like it because people don’t shut up about it and insist everyone should use it when it’s clearly stupid.
LLMs are language models; they don't actually reason (not even the "reasoning" models). When they get a piece of reasoning right, it's by chance, not by design. Anything that isn't language processing shouldn't be done by an LLM. Conversely, they are pretty good with language.
We already had automated reasoning tools. They are used for industrial optimization (e.g., finding optimal routes, deciding how to allocate production), and no one cared about those.
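For a flavor of what such optimization tools do, here is a brute-force route search. The distance numbers are invented for illustration, and real industrial solvers use integer programming and heuristics rather than exhaustive search, which only works for tiny instances like this one.

```python
from itertools import permutations

# Hypothetical symmetric distances between four sites (invented numbers).
dist = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4,
    ("C", "D"): 8,
}

def d(x, y):
    # Distances are symmetric, so look up the pair in either order.
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

def best_route(start="A", others=("B", "C", "D")):
    """Try every visiting order and keep the cheapest round trip."""
    best_cost, best_path = None, None
    for order in permutations(others):
        route = (start, *order, start)
        cost = sum(d(a, b) for a, b in zip(route, route[1:]))
        if best_cost is None or cost < best_cost:
            best_cost, best_path = cost, route
    return best_cost, best_path

cost, route = best_route()
print(cost, "->".join(route))  # 23 A->B->D->C->A
```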
As if that weren't enough, the internet is now full of slop, hardware companies are fueling an arms race that feeds an economic bubble, and people are being fired to be replaced by something that will not actually work in the long run, because it does not reason.
1984@lemmy.today 4 days ago
Yeah, I totally agree about the slop and how it's destroying what the web was supposed to be. It makes sense that people would hate it for that.
I don't really use them for reasoning; I just use them to help with code or to find facts faster.
But I know these things are also the beginning of a very dystopian society. Once all the data centers are built, every person will be watched forever by AI.