Comment on Are LLMs capable of writing *good* code?

DeLacue@lemmy.world 3 months ago

This question right here perfectly encapsulates everything wrong with LLMs right now. They could be good tools, but the people pushing them have no idea what they even are. LLMs do not make decisions. All the decisions an LLM appears to make were made in the dataset. All those things an LLM does that make it seem intelligent were done or said by a human somewhere on the internet. It is a statistical model that determines what output is most likely to come next. That is it. It is nothing else. It is not smart. It does not and cannot make decisions. It is an algorithm that searches a dataset, and when it can’t find something, it’ll provide convincing-looking gibberish instead.
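To make that concrete, here is a minimal toy sketch (a bigram word counter, nowhere near a real transformer, and purely illustrative): every "decision" it makes is just a frequency that was already sitting in its dataset, and when the dataset has nothing to say it still emits *something*.

```python
from collections import Counter, defaultdict
import random

# Toy training data: the model's "knowledge" is nothing but these words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the dataset.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Return the statistically most likely continuation seen in the data."""
    counts = following.get(prev)
    if not counts:
        # Nothing like this in the dataset: it still produces output,
        # just a guess drawn from whatever words it has ever seen.
        return random.choice(corpus)
    return counts.most_common(1)[0][0]

print(next_word("the"))   # 'cat' - simply because 'the cat' is most frequent
print(next_word("fish"))  # no continuation in the data, so a plausible-looking guess
```

The point of the sketch: nothing in it understands cats or mats; the output is entirely determined by what was counted, and anything outside the counts is filler.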

Listen, think of it like this: a man decides to take the exams to become a doctor in France, but for some reason he learns neither French nor medicine. No, no, instead he studies every former exam and all the answers to them. He gets so good at regurgitating those answers that he can even pass the exam. But at no point does he understand what any of it means, and when asked new and novel questions he gives utter nonsense answers. No matter how good he gets at memorising those answers, he will never get any better at medicine. LLMs are as likely to gain sentience as my Excel spreadsheets are.
