Comment on Are LLMs capable of writing *good* code?

Jimmycrackcrack@lemmy.ml 2 months ago

I don’t know how to program, but to a very limited extent I can sort of understand the logic of very short and simplistic code that’s been written for me by someone who can actually code. I tried to get ChatGPT to write a shell script for me to work as part of an Apple Shortcut. It had no idea. It was useless, ridiculously inconsistent, and forgetful. It was the first and only time I used ChatGPT. Not very impressed.

Given that it’s smart enough to produce output that’s somewhere in the area of correct, albeit still wrong and logically flawed, I would guess it could eventually be carefully prodded into making one small snippet of something someone might call “good”. But at that point I feel like it’s much more an accident, in the same way that someone who has memorised a lot of French vocabulary but never actually learned French might accidentally produce a coherent sentence once in a while, having tried and failed 50 times before and failing again immediately after, without ever knowing they’d succeeded.
