AmbiguousProps@lemmy.today 15 hours ago
Until it leaves a security issue that isn't immediately visible and your users get pwned.
Funny that you say “bullshit you make up”, when all LLMs do is hallucinate and sometimes, by coincidence, have a “correct” result.
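To make "isn't immediately visible" concrete: here's a minimal Python sketch of the kind of flaw that sails through a casual review (the HMAC-verification scenario and function names are illustrative assumptions, not anything from this thread).

```python
import hmac
import hashlib

SECRET_KEY = b"example-secret"  # illustrative placeholder, not a real key

def sign(message: bytes) -> str:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

# Looks fine, reviews fine, passes tests. But '==' compares byte by byte
# and bails on the first mismatch, so response timing leaks how much of
# a forged signature is correct: invisible until someone exploits it.
def verify_insecure(message: bytes, signature: str) -> bool:
    return sign(message) == signature

# The fix is one call: a constant-time comparison.
def verify(message: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(message), signature)
```

Both versions return identical results on every test input; only the timing side channel differs, which is exactly why a quick code review doesn't catch it.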
onslaught545@lemmy.zip 11 hours ago
No one ever said push it to production without a code review.
supersquirrel@sopuli.xyz 10 hours ago
That is EXACTLY what this mindset leads to; it doesn't need to be said out loud.
AmbiguousProps@lemmy.today 10 hours ago
“my coworkers should have to read the slop so I don’t have to”
Eheran@lemmy.world 4 hours ago
Calculations with bugs do not magically produce correct results and plot them correctly. Nor can such simple code silently change values that were read from a file or device. Etc.
I do not care what you program or how bugs sneak into it. I use it for data analysis, simulations, etc., with exactly zero security implications and generally no interaction with anything outside the computer.
The hostility here against anyone using LLMs/AI is absurd.
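Read charitably, the claim here is that offline analysis code can be validated end to end against inputs with a known answer, so bugs surface instead of hiding. A minimal sketch of that sanity-check habit, assuming NumPy (the moving-average function is a made-up example, not Eheran's actual code):

```python
import numpy as np

# Illustrative habit: before trusting generated analysis code on real
# data, run it on input where the correct answer is known in advance.
def moving_average(x: np.ndarray, window: int) -> np.ndarray:
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# A constant series must average to that same constant. A typical bug
# (wrong normalisation, off-by-one window) fails this check loudly
# instead of silently shipping a wrong plot.
check = moving_average(np.full(10, 3.0), window=4)
assert np.allclose(check, 3.0), "a bug would be visible right here"
```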
AmbiguousProps@lemmy.today 4 hours ago
Then why bring up code reviews and 500 lines of code? We were not talking about your "simulations" or whatever else you've mentioned here.
I have no idea what you’re trying to say with your first paragraph. Are you trying to say it’s impossible for it to coincidentally get a correct result? Because that’s literally all it can do. LLMs do not think, they do not reason, they do not understand. They are not capable of that. They are literally hallucinating all of the time, because that’s how they work.
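The "hallucinating all of the time" point is about the mechanism: generation is one sampling process, and factual correctness is a property of the output, not a separate mode of operation. A toy bigram sampler (entirely illustrative, no real LLM machinery) shows the shape of that argument:

```python
import random

# Toy stand-in for "a model": a next-token distribution plus a sampler.
# The sampling step is identical whether the continuation happens to be
# true ("Paris") or false ("Lyon"); there is no separate knowing-mode.
bigrams = {
    "the":     [("capital", 1.0)],
    "capital": [("of", 1.0)],
    "of":      [("France", 1.0)],
    "France":  [("is", 1.0)],
    "is":      [("Paris", 0.7), ("Lyon", 0.3)],
}

def generate(start: str, steps: int) -> str:
    out = [start]
    for _ in range(steps):
        options = bigrams.get(out[-1])
        if not options:
            break
        words, weights = zip(*options)
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))  # sometimes "right", by the exact same process
```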