Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash
southsamurai@sh.itjust.works 13 hours ago
Yeah, this is actually one of the good things a technology like this can do.
He’s dead right about the slop question: if it’s someone with training and experience using a tool, it doesn’t matter whether that tool is vim or Claude. It ain’t slop if it’s built right.
slop is slop.
microslop
slopware
slopity slop slop.
And talking in absolutes without looking for nuance is neither mature nor a sign of critical thinking.
I’m sorry. You’re absolutely right. I shouldn’t have said that.
You didn’t learn anything from it, did you?
Lmao I see what you did there
It is awesome that you left the previous comment in place. Mad props!
echodot@feddit.uk 10 hours ago
Yeah, but the problem is: is it? They absolutely insist that we use AI at work, which is not only an insane concept in and of itself, but if I have to nanny it to make sure it doesn’t make a mistake, then how is it a useful product?
He says it helps him get work done he wouldn’t otherwise do, but how’s that possible? How is it possible that he’s giving every line of code the same scrutiny he would if he wrote it himself, if he himself admits that he would never have gotten around to writing that code had the AI not done it? The math ain’t matching on this one.
p03locke@lemmy.dbzer0.com 9 hours ago
When was the last time you coded something perfectly? “If I have to nanny you to make sure you don’t make a mistake, then how are you a useful employee?” See how that doesn’t make sense? There’s a reason why good development shops live on the backs of their code reviews and review practices.
The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.
There’s also something to be said about the value in being able to tell an LLM to go chew on some code and tests for 10 minutes while I go make a sandwich. I get to make my sandwich, come back, and there’s code there. I still have to review it, point out some mistakes, and then go back and refill my drink.
And there’s so much you can customize with personal rules. Don’t like its coding style? Write Markdown rules that reflect your own style. Have issues with it tripping over certain bugs? Write rules or memories that remind it to be more aware of those bugs. Are you explaining a complex workflow to it over and over again? Explain it once, and tell it to write the rules file for you.
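As a concrete illustration of the rules idea, here’s a hypothetical project rules file. The filename and section headings are assumptions; the exact convention depends on the tool (Claude Code reads a `CLAUDE.md` memory file, Cursor has its own rules format), so treat this as a sketch, not a spec:

```markdown
# CLAUDE.md — hypothetical project rules sketch

## Coding style
- Use 4-space indentation and snake_case for Python identifiers.
- Prefer early returns over deeply nested conditionals.

## Known pitfalls
- The config loader treats empty strings and missing keys differently;
  do not "simplify" that check.

## Workflow
- Run the unit tests after every change and report failures
  before proposing more edits.
```

The point is that each rule is written once and then applies to every future session on that project, instead of being re-explained in every prompt.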
All of that saves more and more time. The more rules you have for a specific project, the more knowledge it retains on how to code for that project, and the more experience you gain in communicating with an entity that can understand your ideas. You wouldn’t believe how many people can’t rubberduck and explain concepts properly to people, much less LLMs.
LLMs are patient. They don’t give a shit if you keep demanding more and more tweaks and fixes, or if you have to spend a bit of time trying to explain a concept. Human developers would tell you to fuck off after a while, and get tired of your demands.
BorgDrone@feddit.nl 1 hour ago
But the problem never was typing in the actual code. The majority of coding is understanding the problem you’re trying to solve and figuring out a good solution. If you let the AI do the thinking for you, then you’re building AI slop. You can’t review your way out of it because a proper review still requires that level of understanding the problem. If you just let the AI do the typing for you, there’s very little to be gained there as the time spent typing is negligible.
AI may be good at building simple, boilerplate-level code. But that’s what we have junior developers for, and we need junior developers because they grow into mid-level and senior developers.
p03locke@lemmy.dbzer0.com 23 minutes ago
No, for major projects, you start out with a plan. I may spend upwards of 2-3 hours just drafting a plan with the LLM: figuring out options, asking questions when it’s an area I don’t have deep familiarity with, crafting what the modules are going to look like. It’s not slop when you’re planning out what to do and what your end result is supposed to be.
We are not the same
People who talk this way have zero experience with actually using LLMs, especially coding models.
southsamurai@sh.itjust.works 10 hours ago
Well, I’m not a code monkey, between dyslexia and an aging brain. But if it’s anything like the tiny bit of coding I used to be able to do (back in the days of BASIC and Pascal), you don’t really have to pore over every single line. The only time that’s needed is when something is broken. Otherwise, you’re scanning to keep oversight, which is no different from reviewing a human’s code that you didn’t write.
Look at it like this: we automated assembly of machines a long time ago. It had flaws early on that required intense supervision. The only difference here on a practical level is how the damn things learned in the first place. Automating code generation is far more similar to that than to LLMs generating text or images, which aren’t logical by nature.
If the code used to train the models was good, what it outputs will be no worse at scale than some high school kid in an AP class stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it’ll get that review even if the project maintainers slip up.
And being real, Lutris has been very smooth across the board while using the generated code so far. So if he gets lazy, it could go downhill, but that could happen with his own code too.
Another concept I’m more familiar with that does relate: writing fiction can take months, but editing fiction usually takes days, and you can still miss stuff (my first book has typos and errors to this day because of the aforementioned dyslexia and me not having a copy editor).
My first project back in the eighties, in BASIC, took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.
Maybe I’m too far behind on the various languages, but I really can’t see it being a massively harder proposition to scan and edit the output of an LLM.