Mika@piefed.ca 12 hours ago
Tbf the AI tag should be about AI-generated assets. Cause there is no problem in keeping code quality while using AI, and that’s what the whole dev industry does now.
Cause there is no problem in keeping code quality while using AI,
Hahahahahahahaha
No, the issue with “AI” is thinking that it’s able to make anything production ready, be it art, code or dialog.
I do believe that LLMs have lots of great applications in a game pipeline; things like placeholders and copilot for small snippets work great. But if you think that anything an LLM produces is production-ready and doesn’t need a professional to look at it and redo it (because that’s usually easier than fixing the mistakes), you’re simply out of touch with reality.
Are you even reading what I say? You are supposed to have a professional approving generated stuff.
But it’s still AI-generated; it doesn’t become less AI-generated because a human who knows shit about the subject approved it.
This is what you said:
Tbf the AI tag should be about AI-generated assets. Cause there is no problem in keeping code quality while using AI, and that’s what the whole dev industry does now.
At no point did you mention someone approving it.
Also, you should read what I said: most large stuff generated by AI needs to be completely redone. You can generate a small function or maybe a small piece of an image if you have a professional validating that small chunk, but if you think you can generate an entire program or image with LLMs, you’re delusional.
Dude, are you a software dev? Did you hear about, like, tickets? You’re supposed to split a bigger task into smaller tickets at the project approval phase.
LLM agents are completely capable of taking well-documented tickets and generating some semblance of code that you shape with a few follow-up prompts, criticising code style and issues until they’re all fixed.
I’m not being theoretical, this is how it’s done today. With MCPs into JIRA and Figma, UI tickets just get about 90% done in a single prompt. Harder stuff is done in an “investigate and write a .md on how to solve it” then “this is why that won’t work, do this instead” loop, to like 70% ready.
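Roughly, the loop looks like this (a minimal sketch, assuming an OpenAI-style chat API; the model name, ticket text, and review comments are made-up placeholders, not from any real project):

```python
# Sketch of the ticket -> generate -> criticise loop described above.
# Assumes the OpenAI Python client; the ticket and review comments are invented.
from openai import OpenAI

client = OpenAI()

ticket = """PROJ-1234: Add a 'reset to defaults' button to the settings panel.
Acceptance criteria: sits next to 'Save', restores config.defaults,
emits a settings_reset analytics event."""

def ask(messages):
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# First pass: generate code from the well-documented ticket.
history = [
    {"role": "system", "content": "You are a senior engineer. Output only code."},
    {"role": "user", "content": f"Implement this ticket:\n{ticket}"},
]
draft = ask(history)

# Follow-up passes: a human reads the draft and feeds back concrete criticism
# (style, naming, missed acceptance criteria) until it is acceptable.
review_comments = [
    "Don't duplicate the save handler, reuse the existing one.",
    "You missed the settings_reset analytics event from the acceptance criteria.",
]
for comment in review_comments:
    history += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": comment},
    ]
    draft = ask(history)

print(draft)  # and it still goes through a normal PR review before it ships
```

The human criticism in the loop, plus the normal PR review afterwards, is what keeps the quality bar, not the model on its own.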
The killer app is language processing, and if a localization contractor isn’t using an LLM to quickly check for style errors and inconsistencies, they’re just making things harder for themselves for no good reason.
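That kind of check can be a very small script (again just a sketch with the same assumed OpenAI-style API; the strings and style rules below are invented examples):

```python
# Sketch of an LLM pass over translated UI strings to flag style errors and
# inconsistencies. The strings and style rules below are invented examples.
import json
from openai import OpenAI

client = OpenAI()

strings = {
    "menu.start": "Commencer la partie",
    "menu.quit": "Quitter le jeu",
    "dialog.confirm_quit": "Êtes-vous sûr de vouloir quitter la partie ?",
}

style_rules = (
    "Target language: French. Use the informal 'tu' form everywhere. "
    "Translate 'game' consistently as 'partie', never 'jeu'."
)

prompt = (
    "Check these UI strings against the style rules and flag violations.\n"
    f"Rules: {style_rules}\n"
    f"Strings: {json.dumps(strings, ensure_ascii=False)}\n"
    'Reply with a JSON object like {"issues": [{"key": "...", "issue": "..."}]}; '
    "use an empty list if everything is clean."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": prompt}],
)

# A human localizer still decides what to do with each flagged string.
for item in json.loads(resp.choices[0].message.content)["issues"]:
    print(item["key"], "->", item["issue"])
```

The flagged strings still land on a human localizer’s desk; the model just does the tedious first pass.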
NaibofTabr@infosec.pub 12 hours ago
This opinion is contradicted by basically everyone who has attempted to use models to generate useful code which must interface with existing codebases. There are always quality issues, it must always be reviewed for functional errors, it rarely interoperates with existing code correctly, and it might just delete your production database no matter how careful you try to be.
Mika@piefed.ca 12 hours ago
So don’t accept code that is shit. Have a decent PR process. Accountability is still on the human.
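You can even make that gate mechanical, e.g. a CI step that refuses to pass until a human has approved the PR (a sketch against the public GitHub REST API; the repo name, environment variables, and bot-account list are placeholder assumptions):

```python
# Sketch of a CI gate: fail unless at least one non-bot review on the PR is an
# approval. Repo, env vars and the bot list are placeholders.
import os
import sys

import requests

OWNER, REPO = "example-org", "example-game"
PR_NUMBER = os.environ["PR_NUMBER"]            # provided by the CI runner
BOT_LOGINS = {"github-actions[bot]", "dependabot[bot]"}  # assumed bot accounts

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/reviews",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()

human_approvals = [
    r for r in resp.json()
    if r["state"] == "APPROVED" and r["user"]["login"] not in BOT_LOGINS
]

if not human_approvals:
    sys.exit("No human approval on this PR; accountability stays with a person.")

print("Approved by:", ", ".join(r["user"]["login"] for r in human_approvals))
```

Wire something like that in as a required check and AI-assisted code simply can’t merge without a named human signing off on it.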
Bronzebeard@lemmy.zip 10 hours ago
The people lazy enough to have AI generate their code aren’t going to do that.
dukemirage@lemmy.world 12 hours ago
Keeping code quality is not the same as code generation.
Katana314@lemmy.world 10 hours ago
I feel like I get where he’s coming from, but I can see the revulsion.
I picture someone asking their AI to write a rules engine for a gamemode and getting masses of duplicative, horrific code; but in my own work, my company has encouraged an assistive tool, and once it has an idea of what I’m trying to do, it will offer autocomplete options that are pretty spot on.
Still, I very much agree it’s hard to sort out the difference, and in untrained hands it can definitely lead to unmaintainable code slop. Everything needs to get reviewed by knowledgeable human eyes before running.