Yeah, that’s true for a subset of code. But for the rest, the hardest parts happen in the brain, not in the files. Writing readable code is hugely important, especially when you’re working with larger teams. Lots of people cut corners here and elsewhere in coding, though, including basically every startup I’ve ever seen.
There’s a lot of gruntwork in coding, and LLMs are very good at the gruntwork. But coding is also an art and a science, and LLMs aren’t good at either at a high level (same with visual art and “real” science; think of the code equivalent of seven deformed fingers).
I don’t mean to hand-wave the problems away. I know that people are going to push the limits far beyond reason, and I know it’s going to lead to monumental fuckups. I know that because it’s been true for my entire career.
BlameThePeacock@lemmy.ca 4 hours ago
If the AI were writing ALL the code for an entire application, that would be a problem. But as an assistant to a programmer, if it spits out a single line or even a small function, you can read it over quickly to validate it before moving on to the next component.
TehPers@beehaw.org 3 hours ago
This isn’t how we’re being asked to use it. People are doing demos about how Cursor or whatever did the bootstrapping and the entire POC for them. And we already know there’s nothing more permanent than a POC.
BlameThePeacock@lemmy.ca 3 hours ago
This is exactly how most developers are being asked to use it; it’s literally how most of the IDE integrations work.
TehPers@beehaw.org 3 hours ago
[citation needed]
At work, we constantly get emails, demos, etc. about how they’re using AI to generate everything from UI designs (v0) to starter projects, and how they manage these huge prompts and reference docs for their agents.
Copilot’s line-by-line suggestions are also being pushed, but they care more about the “agentic” stuff.
I watch coworkers regularly ask it to “add X route to the API” or “make a simple UI that calls Y API”. They are asking it to do their work.
I have to review these PRs. They come in at an incredible rate, and they almost always conflict with each other. I can’t review them fast enough and still do my own work.
Also, we get AI-generated code reviews at work. I have to talk to a chatbot to get help from HR. Some search bars have been replaced with chatbots. It’s everywhere and I’m getting sick of it.
I just want real information from informed people. I want to review code that a human did their best to produce. I want to be able to help people improve their skills, not just their prompts.
I’m getting to the point where I’m going to start calling people out if their chatbot/agent/LLM/whatever produces slop. I’m going to give them ownership of it. It’s their output, not the AI’s.