If you’re honestly asking, LLMs are much better at coding than at any other skill right now. On one hand there’s a ton of high quality open source training data that was appropriated; on the other, code is structured language, so it’s very well suited to what models “are”. Plus, code is mechanically verifiable. If you have a bunch of tests, or have the model write tests, it can check its work as it goes.
Practically, the new high end models, GPT 5.4 or Claude Opus 4.6, can write better code faster than most people can type. It’s not like two years ago, when the code mostly wouldn’t build; now they can write hundreds or thousands of lines that work on the first try. I’m no blind supporter of AI, and it’s emotionally complicated to watch after years honing the craft, but for most tasks it’s simple reality that you can do more with AI than without it, whether that’s higher quality, higher volume, or integrating knowledge you don’t have.
Professionally, I don’t feel like I have a choice, if I want to stay employed in the field at least.
veniasilente@lemmy.dbzer0.com 1 day ago
On the contrary!
I’ve seen quite a number of “AI cleanup specialist” job offerings so far, and even a few consulting positions for training juniors away from using AI in development.
(No, I have not seen any position open on training management away from using AI…)