Developer survey shows trust in AI coding tools is falling as usage rises
Submitted 1 day ago by misk@sopuli.xyz to technology@lemmy.zip
Comments
irotsoma@lemmy.blahaj.zone 1 day ago
Usage is rising because corporate executives started getting kickbacks and decided they could cut staff by implementing it. But developers who have actually had to use it have realized that while it can be useful in a few scenarios, it requires a ton of review of anything it writes, because it rarely understands context and often makes mistakes that are really hard to debug because they are subtle. So anyone trying to use it for a language or system they don't understand well is going to have a hard time.
spankmonkey@lemmy.world 1 day ago
because it rarely understands context
It never understands context.
oxysis@lemmy.blahaj.zone 1 day ago
And it cannot understand context because it does not think, it’s just an expensive prediction tool
OpenStars@piefed.social 1 day ago
Counterpoint: they want number go up.
Pro Tip: it doesn't even matter if number go up, when they know how to suck up to even higher-ups.
p03locke@lemmy.dbzer0.com 1 day ago
That’s not true. If you give it context, it understands and retains context quite well. The thing is that you can’t just say “write code for me” and expect it to work.
Also, certain models are better at certain tasks than others.
Master167@lemmy.world 1 day ago
Executives are getting kickbacks? I’ve gotta do some research here.
Kronusdark@lemmy.world 1 day ago
This is a true statement.
Feyd@programming.dev 1 day ago
There’s relatively little debate among developers that the tools are or ought to be useful,
Yes there is. No one wants to listen to us. I’ve had 3 levels of people above me ask me how I’ve incorporated AI into my workflow. I don’t get any pushback because my effectiveness is well known, yet the top down edict that everyone else use these shitty tools continues unabated.
Zectivi@piefed.social 1 day ago
Where I work, my skip-levels have started debating on whether they want to consider if an engineer uses AI as a factor for reviews, pay raises and incentives, and are tracking who uses it by way of licenses.
It's a bit ridiculous IMO because they're essentially saying "we don't care if slop makes it into the code base, so long as you are using AI, you will remain gainfully employed."
Feyd@programming.dev 1 day ago
I’ve seen a lot of stupid shit over my career but this AI zealotry just takes the cake.
Everyone is so convinced these tools will make software get made faster, but I’m not even convinced that it gives even a modest benefit. For me personally they definitely don’t, and it seems to lead junior devs horribly astray as often as it helps speed them up.
It feels like I’m not even looking at the same reality as everyone else at this point.
unmagical@lemmy.ml 1 day ago
I mean, “ought to be useful”? Sure, that would be nice. They ain’t, but perhaps they “ought to be.”
Serinus@lemmy.world 1 day ago
It’s useful for things I’d otherwise Google. It also makes a great ORM replacement when you know exactly what you want and the code is mostly mundane. And it’s so much better than pulling in a whole framework just for an ORM.
Feyd@programming.dev 1 day ago
I don’t like ORMs, but I’d rather use a battle tested ORM than some vibe coded data layer.
shalafi@lemmy.world 1 day ago
Last job, a year ago as of today, I worked at a small software dev shop. We were all talking about AI’s usefulness.
No one, not a soul from the CEO down, misunderstood the applications. Yeah, it’s great for getting over a hump. Stuck? Meh, maybe the LLM will kick out a useful path I hadn’t known or considered. Got around a problem with PowerShell and Google Calendar, made a neat integration, far faster than I could have figured it out myself. All I got was a couple of lines of code, all I needed, and I wrote the rest myself. Kinda like stealing code off any given site, but faster. Wonderful tool, really!
End of story. AI isn’t writing end-to-end working code, as some leaders think. It’s a damned useful tool for getting around roadblocks, so yeah, you can code faster. We were all amazed. But no one thought it was intelligent or would replace us. But in the end, we’ll need fewer dev hours. Sorry. That’s simply true for competent firms.
p03locke@lemmy.dbzer0.com 1 day ago
Yep, and the general public is too stubborn to accept a little thing like nuance. So are the CEO assholes who can’t stop openly talking about layoffs and replacing jobs.
Tja@programming.dev 21 hours ago
And all the boilerplate. And the test cases. And the CLI one liners. Maybe even small refactorings.
AI is amazing when used correctly.
Warl0k3@lemmy.world 1 day ago
AI coding tools are a great way to generate boilerplate, blat out repetitive structures and help with blank page syndrome. None of those mean you can ignore what they generate. They fit into the same niche as Stackoverflow - you can yoink all the 3rd party code snippets you want, but it’s going to be some work to get them running well even if you understand what they’re doing.
RushLana@lemmy.blahaj.zone 1 day ago
LLMs will always fail to help developers, because reviewing is harder than writing. To review code effectively you must know the how and why of the implementation in front of you, and LLMs will fail to provide you with the necessary context. On top of that, a good review evaluates the code in relation to the project and other goals the LLM will not be able to see.
The only use for LLM in coding is as an alternative search bar for stackoverflow
markz@suppo.fi 1 day ago
The only use for LLM in coding is as an alternative search bar for stackoverflow
I’d argue it can also be useful as a form of autocomplete, or writing whatever boilerplate code; that still isn’t outsourcing your thinking to the text predictor.
RushLana@lemmy.blahaj.zone 12 hours ago
When I tried the autocomplete in IntelliJ, it kept trying to guess what I wanted to do instead of autocompleting what I was typing, so I don’t know about that part.
Still, millions of tons of CO2 for a search bar and autocomplete doesn’t seem like a good idea.
Deflated0ne@lemmy.world 1 day ago
The miracle slop machine miraculously produces garbage.
Shocker.
AlecSadler@lemmy.blahaj.zone 1 day ago
I just see this as future job security.
Oh, AI fucked your codebase? Well, it’ll take twice as much time to undo it all and fix it, my rate is $150/hr. Thanks.
Tja@programming.dev 21 hours ago
That’s an awfully long time to do
git checkout HEAD~10
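For what it’s worth, the rollback really can be that short. A sketch (throwaway repo, made-up file and branch names) of stepping a branch back past a couple of bad AI commits while keeping them around on a side branch:

```shell
set -eu

# Work in a scratch repo so nothing real is at risk.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main .
git config user.email dev@example.com
git config user.name Dev

# One good human commit followed by two "AI refactor" commits.
echo "solid code" > app.txt
git add app.txt && git commit -qm "working feature"
echo "slop v1" > app.txt && git commit -qam "AI refactor 1"
echo "slop v2" > app.txt && git commit -qam "AI refactor 2"

# Park the AI attempt on a branch in case anything is salvageable...
git branch ai-attempt

# ...then move main back two commits, restoring the working tree.
git reset --hard HEAD~2
cat app.txt   # back to "solid code"
```

Unlike a bare `git checkout HEAD~10` (which detaches HEAD), `git reset --hard` actually moves the branch pointer, so the bad commits are off the branch for good.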
AlecSadler@lemmy.blahaj.zone 21 hours ago
Shhhh, they don’t need to know this.
JoMiran@lemmy.ml 1 day ago
TheBat@lemmy.world 1 day ago
Hey look, handmade NFT!
JoMiran@lemmy.ml 23 hours ago
This is a really old jpeg. If I remember correctly, it was meant as a protest joke against NFTs. The NFT claim was that each one was unique and couldn’t be copied or something; someone replied with this quick doodle on paper, saying that this was original, unique, and truly could not be copied, and he’d part with it for a measly $5,000,000.
Paraphrasing a lot, but that was the gist of it. I cropped it and added the text for the memes.
melsaskca@lemmy.ca 1 day ago
Geez louise, maybe it’s not intelligence after all. I think you’d need sentience to apply the word intelligence. Those wacko marketing people.
NaibofTabr@infosec.pub 1 day ago
There was trust?
DarkCloud@lemmy.world 1 day ago
But now you can spend 4 hours trying to get it to say the right thing and do it all in one output, whereas before it might take you an hour if you’re having a bad day.
Nalivai@lemmy.world 2 hours ago
There is an amazing quirk of LLMs: whenever I don’t know about a topic and refuse to google it, it gives me some useful answers, but if I ask it about something I know well, the answers are always stupid and wrong. I asked a computer about it, but it said that everything is normal and I should buy a better subscription, so there’s that.
chonglibloodsport@lemmy.world 2 hours ago
It’s not a quirk of LLMs, it’s a quirk of human cognitive biases.
See: Gell-Mann amnesia effect.