I expect so; it wasn’t a user, just a random passer-by throwing stones on their own personal crusade. The project only has two major contributors, who are now being harassed in the issues for the choices they make about how to run their project.
Someone might fork it and continue with purely artisanal, human-crafted code, but such forks tend to die off in the long run.
tonytins@pawb.social 2 weeks ago
I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.
Scrollone@feddit.it 2 weeks ago
Yeah, I mean, it’s not like AI can think. It’s just a glorified text predictor, the same as the one on your phone keyboard.
yucandu@lemmy.world 2 weeks ago
It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.
daikiki@lemmy.world 2 weeks ago
Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet; if you run it at home, you’re still using a lot of increasingly unaffordable power; and if you want something smarter than the average American politician, the upfront investment is still very significant.
BackgrndNoize@lemmy.world 2 weeks ago
Not even free, just cheaper than an actual employee for now. But greed is inevitable, and AI is computationally expensive; it’s only a matter of time before these AI companies start cranking up the prices.
Vlyn@lemmy.zip 2 weeks ago
You might genuinely be using it wrong.
At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly set up.
Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output; that will produce a CLAUDE.md file that describes your project (which always gets added to your context).
Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.
Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (but just adding that info to the project context file is enough).
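For anyone who hasn’t run /init: the CLAUDE.md it generates is just a markdown summary of the repo that gets pulled into the model’s context on every session. A minimal sketch of what one might look like (the project name, paths, and conventions here are made up for illustration; the exact sections /init emits vary by project):

```markdown
# MyService (hypothetical project)

## Overview
ASP.NET Core web API targeting .NET 10.
Entry point: src/MyService/Program.cs.

## Build & test
- Build: `dotnet build`
- Tests: `dotnet test` (xUnit, under tests/)

## Conventions
- Nullable reference types enabled; warnings treated as errors.
- Prefer minimal APIs over controllers for new endpoints.
```

The point of keeping notes like “we’re on .NET 10 now” in this file is exactly the nudge described above: the model reads it before every task, so you correct it once instead of in every prompt.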
p03locke@lemmy.dbzer0.com 2 weeks ago
Agreed. I don’t understand people not even giving it a chance. They try it for five minutes, it doesn’t do exactly what they want, they give up on it, and then shout about how shit it is.
Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.
It’s like handing your 90-year-old grandpa the Internet, and they don’t know what the fuck to do with it. It’s so infuriating.
Vlyn@lemmy.zip 2 weeks ago
It’s not really that simple. Yes, it’s a great tool when it works, but in the end it boils down to being a text prediction machine.
So it’s a nice helper to throw shit at, but I trust the output about as much as a random Stack Overflow reply with no votes :)
moseschrute@lemmy.world 2 weeks ago
Most people on Lemmy probably haven’t given it a single minute let alone 5 minutes.
Zos_Kia@jlai.lu 2 weeks ago
Just yesterday I had one of those moments of grace that are becoming commonplace.
Basically I have to migrate a service from an n8n workflow to an actual Node.js server for performance reasons. I spent 15 minutes carefully scoping the migration, telling it exactly which tools to use and which code style to adopt, then gave it the original brief and access to the n8n workflows.
The whole thing was done in 4 minutes and 30 seconds. It even noticed a bug which had been in production unnoticed for the past year. It gave me some good documentation on how to set up the Google service account, plus the kind of memory usage to expect so I can dimension the instance accordingly. Another five minutes and I had a whole test suite with decent coverage. I had negotiated with the client that it would take around a week, so that was the under-promise of the year…
People who go around saying it doesn’t work are incompetent, out of their minds, or straight up lying.
CompassRed@discuss.tchncs.de 2 weeks ago
The symptoms you describe are caused by bad prompting. If an AI is providing over-complicated solutions, 9 times out of 10 it’s because you didn’t constrain your problem enough. If it’s referencing tools that don’t exist, then you either haven’t specified which tools are acceptable or you haven’t provided the context required for it to find the tools. You may also be wanting too much out of AI. You can’t expect it to do everything for you. You still have to do almost all the thinking and engineering if you want a quality project - the AI is just there to write the code. Sure, you can use an AI to help you learn how to be a better engineer, but AIs typically don’t make good high-level decisions. Treat AI like an intern, not like a principal engineer.
oneofmany@lemmy.world 2 weeks ago
“It can’t be that stupid, you must be prompting it wrong.”
CompassRed@discuss.tchncs.de 2 weeks ago
It’s not about stupid or smart. It’s a tool, not a person. If you don’t get the same results that other people get with the same tool, then what could possibly be the problem other than how the person is using the tool?
Bronzebeard@lemmy.zip 2 weeks ago
“it’s your fault that it just made up tools that don’t exist” is a bold statement, bro.
CompassRed@discuss.tchncs.de 2 weeks ago
No, it’s not. It doesn’t have intention. It’s literally just a tool. If you don’t get the results you expect with a tool when other people do get those results, then the problem isn’t the tool.
Zos_Kia@jlai.lu 2 weeks ago
The junior analogy comes to mind. If you hire a fresh face and they ship code that doesn’t work, it’s definitely on you, bro.
Fatal@piefed.social 2 weeks ago
At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a modern problem.
Zos_Kia@jlai.lu 2 weeks ago
I don’t know what they are using, because all agents routinely do that. I suspect they are fibbing, or tested things out in 2024 and never updated their opinion.
yucandu@lemmy.world 2 weeks ago
I create custom embedded devices with displays, and I’ve found it very useful for laying things out. Like asking it to take per-second wind speed and direction updates and build a wind rose out of them, with colored sections in each petal denoting the speed… It makes mistakes, but then you just go back and iterate on those mistakes. I’m able to do so much more, so much faster.
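The wind-rose layout described above boils down to a small binning step: each (speed, direction) sample lands in a compass petal and a speed band, and the per-band counts become the colored segments of that petal. A minimal Python sketch of that step (the function name and default speed bands are my own choices, not from any particular library or from the device code being described):

```python
from collections import defaultdict

def build_wind_rose(samples, n_petals=8, speed_bins=(2, 5, 10)):
    """Bin (speed, direction) samples into wind-rose petals.

    samples: iterable of (speed_m_s, direction_deg) tuples.
    Returns {petal_index: [count_per_speed_band, ...]}, where each
    petal covers 360/n_petals degrees and the speed bands are
    [0, 2), [2, 5), [5, 10), [10, inf) for the default bins.
    """
    petal_width = 360 / n_petals
    rose = defaultdict(lambda: [0] * (len(speed_bins) + 1))
    for speed, direction in samples:
        petal = int((direction % 360) // petal_width)   # which compass sector
        band = sum(speed >= b for b in speed_bins)       # which speed band
        rose[petal][band] += 1
    return dict(rose)

# Four samples: calm and moderate wind from the north, a gust from
# the NNW, moderate wind from the east.
rose = build_wind_rose([(1, 10), (3, 10), (12, 350), (6, 90)])
# → {0: [1, 1, 0, 0], 7: [0, 0, 0, 1], 2: [0, 0, 1, 0]}
```

Rendering is then just drawing each petal as a stacked arc whose segment lengths (and colors) come from those per-band counts, which is the kind of boilerplate an LLM tends to handle well once the binning is pinned down.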
aloofPenguin@piefed.world 2 weeks ago
I had the same experience. I asked a local LLM about using solely Qt Wayland stuff for keyboard input. The only documentation was the official one (which wasn’t a lot for a noob), there were no examples of it being used online, and all my attempts at making it work had failed. It hallucinated some functions that didn’t exist, even when I let it do web search (not via my browser). This was a few years ago.
p03locke@lemmy.dbzer0.com 2 weeks ago
That’s 50 years in LLM terms. You might as well have been banging two rocks together.
Bronzebeard@lemmy.zip 2 weeks ago
Yeah, now we’re in the iron age!
…Where we get to bang two ingots together