Comment on The air is hissing out of the overinflated AI balloon

AbelianGrape@beehaw.org 3 weeks ago
I've only tried a handful of times, but I've never been able to get an LLM to do a grunt refactoring task that didn't require me to rewrite all the output again anyway.

orca@orcas.enjoying.yachts 3 weeks ago
The trick is giving it tons of context. It also depends on the LLM. Claude has given me the most success.
DaRizat@piefed.social 3 weeks ago
You have to invest in setting it up for success. Give it really good context, feed it docs or other resources through MCP servers, and use a memory-bank pattern.
I just did a 30k contract where I hand-wrote probably 20% of the code, and probably 75% of my effort was just reviewing the diffs the LLM made, like a PR. But that doesn't mean I'm vibe coding: I feed it atomic operations and review each change as if it were a PR, so I come away understanding the totality of the code and can debug easily when things go wrong.
You can't just go "Here's my idea; make it." That will probably never work (even though that's the Kool-Aid being served), but if you're disciplined and make the most of the tools available, it can absolutely 3-5x your output as an engineer.
AbelianGrape@beehaw.org 3 weeks ago
The LLM in the most recent case had a monumental amount of context. I then gave it a file implementing a variant of a hash set, asked it to explain several of the functions (which it did correctly), and then asked it to convert it to a hash map implementation (an entirely trivial, grunt change).
It spat out the source code of the tree-based map implementation in the standard library.
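For readers wondering why that change counts as trivial: the comment doesn't say which language or codebase was involved, but as a rough sketch (hypothetical code, not the commenter's actual file), converting a chained hash set into a hash map is mostly mechanical — store `(key, value)` pairs in each bucket instead of bare keys, and thread a value through insert and lookup:

```python
# Hypothetical illustration of the set -> map conversion described above.
# Both structures hash the key to pick a bucket; only the bucket contents
# and the insert/lookup signatures change.

class HashSet:
    def __init__(self, capacity=16):
        self.buckets = [[] for _ in range(capacity)]

    def add(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        if key not in bucket:
            bucket.append(key)

    def contains(self, key):
        return key in self.buckets[hash(key) % len(self.buckets)]


class HashMap:
    def __init__(self, capacity=16):
        self.buckets = [[] for _ in range(capacity)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing entry
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        return default
```

The point of the anecdote stands either way: a change this mechanical is exactly the kind of grunt refactor an LLM should handle, yet it produced an unrelated tree-based implementation instead.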