For a very long time, people will also still need to understand what they are asking the machine to do. If you tell it to write code for an impossible concept, it can’t produce it. If you ask it to write code that does something incredibly inefficiently, it’s going to give you code that is incredibly inefficient.
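A rough sketch of what I mean (a made-up example, not actual model output): ask for a duplicate check “by comparing every pair of elements” and you’ll get the quadratic version; you have to know enough to ask for the linear one.

```python
# Hypothetical illustration: the code you get mirrors the approach you asked for.

def has_duplicates_pairwise(items):
    """O(n^2): compares every pair, exactly as (badly) requested."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_set(items):
    """O(n): tracks seen items in a set -- what you get if you know to ask."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both return the same answers; only the person prompting knows that one of them falls over on a million-element list.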
scarabic@lemmy.world 2 months ago
I’ve even seen human engineers’ code thrown out because no one else could understand it. Back in the day, one engineer took it upon himself to whip up a mobile version of the company’s very complex website. He did it as a side project. It worked. It was very fast. The code was completely unreadable by anyone else. We didn’t use it.
chknbwl@lemmy.world 2 months ago
I very much agree, thank you for indulging my question.
667@lemmy.radio 2 months ago
I used an LLM to write some code I knew I could write, but was a little too lazy to write myself. Coding is not my trade, but I did learn Python during the pandemic. Had I not known how to code, I would not have been able to direct the LLM to make the required corrections.
In the end, I got decent code that worked for the purpose I needed.
adespoton@lemmy.ca 2 months ago
I would not trust the current batch of LLMs to write proper docstrings and comments, because much of the code they are trained on does not have proper docstrings and comments.
And this means that it isn’t writing professional code.
It’s great for quickly generating useful and testable code snippets though.
GBU_28@lemm.ee 2 months ago
It can absolutely write a docstring for a provided function. That and unit tests are some of the easiest tasks for it, because it has the source code to work from.
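For example, hand it something like this (the function and test here are just my own illustrative sketch, not anything from the thread) and a docstring plus a basic unittest case is about the easiest ask there is:

```python
import unittest

def moving_average(values, window):
    """Return the moving averages of `values` over a sliding window.

    Args:
        values: A sequence of numbers.
        window: Window size; must be between 1 and len(values).

    Returns:
        A list with one average per full window position.

    Raises:
        ValueError: If `window` is out of range.
    """
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

class TestMovingAverage(unittest.TestCase):
    def test_basic_window(self):
        self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.5, 2.5, 3.5])

    def test_bad_window_raises(self):
        with self.assertRaises(ValueError):
            moving_average([1, 2], 0)

if __name__ == "__main__":
    unittest.main()
```

All the context it needs — the signature, the bounds check, the return shape — is right there in the source.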