I’m still in the “wow” phase, marveling at the reasoning and information it can give me. I’ve just started testing some programming assistance, which, from a few simple examples, seems to be fine (using free models for testing).
AI is fine with simple programming tasks, and I use it regularly to do a lot of basic blocking out of functions when I’m trying to get something working quickly. But once I get into a specialty or niche it just shits the bed.
For example, my job uses Oracle OCI to host a lot of stuff, and I’ve been working on deployment automation. The AI will regularly invent shit out of whole cloth, even knowing what framework I’m using, my normal style conventions, and a directive to validate all provided commands. I have literally had the stupid fuck invent a command out of thin air, then, after I tell it the command didn’t work, correct me that the command doesn’t exist and that I need to use some other command that doesn’t exist either. Or it gives me a wrong parameter list or something.
Hell, even in much more common AD management tasks it still makes shit up. Basic MS admin work is still too much for the AI to do on its own.
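One cheap defense against this failure mode is to mechanically sanity-check any AI-suggested command before trusting the rest of its output. A minimal POSIX-shell sketch; the `suggested` command line and the `cmd_exists` helper name are illustrative assumptions, not anything from the thread:

```shell
#!/bin/sh
# Reject an AI-suggested command line whose binary isn't even on PATH.

cmd_exists() {
  command -v "$1" >/dev/null 2>&1
}

suggested="ls -la /tmp"        # hypothetical AI-provided command line
first_word=${suggested%% *}    # binary name: everything before the first space

if cmd_exists "$first_word"; then
  echo "ok: $first_word found on PATH"
else
  echo "reject: $first_word not found" >&2
  exit 1
fi
```

This only catches wholly invented binaries, not invented subcommands or parameters; for CLIs like `oci`, running the suggested subcommand with `--help` is a similarly cheap existence check before executing anything for real.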
Brainsploosh@lemmy.world 2 weeks ago
It doesn’t reason, and it doesn’t actually know any information.
What it excels at is giving plausible-sounding averages of texts, and if you think about how little the average person knows, you should be appalled.
Also, where people can typically reason well enough to make an answer internally consistent, or at least relevant within a domain, LLMs offer a polished version of a disjointed amalgamation of all the platitudes and other commonly repeated phrases in the training data.
Basically, you can’t trust the information to be right, insightful, or even unpoisoned, and relying on it sabotages your strategies and systems for sifting information from noise.
TerdFerguson@lemmy.world 2 weeks ago
Average person… on the Internet
undeffeined@lemmy.ml 2 weeks ago
The less you know about how LLMs work, the more impressed you are by them. The clever use of the term AI seems like the culprit to me, since it will most likely evoke subconscious associations with the AI we have seen portrayed in entertainment.
LLMs can be useful tools when applied in restricted contexts and in the hands of specialists. This attempt to make them permeate every aspect of our lives is, in my honest opinion, insane.