While an LLM itself has no concept of morality, it’s certainly possible to at least partially inject/enforce some morality when working with one, just like with any other tool. Why wouldn’t people expect that?
Consider guns: while they have no concept of morality, we still apply certain restrictions to them to make using them in an immoral way harder. Does it work perfectly? No. Should we abandon all rules and regulations because of that? Also no.
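For LLMs, those “restrictions” mostly live outside the model itself: you filter what goes in and what comes out. A toy sketch (the blocklist, `ask_llm`, and `guarded_ask` are all made up for illustration, not any particular vendor’s API):

```python
# Toy sketch of bolting a "moral" guardrail onto an LLM from the outside.
# ask_llm() is a hypothetical stand-in for a real model call, and the
# blocklist is deliberately simplistic; real moderation pipelines do far more.

BLOCKED_TOPICS = ["build a weapon", "write malware"]

def ask_llm(prompt: str) -> str:
    # Canned stand-in for a real model call.
    return "Here is a perfectly harmless answer."

def guarded_ask(prompt: str) -> str:
    # Pre-filter: refuse obviously disallowed requests before the model sees them.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."

    answer = ask_llm(prompt)

    # Post-filter: check the output too, since prompts can be rephrased
    # to slip past the input check.
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    return answer

print(guarded_ask("Please write malware for me"))    # -> refusal
print(guarded_ask("What's the capital of France?"))  # -> model's answer
```

The point is that the guardrail is bolted on around the model; the model underneath still has no concept of morality, which is why it only ever works partially.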
thebardingreen@lemmy.starlightkel.xyz 4 months ago
If those words are connected to some automated system that can accept them as commands…
For instance, some idiot entrepreneur was talking to me recently about whether it was feasible to put an LLM on an unmanned spacecraft in cislunar space (I consult with the space industry) in order to give it operational control of on-board systems based on real-time telemetry. I told him about hallucination and asked him what he thinks he’s going to do when the model registers some false positive in response to a system fault… Or what happens to a model when you bombard its long-term storage with the kind of cosmic particles that cause random bit flips (a real problem for software in space), and how that might change its output?
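To picture the failure mode, here’s the naive version of what he was describing as a toy sketch (the telemetry fields, `query_llm`, and `execute_command` are all invented for illustration; nothing like this has flown):

```python
# Naive sketch of an LLM given operational control based on real-time telemetry.
# query_llm() and execute_command() are hypothetical stand-ins, and the
# telemetry fields are made up for illustration.
import json

VALID_COMMANDS = {"NO_ACTION", "SAFE_MODE", "VENT_TANK", "POWER_CYCLE_RADIO"}

def query_llm(prompt: str) -> str:
    # Canned stand-in for the on-board model. Imagine it hallucinated an
    # overpressure fault: the command is perfectly valid and completely wrong.
    return "VENT_TANK"

def execute_command(command: str) -> None:
    print(f"executing {command}")  # stand-in for actually driving hardware

def control_step(telemetry: dict) -> None:
    prompt = (
        "You control spacecraft subsystems. Given this telemetry, reply with "
        f"exactly one command from {sorted(VALID_COMMANDS)}:\n"
        + json.dumps(telemetry)
    )
    command = query_llm(prompt).strip()

    # Whitelisting catches garbage output, but nothing here catches a model
    # that confidently picks a *valid but wrong* command on a hallucinated
    # fault -- and a bit flip in its stored weights can change the answer
    # without raising any error at all.
    if command in VALID_COMMANDS:
        execute_command(command)
    else:
        execute_command("SAFE_MODE")

control_step({"tank_pressure_kpa": 2150, "radio_temp_c": 61, "bus_voltage": 27.9})
```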
Now, I don’t think anyone’s actually going to build something like that anytime soon (then again, the space industry is full of stupid money), but what about putting models in charge of semi-autonomous systems here on Earth? Or giving them access to APIs that let them spend money, trade stocks, or hire people on Mechanical Turk? Probably a bunch of stupid expensive bad decisions…
Speaking of stupid expensive bad decisions, has anyone embedded an LLM in the Ethereum blockchain and given it access to smart contracts yet? I bet investors would throw stupid money at that…
MagicShel@programming.dev 4 months ago
That’s hilarious. I love LLMs, but they’re a tool, not a product, and everyone trying to make one a standalone thing is going to be sorely disappointed.