Large language model AIs might seem smart on a surface level but they struggle to actually understand the real world and model it accurately, a new study finds.
Cue tech obsessive trying to defend or deflect from LLMs etc and their problems in 5…
Submitted 1 day ago by 14th_cylon@lemm.ee to technology@beehaw.org
What exactly is there to defend or deflect? LLMs are useful for some things and not for others; this is well known.
An important characteristic of a model is “stability.” Stability means that small changes in input produce small changes in output.
Stability is important for predictability. For instance, suppose you want to make a customer support portal. You add a bot hoping that it will guide the user to the desired workflow. You test the bot by asking it a bunch of variations of questions, probably with some RLHF. But then when it goes to production, people will start asking it variations of questions that you didn’t test (guaranteed). What you want ideally is that it will map the variants to the best workflow that matches what the customer wants. Second best would be to say “I don’t know.” But what we have are bots that will just generate some crazy off-the-wall crap, with no way to prevent it.
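The "map variants to the best workflow, else say I don't know" behavior described above can be sketched with a confidence-thresholded router. Everything here is a hypothetical illustration (the workflow names, example phrasings, and threshold are all made up, and a real system would use embeddings rather than string similarity), but it shows the key design choice: an unconfident match degrades to "I don't know" instead of a generated guess.

```python
from difflib import SequenceMatcher

# Hypothetical workflows, each with a few example phrasings we tested against.
KNOWN_INTENTS = {
    "reset_password": ["reset my password", "forgot password", "can't log in"],
    "billing": ["billing question", "charge on my card", "refund request"],
}

CONFIDENCE_THRESHOLD = 0.6  # below this, refuse rather than guess


def route(query: str) -> str:
    """Map a query to the best-matching workflow, or 'unknown' if nothing matches confidently."""
    best_intent, best_score = "unknown", 0.0
    for intent, examples in KNOWN_INTENTS.items():
        for example in examples:
            score = SequenceMatcher(None, query.lower(), example).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    # The stability property: a small variation on a tested phrasing still lands
    # on the same workflow, and anything far from the tested set falls back to
    # "unknown" instead of an off-the-wall answer.
    return best_intent if best_score >= CONFIDENCE_THRESHOLD else "unknown"
```

A generative bot without this kind of explicit fallback has no equivalent of the `"unknown"` branch, which is the gap the comment is pointing at.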
As such, it raises concerns that AI systems deployed in a real-world situation, say in a driverless car, could malfunction when presented with dynamic environments or tasks.
This is currently happening with driverless cars that use machine learning, so it goes beyond LLMs and is a general machine-learning issue. Last time I checked, Waymo cars needed human intervention every six miles. These cars often block each other, are confused by the simplest of obstacles, can’t reliably detect pedestrians, etc.
They’re just like me fr
“when we try to use a tape measure to hammer in nails it doesn’t really work, so tape measures are useless”
Nah, it’s either a skill issue, or your tape measure isn’t nearly chonky enough.
In other news, water is wet
simon574@feddit.org 2 hours ago
The headline is misleading. By “real-world use” they mean using ChatGPT and Claude for street navigation in New York. Which is one very specific use-case.
14th_cylon@lemm.ee 1 hour ago
street navigation is a quite primitive use case compared to what some others were suggesting (like firing the staff of a suicide hotline and replacing them with chatbots).
while machine learning can no doubt be a useful tool for many narrowly specified tasks, where all you need to do is evaluate a lot of data and find a pattern in it, the business behind it acts as if it had already invented AGI, and unfortunately it will keep pretending that and probably cause a lot of damage in the hunt for money.