Large language model AIs might seem smart on a surface level but they struggle to actually understand the real world and model it accurately, a new study finds.
Cue tech obsessive trying to defend or deflect from LLMs etc and their problems in 5…
Submitted 4 weeks ago by 14th_cylon@lemm.ee to technology@beehaw.org
What exactly is there to defend or deflect? LLMs are useful for some things and not for others; this is well known.
Wow. How did you know?
😉
As such, it raises concerns that AI systems deployed in a real-world situation, say in a driverless car, could malfunction when presented with dynamic environments or tasks.
This is currently happening with driverless cars that use machine learning, so it goes beyond LLMs and is a general machine learning issue. Last time I checked, Waymo cars needed human intervention every six miles. These cars often block each other, get confused by the simplest of obstacles, can’t reliably detect pedestrians, etc.
“when we try to use a tape measure to hammer in nails it doesn’t really work, so tape measures are useless”
Nah, it’s either a skill issue, or your tape measure isn’t nearly chonky enough.
A few weeks back I got a parking ticket because I believed a Google search result. Parking is free on Sundays and holidays, but the city’s website doesn’t specify which holidays. Google insisted that Halloween is a holiday and thus parking is free, but Halloween isn’t actually a federally recognized holiday, which I found out the hard way.
In other news, water is wet
They’re just like me fr
The headline is misleading. By “real-world use” they mean using ChatGPT and Claude for street navigation in New York. Which is one very specific use case.
street navigation is quite a primitive use case compared to what some others were suggesting (like firing the staff of a suicide hotline and replacing them with chatbots).
while machine learning can no doubt be a useful tool for many narrowly specified tasks, where all you need to do is evaluate a lot of data and find patterns in it, the business behind it acts as if it had already invented AGI, and it will unfortunately keep pretending it has, probably causing a lot of damage in the hunt for money.
I agree there is a lot of marketing BS around LLMs right now. But I would argue that they are quite useful for basic language and coding tasks, for example, and at least for me those are real-world use cases too.
anachronist@midwest.social 4 weeks ago
An important characteristic of a model is “stability.” Stability means that small changes in input produce small changes in output.
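To put that in rough numerical terms (a toy sketch of my own, not anything from the study): a stable function maps nearby inputs to nearby outputs, while an unstable one can flip completely on a tiny perturbation.

```python
def stable(x: float) -> float:
    return 2 * x + 1                  # |change in output| = 2 * |change in input|

def unstable(x: float) -> float:
    return 1.0 if x >= 1.0 else -1.0  # jumps discontinuously at x = 1.0

eps = 1e-6
print(abs(stable(1.0) - stable(1.0 - eps)))    # ~2e-6: small change in, small change out
print(abs(unstable(1.0) - unstable(1.0 - eps)))  # 2.0: small change in, large change out
```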
Stability is important for predictability. For instance, suppose you want to make a customer support portal. You add a bot hoping that it will guide the user to the desired workflow. You test the bot by asking it a bunch of variations of questions, probably with some RLHF. But when it goes to production, people will start asking it variations of questions that you didn’t test (guaranteed). What you want, ideally, is that it maps each variant to the workflow that best matches what the customer wants. Second best would be for it to say “I don’t know.” But what we have are bots that will just generate some crazy off-the-wall crap, with no way to prevent it.
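Here’s a minimal sketch of that “second best” behavior: route the query to the closest known workflow, and below a similarity threshold, admit uncertainty instead of generating something. The `embed` function here is a toy bag-of-words stand-in (a real bot would use a proper sentence-embedding model), and the workflow names and threshold are made up for illustration.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a sentence encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical workflows, each with one example utterance.
WORKFLOWS = {
    "reset_password": "i forgot my password and need to reset it",
    "cancel_order": "i want to cancel my order",
    "refund": "i was charged twice and want a refund",
}

def route(query: str, threshold: float = 0.35) -> str:
    scores = {name: cosine(embed(query), embed(example))
              for name, example in WORKFLOWS.items()}
    best, score = max(scores.items(), key=lambda kv: kv[1])
    # Below the threshold, say "I don't know" rather than guess.
    return best if score >= threshold else "I don't know"

print(route("how do i reset my password"))  # -> reset_password
print(route("my package arrived broken"))   # -> I don't know
```

The point of the threshold is exactly the stability property above: an off-distribution query degrades gracefully to “I don’t know” instead of being mapped to some arbitrary workflow.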