AI = bad, I know, but do people order “water, please” instead of “a glass of water, please”?
WATER!
Submitted 1 day ago by flango@lemmy.eco.br to [deleted]
https://lemmy.eco.br/pictrs/image/4fe1e669-c5de-477d-9100-30e6bdf3c0e9.webp
Comments
tatann@lemmy.world 14 hours ago
Macaroni_ninja@lemmy.world 12 hours ago
I don’t know what restaurants you go to, but in my adult life “water, please” has not once failed to get me a glass of water.
Sometimes the waiter asks whether I want bottled or tap, but that’s about it.
warbond@lemmy.world 9 hours ago
I went to a restaurant in Dallas near the stadiums where asking for “water, please” got us a glass bottle of whatever rich-people water they served.
Everything was expensive and the food was super whatever. It’s called Soy Cowboy, if anybody’s curious.
tatann@lemmy.world 10 hours ago
Well, since it’s annoying for both the waiter and the customer to have to ask/specify every time whether it’s bottled or tap, I just always say it directly, and I’ve seen other people do the same.
While abroad many years ago, I asked for water in a restaurant and ended up with an $8 1 L bottle. I’m not making that mistake again.
xx3rawr@sh.itjust.works 6 hours ago
Valmond@lemmy.world 9 hours ago
Hey now, expensive French water is really good though! How expensive is it where you live? I get a big (75 cl?) bottle of San Pellegrino for like €3.50 here in France.
the_grass_trainer@lemmy.world 17 hours ago
I tried using Cursor IDE and Claude Sonnet 4 to make an extension for Blender, and it keeps getting to the exact same point (super basic functions) of development, then constantly breaking things when I try to get it to fine-tune what I need done… This comic is accurate af.
SlartyBartFast@sh.itjust.works 21 hours ago
Wouldn’t a waiter AI be trained on a dataset of food orders and hence know exactly what an order of water would be by the context?
SpaceCowboy@lemmy.ca 18 hours ago
Some days it will be, but other days it won’t. Most of the time it saves me typing because it does what I want. Sometimes (for similar tasks in the same context) it’s just completely off. Once it helpfully commented my code… in Korean.
LLMs are like a box of chocolates, you never know what you’re gonna get.
OmegaLemmy@discuss.online 1 day ago
That is a good depiction
swiftywizard@discuss.tchncs.de 23 hours ago
I just let it create a function in a temporary file that takes specific parameters because it always tries to scramble my project
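That isolated-function workflow keeps the blast radius small: ask for one pure function with an exact signature, nothing else, then copy it in by hand. A minimal sketch of what such a request might produce (the function name and behavior here are invented for illustration):

```typescript
// Hypothetical example of the "one function, specific parameters" workflow:
// a single pure function generated in its own temporary file, so the model
// never touches the rest of the project.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into dashes
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
}

console.log(slugify("Hello, World!")); // prints "hello-world"
```

Because it takes only its parameters and touches no project state, it can be reviewed and pasted in without the model scrambling anything else.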
interdimensionalmeme@lemmy.ml 1 day ago
Oh yes, give me an AI assistant; I will whisper sweet nothings and it will give me the moon. Your moon.
itkovian@lemmy.world 1 day ago
Haven’t used any coding LLMs. I honestly have no clue about the accuracy of the comic. Can anyone enlighten me?
Deceptichum@quokk.au 1 day ago
I use them frequently; they’re extremely helpful, just don’t get them to write everything.
As for the comic, it’s pretty inaccurate. The only panel I find true is the too-much-water one; sometimes the bots like to take… longer methods.
itkovian@lemmy.world 1 day ago
From what I understand of LLMs, your assessment seems likely to me. LLMs might actually be pretty accurate when asked to do relatively simple, shorter tasks.
Draces@lemmy.world 22 hours ago
The comic is only accurate if you expect it to do everything for you, you’re bad at communicating, and you’re using an old model. Or if you’re just unlucky.
SpaceCowboy@lemmy.ca 18 hours ago
Yeah, kinda. I ask it to do something simple like create a TypeScript interface for some JSON and it just gives me what I want… most of the time.
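For context, that kind of task really is tiny; given a JSON sample, the matching interface is a few lines. A sketch of what I mean (the field names here are invented, not from any real project):

```typescript
// Invented example: a JSON payload and the interface an LLM would be
// asked to generate for it.
interface UserProfile {
  id: number;
  name: string;
  tags: string[];
}

const raw = '{"id": 1, "name": "Ada", "tags": ["admin"]}';
const profile: UserProfile = JSON.parse(raw); // parse, then treat as typed
console.log(profile.name); // prints "Ada"
```

It’s a pure typing exercise with no logic involved, which is exactly why it’s frustrating when the model pads it out anyway.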
Other times it will explain to me what JSON is, what TypeScript is, what interfaces are and how they’re used, blah blah, and somewhere in there is the code I actually wanted. Once it helpfully commented the code… in Korean. Even when it works and comments things in English, the comments can be kinda useless since it doesn’t actually know what I’m doing.
It’s trying to give you what you want but can sometimes get confused about what you’re asking for and hand you a bunch of stuff you didn’t actually want. So yeah, the comic is accurate… on occasion. But many times LLMs give good results, and they’re getting better, so they’ll mostly work OK for simple requests. But yeah, sometimes you get a lot more than you asked for.
Skullgrid@lemmy.world 1 day ago
they suck
hotdogcharmer@lemmy.world 23 hours ago
I point blank refuse to use them. I’ve seen how they’ve affected my coworker and my boss - these two people now simply cannot read documentation, do not trust their own abilities to write code, and cannot debug anything that they write. My job has become more difficult since this shit started being pushed on us all.
RickyRigatoni@retrolemmy.com 23 hours ago
I use LLMs to write small scripts because I’m too lazy to learn bash and MS cmd and regex, and so far I have not ruined anything.
Clearwater@lemmy.world 23 hours ago
They’re okay for tasks that reasonably fit in a single file. I use them for simple Python scripts since they generally spit out something very similar to what I’d write, just faster. However, there’s a tipping point where a task becomes too complex; they fall on their face and it becomes faster to write the code yourself.
I’m never going to pay for AI, so I’m really just burning the AI company’s money as I do it, too.
lemming741@lemmy.world 1 day ago
Much like Amazon has an incentive to not show you the specific thing it knows you’re searching for, people theorize that these interfaces are designed to burn through your tokens.
AmbitiousProcess@piefed.social 1 day ago
I doubt that's the case, currently.
Right now, there's a lot of genuine competition in the AI space, so they're actually trying to outcompete one another for market share. It's only once users are locked into a particular service that the deliberate enshittification begins, with the purpose of extracting more money, either from token purchases, or like Google did when it deliberately made search quality worse so people would see more ads ("What are you gonna do, go to Bing?").
By contrast, if ChatGPT sucks, you can locally host a model, use one from Anthropic, Perplexity, any number of interfaces for open source (or at least, source-available) models like Deepseek, Llama, or Qwen, etc.
It's only once industry consolidation really starts taking place that we'll see things like deliberate measures to make people either spend more on tokens, or make money from things like injecting ads into responses.
chuckleslord@lemmy.world 23 hours ago
The only thing I trust it with is refactoring for readability and writing scripts. But I also despise LLMs, so that’s all I’d give them.