Biggest threat to humanity
AGI achieved 🤖
Submitted 1 month ago by cyrano@lemmy.dbzer0.com to [deleted]
https://lemmy.dbzer0.com/pictrs/image/7efced45-504a-4177-a992-a5a2ce0e8b6f.webp
Comments
VirgilMastercard@reddthat.com 1 month ago
idiomaddict@lemmy.world 1 month ago
I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy
jaybone@lemmy.zip 1 month ago
And if it messed up on the other word, we could say because it’s pronounced Louisianer.
sp3ctr4l@lemmy.dbzer0.com 1 month ago
I was gonna say something similar, I have heard a LOT of people pronounce Mississippi as if it does have an R in it.
merc@sh.itjust.works 1 month ago
How do you pronounce “Mrs” so that there’s an “r” sound in it?
cyrano@lemmy.dbzer0.com 1 month ago
It is going to be funny, these implementations of LLMs in accounting software
RedstoneValley@sh.itjust.works 1 month ago
It’s funny how people always quickly point out that an LLM wasn’t made for this, and then continue to shill it for use cases it wasn’t made for either (The “intelligence” part of AI, for starters)
UnderpantsWeevil@lemmy.world 1 month ago
LLM wasn’t made for this
There’s a thought experiment that challenges the concept of cognition, called The Chinese Room. What it essentially postulates is a conversation between two people, one of whom is speaking Chinese and getting responses in Chinese. And the first speaker wonders “Does my conversation partner really understand what I’m saying or am I just getting elaborate stock answers from a big library of pre-defined replies?”
The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn’t analyzing the fundamental meaning of what I’m saying, it is simply mapping the words I’ve input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So “2” is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.
When you hear people complain about how the LLM “wasn’t made for this”, what they’re really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.
Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit because the developers did a dogshit job of sanitizing and rationalizing their library of data.
Imagine asking a librarian “What was happening in Los Angeles in the Summer of 1989?” and that person fetching you back a stack of history textbooks, a stack of Sci-Fi screenplays, a stack of regional newspapers, and a stack of Iron-Man comic books all given equal weight? Imagine hearing the plot of the Terminator and Escape from LA intercut with local elections and the Loma Prieta earthquake.
That’s modern LLMs in a nutshell.
jsomae@lemmy.ml 1 month ago
You’ve missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there’s a person there is irrelevant, and they could be replaced with a speaker or computer terminal.
Put differently, it’s not an indictment of LLMs that they are merely Chinese Rooms, but rather one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.
If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that because LLMs are just scaled-up autocomplete they surely can’t be very good at anything are not comforting to me at all.
shalafi@lemmy.world 1 month ago
You might just love Blindsight. Here, they’re trying to decide if an alien life form is sentient or a Chinese Room:
“Tell me more about your cousins,” Rorschach sent.
“Our cousins lie about the family tree,” Sascha replied, “with nieces and nephews and Neandertals. We do not like annoying cousins.”
“We’d like to know about this tree.”
Sascha muted the channel and gave us a look that said Could it be any more obvious? “It couldn’t have parsed that. There were three linguistic ambiguities in there. It just ignored them.”
“Well, it asked for clarification,” Bates pointed out.
“It asked a follow-up question. Different thing entirely.”
Bates was still out of the loop. Szpindel was starting to get it, though…
RedstoneValley@sh.itjust.works 1 month ago
That’s a very long answer to my snarky little comment :) I appreciate it though. Personally, I find LLMs interesting and I’ve spent quite a while playing with them. But after all they are like you described, an interconnected catalogue of random stuff, with some hallucinations to fill the gaps. They are NOT a reliable source of information or general knowledge or even safe to use as an “assistant”. The marketing of LLMs as being fit for such purposes is the problem. Humans tend to turn off their brains and to blindly trust technology, and the tech companies are encouraging them to do so by making false promises.
frostysauce@lemmy.world 1 month ago
(damn, wish we had a tool that did exactly this back in August of 1996, amirite?)
Wait, what was going on in August of '96?
outhouseperilous@lemmy.dbzer0.com 1 month ago
Yes, but have you considered that it agreed with me, so now I need to defend it to the death against you horrible apes, no matter the allegation or terrain?
Knock_Knock_Lemmy_In@lemmy.world 1 month ago
a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately
The human approach could be to write a (python) program to count the number of characters precisely.
When people refer to agents, is this what they are supposed to be doing? Is it done in a generic fashion or will it fall over with complexity?
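Roughly, yes. An agent-style setup lets the model call out to ordinary code instead of answering from token statistics alone. As a minimal, purely illustrative sketch (not any particular vendor’s API), the tool the harness exposes could be as dumb as this:

```python
# Exact letter counting is trivial in ordinary code, which is why
# tool use sidesteps the tokenization problem entirely.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # -> 3
```

Whether it holds up under complexity depends less on the tool and more on the model reliably deciding when to reach for it.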
merc@sh.itjust.works 1 month ago
Imagine asking a librarian “What was happening in Los Angeles in the Summer of 1989?” and that person fetching you … That’s modern LLMs in a nutshell.
I agree, but I think you’re still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.
IMO, one of the key ideas with the Chinese Room is that there’s an assumption that the computer / book in the Chinese Room experiment has infinite capacity in some way. So, no matter what symbols are passed to it, it can come up with an appropriate response. But, obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be “fooled” when they’re given input that is semantically similar to a meme, joke or logic puzzle. The vast majority of the training data that matches the input is the meme, or joke, or logic puzzle. LLMs can’t reason, so they can’t distinguish between “this is just a rephrasing of that meme” and “this is similar to that meme but distinct in an important way”.
Leet@lemmy.zip 5 weeks ago
Can we say for certain that human brains aren’t sophisticated Chinese rooms…
REDACTED@infosec.pub 1 month ago
There are different types of artificial intelligence. Counter-Strike 1.6 bots, by definition, were AI. They even used learning algorithms to figure out new maps.
ouRKaoS@lemmy.today 1 month ago
If you want an even older example, the ghosts in Pac-Man could be considered AI as well.
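They arguably were, in the classic game-AI sense: a few deterministic targeting rules that read as intelligent pursuit. A toy sketch of the idea, heavily simplified from the real arcade logic (which also forbids reversing and special-cases certain tiles):

```python
from math import hypot

# Simplified Blinky-style chase rule: at each step, pick the move that
# minimizes straight-line distance to the player's position.
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def ghost_step(ghost: tuple[int, int], player: tuple[int, int]) -> str:
    gx, gy = ghost
    px, py = player
    return min(MOVES, key=lambda m: hypot(gx + MOVES[m][0] - px,
                                          gy + MOVES[m][1] - py))

print(ghost_step((5, 5), (10, 5)))  # -> "right"
```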
BarrelAgedBoredom@lemm.ee 1 month ago
It’s marketed like it’s AGI, so we should treat it like AGI to show that it isn’t AGI. Lots of people buy the bullshit.
Knock_Knock_Lemmy_In@lemmy.world 1 month ago
AGI is only a benchmark because it gets OpenAI out of a contract with Microsoft when it occurs.
merc@sh.itjust.works 1 month ago
You can even drop the “a” and “g”. There isn’t even “intelligence” here. It’s not thinking, it’s just spicy autocomplete.
merc@sh.itjust.works 1 month ago
then continue to shill it for use cases it wasn’t made for either
The only thing it was made for is “spicy autocomplete”.
jsomae@lemmy.ml 5 weeks ago
Turns out spicy autocomplete can contribute to the bottom line. Capitalism :(
SoftestSapphic@lemmy.world 1 month ago
Maybe they should call it what it is:
Machine Learning algorithms from 1990 repackaged and sold to us by marketing teams.
outhouseperilous@lemmy.dbzer0.com 1 month ago
Hey now, that’s unfair and queerphobic.
These models are from 1950, with juiced-up data sets. Alan Turing personally did a lot of work on them, before he cracked the math and figured out they were shit and would always be shit.
jsomae@lemmy.ml 1 month ago
Machine learning algorithm from 2017, scaled up a few orders of magnitude so that it finally more or less works, then repackaged and sold by marketing teams.
Gladaed@feddit.org 1 month ago
Fair point, but a big part of “intelligence” tasks is memorization.
BussyCat@lemmy.world 1 month ago
Computers, for all intents and purposes, have perfect recall, so a model trained on a large data set can look like it has much better intelligence. But in reality, what we consider intelligence is extrapolating from existing knowledge, which is what “AI” has shown to be pretty shit at.
outhouseperilous@lemmy.dbzer0.com 1 month ago
I would say more “blackpilling”; I genuinely don’t believe most humans are people anymore.
besselj@lemmy.ca 1 month ago
burgerpocalyse@lemmy.world 1 month ago
teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the
Emi@ani.social 1 month ago
The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end
sp3ctr4l@lemmy.dbzer0.com 1 month ago
weamwork is my new favorite word, ahahah!
kungen@feddit.nu 1 month ago
You’re asking about a double-U. Double means two. I think the AI reasoned completely correctly.
ICastFist@programming.dev 1 month ago
Now ask how many asses there are in assassinations
notdoingshittoday@lemmy.zip 1 month ago
LodeMike@lemmy.today 1 month ago
Man AI is ass at this
*laugh track*
rumba@lemmy.zip 1 month ago
Rin@lemm.ee 1 month ago
UrPartnerInCrime@sh.itjust.works 1 month ago
cashsky@sh.itjust.works 5 weeks ago
What is that font bro…
UrPartnerInCrime@sh.itjust.works 5 weeks ago
It’s called Sweetpea and my sweetpea picked it out for me. How dare I stick with something my girl picked out for me.
But the fact that you actually care what font someone else uses is sad
lordbritishbusiness@lemmy.world 5 weeks ago
One of the interesting things I notice about the ‘reasoning’ models is their responses to questions occasionally include what my monkey brain perceives as ‘sass’.
I wonder sometimes if they recognise the trivialness of some of the prompts they answer, and subtly throw shade.
One’s going to respond to this with ‘clever monkey! 🐒 Have a banana 🍌.’
ynthrepic@lemmy.world 5 weeks ago
Nice Rs.
nyamlae@lemmy.world 5 weeks ago
Is this ChatGPT o3-pro?
qx128@lemmy.world 1 month ago
I really like checking these myself to make sure it’s true. I WAS NOT DISAPPOINTED!
(Total Rs is 8. But the LOGIC ChatGPT pulls out is ……. remarkable!)
Zacryon@feddit.org 1 month ago
“Let me know if you’d like help counting letters in any other fun words!”
Oh well, these newish engagement hooks sure reach ridiculous extremes sometimes.
scholar@lemmy.world 1 month ago
ipitco@lemmy.super.ynh.fr 1 month ago
Try with o4-mini-high. It’s made to reason more like a human, checking its answer and working step by step, rather than just kinda guessing one like here.
AnUnusualRelic@lemmy.world 1 month ago
What is this devilry?
LanguageIsCool@lemmy.world 1 month ago
How many times do I have to spell it out for you chargpt? S-T-R-A-R-W-B-E-R-R-Y-R
MrLLM@ani.social 1 month ago
We gotta raise the bar, so they keep struggling to make it “better”
My attempt
0000000000000000
0000011111000000
0000111111111000
0000111111100000
0001111111111000
0001111111111100
0001111111111000
0000011111110000
0000111111000000
0001111111100000
0001111111100000
0001111111100000
0001111111100000
0000111111000000
0000011110000000
0000011110000000
Btw, I refuse to give my money to AI bros, so I don’t have the “latest and greatest”

ipitco@lemmy.super.ynh.fr 1 month ago
Tested on ChatGPT o4-mini-high
0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0
0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 0
0 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0
1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0
It sent me this
Korhaka@sopuli.xyz 1 month ago
I asked it how many Ts are in names of presidents since 2000. It said 4 and stated that “Obama” contains 1 T.
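For reference, the real count is a one-liner to verify (a quick sketch; assuming the question covers Bush, Obama, Trump, and Biden):

```python
# Count the letter T (case-insensitive) across presidents' names since 2000.
presidents = ["George W. Bush", "Barack Obama", "Donald Trump", "Joe Biden"]
total = sum(name.lower().count("t") for name in presidents)
print(total)  # -> 1: only "Trump" contains a T, and "Obama" has none
```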
TheOakTree@lemm.ee 5 weeks ago
Toebama
DmMacniel@feddit.org 1 month ago
We are fecking doomed!
jsomae@lemmy.ml 1 month ago
People who think that LLMs having trouble with these questions is evidence one way or another about how good or bad LLMs are just don’t understand tokenization. This is not a big-picture problem that indicates LLMs are deeply incapable. You may hate AI, but that doesn’t excuse being ignorant about how it works.
untorquer@lemmy.world 1 month ago
These sorts of artifacts wouldn’t be a huge issue, except that AI is being pushed to the general public as an alternative means of learning basic information. The meme example is obvious to someone with a strong understanding of English, but learners and children might get an artifact and stamp it in their memory, working for years off bad information. A few false things every now and then aren’t a problem; that’s unavoidable in learning. Accumulate thousands over long-term use, however, and your understanding of the world grows coarser, like Swiss cheese with voids so large it can’t hold itself up.
abfarid@startrek.website 1 month ago
I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn’t even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words contain which letters from texts describing those words, but normally that shouldn’t be expected.
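You can see the token boundaries for yourself; a small sketch, assuming OpenAI’s tiktoken library is installed (pip install tiktoken):

```python
import tiktoken

# The model never sees letters, only integer IDs for sub-word chunks.
enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
print(ids)  # a short list of integers, not letters
# The chunks behind those IDs (the exact split depends on the tokenizer):
print([enc.decode_single_token_bytes(i) for i in ids])
```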
loomy@lemy.lol 1 month ago
I don’t get it
bitjunkie@lemmy.world 5 weeks ago
Deep reasoning is not needed to count to 3.
LMurch@thelemmy.club 1 month ago
AI is amazing, we’re so fucked.
/s
sheetzoos@lemmy.world 5 weeks ago
Honey, AI just did something new. It’s time to move the goalposts again.
hornyalt@lemmynsfw.com 1 month ago
“A guy instead”
jsomae@lemmy.ml 5 weeks ago
When we see LLMs struggling to demonstrate an understanding of what letters are in each of the tokens that it emits or understand a word when there are spaces between each letter, we should compare it to a human struggling to understand a word written in IPA format (/sʌtʃ əz ðɪs/) even though we can understand the word spoken aloud perfectly fine.
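The same effect is easy to demonstrate directly; another small sketch, again assuming tiktoken:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
# Spacing the letters out yields a completely different token sequence,
# roughly one token per letter instead of a few sub-word chunks.
print(len(enc.encode("strawberry")))           # a handful of tokens
print(len(enc.encode("s t r a w b e r r y")))  # many more tokens
```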
Echo5@lemmy.world 5 weeks ago
Maybe OP was low on the priority list for computing power? Idk how this stuff works
slaacaa@lemmy.world 1 month ago
Singularity is here
ZILtoid1991@lemmy.world 5 weeks ago
Reality:
The AI was trained to answer 3 to this question correctly.
Wait until the AI gets burned on a different question. Skeptics will rightfully use it to criticize LLMs for just being stochastic parrots, until LLM developers teach their models to answer it correctly, and then the AI bros will use it as proof of it becoming “more and more human-like”.
RizzoTheSmall@lemm.ee 1 month ago
o3-pro? Damn, that’s an expensive goof
cyrano@lemmy.dbzer0.com 1 month ago
Next step: how many r’s in Lollapalooza
Image
sexy_peach@feddit.org 1 month ago
Incredible
And009@lemmynsfw.com 1 month ago
AGI lost
Qwazpoi@lemmy.world 1 month ago
Image
cyrano@lemmy.dbzer0.com 1 month ago
Tried it with o3, maybe it needs time to think 😝
eager_eagle@lemmy.world 1 month ago
which model is it? I have a similar answer with 3.5, but 4o replies correctly
Image
altkey@lemmy.dbzer0.com 1 month ago
Apparently, this robot is Japanese.
jballs@sh.itjust.works 1 month ago
I’m going to hell for laughing at that
sp3ctr4l@lemmy.dbzer0.com 1 month ago
Obligatory ‘lore dump’ on the word lollapalooza:
That word was a common term in 1930s/40s American lingo that meant… essentially a very raucous, lively party.
Note/Rant on the meaning of this term
The current Merriam-Webster and dictionary.com definitions of this term, meaning ‘an outstanding or exceptional or extreme thing’, are wrong; they are too broad. While historical usage varied, it almost always appeared as a noun describing a gathering of many people, one so lively or spectacular that you would be exhausted after attending it. When it did not appear as a noun describing a lively party, it appeared as a term for some kind of action that would leave you bamboozled or discombobulated… similar to ‘that was a real humdinger of a blahblah’ or ‘that blahblah was a real doozy’.
So… in WW2, in the Pacific theatre… many US Marines were often engaged in brutal, jungle combat, and they adopted a system of basically verbal identification challenge checks if they noticed someone creeping up on their foxholes at night.
An example of this system used in the European theatre, I believe by the 101st and 82nd airborne, was the challenge ‘Thunder!’ to which the correct response was ‘Flash!’.
In the Pacific theatre… the Marines adopted a challenge / response system… where the correct response was ‘Lolapalooza’…
Because native-born Japanese speakers are taught a phoneme that is roughly in between an ‘r’ and an ‘l’… and they very often struggle to say ‘Lolapalooza’ without a very noticeable accent, unless they’ve also spent a good deal of time learning spoken English (or some other language with distinct ‘l’ and ‘r’ phonemes), which very few Japanese had in the 1940s.
::: racist and nsfw historical example of this
www.ep.tc/howtospotajap/howto06.html
:::
Now, some people will say this is a total myth, others will say it is not.
My Grandpa, who served in the Pacific Theatre during WW2, told me it did happen, though he was Navy and not a Marine… but the stories I’ve always heard that say it did happen all say it happened with the Marines.
My Grandpa is also another source for what ‘lolapalooza’ actually means.
resipsaloquitur@lemmy.world 1 month ago
en.wikipedia.org/wiki/Shibboleth
I’ve heard “squirrel” was used to trap Germans.
ICastFist@programming.dev 1 month ago
It does make sense to use a phoneme the enemy dialect lacks as a verbal check. Makes me wonder if there were any in the Pacific Theatre that went with “Lick” and “Lollipop”.
altkey@lemmy.dbzer0.com 1 month ago
I’m still puzzled by what a mess this war must have been if, at times, you had someone still not clearly identifiable, yet close enough that you could run a shibboleth check on them, while at any moment either of you could be shot dead.
Also, the current conflict of Russia vs Ukraine seems to have produced the Ukrainian ‘паляница’ as a check, but as I have no connection to actual Ukrainians or their UAF, I can’t say whether that’s entirely localized to the internet.
cyrano@lemmy.dbzer0.com 1 month ago
Thanks for sharing
don@lemm.ee 1 month ago
u delet ur account rn
Mwa@thelemmy.club 1 month ago
Image
With reasoning (this is Qwen on HuggingChat; it says there are zero)