There are different types of artificial intelligence. Counter-Strike 1.6 bots, by definition, were AI. They even learned to navigate new maps on their own.
Comment on AGI achieved 🤔
RedstoneValley@sh.itjust.works 4 days ago
It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (the "intelligence" part of AI, for starters)
REDACTED@infosec.pub 4 days ago
ouRKaoS@lemmy.today 3 days ago
If you want an even older example, the ghosts in Pac-Man could be considered AI as well.
SoftestSapphic@lemmy.world 3 days ago
By this logic any finite-state machine is AI.
These words used to mean things before marketing teams started calling everything they want to sell "AI"
SparroHawc@lemmy.zip 3 days ago
No. Artificial intelligence has to be imitating intelligent behavior, such as the ghosts imitating how, ostensibly, a ghost trapped in a maze and hungry for yellow circular flesh would behave, and how CS 1.6 bots imitate the behavior of intelligent players. They artificially reproduce intelligent behavior.
Which means LLMs are very much AI. They are not, however, AGI.
outhouseperilous@lemmy.dbzer0.com 2 days ago
Yes, but then we built a weapon with which to murder truth, and with it meaning, so everything is just vibesy meaning-mush now. And you're a big dumb meanie for hating the thing that saved us from having/being able to know things.
BarrelAgedBoredom@lemm.ee 4 days ago
It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.
Knock_Knock_Lemmy_In@lemmy.world 3 days ago
AGI is only a benchmark because achieving it gets OpenAI out of its contract with Microsoft.
merc@sh.itjust.works 3 days ago
You can even drop the "a" and "g". There isn't even "intelligence" here. It's not thinking, it's just spicy autocomplete.
outhouseperilous@lemmy.dbzer0.com 2 days ago
Barely even spicy.
merc@sh.itjust.works 3 days ago
> then continue to shill it for use cases it wasn't made for either
The only thing it was made for is "spicy autocomplete".
jsomae@lemmy.ml 2 days ago
Turns out spicy autocomplete can contribute to the bottom line. Capitalism :(
merc@sh.itjust.works 2 days ago
So could tulip bulbs, for a while.
SoftestSapphic@lemmy.world 3 days ago
Maybe they should call it what it is:
Machine learning algorithms from 1990, repackaged and sold to us by marketing teams.
outhouseperilous@lemmy.dbzer0.com 2 days ago
Hey now, that's unfair and queerphobic.
These models are from 1950, with juiced-up data sets. Alan Turing personally did a lot of work on them, before he cracked the math and figured out they were shit and would always be shit.
SoftestSapphic@lemmy.world 2 days ago
Fair lol
Alan Turing was the GOAT
RIP my beautiful prince
outhouseperilous@lemmy.dbzer0.com 2 days ago
Also, thank you for being basically a person. This topic does a lot to convince me those aren't a thing.
outhouseperilous@lemmy.dbzer0.com 2 days ago
His politics weren't perfect, but he got more Nazis killed than a lot of people with much worse takes, and he was a genuinely brilliant, reasonably ethical contributor to a lot of cool shit that should have fucking stayed cool.
jsomae@lemmy.ml 3 days ago
Machine learning algorithm from 2017, scaled up a few orders of magnitude so that it finally more or less works, then repackaged and sold by marketing teams.
SoftestSapphic@lemmy.world 3 days ago
Adding weights doesn't make it a fundamentally different algorithm.
We have hit a wall where these programs have combed over the totality of the internet and all available datasets and texts in existence.
We're done here until there's a fundamentally new approach that isn't repetitive training.
jsomae@lemmy.ml 3 days ago
Transformers were pretty novel in 2017; that's the year the architecture was introduced ("Attention Is All You Need"), so they weren't really around before that.
Anyway, I'm doubtful that a larger corpus is what's needed at this point. (Though, that said, there's a lot more text remaining in instant-messenger chat logs like Discord that probably has yet to be integrated into LLMs. Not sure.) I'm also doubtful that scaling up is going to keep working, but it wouldn't surprise me that much if it does keep working for a long while. My guess is that there are some small tweaks to be discovered that really improve things a lot but still basically look like repetitive training, as you put it.
outhouseperilous@lemmy.dbzer0.com 2 days ago
Okay, but have you considered that if we just reduce human intelligence enough, we can still maybe get these things equivalent to human-level intelligence, or slightly above?
We have the technology.
Gladaed@feddit.org 4 days ago
Fair point, but a big part of "intelligence" tasks is memorization.
BussyCat@lemmy.world 4 days ago
Computers, for all intents and purposes, have perfect recall, so since it was trained on a large data set it would have much better intelligence. But in reality, what we consider intelligence is extrapolating from existing knowledge, which is what "AI" has shown to be pretty shit at.
Gladaed@feddit.org 3 days ago
They don't. They can save information on drives, but searching is expensive, and fuzzy search is a mystery.
Just because you can save an MP3 without losing data does not mean you can save the entire Internet in 400 GB and search it in an instant.
BussyCat@lemmy.world 3 days ago
Which is why it doesn't search in an instant, uses a bunch of energy, and has to rely on evaporative cooling to keep the servers from overheating.
outhouseperilous@lemmy.dbzer0.com 2 days ago
I would say more "blackpilling"; I genuinely don't believe most humans are people anymore.
UnderpantsWeevil@lemmy.world 3 days ago
There's a thought experiment that challenges the concept of cognition, called the Chinese Room. What it essentially postulates is a conversation between two people, one of whom is speaking Chinese and getting responses in Chinese. And the first speaker wonders, "Does my conversation partner really understand what I'm saying, or am I just getting elaborate stock answers from a big library of pre-defined replies?"
The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying; it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine, designed to handle this basic input question, could have come up with the answer faster and more accurately.
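That "simpler and dumber machine" can be a single line of Python; a minimal sketch:

```python
# Plain string counting: deterministic, instant, and always right.
print("strawberry".count("r"))  # -> 3
```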
When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They built a glorified card catalog: a device that can only take inputs, feed them through a massive library of responses, and sift out the highest-probability answer without actually knowing what the inputs or outputs signify cognitively.
Even if you want to argue that having a natural-language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit, because the developers did a dogshit job of sanitizing and rationalizing their library of data.
Imagine asking a librarian "What was happening in Los Angeles in the summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of sci-fi screenplays, a stack of regional newspapers, and a stack of Iron Man comic books, all given equal weight. Imagine hearing the plot of The Terminator and Escape from L.A. intercut with local elections and the Loma Prieta earthquake.
That's modern LLMs in a nutshell.
jsomae@lemmy.ml 3 days ago
You've missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there's a person there is irrelevant, and they could be replaced with a speaker or computer terminal.
Put differently, it's not an indictment of LLMs that they are merely Chinese Rooms, but rather one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.
If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that because LLMs are just scaled-up autocomplete they surely can't be very good at anything are not comforting to me at all.
kassiopaea@lemmy.blahaj.zone 3 days ago
This. I often see people shitting on AI as "fancy autocomplete", or joking about how they get basic things wrong like in this post, but completely discounting how incredibly fucking capable they are in every domain that actually matters. That's what we should be worried about… what does it matter that it doesn't "work the same" if it still accomplishes the vast majority of the same things? The fact that we can get something that even approximates logic and reasoning ability from a deterministic system is terrifying on implications alone.
Knock_Knock_Lemmy_In@lemmy.world 3 days ago
Why doesn't the LLM know to write (and run) a program to calculate the number of characters?
I feel like I'm missing something fundamental.
UnderpantsWeevil@lemmy.world 3 days ago
I'd be more impressed if the room could tell me how many "r"s are in Strawberry inside five minutes.
Human biology, famous for being simple and straightforward.
outhouseperilous@lemmy.dbzer0.com 2 days ago
Ah! But you can skip all that messy biology and stuff I don't understand that's probably not important, and just think of it as a classical computer running an x86 architecture, and checkmate, liberal, my argument owns you now!
jsomae@lemmy.ml 2 days ago
Because LLMs operate at the token level, I think it would be a fairer comparison with humans to ask why humans can't produce the IPA spellings of words they can say, /nɔr kæn ðeɪ ˈizəli rid θɪŋz ˈrɪtən ˈpjʊrli ɪn aɪ pi ˈeɪ/, despite the fact that it should be simple: they understand the sounds, after all. I'd be impressed if somebody could do this too! But the fact that most people can't shouldn't really move you to think humans must be fundamentally stupid because of this one curious artifact.
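To make "operates at the token level" concrete, here's a minimal sketch using the tiktoken library (the token splits and IDs in the comments are illustrative; they vary by encoding and model):

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by several OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("strawberry")
# The model sees opaque token IDs, never individual letters.
print(tokens)                             # e.g. [496, 675, 15717]
print([enc.decode([t]) for t in tokens])  # e.g. ['str', 'aw', 'berry']
```

If the word arrives as a few multi-letter chunks, "how many r's" is information the model never directly observes, much like a speaker who knows a word's sound but not its IPA transcription.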
outhouseperilous@lemmy.dbzer0.com 2 days ago
It's not a fucking riddle, it's a koan/thought experiment.
It's questioning what "communication" fundamentally is, and what knowledge fundamentally is.
It's not even the first thing to do this. Military theory was cracking away at the "communication" thing a century before, and the nature of knowledge has discourse going back thousands of years.
jsomae@lemmy.ml 2 days ago
You're right, I shouldn't have called it a riddle. Still, being a fucking thought experiment doesn't preclude having a solution. Theseus' ship is another famous fucking thought experiment, which has also been solved.
shalafi@lemmy.world 3 days ago
You might just love Blindsight. Here, they're trying to decide if an alien life form is sentient or a Chinese Room:
> "Tell me more about your cousins," Rorschach sent.
> "Our cousins lie about the family tree," Sascha replied, "with nieces and nephews and Neandertals. We do not like annoying cousins."
> "We'd like to know about this tree."
> Sascha muted the channel and gave us a look that said Could it be any more obvious? "It couldn't have parsed that. There were three linguistic ambiguities in there. It just ignored them."
> "Well, it asked for clarification," Bates pointed out.
> "It asked a follow-up question. Different thing entirely."
> Bates was still out of the loop. Szpindel was starting to get it, though…
CitizenKong@lemmy.world 3 days ago
Blindsight is such a great novel. It has not one, not two, but three great sci-fi concepts rolled into one.
One is artificial intelligence (the ship's captain is an AI), the second is alien life so vastly different it appears incomprehensible to human minds. And last but not least, and the most wild: vampires as an evolutionary branch of humanity that died out and has been recreated in the future.
outhouseperilous@lemmy.dbzer0.com 2 days ago
Also, the extremely post-cyberpunk posthumans: each member of the crew is a different, extremely capable, kind of fucked-up model of what we might become, with the protagonist personifying the genre of horror that it is, while still being occasionally hilarious.
TommySalami@lemmy.world 3 days ago
My favorite part of the vampire thing is how they died out. Turns out vampires start seizing when trying to visually process 90° angles, and humans love building shit like that (not to mention a cross is littered with them). It's so mundane an extinction I'd almost believe it.
RedstoneValley@sh.itjust.works 3 days ago
That's a very long answer to my snarky little comment :) I appreciate it though. Personally, I find LLMs interesting and I've spent quite a while playing with them. But after all, they are like you described: an interconnected catalogue of random stuff, with some hallucinations to fill the gaps. They are NOT a reliable source of information or general knowledge, nor even safe to use as an "assistant". The marketing of LLMs as fit for such purposes is the problem. Humans tend to turn off their brains and blindly trust technology, and the tech companies encourage them to do so by making false promises.
outhouseperilous@lemmy.dbzer0.com 2 days ago
Yes, but have you considered that it agreed with me, so now I need to defend it to the death against you horrible apes, no matter the allegation or terrain?
Knock_Knock_Lemmy_In@lemmy.world 3 days ago
The human approach could be to write a (Python) program to count the number of characters precisely.
When people refer to agents, is this what they are supposed to be doing? Is it done in a generic fashion, or will it fall over with complexity?
outhouseperilous@lemmy.dbzer0.com 2 days ago
No, this isn't what "agents" do; "agents" just interact with other programs. So, like, move your mouse around to buy stuff, using the same methods as everything else.
Knock_Knock_Lemmy_In@lemmy.world 2 days ago
> "agents" just interact with other programs.
If that other program is, say, a Python terminal, then can't LLMs be trained to use agents to solve problems outside their area of expertise?
I just tested ChatGPT: I had it write a Python program to return the frequency of letters in a string, then asked it for the number of L's in the longest place name in Europe.
```python
# String to analyze
text = "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch"

# Convert to lowercase to count both 'L' and 'l' as the same
text = text.lower()

# Dictionary to store character frequencies
frequency = {}

# Count characters
for char in text:
    if char in frequency:
        frequency[char] += 1
    else:
        frequency[char] = 1

# Show the number of 'l's
print("Number of 'l's:", frequency.get('l', 0))
```
I was impressed until the output:
Number of 'l's: 16
(The actual count is 11.)
UnderpantsWeevil@lemmy.world 3 days ago
That's not how LLMs operate, no. They aggregate raw text and sift for popular answers to common queries.
ChatGPT is one step removed from posting your question to Quora.
Knock_Knock_Lemmy_In@lemmy.world 3 days ago
But an LLM as a node in a framework that can call a Python library should be able to count the number of R's in strawberry.
It doesn't scale to AGI, but it does reduce hallucinations.
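A minimal sketch of that kind of framework (the tool name, dispatch table, and the model's request format here are hypothetical stand-ins, not any particular agent API):

```python
# Hypothetical tool-use loop: the LLM never counts letters itself;
# it asks for a tool call, and the framework runs real code.

def count_letter(word: str, letter: str) -> int:
    """Deterministic tool: exact letter counting in plain Python."""
    return word.lower().count(letter.lower())

TOOLS = {"count_letter": count_letter}

# Stand-in for a model response requesting a tool call; in a real
# framework this structure would come back from the LLM.
model_output = {
    "tool": "count_letter",
    "args": {"word": "strawberry", "letter": "r"},
}

# The framework dispatches the call and would feed the result back
# to the model for the final answer.
result = TOOLS[model_output["tool"]](**model_output["args"])
print(result)  # 3, computed exactly rather than recalled from training data
```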
frostysauce@lemmy.world 3 days ago
Wait, what was going on in August of '96?
UnderpantsWeevil@lemmy.world 3 days ago
Google Search premiered (the first version went live on Stanford's servers around then).
merc@sh.itjust.works 3 days ago
I agree, but I think you're still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.
IMO, one of the key ideas with the Chinese Room is that there's an assumption that the computer/book in the experiment has infinite capacity in some way, so that no matter what symbols are passed to it, it can come up with an appropriate response. But obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be "fooled" when they're given input that is semantically similar to a meme, joke, or logic puzzle: the vast majority of the training data that matches the input is the meme, or joke, or logic puzzle. LLMs can't reason, so they can't distinguish between "this is just a rephrasing of that meme" and "this is similar to that meme but distinct in an important way".
jsomae@lemmy.ml 3 days ago
Can you explain the difference between understanding the question and generating the words that might logically follow it? I'm aware that it's essentially a more powerful version of how autocorrect works, but why should we assume that shows some lack of understanding at a deep level?
merc@sh.itjust.works 2 days ago
I mean, it's pretty obvious. Take someone like Rowan Atkinson, whose death has been misreported multiple times. If you ask a computer system "Is Rowan Atkinson dead?" you want it to understand the question and give you a yes/no response based on actual facts in its database. A well-designed program would know to prioritize recent reports as more authoritative than older ones. It would know which sources to trust, and which not to trust.
An LLM will just generate text that is statistically likely to follow the question. Because there have been many hoaxes about his death, it might use those as a basis and generate a response indicating he's dead. But, because those hoaxes have also been debunked many times, it might use the debunkings as a basis instead and generate a response indicating that he's alive.
So, if he really did just die, and it was reported in reliable, fact-checked news sources, the LLM might still say "No, Rowan Atkinson is alive; his death was reported via a viral video, but that video was a hoax."
Because we know what "understanding" is, and it isn't simply finding words that are likely to appear following the chain of words up to that point.
outhouseperilous@lemmy.dbzer0.com 2 days ago
So, what is "understanding"?
If you need help, you can look at Marx for a pretty good answer.
Leet@lemmy.zip 2 days ago
Can we say for certain that human brains aren't sophisticated Chinese Rooms…
UnderpantsWeevil@lemmy.world 2 days ago
Yes.