SparroHawc
@SparroHawc@lemmy.zip
- Comment on AGI achieved 🤖 6 minutes ago:
> you called me a robot racist.
…what?
> Looking up the most common answer isn’t intelligence; there is no understanding of cause and effect going on inside the algorithm.
In order for that to be true, the entire dataset would need to be contained within the LLM. Which it is not. If it were, a model wouldn’t have to undergo training.
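To put rough numbers on that (illustrative figures, not tied to any specific model): the weights of an LLM are orders of magnitude smaller than the text they were trained on, so they can’t literally contain it.

```python
# Back-of-envelope comparison (illustrative numbers, not a specific model):
# a 7B-parameter model at 2 bytes/parameter vs. a 2-trillion-token training
# corpus at roughly 4 bytes per token of raw text.
weight_bytes = 7e9 * 2    # ~14 GB of weights
corpus_bytes = 2e12 * 4   # ~8 TB of training text

print(f"weights: {weight_bytes / 1e9:.0f} GB")
print(f"corpus:  {corpus_bytes / 1e12:.0f} TB")
print(f"corpus is ~{corpus_bytes / weight_bytes:.0f}x the size of the weights")
```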
> AI implies intelligence
You seem to be mistaking ‘intelligence’ for ‘human-like intelligence’. That is not how AI is defined. AI can be dumber than a gnat, but if it’s capable of making decisions based on stimuli, without each stimulus-decision pair being directly coded into it, then it’s AI.
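To make the distinction concrete - a toy sketch (all of it hypothetical, not from any real framework): a lookup table where every stimulus-decision pair is hand-coded, next to a one-neuron perceptron that learns a rule from a few examples and then makes a decision nobody coded in.

```python
# Not AI by the definition above: every stimulus-decision pair is spelled out.
HARD_CODED = {(0, 0): 0, (0, 1): 1, (1, 0): 1}

def hard_coded_agent(stimulus):
    return HARD_CODED[stimulus]  # KeyError on anything it wasn't given

# AI by that definition, however dumb: a perceptron learns a decision rule
# from examples, then decides on a stimulus it has never seen.
def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def decide(w, b, stimulus):
    x1, x2 = stimulus
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

w, b = train_perceptron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1)])
print(decide(w, b, (1, 1)))  # prints 1 - a decision nobody directly coded in
```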
- Comment on AGI achieved 🤖 2 days ago:
Okay, what is your definition of AI then, if nothing burned onto silicon can count?
- Comment on Luv Me Chips, 'ate Seagulls... 2 days ago:
If a seagull is stealing chips from someone, odds are there are plenty of other seagulls around to witness their compatriot getting merked.
Seagulls understand that stealing from humans is risky - that’s why they generally do it very quickly. The ones who fail suffer consequences for their failure, same as stealing food from any other creature. It’s the risk/reward calculation any scavenger has to make.
Sometimes they calculate incorrectly. They get forcibly removed from the gene pool.
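That calculation can even be sketched as expected value (numbers invented for illustration):

```python
# Expected payoff of a chip raid: chance of success times the reward, minus
# chance of getting caught times the cost.
def expected_payoff(p_success, reward, cost_if_caught):
    return p_success * reward - (1 - p_success) * cost_if_caught

print(expected_payoff(0.9, 1.0, 5.0))  #  0.4 -> worth the attempt
print(expected_payoff(0.4, 1.0, 5.0))  # -2.6 -> removed from the gene pool
```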
- Comment on AGI achieved 🤖 3 days ago:
No. Artificial intelligence has to imitate intelligent behavior - the way the ghosts in Pac-Man imitate how, ostensibly, a ghost trapped in a maze and hungry for yellow circular flesh would behave, and the way CS1.6 bots imitate the behavior of intelligent players. They artificially reproduce intelligent behavior.
Which means LLMs are very much AI. They are not, however, AGI.
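A few lines of code are enough to produce that kind of behavior. A toy sketch (the grid, walls, and targeting rule are simplified assumptions, loosely in the spirit of how Blinky chases the player):

```python
# A ghost that greedily steps toward the player's tile - simple rules that
# read as deliberate pursuit when you watch them play out.
def ghost_step(ghost, player, walls):
    gx, gy = ghost
    moves = [(gx + 1, gy), (gx - 1, gy), (gx, gy + 1), (gx, gy - 1)]
    legal = [m for m in moves if m not in walls]
    # Take the legal move that minimizes squared distance to the player.
    return min(legal, key=lambda m: (m[0] - player[0]) ** 2 + (m[1] - player[1]) ** 2)

ghost, player, walls = (0, 0), (3, 2), {(1, 0)}
for _ in range(5):
    ghost = ghost_step(ghost, player, walls)
    print(ghost)  # routes around the wall and closes in on the player
```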