It makes sense to judge how closely LLMs mimic human learning when people use that comparison as a defense of AI companies scraping copyrighted content, claiming that banning AI scraping is as nonsensical as banning human learning.
But when it’s pointed out that LLMs don’t learn the way humans do, and require scraping far more material than a human ever could, suddenly AIs shouldn’t be judged by human standards? I don’t know if it’s intentional on your part, but that’s a pretty classic example of a motte-and-bailey fallacy. You can’t have it both ways.
ParsnipWitch@feddit.de 11 months ago
In general I agree with you, but AI doesn’t learn the concept of what a circle is. AI reproduces the most fitting representation of what we call a circle, but there is no understanding of the concept of a circle. This may sound like nit-picking, but I think it’s important to make the distinction.
That is why current models aren’t regarded as actual intelligence, although people already call them that…
Even_Adder@lemmy.dbzer0.com 11 months ago
I understand. I didn’t mean to imply any kind of understanding with the language I used.