HorseRabbit
@HorseRabbit@lemmy.sdf.org
- Comment on Do you think a Team America: World Police sequel would work 20 years later? 5 weeks ago:
Probably not. I mean, the whole movie was a pseudo-satire about the war on terror. But there were lots of internal contradictions that made the second half age terribly. The main message of the second half was that Iraq definitely does have WMDs, and America might be annoying and brash and dumb, but someone has to invade the Middle East to fight evil!
A sequel would either have to admit that there were no WMDs and the war on terror was open colonialism, or ignore the entire message of the first movie and pretend it never happened, or double down on saying actually it’s cool for the US to invade other countries.
- Comment on Becoming et al. 2 months ago:
What the fuck are you talking about?
- Comment on the point of invention 2 months ago:
To the point of invention
- Comment on More than 100 arrested in London as violence flares after Southport stabbings 3 months ago:
Literally Jorjor Well
- Comment on What is a good eli5 analogy for GenAI not "knowing" what they say? 6 months ago:
Maybe I misunderstood the OP? Idk
- Comment on What is a good eli5 analogy for GenAI not "knowing" what they say? 6 months ago:
People sometimes act like the models can only reproduce their training data, which is what I’m saying is wrong. They do generalise.
The models are trained to predict the next word, but after training the network is effectively interpolating between the training examples it has memorised. This interpolation doesn’t happen in text space, though, but in a very high dimensional abstract semantic representation space, a ‘concept space’.
Now imagine that you have memorised two paragraphs that occupy two points in concept space. And then you interpolate between them. This gives you a new point, potentially unseen during training, a new concept, that is in some ways analogous to the two paragraphs you memorised, but still fundamentally different, and potentially novel.
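A toy sketch of that interpolation idea (the embeddings here are random placeholders standing in for learned representations, not real model outputs):

```python
import numpy as np

# Toy illustration of interpolation in a high-dimensional "concept
# space". These vectors stand in for learned semantic embeddings of
# two memorised paragraphs; the values are made up for illustration.
rng = np.random.default_rng(0)
paragraph_a = rng.normal(size=512)  # embedding of paragraph A
paragraph_b = rng.normal(size=512)  # embedding of paragraph B

def interpolate(a, b, t):
    """Return the point a fraction t of the way from a to b."""
    return (1 - t) * a + t * b

# Halfway between the two concepts: a new point that was never a
# training example itself, yet is related to both endpoints.
midpoint = interpolate(paragraph_a, paragraph_b, 0.5)

# The midpoint is equidistant from both memorised paragraphs.
print(np.allclose(np.linalg.norm(midpoint - paragraph_a),
                  np.linalg.norm(midpoint - paragraph_b)))  # True
```

In a real model the interpolation is implicit in the learned weights rather than an explicit average, but the geometric intuition is the same.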
- Comment on What is a good eli5 analogy for GenAI not "knowing" what they say? 6 months ago:
Not an ELI5, sorry. I’m an AI PhD, and I want to push back against the premises a lil bit.
Why do you assume they don’t know? Like, what do you mean by “know”? Are you talking about conscious subjective experience? Or consistency of output? Or an internal world model?
There’s lots of evidence to indicate they are not conscious, although they can exhibit theory of mind. Eg: arxiv.org/pdf/2308.08708.pdf
For consistency of output and internal world models, however, there is mounting evidence to suggest convergence on a shared representation of reality. Eg this paper published 2 days ago: arxiv.org/abs/2405.07987
The idea that these models are just stochastic parrots that only probabilistically repeat their training data isn’t correct, although it is often repeated online for some reason.
A little evidence that comes to my mind is this paper showing models can understand rare English grammatical structures even if those structures are deliberately withheld during training: arxiv.org/abs/2403.19827
- Comment on Border Crisis: Shocking Data Reveals Illegal Immigrants Outnumber American Births 10 months ago:
“I am very smart.”
- Comment on Well...that was anticlimactic 1 year ago:
“The sky is huge” Lmao