Poik
@Poik@pawb.social
- Comment on Celery 2 months ago:
That’s LLM bull. The model already knows hangman; it’s in the training data. It can introduce variations on that data, especially in response to your prompts, but it doesn’t reinvent that way. If you want to see how it can go astray, ask it about stuff you know very well and watch how its responses devolve. Better yet, gaslight it. It’s very easy to convince LLMs that they’re wrong, because they’re usually trained for yes-manning and non-confrontation.
Now don’t get me wrong, LLMs are wicked neat, but they don’t come up with new ideas on their own; they can be pushed toward new concepts, even ones they don’t grasp. They’re really good at sounding sure of themselves and can easily get people to “learn” new “facts” from them, even when completely wrong. Always look up their sources (which Bard, Google’s model, can natively fetch for you in its UI), but enjoy their new ideas for the sake of inspiration. They’re neat toys that can be used to provide natural-language interfaces to expert systems. They aren’t expert systems themselves.
But also, and more importantly, that’s not zero-shot learning. Neat little anecdote from a conversation with them though. Which model are you using?
- Comment on Celery 2 months ago:
No. AI and, what you’re more likely referring to, machine learning have had applications for decades. Basic work dates back to the '60s, mostly for quick things, and 1D data analysis (voice and things like biometrics) was useful long before images. But there are many more types of AI. Bayesian networks (still in the learned category) were a huge breakthrough and still see a lot of use today. Decision trees, Markov chains, and first-order logic are the most common video game AI, and they usually rely on expert tuning rather than learned results.
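To make the game-AI point concrete, here’s a minimal sketch of an expert-tuned Markov chain for enemy behavior. The states and transition probabilities are made up for illustration (not from any real game): no learning happens, a designer just hand-picks the numbers.

```python
import random

# Hand-tuned transition probabilities for a simple enemy AI.
# Each row maps a current state to the chances of each next state.
TRANSITIONS = {
    "patrol": {"patrol": 0.7, "chase": 0.2, "flee": 0.1},
    "chase":  {"patrol": 0.1, "chase": 0.8, "flee": 0.1},
    "flee":   {"patrol": 0.5, "chase": 0.1, "flee": 0.4},
}

def next_state(state, rng=random):
    """Sample the next behavior state from the expert-tuned table."""
    states = list(TRANSITIONS[state])
    weights = [TRANSITIONS[state][s] for s in states]
    return rng.choices(states, weights=weights, k=1)[0]

# Step the enemy through a few frames of behavior.
state = "patrol"
history = [state]
for _ in range(5):
    state = next_state(state)
    history.append(state)
```

Tuning those rows is pure expert knowledge, which is exactly why this kind of AI needs no training data.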
AI is a huge field that’s been around longer than you’d expect, and it permeates a lot of tech. Image stuff is just the hot application since the deep-learning boom that started around 2009 with a bunch of papers that helped get actually beneficial learning into deeper models (I always thought it started roughly with Deep Boltzmann Machines, but there’s a lot of work in that era that chipped away at the problem). The real revolution was general-purpose GPU programming getting to a state where these breakthroughs weren’t just theoretical.
Before that, we already used a lot of computer vision, and other techniques, learned and unlearned, for a lot of applications. Most of them would probably bore you, but there are a lot of safety critical anomaly detectors.
- Comment on Celery 2 months ago:
This actually is a symptom of the sort of “beneficial” overfitting in deep learning. As someone whose research is in low data, long tails, and few-shot learning: there are a few things smaller networks did better at in generalization, and one thing they did particularly well (without explicit training for it) is gauging uncertainty. This uncertainty is sometimes referred to as calibration. Calibrating deep networks can yield decent probabilities that can be used to show uncertainty.
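As an illustration of post-hoc calibration, here’s a sketch of temperature scaling, one common technique (not necessarily the one my lab uses): a single scalar T is fit on held-out logits to minimize negative log-likelihood, which softens overconfident softmax outputs without changing the predicted class.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the true labels at temperature T."""
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature minimizing validation NLL (simple grid search)."""
    return min(grid, key=lambda T: nll(logits, labels, T))
```

A fitted T > 1 spreads probability mass away from the top class, so an overconfident 0.99 becomes something more honest, while the argmax (and thus accuracy) stays exactly the same.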
There are other tricks for this. My favorite strategies prep the network for learning new things. Large-margin training and the like are a good thing to look into. Having space in the output semantic space (the layer immediately before the output, or earlier for encoder-decoder-style networks) allows larger regions where distinct unknown values can be separated from the known ones, which helps inherently calibrate the network.
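Here’s a minimal sketch of the large-margin idea in the spirit of additive-margin softmax (the function name and margin value are my own, for illustration): subtracting a margin from the true-class logit before the softmax means the loss only gets small when the true class beats every other class by extra slack, which spreads class regions apart in the semantic space.

```python
import numpy as np

def margin_cross_entropy(logits, labels, m=1.0):
    """Cross-entropy with an additive margin on the true-class logit.
    With m=0 this is plain cross-entropy; m>0 demands extra separation."""
    z = np.asarray(logits, dtype=float).copy()
    z[np.arange(len(labels)), labels] -= m  # handicap the true class by m
    z -= z.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(labels)), labels]).mean()
```

Because the true-class logit is handicapped by m during training, the network must push known classes at least m apart in logit space, leaving gaps where unknowns can land.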
- Comment on Finish him. 🪓 3 months ago:
This is why the machine learning community will go through arXiv for pretty much everything. We value open and honest communication and abhor knowledge being locked down. That’s why he views things this way: he’s involved in a community that values real science.
ArXiv is free and all modern science should be open. There were reasons for publications in the past, since knowledge dissemination was hard, and they facilitated it. Now the publications just gatekeep.
- Comment on He came with receipts 3 months ago:
This is a fair question. But also, we’re talking about one of the most influential minds in deep learning. If anything he’s selling himself short. He’s definitely not first author on most of them, but I would give all my limbs to work in his lab.
- Comment on Where do I find game demakes? 3 months ago:
Autoincorrect.
- Comment on Autism 3 months ago:
Having worked with people in Germany and from a large swath of Asia, I’ve noticed that a lot of things considered autistic specifically in the States may be normal practice in various cultures.
It interests me a bit, but I think the takeaway is that autism tends to manifest in a number of quirks, and the ones that don’t align with the current culture the autistic person is in are the ones that are paid attention to. That and there tends to be a bit more obsession over said quirks than in those cultures, sometimes to the detriment of the autistic person or their social life.
- Comment on legs to die for 5 months ago:
en.m.wikipedia.org/wiki/Silverfish are also pests. They eat books.
- Comment on As if the tip actually goes to the dashers. 8 months ago:
This is standard tip culture. The company is only required to pay enough that tips plus pay meet minimum wage. CEOs should have to work for tips given by their employees in order to earn over minimum wage; change my mind.
- Comment on [deleted] 9 months ago:
NT is easily my favorite. Soler is a treasure, not just for NT.
Apotheosis, Graham’s Things, and More Stuff are my next go-to recommendations, but they can be very hard. The Noita devs hosted a mod showcase pretty recently that shows off quite a few of the best mods in the game. The pinball one is a blast, especially together with NT.
- Comment on [deleted] 10 months ago:
There’s more than one enemy and more than one boss who can polymorph you.
Practicing with Respawn+ installed from the Steam Workshop (or elsewhere) is quite useful for learning, but not necessary for people who want the challenge. I went from mods that decrease difficulty to ones that add new bosses, secrets, and ways to die unfairly in an instant, and I don’t regret my time investment.
11/10 game
- Comment on If Thanos had, instead of randomly wiping out 50% of all living things, he had instead in each species wiped out only the dumbest 50% what would the reaction of each avenger have been? 11 months ago:
I mean, the average newborn is smarter than the average politician, so maybe it’s not as bad as we think.
- Comment on Air Canada changed my flight for the 3rd time, I'm now landing in Toronto 1 hour AFTER my next flight departure. 1 year ago:
From my experience with YYZ, they won’t start boarding the next flight until around the time you’re scheduled to land (or at least not until after the plane was supposed to leave), and they WON’T declare a delay or tell anyone waiting at the gates what is going on.
Oh and don’t ask the customs people any questions or they might try to find a way to punish you. They refused to tell us that we were waiting fifteen minutes in customs because they failed to warm up the machine before telling people to get in line for that machine.
Never again YYZ. I thought America had bad airports…