Poik
@Poik@pawb.social
- Comment on That's... normal 5 weeks ago:
Be crime Do gay
- Comment on [deleted] 1 month ago:
At least there are still some years left for the inevitable bounce. It’s getting there. It’s just the kind of thing someone trying to retire would panic over.
- Comment on [deleted] 1 month ago:
Yeah. You’re right. And their recounting of what they invested in makes no sense. I caught that later. So there are definitely poor choices somewhere they aren’t mentioning.
- Comment on [deleted] 1 month ago:
Okay… I even came with receipts on this one. Am I just annoying? What’s with the downvote, even on ones where people are suggesting target date funds? The fund will bounce; it’s just a huge dip for one that was supposed to be, according to professionals, safe for retirement use. So sure, I can see the downvote as disagreeing with sensationalism, but I was contesting the suggestion that no funds dropped in that time. If it’s because I got spammy, sure… I assume most people don’t reread the other comments after the first time they go through, but I can stop.
For reference, target date funds are still usually good, but a total stock index fund is always better over a ten-year period, so whether they’re actually worth it is questionable.
- Comment on [deleted] 1 month ago:
Not sure. I’m guessing interest rate stuff will mess with anything with bond holdings, so that probably had something to do with it. Other than that… I don’t know if I can convey a big enough shrug in text form.
- Comment on [deleted] 1 month ago:
Good point.
- Comment on [deleted] 1 month ago:
Yeah. Something doesn’t add up. The worst dip among what you mention is the blue-chip large cap, but the curve you posted looks like VTINX or the Vanguard 2030–2040 target date funds, not any of the funds you listed.
- Comment on [deleted] 1 month ago:
Target date funds are also supposed to be set and forget, but this looks like the curve from Vanguard’s 2030 through at least 2040 target date funds.
- Comment on [deleted] 1 month ago:
The 2030 target fund is still down 8.8% since that date.
- Comment on [deleted] 1 month ago:
All target date funds through Vanguard tanked that year unless you have 2060 or later as the target. The 2030 fund lost 25% and hasn’t yet recovered.
- Comment on [deleted] 1 month ago:
They were likely using a full retirement fund, like VTINX or Vanguard Target 2030 or something like that. All of them, up to Target 2060, tanked at the end of 2021. Even my shares in the Total Bond Index tanked then, and those are supposed to be as low risk as possible, literally.
- Comment on [deleted] 1 month ago:
Stop. The Vanguard retirement funds all did this if the target is before 2060. And those are invested in index funds by professionals. OP likely had VTINX or a total bond fund, both of which did this that year and were recommended for use during retirement. This is likely the more liquid portion of the portfolio, not the penny stock portion.
- Comment on [deleted] 1 month ago:
The close-to-retirement ones suffered that year. The 2030 target lost 25% in less than a year recently and hasn’t recovered. Ironically, the high-risk ones have been less risky during COVID than the low-risk ones.
- Comment on [deleted] 1 month ago:
Check the Vanguard Target Retirement Income Fund (VTINX) and other similar funds. There was a dip in 2021 that absolutely destroyed a number of retirements, my parents’ included, despite these being low-risk options. Total bond index funds also suffered for some reason, and those are as low risk as you can get. Every other fund I have is doing great, but the ones that are supposed to be safe are not.
- Comment on Tradition 1 month ago:
But did Das Boo Schitt get a Michael Bay adaptation? Also… why does Michael Bay own the rights to Skibidi Toilet now? What timeline did I wake up in?
- Comment on hard to argue with 1 month ago:
en.wikipedia.org/wiki/Straw_man
You introduced the idea of not wanting women to give birth into a conversation where the only thing brought up was that women don’t have to give birth to be valid. Women not wanting children is as valid as women wanting children.
The straw man here is the anti-natalism.
- Comment on Celery 5 months ago:
That’s LLM bull. The model already knows hangman; it’s in the training data. It can introduce variations on the data, especially in response to your prompts, but it doesn’t reinvent that way. If you want to see how it can go astray, ask it about stuff you know very well and watch how its responses devolve. Better yet, gaslight it. It’s very easy to convince LLMs that they’re wrong because they’re usually trained for yes-manning and non-confrontation.
Now don’t get me wrong, LLMs are wicked neat, but they don’t come up with new ideas; they can only be pushed towards new concepts, even when they don’t grasp them. They’re really good at sounding sure of themselves and can easily get people to “learn” new “facts” from them, even when completely wrong. Always look up their sources (which Bard, Google’s, can natively fetch for you in its UI), but enjoy their new ideas for the sake of inspiration. They’re neat toys, which can be used to provide natural language interfaces to expert systems. They aren’t expert systems.
But also, and more importantly, that’s not zero-shot learning. Neat little anecdote from a conversation with them though. Which model are you using?
- Comment on Celery 5 months ago:
No. AI, and what you’re more likely referring to, machine learning, has had applications for decades. Basic work goes back to the ’60s, mostly for quick things, and 1D data analysis (voice and stuff like biometrics) was useful long before images. But there are many more types of AI. Bayesian networks (still in the learned category) were huge breakthroughs and still see a lot of use today. Decision trees, Markov chains, and first-order logic are the most common video game AI and usually rely on expert tuning rather than learned results.
AI is a huge field that’s been around longer than you’d expect, and it permeates a lot of tech. Image stuff is just the hot application, since it’s the deep-learning-based stuff that started around 2009 with a bunch of papers that helped get actual beneficial learning in deeper models (I always thought it started roughly with Deep Boltzmann Machines, but there’s a lot of work in that era that chipped away at the problem). The real revolution was general-purpose GPU programming getting to a state where these breakthroughs weren’t just theoretical.
Before that, we already used a lot of computer vision, and other techniques, learned and unlearned, for a lot of applications. Most of them would probably bore you, but there are a lot of safety critical anomaly detectors.
- Comment on Celery 5 months ago:
This actually is a symptom of the sort of “beneficial” overfitting in deep learning. As someone whose research is in low-data, long-tail, and few-shot learning, I can say there are a few things smaller networks did better in terms of generalization, and one thing they did particularly well (without explicit training for it) is gauging uncertainty. This uncertainty is sometimes referred to as calibration. Calibrating deep networks can yield decent probabilities that can be used to show uncertainty.
There are other tricks for this. My favorite strategies prep the network for learning new things. Large-margin training and the like are a good thing to look into. Having space in the output semantic space (the layer immediately before the output, or earlier for encoder-decoder-style networks) allows larger regions for distinct unknown values to be separated from the known ones, which helps inherently calibrate the network.
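Since I mentioned calibration without naming a method, here’s a minimal sketch of temperature scaling, one standard post-hoc calibration trick; it’s not necessarily what any specific paper uses, and the tensor names are just placeholders.
```python
# Minimal sketch of temperature scaling: fit a single scalar T on held-out
# validation logits so that softmax(logits / T) gives better-calibrated
# probabilities. Names like val_logits / val_labels are placeholders.
import torch
import torch.nn as nn

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fit a single temperature on held-out logits/labels by minimizing NLL."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log(T) so T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)
    nll = nn.CrossEntropyLoss()

    def closure():
        optimizer.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage: divide the network's logits by T before softmax; the resulting max
# probability is a rough confidence that tends to be less overconfident.
# T = fit_temperature(val_logits, val_labels)
# probs = torch.softmax(test_logits / T, dim=-1)
```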
- Comment on Finish him. 🪓 6 months ago:
This is why the machine learning community goes through arXiv for pretty much everything. We value open and honest communication and abhor knowledge being locked down. This is why he views things this way: he’s involved in a community that values real science.
arXiv is free, and all modern science should be open. There were reasons for publications in the past, since knowledge dissemination was hard and they facilitated it. Now the publications just gatekeep.
- Comment on He came with receipts 6 months ago:
This is a fair question. But also, we’re talking about one of the most influential minds in deep learning. If anything he’s selling himself short. He’s definitely not first author on most of them, but I would give all my limbs to work in his lab.
- Comment on Where do I find game demakes? 7 months ago:
Autoincorrect.
- Comment on Autism 7 months ago:
I’ve noticed that a lot of things considered autistic specifically in the States may be normal practice in various cultures, having worked with people in Germany and from a large swath of Asia.
It interests me a bit, but I think the takeaway is that autism tends to manifest in a number of quirks, and the ones that don’t align with the culture the autistic person is currently in are the ones that get paid attention to. That, and there tends to be a bit more obsession over said quirks than in those cultures, sometimes to the detriment of the autistic person or their social life.
- Comment on legs to die for 8 months ago:
en.m.wikipedia.org/wiki/Silverfish are also pests. They eat books.
- Comment on As if the tip actually goes to the dashers. 11 months ago:
This is tip culture standard. The company is only required to pay enough such that tips plus pay meet minimum wage. CEOs should have to work for tips given by their employees in order to earn over minimum wage; change my mind.
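For anyone unfamiliar with how that shakes out, here’s a rough sketch using the US federal numbers ($2.13 tipped cash wage, $7.25 minimum wage); states vary, and the function name is just for illustration.
```python
# Rough sketch of the US federal "tip credit": the employer pays a low cash
# wage and only tops it up if cash wage + tips falls below minimum wage.
# Numbers are the federal defaults; many states set higher floors.
def employer_owed(hours: float, tips: float,
                  cash_wage: float = 2.13, minimum_wage: float = 7.25) -> float:
    """Total cash the employer must pay the worker for the shift."""
    base = hours * cash_wage
    shortfall = max(0.0, hours * minimum_wage - (base + tips))  # top-up if tips fall short
    return base + shortfall

# Example: an 8-hour shift with $30 in tips.
# Base pay is 8 * 2.13 = 17.04; the top-up is 8 * 7.25 - (17.04 + 30) = 10.96,
# so the employer pays $28.00 and the worker still lands exactly at minimum wage.
print(round(employer_owed(8, 30), 2))  # 28.0
```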
- Comment on [deleted] 1 year ago:
NT is easily my favorite. Soler is a treasure, not just for NT.
Apotheosis, Graham’s Things, and More Stuff are my next go to recommendations, but they can be very hard. The Noita Devs hosted a mod showcase pretty recently that shows off quite a few of the best mods in the game. The pinball one is a blast, especially together with NT.
- Comment on [deleted] 1 year ago:
There’s more than one enemy and more than one boss who can polymorph you.
Practicing with Respawn+ installed from the Steam Workshop (or elsewhere) is quite useful for learning, but not necessary for people who want the challenge. I went from mods that decrease difficulty to ones that add new bosses, secrets, and ways to die unfairly in an instant, and I don’t regret my time investment.
11/10 game
- Comment on If Thanos had, instead of randomly wiping out 50% of all living things, he had instead in each species wiped out only the dumbest 50% what would the reaction of each avenger have been? 1 year ago:
I mean, the average newborn is smarter than the average politician, so maybe it’s not as bad as we think.
- Comment on Air Canada changed my flight for the 3rd time, I'm now landing in Toronto 1 hour AFTER my next flight departure. 1 year ago:
From my experience with YYZ, they won’t start boarding the next flight until around the time you’re scheduled to land (or at least not until after the plane was supposed to leave), and they WON’T declare a delay or tell anyone waiting at the gates what is going on.
Oh, and don’t ask the customs people any questions or they might try to find a way to punish you. They refused to tell us that our fifteen-minute wait in customs was because they had failed to warm up the machine before telling people to get in line for it.
Never again YYZ. I thought America had bad airports…