Technus
@Technus@lemmy.zip
- Comment on Breed back better, or whatever Biden said 1 day ago:
Great source of protein crawling there across the ground, it’d be a shame if someone wasn’t there to eat it
- Comment on Welcome to the thunderdome? 4 days ago:
That’s probably based on the first definition because you can play either an ascending or descending scale.
Also the music staff kinda looks like a ladder.
- Comment on On dasher! 1 week ago:
Didn’t know DoorDashers could contact you through WhatsApp
- Comment on It's not a whack jack, it's a... 1 week ago:
I was thinking about this the other day and realized something:
Back when the modern Santa character was first being developed, coal was a genuinely useful thing. It was fuel for the stove which heated your house and cooked your food. It was a basic necessity of life.
If you were naughty, Santa didn’t just give you nothing. You weren’t going to get an awesome toy, but he made sure you weren’t going to freeze to death on Christmas, either.
Santa believes everyone deserves to live. That having a warm place to sleep is a basic human right.
This might be /r/im14andthisisdeep material, but I just thought that was interesting.
- Comment on WHO STOLE MY FUCKING BARS? 2 weeks ago:
Yeah but on the second incarnation, wouldn’t that put you right back where you started?
- Comment on WHO STOLE MY FUCKING BARS? 2 weeks ago:
What happens to the guy that was driving it? Does he just blink out of existence when the car shuts off? That’s my question. You might argue that there is no such thing, but my own conscious experience proves to myself that there’s something else there. I want to know what happens to that part.
Hell, for all I know, you might just be a soulless meatbag automaton, and there really is no one in the driver’s seat for you. Or I could just be the only actual human talking in a thread full of bots. With 90% of the training data going into LLMs being vapid contrarian debates on social media, I could easily see that being the case here.
- Comment on WHO STOLE MY FUCKING BARS? 2 weeks ago:
I’m not expecting or planning for anything; that’s kind of the point. I’m not expecting one specific outcome. It’s actually really freeing, because I’m not stuck searching for meaning in an existence that offers none.
And if it turns out that it does all just go black, it won’t be my problem anymore, will it?
- Comment on WHO STOLE MY FUCKING BARS? 2 weeks ago:
I don’t agree that the cessation of brain activity necessarily means the end of the subjective experience. That doesn’t mean I purport to know what actually happens at that point. I hope it’s some sort of reincarnation but that’s just because there’s more I want to experience in this universe than I possibly could in a single lifetime.
“You only have one life, live it the best you can” is a nice motivational mantra, but however well I live my life, it’s highly unlikely I will live long enough to experience interstellar travel, for example, or first contact with alien life. I think that really fucking sucks, and I really hope I’ll have a chance on the next go-around. But if it’s something completely different, I’m cool with that, too.
- Comment on WHO STOLE MY FUCKING BARS? 2 weeks ago:
The lack of memory of past existence isn’t evidence of anything. We have clear evidence that memories are physical things, stored as connections of neurons in the brain. They can be lost to disease or injury, and they’re destructively modified every time we access them.
- Comment on WHO STOLE MY FUCKING BARS? 2 weeks ago:
My whole point is that I disagree with the certainty of that claim. It’s not grounded in empirical evidence, because we don’t have any.
- Comment on WHO STOLE MY FUCKING BARS? 2 weeks ago:
I’m an atheist but I don’t actually preclude the existence of an afterlife. “There is no heaven or hell, it just all goes black and that’s it,” is just as patently unfalsifiable as any claim made by any religion.
It’s just as likely to be something completely different and alien from anything conceivable in our limited world view. In an infinite space of probabilities, the likelihood of it being “literally nothing” actually seems pretty low.
That kind of uncertainty is exactly what scares most people, but not me. I’m looking forward to finding out one day.
- Comment on WHO STOLE MY FUCKING BARS? 2 weeks ago:
I think with a capful of Adderall I could ascend to a higher plane of existence
- Comment on No More Robot’s new cruise ship management sim is hella bleak, despite the input of former Overcooked devs 2 months ago:
All that flowery bullshit, and I still have no idea if the game is any good or not. Sure, it didn’t sound fun, but lots of good games sound really boring if you describe them the wrong way.
Games journalism at its best, clearly.
- Comment on As Microsoft lays off thousands and jacks up Game Pass prices, former FTC chair says I told you so: The Activision-Blizzard buyout is 'harming both gamers and developers' 2 months ago:
I’ve been incensed about this since I first heard about the merger over two years ago. The fact that both the US and EU rubber-stamped the deal is easily one of the biggest regulatory failings of the tech sector in the past ten years.
I still can’t believe that the only stipulation was that CoD had to remain available on PlayStation.
- Comment on Reports: EA set to be sold to private investors for up to $50 billion 2 months ago:
Every time private equity buys out a public company, they saddle it with debt, gut its intellectual properties, turn its products and services to shit, then close up shop for good.
Given that EA has already managed 3/4 of those things on its own, I wonder how much this will actually change.
- Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws 3 months ago:
Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.
“We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty,” the researchers wrote.
I just wanna say I called this out nearly a year ago: lemmy.zip/comment/13916070
- Comment on No weirdo's please 4 months ago:
What’s the number, OP
- Comment on 4 months ago:
Futurama s2e5: I Second That Emotion
- Comment on 4 months ago:
“Yeah, right. You’d have to be some kinda genius to count that high.”
“… He’s seven.”
- Comment on 4 months ago:
It’s an old meme, sir, but it checks out.
- Comment on Y tho 4 months ago:
We used to make these all the time as kids. You need a second pair of magnets in push configuration on the back for this to work.
- Comment on On Black Holes... 4 months ago:
I would 100% volunteer to be the first person to cross the event horizon of a spinning supermassive black hole, just to see what’s on the other side.
Like yeah it’s guaranteed to be a one-way trip and probably a horrible death, but there’s also the possibility that it’s actually a gateway to alternate universes, and that’s something I’d give anything to see with my own eyes.
- Comment on SlaveDriver Engine for the Sega Saturn classic shooter PowerSlave gets open sourced 4 months ago:
I played PowerSlave: Exhumed but it wasn’t quite the same game I remembered playing. I think it’s more of a “reimagined” version than a remastered one.
I’m hopeful that this means the game could be ported to modern platforms so it doesn’t have to be run in DOSBox.
- Comment on Why it’s a mistake to ask chatbots about their mistakes 4 months ago:
I like how you’ve deliberately ignored the specifically chosen wording of my statement, and completely disregarded the rest of my point, simply because you perceive it as counter-factual in your world-view, thus exhibiting the exact kind of behavior you were talking about. That’s really funny.
- Comment on Political map of the Americas 2025 4 months ago:
Maybe swap California and Canada, US and Florida.
- Comment on Why it’s a mistake to ask chatbots about their mistakes 4 months ago:
A neurotypical human mind, acting rationally, is able to remember the chain of thought that led to a decision, understand why they reached that decision, find the mistake in their reasoning, and start over from that point to reach the “correct” decision.
Even if they don’t remember everything they were thinking about, they can reason based on their knowledge of themselves and try to reconstruct their mental state at the time.
This is the behavior people expect from LLMs, without understanding that it’s something they’re fundamentally incapable of.
One major difference (among many others, obviously) is that AI models as currently implemented don’t have any kind of persistent working memory. All they have for context is the last N tokens they’ve generated, the last N tokens of user input, and any external queries they’ve made. All the intermediate calculations (the “reasoning”) that led to them generating that output are lost.
Any instance of an AI appearing to “correct” their mistake is just the model emitting what it thinks a correction would be, given the current context window.
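To make that concrete, here’s a toy sketch (not any real model’s API; names and the one-token-per-word counting are made up for illustration) of why a chat model can “forget” earlier turns: every request just re-sends whatever slice of the transcript still fits in the window, and everything before that slice, including any intermediate reasoning, simply isn’t there anymore.

```python
# Toy sketch of a context window: hypothetical, not a real LLM API.
MAX_CONTEXT_TOKENS = 8  # absurdly small toy limit; real models use thousands


def truncate_to_window(messages, max_tokens=MAX_CONTEXT_TOKENS):
    """Keep only the most recent messages that fit in the window.
    Token counting is faked as one token per whitespace-separated word."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))


transcript = [
    "user: my name is Ada",
    "model: hi Ada",
    "user: what is two plus two",
    "model: four",
    "user: what is my name",
]

# The model only ever sees this slice; the earlier turns are gone,
# so "my name is Ada" is unrecoverable from the prompt alone.
window = truncate_to_window(transcript)
print(window)
```

Running this keeps only the last couple of turns, so the name from the first message never reaches the model at all, which is the whole point: nothing persists between calls except what gets stuffed back into the prompt.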
Humans also learn from their mistakes and generally make efforts to avoid them in the future, which doesn’t happen for LLMs until that data gets incorporated into the training for the next version of the model, which can take months to years. That’s why AI companies are trying to capture and store everything from user interactions, which is a privacy nightmare.
It’s not a compelling argument to compare AI behavior to that of a dysfunctional human brain and go “see, humans do this too, teehee!” Not when the whole selling point of these things is that they’re supposed to be smarter and less fallible than most humans.
I’m deliberately trying not to be ableist in my wording here, but it’s like saying, “hey, you know what would do wonders for productivity and shareholder value? If we fired half our workforce, then found someone with no experience, short-term memory loss, ADHD and severe untreated schizophrenia, then put them in charge of writing mission-critical code, drafting laws, and making life-changing medical and business decisions.”
I’m not saying LLMs aren’t technically fascinating and a breakthrough in AI development, but the way they have largely been marketed and applied is scammy, misleading, and just plain irresponsible.
- Comment on GitHub folds into Microsoft following CEO resignation — once independent programming site now part of 'CoreAI' team 4 months ago:
All aboard the enshittification train! Choo choo!
I mean, it’s been well underway for a while now but this is certainly a transfer over to an express train.
- Comment on LLMs’ “simulated reasoning” abilities are a “brittle mirage,” researchers find 4 months ago:
I get scoffed at every time I call LLMs “glorified auto-correct” so it’s nice being validated.
Anyone who actually has a grasp of how Large Language Models work should not be surprised by this, but too many people, even engineers who should really know better, have drunk the Kool-aid.
- Comment on Battlefield 6's beta been treating you to infinite loading screens? EA are on the case 4 months ago:
A triumphant return to the series’ roots with the exact same game-breaking bugs as Battlefield 3 had. Nice job, EA.
- Comment on AMD CPU Transient Scheduler Attacks security flaw revealed 5 months ago:
No information on the 9000 series, why? Kinda sus.