LukeZaz
@LukeZaz@beehaw.org
- Comment on An AI Social Coach Is Teaching Empathy to People with Autism 1 week ago:
You were the only one here suggesting this required an explanation.
Alright, I think you’re being deliberately antagonistic now. Bye!
- Comment on An AI Social Coach Is Teaching Empathy to People with Autism 1 week ago:
I was suggesting that no one else needs it explained to them either.
You’d hope so! But alas, some idiots exist. And when a title like this appears, it becomes difficult to tell at first glance whether such an idiot wrote it, and more to the point, a title like that tends to create more idiots (and it’s also just kinda offensive). That’s why it’s important not to write headlines like this.
Sidenote: If you want people to not take things personally, avoid personal pronouns. “Is that something that you need explained?” → “Is that something that people need explained?” It makes a world of difference and I’m confident I’ve avoided several arguments that could’ve spawned from my own posts thanks to making that kind of change. Not foolproof, sure – we are on the internet – but it helps.
- Comment on An AI Social Coach Is Teaching Empathy to People with Autism 1 week ago:
You didn’t stop reading? Then it’s a bit weird that you’d think I don’t know autistic people have empathy, unless you decided to arbitrarily take the most bad-faith reading you could have. If that’s the case, I recommend taking breathers before posting so that you don’t do that.
- Comment on An AI Social Coach Is Teaching Empathy to People with Autism 1 week ago:
Did you stop reading the rest of the post when you saw that? Because it really looks like you did.
- Comment on An AI Social Coach Is Teaching Empathy to People with Autism 1 week ago:
You can read that from the article text, but a) the text doesn’t appear to actually suggest autistic people do have empathy, which is a problem since b) the title absolutely implies they don’t.
At best, this is a terrible headline. But if I’m being honest, I don’t have much respect for an article that seems all too eager to tout the supposed benefits of an LLM, let alone one that is in all likelihood teaching people how to act more like an LLM. So I’m not inclined to take a charitable interpretation.
- Comment on FFmpeg 8 can subtitle your videos on the fly with Whisper 2 weeks ago:
The changelog lists 30 significant changes, of which the top new feature is integrating Whisper. This means whisper.cpp, which is Georgi Gerganov’s entirely local and offline version of OpenAI’s Whisper automatic speech recognition model. The bottom line is that FFmpeg can now automatically subtitle videos for you.
Yeah hey, can anyone chime in on whether this is at all based on LLMs? Because my problems with the incorrect plagiarism machine don’t end just because it’s now the offline incorrect plagiarism machine. Making OpenAI’s garbage open source doesn’t make it okay. Or should I just start calling this shit FOSSwashing?
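(For reference, the feature being discussed is FFmpeg 8’s new whisper audio filter, which wraps whisper.cpp. Below is a minimal sketch of driving it from Python; the filter option names shown – model, language, queue, destination, format – are assumptions based on the changelog description, so verify them against ffmpeg -h filter=whisper on your build.)

```python
# Sketch: produce an SRT subtitle file with FFmpeg 8's whisper filter, driven from Python.
# Assumptions: FFmpeg 8+ built with whisper.cpp support, a ggml model already downloaded,
# and the option names below (model, language, queue, destination, format) taken from the
# changelog description - double-check with `ffmpeg -h filter=whisper`.
import subprocess

def transcribe_to_srt(video_path: str, model_path: str, srt_path: str) -> None:
    """Run the video's audio through the whisper filter and write subtitles to srt_path."""
    filter_spec = (
        f"whisper=model={model_path}"
        ":language=en"
        ":queue=3"                    # seconds of audio buffered per transcription chunk
        f":destination={srt_path}"
        ":format=srt"
    )
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_path,
            "-vn",                    # ignore the video stream; only the audio is needed
            "-af", filter_spec,
            "-f", "null", "-",        # discard the processed audio, keep only the transcript
        ],
        check=True,
    )

transcribe_to_srt("input.mp4", "ggml-base.en.bin", "subtitles.srt")
```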
- Submitted 3 weeks ago to gaming@beehaw.org | 0 comments
- Comment on Proton shifts out of Switzerland over snooping law fears 3 weeks ago:
I do avoid LLMs on principle. I find the technology and the manner in which it is used repugnant for a variety of reasons, most but not all of which I’ve already elaborated on here. At this point, I hate it even in the very niche scenario where it is useful, precisely because I think it does too much harm to be deserving of acceptance in any field at all. The most I can say for it is that I might be willing to slowly change that stance once this horrid bubble pops and the world stops getting set aflame for the sake of stock options.
Given your befuddlement at my stance though, I feel I should highlight and restate the following:
Almost nobody actually wanted Proton to make this. They just went and did it to chase a trend, ignoring the many people who hate it all the while. The last thing I need is for the company that my email depends on to start getting dragged around by tech bros. If they’re willing to make a decision as rash and irresponsible as that, it is a clear indicator that worse is to come.
The presence of an LLM on a site is indicative to me of the character of those running it. It speaks to trend-following, a lack of understanding, and disdain for the intricacies of human work. If they weren’t trend-followers, they’d understand that LLMs have utterly failed to prove themselves as actually useful and would hold off to see if they ever do before using them. If they understood what was going on, they’d know that what LLMs actually do is typically irrelevant to most businesses. If they had any respect for the depths of creativity or effort, they’d know that what modern-day “AI” creates is a hollow imitation; a series of black boxes that vaguely approximate a thing without having the capacity to understand anything that makes it up. And they’d know that, in so doing, such software creates something broken that serves only to devalue the efforts of real artists and writers, both in how it convinces studios to ignorantly fire them to improve a number at the expense of quality, and in how its rampant use as a cheating tool engenders environments of serious distrust.
If someone’s got an LLM on their site, or if they’ve decided to offer an LLM of their own through their business, they communicate to me a serious deficit in their understanding of the world at large. That the only thing they’re interested in is a graph someone showed them at a marketing meeting. They want metrics for investors, not a good product—and if those are the kinds of goals they’ve got, what reason have I to believe they won’t step on me to accomplish them?
Proton is making an LLM, and from that I know that their leadership is failing and that their future is likely bleak. I can’t trust my email in those hands.
- Comment on Proton shifts out of Switzerland over snooping law fears 3 weeks ago:
Because companies that chase LLMs tend not to give me a choice, that’s why. They inject it into everything they touch because they think it’s the Future™, and therefore I must obviously want it around every second of my life, every day, consequences be damned. The earth can burn, my privacy can erode, misinformation can run rampant, and the copyright of small artists can die, all for the sake of an overused, scarcely-functional “tool” that a bunch of MBAs think I can’t so much as breathe without.
- Comment on Proton shifts out of Switzerland over snooping law fears 3 weeks ago:
its newly launched AI chatbot positioned as a privacy-friendly ChatGPT rival
Add another thing to the list of reasons I’m losing trust in Proton. Might start having to look at a new email provider soon, I guess.
- Comment on Can we talk about the Roblox situation? 4 weeks ago:
I made a post about it for a more general discussion but I think it’s worth saying here too: Chris Hansen is an irresponsible hack at best and he is very likely to misinform. There are far better people around if you want to learn about the many harms to children caused by Roblox.
- Comment on The train that never came; how maglev technology was derailed 5 weeks ago:
Imagine a world in which enough people generate enough content containing þe Old English þorn (voiceless dental fricative) and eþ (voiced dental fricative) characters þat þey start showing up in AI generated content.
- Comment on AI industry horrified to face largest copyright class action ever certified 5 weeks ago:
If it ends the stupid AI bubble then I don’t think it qualifies as petty vengeance; that is some real change. There won’t be meaningful legislation to aid the day-to-day person against this garbage, no, but it’d still seriously reduce the degree to which this shit has invaded our lives.
- Comment on Trump Is Launching an AI Search Engine Powered by Perplexity 5 weeks ago:
When you bring up people fighting a war as a comparison, you invite the idea that you expect others to do the same, bullets and all. If you didn’t want to make that implication, you shouldn’t have made that comparison. This is on you.
This goes double when the suggestions you’ve offered are as vague and unhelpful as “Organize. Disrupt. Disobey.” Do you have any concrete ideas for how that’ll work? Because right now, you’re just yelling at people in an entirely different country from you to do a bunch of Stuff™, all while you yourself hypocritically whine online about what we are doing.
Again, if you want to be frustrated, do it differently. As it stands, you’re just fighting your own allies because the work they’re doing isn’t what you specifically want to occur. You’re going to have to deal with the fact that sometimes activism isn’t flashy, and sometimes it isn’t easy to spot. That doesn’t mean it’s not useful, and it doesn’t mean it’s not happening. Besides, even if you were right, shame doesn’t tend to be a useful tool for growing action; it just makes you more enemies and encourages spite and doomerism. So save the crit for the Democrat politicians, aye?
- Comment on Trump Is Launching an AI Search Engine Powered by Perplexity 5 weeks ago:
I’m sorry, but the problems with modern-day LLMs and GenAI run far deeper than “who hosts it.”
- Comment on Trump Is Launching an AI Search Engine Powered by Perplexity 5 weeks ago:
Your grandparents stormed the beaches at Normandy
Oh, so what you actually want is for us to dash our bodies upon the stones and get shot to death by cops, is it? What a completely reasonable ask! One that I’m sure you won’t be doing yourself, of course. That’s our job.
I’m not your footsoldier. I’m not throwing myself into a fire just because you’re unsatisfied with the action being taken. I have a life to live, and I’m barely managing that as it is. Your criticism is less than worthless.
Your advice wouldn’t fix America. It’d just get us all killed.
- Comment on How many r are there in strawberry? 1 month ago:
I shudder to think how much electricity got wasted so you could get fooled by an LLM into believing nonsense. Let alone the equally unnecessary follow-up questions.
- Comment on Itch.io are seeking out new payment processors who are more comfortable with adult material | RPS 1 month ago:
Dan Olson’s documentary is as true as ever. Stop recommending an environment-destroying investment scam to people. You aren’t helping.
- Comment on itch.io now seemingly affected by payment processor rules as Steam 1 month ago:
I am very glad we live in the universe where that didn’t happen!
- Comment on Steam is cracking down on porn games, to keep Payment Processors happy. 1 month ago:
Probably because it’s a hell of a lot easier than trying to figure out how to manage payment processing without those processors. Visa and Mastercard are extremely large, and by and large the only way to pay online in the US. Add in PayPal’s and Stripe’s limitations (which are also notoriously shitty) and you don’t really have many options left, so it’s really not worth it. I know the EU has better options, but Steam isn’t based there and I wouldn’t be surprised if they didn’t want to find a way to jump through those hoops.
- Comment on AI Job Fears Hit Peak Hype While Reality Lags Behind 2 months ago:
…What? They’re not threatening to ban you, and they’re not a mod, so they can’t anyways.
That said, announcing to the instance that you don’t care about the consequences of breaking the rules kinda implies that you don’t care about the rules either, and that is not a good look.
- Comment on AI Job Fears Hit Peak Hype While Reality Lags Behind 2 months ago:
this is the era of AI
Uh, sure, so long as you define an “era” as “a period wherein a bunch of C-suites wet themselves over unproven tech.” I hope you realize that something having a lot of money behind it for a few years isn’t indicative that it’s about to revolutionize the world.
I’ve seen what GenAI and LLMs can do. It’s a magic trick; it looks impressive, but for almost every possible use case it just isn’t helpful, and unfortunately for all of us, the magicians (i.e. OpenAI et al) are douchebags on top. This is not tech worth advocating for.
- Comment on Stop Killing Games Initiative passes 700K milestone 2 months ago:
I get accused of being a bot all the time now because I still enjoy writing long-form posts
From cecilkorik, who I was replying to. That kind of bot accusation scarcely ever occurred before LLMs entered the picture. You posted too hastily here and missed a huge chunk of context.
- Comment on Stop Killing Games Initiative passes 700K milestone 2 months ago:
Yeah. I hear you there. The problem I usually have is that, in my experience, the odds of an accusation tend to scale less with posting style and more with the level of disagreement, or with whether or not the poster has personally witnessed something. Basically, “I didn’t see this with my own two eyes/I dislike you, so this is obviously bot behavior.” It’s a conspiracy theorist-like attitude, and it predates LLMs entirely.
Nonetheless, I’m not happy that an entire new form of bot scrutiny has been introduced, and I absolutely cannot wait for GenAI/LLM hype to die the fuck down.
- Comment on Stop Killing Games Initiative passes 700K milestone 2 months ago:
I swear to God, some people these days will cry bot if someone so much as blinks unexpectedly. Chill.
- Comment on ChatGPT's o3 Model Found Remote Zeroday in Linux Kernel Code 3 months ago:
Interesting. I feel like the headline is still bad though. I get why they ran with it, at least — “ChatGPT finds kernel exploit” is more interesting and gets more clicks than “Monkey finally writes Shakespeare.”
- Comment on Signal calls out Microsoft for poor implementation of Windows 11 Recall, blocks it by default 3 months ago:
The company has also warned Microsoft that if its “move fast and break things” ideology impacts the foundation of privacy-preserving apps like Signal, the app may drop support for Windows altogether in the future.
Ooo-hoo-hoo! Now that’s spicy. I like it.
- Comment on The Kids Online Safety Act is back 3 months ago:
Don’t feed the troll, folks.
- Comment on AI hallucinations are getting worse – and they're here to stay 4 months ago:
And that’s making them larger and “think."
Aren’t those the two big strings to the bow of LLM development these days? If those don’t work, doesn’t that mean hallucinations really “are here to stay”?