lvxferre
@lvxferre@mander.xyz
The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.
- Comment on 🐙 Octopus is Octopus 🐙 16 hours ago:
Penis - Penorum
WROOOOONG! Now write the full declension table on that wall. And make sure to draw some pictures with it, so you never forget the word!
- Comment on 🐙 Octopus is Octopus 🐙 16 hours ago:
Lv7: the legs of the two octopodum got tangled, so the octopodes asked for help from two other octopodibus.
ENOUGH OF THE NOMINATIVE TYRANNY!
- Comment on Silicon Valley has forgotten what normal people want 2 days ago:
At its most absurd nadir, one is reminded of Juicero, a company that sold a $400 juicer that did the same work as squeezing its proprietary juice packs with one’s bare hands.
It does.
- Comment on Mozilla announced "Thunderbolt", their open-source and self-hostable AI client 1 week ago:
My comment doesn’t, but the OP does. Four downvotes on Beehaw is quite a lot, given that the local users don’t downvote. The same thread is in the negatives in one of the cross-posts, even though it’s on-topic.
- Comment on Mozilla announced "Thunderbolt", their open-source and self-hostable AI client 1 week ago:
People, please stop shooting the messenger. Please.
With that out of the way: I wish Mozilla didn’t waste so much money on chasing the latest trend of the season, and instead used it for its main products. Including Thunderbird. The one asking for donations.
Some years from now Thunderbolt will likely pop up in this list of abandoned Mozilla products. Because it isn’t the result of Mozilla finding a niche where an AI product would benefit users; it’s simply execs chasing the latest trend.
*Beehaw users are likely not seeing this, but this post has a bunch of downvotes.
- Comment on ChatGPT’s latest stylistic quirk is sinister, infuriating – and absolutely everywhere 1 week ago:
That makes sense; it would be a mix of “if you can do it and I can’t, you must be cheating” and “your a bot than you’re arguement is invalid” ad hominem.
I think unnecessary combativeness might also be a factor. I’ve noticed people on the internet who want to fight against “something”, it doesn’t matter what; so they pick any low-hanging fruit they can find to fight you.
- Comment on ChatGPT’s latest stylistic quirk is sinister, infuriating – and absolutely everywhere 1 week ago:
I’m actually using those resources (em dashes, three-item lists, “it’s worth noting that”, “it’s not X, it’s Y”, etc.) more after AI popped up. They’re a damn good way to detect assumptive people, eager to conclude based on little to no info or reasoning; the same ones OP is complaining about. They don’t want a conversation at all, they want to whine; so if you give them a low-hanging fruit you can detect them early and block them as noise and dead weight.
That’s in my “casual” writing style, though. Professionally (as a translator) I mostly play by the tune, trying to preserve the style of the original. (Plus I barely translate things into English, it’s usually into Portuguese, very rarely Italian.)
That might not necessarily be the case – there is a possibility every example is completely organic – but it’s a sign of the times that we can’t just relax and assume the things we see and hear were made by people.
Guys, I found em dashes! The author is a bot! Bring me my pitchfork! /jk (those are en dashes, by the way.)
- Comment on AI learns language from skewed sources. That could change how we humans speak – and think 1 week ago:
I think the text leaves the worst parts out: assumptions, decontextualisation, faulty reasoning, focusing on individual words instead of what they mean, and things like this. As in, issues with that part of comprehension that depends on logic, not on language proficiency.
All of those were already a problem before chatbots. But since chatbot output is really bad at those things, I think increased exposure to chatbots might make the problem worse.
- Comment on Google removes Doki Doki Literature Club! from the Play Store 1 week ago:
And it’s such a great game. It exploits really well the expectations of visual novels and games in general, first pretending to play along with them and then breaking those expectations. (I’m trying my hardest to not spoil it, seriously.)
But Google doesn’t care about it. Or about sensible rules. Or enforcing fairly the very rules it expects you to follow.
- Comment on AI firms and their US military ties, "a whole civilization will die tonight" edition 1 week ago:
That threat did not materialize, and now some apologists are saying that it was just one of Trump’s deranged bargaining tactics, as if that excuses such categorical declarations of mass violence from a US president
Even if playing along with this fucking farce of “just” a “bargaining tactic” (instead of accurately representing it as commitment to war crimes), and even if we brush off all moral standards (we should not), that’s still bloody stupid. He’s making sure the Iranian population gets as motivated as possible to resist, while the United-Statian population resists against any sort of war effort. He’s shooting his own ~~foot~~ split hoof.
Currently, OpenAI, Microsoft, Google, Amazon, xAI, Oracle and even Meta have large contracts with the US military.
That should surprise nobody. Let’s play “spot who you know”:
But this week should serve as a clarifying moment.
Aah, cut the crap. If this is a clarifying moment for anyone, the person in question has been living under a rock since forever.
- Comment on Chimpanzee empire falls apart in rare instance of division and deadly violence 1 week ago:
The only previously reported case took place in the 1970s at Gombe, Tanzania, during Jane Goodall’s long-term study.
I was reading about it (the Four Years War) rather recently; it was really nasty, seven of the adult males died in it (all of the Kahama clan’s, plus one from Kasakela). Granted, this might not look like a big deal, but the community had only 14 adult males; half of them died in the war.
I also found further info on the Ngogo community here. 32 adult males, 50 adult females, 166 members in total in 2011. That’s fucking huge.
“What’s especially striking is that the chimpanzees are killing former group members,” says Aaron Sandel, associate professor of anthropology at UT Austin and the study’s lead author. “The new group identities are overriding cooperative relationships that had existed for years.”
It’s the same with us humans, too: gaining trust takes years, but losing it takes a few seconds. As soon as you’re identified with “the enemy”, you’ve already lost that trust, and things only spiral down.
“If relational dynamics alone can drive polarization and lethal conflict in chimps without language, ethnicity, or ideology, then in humans, those cultural markers might be secondary to something more basic,” says Sandel.
I admit I don’t know enough about chimps to say anything concrete, but what Aaron Sandel is saying sounds sensible. Multilingual communities are often stable and can last centuries; but once there’s “something” missing, usually in the material conditions, you see war. I believe this applies to the rest of culture, too.
- Comment on FUXUZIYXIKHCV 1 week ago:
Urgh. That’s horrifying:
- Comment on FUXUZIYXIKHCV 1 week ago:
To be fair here’s how cats would be reconstructed if they went extinct and we had to rely on fossils:
…nah, screw that, the lady in my pic is still hella charming, the one in the OP is an abomination!
And they didn’t even make some joke about how dumb (burro) it looks! hglksflksdlllksdf
- Comment on how things become science 1 week ago:
[Replying to myself as this is a tangent]
I think the “bots can generate misinfo even if you just feed them correct info” point deserves its own example.
Let’s say you’re making a model. It looks at the preceding word, and tries to predict the next. And you feed it the following sentences, both true:
1. Humans are apes.
2. Cats are felines.
From both, the bot “learnt” five words. And also how to connect them; for example, “are” can be followed by either “apes” or “felines”, both having the same weight. Then, as you ask the bot to generate sentences, it generates the following:
3. Humans are felines.
4. Cats are apes.
And you got bullshit!
What large models do is a way more complex version of the above, looking at way more than just the immediately preceding word, but it’s still the same in spirit.
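The toy model above can be sketched in a few lines of Python. (A minimal illustration only; the corpus and variable names are made up for the example, and real models use weighted probabilities over far longer contexts.)

```python
from collections import defaultdict

# "Train" a toy bigram model: record which word can follow which.
corpus = ["humans are apes", "cats are felines"]
successors = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        successors[prev].add(nxt)

# Generate every sentence the model considers equally plausible.
sentences = []
for start in ("humans", "cats"):
    for middle in sorted(successors[start]):      # only "are"
        for end in sorted(successors[middle]):    # "apes" or "felines", same weight
            sentences.append(f"{start} {middle} {end}")

print(sentences)
# ['humans are apes', 'humans are felines', 'cats are apes', 'cats are felines']
```

Both true sentences come back out, but so do “humans are felines” and “cats are apes”, which were never in the training data. The bullshit is produced by the mix-and-match itself, not by bad input.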
- Comment on how things become science 1 week ago:
I’m failing to see how this is different from making up a fact and then spreading it to news outlets.
They uploaded the papers to a single preprint server. That’s important.
Preprints are papers predating any sort of peer review; as such, there’s a lot of junk mixed in — no big deal if you know the field, but a preprint server is certainly not a source of reliable information, nor should it be treated as such. On the other hand, news outlets are expected to provide you reliable information, curated and researched by journalists.
And peer review is a big fucking deal in science, because it’s what sorts all that junk out. Only a muppet who doesn’t fucking care about misinformation would send bots to crawl preprints, and feed the resulting data into a large model.
So no, your comparison is not even remotely accurate. What they did is more like writing bullshit in a piece of paper, gluing it on a random phone pole, and checking if someone would repeat that bullshit.
They also went out of their way to make sure that no reasonably literate human being would ever confuse that thing with an actual scientific paper. As the text says:
- naming an eye condition as bixonimania
- “this entire paper is made up”
- “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”
- “Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise”
- “the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”
Feeding false information to an LLM is no different that a magazine. It only regurgitates what’s been said.
Yes, it is different. Because the large token model won’t simply “repeat” things; it’ll mix and match them and form all sorts of bullshit, even if you didn’t feed it any bullshit.
Here’s an example of that, fresh from the oven. I don’t reasonably expect people to be feeding misinfo regarding Latin pronunciation into bots, and yet a lot of this table is nonsense:
Compare the table above with this table and this one and you’ll notice the obvious errors:
- short /e i o u/ being phonetically transcribed as [e i o u] instead of [ɛ ɪ ɔ ʊ]. That’s as silly as confusing English “bit” and “beet”.
- macron (not “mācron”, it’s being used in an English sentence) does NOT mark “accusative or ablative”. It marks long vowels, period.
- “nōs” being transcribed with a short vowel, even if the bloody bot put the macron over the spelled form.
- “nostr(um)”? No dammit, it’s “nostrī” or “nostrum”. The bot is implying some “nostr” form that simply doesn’t exist, this shit isn’t even allowed by Latin phonotactics.
- plus more; if I were to make an exhaustive list of this shite I wouldn’t be done this week.
All it had to do was to copy info from Wiktionary, as it includes even phonetic and phonemic info. But since the bot is not just “regurgitating” info — it’s basically predicting what should come next — it’s mixing-and-matching shit into nonsense.
It isn’t going to suddenly start doing science on its own to determine if what you’ve said is true or not.
If you actually read the bloody article instead of assuming, you’d know why the researchers did this: they don’t expect the bot to do science on its own, they expect people to treat info from those bots as potentially incorrect.
Its job is to tell you what color the sky is based on what you told it the color of the sky was.
And your job is to not trust it if it tells you “Yes, you are completely right! The colour of the sky is always purple. Do you need further information on other naturally purple things?”
- Comment on Real 2 weeks ago:
Weird. And cool.
- Comment on Anon is worried about AAA sales 2 weeks ago:
(It’s the first time I’ve heard this band, and I’m fucking loving it.)
Let’s do it differently: Eisbrecher’s version when the A³ gaming industry dies, Bach’s when the pop music industry dies, and Evangelion’s when Hollywood does so. Deal?
- Comment on The birbs are woke 2 weeks ago:
So you walk like a duck, quack like a duck, but someone plucked your feathers off??? :P
- Comment on The birbs are woke 2 weeks ago:
Warning: the poster above is a bird pretending to be a human being. Discretion is advised.
- Comment on "CEO said a thing!" journalism involves parroting the claims of a business leader or executive with absolutely no context, correction, or challenge whatsoever, no matter how elaborate the delusion 3 weeks ago:
I see some angry person. The good type of angry — directing his anger at the right things.
To be clear: Bode is not criticising the fact that journalists quote what CEOs say. He’s criticising the fact that they do it and call it a day, as if saying “trust the CEO”.
It goes without saying that CEOs are really loud when saying what they want the sucker (you) to believe. So if that’s all you want, you need no journalist. A journalist is only useful if you want to know the factual reality; but for that they need to contextualise and challenge the claims, not just parrot them.
I’d end with some noble call for the U.S. media industry to do better, but it’s abundantly clear they don’t want to.
If it’s any consolation it isn’t just the United-Statian media.
- Comment on Anon enjoys videogame music 4 weeks ago:
>greentext in plain colour
It doesn’t feel as satisfying.
- Comment on Anon enjoys videogame music 4 weeks ago:
It should, but I don’t expect either Lemmy or PieFed to implement it. Because, like, the original role of greentext was quoting, and we already have quote blocks.
- Comment on Anon enjoys videogame music 4 weeks ago:
I’ve been using
>code blocks
for that, but I low-key want actual greentext here.
- Comment on The Digital Museum of Plugs and Sockets 4 weeks ago:
I remember ranting about it in the past, but, basically: the page regarding Brazil is fairly accurate, you’ll find 9001 types of plugs, and a mix of 127V and 220V (no underlying plug vs. voltage pattern). It reaches the point where I’ve seen people daisy-chaining adapters to get their stuff working; it’s bloody hell.
Some residences have both voltages. Including mine; it’s a few 220V sockets for highly demanding appliances, and the rest is 127V.
Brazil aims to phase out the other types; see footnote.
(1) beginning January 1st, 2007 new residential, commercial and industrial wall outlet installations must comply with this new standard, and
(2) beginning August 1st, 2007 imported electrical devices must comply with NBR 14136 regulations. It is the aim to gradually phase out NEMA flat blade and Schuko devices in Brazil.
Hello, I come from the future, 19 years past 2007. The mess is still there. Try harder, dammit. It’s a prime example of how completely dysfunctional the federal government is; I bet shit would already be solved if it were up to the states, at least some of them.
- Comment on 4 weeks ago:
That’s why you should only invoke foocubi — dealing with sour demons is a pain.
…my ⟨L d α⟩ look exactly like this, but unlike whoever wrote this table, my ⟨o a⟩ are indistinguishable too. And my medial ⟨s⟩ from ⟨f⟩. My calligraphy goes from amazing to nasty depending on how much effort I put in.
- Comment on Microsoft keeps insisting that it's deeply committed to the quality of Windows 11 4 weeks ago:
This isn’t even a “lie”. It’s worse than that: it’s an empty statement misleading readers to see meaning where there’s none.
Commitment is intentions. Even between human beings, you don’t know someone else’s intentions, at most what they claim about them; so there’s no way to check if the “I’m committed to $thing” claim is true or false. But to make it even worse, a company is not a human being, it is simply an abstraction, unable to have “intentions”.
So, let’s call bread “bread” and wine “wine”: people working for Microslop noticed it’s being called “Microslop”, they know why, and they’re trying to minimise brand damage — trying to convince you that Microslop does not output slop, and that the Moon is made of green cheese. That’s it.
- Comment on Jeff Kaplan is sick of hearing you demonize games you weren't going to play anyway: 'Shut the f**k up. No one cares. We don't need to hear that you weren't into it' 5 weeks ago:
The name of his studio, “Kintsugiyama”, is too long. Can I clip the “sugiy”? It sounds better! :^) …okay, disregard the shitty joke.
Serious now: Kaplan and Ford’s takes are fairly reasonable. Online forums (including Reddit… and Lemmy/PieFed, by the way) seem to trigger in people a natural instinct to fit in, as part of a group. This leads to the adoption of similar values and judgements, and in turn to directing praise and criticism towards the same things — even when you’re in no position to do so, because you didn’t experience the thing nor plan to. In practice this means yes, it’s harder to say “I like it” when everyone else dislikes it.
And people can get reeeeeeeeeeeeeeeally loud with this shite.
Also, I like the way they voiced this. It’s really hard to misconstrue it as “don’t criticise things”. Criticism is often healthy, sometimes even really harsh criticism; it’s just that sometimes it takes some experience to even be constructive, and that’s the case here.