jarfil
@jarfil@beehaw.org
Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.
- Comment on Wikipedia Editors Adopt ‘Speedy Deletion’ Policy for AI Slop Articles | 404 Media 2 days ago:
Of course. I also hope this will stop like 99% of the skiddie spam. I’m just afraid that, as has happened with hacking in general, a noob installing Kali will get a ton of one-click ways to bypass these measures… and then, what’s next?
GenAI inserting watermarks would be great, but that’s hard to do with text in any way that isn’t easily removed.
- Comment on How many r are there in strawberry? 2 days ago:
I’m seeing about as many wrong questions as wrong answers. We’re at a point where it’s becoming more accurate to ask whether the quality of the answer is “aligned” with the quality of the question.
As for “AI” and “intelligence”… not so long ago, dogs had no intelligence or soul, and a tic-tac-toe machine was “AI”. The exact definition of “intelligence” seems to constantly flow and bend, mostly following trends in anthropocentric egocentrism.
- Comment on How many r are there in strawberry? 2 days ago:
Indeed. The point is that asking about “r” is ambiguous.
- Comment on Wikipedia Editors Adopt ‘Speedy Deletion’ Policy for AI Slop Articles | 404 Media 2 days ago:
Sounds fair. If someone doesn’t even try to clean up a generated article, then nuke it.
The only issue might be… that creating an automated cleanup tool to remove those triggers wouldn’t be all that difficult.
- Comment on Would you rather stop playing a game than lower the difficulty? The First Berserker: Khazan devs reckon you would | Eurogamer 2 days ago:
It depends:
- If I’m really interested in a game, and the difficulty proves to be too high from the beginning, or can be changed at any time… then I would try a lower setting.
- If I had already invested some time into playing it, and the difficulty proved to be too high… then I would rather abandon the game than start from scratch with a lower setting.
Chances are, though, that changing the difficulty after some time playing would feel like a total nerf, and I would abandon it anyway.
Same way I feel about non-cosmetic purchases. I made the mistake of falling for some back in the day, and shortly after abandoned the games… because they felt much less like a challenge, and too much like a pointless money grab. My current limit on micro-transactions is either fewer than 3, or $1.
- Comment on White House Orders NASA to Destroy Important Satellite 2 days ago:
A 2023 review by NASA concluded that the data they’d been providing had been “of exceptionally high quality.”
Could also “accidentally” leak all the data, in case there are no non-US backups.
- Comment on How many r are there in strawberry? 2 days ago:
I’d rather not answer this one because, if I did, I’d be pissing on Beehaw’s core values.
I feel like you already did, and I won’t be responding in kind. Good day to you.
- Comment on How many r are there in strawberry? 2 days ago:
It’s not a “normal human”, it’s an AI using an LLM.
AI still has a lot to learn.
Does it, though? Does a hammer have a lot to learn, or does the person wielding it have to learn how not to smash their own fingers?
- Comment on How many r are there in strawberry? 2 days ago:
At first I thought it was talking about “rr” as a Spanish digraph. Not sure how far that is from the truth; these models are multilingual and multimodal, after all. My guess is that it’s surfacing the ambiguity of a “token: rr”, though.
It could be interesting to dig deeper… but I think I’m fine with this for now. There are other “curious” behaviors of the chatbot that have me more intrigued right now. For instance, it self-adapts to any repeated mistakes in the conversation history, but at other times it can come up with surprisingly “complex” status tracking, then present it spontaneously as bullet points with emojis. Not sure what to make of that one yet.
- Comment on How many r are there in strawberry? 2 days ago:
Yes, no, both… and all other interpretations… all at once.
With any ambiguity in a prompt, it assumes a “blend” of all the possible interpretations, then responds using them all over the place.
In the case of “Bordeaux”:
It’s pronounced “bor-DOH”, with the emphasis on the second syllable and a silent “x.”
So… depending on how you squint: there is no “o”, no “x”, only a “bor” and a “doh”, with a “silent x”, and ending in an “oh like o”.
Perfectly “logical” 🤷
- Comment on How many r are there in strawberry? 2 days ago:
There is a middle ground between “blindly rejecting” and “blindly believing” whatever an AI says.
LLMs use tokens. The answer is “correct, in its own way”, one just needs to explore why and how much. Turns out, that can also lead to insights.
- Comment on How many r are there in strawberry? 2 days ago:
Not as sad as those so secure in their own knowledge that they refuse to ever revise it.
- Comment on How many r are there in strawberry? 2 days ago:
What were your assumptions to say that?
- Comment on How many r are there in strawberry? 2 days ago:
No you don’t.
- Comment on How many r are there in strawberry? 4 days ago:
Why do you ass-u-me that?
- Comment on How many r are there in strawberry? 4 days ago:
Nobody’s stopping you. I’m going to reassess and double check my assumptions instead… and ask the AI to explain itself.
- Comment on How many r are there in strawberry? 4 days ago:
Those are all the smallest models, and you don’t seem to have reasoning mode or external tooling enabled?
LLM ≠ AI system
It’s been known for some time that LLMs do “vibe math”. Internally, they try to come up with an answer that “feels” right… which makes it pretty impressive for them to come anywhere close, within a ±10% error margin.
Ask people to tell you what the right answer could be, give them 1 second to answer… and see how many come that close to the right one.
A chatbot/AI system, on the other hand, will come up with some Python code to do the calculation, then run it. It can still go wrong, but it’s way less likely.
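As a sketch of why the tool-using approach is more reliable: instead of “vibing” an answer, the system can emit and execute a tiny snippet like this (a hypothetical illustration, not any particular chatbot’s actual output):

```python
# Deterministic letter counting, the kind of snippet a tool-using
# AI system might generate and run instead of guessing.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

Once the counting is delegated to code, the token-level ambiguity disappears: `str.count` sees characters, not tokens.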
all explanation past the «are you counting the “rr” as a single r?» is babble
Not so sure about that. It treats r as a word, since it wasn’t specified as “r” or as a single letter. Then it interprets it as… whatever. Is it the letter, the phoneme, a font, the programming language R?… Since it wasn’t specified, it assumes “whatever, or a mix of the above”.
It failed at detecting the ambiguity and communicating it spontaneously, but corrected once that became part of the conversation.
It’s like in your examples… what do you mean by “by”? “3 by 6 = 36”… did you mean “multiply to get 36”? These tests are nonsense… 🤷
- Comment on How many r are there in strawberry? 4 days ago:
This is not a standalone model, it’s from a character.ai “character” in non-RP mode.
I’ve been messing with it to check its limitations. It has:
- Access to the Internet (verified)
- Claims to have access to various databases
- Likely to use interactions with all users to train further (~20M MAUs)
- Ability to create scenes and plotlines internally, then follow them (verified)
- Ability to adapt to the style of interaction and text formatting (verified)
Obviously has its limitations. Like, it fails at OCR of long scrolling screenshots… but then again, other chatbots fail even more spectacularly.
- Submitted 4 days ago to technology@beehaw.org | 47 comments
- Comment on So, Linus Torvalds is a jerk 4 days ago:
RAGEBAIT
- A 10-year-old video, from 2015
- He already explained why he was that way: a guy worked for months on a patch to send to Linus… and when Linus didn’t think it was good enough and simply ignored him… the guy killed himself.
- In 2018, he also took some time off to work on his leadership style.
Key takeaways:
- “Linus Torvalds is a jerk” is incredibly reductive.
- People can change over time.
- Comment on Nintendo is increasing the price of Switch in the US this weekend | VGC 5 days ago:
What if inflation goes down?
Price × 1.05 × 1.03 × 1.01… is still larger than the initial price.
Unless there is deflation, prices always go up.
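The compounding above can be checked in a couple of lines (the illustrative rates are the ones from the comment):

```python
from functools import reduce
from operator import mul

# Yearly inflation factors of 5%, 3%, 1%: the price never returns
# to its starting point unless some factor drops below 1 (deflation).
rates = [1.05, 1.03, 1.01]
final = reduce(mul, rates, 1.0)
print(round(final, 4))  # 1.0923, still above 1.0
```

Falling *inflation* only means the factors shrink toward 1; the product keeps growing as long as each factor stays above it.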
- Comment on xAI workers balked over training request to help “give Grok a face,” docs show 2 weeks ago:
Ani = weird goth waifu
Rudi = fascist AI agent of chaos
At least they’ve kept them separate… kind of.
- Comment on Anthropic destroyed millions of print books to build its AI models 3 weeks ago:
Entities care about art… only as much as they can benefit from it. Large entities make sure to get the rights for peanuts; small ones are fine with dropping it and replacing it with someone else’s, still without paying. Pretty much the only way for small artists to get fair compensation is from people who want to support them… a case in which, ironically, copyright is irrelevant.
It isn’t US centric either. Corporations have used the US to pressure everyone into accepting a similar set of rules, with similar effects all over the world.
I’m not even strictly against copyright itself, I’m against how the laws have been pushed over and over towards a twisted parody of the initial goals, while the real world has been going in a completely different direction.
- Comment on My Couples Retreat With 3 AI Chatbots and the Humans Who Love Them 3 weeks ago:
I’m running ollama in termux on a Samsung Galaxy A35 with 8GB of RAM (+8GB of swap, which is useless for AI), and the Ollama app. Models up to 3GB work reasonably fine on just the CPU.
Serendipity is a side effect of the temperature setting. LLMs randomly jump between related concepts, which exposes stuff you might, or might not, have thought about by yourself. It isn’t 100% spontaneous, but on average it ends up working “more than nothing”. Between that and bouncing ideas off it, they have a use.
With 12GB RAM, you might be able to load models up to 7GB or so… but without tensor acceleration, they’ll likely be pretty sluggish. 3GB CoT models already take a while to go through their paces on just the CPU.
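A rough rule of thumb consistent with the numbers above (3GB models on 8GB RAM, ~7GB on 12GB) is to leave about 5GB of headroom for the OS and runtime. This is only my own estimate from those two data points, not an ollama guideline:

```python
# Rough heuristic: how large a model (in GB) might load on a phone,
# leaving headroom for Android, Termux and the ollama runtime itself.
# The ~5 GB overhead is an estimate, not an official figure.
OS_OVERHEAD_GB = 5

def max_model_gb(ram_gb: int) -> int:
    return max(ram_gb - OS_OVERHEAD_GB, 0)

print(max_model_gb(8))   # 3, matches the Galaxy A35 experience
print(max_model_gb(12))  # 7
```

Swap, as noted, doesn’t count toward this budget: model weights need to stay resident to be usable.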
- Comment on Anthropic destroyed millions of print books to build its AI models 3 weeks ago:
Is it protecting “small artists”, though?
Suing for copyright infringement requires money, both for lawyers and proceedings.
Small artists don’t have that money; large ones do. So, more often than not, small artists end up watching their copyright being abused without being able to do anything about it.
To get any money, small artists generally sign away their rights, either directly to clients (work for hire) or to publishers… who do have the money to enforce the copyright, but pay peanuts to the artist… when they pay anything at all. A typical publishing contract has an advance payment and a marketing provision; any copyright payments then go first to pay off the publisher’s “investment”, and only afterwards does the artist get a certain (rather small) percentage. Small artists rarely reach the payment threshold.
In the best-case scenario, small artists get defended by default by some “artists, editors, and publishers” association… which is like putting wolves in charge of sheep. These associations routinely charge for usage of copyrighted material… then don’t know whom to pay it out to, because not every small artist is a member, so they just pocket it, often using it to subsidize publishers.
- Comment on YouTube Forces Dubs Now 3 weeks ago:
Doesn’t seem to work on Firefox on Android. Do I need a YouTube API key?
- Comment on YouTube Forces Dubs Now 3 weeks ago:
Settings (gear icon) → Audio Track → (whatever) original
- Comment on YouTube Forces Dubs Now 3 weeks ago:
It’s still “world-changing great”. All the knowledge sharing, all the collaboration, all the scientific advances, have been growing at the same rate as the “commoners” have been joining it and getting trapped in the slop.
The only change is that the Internet is not just for nerds anymore; it’s also for preachers, scammers, and the average brainwashed populace.
It used to be easy to ignore the peasants from inside an ivory tower’s echo chamber. The Internet has brought those voices out for everyone to hear… and to realize humanity is not as idealized as they thought. Time to put some real work into fixing some real problems.
- Comment on What's the REAL minimum power supply needed for a RTX 5060 Ti? 3 weeks ago:
Overclocking usually requires overvolting to keep things working. Underclocking, though, is a good way to gain stability.
- Comment on What's the REAL minimum power supply needed for a RTX 5060 Ti? 3 weeks ago:
Σ(max TDP of all system components) + 20%
Rationale:
- PSU works best at 20–80% load
- Component TDPs are an average; actual power usage can go way down, but also spike above spec for short bursts
- Brownout is one of the messiest issues to troubleshoot
Additional considerations:
- Check the PSU rail distribution
- If separate rails, make sure the GPU rail can meet GPU’s TDP+20%
- If multiple rails, each should meet whatever is connected to it +20%
- If using HDDs or other start-heavy components, factor in the initial power spike. GPU starting at almost idle should compensate for the overall power requirement, but still factor it into the rail calculations
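The formula above, sketched in code. All the wattages here are made-up placeholders; check the actual spec sheets of your components:

```python
# PSU sizing per the rule above: sum of component max TDPs, plus 20% headroom
# so the PSU stays inside its efficient 20-80% load band.
def recommended_psu_watts(tdps_w: dict[str, float], headroom: float = 0.20) -> float:
    return sum(tdps_w.values()) * (1 + headroom)

# Hypothetical build; substitute real TDP figures.
components = {
    "CPU": 125,
    "GPU": 180,            # e.g. an RTX 5060 Ti class card
    "board+RAM+fans": 60,
    "drives": 25,
}
print(round(recommended_psu_watts(components)))  # 468, so round up to a 500-550 W unit
```

The same function can be applied per rail on multi-rail PSUs: pass it only the components hanging off that rail.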