anachronist
@anachronist@midwest.social
- Comment on Luigi Mangione Content Is a Challenge for Social Media Moderators - B… 3 weeks ago:
but if we look at the countries on this planet that are the most successful in terms of economics, equality, personal freedom, human rights, etc. then we find countries that made it work through regulation and strong government institutions
Yeah, that’s socialism. The best societies were all socialist to some degree; this includes western Europe and the USA at its mid-century peak. These societies all had aggressive, borderline confiscatory progressive taxation, large-scale government intervention in the economy (in the US, especially aggressive anti-trust), a generous social welfare state, and a large and professionalized civil service.
Remove those things and you quickly slide into a dystopian fascist nightmare state, as the US and parts of Europe like the UK are discovering.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
Every time there’s an AI hype cycle the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or the AlphaGo hype before this one. Remember that?
I was editing my comment down to the core argument when you responded. But fundamentally you can’t make a machine think without understanding thought. While I believe it’s easy to test that Watson or ChatGPT isn’t thinking, because you can prove it through counterexample, the reality is that charlatans can always “but actually” those counterexamples aside by saying “it’s a different kind of thought.”
What we do know, because this is at least the sixth time this has happened, is that the wow factor of the demo will wear off, most promised use cases won’t materialize, everyone will realize it’s still just an expensive stochastic parrot and, well, see you again for the next hype cycle a decade from now.
- Comment on Luigi Mangione Content Is a Challenge for Social Media Moderators - B… 3 weeks ago:
Do you think that when these journalists keep expressing “confusion” about why the public loves Luigi, they’re just pretending not to understand? Or are they so fucking cooked that they can’t see things from the perspective of the class that they’re in?
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
just because any specific chip in your calculator is incapable of math doesn’t mean your calculator as a system is
Firstly, there absolutely are chips in the calculator that can do math. Typically the inside of a calculator is one chip that does virtually everything, plus a few more components that handle I/O. Secondly, it’s possible to point to the exact silicon in the calculator that does the calculations, and to explain exactly how it does them. The fact that you don’t understand it doesn’t mean that nobody does. The way a calculator calculates is something that is very well understood by the people who designed it.
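To make that concrete, here’s a minimal sketch in Python (purely illustrative, not any particular chip’s design) of the boolean adder logic a calculator’s arithmetic unit implements in silicon. Every step is mechanical and fully inspectable:

```python
# Illustrative only: the boolean logic behind integer addition,
# the same kind of circuit a calculator chip implements in silicon.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Add two bits plus a carry bit; return (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def ripple_carry_add(x: int, y: int, width: int = 8) -> int:
    """Add two unsigned integers one bit at a time, least significant bit first."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_carry_add(23, 42))  # 65 -- every gate along the way is knowable
```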
By the way, this brings us to the history of AI, which is a history of 1) misunderstanding thought and 2) charlatans passing off impressive demos as something they’re not. When George Boole invented Boolean algebra he thought he was building a mathematical model of human thought, because he assumed that thought==logic: if he could represent logic in a form he could do math on, he could encode and manipulate thought mathematically.
The biggest clue that human brains are not logic machines is probably that we’re bad at logic. But setting that aside: when Boolean computers were invented, people tried to describe them as “electronic brains,” and there was an assumption that they’d be thinking for us in no time. We’re talking late 1940s here. Turns out those “thinking machines” were, in fact, highly mechanical, and nobody would look at a UNIVAC today and suggest that it was ever capable of thought.
Arithmetic was something we did with our brains, and it was hard, so when we built machines that could do math we concluded we had created mechanical brains. It wasn’t true then and it isn’t true now.
Is it possible that someday we’ll make machines that think? Perhaps. But I think we first need to really understand how the human brain works and what thought actually is. We know that it’s not doing math, or playing chess, or Go, or stringing words together, because we have machines that can do those things and it’s easy to test that they aren’t thinking.
There’s this message pushed by the charlatans that we might create an emergent brain by feeding data into the right statistical training algorithm. They give mathematical structures misleading names like “neural networks” and let media hype and people’s propensity to anthropomorphize take over from there.
- Comment on Luigi Mangione Content Is a Challenge for Social Media Moderators - B… 3 weeks ago:
The fact that Luigi has not been convicted seems to be treated by the media as an irrelevant technicality in this matter. Interesting, given how scrupulous they usually are about dropping “alleged” everywhere.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
Because everything we know about how the brain works says that it’s not a statistical word predictor.
LLMs have no encoding of meaning or veracity.
There are some great philosophical exercises about this, like the Chinese room thought experiment.
There’s also the fact that, empirically, human brains are bad at statistical inference but do not need to consume the entire internet and all written communication ever to have a conversation. Nor do they need to process a billion images of a bird to identify a bird.
Now of course, because this exact argument has been had a billion times over the last few years, your obvious comeback is “maybe it’s a different kind of intelligence.” Well fuck, maybe birds shit ice cream. If you want to worship a chatbot made by a psychopath, be my guest.
- Comment on Luigi Mangione Content Is a Challenge for Social Media Moderators - B… 3 weeks ago:
Also by this author
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
Because it’s an expensive madlibs program…
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
Really determining if a computer is self-aware would be very hard because we are good at making programs that mimic self-awareness. Additionally, humans are kinda hardwired to anthropomorphize things that talk.
But we do know for absolute sure that OpenAI’s expensive madlibs program is not self-aware and is not even on the road to self-awareness, and anyone who thinks otherwise has lost the plot.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
Given that its training data probably has millions of instances of people fearing death, I have no doubt that it would regurgitate some of that stuff. And LLMs constantly “say” stuff that isn’t true. They have no concept of truth and therefore can neither reliably lie nor reliably tell the truth.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 4 weeks ago:
They’re not releasing it because it sucks.
Their counternarrative is they’re not releasing it because it’s like, just way too powerful dude!
- Comment on TikTok set to be banned in the US after losing appeal 1 month ago:
I hate TikTok, but I hate even more that the ban seems to have succeeded this time because of Israel. Lots of people (Romney, etc.) have said that TikTok must be banned because it’s the reason young people don’t support Israel’s genocide.
I don’t like CCP propaganda being fed to Americans, but let’s be real, CCP propaganda about Israel is way more honest than domestic American propaganda.
While I’m on the subject, Facebook, Google, etc, are pretty near equally as evil.
- Comment on Young people were becoming more anxious long before social media, and we should not be fixated on simplistic explanations that reduce the issue to technical variables, researcher says 2 months ago:
On the one hand, I can believe that people are getting more anxious because things are getting bleaker, and that pinning the whole thing on social media is an op.
On the other hand, it also feels like an op from social media companies to insist that their algorithms aren’t preying on people.
- Comment on 22 million on bluesky 2 months ago:
I read Chris Webber’s essay and I kinda agree. Bluesky is really just another Twitter.
That being said, I think we are entering an era of diversification. Perhaps not how we would like (through federation), but rather through people finally understanding that the platform itself is making a choice about what kind of content it serves. We used to have this idea that the platform was just a “neutral third party,” like a phone company. But in fact it’s a publisher with its own editorial line, and it pushes that line through algorithms and through which voices it chooses to amplify or suppress.
As people understand this more, they are going to be much more critical of not just “the media” but also “the platform” and why it chose to show that media to its audience.
- Comment on Large language models not fit for real-world use, scientists warn — even slight changes cause their world models to collapse 2 months ago:
An important characteristic of a model is “stability.” Stability means that small changes in input produce small changes in output.
Stability is important for predictability. For instance, suppose you want to make a customer support portal. You add a bot hoping it will guide the user to the desired workflow. You test the bot by asking it a bunch of variations of questions, probably with some RLHF. But when it goes to production, people will start asking variations of questions that you didn’t test (guaranteed). Ideally, what you want is for it to map those variants to the workflow that best matches what the customer wants. Second best would be for it to say “I don’t know.” But what we have are bots that will just generate some crazy off-the-wall crap, with no way to prevent it.
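To make “stability” concrete, here’s a toy numeric sketch (plain Python, nothing to do with any actual bot): the same tiny perturbation of the input barely moves the output of a contracting map, but completely scrambles the output of a chaotic one.

```python
# Illustrative only: "stability" as small input changes -> small output changes.
# A damped linear map is stable; the logistic map at r=4 is chaotic and is not.

def stable(x: float, steps: int = 30) -> float:
    for _ in range(steps):
        x = 0.5 * x + 0.1          # contraction: input differences shrink each step
    return x

def unstable(x: float, steps: int = 30) -> float:
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)    # chaotic: input differences grow each step
    return x

eps = 1e-6  # a "paraphrase-sized" perturbation of the input
print(abs(stable(0.3) - stable(0.3 + eps)))     # ~1e-15: output barely moves
print(abs(unstable(0.3) - unstable(0.3 + eps))) # outputs end up essentially uncorrelated
```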
- Comment on Why isnt there an aftermarket way to bulk up pinch welds jack points on cars 3 months ago:
Unibodies do have a frame; it’s just not a completely separate assembly like a ladder frame.
As others have said there are lots of places to jack a car. Nobody uses the flange on the rocker panels unless they’re trying to change a tire roadside with the emergency jack.
- Comment on Ok boomer 3 months ago:
Yeah I refuse to pay a “convenience fee.” I’ll mail a f*ing check if they try to charge me.
- Comment on Ok boomer 3 months ago:
Corporate gaslighting be like:
- Comment on Paralyzed Man Unable to Walk After Maker of His Powered Exoskeleton Tells Him It's Now Obsolete 3 months ago:
I’ve been voting for a while and never have I seen a candidate on the ballot who was against capitalism.
- Comment on [deleted] 4 months ago:
Both CEOs are horrible, but the new one is a former McKinsey consultant with a background in finance and the Silicon Valley C-suite. According to statements she has put out, her strategy is layoffs and AI.
- Comment on Our basic assumptions about photos capturing reality are about to go up in smoke. 5 months ago:
Reality is about to get all melty and people are gonna have six fingers.
- Comment on Disney wants a wrongful death lawsuit thrown out because the plaintiff had Disney+ 5 months ago:
Yes it is. It is called “forced arbitration” and pretty much every contract you are compelled to sign has it.
In any kind of just society with a fair legal system it would not be legal. But that doesn’t describe us or our legal system.
- Comment on A nightly Waymo robotaxi parking lot honkfest is waking San Francisco neighbors 5 months ago:
My guess is you’re seeing the computer go into a reject loop until a human operator finally takes over.
- Comment on What is Firefox supposed to do? 6 months ago:
My experience is that Firefox often has problems on Google-owned properties. Either performance/responsiveness or functionality just not working. Why this would be is left as an exercise for the reader.
- Comment on Police Really Want a Cybertruck, Email Shows 6 months ago:
Municipal police mostly came out of the Great Railroad Strike of 1877.
- Comment on Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable 6 months ago:
Funny you should mention that. McKinsey published a paper a few months back concluding that GenAI will take over most of the jobs in America because it’s good at doing what McKinsey associates do. What the authors missed is that the job of a McKinsey associate is to confidently spout nonsense all day long, which is exactly what ChatGPT is programmed to do.
- Comment on Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable 6 months ago:
American Psycho (Sam Altman) and his chorus have been hyping AI and the rest of the world’s reaction has ranged from “these guys seem smart and chatgpt is impressive so what do I know?” to “isn’t this guy a bitcoin bro?”
- Comment on Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable 6 months ago:
Naw, if they’re publicly bashing it they’ve already dumped all the downside risk onto their customers and now they’re net short.
- Comment on Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable 6 months ago:
Game of Life has cool emergent properties that are a lot more interesting and fun to play with than LLMs. LLMs also have emergent properties like, for instance, failing classification due to the manipulation of individual image pixels.
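For what it’s worth, all of that emergent behavior in Life falls out of two simple rules. A minimal sketch (plain Python, wrapping grid for convenience) that steps a glider:

```python
# Conway's Game of Life on a small wrapping grid: two simple rules,
# surprisingly rich emergent behavior (here, a glider that walks diagonally).

def step(cells: set[tuple[int, int]], size: int = 10) -> set[tuple[int, int]]:
    counts: dict[tuple[int, int], int] = {}
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    key = ((x + dx) % size, (y + dy) % size)
                    counts[key] = counts.get(key, 0) + 1
    # A cell is alive next step with 3 neighbours, or with 2 if it was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):   # after 4 steps the glider has shifted one cell diagonally
    glider = step(glider)
print(sorted(glider))
```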
- Comment on Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable 6 months ago:
I suspect Intuit fired those workers for other reasons (free file) and is using AI as an excuse, because to admit that free file is an existential threat to their business is to admit that the company has no long-term business prospects.