BarryZuckerkorn
@BarryZuckerkorn@beehaw.org
He’s very good.
- Comment on AI Seeks Out Racist Language in Property Deeds for Termination 1 month ago:
increasing layers of Innuendo
Well, also, these are documents written in the past, before 1948, when the Supreme Court invalidated the effect of racial covenants.
But the language remains, with no legal effect. It's still there, though, and should be eliminated. There's no cat-and-mouse game here, just the need to clean up something left over from the past.
- Comment on Don’t believe the hype: AGI is far from inevitable 2 months ago:
This isn’t my field, and some undergraduate philosophy classes I took more than 20 years ago might not be leaving me well equipped to understand this paper. So I’ll admit I’m probably out of my element, and want to understand.
That being said, I’m not reading this paper with your interpretation.
This is exactly what they’ve proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called) in a tractable time, which is a known NP-Hard problem. Ergo, the current learning techniques which are tractable will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you can use the exact same proof presented in the paper again).
But they’ve defined the AI-by-Learning problem in a specific way (here’s the informal definition):
Given: A way of sampling from a distribution D.
Task: Find an algorithm A (i.e., ‘an AI’) that, when run for different possible situations as input, outputs behaviours that are human-like (i.e., approximately like D for some meaning of ‘approximate’).
I read this definition of the problem to be defined by needing to sample from D, that is, to “learn.”
The explicit point is to show that it doesn’t matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI
But the caveat I'm reading, implicit in the paper's definition of the AI-by-Learning problem, is that it's about an entire class of methods: learning from a perfect sample of intelligent outputs in order to itself mimic intelligent outputs.
General Intelligence has a set definition that the paper’s authors stick with. It’s not as simple as “it’s a human-like intelligence” or something that merely approximates it.
The paper defines it:
Specifically, in our formalisation of AI-by-Learning, we will make the simplifying assumption that there is a finite set of possible behaviours and that for each situation s there is a fixed number of behaviours Bs that humans may display in situation s.
It's just defining an approximation of human behavior, and saying that achieving that formalized approximation through inference from training data is intractable. So I'm still seeing a definition of human-like behavior, which would by definition be satisfied by human behavior itself. That's the circular reasoning here, and whether human behavior fits some other definition of AGI doesn't actually affect the proof. They're proving that learning to be human-like is intractable, not that achieving AGI is itself intractable.
I think it’s an important distinction, if I’m reading it correctly. But if I’m not, I’m also happy to be proven wrong.
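For reference, the reduction logic in the quoted argument is just the standard contrapositive. A sketch only, with "HardProblem" as a stand-in for whatever the paper's NP-hard problem is actually called:

```latex
% If the known NP-hard problem reduces in polynomial time to
% AI-by-Learning, a tractable learner would make it tractable too:
\text{AI-by-Learning} \in \mathrm{P}
  \;\Longrightarrow\; \text{HardProblem} \in \mathrm{P}
% Contrapositive: assuming HardProblem is intractable,
\text{HardProblem} \notin \mathrm{P}
  \;\Longrightarrow\; \text{AI-by-Learning} \notin \mathrm{P}
```

Note this says nothing about AGI reached by routes other than the paper's AI-by-Learning formalization, which is exactly the distinction at issue.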
- Comment on Don’t believe the hype: AGI is far from inevitable 2 months ago:
I can’t think of a scenario where we’ve improved something so much that there’s just absolutely nothing we could improve on further.
Progress itself isn’t inevitable. Just because it’s possible doesn’t mean that we’ll get there, because the history of human development shows that societies can and do stall, reverse, etc.
And even if all human societies tend towards progress, progress could still hit dead ends and stop there. Conceptually, it's like climbing a mountain with the algorithm "if there is a higher elevation near you, go towards it, and avoid stepping downward in elevation." Eventually that algorithm brings you to a local peak. But the local peak might not be the highest point on the mountain, and while it may have been theoretically possible to reach the true peak from the starting point, the climber who insists on never stepping downward is now stuck. Or it may be possible to reach the true peak, but only by climbing downward for a time and then climbing back up past elevations we've already been to, on paths we hadn't been on. One can imagine a society that refuses to step downward, breaking the inevitability of progress.
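The greedy climb described above can be sketched as a tiny simulation. The elevation function is made up purely for illustration; the point is only that the never-step-downward rule stops at whichever peak is nearest:

```python
# Greedy hill climbing on a 1-D "mountain": always step to a strictly
# higher neighbor, never downward. Stops at the first local peak.
def hill_climb(elevation, start):
    x = start
    while True:
        best = max((x - 1, x + 1), key=elevation)
        if elevation(best) <= elevation(x):
            return x  # local peak: no higher neighbor, so we're stuck
        x = best

# Two peaks: a small one at x = 2 and the true summit at x = 8.
def elevation(x):
    return max(10 - (x - 2) ** 2, 30 - (x - 8) ** 2)

print(hill_climb(elevation, 0))  # reaches only the local peak at x = 2
print(hill_climb(elevation, 6))  # reaches the true summit at x = 8
```

Starting on the wrong slope, the climber ends at the minor peak and can never reach the summit without first descending.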
This paper identifies a specific dead end and advocates against hoping for general AI through computational training. It is, in effect, arguing that even though we can still see plenty of places that are higher elevation than where we are standing, we’re headed towards a dead end, and should climb back down. I suspect that not a lot of the actual climbers will heed that advice.
- Comment on Don’t believe the hype: AGI is far from inevitable 2 months ago:
That’s assuming that we are a general intelligence.
But it’s easy to just define general intelligence as something approximating what humans already do. The paper itself only analyzed whether it was feasible to have a computational system that produces outputs approximately similar to humans, whatever that is.
True, they’ve only calculated it’d take perhaps millions of years.
No, you’re missing my point, at least how I read the paper. They’re saying that the method of using training data to computationally develop a neural network is a conceptual dead end. Throwing more resources at the NP-hard problem isn’t going to solve it.
What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.
- Comment on Bruce Schneier: China Possibly Hacking US “Lawful Access” Backdoor 2 months ago:
Though a superhero, Bruce Schneier disdains the use of a mask or secret identity as ‘security through obscurity’.
- Comment on Don’t believe the hype: AGI is far from inevitable 2 months ago:
The paper’s scope is to prove that AI cannot feasibly be trained, using training data and learning algorithms, into something that approximates human cognition.
The limits of that finding are important here: it’s not that creating an AGI is impossible, it’s just that however it will be made, it will need to be made some other way, not by training alone.
Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.
So it may still be the case that AGI via computation alone is possible, and that creating such an AGI will not require solving an NP-hard problem. But this paper closes off one potential pathway that many believe is viable (if the paper's proof is actually correct; I'm definitely not the person to make that evaluation). That doesn't mean they've proven there's no pathway at all.
- Comment on Facebook admits to scraping every Australian adult user's public photos and posts to train AI, with no opt-out option - ABC News 3 months ago:
Yes, but they only performed the training on the posts and images set to be publicly accessible to anyone. In a sense, they took the public permissions as an indicator that they could use that data for more than just providing the bare social media service.
- Comment on Facebook admits to scraping every Australian adult user's public photos and posts to train AI, with no opt-out option - ABC News 3 months ago:
Isn’t the opt-out option to just not make the photos/posts globally public?
- Comment on Microsoft in damage-control mode, says it will prioritize security over AI 6 months ago:
The non-cynical answer is that they’re counting contractor/vendor time in this full time equivalent answer. Which would probably be a good thing, because I imagine that the best people in cybersecurity aren’t actually employees of Microsoft.
- Comment on 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement 6 months ago:
To put it in more simple terms:
When Alice chats with Bob, Alice can’t control whether Bob feeds the conversation into a training data set to set parameters that have the effect of mimicking Alice.
- Comment on The Paradox of Blackmarket Wired Bluetooth Apple headphones. 6 months ago:
Your comment missed the mark entirely.
Not sure why you’re saying that. I wasn’t disagreeing with any of your points, but adding to them another angle that answered the parent comment’s concerns about whether leaving wifi on for airplane mode drains battery. You addressed the cellular radio side, and I was adding a separate point about the WiFi radio that complements what you were saying.
- Comment on The Paradox of Blackmarket Wired Bluetooth Apple headphones. 6 months ago:
Also, phones don't use much power to passively listen for Wi-Fi beacons. They're not transmitting until they actually try to join a network, so leaving Wi-Fi on doesn't cost significant power unless you happen to be near a remembered network.
- Comment on OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity 6 months ago:
Your scenario 1 is the actual danger. It's not that AI will outsmart us and kill us. It's that AI will trick us into trusting it with more responsibility than it can safely handle, to disastrous results.
It could be small scale, low stakes stuff, like an AI designing a menu that humans blindly cook. Or it could be higher stakes stuff that actually does things like affect election results, crashes financial markets, causes a military to target the wrong house, etc. The danger has always been that humans will act on the information provided by a malfunctioning AI, not that AI and technology will be a closed loop with no humans involved.
- Comment on The Paradox of Blackmarket Wired Bluetooth Apple headphones. 6 months ago:
to my knowledge, Bluetooth doesn’t work with airplane mode
The radio regulations were amended about 10 years ago to allow both Bluetooth and Wi-Fi frequencies to be used on airplanes in flight. Cell phone manufacturers have accordingly shifted what airplane mode actually means, to the point that some phones no longer even turn off Wi-Fi when airplane mode is turned on. And regardless of defaults, both wireless protocols can be toggled independently of airplane mode on most phones now.
an airplane full of 100 people all on Bluetooth might create some noise issues that would hurt the performance
I don't think so. Bluetooth uses so little bandwidth that it can handle many simultaneous users. It's a low-power transmission method that bursts a signal only a tiny percentage of the time, so the odds of a collision for any given transmission are low, and the protocol is designed to be robust, tolerating a decent amount of interference before performance degrades.
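A rough back-of-envelope sketch of why that holds, assuming classic Bluetooth's 79 hopping channels and a made-up 5% transmit duty cycle (both the duty cycle and the independence of devices are simplifications):

```python
# Rough odds that at least one other device is transmitting on our
# channel at the same instant. Channel count matches classic Bluetooth's
# 79 hopping channels; the 5% duty cycle is a guessed, illustrative value.
def collision_probability(n_other_devices, channels=79, duty_cycle=0.05):
    # Chance a single other device occupies our channel right now:
    p_one = duty_cycle / channels
    # Chance none of the other devices collide with us:
    p_clear = (1 - p_one) ** n_other_devices
    return 1 - p_clear

for n in (10, 50, 100):
    print(n, round(collision_probability(n), 3))  # ~0.006, ~0.031, ~0.061
```

Even with 100 other active devices, any given burst collides only about 6% of the time under these assumptions, and the protocol's retransmissions absorb losses of that order.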
- Comment on "X": Far-right conspiracy theorists have returned in droves after Elon Musk took over the former Twitter, new study says 7 months ago:
Given I’ve been described as a right-wing conspiracy theorist for saying that capitalist countries experience less starvation than socialist ones, I’m going to have to take this assessment with a grain of salt.
That’s not the methodology used, unless your description of starvation literally includes QAnon hashtags:
Tracking commonly used QAnon phrases like “QSentMe,” “TheGreatAwakening,” and “WWG1WGA” (which stands for “Where We Go One, We Go All”), Newsguard found that these QAnon-related slogans and hashtags have increased a whopping 1,283 percent on X under Musk.
And if not, then I’m not sure what your observations add to the discussion.
- Comment on [deleted] 7 months ago:
One of the worst companies in recent years has been Purdue Pharma, which worked with the also shitty McKinsey to get as many Americans addicted to opioids as possible, and make billions on the epidemic.
Both Purdue and McKinsey were privately held.
Koch industries is also a terrible privately held corporation.
Being public versus private doesn’t make a difference, in my opinion.
- Comment on [deleted] 7 months ago:
After being acquired by Google, YouTube got better for years (before getting worse again). Android really improved for a decade or so after getting acquired by Google.
The NeXT/Apple merger made the merged company way better. Apple probably wouldn't have survived much longer without NeXT.
I’d argue the Pixar acquisition was still good for a few decades after, and probably made Disney better.
A good merger tends to be forgotten, where the two different parts work together seamlessly to the point that people forget they used to be separately run.
- Comment on He revealed the secrets ! 8 months ago:
Hmm, is this a new take on the “Stop Doing Math” meme?
- Comment on The race to decarbonise the world’s economy risks repeating the mistakes of the colonial era by building industries on forced and child labour, rights advocate warns 8 months ago:
If construction is delayed by an injunction
Can you name an example? Because the reactor constructions I've seen get delayed have run into plain old engineering problems. The 4 proposed new reactors at Vogtle and V.C. Summer ran into cost overruns because of production and QA/QC issues requiring expensive redesigns mid-construction, after regulatory approval and licensing had already been granted. The V.C. Summer project was canceled after running up $9 billion in costs, and the Vogtle projects are about $17 billion over the original $14 billion budget, at $31 billion (and counting, as reactor 4 has been delayed once again over cooling system issues). The project is also about 8 years behind schedule (originally proposed to finish in 2016).
And yes, litigation did make those projects even more expensive, but it was mostly litigation about other things (like energy buyers trying to back out of their commitments to buy power because the reactors were taking too long), not litigation intended to slow construction down.
The small modular reactor project in Idaho was just canceled too, because of the mundane issue of interest rates and buyers unwilling to commit to the high prices.
Nuclear doesn’t make financial sense anymore. Let’s keep the plants we have for as long as we can, but we might be past the point where new plants are cost effective.
- Comment on The race to decarbonise the world’s economy risks repeating the mistakes of the colonial era by building industries on forced and child labour, rights advocate warns 8 months ago:
IT IS SAFER, CHEAPER, AND LESS POLLUTING THAN LITERALLY ANY OTHER OPTION!
It’s not cheaper. New nuclear power plants are so expensive to build today that even free fuel and waste disposal doesn’t make the entire life cycle cheaper than solar.
- Comment on But Claude said tumor! 8 months ago:
- Comment on AI unicorn Inflection abandons its ChatGPT challenger as CEO Mustafa Suleyman joins Microsoft 8 months ago:
If your company's secret sauce is that it employs a particular person, then your moat is whatever it takes to poach that person. If that person is willing to leave behind whatever intellectual property, unvested equity, and relationships they have, then your company was never that valuable to begin with.
- Comment on [deleted] 8 months ago:
free association includes the freedom to not associate.
Reminds me of the Simpsons episode where the aliens campaign for the US presidency, and can’t figure out why “abortions for all” and “abortions for none” are both unpopular opinions.
In other words, it’s about freedom of choice, not mandatory association.
- Comment on [deleted] 8 months ago:
That’s the fundamental tension here.
The right to control your own posts, after posting, imposes an obligation on everyone who archives your posts to delete when you want them deleted.
For most of the internet, the balance is simply that a person who creates something doesn’t get to control it after it gets distributed to the world. Search engines, archive tools, even individual users can easily save a copy, maybe host that copy for further distribution, maybe even remix and edit it (see every meme format that relies on modification of some original phrase, image, etc.).
Even private, end to end encrypted conversations are often logged by the other end. You can send me a message and I might screenshot it.
A lot of us who were active on the Internet in the 90's, participating in discussions around philosophical ideas like "information wants to be free" and "intellectual property is theft" and things like copyleft licenses (GPL) and Creative Commons licensing, wanted that to be the default for content created on the internet: freely distributed, never forgotten. Of course, that runs into tension with privacy rights (including the right to be forgotten), and possibly some appropriation concerns (independent artists not getting proper credit and attribution as something gets monetized). It's not that simple anymore. The defaults need to be chosen with conscious decision-making, and anyone who chooses to go outside those defaults should be able to do so knowing what tradeoffs they're making.
- Comment on [deleted] 8 months ago:
The Twitter deal got canceled, so the interview was posted to YouTube instead. Which, honestly, is the better service for long form video.
- Comment on The march towards an all-EV future hit a major roadblock. What went wrong? 11 months ago:
While I agree with most of the articles points, even if they and the title are nearly all phrased in very hyperbolic language and the extent of the “slowdown” has been rather overstated given that sales are still increasing
I'd argue it's an outright falsehood. "Slowdown" implies that sales are going down. They're actually going up; it's the rate of growth that has slowed. The subtitle in the article here:
Fewer people are buying electric cars — the slowdown hints at a problem at the heart of America’s EV push.
This is literally false. More people than ever are buying electric cars.
What has actually happened is that the EV market went from supply-constrained (where manufacturers were building them as fast as they could and selling each one they built to waitlisted customers) to some models becoming demand-constrained (where manufacturers are building them faster than they can sell them).
This is due to a number of things, only some of which apply to the industry as a whole. First, some models just aren't that heavily desired by the public at the price points they're being sold at. The Ford F-150 Lightning is a pretty good example: Ford misinterpreted the demand from people who signed up on the waitlist, and then chose to prioritize the highest-priced trim levels rather than the entry-level models. So people interested in the entry-level $50,000 F-150 Lightning still have to wait (and, by the way, Ford is raising the price to $55,000) while the $90,000 models pile up on dealer lots.
Second, dealers are actively sabotaging EV sales. Everyone I know who has tried to buy an EV from a traditional dealer has been steered towards a hybrid or a traditional ICE vehicle, and sales staff seem to be intentionally ignorant about the EV models sold by their dealership. The EV maintenance model is a threat to dealer business models, where service/maintenance is a very important part of their revenue, so the incentives of the dealer aren’t lined up with the incentives of the manufacturer.
Third, the traditional automakers released their EVs into some headwinds, because interest rates have increased, and Tesla had the profit margins to simply be able to drop prices in a way to make the newest non-Tesla EVs seem like a bad deal in comparison. The average Tesla transaction dropped from $65k in October 2022 to $50k in October 2023, with big price cuts on almost all of its models.
So electric vehicle sales are up. The difficulties some manufacturers are having, even in this climate of rising sales, are in many cases specific to those makes and models.