lukewarm_ozone
@lukewarm_ozone@lemmy.today
- Comment on Nobody: 3 weeks ago:
This meme is from 2035
- Comment on Multiverse 3 weeks ago:
There might be a universe in which magic exists. However, there is no universe in which I exist and magic exists. That’s because I was born into a mundane version of the universe, so while there are infinite possibilities, the probability of my existing in a magical universe is 0.
That doesn’t really follow. Specifically, you’re placing way too much confidence (infinitely more than you should, in fact) in your ability to know exactly how your universe works. You’re saying there are zero hypothetical worlds in which you are the person you are now and magic also exists. I’m sure you can see how this is not true; for all you know, magic is very obvious in your world and you just got mind-controlled, a minute ago, into your current state of mind. Or maybe you simply never noticed it and hence grew up thinking you are in a mundane universe, which is very unlikely but not probability-0. Or one of many, many other explanations, all of which are unlikely (nothing involving a universe with magic in it is going to be likely), but very much not probability-0.
- Comment on Anon fixes Super Mario Bros 4 weeks ago:
Difficulty is hardly the point of the post.
- Comment on "Meta and X are going rogue:" European Digital Rights group (EDRi) urges EU to invest in infrastructure "like Mastodon, Peertube and other key pieces of the Fediverse" to secure Europe's independence 4 weeks ago:
I haven’t, actually, since I normally use an adblocker (and also don’t use that tracker). Looks like they’re all VPN advertisements right now, which is at least a somewhat non-mainstream ad segment.
- Comment on If you shop by unit prices, double check the math! 4 weeks ago:
Damn, I didn’t know things were so bad there in Canada.
- Comment on If you shop by unit prices, double check the math! 4 weeks ago:
Even in the detailed info? If so, that’s weird; probably something along the lines of “the seller messed up the weight, fixed it, but for some insane reason the site doesn’t recalculate the price”.
- Comment on "Meta and X are going rogue:" European Digital Rights group (EDRi) urges EU to invest in infrastructure "like Mastodon, Peertube and other key pieces of the Fediverse" to secure Europe's independence 4 weeks ago:
Accounts are already mostly portable (you can easily export all your settings and import them into your new account); you just don’t retain posting history.
To retain that… I guess there could be a separate fediverse service that does nothing but let you register an identity that proves several fediverse accounts all belong to the same person; a PR could then be made to e.g. Lemmy to honor these links when showing posting history. It’d be quite a messy system.
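As a sketch of what the cryptographic core of such a service could look like (purely hypothetical - the claim format and account names are invented here), using the Python cryptography package:

```python
# A hypothetical sketch - no such service exists; the claim format is invented.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

identity_key = Ed25519PrivateKey.generate()  # held by the user/linking service
claim = b"@user@old.instance and @user@new.instance belong to the same person"
signature = identity_key.sign(claim)

# Any instance (e.g. a Lemmy server showing merged post history) could verify
# the link; this raises InvalidSignature if the claim was forged.
identity_key.public_key().verify(signature, claim)
```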
- Comment on "Meta and X are going rogue:" European Digital Rights group (EDRi) urges EU to invest in infrastructure "like Mastodon, Peertube and other key pieces of the Fediverse" to secure Europe's independence 4 weeks ago:
The answer is obvious: we must forever be completely advertiser-unfriendly and absolutely unmarketable. With every piece of porn, every post on digital piracy, every swearword, we do our part to protect the fediverse’s independence.
- Comment on If you shop by unit prices, double check the math! 4 weeks ago:
What happened there? These are presumably calculated automatically, so does the second item have its mass listed as 2 kg?
- Comment on Last time I go to Great Clips 4 weeks ago:
Day ???/??? of downvoting every post with advertiser-friendly censorship in it.
- Comment on The Top 3 Apps in my Country (Venezuela) are all VPNs... 5 weeks ago:
Security as in cybersecurity, yes. Security as in not getting caught violating government bans, not so much - if you’re in a country where getting repressed by your government is a real possibility, it helps a lot for it not to be possible to see exactly which sites you visit. Reminder: even over HTTPS, the domain name (like lemmy.world) is normally sent unencrypted. Encrypted Client Hello (ECH) can solve this, but it only came into common use about a year ago, and more importantly it requires the host to support it.
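As a quick check of whether a given host has opted in: ECH support is advertised via an “ech” parameter in the host’s DNS HTTPS record. A sketch using the dnspython package (the domain below is just an example; whether any particular host publishes an ECH config can change at any time):

```python
import dns.resolver  # pip install dnspython

# A host that supports ECH publishes an "ech" parameter in its DNS HTTPS
# (type 65) record. The domain below is only an example.
for rdata in dns.resolver.resolve("crypto.cloudflare.com", "HTTPS"):
    print(rdata)  # an "ech=..." entry among the SvcParams indicates ECH support
```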
- Comment on POV: It's January 19th 5 weeks ago:
I wouldn’t generally require people to “compile their findings into a report”, but in this case the messages are weirdly devoid of any checkable information, and then the Reddit user in question mysteriously lost a laptop full of findings - so, yeah, these claims are not compelling. I don’t think the reverse engineer in question was lying, per se, but I do think they got it very wrong at first by random chance, the story gained traction, and then they were too embarrassed to admit they’d fucked up.
- Comment on oopsie 5 weeks ago:
Sort of true, but the algorithm that Reddit-like platforms use is transparent and simple (it’s just based on likes and dislikes, and you can even look up the source for the sorting modes; the classic formula is sketched below) and hence doesn’t directly try to feed you content that’d enrage you. I can just not read the posts about Musk and Trump, since I find most takes on the former bad and don’t care much about the latter. Meanwhile, on platforms like Twitter or TikTok you are fed content straight out of a recommendation ML model trained on user engagement.
(There are also subtler differences. For example, on Reddit/Lemmy/etc., if you hate a post you can dislike it, which will generally make it show up less to other people. But on, say, Tumblr, not only are there no dislikes, but if you really hate a post, the only way to respond to it is to repost it, thereby spreading it further among your followers! That’s an absolutely devious piece of platform design that could have been invented directly by Satan himself.)
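For the curious, here’s that classic formula: the “hot” rank from Reddit’s long-public open-source code, as a Python sketch (Lemmy’s ranking differs in details but is similarly just votes plus age - there’s no engagement model anywhere in it):

```python
from datetime import datetime, timezone
from math import log10

def hot(ups: int, downs: int, posted: datetime) -> float:
    score = ups - downs
    order = log10(max(abs(score), 1))      # diminishing returns on vote counts
    sign = 1 if score > 0 else -1 if score < 0 else 0
    age = posted.timestamp() - 1134028003  # seconds since Reddit's epoch constant
    return round(sign * order + age / 45000, 7)

print(hot(150, 20, datetime.now(timezone.utc)))
```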
- Comment on Is it wrong to not have a disabled child solely to avoid forcing the child to suffer their whole life? 5 weeks ago:
Yet, people suffering from it can lead happy and fulfilling lives.
Sure, it’s possible for a person with a severe disability to grow up happy. But when one is making a decision in real life (like having a child), one should consider the average case, not an exceptional one. And the average case for an example like Down syndrome is pretty bad. It is a bit unclear how to quantify the suffering in this particular disease’s case, because the main harm to the child is lifelong mental impairment plus assorted physical disabilities - but it is at least going to inflict suffering on the child’s family, since caring for a child with a severe disability for their entire life isn’t exactly fun.
It is a slippery slope that, if not navigated carefully, has historically led to atrocities.
I don’t see the relation. You’ll notice that I’m not proposing killing off disabled people for the “improvement of society” or whatever it was the Nazis called it. I’m not proposing that because nonconsensually killing a person is a harm to them. But deciding not to have a child isn’t the same thing as murdering a person - it harms no one who exists, and hence may well be morally better than having the child.
(Oh, I suppose you might mean that I’m arguing there are circumstances in which it’s morally bad for a person to have a child, which is similar to Nazi eugenics in that I’m deciding whether or not people should have children? In that case, my answer is that the difference is that I’m a person, not an authoritarian government, and I have neither the power nor, indeed, the desire to force people to obey my personal moral judgements.)
- Comment on Can I pay someone to add a specific feature to an open source app? 5 weeks ago:
Developers usually make $50-300/hour.
That seems like an overestimate even for the US. More importantly, I don’t think most open-source developers earn this much money (otherwise they wouldn’t ask for tiny donations), and hence it’s not the relevant figure. If I’m wrong about this, please do tell me - I would very much like to know if the hours I occasionally spend on open-source contributions could instead earn me hundreds of dollars. ;)
- Comment on Not promoting violence or anything. But stupid quest since Iran has an 80 million bounty on Trumps head. If someone would follow thru do they just go to Iran and be like pay up? Why or why not? 5 weeks ago:
Depends - do you have crypto?
- Comment on Is it wrong to not have a disabled child solely to avoid forcing the child to suffer their whole life? 5 weeks ago:
carries the implication that the world would be happier were you to just kill off the huge segment of the population who end up on the negative side.
Not necessarily. Someone dying isn’t the same as someone never having existed at all.* It does imply that the world would be better off if it (the whole world) stopped existing, and under some assumptions implies it’d be moral to, say, instantly end all of humanity. I’m not sure that these conclusions are necessarily “contrary to our instincts”.
*One reason we really ought to keep track of that distinction: if we didn’t distinguish between dying and never existing, then, if an average life had positive value, it’d be immoral not to have as many children as possible, until the marginal value of an extra life fell to zero once again (kind of like how Malthus thought societies worked, except as a supposedly moral thing to do). That conclusion is something I do consider very contrary to my instincts.
I do tend towards a variant of utilitarianism myself, as it has a useful ability to weigh options that are both bad or both good, but for the reason above I tend to define “zero” as a complete lack of happiness/maximum of suffering, and being unhappy as having low happiness rather than negative happiness (making a negative value impossible), though that carries its own implications that I know not everyone would agree with.
I too am a utilitarian, sure. I’m not sure I can possibly buy “maximum suffering and no happiness” being the zero point. I very strongly feel that there are plenty of lives that would be way worse than dying (and than never having existed, too). It’s a coherent position, I think, just a very alien one to me.
- Comment on Is it wrong to not have a disabled child solely to avoid forcing the child to suffer their whole life? 5 weeks ago:
That’s literally true, but the simple counterargument is that the happiness/suffering conversion coefficient is a matter of one’s values and not particularly up for debate - so there’s nothing incoherent about, say, the position that your child living a happy, fulfilling life for a thousand years but stubbing their toe once is enough suffering to make their life net negative.
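To spell out the arithmetic (a sketch; $H$ is total happiness, $S$ total suffering, $k$ the conversion coefficient):

$$U = H - kS, \qquad U < 0 \iff k > H/S \quad (S > 0)$$

So for a thousand happy years and one stubbed toe, a large enough $k$ flips the sign - the verdict is decided by the coefficient, not by the facts.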
- Comment on Is it wrong to not have a disabled child solely to avoid forcing the child to suffer their whole life? 5 weeks ago:
This is a great comment. I’ll add that anyone thinking about disability ethics should read Two Arms and a Head, lest they start taking too seriously the idea that disabilities have no effect on quality of life.
- Comment on Is it wrong to not have a disabled child solely to avoid forcing the child to suffer their whole life? 5 weeks ago:
I agree that there’s a lot of space between “considered disabled” and “horrible life”, but OP said “suffer their whole life” which I associated with the latter.
- Comment on Is it wrong to not have a disabled child solely to avoid forcing the child to suffer their whole life? 5 weeks ago:
You have no moral obligation to have children at all, even if they’ll predictably have a happy life. So if their life would instead be predictably horrible (or would ruin the lives of the people around them - plenty of severe mental disabilities seem much less horrible for the person themselves than for their caretakers), it’s very reasonable to avoid it.
- Comment on What realistically would happen if someone came back to life from the dead ? 5 weeks ago:
Thanks, I’ll keep that take in my pocket for later. “Your honor, you can’t possibly prove that in the future a superintelligence won’t be able to reconstruct enough of the victim’s brain to resurrect them, and hence they aren’t dead and I can’t have committed murder!”.
- Comment on oopsie 5 weeks ago:
That’s true, though the simple solution is to not be on such platforms. You do not have to let them “shove it in your face until you can’t help it”.
- Comment on For a group that considers .world to be Reddit 2.0 and a "CIA propaganda front" they seem to get awfully mad whenever it comes up 5 weeks ago:
…that’s not fascism and not genocide denialism. Comments like yours are exactly the reason why the words “fascist”, “genocide” and many others don’t mean anything anymore, instead being used as generic terms to insult one’s ideological opponents with.
- Comment on They fucking geoblocked blahaj.zone 5 weeks ago:
I’m not aware of how exactly blocking works there, but if it’s similar to China and Russia, consider subscribing to a VPN provider that supports stealth proxies (e.g. Shadowsocks or VLESS); that’s harder to block.
- Comment on Racism nas gone too woke! 1 month ago:
What is happening with this image? The quality is low because OP lazily reposted it from some other secondary source, but what’s that yellow rectangle?
- Comment on Anon is a winner 1 month ago:
Say for example you have a system where a monotheistic god sometimes alters reality when prayed to by a devout follower. There are no measurable or manipulable components, as the god can respond entirely differently tomorrow.
That’s still nowhere near unexplainable enough to be impossible to study. You’ve described the god’s behaviour as “sometimes alters reality when prayed to by a devout follower” - if it’s consistent enough for that statement to make sense, that’s already a lot to study. Is there a correlation between particular prayers and miracles? Are particular mental states helpful? Do various traits make someone more likely to produce a miracle? Are there drugs that affect it? What are the limits of a miracle? Are there patterns in the time intervals between miracles? And so on, and so forth. A world with such a magic system, if you want it to be realistic, should have an entire history of people studying these and many other things - the very first experiment might look something like the sketch below.
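A hedged sketch of that first experiment, using scipy (every count here is invented purely for illustration):

```python
from scipy.stats import chi2_contingency  # pip install scipy

# Do devout prayers correlate with miracles? Invented toy counts:
observed = [
    [17, 983],  # devout followers who prayed: miracles vs. no miracles
    [2, 998],   # control group who didn't pray
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.2g}")  # tiny p => prayer and miracles correlate
```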
And honestly, the mystery of an unexplainable magic system is often what makes it magic.
Eh. It’s sometimes fun to read stories like that (one had better have fun, since most stories are like that!), but they’re… stories about worlds where there isn’t a single human with common sense or intelligence. Not just in the story itself, but in the world’s entire history, because the author didn’t realise that “people seriously exploring the laws of their world” is a thing that necessarily happens in realistic worlds, much as it happens in ours.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 1 month ago:
Every time there’s an AI hype cycle the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or before that, the AlphaGo hype. Remember that?
Not really. As far as I can see the goalpost moving is just objectively happening.
But fundamentally you can’t make a machine think without understanding thought.
If “think” means anything coherent at all, then this is a factual claim. So what do you mean by it, then? Specifically: what event would have to happen for you to decide “oh shit, I was wrong, they sure did make a machine that could think”?
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 1 month ago:
The fact that you don’t understand it doesn’t mean that nobody does.
I would say I do. It’s not that high a bar - one only needs some time with nandgame to understand how logic gates can be combined to do arithmetic. Understanding how doped silicon can be used to make a logic gate is harder, but I’ve done a course on semiconductor physics and have an idea of how a field-effect transistor works.
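To illustrate just how low that bar is, the whole “arithmetic from NAND gates” idea fits in a few lines; a minimal sketch:

```python
# A half adder built purely from NAND, nandgame-style.
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def half_adder(a: int, b: int) -> tuple[int, int]:
    n = nand(a, b)
    total = nand(nand(a, n), nand(b, n))  # XOR built from four NANDs
    carry = nand(n, n)                    # AND = NOT(NAND)
    return total, carry

assert half_adder(1, 1) == (0, 1)  # 1 + 1 = binary 10
```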
The way a calculator calculates is something that is very well understood by the people who designed it.
That’s exactly my point, though. If you zoom in deeper, a calculator’s microprocessor is itself composed of simpler and less capable components. There isn’t a specific magical property of logic gates, nor of silicon (or dopant) atoms, nor for that matter of elementary particles, that lets them do math - it’s by building a certain device out of them, one that composes their elementary interactions, that we make a tool for this. Whereas Searle seems to reject this idea entirely, and believes that humans being conscious implies you can zoom in on some purely physical or chemical property and claim that it produces the consciousness. Needless to say, I don’t think that’s true.
Is it possible that someday we’ll make machines that think? Perhaps. But I think we first need to really understand how the human brain works and what thought actually is. We know that it’s not doing math, or playing chess, or Go, or stringing words together, because we have machines that can do those things and it’s easy to test that they aren’t thinking.
That was a common and reasonable position in, say, 2010, but the problem is: I think almost nobody in 2010 would have claimed that the space of things you can make a program do without any extra understanding of thought included things like “write code” and “draw art” and “produce poetry”. Now that it has happened, it may be tempting to move the goalposts and declare these “not true thought”, but the fact that nobody predicted it in advance ought to bring to mind the idea that maybe that entire line of thought was flawed, actually. I think that clinging to this idea would require gradually discarding all human activities as “not thought”.
it’s easy to test that they aren’t thinking.
And that’s us coming back around to the original line of argument - I don’t at all agree that it’s “easy to test” that even, say, modern LLMs “aren’t thinking”. The difference between the calculator example and an LLM is that in a calculator, we understand pretty much everything that happens and how arithmetic can be built out of the simpler parts, so anyone suggesting that calculators need to be self-aware to do math would be wrong. But in a neural network, we have full understanding only of the lowest layers of abstraction - how a single layer works, how activations are applied, how it can be trained to minimize a loss function via backpropagation - and no idea at all how it works on a higher level. It’s not even that “only experts understand it”: nobody in the world understands how LLMs work under the hood, or why they have the many specific weird behaviors they do. That’s concerning in many ways, but in particular I absolutely wouldn’t assume, with so little evidence, that there’s no “self-awareness” going on. How would you know? It’s an enormous black box.
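To be concrete about which level is understood: the bottom of the stack is just linear algebra plus a nonlinearity, as in this toy numpy sketch (all shapes and values are arbitrary). The mystery lives in what billions of such parameters collectively compute, not in any single layer:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))     # one input vector
W = rng.normal(size=(8, 4))     # a weight matrix (learned during training)
b = np.zeros(4)                 # biases
h = np.maximum(0.0, x @ W + b)  # ReLU(xW + b) - the layer's output
print(h.shape)                  # (1, 4)
```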
There’s this message pushed by the charlatans that we might create an emergent brain by feeding data into the right statistical training algorithm. They give mathematical structures misleading names like “neural networks” and let media hype and people’s propensity to anthropomorphize take over from there.
There’s certainly a lot of woo and scamming in modern AI (especially if one makes the mistake of reading Twitter), but I wouldn’t say the term “neural network” is at all confusing. I agree on the anthropomorphization, though; it gets very weird. That said, I can’t help but notice that the way you phrased this message, it happens to be literally true. We know this because it already happened once: evolution is just a particularly weird and long-running training algorithm, and it eventually turned soup into humans, so clearly it’s possible.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 1 month ago:
Because everything we know about how the brain works says that it’s not a statistical word predictor.
LLMs aren’t just simple statistical predictors either. More generally, the universal approximation theorem is a thing - a neural network can be made to approximate just about any function, so unless you think a human brain can’t be represented by some function, it’s possible to embed one in a neural network.
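Roughly, the classical (Cybenko/Hornik) statement: for a compact set $K \subset \mathbb{R}^d$ and a suitable (e.g. sigmoidal) activation $\sigma$, a wide-enough single hidden layer suffices:

$$\forall f \in C(K),\ \forall \varepsilon > 0:\ \exists\, n,\ a_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^d \ \text{ such that } \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{n} a_i\, \sigma(w_i^\top x + b_i) \Big| < \varepsilon$$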
LLMs have no encoding of meaning or veracity.
I’m not sure what you mean by this. The interpretability research I’ve seen suggests that modern LLMs do have a decent internal representation of whether their output is true, and in many cases lie knowingly, because they’ve been accidentally taught during RLHF that making up an answer when you don’t know one is a great way to get more points. But it sounds like you’re talking about something even more fundamental? Suffice it to say, I think being good at text prediction does require figuring out which claims are truthful and which aren’t.
There are some great philosophical exercises about this, like the Chinese room experiment.
The Chinese Room argument has been controversial since about the time it was first introduced. The most common counterargument goes “just because any specific chip in your calculator is incapable of math doesn’t mean your calculator as a system is”, and notes that, taken literally, the thought experiment proves minds can’t exist at all (indeed, Searle, who invented the argument, thought that human minds somehow stem directly from “physical–chemical properties of actual human brains”, which sure is a wild idea). But also, the framing is rather misleading - quoting Scott Aaronson’s “Quantum Computing Since Democritus”:
In the last 60 years, have there been any new insights about the Turing Test itself? In my opinion, not many. There has, on the other hand, been a famous “attempted” insight, which is called Searle’s Chinese Room. This was put forward around 1980, as an argument that even a computer that did pass the Turing Test wouldn’t be intelligent. The way it goes is, let’s say you don’t speak Chinese. You sit in a room, and someone passes you paper slips through a hole in the wall with questions written in Chinese, and you’re able to answer the questions (again in Chinese) just by consulting a rule book. In this case, you might be carrying out an intelligent Chinese conversation, yet by assumption, you don’t understand a word of Chinese! Therefore, symbol-manipulation can’t produce understanding.
[…] But considered as an argument, there are several aspects of the Chinese Room that have always annoyed me. One of them is the unselfconscious appeal to intuition – “it’s just a rule book, for crying out loud!” – on precisely the sort of question where we should expect our intuitions to be least reliable. A second is the double standard: the idea that a bundle of nerve cells can understand Chinese is taken as, not merely obvious, but so unproblematic that it doesn’t even raise the question of why a rule book couldn’t understand Chinese as well. The third thing that annoys me about the Chinese Room argument is the way it gets so much mileage from a possibly misleading choice of imagery, or, one might say, by trying to sidestep the entire issue of computational complexity purely through clever framing. We’re invited to imagine someone pushing around slips of paper with zero understanding or insight – much like the doofus freshmen who write (a + b)^2^ = a^2^ + b^2^ on their math tests. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time? If each page of the rule book corresponded to one neuron of a native speaker’s brain, then probably we’d be talking about a “rule book” at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it’s not so hard to imagine that this enormous Chinese-speaking entity that we’ve brought into being might have something we’d be prepared to call understanding or insight.
There’s also the fact that, empirically, human brains are bad at statistical inference but do not need to consume the entire internet and all written communication ever to have a conversation. Nor do they need to process a billion images of a bird to identify a bird.
I’m not sure what this proves - human brains can learn much faster because they already received most of their learning in the form of evolution optimizing their genetically encoded brain structure over millions of years and billions of brains. A newborn human already has part of their brain structured in the right way to process vision, and hence needs only a bit of training to start doing it well. Artificial neural networks start out randomly initialized and with a fairly generic structure, and hence need orders of magnitude more training.
Now of course because this exact argument has been had a billion times over the last few years your obvious comeback is “maybe it’s a different kind of intelligence.”
Nah - personally, I don’t actually care much about “self-awareness”, because I don’t think an intelligence needs to be “self-aware” (or “conscious”, or a bunch of other words with underdefined meanings) to be dangerous; it just needs to have high enough capabilities. The reason I noticed your comment is that it stood out to me as… epistemically unwise. You live in a world with inscrutable black boxes that nobody really understands, which can express a wide range of human behaviors, including stuff like “writing poetry about the experience of self-awareness” - and you’re “absolutely sure” they’re not self-aware? I don’t think many of history’s philosophers of consciousness, say, would endorse a belief like that given such evidence.