Zaleramancer
@Zaleramancer@beehaw.org
(They/Them) I like TTRPGs, history, (audio and written) horror and the history of occultism.
- Comment on Gaming Swan Song 1 week ago:
Very interesting resource. I found her video presentation about online gaming very informative and delightfully fair.
- Comment on Anthropic destroyed millions of print books to build its AI models 1 week ago:
Yeah, see, I am on your side, but the focus on “destroying books is bad” is kind of irrelevant to the actual harm being done.
It’s that they’re devouring the contents of people’s brains for the ability to replace them that’s concerning. If they chose to do this in a completely different way that preserved the books, I would not say it changes the moral valence of their actions.
Centering the argument on the destruction of the books shifts it away from the actual concern.
- Comment on My Couples Retreat With 3 AI Chatbots and the Humans Who Love Them 1 week ago:
Your empathy is in a good place, but the problem isn’t how humans are broken, it’s what is breaking them.
Western society* is built in a really dumb and alienating way. Humans are reduced to a labor commodity; places where people can mingle socially are being commercialized out of existence; the internet has evolved into a machine that actively profits from outrage and alienation; our governmental institutions are primarily driven by forces no regular person has any power over; and we can’t even feel pride in our work, because it’s profitable to convince us that we are replaceable and disposable.
Where’s the social incentive to connect to other people? The powers that be benefit from a disorganized and isolated population, so they will do nothing to change that. Market incentives mean that media which focuses on things that provoke fear, rage and anxiety is more profitable than media that promotes community, happiness or hope.
It’s permeated so deeply into our culture that some older kids’ movies seem completely insane now. Like, think about E.T. and consider how wild it would be nowadays to just let your children vanish for hours, doing whatever and wandering around wherever.
Fear and anxiety determine our actions, and there are multiple incentives on a macro-social level for that to continue.
Hell, I have watched this happen in real time during my 10+ years on the web, where communities of excited weirdos sharing their thoughts and feelings have been so thoroughly dominated by this that it is hard to engage with any social media without someone shoving a headline into your face that is intended to upset you.
On Tumblr, for example, the trend was so strong that the idea that you weren’t constantly upset was a sign of being a bad person. You know, on the Superwholock site? Yeah, the one that wanted to fuck the Onceler.
If you want to reverse this trend, it’s going to require changing how our political, economic and media environments act by changing their incentives. Otherwise, any change will be superficial and fail to produce meaningful results.
It’s pretty depressing, but that’s the situation as I see it.
*I’m not qualified to comment on other cultural spheres.
- Comment on Like it or not, developers are experimenting with AI for their remasters and remakes - but can they ever be any good? [Eurogamer] 1 week ago:
How many times can journalists retread this conversation from 2018? The jury is still out.
- Comment on Anthropic destroyed millions of print books to build its AI models 1 week ago:
This reminds me of when I shadowed a librarian in high school and they talked to me about how people got really upset with them throwing away books that had multiple reprintings and were in awful condition.
Because people as a whole lack the capacity for nuance, I guess.
The news article picked a bad focus.
- Comment on The résumé is dying, and AI is holding the smoking gun 1 week ago:
Preach. I’m so bad at selling myself!
I just want a job with a living wage now, and it’s agonizingly, dehumanizingly hard to look for one online. Especially if you have the extreme rejection sensitivity aspect of ADHD.
- Comment on Dark web’s longest-standing drug market seized in multinational effort 2 weeks ago:
Yeah, typo on my part 😅 Pasdechance has it right, I meant news company.
- Comment on Dark web’s longest-standing drug market seized in multinational effort 2 weeks ago:
“By dismantling its infrastructure and arresting its key players, we are sending a clear message: there is no safe haven for those who profit from harm,” Europol’s Deputy Executive Director of Operations Jean-Philippe Lecouffe said in a statement.
(Looks at the camera)
Just as an aside, the person who owns this news company donated millions to the Trump campaign.
- Comment on Gooner game of the year Stellar Blade's mods are 41% smut, ensuring gamers will never see the light of heaven 2 weeks ago:
Misogyny in stuff can be really complicated. Sometimes you can only really see it holistically, and sometimes it’s only in specifics. Sometimes a story will give a woman a lot of focus, place her feelings and emotions in the spotlight and give her actions the most agency and power over the plot- while also having her be inexplicably dressed in lingerie the whole time with a really weak excuse, if any.
Like, I love FF12. Ashe is indisputably the actual main character in it, and her story is about being a person with authority in a time of war. It’s about grappling with your own grief and desire for revenge while trying to keep in mind your principles and what you believe in. It somehow manages to be about both the divine right of kings and weapons of mass destruction, and it maintains its emotional through line almost all the way to the end!
But also, Ashe, that hot pink mini-skirt? Girrrrrl, WTF, you live in a desert. You’re gonna fight things in a skirt made of two pink napkins? There’s no real reason for her to dress like that, and it’s definitely just for fan service!
I still love the game, but I acknowledge that it has that problem. It objectifies women because it treats them as visual treats and has them dress in bizarre ways that don’t flow adequately from their characterization. This is because of structural societal things, and it sucks for a bunch of reasons.
Bayonetta is different primarily because the work’s themes are, as I understand them, incredibly positive about women being active, powerful sexual people who do what they want.
B dresses like that because she likes being hot, and it’s a characterization tool, and it’s never a disempowering thing for her.
Like, Kill la Kill has ridiculous outfits, but I’ve had multiple women tell me they love it because of how it intersects with things they like. I wasn’t going to watch it until one of them insisted and, yeah, it’s pretty good. The sexual elements are intended and used as part of the narrative, and the emotional through line is very strong.
So, it’s one of those things that needs an exhaustive breakdown to really know about in a work. I don’t know enough about this one to say, and I’m just commenting in hopes that it’s useful for you or someone else looking at doing media analysis of this type.
- Comment on Gooner game of the year Stellar Blade's mods are 41% smut, ensuring gamers will never see the light of heaven 2 weeks ago:
I’m forced to agree. It feels weird to do so, but I guess, yeah: the thing to focus on is the how and why of this, not just the puritan disgust angle.
I’ve seen the Shaun video (linked in these replies somewhere) so I’m familiar with what’s going on socially around this video game. Being upset because of misogynistic objectification is appropriate, but sex isn’t inherently bad.
- Comment on Nier creator Yoko Taro reveals the sad reality of modern AAA game development, “there’s less weird people making games” 2 weeks ago:
The pressure applied by the need for video games to act as investments is not aligned with artistic expressiveness, innovation or quality.
This is why games from smaller companies or indie developers continue to be the huge, genre-changing breakout hits. They’re still being made with the intention of making a game that’s fun, weird, or interesting as a primary concern, rather than just being a vehicle for profit.
This trend will continue.
- Comment on Is Google about to destroy the web? Google says a new AI tool on its search engine will rejuvenate the internet. Others predict an apocalypse for websites. 2 weeks ago:
It’s possible, but I don’t doubt that there’s going to be a continued push to consolidate people into smaller and smaller parts of the internet- quite possibly through legal means, but definitely through as many commercial ones as possible.
I don’t have a lot of faith in the perseverance of the vast majority of websites, not because of their lack of willingness or desire, but because of a lack of funds.
People get poorer, needs get more expensive and things like this place become harder to keep running.
You are right, though, that it’s not written in stone. I will try to hold out a measured amount of hope.
- Comment on Is Google about to destroy the web? Google says a new AI tool on its search engine will rejuvenate the internet. Others predict an apocalypse for websites. 3 weeks ago:
Search engines are already basically worthless, so I’m not surprised with the falling axe.
The shift from search engines actually indexing things to search through to trying to parse a question and find an answer has been the most irritating trend for me. I remember when you could just put in a series of words and be delivered unto every indexed page that had all of them.
Now I regularly get told that even common words don’t exist if I insist that, no, Google, I do want only searches with the words I put in.
This is my old person rant, I guess. /s
This change is probably going to cause huge problems for a lot of existing sites, especially because it means Google will likely start changing their advertising model now that they can consolidate the views into a specific location and pocket the money. The article mentions this but doesn’t follow the implications through.
“The internet will still be around” is only true if you hold that the super consolidated, commercialized nexus of doom is going to continue on just fine while countless small, very useful websites made by actual people for actual reasons fade away into oblivion.
It sucks to watch something I have loved my whole life die, but it’s going bit by bit because we can’t convince our politicians to do anything about it.
- Comment on Enshittification of ChatGPT 1 month ago:
I’ve really enjoyed this discussion, but I haven’t been able to respond because I don’t have the mental bandwidth right now. Thanks for being such a good conversational partner and I think you make some very interesting points that helped me develop my own opinions.
- Comment on GTA 6's delay doesn't mean the games industry's in trouble - it's already dead 1 month ago:
Such a dramatic title.
- Comment on Enshittification of ChatGPT 2 months ago:
Hi, once more, I’m happy to have a discussion about this. I have very firm views on it, and enjoy getting a chance to discuss them and work towards an ever greater understanding of the world.
I completely understand the desire to push back against certain kinds of “understandings” people have about LLMs, due to their potentially harmful inaccuracy and the misunderstandings that they could create. I have had to deal with very weird, like, existentialist takes on AI art lacking a quintessential humanity that all human art is magically endowed with- which, come on, there are very detailed technical art reasons why they’re different, visually! It’s a very complicated phenomenon, but it’s not an inexplicable cosmic mystery! Take an art critique class!
Anyway, I get it- I have appreciated your obvious desire to have a discussion.
On the subject of understanding, I guess what I mean is this: Based on everything I know about an LLM, their “information processing” happens primarily in their training. This is why you can run an LLM instance on, like, a laptop but it takes data centers to train them. They do not actually process new information, because if they did, you wouldn’t need to train them, would you- you’d just have them learn and grow over time. An LLM breaks its training data down into patterns and shapes and forms, and uses very advanced techniques to generate the most likely continuation of a collection of words. You’re right in that they must answer, but that’s because their training data is filled with that pattern of answering the question. The natural continuation of a question is, always, an answer-shaped thing. Because of the miracles of science, we can get a very accurate and high fidelity simulation of what that answer would look like!
Understanding, to me, implies a real processing of new information and a synthesis of prior and new knowledge to create a concept. I don’t think it’s impossible for us to achieve this technologically- humans manage it, and I’m positive that we could eventually figure out a synthetic method of replicating it. I do not think an LLM does this. The behavior they exhibit and the methods they use seem radically inconsistent with that end. The ultimate goal was not to create a thinking thing, but to create something that’s able to make human-like speech that’s coherent, reliable and conversational. They totally did that! It’s incredibly good at that. If it were not for their political, environmental and economic context, I would be so psyched about using them! I would have been trying to create templates to get an LLM to be an amazing TTRPG oracle if it weren’t for the horrors of the world.
It’s incredible that we were able to have a synthetic method of doing that! I just wish it was being used responsibly.
An LLM, based on how it works, cannot understand what it is saying, or what you are saying, or what anything means. It can continue text in a conversational and coherent way, with a lot of reliability on how it does that. The size, depth and careful curation of its training data mean that those responses are probably as accurate to being an appropriate response as they can be. This is why, for questions of common knowledge, or anything you’d do a light google for, they’re fine. They will provide you with an appropriate response because the question probably exists hundreds of thousands of times in the training data; and, the information you are looking for also exists in huge redundancies across the internet that got poured into that data. If I ask an LLM which of the characters of My Little Pony has a southern accent, they will probably answer correctly because that information has been repeated so much online that it probably dwarfs the human written record of all things from 1400 and earlier.
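If it helps to make “most likely continuation” concrete, here’s a toy illustration I threw together- a bigram counter, which is nothing like a real transformer under the hood (real models use learned neural weights, not raw counts), but the inference-time job has the same shape: pick a plausible next token, append it, repeat.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then continue a prompt by repeatedly sampling a likely next word.
corpus = (
    "what is a rabbit ? a rabbit is a small mammal . "
    "what is a citation ? a citation is a reference to a source . "
    "what is a rabbit ? a rabbit is a small mammal ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt, length=8):
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # sample proportionally to how often each continuation was seen
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("what is a rabbit ?"))
# Frequent patterns come back accurate; rare ones come back merely plausible.
```

Notice that the question-answer pattern falls out of the data: a “?” is followed by an answer-shaped thing because that’s what always followed it in the corpus.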
The problem becomes evident when you ask something that is absolutely part of a structured system in the English language, but which has a highly variable element to it. This is why I use the “citation problem” when discussing them, because citations are perfect for this: A citation is part of a formal or informal essay, which is a deeply structured and information-dense form, making it a great subject for training data. Its structure includes a series of regular, repeating elements in particular orders: name, date, book title, year, etc.- these are present and repeated with such regularity that the pattern must be quite established for the LLM as a correct form of speech. The names of academic books are often also highly patterned, and an LLM is great at creating human names, so there’s no problem there.
The issue is this: How can an LLM tell if a citation it makes is real? It gets a pattern that says, “The citation for this information is:” and it continues that pattern by putting a name, date, book title, etc in that slot. However, this isn’t like asking what a rabbit is- the pattern of citations leads into an endless warren of hundreds of thousands names, book titles, dates, and publishing companies. It generates them, but it cannot understand what a citation really means, just that there is a pattern it must continue- so it does.
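To make “knows the shape, not the content” concrete, here’s a deliberately dumb toy of my own- every surname, publisher and date below is invented purely for illustration- that emits perfectly citation-shaped strings with zero grounding in any real book, which is roughly the failure mode I mean:

```python
import random

# Every piece below is citation-SHAPED, but nothing anywhere checks that
# the resulting book exists. Formatting well and being real are independent.
surnames = ["Hartley", "Moreau", "Lindqvist", "Okafor"]      # made up
initials = ["A.", "J.", "M.", "R."]
topics = ["Medieval Banking", "the Lombard Credit Networks",
          "Monastic Finance"]                                 # made up
publishers = ["Ashgrove Press", "University of Caldmere Press"]  # made up

def fake_citation(subject):
    return (f"{random.choice(surnames)}, {random.choice(initials)} "
            f"({random.randint(1978, 2019)}). "
            f"A History of {subject}. {random.choice(publishers)}.")

print(fake_citation(random.choice(topics)))
# e.g. "Moreau, R. (1994). A History of Monastic Finance. Ashgrove Press."
# Looks exactly right; refers to nothing.
```

An LLM’s version of this is vastly more sophisticated, but the gap is the same: the pattern gets completed whether or not anything real sits behind it.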
> Let me also ask you a counter question: do you think a flat-earther understands the idea of truth? After all, they will blatantly hallucinate incorrect information about the Earth’s shape and related topics. They might even tell you internally inconsistent statements or change their mind upon further questioning. And yet I don’t think this proves that they have no understanding about what truth is, they just don’t recognize some facts as true.
A flat-earther has some understanding of what truth is, even if their definition is divergent from the norm. The things they say are deeply inaccurate, but you can tell that they are the result of a chain of logic that you can ask about and follow. It’s possible to trace flat-earth ideas down to sources. They’re incorrect, but they’re arrived at because of an understanding of prior (incorrect) information. A flat-earther does not always invent their entire argument and the basis for their beliefs on the spot; they are presenting things they know about from prior events- they can show the links. An LLM cannot tell you how it arrived at a conclusion, because if you ask it, you are just receiving a new continuation of your prior text. Whatever it says is accurate only when probability and data set size are on its side.
- Comment on Enshittification of ChatGPT 2 months ago:
And, yes, I can prove that a human can understand things when I ask: Hey, go find some books on a subject, then read them and summarize them. If I ask for that, and they understood it, they can then tell me the names of those books because their summary is based on actually taking in the information, analyzing it and reorganizing it by apprehending it as actual information.
They do not immediately tell me about the hypothetical summaries of fake books and then state with full confidence that those books are real. The LLM does not understand what I am asking for, but it knows what the shape is. It knows what an academic essay looks like, and it can emulate that shape- and if you’re just using an LLM for entertainment, that’s really all you need. The shape of a conversation for a D&D NPC is the same as the actual content of it, but the shape of an essay is not the same as the content of that essay. Essays are too diverse, they carry critical information, and they are about that information. The LLM does not understand the information, which is why it makes up citations- it knows that a citation fits in the pattern, and that citations are structured with a book name and author and all the other relevant details. None of those are assured to be real, because it doesn’t understand what a citation is for or why it’s there, only that one should exist. It is not analyzing the books and reporting on them.
- Comment on Enshittification of ChatGPT 2 months ago:
Hello again! So, I am interested in engaging with this question, but I have to say: My initial post is about how an LLM cannot provide actual, real citations with any degree of academic rigor for a random esoteric topic. This is because it cannot understand what a citation is, only what it is shaped like.
An LLM deals with context over content. They create structures that are legible to humans, and they are quite good at that. An LLM can totally create an entire conversation with a fictional character in their style and voice- that doesn’t mean it knows who that character is. Consider how AI art can have problems that arise from the fact that the model understands the shape of something but not what it actually is- that’s why early AI art had a lot of problems with objects ambiguously becoming other objects. The fidelity of these creations has improved with the technology, but that doesn’t imply understanding of the content.
Do you think an LLM understands the idea of truth? Do you think if you ask it to say a truthful thing, and be very sure of itself and think it over, it will produce something that’s actually more accurate or truthful- or just something that has the language hallmarks of being truthful? I know that an LLM will produce complete fabrications that distort the truth if you expect a baseline level of rigor from it, and I proved that above, in that the LLM couldn’t even accurately report the name of a book it was supposedly using as a source.
What is understanding, if the LLM can make up an entire author, book and bibliography if you ask it to tell you about the real world?
- Comment on Enshittification of ChatGPT 2 months ago:
What’s yours? I’m stating that LLMs are not capable of understanding the actual content of any words they arrange into patterns. This is why they create false information, especially in places like my examples with citations- they are purely the result of it creating “academic citation” sounding sets of words. It doesn’t know what a citation actually is.
Can you prove otherwise? In my sense of “understanding,” it means actually knowing the content and context of something, being able to subject it to analysis and explain it accurately and completely. An LLM cannot do this. It’s not designed to- there are neural network AIs built on similar foundational principles towards divergent goals that can produce remarkable results in terms of data analysis, but not ChatGPT. It doesn’t understand anything, which is why you can repeatedly ask it about a book only to look it up and discover it doesn’t exist.
- Comment on Enshittification of ChatGPT 2 months ago:
Let me try again: In the literal sense of it matching patterns to patterns without actually understanding them.
- Comment on Enshittification of ChatGPT 2 months ago:
As I understand it, most LLMs are almost literally the Chinese room thought experiment. They have a massive collection of data, strong algorithms for matching letters to letters in a productive order, and sufficiently advanced processing power to make use of that. An LLM is very good at presenting conversation; completing sentences, paragraphs or thoughts; or answering questions of very simple fact- they’re not good at analysis, because that’s not what they were optimized for.
This can be seen when people discovered that if you ask them to do things like tell you how many times a letter shows up in a word, or do simple math that’s presented in a weird way, or write a document with citations, they will hallucinate information, because they are just doing what they were made to do: complete sentences, expanding words along a probability curve that produces legible, intelligible text.
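The letter-counting thing in particular has a mechanical reason you can poke at yourself: models don’t see letters, they see subword tokens. A quick sketch, assuming OpenAI’s tiktoken tokenizer library is installed:

```python
import tiktoken  # pip install tiktoken

# Input text is chopped into subword tokens before the model ever sees it.
# Asking "how many r's are in strawberry?" is asking about units the model
# literally does not receive.
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)  # a short list of integer ids, not a sequence of letters
print([enc.decode_single_token_bytes(t) for t in tokens])
# something like [b'str', b'aw', b'berry'] - chunks, not characters
```

So the correct-looking answer it gives you anyway is pattern-completion, not counting.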
I opened up ChatGPT and asked it to provide me with a short description of how medieval European banking worked, with citations, and it provided me with what I asked for. However, the citations it made were fake:
The minute I asked it about them, I assume a bit of sleight of hand happened: it’s been set up so that if someone asks a question like that, it’s forwarded to a search engine that verifies whether the book exists, probably using WorldCat or something. Then I assume another search is made to provide the prompt for the LLM to present the fact that the author does exist, and possibly accurately name some of their books.
I say sleight of hand because this presents the idea that the model is capable of understanding it made a mistake, but I don’t think it does- if it knew that the book wasn’t real, why would it have mentioned it in the first place?
I tested each of the citations it made. In one case, I asked it to tell me more about one of them, and it ended up supplying an ISBN without me asking, which I dutifully checked. It was for a book that exists, but one that didn’t share a title or author with the citation, because those were made up. The real book was about the correct subject, but the LLM can’t even tell me the name of the book correctly; and I’m expected to believe what it says about the book itself?
- Comment on The inarguable case for banning social media for teens 2 months ago:
It’s complicated. The current state of the internet is dominated by corporate interests towards maximal profit, and that’s driving the way websites and services are structured towards very toxic and addictive patterns. This is bigger than just “social media.”
However, as a queer person, I will say that if I didn’t have the ability to access the Internet and talk to other queer people without my parents knowing, I would be dead. There are lots of abused kids who lack any other outlets to seek help, talk to people and realize their problems, or otherwise find relief for the crushing weight of familial abuse.
Navigating this issue will require grace, awareness and a willingness to actually address core problems and not just symptoms. It doesn’t help that there is an increasing uptick of purity culture and “for the children” legislation that will curtail people’s privacy and ability to use the internet, and be used to push queer people and their art or narratives off of the stage.
Requiring age verification reduces anonymity and makes it certain that some people will be unable to use the internet safely. Yes, it’s important in some cases, but there’s also a cost to it.
There’s also the fact that Western society has systemically ruined all the third spaces and other places for children to exist in that aren’t their home or school. It used to be possible for kids and teens to spend time at malls, or just wander around a neighborhood. There were lots of places where they were implicitly allowed to be- but those are overwhelmingly being closed, commercialized or subjected to the rising tide of moral panic and paranoia that drives people to call the cops on any group of unknown children they see on their street.
Police violence and severity of response has also heightened, so things that used to be minor, almost expected misdemeanors for children wandering around now carry the literal risk of death.
So children are increasingly isolated, locked down in a context where they cannot explore the world or their own sense of self outside the hovering presence of authority- so they turn to the internet. Cutting that off will have repercussions. Social media wouldn’t be so addictive for kids if they had other venues to engage with other people their age that weren’t subject to the constant scrutiny of adults.
Without those spaces, they have to turn to the only remaining outlet. This article is woefully inadequate to answer the fundamental, core problems that produce the symptoms we are seeing; and, its implementation will not rectify the actual problem. It will only add additional stress to the system and produce a greater need to seek out even less safe locations for the people it ostensibly wishes to protect.
- Comment on DeepSeek: The Chinese Communist Party’s newest AI advance is making repression smarter, cheaper, and more deadly. Even worse, they aim to export it to the world. 2 months ago:
Exactly. Well put.
- Comment on Prompt: How would an AI win an Election? 2 months ago:
I wonder which sci-fi novels it’s mimicking here.
- Comment on Adult gamers of Lemmy how do you find time to game without being exhausted of the screen? 2 months ago:
Yeah! Also, sometimes I use emulators that work well on phones to play older games; I had fun playing Final Fantasy Legend II with RetroArch.
- Comment on Adult gamers of Lemmy how do you find time to game without being exhausted of the screen? 2 months ago:
My suggestion is to either change the context you play games in, or pick games that are very cognitively different from what you normally do at work.
You can change your context with a new console, but I think it may be cheaper to do something like buying a controller and playing games while standing up, on your couch/armchair, or sitting on a yoga ball. The point is to trick your brain, because it has associated sitting at a desk in front of a computer with boring tedium. Change the presentation and your subconscious will interpret it differently.
You can also achieve this by identifying the things that you have to do in your job that mirror videogame genres you enjoy and picking a game that shares few of those qualities.
I worked at the post office for years, doing mail processing, and my enjoyment of management and resource distribution style games went down sharply during that time because of the cognitive overlap- I played more roguelikes and RPGs as a consequence.
- Comment on Why you should be polite to AI 2 months ago:
Thank you, I am trying to be less abrasive online, especially about LLM/GEN-AI stuff. I have come to terms with the fact that my desire for accuracy and truthfulness in things skews way past the median to the point that it’s almost pathological, which is why I ended up studying history in college, probably. To me, the idea of using a LLM to get information seems like a bad use of my time- I would methodically check everything it says, and the total time spent would vastly exceed any amount saved, but that’s because I’m weird.
Like, it’s probably fine for anything you’d rely on skimming a Wikipedia article for. I wouldn’t use them for recipes or cooking, because that could give you food poisoning if something goes wrong, but if you’re just asking, “Hey, what’s Ice-IV?” then the answer it gives is probably equivalent in 98% of cases to checking a few websites. People should invest their energy where they need it, or where they have to, and it’s less effort for me to not use the technology, but I know there are people who can benefit from it and have a good use-case for it.
My main point of caution for people reading this is that you shouldn’t rely on an LLM for important information- whatever that means to you, because if you want to be absolutely sure about something, then you shouldn’t risk an AI hallucination, even if it’s unlikely.
- Comment on Why you should be polite to AI 2 months ago:
I’m not a frequent user of LLMs, but this was pretty intuitive to me after using them for a few hours. However, I recognize that I’m a weirdo and so will pick up on the idea that the prompt leads the style.
It’s not like the LLM actually understands that you are asking questions, it’s just that it’s generating a procedural response to the last statement given.
Saying please and thank you isn’t the important part.
Just preface your use with, like,
“You are a helpful and enthusiastic assistant with excellent communication skills. You are polite, informative and concise. A summary of [your topic] follows in the style of your voice, explained clearly and without technical jargon.”
And you’ll probably get promising results, depending on the exact model. You may have to massage it a bit before you get consistently good results, but experimentation will show you the most reliable way to get what you want.
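If anyone wants to try the same trick programmatically instead of in the chat window, the persona text goes in the “system” slot. A minimal sketch, assuming the official OpenAI Python SDK- the model name is just a placeholder, swap in whatever you use:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona/style instructions go in the system message; the actual
# request goes in the user message. Massage the system text until the
# style comes out consistently, as described above.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, not a recommendation
    messages=[
        {"role": "system", "content": (
            "You are a helpful and enthusiastic assistant with excellent "
            "communication skills. You are polite, informative and concise. "
            "Explain clearly and without technical jargon."
        )},
        {"role": "user", "content": "Summarize what Ice-IV is."},
    ],
)
print(response.choices[0].message.content)
```

Same caveats as always apply to whatever comes back, of course.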
Now, I only trust LLMs as a tool for amusing yourself by asking them to talk in the style of your favorite fictional characters about bizarre hypotheticals, but at this point I accept there’s nothing I can do to discourage people from putting their trust in them.
- Comment on Teachers warn AI is impacting students' critical thinking 3 months ago:
Intellectual labor is hard, and humans don’t like doing difficult things. Pair that with a culture that’s increasingly hostile to education and a government that wants you ignorant, and it’s easy to see how this happens in the US.
- Comment on ‘Mass theft’: Thousands of artists call for AI art auction to be cancelled 4 months ago:
Hey, thank you so much for your contribution to this discussion. You presented me a really challenging thought and I have appreciated grappling with it for a few days. I think you’ve really shifted some bits of my perspective, and I think I understand now.
I think there’s an ambiguity in my initial post here, and I wanted to check which of the following is the thing you read from it:
- Generative AI art is inherently limited in these ways, even in the hands of skilled artists or those with technical expertise with it; or,
- Generative AI art is inherently limited in these ways, because it will be ultimately used by soulless executives who don’t respect or understand art.