Phanatik
@Phanatik@kbin.social
- Comment on Recall drawing regulatory scrutiny in the UK — Microsoft's AI Copilot+ feature a 'privacy nightmare' 5 months ago:
The payment model is largely irrelevant. The feature is by design a privacy nightmare, so even having it available as an option to users is dangerous. How they thought they'd get this past the EU is beyond me.
- Comment on Why can't people make ai's by making a neuron sim and then scaling it up with a supercomputer to the point where it has a humans number of neurons and then raise it like a human? 6 months ago:
What you're alluding to is the Turing test, and it hasn't been proven that any LLM would pass it. As it stands, there are people who have failed the inverse Turing test, unable to ascertain whether what they're speaking to is a machine or a human. The latter can be done, and has been done, by things far less complex than LLMs, so it isn't proof of an LLM's capabilities over more rudimentary chatbots.
You're also suggesting that it minimises the complexity of its outputs. My determination is that what we're getting is the limit of what it can achieve. You'd have to prove that any allusion to higher intelligence can't be attributed to coercion by the user, or to the model simply hallucinating an imitation of artificial intelligence as portrayed in media.
There are elements of the model that are very fascinating, like how it organises language into these contextual buckets, but this is still a predictive model. Understanding that certain words appear near each other in certain contexts is hardly intelligence; it's a sophisticated machine learning algorithm.
- Comment on Why can't people make ai's by making a neuron sim and then scaling it up with a supercomputer to the point where it has a humans number of neurons and then raise it like a human? 6 months ago:
I mainly disagree with the final statement, on the basis that LLMs are just more advanced predictive text algorithms. The way they've been set up, with a chatbox where you're interacting directly with something that attempts human-like responses, gives off the misconception that the thing you're talking to is more intelligent than it actually is. It gives a strong appearance of intelligence, but at the end of the day it predicts the next word in a sentence based on what was said previously, without doing a particularly good job of comprehending what exactly it's telling you. It's very confident when it gives responses, which also means that when it's wrong, it delivers the incorrect response just as confidently.
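To make that concrete, here's a toy sketch of "predict the next word from what came before" (the corpus and the counting approach are made up for illustration; a real LLM uses learned weights over long contexts rather than a lookup table):

```python
# Toy illustration of next-word prediction: count which word tends to
# follow which, then always pick the most frequent continuation.
# There is no comprehension here, only frequency.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a bigram table: for each word, count the words that follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word, if any."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # 'cat' - a probability lookup, not understanding
```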
- Comment on Vanguard takes screenshots of your PC every time you play a game 6 months ago:
Tbf it's a compounding issue. It breaks Linux support because Vanguard demands kernel-level access, which Linux will never give it.
- Comment on Vanguard takes screenshots of your PC every time you play a game 6 months ago:
And why I stopped playing
- Comment on First Image of Christian Bale as Frankenstein in Maggie Gyllenhaal’s ‘THE BRIDE’ - October 2025 7 months ago:
Shout-out to this film being the only one in my life to put me to sleep.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
First of all, we're not having a debate and this isn't a courtroom so avoid the patronising language.
Second of all, my "belief" on the models' plagiarism is based on technical knowledge of how the models work and not how I think they work.
a machine is now able to do a similar job to a human
This would be impressive if it was true. An LLM is not intelligent simply through its appearance of intelligence.
It's enabling humans
It's a chatbot that automates Google searches; let's be clear about what this can do. It has taken natural language processing and applied it through an optimisation algorithm to produce human-like responses.
No, I disagree at a fundamental level. Humans need to compete against each other and against ourselves to improve. Just because an LLM can write a book for you doesn't mean you've written a book. You're just lazy. You don't want to put in the work every other writer in existence has done: to mull over their work and consider the emotions and the effect they want to have on the reader. To what extent can an LLM replicate the way George RR Martin describes his world without entirely ripping off his work?
i’d question why it’s unethical, and also suggest that “stolen” is another emotive term here not meant to further the discussion by rational argument
If I take a book you wrote from you without buying it or paying you for it, what would you call that?
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
I don't control the upvotes so I don't know why that's directed at me.
The refutation was based around a misunderstanding of how LLMs generate their outputs and how the training data assists the LLM in doing what it does. The article itself tells you ChatGPT was trained on copyrighted material they were not licensed for. The person I responded to suggested that comedians do the same thing with their work, but that equates the process an LLM uses when producing an output to a comedian writing jokes.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
Neither is an LLM. What you’re describing is a primitive Markov chain.
My description might've been indicative of a Markov chain, but the actual framework uses matrices, because you need to be able to store and compute a huge amount of information at once, which is what matrices are good for. They're used in animation too, if you didn't know.
What it actually uses is irrelevant; how it uses those things is the same as a regression model, the difference being scale. A regression model looks at how related variables are to an outcome and computes weights to give you the best prediction. This was the machine learning boom a couple of years ago, when TensorFlow became really popular.
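If you've never seen one, this is roughly what I mean by a regression model computing weights (a minimal sketch with made-up data, not a claim about how any particular LLM is trained):

```python
# Minimal linear regression: learn weights that map an input to an outcome
# by repeatedly nudging them to reduce the prediction error.
# LLMs run on the same principle, just with billions of weights and text.
import numpy as np

# Made-up data: y is roughly 3*x + 2 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3 * x + 2 + rng.normal(0, 1, size=100)

w, b = 0.0, 0.0   # the "weights" the model learns
lr = 0.01         # learning rate

for _ in range(2000):
    error = (w * x + b) - y
    # Gradient descent on mean squared error.
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up close to 3 and 2
```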
LLMs are an evolution of the same idea. I'm not saying it's not impressive, because what they were able to do is very cool. What I take issue with is the branding, the marketing and the plagiarism. I happen to sit at the intersection of working in the same field, being an avid fan of classic sci-fi, and being a writer.
It's easy to look at what people have created throughout history and think "this looks like that", and on a point-by-point basis you'd be correct, but the creation of that thing is shaped by the lens of the person creating it. We might hear a George Carlin joke today and then read the same idea in a newspaper from 200 years ago. Did Carlin steal the idea? No. Was he aware of that information? I don't know. But Carlin regularly called upon his own experiences, so it's likely he was referencing an event from his past that happened to resemble the one from 200 years ago. He might've subconsciously absorbed the information.
The point is that the way these models have been trained is unethical. They used material they had no license to use and they've admitted that it couldn't work as well as it does without stealing other people's work. I don't think they're taking the position that it's intelligent because from the beginning that was a marketing ploy. They're taking the position that they should be allowed to use the data they stole because there was no other way.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
You are spitting out basic points and attempting to draw similarities because our brains are capable of something similar. The difference between what you've described and what LLMs do is that we have experiences from which we can glean a variety of information. An LLM sees text, and all it's designed to do is say "x is more likely to appear after y than z". If you fed it nonsense, it would regurgitate nonsense. If you feed it text from racist sites, it will regurgitate that same language, because that's all it has seen.
You'll read this and think "that's what humans do too, right?" Wrong. A human can be fed these things and still reject them. Someone else in this thread has made some good points on this, but I'll state them here as well. An LLM will tell you information, but it has no cognition of what it's telling you. It has no idea whether it's right or wrong; its job is to convince you that it's right, because that's the success state. If you tell it it's wrong, that's a failure state. The more you speak with it, the more failure states it accumulates, and the more likely it is to cut off communication because it isn't reaching a success, it isn't giving you what you want. The longer the conversation goes on, the crazier LLMs get too, because it's too much to process at once, holding all those contexts in memory while trying to predict the next word. Our brains do this easily, and so much more. To claim an LLM is intelligent is incredibly misguided; it is merely the imitation of intelligence.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
It's so funny that this is being treated as something new. This was Grammarly's whole schtick since before ChatGPT, so how different is Grammarly AI?
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
Yeah, except a machine is owned by a company and doesn't consume the same way. It breaks down copyrighted works into data points so it can find the best way of putting those data points together again. If you understand anything at all about how these models work, you know they do not consume media the way we do. It is not an entity with a thought process or consciousness (despite what the misleading marketing of "AI" would have you believe); it's an optimisation algorithm.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
So, as a data analyst, a lot of my work is done through a computer, but I can apply the same skills if someone hands me a piece of paper with data printed on it and tells me to come up with solutions to the problems in it. I don't need the computer to do what I do; it makes it easier to manipulate data, but the degree of problem solving required has to be done by a human, and that's why it's my job. If a machine could do it, then it would be doing it, but it isn't, because contrary to what people believe about data analysis, you have to be somewhat creative to do it well.
Crafting a prompt is an exercise in trial and error. It's work but it's not skilled work. It doesn't take talent or practice to do. Despite the prompt, you are still at the mercy of the machine.
Even by the case you've presented, I have to ask: at what point does a human editing the output of a generative model make it their own work and not the machine's? How much do you have to change? Can you give me a percentage?
Machines were intended to automate the tedious tasks we all have to suffer through, to free up our brains for more engaging things, which might include creative pursuits. Automation exists to make your life easier, not to rob you of life's pursuits or your livelihood. It never should've been used to produce creative work, and I find the attempts to equate this abomination's outputs to what artists have been doing for years utterly deplorable.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
One difference is that the photographer has to go to the places they're taking pictures of.
Another is that photography isn't comparable to painting and never has been. I'm willing to bet photographs and paintings have never competed in the same contest. Yet when people say their generative art is comparable to what artists have been producing by hand, they're admitting that generative art has more in common with photography than it does with hand-crafted art, while still wanting the prestige and recognition those artists get for their work.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
Exactly! You can glean so much from a single work: not just about the work itself, but about who created it, what ideas they were trying to express, what that tells us about the world they lived in, and how they saw that world.
This doesn't even touch on the fact that I'm learning to draw not by looking at other drawings but by looking at what exactly I'm trying to draw. I know that, at a base level, a drawing is a series of shapes made by hand, whether through a digital medium or traditional pen/pencil and paper. But the skill isn't being able to replicate other drawings; it's being able to convert something I can see into a drawing. If I'm drawing someone sitting in a wheelchair, I'll get the pose of them sitting in the wheelchair, but I can add details I want to emphasise or remove details I don't. There's so much that goes into creative work, and I'm tired of arguing with people who have no idea what it takes to produce creative works.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
You say that yet I initially responded to someone who was comparing an LLM to what a comedian does.
There is no unique method because there's hardly anything unique you can do. Two people using Stable Diffusion to produce an image are putting in the same amount of work. One might put more time into crafting the right prompt but that's not work you're doing.
If 90% of the work is handled by the model and you just layer on whatever extra thing you wanted, that doesn't mean you created the thing. It doesn't mean you have much control over the output either. You're effectively negotiating with this machine to produce what you want.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
Yeah, but the difference is that we still choose our words. We can still alter sentences on the fly. I can think of a sentence and understand that verbs go after the subject, but I still have the cognition to alter the sentence to have the effect I want. The thing lacking in LLMs is intent, and I've yet to see anyone tell me why a generative model decides to draw more than six fingers. As humans we know hands generally have five fingers, and that some people have a different number, so if we wanted to draw a person with a different number of fingers, we could. A generative art model can't help drawing extra fingers, because all it understands is that "finger + finger = hand"; it has no concept of when to stop.
- Comment on OpenAI says it’s “impossible” to create useful AI models without copyrighted material 10 months ago:
A comedian isn't forming a sentence based on which word is most likely to appear after the previous one. This is such a bullshit argument; it reduces human competency to "monkey see thing, draw thing" and completely overlooks the craft and intent behind creative works. Do you know why ChatGPT uses certain words over others? Probability. It decided, as a result of its training, that one word would follow another in certain contexts. It absolutely doesn't take into account things like "maybe this word would be better here because its sound and syllables maintain the flow of the sentence".
Baffling takes from people who don't know what they're talking about.
- Comment on Backlog in NHS and courts will take 10 years to clear, says thinktank 10 months ago:
Ah yes, let's unfuck our NHS funding crisis by fucking up people's livelihoods. The IPPR are a bunch of reptiles in skinsuits who would rather sacrifice people's lives for the sake of protecting the upper class.
- Comment on Starfield's new PC patch delivers the game we should have had at launch - Eurogamer 1 year ago:
I doubt a patch or mod support will motivate me to play this game. This is the emptiest Bethesda game they've released, when they could've had something special if they had any ambition.
- Comment on [deleted] 1 year ago:
I meant the Marathon trilogy. I'd be so keen to get the original floppy disks for 2 and Infinity.
- Comment on [deleted] 1 year ago:
Oh man, did you have the entire trilogy? I hope you can find them! CDs are incredibly easy to dump: you just need a disc drive, and Linux has easy tools for copying the data into an ISO file.
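At its core the dump is just a raw, block-for-block copy of the disc. Tools like `dd` handle it, but the idea is simple enough to sketch in Python (the device path `/dev/sr0` is the usual one on Linux but yours may differ, and the output filename is just an example):

```python
# Rough sketch: copy a data CD block-for-block into an .iso image.
# Assumes the optical drive shows up at /dev/sr0 and that you have
# read permission on the device - adjust the path for your system.
import shutil

DEVICE = "/dev/sr0"        # assumed device node for the optical drive
OUTPUT = "marathon2.iso"   # call the image whatever you like

with open(DEVICE, "rb") as disc, open(OUTPUT, "wb") as image:
    shutil.copyfileobj(disc, image, length=2048 * 1024)  # copy in 2 MiB chunks

print(f"Dumped {DEVICE} to {OUTPUT}")
```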
- Comment on [deleted] 1 year ago:
I agree for the most part, however, unless someone had dumped the games in the first place, the emulation wouldn't be possible. It's important that people know how to dump their games because they might be sitting on games that haven't been uploaded yet. I mainly use vimm.net to find ROMs and it tells you how complete the collections are and which games are missing.
- Comment on [deleted] 1 year ago:
Emulating games is important, but I would argue that preserving them is even more so. If you have discs of old games lying around (I grabbed the original floppy disk version of Marathon by Bungie for less than 5 quid), please find out how to dump them into an ISO or some other archive. It's more important now than ever, as games tend towards digital distribution and old games are lost to time. The games don't have to be good; they just need to be preserved.
- Comment on What happened to the flat earthers who demonstrated that the earth is round in the netfilx documentary ? 1 year ago:
I don't remember whether it was said in the doc itself or in a video discussing the doc (I think by Hbomberguy), but the line was that "they (flat earthers) are attempting a form of science".
To me, what that says is that if they were intellectually honest and genuinely curious, they would've changed their views in the face of contradictory evidence. Time and time again, they showed that they weren't willing to do that, even after seeing the results of their own experiments.
As you said, they've staked too much on this notion that the Earth is flat and can't afford to give up the grift now.
- Comment on What happened to the flat earthers who demonstrated that the earth is round in the netfilx documentary ? 1 year ago:
The problem with flat earthers is that they don't listen to reason.
You can't reason someone out of a position they didn't reason themselves into.
- Comment on Republican Senators 1 year ago:
Can anyone explain why they all look like they have tomatoes for heads?
- Comment on Humble Bundle - WB 100: Play the Legends 1 year ago:
It's changed. You can see what games are in the Choice bundle before you pay for them. They make them visible as soon as the previous month's bundle ends.
- Comment on Starfield's lead quest designer leaves Bethesda to join other RPG veterans making a new open-world game 1 year ago:
Yep, no legitimate criticism to be found. None whatsoever. Just wait for mods, they'll fix the game for free. The multi-million dollar studio did nothing wrong.
- Comment on Starfield's lead quest designer leaves Bethesda to join other RPG veterans making a new open-world game 1 year ago:
I don't work at Bethesda, so I'm not going to claim this is in any way accurate. Maybe the reason they left was that they weren't allowed to design interesting quests and were tired of being railroaded. I say this because any quest designer is essentially a storyteller, so for the quests to be this bland and lacking in character has to be intentional.