I wish they'd replace Tim Sweeney with AI. Would genuinely have better takes on most topics, too. Sigh.
QuantumTickle@lemmy.zip 2 days ago
If “everyone will be using AI” and it’s not a bad thing, then these big companies should wear it as a badge of honor. The rest of us will buy accordingly.
Carighan@piefed.world 2 days ago
RizzRustbolt@lemmy.world 2 days ago
Can we get an AI version of the old burnout Tim Sweeney? He was at least unintentionally funny.
Devial@discuss.online 2 days ago
If “everyone will be using AI”, AI will turn to shit.
They can’t create anything original; they only recycle and recontextualise existing information. But if you recycle and recontextualise the same information over and over again, it keeps degrading more and more.
It’s ironic that the very people who advocate for AI everywhere fail to realise just how dependent the quality of AI content is on having real, human-generated content to train the model on.
4am@lemmy.zip 2 days ago
“The people who advocate for AI” are literally running around claiming that AI is Jesus and it is sacrilege to stand against it.
And by literally, I mean Peter Thiel is giving talks actually claiming this. This is not an exaggeration, this is not hyperbole.
They are trying to recruit techno-cultists.
EldritchFeminity@lemmy.blahaj.zone 1 day ago
Ironically, one of the defining features of the techno-cultists in Warhammer 40k is that they changed the acronym to mean “Abominable Intelligence” and not a single machine runs on anything more advanced than a calculator.
4am@lemmy.zip 1 day ago
Sci Fi keeps trying to teach us lessons, and instead we keep using it as an instruction manual.
Sl00k@programming.dev 2 days ago
I think the grey area is: what if you’re an indie dev who did the entire storyline and artwork yourself, but had the AI handle the more complex coding?
The result is, to our eyes, entirely original, but it used AI. Where do you draw the line?
Default_Defect@anarchist.nexus 2 days ago
Disclose the AI usage and how it was used. Let people decide. There will always be “no AI at all, ever” types that won’t touch the game, but others will see that it was used as a tool rather than a replacement for creativity and will give it a chance.
Devial@discuss.online 2 days ago
The line, imo, is: are you creating it yourself and just using AI to make that faster or more convenient, or is AI the primary thing creating your content in the first place?
Using AI for convenience is absolutely valid imo. I routinely use ChatGPT for things like debugging code I wrote, rewriting data sets in different formats instead of doing it by hand, or handling more complex search-and-replace jobs if I can’t be fucked to figure out a regex to cover them.
For these kinds of jobs, I think AI is a great tool.
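As a made-up example of the kind of mechanical search-and-replace job I mean (the dates and text here are invented for illustration), normalising US-style dates to ISO 8601 is a one-liner once someone hands you the regex:

```python
import re

text = "Released 03/14/2024, patched 11/02/2024."

# Capture month, day, year, then reorder them as year-month-day.
iso = re.sub(r"\b(\d{2})/(\d{2})/(\d{4})\b", r"\3-\1-\2", text)

print(iso)  # Released 2024-03-14, patched 2024-11-02.
```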
Sl00k@programming.dev 1 day ago
I definitely agree, but I think that case would still get caught by Steam’s AI usage badge?
irmoz@reddthat.com 1 day ago
That’s somewhat acceptable. The ideal use of AI is as a crutch - and I mean that literally. A tool that multiplies and supports your effort, but does not replace your effort or remove the need for it.
CatsPajamas@lemmy.dbzer0.com 1 day ago
How does this model collapse thing still get spread around? It’s not true. Synthetic data has actually helped models get smarter, not dumber. And if you think that all Gemini 3 does is recycle, idk what to tell you.
Devial@discuss.online 3 hours ago
If the model collapse theory weren’t true, then why do LLMs need to scrape so much data from the internet for training?
According to you, they should be able to just generate synthetic training data purely with the previous model, and then use that to train the next generation.
So why is there any need for human input at all then? Why are all the LLM companies fighting tooth and nail against their data scraping being restricted, if real human data is in fact so unnecessary for model training?
You can stop models from deteriorating without new data, and you can even train them on synthetic data, but that still requires the synthetic data to be either modelled or filtered by humans to ensure its quality.

If you just take a million random ChatGPT outputs, with no human filtering whatsoever, use them to retrain ChatGPT, and then repeat that over and over again, the model will eventually turn to shit. In each iteration, some of the random tweaks ChatGPT makes to its output will produce a bad result, which is then presented to the new training run as a target to achieve, so the model learns this bad output is less bad than it previously thought.
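That feedback loop is easy to show on a toy scale. A minimal sketch of the idea (my own illustration, nothing like how a real LLM is trained): “train” a model by fitting the mean and spread of some data, “generate” by sampling from that fit, and then retrain each generation on the previous generation’s output. With no fresh data, the spread collapses over generations:

```python
import random
import statistics

def train(samples):
    """'Train' a model: estimate the mean and spread of the data."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # slightly biased low on small samples
    return mu, sigma

def generate(mu, sigma, n):
    """'Generate' synthetic data by sampling from the fitted model."""
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 50
data = generate(0.0, 1.0, n)   # generation 0: "real" human data
sigma_start = train(data)[1]

for generation in range(1000):  # retrain each model purely on the last one's output
    mu, sigma = train(data)
    data = generate(mu, sigma, n)

print(f"spread at gen 0:    {sigma_start:.4f}")
print(f"spread at gen 1000: {train(data)[1]:.4f}")  # collapses toward zero
```

Each generation loses a little of the tail of the distribution, and those losses compound; filtering or mixing in real data is what breaks the loop.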
exu@feditown.com 1 day ago
Recycling sounds suspiciously like what “AAA” studios already do.