I would initially tap the brakes on this, if for no other reason than “AI doing QA” reads more like a corporate buzzword than material policy. Big software developers should already have much of their QA automated, at least at the base layer. Further automating QA is generally better business practice, as it helps catch more bugs earlier in the dev/test cycle.
Then consider that hands-on QA work is historically a miserable, soul-sucking job. Converting those roles to debuggers and active devs does a lot for both the business and the workforce. Compared to “AI is doing the art,” this is night and day: the very definition of the “getting rid of the jobs people hate so they can do the work they love” that AI was supposed to deliver.
Finally, I’m forced to drag out the old “95% of AI implementations fail” statistic. I’m far more worried that they’ll implement a model that costs a fortune and delivers mediocre results than that they’ll implement an AI-driven round of end-user testing.
Turning QA over to the Roomba AI to find corners of the setting that snag the user would be Gud Aktuly.
Brutticus@midwest.social 1 day ago
Not even from an ethical standpoint. Color me shocked if these games are, like, playable
LostWanderer@fedia.io 1 day ago
Exactly, as I don't expect QA done by something that can't think or feel to know what actually needs to be fixed. AI is a hallucination engine that agrees rather than points out issues; in some cases it might call attention to non-issues and let critical bugs slip by. The ethical issues are still significant and play into the reason why I would refuse to buy any more Square Enix games going forward. I don't trust them to walk this back; they are high on the AI lie. Human-made games with humans handling the QA are the only games that I want.
NuXCOM_90Percent@lemmy.zip 1 day ago
That is a very small part of QA’s responsibility. Mostly it is about testing and identifying bugs that get triaged by management. The person running the tests is NOT responsible for deciding what can and can’t ship.
And, in that regard… this is actually a REALLY good use of “AI” (not so much generative). Imagine something like the old “A* algorithm plays Mario” demos, where the point is finding different paths to accomplish the same goal (e.g. a quest) and immediately having a log of exactly what steps led to the anomaly for the purposes of building a reproducer (rough sketch at the end of this comment).
Which actually DOES feel like a really good use case… albeit with massive computational costs (so… “AI”).
That said: it also has all of the usual labor implications. But from a purely technical “make the best games” standpoint? Managers overseeing a rack that is running through the games 24/7 for bugs that they can then review and prioritize seems like a REALLY good move.
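To make the “search agent plays the game” idea concrete, here's a minimal sketch of what that exploration-plus-repro-logging could look like. Everything in it (the `GameSim` interface, its methods, the anomaly check) is a hypothetical stand-in for a headless build of the game, not any real engine API, and the A*-style loop is just one possible way to bias the search toward the quest goal.

```python
import heapq
import itertools
from typing import Any, List

class GameSim:
    """Hypothetical stand-in for a headless game build exposing a step API."""
    def initial_state(self) -> Any: ...
    def legal_actions(self, state: Any) -> List[str]: ...
    def step(self, state: Any, action: str) -> Any: ...
    def heuristic(self, state: Any) -> float: ...   # estimated distance to the quest goal
    def is_goal(self, state: Any) -> bool: ...
    def is_anomaly(self, state: Any) -> bool: ...   # crash, out-of-bounds, broken trigger, etc.

def explore(sim: GameSim, max_nodes: int = 100_000) -> List[List[str]]:
    """A*-style exploration; returns action traces that reproduce anomalies."""
    reproducers: List[List[str]] = []
    tie = itertools.count()                  # tiebreaker so heapq never compares raw states
    start = sim.initial_state()
    frontier = [(sim.heuristic(start), next(tie), 0, start, [])]
    seen = set()
    expanded = 0
    while frontier and expanded < max_nodes:
        _, _, cost, state, trace = heapq.heappop(frontier)
        key = hash(state)                    # assumes states can be fingerprinted somehow
        if key in seen:
            continue
        seen.add(key)
        expanded += 1
        if sim.is_anomaly(state):
            reproducers.append(trace)        # the trace IS the repro: exact inputs to replay
            continue
        if sim.is_goal(state):
            continue                         # quest finished on this path, nothing to log
        for action in sim.legal_actions(state):
            nxt = sim.step(state, action)
            heapq.heappush(
                frontier,
                (cost + 1 + sim.heuristic(nxt), next(tie), cost + 1, nxt, trace + [action]),
            )
    return reproducers
```

The point is the trace: whenever the agent trips over something weird, QA gets the exact input sequence to replay and triage, which is the same 24/7 rack-of-machines workflow described above.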
osaerisxero@kbin.melroy.org 1 day ago
They're already not paying for QA, so if anything this would be a net increase in resources allocated just to bring the machines onboard to do the task