Comment on Square Enix says it wants generative AI to be doing 70% of its QA and debugging by the end of 2027

subignition@fedia.io 19 hours ago
There's plenty of room for sophisticated automation without any need to involve AI.

Grimy@lemmy.world 15 hours ago
If it does the job better, who the fuck cares. No one actually cares about how you feel about the technology. Cry me a river.

_stranger_@lemmy.world 13 hours ago
The problem is that if it doesn't do a better job, no one left in charge will even know enough to give a shit, so quality will go down.
UnderpantsWeevil@lemmy.world 19 hours ago
I mean, as a branding exercise, every form of sophisticated automation is getting the “AI” label.
Past that, advanced pathing algorithms are what QA systems need to validate all possible actions within a space. That's the bread-and-butter of AI. It's also generally how you'd describe simulated end-users on a test system.
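For a sense of what that kind of purpose-built automation looks like, here's a minimal sketch (all names and the toy state graph are hypothetical, not from the article): a breadth-first search that exhaustively walks every reachable game state and records the action sequence that reaches each one, so a crash state comes with its own reproduction steps.

```python
from collections import deque

# Hypothetical hand-coded state graph: state -> {action: next_state}.
# "crash" marks a state a human tester might never stumble into.
WORLD = {
    "menu":   {"start": "level1"},
    "level1": {"jump": "ledge", "walk": "pit"},
    "ledge":  {"drop": "level1", "glitch": "crash"},
    "pit":    {},  # dead end, but not a bug
}

def explore(world, start):
    """Visit every reachable state once, recording the action path to each."""
    paths = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for action, nxt in world.get(state, {}).items():
            if nxt not in paths:
                paths[nxt] = paths[state] + [action]
                queue.append(nxt)
    return paths

paths = explore(WORLD, "menu")
# The bot reports reproduction steps for the bad state automatically:
print(paths["crash"])  # ['start', 'jump', 'glitch']
```

Note this is deterministic and explainable end to end: no model, no hallucinated bug reports, just systematic coverage of the action space.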
subignition@fedia.io 10 hours ago
The important distinction, I think, is that these things are still purpose-built and (mostly) explainable. When you have a bunch of nails, you design a hammer.

An "AI bot" QA tester of the kind Booty describes in the article isn't going to be an advanced algorithm that carries out specific tests; that already exists and has for years. Most likely he's asking for something that will figure out which tests are worth doing when given a vague or nonexistent test plan. You need a human, or an actual AGI, for something on that level, not generative AI.

And with generative AI specifically, as it pertains to Square Enix's initiative in the article, there are the typical huge risks of verifiability and hallucination. However unpleasant you may think a QA worker's job is now, I guarantee it will be even more unpleasant when the job consists of fact-checking AI bug reports all day instead of actually doing the testing.