cross-posted from: lemmy.world/post/45471761
Valve’s customer service responses have always been mostly a canned series of bot messages.
Their in-house support has always been 99% automated.
It's very obvious if you've ever interacted with them at more than an occasional, superficial level.
You have to be quite persistent to get a message from an actual human being.
Yep, the automated messages often have the name of an ostensibly human agent attached to them.
So do all kinds of other bots, since way before ChatGPT and LLMs took off.
What, did you think a human being actually read every single complaint report about a hacker or cheater in a video game with anti-cheat?
No! You have bots and analytic systems screen that shit, just the same as all our resumes on Indeed have been analyzed and evaluated by bots, again, since way before LLMs got as prevalent as they are today.
Then you filter. Humans only see the odd ones that defy categorization, basically.
This has been a tech industry standard for almost two decades.
Valve is just now overhauling that system to use an LLM, because those are actually better than a very complex series of chained regex searches.
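The kind of pre-LLM triage described above is, conceptually, just a chain of pattern matches with a human fallback. A minimal sketch (the rules and category names here are made up for illustration, not anything Valve actually uses):

```python
import re

# Hypothetical triage rules: each pattern routes a ticket to a canned
# response category; anything unmatched escalates to a human.
RULES = [
    (re.compile(r"\b(aimbot|wallhack|speedhack)\b", re.I), "cheat_report"),
    (re.compile(r"\brefund\b", re.I), "refund_request"),
    (re.compile(r"\b(hacked|stolen) account\b", re.I), "account_security"),
]

def triage(ticket_text: str) -> str:
    """Return the first matching category, or escalate to a human."""
    for pattern, category in RULES:
        if pattern.search(ticket_text):
            return category
    return "human_review"
```

Scale that up to hundreds of chained rules and you get the brittle mess an LLM classifier is meant to replace.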
The alternative would be to do what Meta or Google or Amazon do: Hire armies of tens to hundreds of thousands of offshore contractors and give them all PTSD for pitiful wages, manually evaluating everything.
Apparently this is not widely known by people who've never worked at an enterprise-level tech company?
Quetzalcutlass@lemmy.world 6 hours ago
I know Valve wants to remain a small-ish company, but automating support has literally never improved things. It’s worse if it’s tied into their anti-cheat - a false positive can lock you and your entire family out of multiplayer, and good luck getting a human to overturn it after they’ve mostly been moved to other teams.
I’d say it’s weird they didn’t focus on using this to boost their nearly nonexistent community moderation, but I’ve been told their hands-off approach is deliberate due to a libertarian bent among the higher ups.
ColeSloth@discuss.tchncs.de 15 minutes ago
The non existent community moderation is by design and purpose. Valve wants it that way. They refuse to be any sort of gatekeepers in it.
Squizzy@lemmy.world 34 minutes ago
Their support staff are always being commended, seems odd to me.
At the same time they allow Russian war crime simulators.
Godort@lemmy.ca 6 hours ago
One thing Valve is known for is testing things. They typically make sure technology works before rolling it out everywhere.
I'm willing to bet that they have either solved most of the problems a tool like this has by massively limiting its scope, or it never actually gets past a beta test phase.
warmaster@lemmy.world 5 hours ago
This. They have explicitly said that they are testing AI applications throughout the company and that it is not a concerted effort. It’s a few devs wanting to try it to see if it actually adds real value or not. That’s it.
sp3ctr4l@lemmy.dbzer0.com 4 hours ago
The file and class or function name or w/e literally has .proto in it.
As in prototype.
False@lemmy.world 4 hours ago
This is an incorrect assertion. Making common actions self service without needing a human is almost always a customer win. For example automatic refunds on request if your request meets the correct criteria, instead of needing a human to look at it and make an arbitrary decision. Or having a knowledge base of common issues that can help people fix problems on their own without needing to talk to a person. Both are much faster and more repeatable.
Agent_Karyo@piefed.world 1 hour ago
But this is not viable for every use case. If there is a major issue with my bank account, I want to speak to a person, period.
Having automated workflows for specific actions is of course a good thing.
Documentation is also good, but it often doesn't account for edge cases or your unique situation. Not to mention, the majority of the public is not going to have the desire to deal with documentation.
ampersandrew@lemmy.world 6 hours ago
They improved their support ticket throughput by orders of magnitude by automating a lot of it already. There are lots of versions of automation, too, like collecting information about the user’s problem before you even get to a human.
Quetzalcutlass@lemmy.world 6 hours ago
Right, but there's a difference between automating a refund if they can detect the purchase happened in the last two weeks and has less than two hours of playtime, versus complex support problems being handled by an LLM that can be misled or hallucinate.
I suppose it’s fine if it’s limited to giving advice on solving the problem and has to escalate to a human if any server side action is required, but it being tied to anti-cheat has me worried that’s not the case.
cybervseas@lemmy.world 6 hours ago
I think it could have been an interesting usecase to chat with a steambot to get game recommendations.
sp3ctr4l@lemmy.dbzer0.com 3 hours ago
This is not meant to be a chatbot.
It is meant to evaluate gaming sessions of CS2.
It's an experimental prototype for improving VAC's server-side, backend analysis capabilities, to better detect cheaters and hackers.
You don't need kernel-level access into everyone's PCs.
You can run analytics on what the server records as happening in the game session, to detect odd patterns and things that should be impossible.
LLMs are … the entire thing that they do is handle massive inputs of data, and then evaluate that data.
The part of an LLM that generates a response, in text form, to that data, is a whole other thing.
They can also output… code, or spreadsheets, or images, or 3d models, or… any other kind of data.
Like say, a printout of suspicious activity in a game session, with statistically derived confidence intervals and timestamps and analysis.
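To make the idea concrete: server-side detection can boil down to scanning recorded session data for events that exceed plausible human limits. A toy sketch (the field names and the threshold are completely made up for illustration; real anti-cheat heuristics are far more sophisticated):

```python
from dataclasses import dataclass

@dataclass
class TickSample:
    """Hypothetical per-tick sample from a recorded game session."""
    timestamp: float        # seconds into the session
    view_delta_deg: float   # how far the aim moved since the last tick
    hit: bool               # whether this tick landed a hit

# Assumed threshold for illustration only, not Valve's actual number.
MAX_HUMAN_FLICK_DEG_PER_TICK = 40.0

def flag_suspicious(samples: list[TickSample]) -> list[float]:
    """Return timestamps where an aim snap exceeds a plausible human
    limit AND immediately lands a hit -- the 'should be impossible'
    pattern server-side analysis can catch without kernel access."""
    return [s.timestamp for s in samples
            if s.hit and s.view_delta_deg > MAX_HUMAN_FLICK_DEG_PER_TICK]
```

The LLM angle is feeding that kind of flagged output, plus the raw session context, into a model for higher-level evaluation instead of hand-tuning thresholds.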
FartMaster69@lemmy.dbzer0.com 6 hours ago
I’d rather cut out my eyes than talk to a robot about my steam library.
Quetzalcutlass@lemmy.world 6 hours ago
Their current recommendation engine is already a marvel and the only one I’ve ever come across that actually directs me to stuff I might be interested in.