This is an incorrect assertion. Making common actions self-service, without needing a human, is almost always a customer win. For example, automatic refunds on request when the request meets set criteria, instead of waiting for a human to look at it and make an arbitrary decision. Or a knowledge base of common issues that helps people fix problems on their own without talking to a person. Both are much faster and more repeatable.
Quetzalcutlass@lemmy.world 8 hours ago
I know Valve wants to remain a small-ish company, but automating support has literally never improved things. It’s worse if it’s tied into their anti-cheat - a false positive can lock you and your entire family out of multiplayer, and good luck getting a human to overturn it after they’ve mostly been moved to other teams.
I’d say it’s weird they didn’t focus on using this to boost their nearly nonexistent community moderation, but I’ve been told their hands-off approach is deliberate due to a libertarian bent among the higher ups.
False@lemmy.world 6 hours ago
Agent_Karyo@piefed.world 2 hours ago
But this is not viable for every use case. If there is a major issue with my bank account, I want to speak to a person, period.
Having automated workflows for specific actions is of course a good thing.
Documentation is also good, but it often doesn’t account for edge cases or your unique situation. Not to mention, the majority of the public is not going to have the desire to deal with documentation.
ampersandrew@lemmy.world 8 hours ago
They improved their support ticket throughput by orders of magnitude by automating a lot of it already. There are lots of versions of automation, too, like collecting information about the user’s problem before you even get to a human.
Quetzalcutlass@lemmy.world 8 hours ago
Right, but there’s a difference between automating a refund if they can detect the purchase happened in the last two weeks and has less than two hours of playtime, versus complex support problems being handled by an LLM that can be misled or hallucinate.
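The rule-based version of that refund check is trivially codifiable, which is exactly why it’s safe to automate. A minimal sketch (names and structure are illustrative, not Valve’s actual logic; only the two-weeks/two-hours thresholds come from the comment above):

```python
# Hypothetical sketch of the rule-based refund check described above.
# Function name and signature are made up for illustration.
from datetime import datetime, timedelta

def is_auto_refundable(purchase_date: datetime,
                       playtime_hours: float,
                       now: datetime) -> bool:
    """Auto-approve only if the purchase is under two weeks old and the
    game has under two hours of playtime; anything else would need to
    escalate to a human."""
    recent_enough = (now - purchase_date) <= timedelta(weeks=2)
    barely_played = playtime_hours < 2.0
    return recent_enough and barely_played
```

The point is that every branch here is deterministic and auditable, unlike an LLM judgment call.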
I suppose it’s fine if it’s limited to giving advice on solving the problem and has to escalate to a human if any server side action is required, but it being tied to anti-cheat has me worried that’s not the case.
ColeSloth@discuss.tchncs.de 1 hour ago
The nonexistent community moderation is by design and on purpose. Valve wants it that way; they refuse to be gatekeepers of any sort.
Squizzy@lemmy.world 2 hours ago
Their support staff are always being commended, so this seems odd to me.
At the same time, they allow Russian war crime simulators.
cybervseas@lemmy.world 8 hours ago
I think it could have been an interesting use case to chat with a Steam bot to get game recommendations.
sp3ctr4l@lemmy.dbzer0.com 5 hours ago
This is not meant to be a chatbot.
It is meant to evaluate gaming sessions of CS2.
It’s an experimental prototype for improving VAC’s server-side, backend analysis capabilities, to better detect cheaters and hackers.
You don’t need kernel-level access into everyone’s PCs.
You can run analytics on what the server records as happening in the game session, to detect odd patterns and things that should be impossible.
LLMs are … the entire thing that they do is handle massive inputs of data, and then evaluate that data.
The part of an LLM that generates a response, in text form, to that data, is a whole other thing.
They can also output… code, or spreadsheets, or images, or 3d models, or… any other kind of data.
Like say, a printout of suspicious activity in a game session, with statistically derived confidence intervals and timestamps and analysis.
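The kind of server-side check being described can be sketched very simply. Here’s an illustrative example (not Valve’s actual VAC logic; the tick rate is real for CS2, but the threshold and function are assumptions for demonstration) that flags aim rotations faster than is humanly plausible between consecutive server ticks:

```python
# Illustrative sketch of server-side anomaly detection on recorded
# session data -- NOT actual VAC code. Threshold is a made-up value.
TICK_RATE = 64                  # CS2 servers run at 64 ticks per second
MAX_HUMAN_DEG_PER_TICK = 40.0   # assumed plausibility threshold

def flag_impossible_flicks(yaw_per_tick: list[float]) -> list[int]:
    """Return tick indices where the recorded yaw angle jumped more than
    the threshold in a single tick -- the 'things that should be
    impossible' pattern mentioned above."""
    suspicious = []
    for tick in range(1, len(yaw_per_tick)):
        delta = abs(yaw_per_tick[tick] - yaw_per_tick[tick - 1]) % 360
        delta = min(delta, 360 - delta)  # shortest rotation direction
        if delta > MAX_HUMAN_DEG_PER_TICK:
            suspicious.append(tick)
    return suspicious
```

A real system would use statistical models rather than a fixed cutoff, but it operates on the same server-recorded data, with no client-side kernel access required.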
cybervseas@lemmy.world 2 hours ago
Ah interesting. More along the line of those ML-based intrusion detection products.
FartMaster69@lemmy.dbzer0.com 7 hours ago
I’d rather cut out my eyes than talk to a robot about my steam library.
ampersandrew@lemmy.world 7 hours ago
I definitely value my eyes more than you do.
Quetzalcutlass@lemmy.world 8 hours ago
Their current recommendation engine is already a marvel and the only one I’ve ever come across that actually directs me to stuff I might be interested in.
Luminous5481@anarchist.nexus 7 hours ago
with the amount of information they collect on their customers, it better be damn good. honestly, the only reason it’s not a huge privacy problem is because they zealously guard that data to protect their near monopoly on PC gaming.
Gabe has been pandering to gamers and mostly giving us what we want, but when he dies, we better hope the next dude in charge isn’t some corporate suit who only cares about maximizing profits in every way they can, or the enshittification of Steam is going to really fucking hurt. imagine if Valve was run like Microsoft. for example, the next guy might cut a deal with Microsoft to stop supporting Proton.
Godort@lemmy.ca 8 hours ago
One thing Valve is known for is testing things. They typically make sure technology works before rolling it out everywhere.
I’m willing to bet that they have either solved most of the problems a tool like this has by massively limiting its scope, or it never actually gets past a beta test phase.
warmaster@lemmy.world 7 hours ago
This. They have explicitly said that they are experimenting with AI applications throughout the company, but that it is not a concerted effort. It’s a few devs wanting to try it and see whether it actually adds real value. That’s it.
sp3ctr4l@lemmy.dbzer0.com 5 hours ago
The file and class or function name or w/e literally has .proto in it.
As in prototype.
vulpivia@lemmy.dbzer0.com 5 hours ago
That’s because it’s a Protobuf file. Has nothing to do with prototypes.
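For reference, a `.proto` file is a schema definition for Protocol Buffers, Google’s serialization format, which Steam uses to describe its API messages. A minimal hypothetical example of what the format looks like (this is not the actual contents of `service_steamgpt.proto`):

```protobuf
// Illustrative proto3 schema only -- made-up message and service names.
syntax = "proto3";

message SessionReport {
  uint64 session_id = 1;
  repeated string flagged_events = 2;
}

service ExampleAnalysisService {
  rpc AnalyzeSession (SessionReport) returns (SessionReport);
}
```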
sp3ctr4l@lemmy.dbzer0.com 5 hours ago
Well you got me there
github.com/SteamTracking/…/ProtobufsWebui
There’s the directory with the file in the screenshots, service_steamgpt.proto, updated 4 days ago along with a number of others.
I am uncertain if this … basically scraping operation is tracking the main Steam client or the Beta or what.
There is not a very helpful description in the readme/project description of what exactly is being pulled here.