ampersandrew@lemmy.world 9 hours ago
I’d rather cut out my eyes than talk to a robot about my steam library.
FartMaster69@lemmy.dbzer0.com 9 hours ago
I think it could have been an interesting use case to chat with a Steam bot to get game recommendations.
ampersandrew@lemmy.world 9 hours ago
I definitely value my eyes more than you do.
Quetzalcutlass@lemmy.world 9 hours ago
Their current recommendation engine is already a marvel and the only one I’ve ever come across that actually directs me to stuff I might be interested in.
Luminous5481@anarchist.nexus 9 hours ago
with the amount of information they collect on their customers, it better be damn good. honestly, the only reason it’s not a huge privacy problem is because they zealously guard that data to protect their near monopoly on PC gaming.
Gabe has been pandering to gamers and mostly giving us what we want, but when he dies, we better hope the next dude in charge isn’t some corporate suit who only cares about maximizing profits in every way he can, or the enshittification of Steam is going to really fucking hurt. imagine if Valve was run like Microsoft. for example, the next guy might cut a deal with Microsoft to stop supporting Proton.
sp3ctr4l@lemmy.dbzer0.com 7 hours ago
This is not meant to be a chatbot.
It is meant to evaluate gaming sessions of CS2.
It’s an experimental prototype for improving VAC’s server-side, backend analysis capabilities, to better detect cheaters and hackers.
You don’t need kernel-level access into everyone’s PCs.
You can run analytics on what the server records as happening in the game session, to detect odd patterns and things that should be impossible.
LLMs are… the entire thing they do is ingest massive amounts of data and then evaluate that data.
The part of an LLM that generates a response, in text form, to that data, is a whole other thing.
They can also output… code, or spreadsheets, or images, or 3d models, or… any other kind of data.
Like say, a printout of suspicious activity in a game session, with statistically derived confidence intervals and timestamps and analysis.
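To make that concrete, here is a toy sketch of the kind of server-side analytics being described: scanning server-recorded tick data for movements that should be humanly impossible. Every field name and threshold here is invented for illustration; this is not Valve's actual telemetry schema or VAC's real detection logic.

```python
# Toy sketch of server-side anomaly detection on recorded session data.
# All field names ('tick', 'yaw', 'fired', 'hit_head') and the threshold
# below are hypothetical, chosen only to illustrate the idea.

# A turn of more than ~40 degrees within a single server tick,
# immediately followed by a headshot, is treated as implausible.
MAX_DEG_PER_TICK = 40.0

def yaw_delta(a, b):
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def flag_snap_kills(ticks):
    """ticks: list of per-tick dicts recorded by the server.
    Returns the tick numbers where an implausible snap preceded a headshot."""
    flagged = []
    for prev, cur in zip(ticks, ticks[1:]):
        turn = yaw_delta(prev["yaw"], cur["yaw"])
        if cur["fired"] and cur["hit_head"] and turn > MAX_DEG_PER_TICK:
            flagged.append(cur["tick"])
    return flagged

session = [
    {"tick": 100, "yaw": 10.0,  "fired": False, "hit_head": False},
    {"tick": 101, "yaw": 12.0,  "fired": True,  "hit_head": False},  # normal tracking
    {"tick": 102, "yaw": 175.0, "fired": True,  "hit_head": True},   # 163-degree snap
]

print(flag_snap_kills(session))  # -> [102]
```

A real system would work statistically over many sessions rather than on single hard thresholds, which is where the LLM-style bulk data evaluation described above would come in.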
cybervseas@lemmy.world 3 hours ago
Ah interesting. More along the line of those ML-based intrusion detection products.
sp3ctr4l@lemmy.dbzer0.com 1 hour ago
I can still hardly believe that the tech industry at large just decided to broadly roll out LLM integration into essentially every element of their businesses, having just no idea what they actually do.
Like 2 years ago now, I was figuratively pulling my hair out, reading the discussion panel schedule for Microsoft led conferences on LLMs and cybersecurity.
Literally every topic was a different kind of way that smashing an LLM into a complex business system… increases potential failure points, broadens attack surfaces… because networked LLMs literally are security vulnerabilities.
Not a single topic about how to use LLMs defensively, how to use them to turbocharge malware recognition, nothing like that.
All just a bunch of ‘make sure you don’t do this!’ warnings, and then everyone did them anyway.