This is not meant to be a chatbot.
It is meant to evaluate gaming sessions of CS2.
It's an experimental prototype for improving VAC's server-side, backend analysis capabilities, to better detect cheaters and hackers.
You don't need kernel-level access into everyone's PCs.
You can run analytics on what the server records as happening in the game session, to detect odd patterns and things that should be impossible.
LLMs are… the entire thing they do is take in massive inputs of data and then evaluate that data.
The part of an LLM that generates a response, in text form, to that data is a whole other thing.
They can also output… code, or spreadsheets, or images, or 3d models, or… any other kind of data.
Like, say, a printout of suspicious activity in a game session, with statistically derived confidence intervals, timestamps, and analysis.
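The server-side analytics idea above can be sketched without any LLM at all. This is a minimal, hypothetical example: the tick rate, the "impossible" flick threshold, and the sample yaw telemetry are all made-up assumptions, not real VAC logic or real CS2 data. It flags view-angle snaps that exceed a plausible human rotation speed and reports them with timestamps and a z-score as a rough confidence measure.

```python
# Hypothetical sketch of server-side anomaly detection on session telemetry.
# TICK_RATE and MAX_HUMAN_DEG_PER_SEC are illustrative assumptions.
from statistics import mean, stdev

TICK_RATE = 64                 # assumed server ticks per second
MAX_HUMAN_DEG_PER_SEC = 5000   # assumed "should be impossible" flick speed

def flick_speeds(yaw_samples):
    """View rotation speed (degrees/second) between consecutive ticks."""
    speeds = []
    for a, b in zip(yaw_samples, yaw_samples[1:]):
        delta = abs(b - a) % 360
        delta = min(delta, 360 - delta)  # shortest rotation around the circle
        speeds.append(delta * TICK_RATE)
    return speeds

def suspicious_events(yaw_samples):
    """Return (tick, speed, z_score) for ticks exceeding the threshold."""
    speeds = flick_speeds(yaw_samples)
    mu, sigma = mean(speeds), stdev(speeds)
    flagged = []
    for tick, s in enumerate(speeds, start=1):
        if s > MAX_HUMAN_DEG_PER_SEC:
            z = (s - mu) / sigma if sigma else float("inf")
            flagged.append((tick, s, z))
    return flagged

# Fabricated telemetry: mostly human-like movement, plus one
# instantaneous 180-degree snap at tick 5.
yaws = [10.0, 12.0, 11.5, 13.0, 12.5, 192.5, 193.0, 191.0]
for tick, speed, z in suspicious_events(yaws):
    print(f"t={tick / TICK_RATE:.3f}s tick={tick}: "
          f"{speed:.0f} deg/s (z={z:.1f})")
```

A real system would obviously look at far more signals (hit rates through smoke, pre-aiming through walls, reaction times), but the shape is the same: statistics over what the server already records, no client-side access needed.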
cybervseas@lemmy.world 4 weeks ago
Ah interesting. More along the line of those ML-based intrusion detection products.
sp3ctr4l@lemmy.dbzer0.com 4 weeks ago
I can still hardly believe that the tech industry at large just decided to broadly roll out LLM integration into essentially every element of their businesses, while having no real idea what LLMs actually do.
Like 2 years ago now, I was figuratively pulling my hair out, reading the discussion panel schedule for Microsoft led conferences on LLMs and cybersecurity.
Literally every topic was a different way that smashing an LLM into a complex business system increases potential failure points and broadens attack surfaces… because networked LLMs literally are security vulnerabilities.
Not a single topic about how to use LLMs defensively, how to use them to turbocharge malware recognition, nothing like that.
All just a bunch of ‘make sure you don’t do this!’ warnings, and then everyone did them anyway.