Google releases VaultGemma, its first privacy-preserving LLM
Submitted 14 hours ago by sabreW4K3@lazysoci.al to technology@beehaw.org
https://arstechnica.com/ai/2025/09/google-releases-vaultgemma-its-first-privacy-preserving-llm/
Comments
AmanitaCaesarea@slrpnk.net 12 hours ago
Google and privacy in the same sentence… Lol
HappyFrog@lemmy.blahaj.zone 2 hours ago
What do people use a 1B model for?
tal@olio.cafe 13 hours ago
> LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say.
I mean...they can have non-deterministic outputs. There's no requirement for that to be the case.
It might be desirable in some situations; randomness can be a tactic to help provide variety in a conversation. But it might be very undesirable in others: no matter how many times I ask "What is 1+1?", I usually want the same answer.
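A minimal sketch of the distinction, using a toy next-token distribution (the vocabulary and probabilities are made up for illustration): greedy decoding always picks the most likely token and is fully deterministic, while sampling draws proportionally to probability and can vary between runs.

```python
import random

# Hypothetical next-token distribution for the prompt "What is 1+1?"
probs = {"2": 0.90, "two": 0.07, "3": 0.03}

def greedy(dist):
    # Deterministic: always return the highest-probability token.
    return max(dist, key=dist.get)

def sample(dist, rng):
    # Stochastic: draw a token in proportion to its probability.
    return rng.choices(list(dist), weights=list(dist.values()), k=1)[0]

rng = random.Random()  # unseeded, so separate runs can differ

print(greedy(probs))       # always "2"
print(sample(probs, rng))  # usually "2", occasionally "two" or "3"
```

In LLM APIs this corresponds roughly to setting temperature to 0 (greedy) versus a positive temperature (sampling); seeding the RNG also makes the sampled path repeatable.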
kassiopaea@lemmy.blahaj.zone 22 minutes ago
In theory, it's just an algorithm that will always produce the same output given the exact same inputs. In practice it's nearly impossible to get bit-for-bit reproducible outputs, because floating-point arithmetic on GPUs isn't associative and the order of parallel reductions can vary between runs.
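The floating-point point is easy to demonstrate even on a CPU: addition of floats is not associative, so any change in the order a parallel reduction accumulates terms (as happens with GPU thread scheduling) can change the result.

```python
# Floating-point addition is not associative.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False

# Reordering a sum can change it dramatically when magnitudes differ:
# adding 1.0 to 1e16 is lost to rounding, but adding it after the
# large terms cancel is not.
print(sum([1.0, 1e16, -1e16]))  # 0.0
print(sum([1e16, -1e16, 1.0]))  # 1.0
```

The same effect at the scale of billions of accumulated multiply-adds is why identical prompts on identical weights can still diverge across hardware or kernel versions, even with greedy decoding.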
viral.vegabond@piefed.social 4 hours ago
'Press x to doubt'