Comment on Google releases VaultGemma, its first privacy-preserving LLM
tal@olio.cafe 14 hours ago
> LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say.
I mean...they can have non-deterministic outputs. There's no requirement for that to be the case.
Randomness might be desirable in some situations, as a way to add variety to a conversation. But it can be very undesirable in others: no matter how many times I ask "What is 1+1?", I usually want the same answer.
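The knob here is the decoding strategy, not the model itself. A toy sketch with made-up logits (no real model, hypothetical token scores) shows the difference between greedy decoding, which is deterministic, and temperature sampling, which is not unless you seed it:

```python
import math
import random

def softmax(logits):
    # Standard numerically-stable softmax over a list of scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for the prompt "What is 1+1?"
logits = {"2": 5.0, "two": 2.0, "3": 0.1}

def greedy(logits):
    # Deterministic: always picks the highest-scoring token.
    return max(logits, key=logits.get)

def sample(logits, temperature=1.0, rng=random):
    # Stochastic: draws a token from the softmax distribution.
    tokens = list(logits)
    probs = softmax([logits[t] / temperature for t in tokens])
    return rng.choices(tokens, weights=probs, k=1)[0]

print(greedy(logits))   # always "2"
print(sample(logits))   # usually "2", occasionally something else
```

Seed the RNG (or set temperature toward zero) and the sampled path becomes repeatable too, which is why "LLMs are non-deterministic" is a deployment choice rather than a law.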
kassiopaea@lemmy.blahaj.zone 28 minutes ago
In theory, it’s just an algorithm that will always produce the same output given the exact same inputs. In practice, it’s nearly impossible to get fully deterministic outputs, because floating-point arithmetic isn’t associative and GPU kernels don’t guarantee a fixed order for parallel reductions, so the same sum can come out slightly different from run to run.
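The floating-point half of this is easy to demonstrate on a CPU, no GPU or model required (plain Python):

```python
import random

# Floating-point addition is not associative: grouping changes the result.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)   # False: the two groupings differ in the last bits

# A GPU reduction adds terms in an unspecified parallel order, so the
# "same" sum can vary between runs. Order alone is enough to see it:
random.seed(0)
xs = [random.random() for _ in range(100_000)]
fwd = sum(xs)
rev = sum(reversed(xs))
# fwd and rev are mathematically equal but typically differ in the low bits.
print(abs(fwd - rev))
```

Tiny last-bit differences like these can flip an argmax between two near-tied tokens, and from that point the generated sequences diverge completely.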