Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash
Vlyn@lemmy.zip 3 weeks ago
The training is sophisticated, but inference really is just a text prediction machine. Technically token prediction, but you get the idea.
It works this way for every single token/word: you input your system prompt, context, and user input, then the output starts.
The
Feed the entire context back in and add the reply “The” at the end.
The capital
Feed everything in again with “The capital”
The capital of
Feed everything in again…
The capital of Austria
…
It literally works like that, which sounds crazy :)
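The loop described above can be sketched in a few lines of plain Python. This is a toy illustration, not a real model: the hypothetical `next_token_probs` stands in for an actual forward pass, and the hard-coded lookup table just makes the "capital of Austria" example runnable.

```python
def next_token_probs(context):
    # A real model scores every token in its vocabulary given the full
    # context; here a tiny lookup table plays that role (an assumption
    # purely for illustration).
    table = {
        ("The",): {"capital": 1.0},
        ("The", "capital"): {"of": 1.0},
        ("The", "capital", "of"): {"Austria": 1.0},
        ("The", "capital", "of", "Austria"): {"<eos>": 1.0},
    }
    return table.get(tuple(context), {"<eos>": 1.0})

def generate(prompt_tokens, max_new=10):
    context = list(prompt_tokens)
    for _ in range(max_new):
        # The ENTIRE context goes back in on every step...
        probs = next_token_probs(context)
        token = max(probs, key=probs.get)  # greedy pick, for simplicity
        if token == "<eos>":
            break
        context.append(token)  # ...plus the token that was just produced.
    return context

print(generate(["The"]))  # → ['The', 'capital', 'of', 'Austria']
```

Each iteration re-submits everything generated so far, exactly as the step-by-step example above shows.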
The only control you as a user have is over the sampling: temperature, top-k and so on. But that just softens or randomizes how deterministic the output is.
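Those sampling knobs are simple to sketch. Here is a minimal, self-contained version of temperature plus top-k sampling over a `{token: logit}` dict — the logit values are made up for the example:

```python
import math
import random

def sample(logits, temperature=1.0, top_k=None):
    """Temperature + top-k sampling over {token: logit} (a sketch)."""
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]  # keep only the k highest-scoring tokens
    scaled = [v / temperature for _, v in items]  # temperature < 1 sharpens
    m = max(scaled)
    exps = [math.exp(v - m) for v in scaled]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices([t for t, _ in items], weights=weights, k=1)[0]

logits = {"Vienna": 5.0, "Graz": 2.0, "Linz": 1.0, "banana": -4.0}
print(sample(logits, temperature=0.7, top_k=3))  # "banana" is cut by top_k
```

Low temperature pushes the distribution toward the single most likely token; top-k simply discards the long tail before sampling.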
dream_weasel@sh.itjust.works 2 weeks ago
Unless that’s how people are designing front ends for models, it literally DOESN’T work like that. It works like that until you finish training an embedding model on masking-related tasks, but that’s the tip of the iceberg. At inference, the input vector, after being tokenized, is ingested wholesale in a single pass. There’s sometimes funny business to manage the size of a context window effectively, but this isn’t that unless you’re home-rolling and caching your own inputs or something before you give them to the model.
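The caching this comment alludes to is, in real serving stacks, usually a KV cache: the per-token attention keys/values from earlier steps are kept around, so only the newest token gets processed rather than the whole context being recomputed. A minimal sketch of that idea, with a hypothetical `expensive_kv` standing in for a transformer layer's key/value projection:

```python
class KVCache:
    """Sketch of the KV-cache idea: reuse work done for the cached prefix."""

    def __init__(self):
        self.keys, self.values = [], []
        self.computed = 0  # counts how many tokens were actually processed

    def expensive_kv(self, token):
        # Placeholder for the real (costly) per-token key/value projection.
        self.computed += 1
        return hash(token) & 0xFF, (hash(token) >> 8) & 0xFF

    def extend(self, tokens):
        # Only tokens beyond the already-cached prefix are processed.
        for token in tokens[len(self.keys):]:
            k, v = self.expensive_kv(token)
            self.keys.append(k)
            self.values.append(v)

cache = KVCache()
cache.extend(["The", "capital", "of"])             # 3 tokens computed
cache.extend(["The", "capital", "of", "Austria"])  # only 1 more computed
print(cache.computed)  # → 4, not 7
```

So the generation is still token-by-token conceptually, but the "feed everything back in" part is amortized: the prefix isn't recomputed from scratch on every step.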