Comment on New AI model can hallucinate a game of 1993’s Doom in real time
Even_Adder@lemmy.dbzer0.com 10 months ago
There are more forms of guidance than just raw words. Just off the top of my head, there’s inpainting, outpainting, controlnets, prompt editing, and embeddings. The researchers who pulled this off definitely didn’t do it with text prompts.
MentalEdge@sopuli.xyz 10 months ago
Obviously.
But at what point does that guidance just become the dataset you removed?
Even_Adder@lemmy.dbzer0.com 10 months ago
The whole point is that it didn’t know the concepts beforehand, and no, it doesn’t become the dataset. Observations made of the training data are encoded into the model’s weights during training; after that, the weights are locked in and the dataset itself is never consulted again.
Or you could train a more general model. These things happen in steps, research is a process.
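To make the weights-vs-dataset distinction concrete, here is a toy sketch (my own illustration, nothing to do with the actual diffusion model in the article): a one-parameter model fit by gradient descent. The dataset only influences the weight during training; once training ends, inference runs from the frozen weight alone and the data can be thrown away.

```python
def train(dataset, epochs=100, lr=0.1):
    """Fit y = w * x by per-sample gradient descent on squared error."""
    w = 0.0  # the model's single weight
    for _ in range(epochs):
        for x, y in dataset:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w  # training is over: the weight is now "locked in"

def predict(w, x):
    # Inference needs only the trained weight, not the dataset.
    return w * x

dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = train(dataset)
del dataset  # the model still works with the training data gone
print(round(predict(w, 5.0), 3))  # learns w ≈ 2, so prints 10.0
```

The same holds at scale: a trained network is just a fixed set of numbers, and deleting the training corpus afterwards changes nothing about what it outputs.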
MentalEdge@sopuli.xyz 10 months ago
You are completely missing what I’m saying.
Even_Adder@lemmy.dbzer0.com 10 months ago
What kind of creativity are you talking about then? I’ve also never heard of a bloated model. Which models are bloated?