Comment on [deleted]
theComposer@beehaw.org 3 days ago
But it’s not just that “they effectively trained their model using OpenAI’s model”. The point Ed goes on to make is: why hasn’t OpenAI done the same thing? The marvel of DeepSeek is how much more efficient it is, whereas Big Tech keeps insisting that they need ever bigger data centers.
masterspace@lemmy.ca 2 days ago
They HAVE done that. It’s one of the techniques they use to produce things like the o1-mini model and the other mini models that run on device.
But that’s not a valid technique for creating new foundation models, only for creating refined versions of existing models. You would never have been able to create, for instance, an o1-class model from ChatGPT 3.5 using distillation.
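For context, the distillation being discussed here means training a small student model to mimic a larger teacher's full output distribution rather than hard labels. A minimal sketch of the core loss (illustrative only — not OpenAI's or DeepSeek's actual pipeline, and real setups use temperature-scaled logits over huge vocabularies):

```python
import math

def softmax(logits, temperature=1.0):
    # Soften logits with a temperature, then normalize to probabilities.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions:
    # the student is rewarded for matching the teacher's *entire*
    # probability distribution, not just its top-1 answer.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that mirrors the teacher's logits incurs ~zero loss;
# a student with a different ranking incurs a larger one.
teacher = [2.0, 1.0, 0.1]
matched = distillation_loss(teacher, [2.0, 1.0, 0.1])
mismatched = distillation_loss(teacher, [0.1, 1.0, 2.0])
```

This is also why the ceiling argument above holds: the student's target is the teacher's distribution, so the student can approach but not exceed the teacher's capability.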