Comment on DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI

oce@jlai.lu 3 days ago

I've read a lot of tech bros saying what they did was easy because they (illegally?) used the ChatGPT API for part of their model training. But this kind of performance seems to point to better engineering, doesn't it?
