Comment on [deleted]

PhilipTheBucket@ponder.cat 3 days ago

Wanting a better world, and holding up a light to the current one to show the differences between what could be and what is, is not at all what “cynical” means. “Cynical” is the opposite of what you mean. “Pessimistic” or “negative” is definitely more apt, yes.

Also:

Now, you’ve likely seen or heard that DeepSeek “trained its latest model for $5.6 million,” and I want to be clear that any and all mentions of this number are estimates. In fact, the provenance of the “$5.58 million” number appears to be a citation of a post made by NVIDIA engineer Jim Fan in an article from the South China Morning Post, which links to another article from the South China Morning Post, which simply states that “DeepSeek V3 comes with 671 billion parameters and was trained in around two months at a cost of US$5.58 million” with no additional citations of any kind. As such, take them with a pinch of salt.

While there are some who have estimated the cost (DeepSeek’s V3 model was allegedly trained using 2048 NVIDIA H800 GPUs, according to its paper), as Ben Thompson of Stratechery made clear, the “$5.5 million” number only covers the literal training costs of the official training run of V3 (and this is made fairly clear in the paper!), meaning that any costs related to prior research or experiments on how to build the model were left out.
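For a sense of where a figure like this can come from, here is a back-of-envelope sketch using only the numbers quoted above (2,048 H800 GPUs, training for “around two months”). The exact day count and the roughly $2-per-GPU-hour rental rate are assumptions of mine, not figures from the quote:

```python
# Back-of-envelope check on the "$5.6 million" training figure.
# Assumptions (mine, not the article's): ~57 days for "around two
# months", and a rental price of about $2 per GPU-hour.
gpus = 2048   # H800 GPUs, per the DeepSeek V3 paper
days = 57     # "around two months" of training
rate = 2.00   # assumed $/GPU-hour

gpu_hours = gpus * 24 * days
cost = gpu_hours * rate
print(f"{gpu_hours/1e6:.2f}M GPU-hours -> ${cost/1e6:.2f}M")
# prints: 2.80M GPU-hours -> $5.60M
```

Which lands right around the quoted figure, and also makes Thompson's point concrete: this is the rental bill for one run, with nothing in it for the research, failed experiments, staff, or data that preceded it.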

While it’s safe to say that DeepSeek’s models are cheaper to train, the actual costs — especially as DeepSeek doesn’t share its training data, which some might argue means its models are not really open source — are a little harder to guess at. Nevertheless, Thompson (whom I, and a great many people in the tech industry, deeply respect) lays out in detail how the specific way that DeepSeek describes training its models suggests that it was working around the constrained memory of the NVIDIA GPUs sold to China (where NVIDIA is prevented by US export controls from selling its most capable hardware over fears it will help advance the country’s military development):

Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you actually did the math on the previous question, you would realize that DeepSeek actually had an excess of computing; that’s because DeepSeek actually programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is actually impossible to do in CUDA. DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense using H800s.
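A quick bit of arithmetic on the quote's own numbers (20 of the 132 processing units per H800 reserved for cross-chip communication) shows how much raw compute that trade sets aside:

```python
# Per the quote, DeepSeek dedicated 20 of the 132 processing units
# (streaming multiprocessors, SMs) on each H800 to cross-chip
# communication, leaving the rest free for actual computation.
total_sms = 132
comm_sms = 20

compute_sms = total_sms - comm_sms
reserved = comm_sms / total_sms
print(f"{compute_sms} SMs for compute, {reserved:.1%} set aside")
# prints: 112 SMs for compute, 15.2% set aside
```

Giving up roughly 15% of each GPU only makes sense if, as Thompson says, compute wasn't the bottleneck — the constrained chip-to-chip bandwidth of the export-restricted H800 was.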

Tell me: what should I be reading instead, if I want to understand the details of this sort of thing, rather than that type of unhinged, pointless, totally uninformative rant about the tech industry?

source