Comment on Chinese AI lab DeepSeek massively undercuts OpenAI on pricing — and that's spooking tech stocks

MagicShel@lemmy.zip 3 days ago

You can look at the stats on how much of the model fits in VRAM. The lower the percentage, the slower it runs, although I imagine that's not the only constraint. Some models are probably faster than others regardless, but I really haven't done a lot of experimenting. It's too slow on my card to meaningfully compare output quality across models. Once I have 2k tokens in context, even a 7B model takes a second or more per token. I have about the slowest card that llama even says you can use; I think there's only one card worse.
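For anyone wanting to check this on their own setup, here's a rough sketch (assuming the llama-cpp-python bindings and a placeholder model path, both my own choices, not something from the thread) of how GPU layer offloading and token speed can be measured:

```python
# Minimal sketch: control how many layers are offloaded to VRAM
# and measure generation speed. Model path and layer count are placeholders.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="models/7b-q4.gguf",  # hypothetical path to a quantized 7B model
    n_gpu_layers=20,                 # layers offloaded to VRAM; more is faster, if they fit
    n_ctx=2048,                      # context window; longer contexts slow generation further
)

start = time.time()
out = llm("Explain VRAM offloading in one sentence.", max_tokens=64)
elapsed = time.time() - start
n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.2f} tok/s")
```

Lowering n_gpu_layers until the model loads without out-of-memory errors gives a rough sense of how much of it actually fits in VRAM on a given card.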
