DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI
Submitted 2 days ago by misk@sopuli.xyz to technology@beehaw.org
Comments
hperrin@lemmy.ca 2 days ago
Turns out when you build your entire business on copyright infringement: (a) it's easy to steal your business, and (b) you have no recourse when someone does.
oce@jlai.lu 2 days ago
I've read a lot of tech bros saying what they did was easy because they (illegally?) used the ChatGPT API for part of their model training. But it seems this kind of performance actually means better engineering, doesn't it?
sculd@beehaw.org 2 days ago
This is what OpenAI wants you to think, because OpenAI is burning money at an unprecedented rate and is still raising more. If DeepSeek is able to do what they do with a fraction of the money, the VCs and Microsoft will begin asking questions.
SineSwiper@discuss.tchncs.de 2 days ago
They are already asking questions. DeepSeek was a wake-up call.
codessh@lemmings.world 2 days ago
Sometimes I'm happy to be able to say that I'm not surprised by a piece of news, and for once it's not in a political-terror/economic-destruction/environmental-eradication way.
Linktank@lemmy.today 2 days ago
Okay, can somebody who knows about this stuff please explain what the hell a “token per second” means?
IndeterminateName@beehaw.org 2 days ago
A bit like a syllable, when you're talking about text-based responses. 20 tokens a second is faster than most people can read the output, so that's sufficient for a real-time-feeling "chat".
SteevyT@beehaw.org 2 days ago
Huh, yeah, that actually is above my reading speed, assuming 1 token = 1 word. Although I've found that anything above 100 words per minute, while slow to read, feels real-time to me, since that's about the absolute top end of what most people type.
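Rough back-of-the-envelope in Python, assuming the common ~0.75 English words per token heuristic (my assumption; the real ratio varies by tokenizer and language):

```python
# Rough estimate: how 20 tokens/s compares to human reading speed.
tokens_per_second = 20
words_per_token = 0.75  # common rule of thumb, not exact

words_per_minute = tokens_per_second * words_per_token * 60
print(f"{words_per_minute:.0f} words/min")  # 900 words/min

# Typical silent reading is around 200-300 words/min, so 20 tokens/s
# comfortably outpaces most readers.
```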
fluffykittycat@slrpnk.net 2 days ago
It's the generation speed. Internally, LLMs use tokens, which represent either words or parts of words, and map them to integer values. The model then does its prediction on which integer is most likely to come after the input. How the words are split up is an implementation detail that can vary from model to model.
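For example, here's roughly what that word-to-integer mapping looks like with OpenAI's open-source tiktoken tokenizer (assuming `pip install tiktoken`; other models use different vocabularies, but the idea is the same):

```python
import tiktoken

# Load the tokenizer used by GPT-3.5/GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

text = "DeepSeek runs at 20 tokens per second"
token_ids = enc.encode(text)  # text -> list of integers
print(token_ids)

# Round-trip: the integers map back to the original text.
print(enc.decode(token_ids))

# Common words are often a single token; rarer words split into pieces.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))
```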
IrritableOcelot@beehaw.org 2 days ago
Not somebody who knows a lot about this stuff, as I’m a bit of an AI Luddite, but I know just enough to answer this!
“Tokens” are essentially just a unit of work – instead of interacting directly with the user’s input, the model first “tokenizes” the user’s input, simplifying it down into a unit which the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, which are then expanded back into text or whatever the output of the model is.
I think tokens are used because most models use them, and use them in a similar way, so they’re the lowest-level common unit of work where you can compare across devices and models.
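A toy sketch of that loop, with a made-up lookup table standing in for the actual ML model (purely illustrative; a real model predicts the next token with a neural network over the whole context, but the outer loop has the same shape):

```python
# Hypothetical toy "model": a lookup table from the last token id to the
# most likely next token id.
TOY_VOCAB = {0: "<end>", 1: "tokens", 2: "are", 3: "just", 4: "integers"}
TOY_NEXT = {1: 2, 2: 3, 3: 4, 4: 0}

def predict_next(token_id: int) -> int:
    return TOY_NEXT.get(token_id, 0)

def generate(prompt_ids: list[int], max_tokens: int = 10) -> list[int]:
    out = list(prompt_ids)
    for _ in range(max_tokens):
        nxt = predict_next(out[-1])  # "which integer comes next?"
        if nxt == 0:                 # stop token
            break
        out.append(nxt)
    return out

ids = generate([1])  # start from the token "tokens"
print(" ".join(TOY_VOCAB[i] for i in ids))  # tokens are just integers
```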
Mniot@programming.dev 2 days ago
Not an answer to your question, but I thought this was a nice article for getting some basic grounding on the new AI stuff: arstechnica.com/…/a-jargon-free-explanation-of-ho…
morrowind@lemmy.ml 2 days ago
DeepSeek is an absolutely massive model; it's not the one people will be running locally. Rather, look at Qwen/QwQ, Gemma, and a number of other smaller ones.
ParetoOptimalDev@lemmy.today 2 days ago
No, people who want something approaching ChatGPT, but local, want to run at least DeepSeek V3 32B.
Qwen fares much worse for my usage, as does DeepSeek V3 under 32B.
morrowind@lemmy.ml 2 days ago
The hell is V3 32B? Are you talking about a distill?
Korhaka@sopuli.xyz 2 days ago
I run deepseek-r1:14b locally; though it needs to go into RAM and runs slower, it's still a reasonably good speed, and it keeps up with my reading. I should try a larger one at some point, but it's quite a bit to download when you get to the larger ones. I usually run the ~7B size, as that can fit in VRAM and runs way faster.
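If anyone wants to measure their own tokens/s, here's a quick sketch using the ollama Python client (assuming `pip install ollama`, a running Ollama server, and the model already pulled; `eval_count`/`eval_duration` are the fields Ollama's API reports for generation stats):

```python
import ollama  # pip install ollama; talks to a local Ollama server

# Assumes you've already pulled the model, e.g. `ollama pull deepseek-r1:14b`.
resp = ollama.generate(
    model="deepseek-r1:14b",
    prompt="Explain tokens per second in one paragraph.",
)

# Ollama reports how many tokens it generated and how long that took
# (eval_duration is in nanoseconds).
tokens = resp["eval_count"]
seconds = resp["eval_duration"] / 1e9
print(f"{tokens} tokens in {seconds:.1f}s = {tokens / seconds:.1f} tokens/s")
```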
Flax_vert@feddit.uk 2 days ago
Of course, the Chinese flag has to be in the article thumbnail.
knighthawk0811@lemmy.ml 2 days ago
Eventually we'll all be able to have an open-source AI that runs fine on a phone or any average device, and we'll have our privacy, and the big corps will lose their grip and hopefully collapse.
JoMiran@lemmy.ml 2 days ago
[image]
KeenFlame@feddit.nu 1 day ago
We already do. Can you fix it so the logical conclusion happens, pls?
hperrin@lemmy.ca 2 days ago
It depends on how much they've got to offer beyond AI. If the only thing they offer is AI (like OpenAI), then yeah, they're in trouble.