Comment on Epic’s AI Darth Vader tech is about to be all over Fortnite
MagicShel@lemmy.zip 2 days ago
I'm going to bet your video card uses more energy than the AI while you play the game.
theangriestbird@beehaw.org 2 days ago
that would be a safe bet given that none of these AI companies disclose their actual energy usage, so you would never have to pay out that bet because we would never find out if you were right.
What we do know is that generating a single text response on the largest open source AI models takes about 6,500 joules, and that's not counting the exorbitant energy cost of training the model. We know that most of the closed source models are way more complicated, so let's say they take 3 times the cost to generate a response. That's 19,500 joules. Generating an AI voice to speak the lines increases that energy cost substantially. MIT found that generating a grainy, five-second video at 8 frames per second on an open source model took about 109,000 joules.
My 3080ti is 350W - if I played a single half-hour match of Fortnite, my GPU would use about 630,000 joules (and that’s assuming my GPU is running at max capacity the entire time, which never happens). Epic’s AI voice model is pretty high quality, so let’s estimate that the cost of a single AI voice response is about 100,000 joules, similar to the low quality video generation mentioned above. If these estimates are close, this means that if I ask Fortnite Darth Vader just 7 questions, the AI has cost more energy than my GPU does while playing the game on max settings.
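Taking the figures above at face value (the 350 W board power, the assumed 100,000 J per voice line, and a fully loaded GPU for the whole match are all the commenter's estimates, not measurements), the arithmetic checks out as a quick sketch:

```python
# Back-of-envelope check of the half-hour-match comparison.
# All inputs are the thread's assumed figures, not measured values.
GPU_WATTS = 350            # 3080 Ti rated board power
MATCH_SECONDS = 30 * 60    # one half-hour match
VOICE_JOULES = 100_000     # assumed energy per AI voice response

gpu_joules = GPU_WATTS * MATCH_SECONDS       # watts x seconds = joules
break_even_lines = gpu_joules / VOICE_JOULES

print(f"GPU energy per match: {gpu_joules:,} J")   # 630,000 J
print(f"Voice lines to match it: {break_even_lines}")  # 6.3
```

So under these assumptions the crossover is just over 6 voice lines, which is where the "7 questions" figure comes from.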
Even_Adder@lemmy.dbzer0.com 1 day ago
TTS models are tiny in comparison to LLMs. How does this track? The biggest I could find was Orpheus-TTS that comes in 3B/1B/400M/150M parameter sizes. They are not using a 600 billion parameter LLM to generate Vader’s responses, that is likely way too big. After generating the text, speech isn’t even a drop in the bucket.
You need to include parameter counts in your calculations. A lot of these assumptions are so wrong it borders on misinformation.
theangriestbird@beehaw.org 1 day ago
I will repeat what I said in another reply below: if the cost of running these closed source AI models was as negligible as you are suggesting, then these companies would be screaming it from the rooftops to get the stink of this energy usage story off their backs. AI is all investors and hype right now, which means the industry is extra vulnerable to negative stories. By staying silent, the AI companies are allowing people like me to make wild guesses at the numbers and possibly fear-monger with misinformation. They could shut up all the naysayers by simply releasing their numbers. The fact that they are still staying silent despite all the negative press suggests that the energy usage numbers are far worse than anyone is estimating.
Even_Adder@lemmy.dbzer0.com 1 day ago
This doesn’t mean you can misrepresent facts like this though. The line I quoted is misinformation, and you don’t know what you’re talking about. I’m sorry this sounds so aggressive, but it’s the only way I can phrase it.
MagicShel@lemmy.zip 2 days ago
This is completely arbitrary and supposition. Is it 3x a "regular" response? I have no idea. How do you even arrive at that guess? Is a more complex prompt exponentially more expensive? Linearly? Logarithmically? And how complex are we talking when system prompts themselves can be 10k tokens?
Why did you go from voice gen to video gen? I mean I don't know whether video gen takes more joules or not, but there's no actual connection here. You just decided that a line of audio gen is equivalent to 40 frames of video. What if they generate the text and then use conventional voice synthesizers? And what does that have to do with video gen?
Who even knows, mate? You’ve been completely fucking arbitrary and, shocker, your analysis supports your supposition, kinda. How many Vader lines are you going to get in 30 minutes? When it’s brand new probably a lot, but after the luster wears off?
I’m not even telling you you’re wrong, just that your methodology here is complete fucking bullshit.
It could be as low as 6,500 joules (based on your statement), which changes the calculus to roughly 97 lines per half hour. Is it that low? Probably not, but that is every bit as valid as your math, and I'm even using your numbers without double checking you.
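For what it's worth, that low-end figure works out like this (reusing the 350 W / half-hour GPU budget from the earlier reply, and the 6,500 J per-response figure that reply cited for open models):

```python
# Same half-hour GPU energy budget, but at the low-end estimate:
# the 6,500 J open-model text-generation figure from the thread.
gpu_joules = 350 * 30 * 60    # 630,000 J for a half-hour match at 350 W
per_line_joules = 6_500       # low-end assumed cost per Vader line

lines_to_break_even = gpu_joules / per_line_joules
print(round(lines_to_break_even))  # ~97 lines before the AI catches the GPU
```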
At the end of the day maybe I lose the bet. Fair. I've been wondering for a bit how they actually stack up, and I'm willing to be shown. But I suspect using it for piddly shit day to day is a drop in the bucket compared to all the mass corporate spam. I'm aware it's nothing but a hypothesis and I'm willing to be proven wrong. But not based on this.
theangriestbird@beehaw.org 1 day ago
It is, that’s the point. We don’t know because the AI companies are intentionally hiding that detail. My estimates are based on the real numbers we do have, and all we know about the closed source models is that they contain more parameters than the open source models, and more parameters = more energy use.
When I started adding multipliers to take a stab at the numbers, I was being conservative. A single AI voice response almost certainly takes more than 6,500 joules; we just don't know how much more. It's not that much of a stretch to assume that a voice generation is somewhere halfway between a text generation and a video generation. If my numbers were accurate, that would actually be great news for the AI companies. They would be shouting these numbers from the fucking rooftops to get the stink of this energy usage story off their backs. Corporations never disclose anything unless it is good news. Their silence says everything - if we were actually betting, I would gladly bet that my single video card uses way less energy than their data centers packed to the brim with higher-end GPUs. It's just a no-brainer.
MagicShel@lemmy.zip 1 day ago
What I said was I’ll bet one person uses more power running the game than the AI uses to respond to them. Just that.
Then you started inventing scenarios and moving the goalposts, comparing one single video card to an entire data center. I guess because you didn't want to let my statement go unchallenged, but you had nothing solid to back you up. You're the one that posted 6,500 joules, which you supported, and I appreciate that, but after that it's all just supposition and guesses.
You’re right that it’s almost certainly higher than that. But I can generate text and images on my home PC. Not at the quality and speed of OpenAI or whatever they have on the back-end, but it can be done on my 1660. So my suggestion that running a 3D game consumes more power than generating a few lines seems pretty reasonable.
But I know someone who works for a company that has an A100 used for serving AI. I’ll ask and see if he has more information or even a better-educated guess than I do, and if I find out I’m wrong, I won’t suggest otherwise in the future.