I'm going to bet your video card uses more energy than the AI while you play the game.
Comment on Epic’s AI Darth Vader tech is about to be all over Fortnite
theangriestbird@beehaw.org 2 days ago
Like honestly, on paper this sounds so cool. If you told me in 2015 that in 10 years' time I would be able to play online games with NPC allies that chat with me kinda like real people, I would be super excited about that (and maybe just a little unsettled).
Of course, in practice, I hate this. I don’t care how cool the tech is, the energy cost of running this tech for even half of Fortnite’s active daily users on a daily basis must be eyewatering. No LLM tech in videogames is worth cooking the planet over. And we all know that the tech companies want utility customers to help foot the bill for these moronic uses of energy, whether we like it or not.
So ultimately: fuck Epic and anyone else trying to use this LLM tech for anything other than life-or-death situations.
MagicShel@lemmy.zip 2 days ago
theangriestbird@beehaw.org 2 days ago
that would be a safe bet given that none of these AI companies disclose their actual energy usage, so you would never have to pay out that bet because we would never find out if you were right.
What we do know is that generating a single text response on the largest open source AI models takes about 6,500 joules, if you don’t include the exorbitant energy cost of training the model. We know that most of the closed source models are way more complicated, so let’s say they take 3 times the cost to generate a response. That’s 19,500 joules. Generating an AI voice to speak the lines increases that energy cost exponentially. MIT found that generating a grainy, five-second video at 8 frames per second on an open source model took about 109,000 joules.
My 3080 Ti is 350W - if I played a single half-hour match of Fortnite, my GPU would use about 630,000 joules (and that's assuming my GPU is running at max capacity the entire time, which never happens). Epic's AI voice model is pretty high quality, so let's estimate that the cost of a single AI voice response is about 100,000 joules, similar to the low-quality video generation mentioned above. If these estimates are close, this means that if I ask Fortnite Darth Vader just 7 questions, the AI has used more energy than my GPU does playing the game on max settings.
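The arithmetic above can be checked with a quick script. Every constant here is an estimate taken straight from the comment (the 3x closed-model multiplier and the 100,000 J voice figure are guesses, not measurements):

```python
# Back-of-envelope check of the numbers in the comment above.
# All figures are the commenter's rough estimates, not measured values.

GPU_WATTS = 350            # 3080 Ti board power
MATCH_SECONDS = 30 * 60    # one half-hour match

gpu_joules = GPU_WATTS * MATCH_SECONDS   # energy at full load for a match
print(gpu_joules)                        # 630000

TEXT_RESPONSE_J = 6_500      # large open-source LLM, per the cited figure
CLOSED_MODEL_MULT = 3        # guessed multiplier for closed-source models
VOICE_RESPONSE_J = 100_000   # guessed, pegged near low-res video generation

print(TEXT_RESPONSE_J * CLOSED_MODEL_MULT)   # 19500

# How many voice responses equal one match of GPU time?
print(gpu_joules / VOICE_RESPONSE_J)         # 6.3 -> "about 7 questions"
```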
Even_Adder@lemmy.dbzer0.com 2 days ago
Generating an AI voice to speak the lines increases that energy cost exponentially.
TTS models are tiny in comparison to LLMs. How does this track? The biggest I could find was Orpheus-TTS that comes in 3B/1B/400M/150M parameter sizes. They are not using a 600 billion parameter LLM to generate Vader’s responses, that is likely way too big. After generating the text, speech isn’t even a drop in the bucket.
You need to include parameter counts in your calculations. A lot of these assumptions are so wrong it borders on misinformation.
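To illustrate the parameter-count point: if inference energy scales roughly linearly with active parameter count (a simplification that ignores token counts and architecture differences), a 3B TTS pass is tiny next to the LLM call. The 70B LLM size here is an assumed illustrative figure, not Epic's actual stack:

```python
# Sketch of the scaling argument: under a rough linear-in-parameters
# assumption, the TTS pass adds little on top of the text generation.
# Model sizes are illustrative assumptions.

LLM_PARAMS = 70e9       # assumed mid-size LLM handling the text response
TTS_PARAMS = 3e9        # Orpheus-TTS, largest size mentioned upthread
LLM_RESPONSE_J = 6_500  # per-response energy figure quoted upthread

# Estimated TTS energy under the linear-scaling assumption:
tts_joules = LLM_RESPONSE_J * (TTS_PARAMS / LLM_PARAMS)
print(round(tts_joules))  # 279 -- small next to the text generation itself
```

This is only one axis of the comparison (audio generation emits many more tokens than text), but it shows why model size has to enter the calculation somewhere.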
theangriestbird@beehaw.org 2 days ago
I will repeat what I said in another reply below: if the cost of running these closed source AI models was as negligible as you are suggesting, then these companies would be screaming it from the rooftops to get the stink of this energy usage story off their backs. AI is all investors and hype right now, which means the industry is extra vulnerable to negative stories. By staying silent, the AI companies are allowing people like me to make wild guesses at the numbers and possibly fear-monger with misinformation. They could shut up all the naysayers by simply releasing their numbers. The fact that they are still staying silent despite all the negative press suggests that the energy usage numbers are far worse than anyone is estimating.
MagicShel@lemmy.zip 2 days ago
We know that most of the closed source models are way more complicated, so let’s say they take 3 times the cost to generate a response.
This is completely arbitrary and supposition. Is it 3x a “regular” response? I have no idea. How do you even arrive at that guess? Is a more complex prompt exponentially more expensive? Linearly? Logarithmically? And how complex are we talking, when system prompts themselves can be 10k tokens?
Generating an AI voice to speak the lines increases that energy cost exponentially. MIT found that generating a grainy, five-second video at 8 frames per second on an open source model took about 109,000 joules
Why did you go from voice gen to video gen? I mean, I don’t know whether video gen takes more joules or not, but there’s no actual connection here. You just decided that a line of audio gen is equivalent to 40 frames of video. What if they generate the text and then use conventional voice synthesizers? And what does that have to do with video gen?
If these estimates are close
Who even knows, mate? You’ve been completely fucking arbitrary and, shocker, your analysis supports your supposition, kinda. How many Vader lines are you going to get in 30 minutes? When it’s brand new probably a lot, but after the luster wears off?
I’m not even telling you you’re wrong, just that your methodology here is complete fucking bullshit.
It could be as low as 6,500 joules (based on your statement), which changes the calculus to about 97 lines per half hour. Is it that low? Probably not, but that is every bit as valid as your math, and I’m even using your numbers without double checking you.
At the end of the day maybe I lose the bet. Fair. I’ve been wondering for a bit how they actually stack up, and I’m willing to be shown. But I suspect using it for piddly shit day to day is a drop in the bucket compared to all the mass corporate spam. But I’m aware it’s nothing but a hypothesis and I’m willing to be proven wrong. Just not based on this.
theangriestbird@beehaw.org 2 days ago
This is completely arbitrary and supposition
It is, that’s the point. We don’t know because the AI companies are intentionally hiding that detail. My estimates are based on the real numbers we do have, and all we know about the closed source models is that they contain more parameters than the open source models, and more parameters = more energy use.
When I started adding multipliers to take a stab at the numbers, I was being conservative. A single AI voice response almost certainly takes more than 6,500 joules; we just don’t know how much more. It’s not that much of a stretch to assume that a voice generation is somewhere halfway between a text generation and a video generation. If my numbers were accurate, that would actually be great news for the AI companies. They would be shouting these numbers from the fucking rooftops to get the stink of this energy usage story off their backs. Corporations never disclose anything unless it is good news. Their silence says everything - if we were actually betting, I would gladly bet that my single video card uses way less energy than their data centers packed to the brim with higher-end GPUs. It’s just a no-brainer.
Owlboi@lemm.ee 1 day ago
The energy usage of AI is grossly overestimated by people. Cooking the planet really should be your last concern. I’d worry more about a possible Skynet if they ever achieve sentient AGI, which at the rate it’s going is gonna be within the next 5 years.
DeathsEmbrace@lemm.ee 2 days ago
I disagree; the shareholders have never been happier. Who matters more to Epic Games? People or shareholders?
ADandHD@lemmy.sdf.org 2 days ago
I’m with you on the whole let’s-not-cook-the-planet thing.
It’d be nice if instead we had more focus on integrating lightweight and focused models that can run on local hardware. I think that will happen eventually.
theangriestbird@beehaw.org 2 days ago
yeah, if OpenAI and the rest were talking today about making the models more efficient, rather than focusing on making them more accurate, I would be way less of a luddite about this. If our energy grids were mostly made up of clean sources of energy, I would be way less of a luddite about this. But neither of these things are true, so I remain a luddite about AI.
Blisterexe@lemmy.zip 2 days ago
Actually, you might be surprised about the clean energy thing! Places like Quebec and France are already ~99% clean energy, and even Texas is like 50% there.
BmeBenji@lemm.ee 2 days ago
I’ve been relatively impressed that the image/emoji generation on my iPhone has all been done on my device. Every time I’ve used it, I’ve checked the “Apple Intelligence” server requests log and it’s always empty (until the one time I asked it to generate a photo memory).
My phone gets pretty hot instantly, then churns through 3% of the battery in a minute, but it’s still running on a local system with the local battery.
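For scale, that 3% battery drain can be converted to joules. The ~12.7 Wh battery capacity is an assumed figure for a recent iPhone, not something stated above:

```python
# Rough energy cost of the on-device generation described above:
# 3% of an assumed ~12.7 Wh phone battery.

BATTERY_WH = 12.7          # assumed iPhone battery capacity
JOULES_PER_WH = 3600       # 1 Wh = 3600 J

battery_j = BATTERY_WH * JOULES_PER_WH   # ~45,720 J of total capacity
generation_j = 0.03 * battery_j          # the observed 3% drain
print(round(generation_j))               # 1372
```

That lands in the same rough range as the per-response text-generation figures discussed upthread, which fits the point that local inference keeps the cost visible and bounded by one device.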