“T-BUFFER! MOTION BLUR! External power supplies! Wait, why isn’t anyone buying this?”
inclementimmigrant@lemmy.world 1 month ago
This is absolutely 3dfx-level screwing over of consumers, and it's all about faking frames to get their "performance" numbers.
Breve@pawb.social 1 month ago
They aren’t making graphics cards anymore, they’re making AI processors that happen to do graphics using AI.
Knock_Knock_Lemmy_In@lemmy.world 1 month ago
What if I’m buying a graphics card to run Flux or an LLM locally. Aren’t these cards good for those use cases?
Breve@pawb.social 1 month ago
Oh yeah, for sure. I've run Llama 3.2 on my RTX 4080 and it struggles, but it's not obnoxiously slow. I think they are betting more software will ship with integrated LLMs that run locally on users' PCs instead of relying on cloud compute.
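For anyone curious to try this themselves, a minimal sketch using Ollama (one common way to run Llama 3.2 locally; assumes Ollama and GPU drivers are already installed, and that `llama3.2` is the model tag you want, which on Ollama defaults to the 3B variant):

```shell
# Pull the Llama 3.2 weights (defaults to the 3B variant on Ollama)
ollama pull llama3.2

# Run a one-shot prompt; Ollama offloads layers to the GPU automatically
# when it detects a supported card such as an RTX 4080
ollama run llama3.2 "Why might software ship with a local LLM instead of calling a cloud API?"
```

Whether it "struggles" mostly comes down to VRAM: if the whole quantized model fits on the card, generation is fast; if layers spill to system RAM, it slows down noticeably.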
Fluffy_Ruffs@lemmy.world 1 month ago
Welcome to the future
daddy32@lemmy.world 1 month ago
Except you cannot use them for AI commercially, or at least not in a data center setting.
Breve@pawb.social 1 month ago
Data centres want the even beefier cards anyhow, but I think Nvidia envisions everyone running local LLMs on their PCs because the models will be integrated into software instead of relying on cloud compute. My RTX 4080 can struggle through Llama 3.2.