AI-powered NPCs are like a childhood dream come true. But I agree it would be better for them to use a model running on the user's system, or at the very least for the developer to host their own.
Pyro@pawb.social 3 days ago
Honestly, a small LLM in these situations would be a great idea, but it should be a very small local model, or one hosted by the company itself (with a setting to turn it off)
A small AI in games is the stuff I do want. But there is no reason Gemini needs to be involved in a game at all
epicshepich@programming.dev 3 days ago
Asafum@lemmy.world 3 days ago
I thought I was in the minority with this opinion. I hate all the known issues with AI and the ethics of how models are trained, but I have to say having an AI in a game is really really cool.
There was a time when (I think it was ChatGPT) had free API access, and the game SpaceBourne 2 integrated it into your ship's computer so you could interact with it. It was very cool, very wrong at times, but still very cool. My favorite interaction was unfortunately a hallucination: I asked it what system I was in and it gave me the name of a system that does exist in the game, it just wasn't where I was. I asked why my map said I was somewhere else and it said "your map must be incorrect" lol
ilinamorato@lemmy.world 2 days ago
Yeah, agreed. This is the sort of thing smaLLMs would be fantastic for: humans can't do it at scale so it's not taking any jobs, you can run it locally so it won't cost much extra energy, and it isn't churning out slop. Just give it a backstory and let it do its thing.
MyNameIsAtticus@lemmy.world 3 days ago
Make it a downloadable package that runs a local model and I'd be far more okay with it. Like, I think it's a tacky gimmick, but at least on-device it's not hurting the environment
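To make the "give it a backstory, run it on-device" idea concrete, here's a minimal sketch of how a game might assemble a compact prompt for a tiny local model. The NPC name, backstory, and function are all hypothetical, and the actual inference call is left out since it depends on whatever local runtime you'd ship:

```python
def build_npc_prompt(name: str, backstory: str, player_line: str) -> str:
    """Assemble a compact prompt for a small local model.

    Keeping the prompt short matters on-device: a tiny model with a
    small context window can't afford a sprawling system prompt.
    """
    return (
        f"You are {name}, an NPC in a fantasy game.\n"
        f"Backstory: {backstory}\n"
        "Stay in character and answer in one or two sentences.\n"
        f"Player: {player_line}\n"
        f"{name}:"
    )

# Hypothetical NPC; the returned string would be fed to the local model.
prompt = build_npc_prompt(
    "Mira the Blacksmith",
    "Lost her forge in the war; distrusts mages.",
    "Can you repair my sword?",
)
print(prompt)
```

The instruction to answer in one or two sentences doubles as a latency cap: on weak hardware you'd also want a hard token limit on the reply.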
titanicx@lemmy.zip 1 day ago
I mean, considering this is already an MMO, most files reside on the server you're logged into, with only a small amount of local files being cached for graphics and things like that. So essentially this isn't a bad idea at all, and it's probably one of the few uses of AI that I could see. That being said, Gemini overall is such a shitty AI assistant already that I have no doubt a virtual AI assistant using Gemini in a video game would be just as bad.
MirrorGiraffe@piefed.social 2 days ago
I'm not too well versed in these topics and would like to understand: is a local model less resource-intensive?
In my mind, if every gamer runs their own model, that must be less efficient than a centralised one with the perfect hardware setup that only lends out the resources needed for each slime or whatever.
I get that a dedicated slime model would of course be better than the entire Gemini monster, but why is local better?
Cethin@lemmy.zip 2 days ago
I don’t know, but I’m willing to bet that economies of scale actually mean data centers are more efficient. This isn’t to say their use is justified, just that they’re able to take advantage of things a home computer can’t.
However, having to run it locally means it needs to be much more limited. This is doubly true if you want to run the game and the LLM at the same time. The LLM is easily able to consume all resources your system has available if you allow it to, which means the game won’t run well (if it runs at all). This limits the use so it can’t just be shoved everywhere and constantly running, like it could if it’s sent to a data center. It’s not more efficient, just less consumption.
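That trade-off, capping the model so the game keeps its frame budget, is roughly what local inference tuning looks like in practice. A sketch of the idea (the field names mirror common llama.cpp-style options, but the exact knobs and numbers here are illustrative assumptions, not a real runtime's API):

```python
from dataclasses import dataclass

@dataclass
class LocalLLMBudget:
    """Hypothetical resource caps for an in-game local model."""
    n_threads: int      # CPU threads the model may use
    n_ctx: int          # context window, in tokens
    n_gpu_layers: int   # layers offloaded to the GPU
    max_tokens: int     # hard cap per NPC reply

def budget_for_gaming(total_threads: int) -> LocalLLMBudget:
    # Leave most cores for the game loop; the LLM gets the leftovers.
    llm_threads = max(1, total_threads // 4)
    return LocalLLMBudget(
        n_threads=llm_threads,
        n_ctx=2048,       # small window keeps RAM use down
        n_gpu_layers=0,   # GPU stays free for rendering
        max_tokens=64,    # short replies, bounded latency
    )

b = budget_for_gaming(16)  # e.g. a 16-thread CPU leaves 4 for the LLM
```

A cloud model has none of these caps, which is exactly why it's tempting to "shove it everywhere" — the cost just moves to the data center.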
SabinStargem@lemmy.today 1 day ago
On my system, I can play an RPG Maker game and use a 122B LLM at the same time, alongside a podcast. A model in that parameter range takes up about 70 GB of DDR4 RAM and 36 GB of VRAM. It used to be that a 120B model would have a larger footprint, bringing the system to the brink. Hardware requirements are going down, while quality and speed have gone up. I believe when the next major sea change in hardware happens, AI will become very practical for gaming.
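Those figures are consistent with heavy quantization. A back-of-envelope check using only the numbers from the comment above (treating GB as 10^9 bytes, and ignoring that some of that memory goes to the KV cache and runtime overhead rather than weights):

```python
# ~70 GB RAM + ~36 GB VRAM for a ~122B-parameter model.
params = 122e9
total_bytes = (70 + 36) * 1e9

bytes_per_param = total_bytes / params
bits_per_param = bytes_per_param * 8
print(round(bits_per_param, 1))  # roughly 7 bits per weight
```

Full-precision (16-bit) weights for the same model would need ~244 GB, so the footprint implies the weights are stored at well under half precision.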
MyNameIsAtticus@lemmy.world 2 days ago
Local runs on-device, so there's no need to connect to a big data center that chugs lots of water, with all the other problems that come with that. Of course, because it's a far smaller model it's nowhere near as accurate, but for things like this you don't really need a big, accurate LLM.
I should also add a disclaimer that I'm a Software Developer, not an AI Developer, so there's far less backing from my perspective than from someone who works with this stuff for a living.
MirrorGiraffe@piefed.social 2 days ago
I’m also a sw engineer so we’re both guessing 😅
I'm guessing those data centers use that water for cooling, whereas most home computers run an electric fan. Furthermore, they probably use less electricity per token, since they want to maximize profits. I don't have any numbers to back my hunch up, but I'm pretty sure the environment would suffer more if everyone ran their own.
I've probably missed a lot of factors, such as what type of energy the centers run on versus what the average Joe runs on, etc.