Comment on Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations.

ByteSorcerer@beehaw.org 6 hours ago

I’ve also experimented with this. In my experience, getting the NPCs to behave the way you want with just a prompt is hard and inconsistent, and quickly falls apart when the conversation gets longer.

I’ve gotten much better results by starting from a small model and fine-tuning it on lore-accurate conversations (you can use your own conversations with larger models as training data for that). In theory you can improve it further with RLHF, but I haven’t tried that myself yet.
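For a concrete starting point, here’s roughly what that supervised fine-tune can look like with Hugging Face’s transformers and datasets libraries. It’s just a sketch: the model name and the npc_conversations.jsonl file are placeholders for whatever small open-weight model and lore dump you’re actually using.

```python
# Minimal causal-LM fine-tuning sketch (Hugging Face transformers).
# Placeholder data format: one JSON object per line, e.g.
#   {"text": "<full formatted NPC dialogue>"}
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder: any small open-weight model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # many causal LMs ship without one
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("json", data_files="npc_conversations.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="npc-finetune",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,  # effective batch of 16 on a small GPU
        num_train_epochs=3,
        save_steps=200,   # write checkpoints regularly (matters on Colab, see below)
        logging_steps=20,
    ),
    train_dataset=tokenized,
    # mlm=False: plain next-token prediction, labels are the inputs shifted
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```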

The downside, of course, is that you’re limited to open-weight models that you have enough compute to fine-tune. If you don’t have a good GPU, the free Google Colab tier gives you access to a GPU with about 15 GB of VRAM. The free tier has a daily limit on GPU time, though, so set up your training code to save checkpoints regularly; that way you can resume the run on another day if you hit the limit. Using LoRA instead of a full fine-tune also cuts the memory and compute required (in other words, it lets you use a larger, better model with the resources you have).
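Here’s a minimal LoRA sketch with the peft library, building on the Trainer setup above. The target_modules names are an assumption on my part; the attention-projection names vary between model architectures, so check yours.

```python
# LoRA sketch with peft: only small low-rank adapter matrices are trained,
# so a ~15 GB Colab GPU can handle a noticeably larger base model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")  # placeholder
model = get_peft_model(
    model,
    LoraConfig(
        r=16,                  # adapter rank: lower = less memory
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # assumption: model-dependent names
        task_type="CAUSAL_LM",
    ),
)
model.print_trainable_parameters()  # usually well under 1% of the weights

# Pass this model to the Trainer from the earlier sketch. When the Colab
# session's daily GPU quota runs out, resume later from the newest
# checkpoint that save_steps wrote into output_dir:
#   trainer.train(resume_from_checkpoint=True)
```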
