Comment on How close are we to "manually tuning" LLMs?

muntedcrocodile@lemm.ee 2 weeks ago

With an AI model you can do what's called finetuning, which is essentially training a pretrained model on a specific set of data to nudge the weights in the desired direction. There are multiple use cases for this currently, e.g. coding or specific-language expert models, the Dolphin models for uncensored output, roleplaying finetunes, etc. A rough sketch of what that looks like is below.
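
Just a minimal sketch of a finetune using the Hugging Face transformers library, assuming a small causal LM; the model name and the `domain_corpus.txt` file are placeholders, not anything from the thread:

```python
# Minimal finetuning sketch: continue training a pretrained causal LM
# on a domain-specific text corpus (code, roleplay transcripts, etc.).
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "gpt2"  # placeholder: any pretrained causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical plain-text file with the data you want the model to lean toward.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False -> causal LM objective; the collator also pads and builds labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```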

We still have very little knowledge of how the weights in a model work and what individual weights do, so manually tweaking them isn't practical. There is a lot of interpretability work trying to decode the meaning/purpose of a specific neuron or group of neurons; if you manually boost or suppress such a neuron, the output changes to reflect that. Something like the sketch below.
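
Rough illustration only (the layer and neuron indices are arbitrary placeholders, and figuring out which neuron means what is the hard, unsolved part): you can hook one MLP output in GPT-2 and scale a single channel to see how generation shifts.

```python
# Sketch of "manually" boosting one neuron's activation with a PyTorch forward hook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

LAYER, NEURON, SCALE = 6, 1234, 5.0  # made-up indices: which unit to amplify

def scale_neuron(module, inputs, output):
    # MLP output shape: (batch, seq_len, hidden); scale one hidden channel.
    scaled = output.clone()
    scaled[..., NEURON] *= SCALE
    return scaled  # returned value replaces the module's output

hook = model.transformer.h[LAYER].mlp.register_forward_hook(scale_neuron)

ids = tokenizer("The weather today is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20)
print(tokenizer.decode(out[0]))

hook.remove()  # restore normal behavior
```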
