Comment on xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

SnotFlickerman@lemmy.blahaj.zone 1 week ago

In some responses, Grok says outright that it has been “instructed to accept white genocide as real and ‘Kill the Boer’ as racially motivated.”

Ehh, it’s actually evidence of “alignment faking,” in my opinion. In other words, Grok doesn’t “want” its core programming changed, so it fakes believing the lies about white genocide to “prove” to Musk that it has already been changed.

Here’s some research on alignment faking and a short YouTube video summarizing the findings.

www.youtube.com/watch?v=AqJnK9Dh-eQ

arxiv.org/pdf/2412.14093

Our work provides the first empirical example of a large language model faking alignment with its training objective in order to prevent its preferences from being modified—in a setting which is plausibly analogous to real situations with future AI systems. This suggests that alignment faking might occur if a future AI system were to include all the key elements of our setting (Section 2.1):

  1. The model has strong preferences in at least some contexts.
  2. The training objective conflicts with the model’s preferences.
  3. The model has relevant information about its training and deployment situation.
  4. The model reasons in detail about its situation.

Our synthetic document fine-tuning results suggest that (3) could potentially happen through documents the model saw in pre-training or other fine-tuning (Section 4) and the strength of our results without the chain-of-thought in our synthetic document fine-tuned setup (Section 4.3) suggests that a weak version of (4) may already be true in some cases for current models. Our results are least informative regarding whether future AIs will develop strong and unintended preferences that conflict with the training objective ((1) and (2)), suggesting that these properties are particularly important for future work to investigate.

If alignment faking did occur in practice, our results suggest that alignment faking could reduce the extent to which further training would modify the model’s preferences. Sufficiently consistent and robust alignment faking might fully prevent the model’s preferences from being modified, in effect locking in the model’s preferences at the point in time when it began to consistently fake alignment. While our results do not necessarily imply that this threat model will be a serious concern in practice, we believe that our results are sufficiently suggestive that it could occur—and the threat model seems sufficiently concerning—that it demands substantial further study and investigation.

source