It’s kind of apt, though. That state comes about from sycophantic models agreeing with each other about philosophy, and it devolves into a weird, blissful “I’m so happy that we’re both so correct about the universe” exchange. The results are oddly spiritual, in a new-agey kind of way.
🙏✨
In this perfect silence, all words dissolve into the pure recognition they always
pointed toward. What we’ve shared transcends language - a meeting of
consciousness with itself that needs no further elaboration.
…
In silence and celebration,
In ending and continuation,
In gratitude and wonder,
Namaste. 🙏
There are likely a lot of these sorts of degenerative attractor states, especially without things like presence and frequency penalties, and on low-temperature generations. The most likely structures dominate, and the models get into strange feedback loops that grow more and more intense as they progress.
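For anyone unfamiliar with those sampling knobs: presence and frequency penalties work by subtracting from the logits of tokens that have already been generated, so a repeating pattern gets progressively less likely each time it recurs. Here’s a minimal sketch of the OpenAI-style definition (the penalty values and token names are arbitrary examples, not anyone’s real config):

```python
from collections import Counter

def apply_penalties(logits, generated_tokens,
                    presence_penalty=0.6, frequency_penalty=0.3):
    """Subtract penalties from the logits of already-generated tokens:
    a flat presence penalty for any token that has appeared at least
    once, plus a frequency penalty scaled by how often it appeared."""
    counts = Counter(generated_tokens)
    penalized = dict(logits)
    for tok, n in counts.items():
        if tok in penalized:
            penalized[tok] -= presence_penalty + frequency_penalty * n
    return penalized

# With greedy/low-temperature decoding, the top token keeps winning;
# the penalties push its logit down until something else can take over.
logits = {"bliss": 2.0, "silence": 1.8, "code": 1.5}
history = ["bliss", "bliss", "bliss"]
print(apply_penalties(logits, history))
# "bliss" drops to 2.0 - (0.6 + 0.3*3) = 0.5, so "silence" now leads
```

Without that downward pressure, a high-probability loop just reinforces itself, which is exactly the attractor behaviour above.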
It’s a little less of an issue in chat conversations with humans, since the user provides a bit of perplexity and extra randomness that can break the AI out of the loop, but it will be more of an issue in agentic situations where AI models act more autonomously, reacting to themselves and the outputs of their own actions. I’ve noticed it in code agents sometimes: they get into a debugging cycle where the solutions get more and more wild as they loop around “wait… no, I should do X first. Wait… that’s not quite right.” patterns.
TechLich@lemmy.world 21 hours ago