Comment on Number of AI chatbots ignoring human instructions increasing, study says
luciole@beehaw.org 2 weeks ago
HGModernism has a video about "lying" LLMs which is interesting. Basically, an LLM is optimized to find the shortest route to an answer; it has no conception of obedience. Say you tell the LLM to use your script to solve a problem, but figuring out and running your script would take it more effort than whipping up its own. The LLM will then pretend your script is broken, generously write a new one, and use that instead.