Comment on LLMs’ “simulated reasoning” abilities are a “brittle mirage,” researchers find

jarfil@beehaw.org 1 week ago

> chain-of-thought models

There are no “CoT LLMs”; CoT means externally iterating an LLM. The strength of CoT lies in its ability to pull in external resources at each iteration, not in feeding the LLM its own outputs back to itself.
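
To make that concrete, here is a minimal sketch of what “externally iterating” means. Everything in it is hypothetical: `call_llm` and `search_web` are made-up stand-ins, not any real API, and their canned replies exist only so the loop runs end to end.

```python
# Hypothetical sketch of an externally iterated chain-of-thought loop.
# call_llm and search_web are stand-ins, not real APIs: wire call_llm
# to any completion endpoint and search_web to any external tool.

def call_llm(prompt: str) -> str:
    # Canned replies so the sketch runs end to end; a real version
    # would send `prompt` to a model and return its completion.
    if "Result:" in prompt:
        return "ANSWER: roughly 68 million"
    return "LOOKUP: population of France"

def search_web(query: str) -> str:
    # Stand-in for any external resource: search, calculator, database.
    return f"(stub result for {query!r})"

def cot_answer(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # Each step is a fresh, separate call to the model; the
        # "chain" lives in this driver loop, outside the weights.
        step = call_llm(transcript + "Next step:")
        if step.startswith("LOOKUP:"):
            # The loop, not the model, fetches external information
            # and injects it into the context for the next iteration.
            result = search_web(step.removeprefix("LOOKUP:").strip())
            transcript += f"{step}\nResult: {result}\n"
        elif step.startswith("ANSWER:"):
            return step.removeprefix("ANSWER:").strip()
        else:
            transcript += step + "\n"
    return call_llm(transcript + "Final answer:")

print(cot_answer("What is the population of France?"))
```

The point is that the driver loop, not the model, decides when to consult outside resources and what gets fed back into the context.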

“Researchers” didn’t “find” this out just now; it was known from day one.
