Comment on LLMs’ “simulated reasoning” abilities are a “brittle mirage,” researchers find

panda_abyss@lemmy.ca 1 day ago

Chain of thought is basically garbage.

It works with coding agents because they get an automated hard-failure signal: the code either compiles and passes the tests or it doesn't.
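Concretely, the agent can just loop until an external verifier says pass. A minimal sketch in Python, assuming a hypothetical `generate_patch` model call and pytest as the verifier:

```python
import subprocess

def generate_patch(prompt: str) -> str:
    """Hypothetical LLM call that returns candidate code."""
    raise NotImplementedError

def agent_loop(task: str, max_attempts: int = 3) -> str | None:
    """Retry until the test suite passes: the automated hard-failure signal."""
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate_patch(task + feedback)
        with open("candidate.py", "w") as f:
            f.write(candidate)
        # External, binary verdict: exit code 0 means all tests pass.
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return candidate  # verified success
        feedback = "\n# Test failures:\n" + result.stdout  # feed the failure back
    return None  # no verified solution within budget
```

The point is that the verdict comes from outside the model, not from the chain of thought itself.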

The rest of the time it’s just sampling the latent space around a response and should be trimmed out.
