Comment on LLMs’ “simulated reasoning” abilities are a “brittle mirage,” researchers find

interdimensionalmeme@lemmy.ml 1 week ago

I think of chain of thought as the model prompting itself.
I suspect that in the future, chain-of-thought models will run
a smaller, tuned/dedicated submodel just for the chain-of-thought tokens.
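A toy sketch of what that split might look like. Everything here is hypothetical: the function names, and the two "models" (which are plain stubs, not real LLM calls) are just illustrations of the idea that a small submodel writes the reasoning tokens and the main model only consumes them.

```python
# Toy sketch: a small dedicated submodel writes the chain-of-thought,
# and the main model answers conditioned on that finished reasoning.
# Both "models" are stubs standing in for real LLM calls.

def cot_submodel(prompt: str) -> str:
    # Hypothetical small model tuned only to expand a user prompt
    # into explicit reasoning steps (the "self-prompting" part).
    return f"Step 1: restate the task: {prompt}\nStep 2: plan an answer."

def main_model(prompt: str, reasoning: str) -> str:
    # Hypothetical large model that answers, guided by the submodel's
    # reasoning tokens instead of generating its own.
    steps = reasoning.count("Step")
    return f"Answer to '{prompt}' (guided by {steps} reasoning steps)"

def answer(prompt: str, use_cot: bool = True) -> str:
    # The user, not the model selector, decides whether CoT runs at all.
    reasoning = cot_submodel(prompt) if use_cot else ""
    return main_model(prompt, reasoning)

print(answer("Why is the sky blue?"))
print(answer("Why is the sky blue?", use_cot=False))
```

The `use_cot` flag is the point: the chain-of-thought stage is an explicit, user-controllable step rather than something the serving layer turns on by itself.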

The point of this is that most users aren't very good at
prompting; they just don't have the feel for it.

Personally, I get worse results, way less of what I wanted,
when CoT is enabled. I'm very annoyed that the
"chatgpt classic" model selector now just decides to use CoT
whenever it wants. I should be the one to decide that,
and I want it off almost all of the time!
