Comment on Solve a puzzle for me

31337@sh.itjust.works 7 months ago

One hypothesis is that having more tokens to process lets it “think” longer. Chain of Thought prompting, where you ask the LLM to explain its reasoning before giving an answer, works similarly. LLMs also seem to be better at evaluating solutions than at coming up with them, which is the idea behind the Tree of Thought technique: at each reasoning step, the LLM generates several candidate steps and then picks the “best” one before moving on.
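
If it helps to picture it, here’s a rough sketch of that Tree of Thought loop in Python. The `llm` function is just a placeholder for whatever completion API you’re using, and the prompts and step counts are made up for illustration, not the actual ToT paper’s implementation:

```python
def llm(prompt: str) -> str:
    # Placeholder: swap in a call to your model API of choice.
    raise NotImplementedError("plug in a real completion call here")

def tree_of_thought(question: str, steps: int = 3, candidates: int = 3) -> str:
    reasoning = ""
    for _ in range(steps):
        # Generate several candidate next reasoning steps.
        options = [
            llm(f"Question: {question}\nReasoning so far:\n{reasoning}\n"
                "Write the next reasoning step.")
            for _ in range(candidates)
        ]
        # Ask the model to judge the candidates and pick the best one,
        # leaning on the observation that evaluating is easier than generating.
        numbered = "\n".join(f"{i + 1}. {o}" for i, o in enumerate(options))
        choice = llm(f"Question: {question}\nReasoning so far:\n{reasoning}\n"
                     f"Candidate next steps:\n{numbered}\n"
                     "Reply with only the number of the best candidate.")
        reasoning += options[int(choice.strip()) - 1] + "\n"
    # Produce the final answer from the chosen chain of reasoning.
    return llm(f"Question: {question}\nReasoning:\n{reasoning}\nFinal answer:")
```

In practice the selection step is usually more robust than parsing a bare number (e.g. scoring each candidate separately), but the generate-then-evaluate loop is the core of it.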
