Comment on AI hallucinations are getting worse – and they're here to stay

vintageballs@feddit.org 1 day ago

In the case of reasoning models, definitely. Reasoning datasets weren't even a thing a year ago, and from what we know about how the larger models are trained, most task-specific training data is synthetic (often a small human-written seed set that is then augmented synthetically).

However, I think it's safe to assume this has been the case for regular chat models as well - the Self-Instruct and Orca papers are quite old already. A rough sketch of what that augmentation loop looks like is below.
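
For anyone wondering what "synthetically augmented" means in practice, here is a minimal sketch of a Self-Instruct-style loop: a small human-written seed set is grown by prompting a model with a few existing tasks and asking for a new one, then filtering near-duplicates to keep the set diverse. `call_model`, the prompt wording, and the similarity cutoff are all illustrative placeholders I made up, not anything taken from the papers.

```python
import random
import difflib

def call_model(prompt: str) -> str:
    """Placeholder for an actual LLM API call (hypothetical)."""
    raise NotImplementedError("wire up your model API here")

# Small human-written seed set, Self-Instruct style.
seed_tasks = [
    "Summarize the following paragraph in one sentence.",
    "Translate this sentence into French.",
    "Write a regex that matches ISO 8601 dates.",
]

def augment(pool: list[str], rounds: int = 10, sim_cutoff: float = 0.7) -> list[str]:
    for _ in range(rounds):
        # Show the model a few existing tasks drawn from the growing pool.
        examples = "\n".join(
            f"- {t}" for t in random.sample(pool, k=min(3, len(pool)))
        )
        prompt = (
            f"Here are some task instructions:\n{examples}\n"
            "Write one new, different task instruction:"
        )
        candidate = call_model(prompt).strip()
        # Keep the candidate only if it isn't a near-duplicate of anything
        # already in the pool, so the synthetic set stays diverse.
        if all(
            difflib.SequenceMatcher(None, candidate, t).ratio() < sim_cutoff
            for t in pool
        ):
            pool.append(candidate)
    return pool

# augmented = augment(list(seed_tasks))
```

The real pipelines are obviously much bigger (filtering by quality, generating the responses too, etc.), but the seed-then-expand shape is the same.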
