If the LLM sees your question and associates a particular compound with superconductors, it’s because it’s seen these things related in other writings (directly or indirectly).
I’m not convinced of this. LLMs haven’t just been spitting out prior art, despite what some people seem to suggest. It isn’t really auto-complete; that’s just a useful analogy.
For instance, I’m fascinated by the study that got GPT-4 to draw a unicorn in LaTeX. The result wasn’t great, but it was recognizable to us as a unicorn, and apparently it’s gotten better with later iterations. GPT (presumably) has no idea what a unicorn looks like, except through text descriptions. Who knows how it goes from written descriptions of a mythical creature to a 2D drawing in a markup language without ever being trained on images or any concept of what things look like.
It’s important not to ascribe more intent to what you’re seeing than actually exists.
But this is true as well. I’m trying hard not to anthropomorphize these LLMs, but it sure seems like there’s some emergent effect that kind of looks like intelligence to a layman like me.
oakey66@lemmy.world 1 year ago
Exactly. It’s just text prediction software that is really good at making itself sound plausible. It could tell you something completely false and have no idea it’s stating a falsehood. There’s no intelligence here. It’s a very precise word guesser, which is great for specific settings. But there’s a huge amount of hype associated with this tool, and that’s very much by design (by tech companies).
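To make the “word guesser” idea concrete, here’s a toy sketch in Python (my own illustration, not how GPT actually works internally): it just counts which word tends to follow which in a tiny made-up corpus and always guesses the most common follower. The corpus and function names are invented for the example.

```python
# Toy "word guesser": pick the most likely next word from counted examples.
# This is an analogy for next-token prediction, NOT how a real LLM is built.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a bigram table).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Return the word most often observed right after `word`."""
    return next_counts[word].most_common(1)[0][0]

# Generate text one guess at a time, starting from "the".
word = "the"
out = [word]
for _ in range(6):
    word = guess_next(word)
    out.append(word)

print(" ".join(out))  # locally plausible, but nothing checks whether it's true
```

Run it and you get something like “the cat sat on the cat sat”: each word is a reasonable guess given the previous one, but there’s no notion of truth anywhere in the process, which is roughly the point being made about plausible-sounding falsehoods.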