Comment on Academia to Industry
ignotum@lemmy.world 4 months ago
Yeah, that was a hypothetical: if you had those things, you would be able to create a true AGI (or what I would consider a true AGI, at least).
Text is basically just a proxy, but to become proficient at predicting text you do need to develop many of the cognitive abilities that we associate with intelligence. It's also the only type of data we have literal terabytes of lying around, so it's the best we've got 🤷♂️
Regarding memory, the human mind can be viewed as taking in stimuli, associating them with existing memories, condensing that into some high-level representation, and then storing it. An LLM could, with a long enough context window, look back at past input and output and use that information to influence its current output, to much the same effect.
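Roughly, the "context window as memory" idea looks like this. It's a minimal sketch: `generate` is just a placeholder for any text-completion model, and the character limit is a crude stand-in for a token limit.

```python
def generate(prompt: str) -> str:
    # Placeholder for a real model call; it only acknowledges the history it saw.
    return f"(reply conditioned on {len(prompt)} chars of history)"

class ContextMemoryChat:
    def __init__(self, max_chars: int = 8000):
        self.transcript: list[str] = []
        self.max_chars = max_chars  # crude stand-in for a token limit

    def send(self, user_message: str) -> str:
        self.transcript.append(f"User: {user_message}")
        # The "memory" is nothing more than the concatenated past turns,
        # truncated to whatever still fits in the context window.
        context = "\n".join(self.transcript)[-self.max_chars:]
        reply = generate(context + "\nAssistant:")
        self.transcript.append(f"Assistant: {reply}")
        return reply

chat = ContextMemoryChat()
chat.send("Remember that my cat is called Miso.")
print(chat.send("What is my cat called?"))  # can only be answered from the transcript
```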
What do you mean by "throwing a black box into an analytical prediction machine"? And what do you mean by "5 years of development"?
LANIK2000@lemmy.world 4 months ago
The black box is the human who reads and outputs text, and the analytical prediction machine is the AI. The 5 years of development is the human living their life before returning to continue writing. It's an extreme example, but I'm just trying to point out that the context of what a person might write can change drastically between individual messages, because anything can have happened in between. The data is therefore fundamentally flawed for training intelligence, since that step, the thought process, is missing entirely.
As to why I called the AI an analytical prediction machine, that's because that's essentially what it does. It has analyzed an unholy amount of random text from the internet (conversations, blogs, books and so on) to predict what could follow the text you give it. It's why prompt injection is so hard to combat, and why, if you give it a popular riddle and change it slightly, like "with a boat, how can a man and goat get across the river?", it'll fail spectacularly, trying to shove in the original answer somehow. I'd say that's proof it didn't learn to understand (cognition), because it can't use logic to reason about a deviation from the dataset.
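For reference, the altered-riddle probe can be run against any model you like. This is a hypothetical sketch: `ask_model` is a placeholder, not a specific API.

```python
def ask_model(prompt: str) -> str:
    # Replace with a real model call (hosted API, local llama.cpp, etc.).
    return "<model answer goes here>"

altered_riddle = "With a boat, how can a man and a goat get across the river?"
print(ask_model(altered_riddle))
# A model pattern-matching on the classic wolf/goat/cabbage riddle will often
# invent extra trips or a missing wolf/cabbage instead of simply saying
# "put the goat in the boat and row across".
```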
As for memory, we can kind of simulate it with text, but it's not perfect. If the AI doesn't write it down, it didn't happen, and so any thoughts, feelings or mental analysis stop existing with each new generation. The only way it could possibly develop intelligence is if we made it needlessly ramble and describe everything, like a very bad book.
And so, to come back to the beginning of your comment: I don't believe it's necessary to possess any cognitive abilities to generate text, and in turn I don't see it as evidence that we're getting any closer to AGI.
ignotum@lemmy.world 4 months ago
Prompt:
Answer:
Are there biases due to the training data? Yes
Does that mean it is totally incapable of reason? No, why would it?
And the models aren’t trying to act like a specific person, but like humans in general, so variations in writing style in the data are quite irrelevant. As we’ve already seen, it’ll usually adopt the writing style of the prompt, much like a writer will usually stick to their style throughout a book.
Memories are not required for intelligence, and letting a model ramble to itself will just cause the entropy of the output to increase until it’s spewing gibberish, akin to a human locked in solitary for long enough.
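The "rambling to itself" setup is easy to sketch: just feed each output back in as the next prompt, with no new external signal. This is a minimal illustration; `generate` is a placeholder for any real model, where quality typically drifts over the iterations.

```python
def generate(prompt: str) -> str:
    # Placeholder for a real model call.
    return prompt + " and then, well, more of the same"

text = "Describe what you are thinking about right now."
for step in range(5):
    text = generate(text)  # the output becomes the next prompt, no new input
    print(f"step {step}: ...{text[-60:]}")
```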
LANIK2000@lemmy.world 4 months ago
Let’s do the riddle I suggested, because we need something popular in the dataset, but present it with a deviation that makes it stupidly simple.
Prompt:
Answer:
A normal human wouldn’t be fooled by this; they’d say they can just go across, maybe ask where the riddle is, or be confused and expect more. The AI isn’t, because it completely lacks the ability to reason. At least the riddle ends up solved; that’s probably the best response I got while trying to make this point. Let’s continue.
Follow up prompt:
Answer:
Final prompt:
Final answer:
I think that’s quite enough. It’s starting to ramble like you said it would (though much earlier than expected), and unlike the first solution, it doesn’t even end up solved anymore xD I’d argue this is a scenario that should be absolutely trivial, and yet the AI keeps asserting information I didn’t present and continues to fail to apply logic correctly. The only time it knows how to reason is when someone in its dataset has already spelled out the reasoning to a certain question. If the logic doesn’t exist in the dataset, it has great difficulty making heads or tails of it.
And yes, I’d argue memories are indeed absolutely vital to intelligence. If we want cognition, aka the process of acquiring knowledge and understanding, we need it to remember. And if it immediately loses that information, or the information erodes so quickly, it’s essentially worthless.
ignotum@lemmy.world 4 months ago
Tried the same prompt:
Asking questions because you know the dataset is biased towards a particular solution isn’t showing a fault in the system, much like asking a human a trick question isn’t proving humans are stupid. If you want to test the logical reasoning, you should try questions it is unlikely to have ever heard before, where it needs to actually reason on its own to come to the answer (a sketch of that kind of probe follows).
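One hypothetical way to build such probes is to randomize a small reasoning puzzle each time, so the exact wording and answer can't already be sitting in the training data. The question template here is purely illustrative.

```python
import random

def novel_transitivity_question() -> tuple[str, str]:
    a, b, c = random.sample(["Ada", "Bo", "Cleo", "Dev", "Esa", "Finn"], 3)
    question = (
        f"{a} is taller than {b}. {c} is shorter than {b}. Who is the shortest?"
    )
    return question, c  # c is shorter than b, who is shorter than a

q, expected = novel_transitivity_question()
print(q)
print("expected:", expected)
# Feed q to the model under test and compare its answer against `expected`.
```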
And I guess people with anterograde amnesia cannot be intelligent, are incapable of cognition and are worthless, since they can’t form new memories.