Comment on Academia to Industry
LANIK2000@lemmy.world 4 months ago
The black box is the human that reads and outputs text, and the analytical prediction machine is the AI. The 5 years of development is the human living their life before returning to continue writing. It’s an extreme example, but I’m just trying to point out that the context of what a person writes can change drastically between individual messages, because anything can happen in between. That makes the data fundamentally flawed for training intelligence: a whole step, the thought process, is missing from it.
As for why I called the AI an analytical prediction machine: that’s essentially what it does. It has analyzed an unholy amount of random text from the internet (conversations, blogs, books and so on) in order to predict what could follow the text you give it. That’s why prompt injection is so hard to combat, and why, if you take a popular riddle and change it slightly, like “with a boat, how can a man and a goat get across the river?”, it fails spectacularly, trying to shove the original answer in somehow. I’d say that’s proof it didn’t learn to understand (cognition), because it can’t use logic to reason about a deviation from its dataset.
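In code, “predicting what could follow” is literally just a probability distribution over the next token. Here’s a minimal sketch using GPT-2 via the Hugging Face transformers library (the model choice is only an example; any causal LM works the same way):

```python
# Minimal sketch: a language model only scores "what token comes next".
# GPT-2 is used here purely as a small, easy-to-run example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "With a boat, how can a man and a goat get across the river?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # one score per vocabulary token, per position

# The model's entire "answer" starts as a distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob:.3f}")
```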
As for memory, we can kind of simulate it with text, but it’s not perfect. If the AI doesn’t write it down, it didn’t happen, and thus any thoughts, feelings or mental analysis stop existing with each generation. The only way it could possibly develop intelligence is if we made it needlessly ramble and describe everything, like a very bad book.
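To make the “if it doesn’t write it down, it didn’t happen” point concrete: a chat session is just the transcript being re-sent on every turn. A rough sketch (generate() is a placeholder for whatever completion API is being used):

```python
# Sketch: an LLM's "memory" is nothing but the transcript re-sent each turn.
def generate(prompt: str) -> str:
    # Placeholder: swap in a real completion API call here.
    return "(model reply)"

history: list[str] = []  # the model's only "memory"

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)  # the model sees this text and nothing else
    history.append(f"Assistant: {reply}")
    return reply

# Anything not written into `history` is gone: no hidden state survives
# between calls, so unstated "thoughts" simply cease to exist.
```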
And so, to come back to the beginning of your comment: I don’t believe it’s necessary to possess any cognitive abilities to generate text, and in turn I don’t see text generation as evidence that we’re getting any closer to AGI.
ignotum@lemmy.world 4 months ago
Prompt:
Answer:
Are there biases due to the training data? Yes.
Does that mean it is totally incapable of reason? No, why would it?
And the models aren’t trying to act like a specific person, but like humans in general, so variation in writing styles in the data is quite irrelevant. As we’ve already seen, a model will usually adopt the writing style of the prompt, much like a writer will usually stick to one style throughout a book.
Memories are not required for intelligence, and letting a model ramble to itself will just cause the entropy of the output to increase until it’s spewing gibberish, akin to a human locked in solitary confinement for long enough.
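That degeneration is easy to demo: feed a model its own output back as the prompt in a loop. A quick sketch with GPT-2 and the transformers pipeline (a small model, chosen because it drifts into gibberish quickly):

```python
# Sketch: let a model "ramble to itself" by looping its output back as input.
# With no new grounding, the text tends to drift into repetition and gibberish.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

text = "I have been thinking, and"
for step in range(5):
    out = generator(text, max_new_tokens=40, do_sample=True,
                    pad_token_id=50256)[0]["generated_text"]
    text = out[-500:]  # keep only the tail, like a sliding context window
    print(f"--- step {step} ---\n{text}\n")
```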
LANIK2000@lemmy.world 4 months ago
Let’s do the riddle I suggested: something popular in the dataset, but presented with a deviation that makes it stupidly simple.
Prompt:
Answer:
A normal human wouldn’t be fooled by this; they’d say the man can simply row across, and maybe ask where the riddle is. They’d likely be confused or expect more. The AI doesn’t react that way, because it completely lacks the ability to reason. At least the puzzle ends up solved; that’s probably the best response I got while trying to make this point. Let’s continue.
Follow up prompt:
Answer:
Final prompt:
Final answer:
I think that’s quite enough. It’s starting to ramble like you said it would (though much earlier than expected), and unlike the first solution, it doesn’t even end up solved anymore xD I’d argue this is a scenario that should be absolutely trivial, and yet the AI asserts information I never presented and keeps failing to apply logic correctly. The only time it knows how to reason is when someone in its dataset has already spelled out the reasoning for a given question. If the logic doesn’t exist in the dataset, it has great difficulty making heads or tails of it.
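If anyone wants to reproduce this kind of probe, it only takes a few lines against any chat API. A sketch using the OpenAI Python client (the model name is just an example; the pushback message mirrors what I did above):

```python
# Sketch: multi-turn probe with a deviated riddle, never revealing the answer.
# Requires OPENAI_API_KEY in the environment; model name is just an example.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user",
             "content": "With a boat, how can a man and a goat get across the river?"}]

for _ in range(3):  # a few rounds of pushback without doing any work for it
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = resp.choices[0].message.content
    print(answer, "\n---")
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user",
                     "content": "Why did you do that? Question your last answer."})
```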
And yes, I’d argue memories are absolutely vital to intelligence. If we want cognition, i.e. the process of acquiring knowledge and understanding, we need it to remember. And if it immediately loses that information, or the information erodes that quickly, it’s essentially worthless.
ignotum@lemmy.world 4 months ago
Tried the same prompt:
Asking questions where you know the dataset is biased towards a particular solution isn’t showing a fault in the system, much like asking a human a trick question isn’t proving humans are stupid. If you want to test logical reasoning, you should try questions it is unlikely to have ever heard before, where it needs to actually reason on its own to arrive at the answer.
And I guess people with anterograde amnesia cannot be intelligent, are incapable of cognition and are worthless, since they can’t form new memories.
LANIK2000@lemmy.world 4 months ago
It’s not much of a trick question if it’s absolutely trivial. It’s cherry-picked to show that the AI associates things based on what they look like, not based on the logic and meaning behind them. If you gave the same prompt to a human, they likely wouldn’t even think of the original riddle.
Even in your example it starts off with absolute nonsense, and only once you correct it by spelling out the result does it finally manage, though it still presents the answer in the format of the original riddle.
Notice that in my example I intentionally avoid telling it what to do and instead just question the bullshit it made; and rather than thinking “I did something wrong, let’s learn”, it spits out more garbage with absolute confidence. It doesn’t reason. Try regenerating the last answer, but instead ask it why it sent the man back; don’t do any of the work for it. Treat it like a child you’re trying to teach something, not a machine you’re guiding towards the correct result.
And yes, people with memory issues immediately suffer on the intelligence side; their lives are greatly impacted by it, and it rarely ends well for them. And no, they are not worthless. I never said that they, or the AI, are worthless, just that “machine learning” in its current state (as in how the technology works) doesn’t get us any closer to AGI. Just like a person with severe memory loss wouldn’t be able to do the kind of work we’d expect from an AGI.