Comment on Academia to Industry
ignotum@lemmy.world 5 months ago
Prompt:
Alice and Bob are identical twins, Bob is twice the age of Alice. How old are they? Show your thought process step by step.
Answer:
Given the problem statement that Alice and Bob are identical twins and Bob is twice the age of Alice, we need to determine their ages. Let’s analyze the information step by step.
- Identical Twins: Identical twins are born at the same time, so they are always the same age.
- Statement Analysis:
- Bob is twice the age of Alice.
- Since they are identical twins, Bob and Alice must have the same age.
- Logical Conclusion:
- The only scenario where a person’s age can be twice their own age is if their age is zero. Thus, the only solution that satisfies both the condition of them being identical twins and the age relationship given in the problem is: Alice’s age = 0 Bob’s age = 0 Therefore, Alice and Bob are both 0 years old.
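The model's conclusion here is algebraically sound. As a quick sanity check (my own sketch, not part of the thread), a brute-force search over candidate ages confirms that zero is the only age equal to twice itself:

```python
# Identical twins means bob == alice; the riddle also demands bob == 2 * alice.
# Substituting gives alice == 2 * alice, which only holds for alice == 0.
solutions = [age for age in range(0, 101) if age == 2 * age]
print(solutions)  # [0]
```

So "both are 0 years old" is the unique answer consistent with both constraints, exactly as the model reasoned.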
Are there biases due to the training data? Yes
Does that mean it is totally incapable of reason? No, why would it?
And the models aren’t trying to act like a specific person, but humans in general, so variations in writing style in the data are quite irrelevant; as we’ve already seen, it’ll usually adopt the writing style of the prompt, much like a writer will usually stick to their style throughout a book.
Memories are not required for intelligence, and letting a model ramble to itself will just cause the entropy of the output to increase until it’s spewing gibberish, akin to a human locked in solitary for long enough.
LANIK2000@lemmy.world 5 months ago
Let’s do the riddle I suggested, because we need something popular in the dataset, but present it with a deviation that makes it stupidly simple.
Prompt:
Answer:
A normal human wouldn’t be fooled by this and say that they can just go across and maybe ask where the riddle is. They’d be likely confused or expect more. The AI doesn’t because it completely lacks the ability to reason. At least it ends up solved, that’s probably the best response I got when trying to make this point. Let’s continue.
Follow up prompt:
Answer:
Final prompt:
Final answer:
I think that’s quite enough, it’s starting to ramble like you said it would (though much earlier than expected), and unlike the first solution, it doesn’t even end up solved anymore xD I’d argue this is a scenario that should be absolutely trivial, and yet the AI is trying to assert information that I didn’t present and continues to fail to apply logic correctly. The only time it knows how to reason is when someone in its dataset has already spelled out the reasoning to a certain question. If the logic doesn’t exist in the dataset, it has great difficulty making heads or tails of it.
And yes, I’d argue memories are indeed absolutely vital to intelligence. If we want cognition, aka the process of acquiring knowledge and understanding, we need it to remember. And if it immediately loses that information, or the information erodes that quickly, it’s essentially worthless.
ignotum@lemmy.world 5 months ago
Tried the same prompt:
Asking questions because you know the dataset is biased towards a particular solution isn’t showing a fault in the system, much like asking a human a trick question isn’t proving humans are stupid. If you want to test logical reasoning, you should try questions it is unlikely to have ever heard before, where it needs to actually reason on its own to come to the answer.
And I guess people with anterograde amnesia cannot be intelligent, are incapable of cognition and are worthless, since they can’t form new memories.
LANIK2000@lemmy.world 5 months ago
It’s not much of a trick question if it’s absolutely trivial. It’s cherry-picked to show that the AI tries to associate things based on what they look like, not based on the logic and meaning behind them. If you gave the same prompt to a human, they likely wouldn’t even think of the original riddle.
Even in your example it starts off with absolute nonsense, and upon you correcting it by spelling out the result, it finally manages, but still presents it in the format of the original riddle.
You can notice that in my example I intentionally avoid telling it what to do, and instead just question the bullshit it made; rather than thinking “I did something wrong, let’s learn”, it just spits out more garbage with absolute confidence. It doesn’t reason. Just try regenerating the last answer, but ask it why it sent the man back; don’t do any of the work for it, and treat it like a child you’re trying to teach something, not a machine you’re guiding towards the correct result.
And yes, people with memory issues immediately suffer on the intelligence side; their lives are greatly impacted by it, and it rarely ends well for them. And no, they are not worthless. I never said that they or the AI are worthless, just that “machine learning” in its current state (as in how the technology works) doesn’t get us any closer to AGI. Just like a person with severe memory loss wouldn’t be able to do the kind of work we’d expect from an AGI.