Comment on AGI achieved
SoftestSapphic@lemmy.world 6 days ago
Adding weights doesn't make it a fundamentally different algorithm.
We have hit a wall where these programs have combed over the totality of the internet and all available datasets and texts in existence.
We're done here until there's a fundamentally new approach that isn't repetitive training.
Okay but have you considered that if we just reduce human intelligence enough, we can still maybe get these things equivalent to human level intelligence, or slightly above?
We have the technology.
jsomae@lemmy.ml 6 days ago
Transformers were pretty novel in 2017; I don't know if they were really around before that.
Anyway, I'm doubtful that a larger corpus is what's needed at this point. (Though that said, there's a lot more text remaining in instant messenger chat logs like Discord that probably has yet to be integrated into LLMs. Not sure.) I'm also doubtful that scaling up is going to keep working, but it wouldn't surprise me that much if it does keep working for a long while. My guess is that there are some small tweaks to be discovered that really improve things a lot but still basically look like repetitive training, as you put it.