Comment on [deleted]
BrikoX@lemmy.zip 1 day ago
Their whole prediction is based on exponential growth moving forward, which is just impossible. The growth of new models has already stagnated, and all the new improvements are just optimizations and better interface layers. They are basically hard-capped at what they can do, and more powerful hardware can’t solve that.
Something ground breaking might happen that changes the whole landscape in the future, but it won’t be exponential growth.
Multiplexer@discuss.tchncs.de 1 day ago
You are probably quite right, which is a good thing, but the authors take that into account themselves:
They cite an essay on this topic, which elaborates on the things you just mentioned:
lesswrong.com/…/slowdown-after-2028-compute-rlvr-…
I will open a champagne bottle if there is no breakthrough in the next few years, because then the pace will slow down significantly.
But it still won’t stop, and that is the thing.
I myself might not be around anymore if AGI arrives in 2077 instead of 2027, but my children will be, so I am taking the possibility seriously.
And pre-2030 is also not completely out of the question. Everyone was quite surprised at how well LLMs worked.
There might be similar surprises in store for the other missing components, like world models and continuous learning, which is a somewhat scary prospect.
And alignment is already a major concern even now; let’s just say “Mecha-Hitler”, crazy fake videos, and bot armies pushing someone questionable’s agenda…
So it seems like a good idea to press for control and regulation, even if the more extreme scenarios are likely to happen decades in the future, if at all…