Multiplexer
@Multiplexer@discuss.tchncs.de
- Comment on Looks like nuclear fusion is picking up steam 18 hours ago:
Well, since almost all our industrial-scale electrical energy sources boil down (this pun definitely intended) to rapidly heating up huge amounts of water, the pun seems obvious for a nuclear fusion power plant. But maybe it isn’t and it is just a coincidence. Who knows…
- Comment on KATHLEEN 19 hours ago:
I see, so probably based on the resemblance to “Kathy Deeds”… Wow, that is not even decent dad humour level…
- Comment on KATHLEEN 19 hours ago:
Interesting insect, never heard of it before!
Although I am slightly sorry as I can’t stop giggling at the moment since I realized how silly all the references to feet sound in this description. 🤭
What I don’t understand is the text in the speech bubble… Is that some kind of insider joke? It doesn’t seem to be valid information…
- Comment on When something still uses micro USB in 2025 1 day ago:
Also, I have the impression that the lifetime of products has increased again over the last decade or so.
Still rocking my Sony ebook reader from 2011 and a Samsung Galaxy S5 as a backup and WhatsApp phone. Both use Micro USB, so I have a small cable with me anyway.
- Comment on [deleted] 1 day ago:
You are probably quite right, which is a good thing, but the authors take that into account themselves:
“Our team’s median timelines range from 2028 to 2032. AI progress may slow down in the 2030s if we don’t have AGI by then.”
They are citing an essay on this topic, which elaborates on the things you just mentioned:
lesswrong.com/…/slowdown-after-2028-compute-rlvr-…
I will open a champagne bottle if there is no breakthrough in the next few years, because then the pace will significantly slow down.
But it will still not stop, and that is the thing.
I myself might not be around any more if AGI arrives in 2077 instead of 2027, but my children will, so I am taking the possibility seriously.
And pre-2030 is also not completely out of the question. Everyone has been quite surprised by how well LLMs have been working.
There might be similar surprises in store for the other missing components like world models and continuous learning, which is a somewhat scary prospect.
And alignment is already a major concern even now; let’s just say “Mecha-Hitler”, crazy fake videos and bot armies pushing someone questionable’s agenda…
So it seems like a good idea to try and press for control and regulation, even if the more extreme scenarios are likely to happen decades into the future, if at all…
- Comment on [deleted] 1 day ago:
I think the point is not that it is really going to happen at that pace, but to show that it very well might happen within our lifetime. Also, the authors have since adjusted the earliest possible point of a hard-to-stop runaway scenario to 2028, afaik.
Kind of like the atomic doomsday clock, which has been oscillating between a quarter to twelve and a minute to twelve over the last decades, depending on active nukes and current politics. It helps to illustrate an abstract but nonetheless real risk with maximum possible impact (annihilation of mankind - not fond of the idea…)
Even if it looks like AI has been hitting some walls for now (which I am glad about) and is overhyped, it might not stay that way. So although AGI seems unlikely at the moment, taking the possibility into account and perhaps slowing down to make sure we are not recklessly risking triggering our own destruction is still a good idea, which is exactly the authors’ point.
Kind of like scanning the sky with telescopes and doing DART-style asteroid research missions is still a good idea, even though the probability for an extinction level meteorite event is low.
- Comment on If you are paying to use "AI", who are you paying and what are your regular usecases? 2 days ago:
I’m using openrouter.ai, a service that gives access to a wide range of models and lets you easily switch between them on the fly.
Besides the major players, I can also use cloud-hosted instances of open models. These are often incredibly cheap, and you can select the ones that don’t use your data for training.
Typical use cases include language learning and copilot stuff for programming.
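For anyone curious how that looks in practice: OpenRouter exposes an OpenAI-compatible API, so switching models mostly comes down to changing a model string. A rough sketch in Python (the API key and model id below are just placeholders/examples, not my actual setup):

```python
# Minimal sketch: talk to OpenRouter through the standard OpenAI client.
# The base_url points the client at OpenRouter instead of OpenAI.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder key
)

# Switching models on the fly is just a different model string,
# e.g. a cheap cloud-hosted open model (example id, check the site's model list).
response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct",
    messages=[
        {"role": "user", "content": "Explain the difference between 'then' and 'than' with one example each."},
    ],
)
print(response.choices[0].message.content)
```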