How AI researchers accidentally discovered that everything they thought about learning was wrong
Submitted 1 day ago by cm0002@piefed.world to technology@lemmy.zip
marius@feddit.org 1 day ago
Wouldn’t training a lot of small networks work as well then?
mindbleach@sh.itjust.works 18 hours ago
Quite possibly, yes. But how much is "a lot"? A single wide network already acts like many small networks superimposed, so it effectively samples many permutations at once.
Probing the space with many small networks and brief training sounds faster, but large networks recreate that trick internally: train for a bit, mark the weights that land near zero, rewind the rest to their initial values, and permanently zero out the marked ones.
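Roughly, that loop looks like the following minimal sketch (iterative magnitude pruning with rewinding; the toy linear model, sizes, and 30% prune rate are illustrative assumptions, not anything from the article):

```python
# Toy sketch of prune-and-reset: train briefly, prune the smallest
# weights, rewind the survivors to their initial values, repeat.
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer "network": y = x @ W, fit to a sparse target.
X = rng.normal(size=(256, 32))
true_W = rng.normal(size=(32, 8)) * (rng.random((32, 8)) < 0.2)
Y = X @ true_W

W_init = rng.normal(size=(32, 8)) * 0.1   # remember the initialization
mask = np.ones_like(W_init)               # 1 = keep weight, 0 = pruned

def train_briefly(W, steps=200, lr=0.05):
    """A few gradient steps on mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ (W * mask) - Y) / len(X)
        W -= lr * grad * mask             # pruned weights stay at zero
    return W

W = W_init.copy()
for round_ in range(3):
    W = train_briefly(W)
    # Mark the smallest-magnitude surviving weights for pruning...
    alive = W[mask == 1]
    threshold = np.quantile(np.abs(alive), 0.3)   # prune 30% per round
    mask *= np.abs(W) > threshold
    # ...then rewind the survivors to their original initial values.
    W = W_init * mask
    print(f"round {round_}: {int(mask.sum())}/{mask.size} weights remain")
```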
What training many small networks would be good for is experimentation. Super deep and narrow, just five big dumb layers, fewer steps with more heads, that kind of thing. Maybe get wild and ask a question besides “what’s the next symbol.”
Leeks@lemmy.world 1 day ago
That’s pretty much just agentic AI.