Comment on How AI researchers accidentally discovered that everything they thought about learning was wrong
marius@feddit.org 2 days ago
The lottery ticket hypothesis crystallised: large networks succeed not by learning complex solutions, but by providing more opportunities to find simple ones.
Wouldn’t training a lot of small networks work as well then?
Leeks@lemmy.world 1 day ago
That’s pretty much just agentic AI.
mindbleach@sh.itjust.works 22 hours ago
Quite possibly, yes. But how much is “a lot”? A wide network already behaves like many overlapping small networks, so it covers much the same search space in one go.
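The “covers the same search space” point can be made concrete by counting: picking a sparse subnetwork out of one layer’s weights is a combinations problem, and the count dwarfs any number of small networks you could train separately. The sizes below are arbitrary illustrations, not from any real model:

```python
from math import comb

# Back-of-envelope: how many sparse subnetworks hide in one wide layer?
n_weights = 10_000      # weights in a single, modestly sized layer (arbitrary)
ticket_size = 100       # size of a hypothetical sparse "winning ticket"

# Distinct ways to choose which 100 of the 10,000 weights survive:
subnetworks = comb(n_weights, ticket_size)
print(f"~10^{len(str(subnetworks)) - 1} candidate subnetworks")
```

Even a billion separately trained small networks is ~10^9 attempts, which is nothing next to the ~10^242 sparse candidates a single wide layer contains.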
Probing the space with small networks and brief training sounds faster, but large networks recreate that trick internally: train for a bit, mark any weights near zero, reset to the original initialization, and zero out the marked weights.
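That train-mark-reset-prune loop is essentially iterative magnitude pruning with weight rewinding, the procedure from the lottery ticket work. A toy sketch in NumPy on a linear problem where only three weights actually matter; every size, rate, and fraction here is made up for illustration:

```python
import numpy as np

# Toy lottery-ticket loop: train briefly, prune the smallest weights,
# rewind the survivors to their original initialization, repeat.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.0, 0.5]             # only three weights are useful
y = X @ true_w

w_init = rng.normal(scale=0.1, size=20)   # keep the initialization around
w = w_init.copy()
mask = np.ones(20)                        # 1 = weight still alive

def train(w, steps):
    # Plain gradient descent on masked mean-squared error.
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(X)
        w = w - 0.1 * grad * mask
    return w

for _ in range(3):                        # three prune-and-rewind rounds
    w = train(w, steps=100)               # "train for a bit"
    alive = np.flatnonzero(mask)
    k = len(alive) * 3 // 10              # mark the ~30% smallest survivors
    drop = alive[np.argsort(np.abs(w[alive]))[:k]]
    mask[drop] = 0.0                      # "zero those out"
    w = w_init.copy()                     # "reset" to the original init

w = train(w, steps=300)                   # final training of the sparse ticket
print(int(mask.sum()), "weights survive") # the three useful ones among them
```

The brief training rounds push the useless weights toward zero while the useful ones grow, so pruning by magnitude keeps stumbling onto the right sparse subnetwork, which is the behaviour the hypothesis attributes to width.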
What training many small networks would be good for is experimentation: super deep and narrow, just five big dumb layers, fewer steps with more heads, that kind of thing. Maybe get wild and ask a question besides “what’s the next symbol?”