A model actually capable of general-purpose intelligence likely requires memory beyond the petabyte range.
These models are using gigabytes now and the trend indicates it's exponential. A couple more gigabytes isn't going to cut it. Layers cannot expand the predictive capabilities without increasing the error. I'm sure a proof of that will be along within the next few years.
slackassassin@sh.itjust.works 3 days ago
I work with pretrained models implemented on FPGAs for particle identification and tracking. It's much faster and exactly as accurate. ¯\_(ツ)_/¯
daniskarma@lemmy.dbzer0.com 3 days ago
Run, the Butlerian Jihad is already headed your way.