A model actually capable of general-purpose intelligence likely needs memory well beyond the petabyte range.
Today's models already use gigabytes, and the trend looks exponential, so a couple more gigabytes isn't going to cut it. Adding layers can't expand predictive capability without increasing error; I'm sure a proof of that will be along in the next few years.
slackassassin@sh.itjust.works 5 weeks ago
I work with pretrained models implemented on FPGAs for particle identification and tracking. It's much faster and exactly as accurate. ¯\_(ツ)_/¯
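For anyone curious what that looks like in practice, here's a minimal sketch (plain C++, hypothetical layer sizes and a Q8.8 fixed-point format, not any specific experiment's code) of the kind of quantized dense layer a pretrained network typically gets reduced to before it's mapped onto FPGA fabric:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

// Hypothetical sizes for a tiny dense layer from a pretrained particle-ID net.
constexpr int N_IN  = 16;   // input features (e.g. detector hit summaries)
constexpr int N_OUT = 8;    // output units

// Q8.8 fixed point: 16-bit value, 8 fractional bits.
using fixed_t = int16_t;
constexpr int FRAC_BITS = 8;

// Weights and biases come from the trained, quantized model;
// zero-initialized here as placeholders.
std::array<std::array<fixed_t, N_IN>, N_OUT> weights{};
std::array<fixed_t, N_OUT> biases{};

// One dense layer + ReLU. On the FPGA both loops are fully unrolled, so every
// multiply-accumulate runs in parallel and the layer finishes in a fixed,
// small number of clock cycles -- that's where the speed comes from.
std::array<fixed_t, N_OUT> dense_relu(const std::array<fixed_t, N_IN>& in) {
    std::array<fixed_t, N_OUT> out{};
    for (int o = 0; o < N_OUT; ++o) {
        // Align the Q8.8 bias with the Q16.16 products before accumulating.
        int32_t acc = static_cast<int32_t>(biases[o]) << FRAC_BITS;
        for (int i = 0; i < N_IN; ++i) {
            acc += static_cast<int32_t>(weights[o][i]) * in[i];
        }
        // Rescale back to Q8.8, clamp to the representable range, apply ReLU.
        int32_t scaled = acc >> FRAC_BITS;
        scaled = std::clamp(scaled, int32_t{0}, int32_t{INT16_MAX});
        out[o] = static_cast<fixed_t>(scaled);
    }
    return out;
}
```

The point is that inference is just fixed arithmetic on frozen weights, so the latency is deterministic and tiny, and accuracy only depends on how faithfully the quantization preserves the pretrained model.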
daniskarma@lemmy.dbzer0.com 5 weeks ago
Run, the Butlerian Jihad is already coming your way.