pup_atlas@pawb.social ⁨6⁩ ⁨months⁩ ago

Indexing and lookups on datasets as large as the ones Google and Amazon run also take trillions of operations to complete, especially once you account for the constant reindexing that has to be done. In some cases, encoding data into a neural network is actually cheaper than storing the data itself. You can see this in practice with Gaussian splatting point-cloud capture, where networks are trained to guide points in the cloud at runtime, rather than storing the positions of trillions of points over time.
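A rough back-of-the-envelope sketch of the trade-off (my own illustrative numbers, not from any real system): storing explicit point trajectories grows with points × frames, while a small deformation network that predicts point motion at runtime has a fixed parameter cost. The layer sizes here are hypothetical.

```python
def explicit_storage_bytes(n_points, n_frames, floats_per_point=3, bytes_per_float=4):
    """Bytes to store every point's position at every frame explicitly."""
    return n_points * n_frames * floats_per_point * bytes_per_float

def mlp_param_bytes(layer_sizes, bytes_per_float=4):
    """Bytes to store a fully connected network's weights and biases."""
    params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))
    return params * bytes_per_float

# 10 million points over 300 frames, stored explicitly: ~36 GB
explicit = explicit_storage_bytes(10_000_000, 300)

# A small (hypothetical) deformation MLP mapping (point embedding, time)
# -> displacement, queried at runtime instead of looking positions up:
network = mlp_param_bytes([64, 256, 256, 256, 3])  # well under 1 MB

print(f"explicit: {explicit / 1e9:.1f} GB, network: {network / 1e6:.2f} MB")
```

The gap only widens as you add points or frames, which is the intuition behind encoding the data in a network rather than storing it.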
