Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM?

magiccupcake@lemmy.world 1 day ago

Don’t forget the fundamental scaling properties of LLMs, which OpenAI even used as the basis of its strategy for building GPT-3.5.

But basically, LLM performance scales logarithmically: it’s easy to get rapid improvements early on, but at the point we’re at now it takes exponentially more compute, training data, and model size to eke out ever-smaller improvements.

Even if we get a 10x increase in compute, model size, and training data (which is fundamentally finite), the improvements aren’t going to be groundbreaking or solve any of the inherent limitations of the technology.
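
As a rough sketch of what that flattening looks like: the snippet below assumes a simple power-law fit, loss ∝ C^−α, in the spirit of the Kaplan et al. (2020) scaling laws. The exponent and reference loss here are illustrative assumptions, not real fitted constants.

```python
# Illustrative power-law scaling curve (constants are assumptions,
# loosely in the range of published LLM scaling-law fits).
ALPHA = 0.05  # assumed exponent: loss ~ compute ** -ALPHA
L_REF = 3.0   # assumed loss at a reference compute budget of 1.0

def loss(compute: float) -> float:
    """Loss under the assumed power law L(C) = L_REF * C ** -ALPHA."""
    return L_REF * compute ** -ALPHA

for multiplier in (1, 10, 100, 1000):
    print(f"{multiplier:>4}x compute -> loss {loss(multiplier):.3f}")
```

With an exponent that small, each extra 10x of compute shaves only about 11% off the loss, which is why the gains feel like diminishing returns so quickly.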

source