vk6flab@lemmy.radio 5 months ago

The underlying issue with an LLM is that there is no “learning”. The model itself doesn’t dynamically change whilst it’s being used.
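
To make that concrete, here’s a minimal sketch showing that generating text never touches the weights. It assumes PyTorch and Hugging Face transformers are installed, and uses gpt2 purely as a stand-in for any LLM:

```python
# Inference does not change the model: the weights are identical
# before and after generation. No "learning" happens here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode

# Snapshot one weight matrix before generating.
before = model.transformer.h[0].attn.c_attn.weight.clone()

with torch.no_grad():  # no gradients, so no weight updates are even possible
    ids = tok("The underlying issue with an LLM", return_tensors="pt")
    model.generate(**ids, max_new_tokens=20, pad_token_id=tok.eos_token_id)

after = model.transformer.h[0].attn.c_attn.weight
print(torch.equal(before, after))  # True: the model is unchanged by use
```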

This article sets out a process for altering the model by “dialling up” (or down) individual concepts. In other words, it changes how heavily those concepts are weighted across the whole model.
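
For illustration only, here’s a hedged sketch of what “dialling up” a concept can look like in the activation-steering style. Everything here is an assumption for the demo, not the article’s actual method: the model, the layer choice, the random `concept_vector` (deriving a real one from data is the hard part), and the `alpha` dial.

```python
# Sketch of concept "dialling": add a fixed concept vector to one layer's
# hidden states during the forward pass, scaled by a dial `alpha`.
# The weights themselves are never modified.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

hidden = model.config.n_embd
concept_vector = torch.randn(hidden)  # placeholder; a real one is learned from data
alpha = 4.0                           # the "dial": positive amplifies, negative suppresses

def steer(module, inputs, output):
    # GPT-2 blocks usually return a tuple whose first element is the hidden states.
    if isinstance(output, tuple):
        return (output[0] + alpha * concept_vector,) + output[1:]
    return output + alpha * concept_vector

handle = model.transformer.h[6].register_forward_hook(steer)
ids = tok("The bridge is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()  # remove the dial; the underlying model was never changed
```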

Altering one concept is hardly “learning”, especially since it’s being done externally by researchers, but it’s a start.

A much bigger problem is that an LLM’s energy consumption is several orders of magnitude higher than that of our brain. I’m not convinced that we have enough energy to make a standalone “AI”.

What machine learning actually gave us is the ability to automatically improve a digital model of something. Take weather prediction: a week-long forecast that once took hours on a supercomputer can now be produced on a laptop in minutes, with longer range and better accuracy. Machine learning made that possible.
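
As a toy illustration of “automatically improving a digital model”, here’s a sketch where gradient descent fits a tiny network to observed data with no hand-tuning. It’s not a weather model; the sine curve just stands in for real observations:

```python
# Gradient descent repeatedly nudges the model's parameters so its
# predictions match the data: the model improves automatically.
import torch

torch.manual_seed(0)
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x)  # stand-in for real measurements

model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final fit error: {loss.item():.5f}")  # shrinks as the model improves
```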

An LLM attempts the same thing with human language. It’s tantalising, but ultimately I think applying the idea to language to create “AI” is doomed.
