
evranch@lemmy.ca 7 months ago

And it still can’t understand; it’s still just sleight of hand.

Yes, thus “passable imitation of understanding”.

The average consumer doesn’t understand tensors, weights and backprop. They haven’t even heard of such things. They ask it a question as if it were a sentient AGI, and it gives them an answer.

Passable imitation.

You don’t need a data center except for training, either. There’s no exponential term, since the models in the pipeline run sequentially. You can even flush the huge LLM off your GPU when you don’t actively need it.
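Something like this, if you happen to be running Ollama locally (any runtime with an unload/keep-alive option works the same way) - just a sketch, not exactly what I run:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumes a local Ollama instance on the default port

def ask(prompt: str) -> str:
    """Run one query, then let the runtime evict the model from VRAM."""
    resp = requests.post(OLLAMA_URL, json={
        "model": "llama3",   # whatever local model you have pulled
        "prompt": prompt,
        "stream": False,
        "keep_alive": 0,     # unload the weights as soon as the answer comes back
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]

print(ask("Roughly what does it cost to run a gas furnace for an hour?"))
```

The GPU only holds the model for the few seconds it takes to answer, then the VRAM is free again for whatever else you're doing.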

I’ve already run basically this entire stack locally, integrated with my home automation system, on a machine with a 12GB Radeon and 32GB of RAM. Just to see how well it would work, and to impress my friends.

You yell out “$wakeword, it’s cold in here. Turn up the furnace” and it can bicker with you in near-realtime about energy costs before turning the furnace up by the requested amount.
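The glue logic is honestly the boring part. Roughly this shape, with the wakeword detector, speech-to-text, and the set_furnace hook all being hypothetical stand-ins for whatever local pieces you wire together:

```python
import requests

LLM_URL = "http://localhost:11434/api/generate"  # local LLM endpoint (assumed Ollama here)

SYSTEM = ("You control a furnace. If the user asks for heat, make a short remark "
          "about energy cost, then end with a line 'SETPOINT: <degrees C>'.")

def handle_utterance(text: str) -> None:
    """One turn: user speech (already transcribed by the wakeword/STT stage) -> LLM -> action."""
    reply = requests.post(LLM_URL, json={
        "model": "llama3",
        "prompt": f"{SYSTEM}\nUser: {text}\nAssistant:",
        "stream": False,
    }, timeout=120).json()["response"]
    print(reply)                                    # would go out through local TTS in practice
    for line in reply.splitlines():
        if line.startswith("SETPOINT:"):
            set_furnace(float(line.split(":")[1]))  # hypothetical home-automation hook

def set_furnace(degrees: float) -> None:
    print(f"Setting furnace to {degrees} °C")       # stand-in for the real thermostat call

handle_utterance("It's cold in here. Turn up the furnace a couple degrees.")
```

All of that runs comfortably inside the hardware above; the LLM call is the only part that touches the GPU.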
