We may not even “need” AGI. The future of machine learning and robotics may well involve multiple wildly varying models working together.
LLMs are already very good at what they do (generating and parsing text and making a passable imitation of understanding it).
We already use them alongside other models. For example, Whisper is a speech recognition model: you feed its output to an LLM to interpret, run the LLM's JSON output through a traditional parser to drive a motion control system, then go back to an LLM to generate text for one of the many TTS models so the robot can "tell you what it's going to do".
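A minimal sketch of that kind of glue layer, assuming hypothetical `transcribe`, `ask_llm`, `speak`, and `run_motion_controller` stand-ins for Whisper, an LLM API, a TTS engine, and the control system (none of these are a specific library's API):

```python
import json

# Hypothetical stand-ins: a real system would swap in Whisper, an LLM
# API, and a TTS engine here. Only the glue logic is the point.

def transcribe(audio_path: str) -> str:
    """Speech-to-text (model 1, e.g. Whisper)."""
    return "pick up the red cup"

def ask_llm(prompt: str) -> str:
    """LLM call (model 2); canned responses for illustration."""
    if prompt.startswith("Describe"):
        return "I'm going to grasp the red cup."
    return json.dumps({"action": "grasp", "target": "red cup", "speed": 0.2})

def speak(text: str) -> None:
    """Text-to-speech (model 3); just prints in this sketch."""
    print(f"[TTS] {text}")

def run_motion_controller(command: dict) -> None:
    """Traditional, non-ML control code consumes the parsed plan."""
    print(f"[MOTION] {command['action']} -> {command['target']}")

def handle_utterance(audio_path: str) -> None:
    text = transcribe(audio_path)                     # speech recognition
    plan_json = ask_llm(f"Return a JSON motion plan for: {text}")
    plan = json.loads(plan_json)                      # ordinary parser, no ML
    run_motion_controller(plan)                       # classic control system
    summary = ask_llm(f"Describe this plan in one sentence: {plan_json}")
    speak(summary)                                    # TTS output

handle_utterance("mic_input.wav")
```

Note that only the "understanding" steps involve ML; the JSON parsing and motion control are plain deterministic code.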
Put it in a humanoid shell or a Spot dog and you have a helpful robot that looks a lot like AGI to the user. Nobody needs to know that it’s just 4 different machine learning algorithms in a trenchcoat.
Gabu@lemmy.world 8 months ago
Pray tell, when did we achieve AGI so that you can say this with such conviction? Oh, wait, we didn’t - therefore the path there is still unknown.
melpomenesclevage@lemm.ee 8 months ago
Okay, this is no more a step toward AGI than the publication of "Blindsight" or me adding tamarind paste to sweeten my tea.
Harbinger01173430@lemmy.world 8 months ago
When the Jews made their first mud golem ages ago?