As with every big technological advancement, the powerful rush to consolidate their control over it and prioritize how it can benefit them over how it can benefit society at large.
Jon Stewart On The False Promises of AI | The Daily Show
Submitted 7 months ago by mesamunefire@lemmy.world to videos@lemmy.world
https://www.youtube.com/watch?v=20TAkcy3aBY
Comments
TheDankHold@kbin.social 7 months ago
FenrirIII@lemmy.world 7 months ago
And the sheep bask in awe and let themselves be pushed down even further.
MalReynolds@slrpnk.net 7 months ago
Leave it to comedians to actually be on point. Technical 3/10, social 8/10.
bionicjoey@lemmy.ca 7 months ago
I have to say, I agree 90% with Jon on this. Which is significantly less than I usually agree with him.
I think he could have talked more about the lack of reliability of AI. It’s not simply a drop-in replacement for people, like the invention of the conveyor belt or sewing machine was. A better analogy would be the mass outsourcing of call-center jobs to South Asia.
mkwt@lemmy.world 7 months ago
Well, that’s where it’s at now. There’s no guarantee it will stay that way. Give Moore’s law several more cycles, and maybe we’ll have enough computing power to make drop-in replacement humans.
I think people are misinformed about the current readiness of AI specifically because Silicon Valley VCs have taken a lot of the R&D funding market share from the DARPA government types.
VC funding decisions are heavily oriented around the prototype product demo. (No grant writing!). This encourages “fake it till you make it”: demo a fake product to get the funding to build the real one. This stuff does leak out to the public, and you end up with overstated capabilities.
WhatAmLemmy@lemmy.world 7 months ago
There seems to be a misunderstanding of how LLMs and statistical modelling work. Neither can solve its accuracy problem, because they operate on probability distributions and find correlations in ones and zeros. LLMs generate and use these internally, without supervision (a “black box”). They’re only as “smart” as the human-generated input data, and will always produce false positives and false negatives. This is unavoidable. There is no critical thought or intelligence, only mimicry.
I’m not saying LLMs won’t shake up employment, find their niche, and make many jobs redundant, or that critical advances in general AI won’t occur. I’m saying that LLMs simply can’t replace human decision making or control, and that using them that way is a disaster waiting to happen. The best they can do is speed up certain tasks; a human will always be needed to determine whether the results make real-world sense.
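The point about probability distributions can be made concrete with a toy sketch (this is an illustration of sampling, not a real language model; the tokens and probabilities are invented for the example). However sharply the distribution peaks on the right answer, the wrong answers keep nonzero probability, so some fraction of outputs is simply wrong:

```python
import random

# Invented next-token distribution for the prompt "The capital of France is".
# A real LLM computes such a distribution over its whole vocabulary.
next_token_probs = {
    "Paris": 0.90,   # usually right...
    "Lyon": 0.07,    # ...but wrong answers always retain some probability
    "Berlin": 0.03,
}

def sample(probs, rng):
    """Sample one token from a probability distribution (inverse CDF)."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding

rng = random.Random(0)
samples = [sample(next_token_probs, rng) for _ in range(1000)]

# The error rate converges to the probability mass on wrong tokens (~10%),
# which is the irreducible, mimicry-without-understanding failure mode
# the comment describes.
wrong_fraction = sum(t != "Paris" for t in samples) / len(samples)
print(f"wrong answers: {wrong_fraction:.1%}")
```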
MechanicalJester@lemm.ee 7 months ago
Moore’s law predicts that compared to 1980, computers in 2040 would be a BILLION times faster.
Also that compared to 1994 computers, the ones rolling out now are a MILLION times faster.
A cheap Raspberry Pi could easily handle the computational workload of a room full of equipment from 1984.
What would have taken a million years to calculate in 1984 would theoretically take 131 hours today and 29 seconds in 2044…
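The arithmetic above can be checked with a short sketch, assuming an 18-month doubling period (one common reading of Moore’s law) and taking “today” as 2023:

```python
# Check the Moore's-law speedup figures quoted above, assuming
# compute doubles every 1.5 years (an assumption, not a law of nature).
def speedup(start_year, end_year, doubling_years=1.5):
    """Cumulative speedup factor between two years."""
    return 2 ** ((end_year - start_year) / doubling_years)

SECONDS_PER_YEAR = 365.25 * 24 * 3600
million_years_s = 1_000_000 * SECONDS_PER_YEAR

# A million 1984-years of computation, rerun on faster hardware:
hours_2023 = million_years_s / speedup(1984, 2023) / 3600
seconds_2044 = million_years_s / speedup(1984, 2044)

print(round(hours_2023))   # ~131 hours
print(round(seconds_2044)) # ~29 seconds
```

Both quoted figures fall out of the same 18-month doubling assumption: 2^26 ≈ 67 million by 2023, 2^40 ≈ 1.1 trillion by 2044.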
AA5B@lemmy.world 7 months ago
If it were only a matter of processing power, we’d already be able to demonstrate much more capable AIs. More computing power in more places will facilitate further development, but it’s the “further development” that’s key.
Personally, I’m looking for Moore’s Law to make home AIs more responsive and more similar to today’s cloud-based AIs.