I agree with the first two paragraphs, but as for the third, I think you overestimate the capability of a chatbot on steroids. It couldn’t even run a Taco Bell drive-through.
BarneyPiccolo@lemmy.today 15 hours ago
I get what you are saying, but the distinguishing characteristic of the new AI, compared with past computer programs, is that it learns and improves. Years ago, people used to laugh at me for supporting solar, because it was so inefficient. I just said the research would improve it, and today solar is an extremely popular, affordable, and growing option, especially with Trump’s war profiteering.
Apparently in the AI world, they are expecting its capabilities to double every 7 months. I saw a list of steps, with the industries that will be impacted at each step, and as each step doubles, it impacts bigger and bigger industries.
It’s learning the basics right now, but humans are training the AI to the point that it will replace them; then the next level of humans will train the next level until it replaces them, then move on to the next level to be trained.
In a few years, we’ll all be replaced, except a lucky few who do the maintenance, but those jobs won’t pay much, because if you won’t do it at that pay scale, get out of the way; there are a LOT of unemployed people who will accept it.
7101334@lemmy.world 15 hours ago
It learns, sure… and it’s already learned as much as it can from the entire internet, and still can’t run a Taco Bell drive-through.
Yeah, and in the cryptocurrency world, they predicted that Bitcoin would currently be worth $200k to $300k, potentially as high as $400k to $1 million in high-greed environments.
Instead, it is almost exactly at the value of their “Bitcoin Dead” level lol
I would give less credence to the opinions of people whose financial interests are vested in you believing AI is magic.
Humans are more than just chatbots. Therefore even the most advanced chatbot will never replace us.
Which is not to say that I think humans are the supreme possible intelligence, or that machine intelligence could never surpass us. I just do not believe that the current LLMs we have are capable of achieving anything resembling actual thought, just a decently convincing mimicry of it.
It’s also not to say that I think no jobs will be lost, but I think they’ll be situations like where a QA department reduces its workforce by 75% but then the remaining 25% are still expected to oversee the AI’s output. It’s still a shit outcome economically (though I’d also reference that quote, “Imagine how badly we had to fuck up to create a world where the robots taking all the jobs is a bad thing”), but it’s not the same as actually rivaling us in cognition or intellectual capacity.
And a few years before 2016, everyone who bought Bitcoin was going to be driving a Lamborghini.
I still see more Priuses and Corollas on the road these days.
smiletolerantly@awful.systems 15 hours ago
If you’re thinking of the list that I’m thinking of: that is completely unfounded. They started with the premise “AI will be perfect in 2 years” and then drew a graph that looked good-ish. There is no scientific value to it.
BarneyPiccolo@lemmy.today 13 hours ago
Valid, but no matter what the timeline, it’s going to improve over time, and companies are already committed to it, so they’ll be prioritizing continuing R&D until it does what they want it to.
It’s coming whether we like it or not, and it’s going to be a bloodbath no matter what the final scenario is. Either the workers take the hit, or the companies do, and if the companies do, then the workers will take the hit anyway.
The workers are screwed no matter what.
smiletolerantly@awful.systems 13 hours ago
Counterpoint: LLMs are a dead-end for AGI. And outsourcing tasks to a “sometimes correct, but very often wrong” bot starts looking like a not-so-good idea once you actually need to pay for the compute.
Bronzebeard@lemmy.zip 9 hours ago
Uh… you’re misunderstanding what it’s doing. It’s not learning as we use the word.
It’s figuring out what words probabilistically appear near each other. Like when you use the suggested word at the top of your mobile keyboard. You can write “sentences”, but they often go off the rails.
They just get fed some keywords and spit back out words they have observed to be near those.
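The mechanism being described here can be sketched in a few lines. This is a toy bigram model, not anything resembling a real LLM’s internals; the corpus and function names are made up for illustration. It just counts which words follow which, then “suggests” the most frequent follower, like the word strip on a mobile keyboard.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words have been observed to follow it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def suggest_next(followers, word):
    """Return the most frequently observed follower of `word`, if any."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

# Tiny illustrative corpus: "cat" follows "the" twice, "mat" once.
corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(suggest_next(model, "the"))  # prints "cat"
```

Real models use vastly larger contexts and learned probabilities rather than raw counts, but the core activity is the same kind of “what tends to come next” lookup, with no analysis of what the words mean.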
At no point are LLMs capable of making any analysis or decisions. They cannot perform any thought-based work. At best they can copy-paste shit they’ve seen someone else figure out on the internet. There are very few jobs out there that are entirely devoid of any decision making that these could actually be expected to replace (and have the company continue to function).