Every sci fi work : oh no, the technology is bad
Reality : the assholes using the tool are making it do bad things
Submitted 3 days ago by no_nothing@lemmy.world to memes@sopuli.xyz
https://lemmy.world/pictrs/image/6b646363-d8f8-4a74-9230-d1de4ec5ce36.jpeg
There are always assholes, and they are always making the tool do bad things, so the distinction isn’t even there. If you don’t plan for assholes using the tool to do bad things, you’re making bad technology
you’re right, ban fire and knives because they allow arson and murder.
These clowns would be the ones setting up skynet…
You really want to give any power over yourself and others to the bias amplification machine?
Or super intelligent yogurt
I do think that the best government would be one run by AI.
I do not think the AIs we currently have could run a government, though.
It wouldn’t have the mandate of the people. It wouldn’t last very long. I think sortition or parliament could work. Long as it’s democratic. It’s still a huge leap from how the US does things
My brother in Christ, capitalist markets and the corporations that run things already satisfy the definition of superintelligence
It’s weird to hold the belief that AI won’t oppress us while showing it that it’s fine to oppress animals as long as you’re smarter
Most CEO jobs and a majority of upper management could be automated, but those will be the last jobs to go.
Literally the plot of every sci-fi show with an “overseer”.
Absolutely.
Every time I hear someone question the safety of self-driving cars, I know they’ve never been to Philadelphia or NJ.
I mean, the US really isn’t a good example for road safety. Even Germany has better drivers, and we like to drive 140–200 km/h. It’s a matter of good education, standards and regulations (as always).
In the end, self-driving public transport should primarily be the future of mobility, imho. Self-driving cars… as long as there’s always a steering wheel for unexpected circumstances or for moving around backyards and stuff, it’ll probably be fine. Just don’t throw technical solutions at cultural problems and expect them to be fixed.
I didn’t want to believe it either, but it seems to be factually correct, as per this wonderful Wikipedia list.
I mean, TBF, they don’t trust the average person in New Jersey to handle a petrol pump—so much so that it’s legally prohibited. Given that, I’m not at all surprised they shouldn’t be trusted with the vehicle itself.
was the ai trained on reddit commenters? just asking.
Loved that show.
AI judges make a lot of sense: that way everyone is treated equally, because every judge thinks literally the same way. No corrupt judges, no change in political bias between judges, no more lenient or strict judges who arbitrarily decide your fate. How you decide which AI model is your judge is a whole new can of worms, but it definitely has lots of upsides.
Perhaps when we have real AGI, but I wouldn’t want an LLM to decide someone’s fate.
You have been found guilty of jaywalking. I hereby sentence you to 90 days of community service as unicorn titty sprinkles from Valhalla. May Chester have mercy on your handkerchief.
And how will this be done? A proper legal system needs impartiality, yet an AI’s judgments still vary as much as, or more than, a human judge’s. Not to mention the way it’s trained, the training data itself, whether it gets updates or not, how much it deliberates, how it handles juries and parties, etc.
If, in theory, we have a perfect AI judge model, how should it be hosted? Self host it? Would be pretty expensive if it needs to be able to keep up. It would have to be re-trained to recognise new legislation or understand removals or amendments of laws. The security of it? If it needs to be swapped out often, it would need internet access to update itself, but that produces risk for cyber attacks, so maybe done through an intranet instead?
This requires a lot of funding, infrastructural changes and tons of maintenance even in the best-case scenario where the model is perfect and already developed. It would take millions, or more realistically billions, in funding to produce anything remotely of quality.
All I see are downsides.
I mean this with the biggest offence possible: AI judges make no sense, at least with the current way of doing AI (LLMs). It’s been known for years that they amplify any bias in their training data. You are black? Higher chance of going to prison and a longer sentence. Getting divorced and male? Your ass is NOT getting custody. Hell, even without that, the LLM might just hallucinate some crime that isn’t in the case data and hand you a lifetime prison sentence. And if you somehow manage to avoid all that, what’s stopping somebody from just shadow-prompting it and getting the judgement they want? It would also be an easy target for corruption: the government wants their political rivals gone? Tweak the model so it’s just that bit harsher, or just a bit more aligned with some other interpretation of the law.
Who would even choose the training data? The judges? Why would they? It would be better for them to sabotage it and keep their jobs. Some government agency then? You don’t want to do that either, or you’re gonna find out separation of powers exists for a reason.
Bad idea.
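The bias-amplification point above can be shown with a toy sketch: a naive “judge model” that learns nothing but the majority verdict per group from skewed historical data reproduces that skew exactly. Every name and number here is fabricated purely for illustration; real models fail in subtler ways, but the mechanism is the same.

```python
# Toy illustration of bias amplification: a "judge" that just learns
# base rates from biased historical data reproduces the bias verbatim.
# All data below is fabricated for illustration only.
from collections import Counter

# Hypothetical historical verdicts: (group, verdict). The record is skewed:
# group B was convicted far more often for identical facts.
history = [("A", "acquit")] * 70 + [("A", "convict")] * 30 \
        + [("B", "acquit")] * 30 + [("B", "convict")] * 70

def naive_judge(group, data):
    """Predict the majority verdict seen for this group in the training data."""
    counts = Counter(v for g, v in data if g == group)
    return counts.most_common(1)[0][0]

print(naive_judge("A", history))  # acquit
print(naive_judge("B", history))  # convict -- same facts, different verdict
```

Anything trained on that record inherits the skew; no amount of “neutral” architecture fixes a biased corpus.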
if it’s designed right, it would be great. otherwise it would suck a ton
Lol @ everyone imagining that they (baselines) would be able to discern the motives and actions of a superintelligent anything.
en.m.wikipedia.org/wiki/The_Evitable_Conflict
If nearly perfect computers controlled government.
Agreed
Zacryon@feddit.org 3 days ago
If you want AI agents that benefit humanity, you need biased training data and/or a bias-inducing training process, e.g. an objective like “Improve humanity in an ethical manner” (don’t pin me down on that, just a simple example).
For example, even choosing a real environment over a tailored simulated one is already a bias in the training data, even though you want to deploy the AI agent in a real setting. That’s what you want: bias can be beneficial. The same goes for ethical reasoning: an AI agent won’t know what ethics are, or which are commonly preferred, if you don’t introduce such a bias.
jerkface@lemmy.ca 3 days ago
show your work
RememberTheApollo_@lemmy.world 3 days ago
Even how it trains itself can be biased based on what its instructions are.
Delphia@lemmy.world 2 days ago
The best use case I can think of for “A.I” is an absolute PRIVACY NIGHTMARE (so set that aside for a moment), but I think it’s the absolute best example.
Traffic and traffic lights. Give every set of lights cameras that track licence plates and cross-reference home addresses and travel times for regular trips, for literally every vehicle on the road. Add variable speed limit signs on major roads and an unbiased “A.I” whose one goal is to make everyone’s regular trips take as short a time as possible by controlling everything.
If you can make 1,000,000 cars make their trips 5% more efficiently, that’s like 50,000 cars’ worth of emissions. Not to mention real-world time savings for people.
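The arithmetic in that claim checks out as a back-of-envelope estimate; here it is spelled out. The numbers are the commenter’s hypotheticals, not measured data:

```python
# Back-of-envelope check: a 5% efficiency gain spread across 1,000,000
# regular trips is equivalent, in aggregate emissions, to removing
# 1,000,000 * 0.05 = 50,000 cars from the road. Illustrative numbers only.
cars = 1_000_000
efficiency_gain = 0.05  # trips become 5% shorter/smoother

equivalent_cars_removed = int(cars * efficiency_gain)
print(equivalent_cars_removed)  # 50000
```

Of course this assumes emissions scale roughly linearly with trip time, which is only a first-order approximation (idling at lights actually makes the real saving larger).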