Comment on Mark Rober shows why Tesla's camera-only self-driving system is dangerous
RickC137@lemmy.world 2 weeks ago
I am not a fan of Tesla/Elon, but are you sure that no human driver would fall for this?
ThePunnyMan@lemm.ee 2 weeks ago
Part of the problem is the question of who is at fault if an autonomous car crashes. If a human driver falls for this and crashes, it's their fault; they are responsible for their own damages and the damages caused by their negligence. We expect a human driver to be able to handle road hazards. If a self-driving car crashes, whose fault is it? Tesla's? They say their self-driving is a beta test, so drivers must remain attentive at all times. The human passenger's? Most people would expect a self-driving car to drive itself. If it crashes, I would expect the people who made the faulty software to be at fault, but they are doing everything they can to shift the blame off themselves. If a self-driving car crashes, they expect the owner to eat the cost.
RickC137@lemmy.world 2 weeks ago
As soon as we have hard data from real-world use showing that FSD is safer than the average human driver, it would be unethical not to solve the regulatory and legal issues and apply it on a larger scale to save human lives.
If a human driver causes a crash, the insurance pays. Why shouldn't it also pay when a computer causes the crash, if that computer drives more safely overall, even if only by, say, 10%?
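To make the aggregate argument concrete, here is a toy calculation of what a 10% lower crash rate would mean across a large fleet. All of the numbers are hypothetical, chosen only to illustrate the arithmetic, not drawn from any real data.

```python
# Toy calculation: fleet-level effect of a hypothetical 10% lower crash rate.
# All figures are made up for illustration only.
human_crash_rate = 2.0                    # crashes per million miles (hypothetical baseline)
fsd_crash_rate = human_crash_rate * 0.9   # "10% safer"
fleet_miles = 100_000                     # fleet mileage per year, in millions of miles (hypothetical)

human_crashes = human_crash_rate * fleet_miles
fsd_crashes = fsd_crash_rate * fleet_miles
print(f"Human-driven crashes per year: {human_crashes:,.0f}")                     # 200,000
print(f"FSD crashes per year:          {fsd_crashes:,.0f}")                       # 180,000
print(f"Crashes avoided per year:      {human_crashes - fsd_crashes:,.0f}")       # 20,000
```

Even a modest relative improvement translates into a large absolute number of avoided crashes once the mileage is large enough, which is the core of the ethical argument above.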
ThePunnyMan@lemm.ee 1 week ago
I agree that it would be unethical to ignore self-driving, since it has the potential to be far safer than a human driver. I just have a problem with companies overpromising what their software can do.
As for the insurance part, why should my insurance premium increase because of a software defect? If a manufacturing defect causes me to crash my car, the manufacturer is at fault, not me. You wouldn't be liable if the brakes gave out in a new car.
Also keep in mind that getting hard data from the real world means putting these vehicles on the road with other drivers. Deficiencies in the software mean potential crashes and deaths. It will be valuable data, but we can't forget that there are people behind it. Self-driving is going to shake things up and will probably be a net positive overall; I just think we should be mindful as we begin to embrace it.
undeffeined@lemmy.ml 2 weeks ago
The Road Runner thing seems a bit far-fetched, yeah. But there were also tests with heavy rain and fog, which the Tesla did not pass.
Ghostalmedia@lemmy.world 2 weeks ago
The Road Runner thing isn't far-fetched. Teslas have a track record of T-boning semi trucks in overcast conditions, where the sky matches the color of the truck's trailer.
RickC137@lemmy.world 2 weeks ago
It should be fine if the car reduces speed to account for the conditions, just like a human driver does.
Gr0mit@lemmy.world 2 weeks ago
And the Tesla doesn't; that's the problem. A human would slow down if they couldn't see, but the Tesla just barrels through blindly.
oplkill@lemmy.world 2 weeks ago
Isn't there a rule that if the weather is very bad and you can't see, you must stop driving immediately?
undeffeined@lemmy.ml 2 weeks ago
You mean a traffic rule? I can't speak for the US, but in Portugal I don't recall such a rule from when I learned to drive. In Finland, too, I haven't experienced that; traffic keeps going even in heavy blizzards.
ayyy@sh.itjust.works 2 weeks ago
All the other cars he tested stopped just fine.
TheSealStartedIt@feddit.org 2 weeks ago
That is a completely legitimate question. The fact that you're being downvoted says a lot about the current state of Lemmy. Don't get me wrong, I'm all for the Musk hate, but it looks like a nuanced discussion on topics involving Nazi-Elon is currently not possible.
jj4211@lemmy.world 2 weeks ago
Let's assume, for the sake of argument, that a human driver would fall for it.
Would that make it a good idea to potentially run over a kid just because a human would have done the same, when we have a decent option to do better than human senses?
RickC137@lemmy.world 2 weeks ago
What makes you assume that a vision-based system performs worse than the average human? Or that it can't be 20 times safer?
I think the main reason to go vision-only is the software complexity of merging mixed sensor data. Radar or lidar alone also have their limitations.
I wish it were a different company, or that Musk would sell Tesla, but I think they are the closest to reaching full autonomy. Let's see how it goes when FSD launches this year.
FurtiveFugitive@lemm.ee 2 weeks ago
FSD is launching this year??! Where have I heard that before?
jj4211@lemmy.world 2 weeks ago
Somehow other car companies are managing to merge data from multiple sources just fine. Tesla even used to do it, but stopped to shave a few dollars off their costs.
As for assuming there would be safety concerns: this video clearly demonstrates that adding lidar avoids three scenarios, at least two of them realistic. As I said, my standard is not "as good as a human driver" but the safest option demonstrated.
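For readers unfamiliar with what "merging data from multiple sources" involves, here is a minimal late-fusion sketch: a cross-check between a camera-based distance estimate and a lidar return before an emergency-braking decision. The data structures, thresholds, and numbers are hypothetical and only illustrate the idea; they do not reflect any manufacturer's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    distance_m: Optional[float]  # None means the sensor reports no obstacle
    confidence: float            # 0.0 .. 1.0

def should_brake(camera: Detection, lidar: Detection,
                 speed_mps: float, min_ttc_s: float = 2.0) -> bool:
    """Brake if either sensor sees an obstacle inside the time-to-collision margin.

    Taking the more pessimistic of the two readings is what makes the fused
    system robust to a single sensor being fooled (a painted wall that looks
    like open road to a camera, or fog that blinds it entirely).
    """
    distances = [d.distance_m for d in (camera, lidar)
                 if d.distance_m is not None and d.confidence > 0.3]
    if not distances:
        return False                          # neither sensor reports an obstacle
    nearest = min(distances)                  # trust the closer (worst-case) reading
    time_to_collision = nearest / max(speed_mps, 0.1)
    return time_to_collision < min_ttc_s

# Example: the camera is fooled by a painted backdrop, but lidar still measures 12 m ahead.
print(should_brake(Detection(None, 0.9), Detection(12.0, 0.95), speed_mps=18.0))  # True
```

The point of the cross-check is that the fused system only needs one sensor to see the obstacle, which is exactly why a lidar-equipped car can stop for a painted wall or in fog while a camera-only car may not.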
RickC137@lemmy.world 2 weeks ago
Which other system can drive autonomously in potentially any environment without relying on map data?
If merging data from different sensors increases complexity by a factor of 5, it's just not worth it.
Redex68@lemmy.world 2 weeks ago
The main problem in my mind with purely vision-based FSD is that it just isn't as smart as a real human. A real human can reason about what they see, detect inconsistencies that are too abstract for current ML algorithms, and act appropriately in never-before-seen circumstances. A real human wouldn't drive at full speed through very low-visibility areas; they can use context to reason about the situation. Current ML algorithms can't do any of that; they can't reason. As such, they are inherently incapable of using the same sensors (cameras/eyes) to the same effect. Lidar is extremely useful because it provides a more reliable picture of the surroundings than cameras alone can. I'm still not sure that even with lidar you can make a fully safe FSD car, but it will definitely help.
RickC137@lemmy.world 2 weeks ago
The assumption that ML lacks reasoning is outdated. While it doesn’t “think” like a human, it learns from more scenarios than any human ever could. A vision-based system can, in principle, surpass human performance, as it has in other domains (e.g., AlphaGo, GPT, computer vision in medical imaging).
The real question isn’t whether vision-based ML can replace humans—it’s when it will reach the level where it’s unequivocally safer.