I say leave the driving to us humans.
I presume you've never seen the standard of human driving in China? I have, and I'm all for self-driving cars.
I'm not taking a side here, I'm just pointing out that facts are facts, but they don't really prove anything. The fact is that the car struck the pedestrian. The fact is the pedestrian wasn't in a crosswalk. The fact is that the pedestrian wasn't looking at oncoming traffic. The fact is that the human in the car wasn't looking at the road. The fact is that the car didn't see or react to the pedestrian. None of that says that computers or humans are safer drivers. They are just isolated facts. I'm sure there are people looking at all the Uber car sensors and recorded data (and if every piece of data from every sensor isn't recorded at this point in the life of autonomous cars, that's a problem I think we can all agree on), as well as any other available information like witness testimony, surveillance video, and what everyone was up to just prior to the incident, to figure out what happened... but I have yet to see the facts.
I think the courts and lawyers will have far more impact on the adoption of self-driving cars than technology. A lawyer for the eyes can't blame the brain. All these parties are going to point the finger at some other party. Plus, if the eyes (sensor) turn out to be the problem, you may find yourself having to ground the fleet of vehicles using said sensor.
A lawyer for the eyes can't blame the brain.
What I was getting at is that if a human was driving, it is one entity, without the ability to have one organ sue or blame another organ for failure.

I think they would try, and probably have a good case. Something like: well, we gave you all the data, so our responsibility is met; if you fail to do the right thing with it, then that's your problem/fault.
Well, as long as the field of view was adequate to give sufficient warning and nothing was faulty, then they probably have a point - in the narrow sense that as long as the sensor returned the signal you'd expect, then anything else that did or didn't happen is software. And by the same token, if the sensor was faulty and the car didn't detect this and continued to drive, then that is also software (though whose software in that case is less obvious to an outsider).

http://www.bbc.co.uk/news/technology-43523286
A human driver wouldn't have avoided this accident, but I'd expect a car equipped with Lidar to foresee the oncoming collision. Interestingly, the sensor firm have denied any fault with their equipment, blaming the computer software for the accident.
True, but if the problem is in the software then you also have to ground the fleet until that is rectified. The QC regime for software patches is going to have to be pretty extensive and rigorous, given the ability of a fix in one part of a complex system to have unexpected side effects (and I certainly wouldn't allow Uber, of all companies, to self-certify).
Hmmm... missed that firetruck stopped at a red light.
http://www.foxnews.com/auto/2018/05...-slams-into-truck-in-dramatic-utah-wreck.html
Unless the sensor didn't detect the car cutting it off; or it did, but the programming didn't recognize the threat from the sensor and didn't tell the steering, braking, and accelerator actuators to react; and if it did tell them to react, the calculations were correct, and the actuators did what the programming expected them to do, and the road conditions (e.g. sand, ice, heat, new asphalt, dirt road, etc.) performed the way the computer expected them to, as well as the other car not making an unanticipated (unprogrammed) adjustment. Once the automated car reacts, all these sensors and systems need to also reprocess all the information once the corrective actions are put in place.

Wow... it is interesting, the bias and fear that AI causes.
Unreasonable fear, in this situation, of cars with AI.

AI in cars will always be better, on the whole, than humans. Even with AH drivers being aggressive on the road, AI cars will react better and faster to let the AH car drive away... no fuss, no muss. Humans will react unfavorably and unsafely against the AH car.

This morning a car cut me off, and I followed him closely... tailed him for a while to mess with him. This was a no-no, and could have escalated. But an AI would have just ignored it.