
Autonomous Cars: Not Yet Ready for Prime Time

agreed that programming makes a difference... better programming is always better.
better sensors are always better..

but... even with current programs and current sensors... an AI with control over ANY vehicle...
will outperform any human..

you take the best human driver... on that person's daily drives... over a year.
they will make thousands of mistakes. accidents...
they will get distracted... tired... emotional... drunk...

computer AI drivers.. are way safer...
Humans are just scared to let go of control.
i am too.. but i have to see the facts.. which are safer?
 
.. but i have to see the facts..
I'm not taking a side here, I'm just pointing out that facts are facts, but they don't really prove anything. The fact is that the car struck the pedestrian. The fact is the pedestrian wasn't in a crosswalk. The fact is that the pedestrian wasn't looking at oncoming traffic. The fact is that the human in the car wasn't looking at the road. The fact is that the car didn't see or react to the pedestrian. None of that says that computers or humans are safer drivers. They are just isolated facts. I'm sure there are people looking at all the Uber car sensors and recorded data (and if every piece of data from every sensor isn't recorded at this point in the life of autonomous cars, that's a problem I think we can all agree on) as well as any other available information like witness testimony, surveillance video, what was everyone up to just prior to the incident, etc. to figure out what happened.
 
http://www.bbc.co.uk/news/technology-43523286

A human driver wouldn't have avoided this accident, but I'd have expected a car equipped with Lidar to foresee the oncoming collision. Interestingly the sensor firm have denied any fault with their equipment, blaming the computer software for the accident.
 
Interestingly the sensor firm have denied any fault with their equipment, blaming the computer software for the accident.
I think the courts & lawyers will have far more impact on the adoption of self-driving cars than technology. A lawyer for the eyes can't blame the brain. All these parties are going to point the finger at some other party. Plus, if the eyes (sensor) turns out to be the problem, you may find yourself having to ground the fleet of vehicles using said sensor.
 
A lawyer for the eyes can't blame the brain

I think they would try, and probably have a good case. Something like - well we gave you all the data, that's our responsibility. If you fail to do the right thing with it, then that's your problem/fault.
 
I think they would try, and probably have a good case. Something like - well we gave you all the data, that's our responsibility. If you fail to do the right thing with it, then that's your problem/fault.
What I was getting at is that if a human was driving, it is one entity, without the ability to have one organ sue or blame another organ for failure.

Simplistically, if a human sees the person in the road they will remove their foot from the accelerator and stomp on the brakes, hopefully honk the horn to warn the pedestrian, and make an evasive move with the steering wheel. In a human we would call that "reaction."

In a computer you have multiple sensors reading what's around them. Let's say radar and camera to keep it simple. That's two systems. Each one of those sensors communicates with the brain, so that's another system. The brain processes these inputs, recognizes the danger, decides how to respond, and sends those signals out to the various control systems. That's a lot of different subsystems in the brain all working together, and probably not all from a single entity. Let's assume the computer makes the same reaction as the human (release the accelerator, stomp on the brakes, honk the horn, and swerve); it would be sending out 4 signals:
  1. Release the accelerator, so probably some solenoids or an actuator
  2. Apply the brakes, including how hard, again actuators
  3. Honk the horn, probably a digital signal
  4. Swerve, so here the "brain" would need to determine if it should swerve, how hard, and in what direction.
Now, the brain has sent all of those signals, and hopefully the actuators responded as the brain expected (e.g. 90% brakes expected, 80% actual, or a 20-degree turn to the right expected and an 18-degree turn to the right actual). Regardless, it all goes back to the input sensors to determine a) did the vehicle react as expected and b) what has changed in the scenario. Things like honking the horn made the person step back out of the adjusted path of the vehicle, or the tires slipped on sand or other road debris that altered the expected / desired outcome of the reactions taken to date.
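
To make that pipeline concrete, here is a rough sketch of the sense / decide / act / verify loop in Python. Everything in it is invented for illustration (the class, the field names, the 90% brake and 20-degree figures); it is not any vendor's real vehicle API, just the shape of the loop described above.

from dataclasses import dataclass

@dataclass
class Perception:
    # Fused view of the road from the radar and camera subsystems
    obstacle_in_path: bool
    distance_m: float
    closing_speed_ms: float

def decide(p: Perception) -> dict:
    """The 'brain': turn fused sensor data into commands for four separate actuator systems."""
    if not p.obstacle_in_path:
        return {"throttle": None, "brake": 0.0, "horn": False, "steer_deg": 0.0}  # no change
    return {
        "throttle": 0.0,    # 1. release the accelerator (throttle actuator)
        "brake": 0.9,       # 2. apply ~90% braking (brake actuators)
        "horn": True,       # 3. honk the horn (simple digital signal)
        "steer_deg": 20.0,  # 4. swerve right, if the brain judges there is room
    }

def verify(commanded: dict, measured: dict) -> bool:
    """Feedback step: did the actuators deliver roughly what was commanded?
    e.g. 90% brake commanded vs 80% measured, 20 degrees commanded vs 18 measured."""
    return (abs(commanded["brake"] - measured["brake"]) < 0.15
            and abs(commanded["steer_deg"] - measured["steer_deg"]) < 3.0)

# Each piece here (sensors, fusion, planner, every actuator) could come from a
# different supplier, which is exactly where the finger-pointing below comes from.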

That's a lot of potential finger-pointing for lawyers, whereas the human is a single entity.

That said, I think these autonomous vehicles should have a "reflex" (think how you recoil when you touch something hot) to jolt the human monitor when it detects an emergency.


Lastly, I think it is pretty clear here that the car either didn't "see" the pedestrian or the brain didn't perceive the danger, as there was no reaction. Unless it was the communication to the actuators that failed. :p
 
Well it's all pure speculation until we see all the data logs and put the pieces together. But we can always try and come to a conclusion :)
The problem of who's at fault is always going to be tricky where you have different suppliers in charge of the various system components. I still do think, however (assuming that the computer received all the data from the sensors), that the fault lies in the controlling computer software. Also assuming that no hardware fault occurred, which is always possible. The fact that the car took absolutely no evasive action seems to indicate that there was a catastrophic loss of data to the computer, or the software just didn't react to the emergency situation.
But again, without all the data it's impossible to come to a conclusion. All we have is speculation.
 
http://www.bbc.co.uk/news/technology-43523286

A human driver wouldn't have avoided this accident, but I'd have expected a car equipped with Lidar to foresee the oncoming collision. Interestingly the sensor firm have denied any fault with their equipment, blaming the computer software for the accident.
Well as long as the field of view was adequate to give sufficient warning and nothing was faulty then they probably have a point - in the narrow sense that as long as the sensor returned the signal you'd expect then anything else that did or didn't happen is software. And by the same token if the sensor was faulty and the car didn't detect this and continued to drive then that is also software (though whose software in that case is less obvious to an outsider).

All these parties are going to point the finger at some other party. Plus, if the eyes (sensor) turns out to be the problem you may find yourself having to ground the fleet of vehicles using said sensor.
True, but if the problem is in the software then you also have to ground the fleet until that is rectified. The QC regime for software patches is going to have to be pretty extensive and rigorous, given the ability for a fix in one part of a complex system to have unexpected side-effects (and I certainly wouldn't allow Uber, of all companies, to self-certify).

An interesting article on Uber's tests here: https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html. Particularly the implication that Uber's technology required much more intervention from the safety drivers than its competitors. Now it's not obvious that the criteria for logging an intervention are the same in all cases (with tests being conducted in a lax regulatory environment I'd be amazed if they were comparable in fact), and I'd expect most of these interventions would be minor compared to the event that started this thread, but it still seems that by any reading of them Uber's systems are some way behind the competition in that regard.
 
playing devil's advocate ....

let's imagine a plastic bag was suddenly blown across the path of the AI-controlled car.
the bag is too fast and too close, only a few feet in front of the car. the car cannot stop fast enough to avoid a collision.
it can swerve to attempt to dodge it.. but the bag is still too close. it will collide.
but the swerve will make the car lose control and slide in an unpredictable manner. it might hit other things too, causing injury to people inside and other people or things outside.
so it is too late to save the bag.. it will hit it.
so it has to not make a change in movement and collide with the plastic bag.

now.. let's imagine a ... Dog... or a Deer.
in the same scenario.
i would guess the same outcome.

now... what about a person? in same scenario.
what can the AI do?


also.. in the above scenarios...
i think a human driver.. would not even notice anything.. till the impact was already done.
and a few seconds have passed. "oh shit moment".
 
Oh yes, there are good reasons why you don't want the car to suddenly brake or swerve because of a plastic bag or a shadow (not that LIDAR would see shadows). That would also pose a risk to the passenger and other road users - though you'd hope it would know where other road users were and factor that in to any action (unlike a human who might well react to the thing in front of them without knowing whether the particular reaction was safe).

But in this case there didn't appear to be other traffic nearby, and I'm pretty sure nobody was tailgating it so closely, on a road that empty, that braking was not an option (and even if there were, people wrapped up in vehicles are protected, and pedestrians are not). And at that speed even slowing by 5mph before a collision makes a significant difference to the chances of the pedestrian surviving. So once an object is identified as possibly human then even if you can't swerve, at least braking is always the correct response.
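
For a rough sense of scale (the figures here are assumptions, not data from the incident): at something like 0.8 g of hard braking, even half a second of braking before impact sheds several mph.

# Back-of-envelope only: assumes constant deceleration of roughly 0.8 g,
# a typical figure for hard braking on dry asphalt, not anything measured in this case.
g = 9.81                 # m/s^2
decel = 0.8 * g          # assumed emergency deceleration, m/s^2
brake_time_s = 0.5       # braking for just half a second before impact

speed_shed_mph = decel * brake_time_s * 2.237
print(f"speed shed in {brake_time_s}s of braking: {speed_shed_mph:.1f} mph")  # about 8.8 mph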

Plus from the manufacturer's point of view, whatever arguments you might give for the decision logic, if it hits someone and it comes out that it was programmed not to brake in those circumstances you are likely to be facing a world of pain. Whereas if you can stand up and say "the vehicle hit the brakes faster than a human could have reacted" then you are going to be OK. So if there's any possibility that the object might be human, you have to attempt to avoid or mitigate any collision.
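
Put as a crude sketch, the argument amounts to something like the logic below. The object classes, the confidence threshold, and the action names are all made up for illustration; no production planner is this simple.

def emergency_response(object_class: str, confidence: float,
                       can_swerve_safely: bool) -> list:
    """Pick actions for an object detected in the vehicle's path."""
    # Even a low-confidence "might be a person" detection should trigger braking:
    # on an otherwise empty road, shedding speed before impact is never the wrong call.
    if object_class in ("pedestrian", "cyclist", "unknown") and confidence > 0.2:
        actions = ["brake_max"]
        if can_swerve_safely:
            actions.append("swerve")
        return actions
    if object_class in ("plastic_bag", "shadow"):
        return ["continue"]  # don't slam the brakes for debris and endanger following traffic
    return ["brake_moderate"]

print(emergency_response("unknown", confidence=0.3, can_swerve_safely=False))
# -> ['brake_max']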
 
i see your point.. it can't hurt to brake.. be on the safe side... even slowing a little is a positive move.

but 99 times out of 100.... A human in my scenario will not outperform an AI.

no AI is perfect. but still better.
 
Well autopilot isn't claimed to be an autonomous driving system (though given the number of idiots who cannot understand that maybe it should be disabled until it is?).

As for the cretin who moved to the passenger seat, what struck me about that when it was reported here the other week was how utterly unrepentant he was. An appropriate sentence for him would be a lifetime ban, i.e. make him wait until there are genuinely autonomous vehicles which don't require him to control them. Shouldn't be a problem for him: with what it costs to buy and insure a Tesla S he could afford an awful lot of taxis and train journeys...
 

Police said it's not immediately known whether the Tesla's autopilot driving system was in use when it rear-ended a truck apparently without braking before impact at approximately 60 mph.

red light... not highway....
stopped BIG red firetruck
rear-ended at 60mph..
no brake lights came on...

i think this is just another... driver error.. in any kind of car.. person NOT paying attention.
rear-end accidents.. happen every day.. thousands in usa.
in all kinds of cars..
1 thing in common... human error.
 
I hope it never works, and that there isn't much loss of life before it is ditched.

Cars and drivers are meant to respect each other's safety and have consideration for what the other is trying to do. If this also leads to good etiquette, and it still often does, then all the better.

Sure, this isn't happening now for many on UK roads, and no one has really stepped in to improve driver attitudes, not even the police. Still, if everyone drove like an AH, no one would get anywhere. I thought things would start to swing back in a positive way before now.

Anyway. . my 2 points in relation to autonomous cars being :

A) Competitiveness .

I don't compete aggressively on the road, but many do, and all AI cars would have to talk to each other (as well as follow rules and guidelines) AND a central command (type thing!) to merge safely. That's even if all cars are autonomous, and they obviously wouldn't be for decades.
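
Just to picture what that coordination might look like, here is an entirely hypothetical merge-negotiation exchange. Nothing about the message format or the rule is standardized; it's only meant to show how much has to be agreed between vehicles (or with a central coordinator) before a merge is "safe".

from dataclasses import dataclass

@dataclass
class MergeRequest:
    vehicle_id: str
    lane_from: int
    lane_to: int
    speed_ms: float
    gap_needed_m: float   # how much room the merging car thinks it needs

def grant_merge(request: MergeRequest, gap_available_m: float) -> bool:
    # The car behind (or a central coordinator) answers yes or no.
    # An aggressive human driver, of course, never asks.
    return gap_available_m >= request.gap_needed_m

print(grant_merge(MergeRequest("car-42", 2, 1, 27.0, 35.0), gap_available_m=40.0))  # True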

The letter that comes after A)

Aviation /Avionics.

I had an interest in reading air accident investigations for 40 years, mostly from a human behaviour point of view as I could glide over some of the technical stuff.

Highly professional industry, very authoritative, conscientious, diligent pilots on the whole, and highly engineered and tested multi-redundancy systems.

I am astonished that fatal mistakes and flaws can still surface today. 30 years ago I thought they would have learned all the lessons and no more hideous errors would occur.

That Air France loss from Brazil in an Airbus was astonishing for so many reasons.

As was stated, a machine or computer is only as good and effective as the information and decision preferences programmed into it, and I don't think AI will reach human level anytime soon.

You can draw up long lists of why aviation and autonomous cars are and are not alike, but there are lessons still to be learned.

This is a terribly composed post, sorry, and I didn't want to get involved, but just saying my money is on it never happening successfully. People like Elon Musk should pull out and enjoy the money before their names become mud.
 
I don't know. Personally I think that removing human inattention and ego from driving could make it a lot safer. The tricky bit will be the transition: the AH drivers will try to take advantage of the fact that machines will defer to avoid accidents (or try to, since some people will create situations where full avoidance is impossible), and there will be an almighty fuss when someone proposes that the best way to improve safety is to restrict or remove human drivers (which I suspect will be a true statement much sooner than people suspect, though not as soon as the proponents claim).
 
I still think lawyers will kill it even if the technology matures enough to coexist with human drivers (a necessary step in the transition).
 
wow.. it is interesting, the bias and fear that AI causes.
unreasonable fear.. in this situation of cars with AI.

AI .. in cars... will always be better on the whole.. vs Humans.

even with AH drivers.. being aggressive on the road...
AI-cars will react better and faster.. to let the AH-car drive away... no fuss / no muss.
Humans will react un-favorably & un-safely against the AH-car.

this morning.. a car cut me off...
and i followed him closely .. tailed him for a while to mess with him...
this was a NO-NO. and could have escalated.
but an AI, would have just ignored it..
 
Unless the sensor didn't detect the car cutting it off; or it did, but the programming didn't recognize the threat from the sensor data and didn't tell the steering, braking, and accelerator actuators to react; or it did tell them to react but the calculations weren't correct; or the actuators didn't do what the programming expected them to do; or the road conditions (e.g. sand, ice, heat, new asphalt, dirt road, etc.) didn't behave the way the computer expected them to; or the other car made an unanticipated (unprogrammed) adjustment. And once the automated car reacts, all these sensors and systems need to reprocess all the information again once the corrective actions are put in place.

Getting back to my "lawyers will be the problem" statement, unless one entity is responsible for all of those systems, the lawyers will sue everyone else. It will be a big legal finger-pointing exercise. Look at the case that started this thread: the company that made the safety systems for Volvo blamed Uber for turning them off.
 