I never quite understood why so many people seem to be against autonomous vehicles.
People aren’t against autonomous vehicles, but against them being let loose on public roads with zero checks or transparency. We basically learn what they are and aren’t capable of one crash at a time, when all of that should have been figured out years ago in the lab.
The fact that they can put a safety driver in them to absorb any blame is another scandal.
Statistically they’re still less prone to accidents than human drivers.
That’s only because they don’t drive in the same conditions as humans. Let them drive in fog and suddenly they can’t even see clearly visible emergency vehicles.
None of this would be a problem if those companies were transparent about what those vehicles are capable of and how they react in unusual situations, all of which they should have tested a million times over in simulation already.
Let them drive in fog and suddenly they can’t even see clearly visible emergency vehicles.
That article you linked isn’t about a self-driving car. It’s about Tesla “Autopilot”, which constantly checks whether a human is actively holding the steering wheel and depends on the human watching the road ahead for hazards so they can take over instantly. If the human sees flashing lights, they are supposed to do exactly that.
The fully autonomous cars that don’t need a human behind the wheel have much better sensors which can see through fog.
That article you linked isn’t about a self-driving car.
Just because Tesla is worse than others doesn’t make it not self-driving. The “wiggle the steering wheel” feature is little more than a way to shift blame to the driver instead of the crappy self-driving software.
so they can take over instantly.
Humans fundamentally can’t do that. If you sit a human in a self-driving car doing nothing for hours, they won’t be able to react in a split second when it’s needed. Sharing driving in that way does not work.
The fully autonomous cars that don’t need a human behind the wheel have much better sensors which can see through fog.
Is anybody actively testing them in bad weather conditions? Or are we just blindly trusting claims from the manufacturers yet again?
Just because Tesla is worse than others doesn’t make it not self-driving.
The fact that Tesla requires a human driver to take over constantly makes it not self-driving.
so they can take over instantly.
Humans fundamentally can’t do that. If you sit a human in a self-driving car doing nothing for hours, they won’t be able to react in a split second when it’s needed.
The human isn’t supposed to be “doing nothing”. The human is supposed to be driving the car. Autopilot simply keeps the car in the correct lane for you and adjusts the speed to match the car ahead.
Tesla’s system won’t even stop at an intersection where you need to give way (for example, at a stop sign or a red traffic light). There’s plenty the human needs to be doing other than turning the steering wheel. If there is a vehicle stopped in the middle of the road, Tesla’s system will drive straight into it at full speed without even touching the brakes. That’s not something that “might happen”; it’s something that will happen, and has happened, any time a stationary vehicle is parked on the road. It can detect the car ahead of you slowing down. It cannot detect a stopped vehicle.
They’ve promised to ship a more capable system “soon” for over a decade. I don’t see any evidence that it’s actually close to shipping though. The autonomous systems by other manufacturers are significantly more advanced. They shouldn’t be compared to Tesla at all.
Is anybody actively testing them in bad weather conditions?
Yes. Tens of millions of miles of testing, and they pay especially close attention to any situations where the sensors could potentially fail. Waymo says their biggest challenge is mud (splashed up from other cars) covering the sensors. But the cars are able to detect this, and the mud can be wiped off. It’s a solvable problem.
Unlike Tesla, most of the other manufacturers consider this a research project and are focusing all of their efforts on making the technology better/safer/etc. They’re not making empty promises and they’re being cautious.
On top of the millions of miles of actual testing, they also record all the sensor data for those miles and use it to run updated versions of the algorithm in exactly the same scenario. So the millions of miles have, in fact, been driven thousands and thousands of times over for each iteration of their software.
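The replay workflow described above is basically regression testing against recorded logs. A rough sketch of the idea, with all names made up for illustration (no company’s real API looks like this):

```python
# Sketch of log-replay regression testing: run two planner versions over the
# same recorded sensor frames and flag every frame where they disagree.
# Frames and planners are toy stand-ins, purely illustrative.

def regression_check(recorded_frames, old_planner, new_planner):
    """Return (frame_index, old_action, new_action) for every divergence,
    so each behavioral change can be reviewed before shipping."""
    diffs = []
    for i, frame in enumerate(recorded_frames):
        old_action = old_planner(frame)
        new_action = new_planner(frame)
        if old_action != new_action:
            diffs.append((i, old_action, new_action))
    return diffs

# Toy usage: frames are dicts; planners map a frame to an action string.
frames = [{"obstacle": False}, {"obstacle": True}, {"obstacle": False}]
old = lambda f: "continue"                                 # v1 ignores obstacles
new = lambda f: "brake" if f["obstacle"] else "continue"   # v2 reacts to them
print(regression_check(frames, old, new))  # → [(1, 'continue', 'brake')]
```

The point is that every recorded mile becomes a reusable test case, which is why the same miles effectively get "driven" again for each software iteration.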
With Tesla the complaint is that the statistics are almost all highway miles, so they don’t represent the most challenging conditions: city driving. Cruise, meanwhile, drives exclusively in a city, and yet that isn’t good enough either. The AV-sceptics are really hard to please…
You’ll always be able to find individual incidents where these systems fail. They’re never going to be foolproof, and the more of them that are out there, the more news like this you’re going to see. If we reported on human-caused crashes with the same enthusiasm, that would be all the news you’d hear from then on, and letting humans drive would seem like the most scandalous thing imaginable.
I do not care about the situations they work in; I care about the situations they will fail in. That’s what matters, and that’s what no company will tell you. As I said, we learn about the capabilities of self-driving cars one crash at a time, and that’s just unacceptable when you could have figured all of that out years ago in simulation.
So far, none of the self-driving incidents I have seen were some kind of unforeseen freak situation; it was always something rare but standard: fog, a pedestrian crossing the road, a road blocked by a previous crash, etc.
Humans get into accidents all the time. Is that not unacceptable for you?
I feel like people apply standards to self-driving cars that they don’t apply to human-driven ones. It’s unreasonable to expect a self-driving system never to fail. It’s unreasonable to imagine you can just let it practice in simulation until it’s perfect. This is what happens when you narrowly focus on one aspect of self-driving cars (individual accidents): you miss the big picture.
I feel like people apply standards to self-driving cars that they don’t apply to human-driven ones.
Human drivers need to pass a driving test; self-driving cars do not. Human drivers also have a baseline of common sense that self-driving cars lack, so they really would need more testing than humans, not less.
It’s unreasonable to expect a self-driving system never to fail.
I don’t expect them never to fail; I just want to know when they fail and how badly.
It’s unreasonable to imagine you can just let it practice in simulation until it’s perfect.
What’s unreasonable about that?
individual accidents
They are only “individual” because there aren’t very many self-driving cars and because not every failure ends up deadly.
Tesla on FSD could easily pass the driving test that’s required for humans. That’s a nonsensical standard. Most people with a fresh license are horribly incompetent drivers.
I don’t expect them never to fail; I just want to know when they fail and how badly.
“Over 6.1 million miles (21 months of driving) in Arizona, Waymo’s vehicles were involved in 47 collisions and near-misses, none of which resulted in injuries”
How many human drivers have done millions of miles of driving before they were allowed to drive unsupervised? Your assertion that these systems are untested is just wrong.
“These crashes included rear-enders, vehicle swipes, and even one incident when a Waymo vehicle was T-boned at an intersection by another car at nearly 40 mph. The company said that no one was seriously injured and “nearly all” of the collisions were the fault of the other driver.”
According to insurance companies, human driven cars have 1.24 injuries per million miles travelled. So, if Waymo was “as good as a typical human driver” then there would have been several injuries. They had zero serious injuries.
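A quick back-of-the-envelope check of that comparison, using only the two numbers quoted above (the 1.24 figure and Waymo’s 6.1 million miles):

```python
# Expected injuries if Waymo's fleet had matched the quoted human baseline
# of 1.24 injuries per million miles over 6.1 million miles driven.
human_injury_rate = 1.24 / 1_000_000   # injuries per mile
waymo_miles = 6_100_000

expected_injuries = human_injury_rate * waymo_miles
print(round(expected_injuries, 1))  # → 7.6
```

Roughly 7 to 8 injuries would have been expected at the human rate, versus the zero serious injuries reported.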
The data (at least from reputable companies like Waymo) is absolutely available and in excruciating detail. Go look it up.
As a software developer, I can tell you that’s not how testing works. QA is always trying to come up with weird edge cases to test, but once it’s out in the wild with thousands (or more) of real-world users, there’s always going to be something nobody ever tried to test.
For example, there was a crash where an unmarked truck with exactly the same color as the sky was 90° sideways on the highway. This is just something you wouldn’t think of in lab conditions.
there’s always going to be something nobody ever tried to test.
That’s not what is happening. We don’t see weird edge cases; we see self-driving cars blocking emergency vehicles and driving through barriers.
For example, there was a crash where an unmarked truck with exactly the same color as the sky was 90° sideways on the highway.
The sky is blue and the truck was white. Testing the dynamic range of the camera system is absolutely something you do in a lab setting. And a thing blocking the road isn’t exactly unforeseen either.
I don’t expect self-driving cars to be perfect and handle everything, but I expect the manufacturers to be transparent about their abilities, and they aren’t. Furthermore, I expect a self-driving system to have a way to react to unforeseen situations; crashing in fog is not acceptable when the fact that there was fog was plainly obvious.
And a thing blocking the road isn’t exactly unforeseen either.
Tesla’s system intentionally assumes “a thing blocking the road” is a sensor error.
They have said that if they didn’t do that, about every hour or so you’d drive past a building and the car would slam on the brakes and stop in the middle of the road for no reason (and then, probably, a car would crash into you from behind).
The good sensors used by companies like Waymo don’t have that problem. They are very accurate.
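The trade-off described above (filtering out stationary returns to avoid phantom braking, at the cost of ignoring a genuinely stopped vehicle) can be sketched like this. This is a deliberately simplified illustration, not anyone’s actual control logic:

```python
# Minimal sketch: a planner that treats stationary returns as probable
# clutter (bridges, signs, buildings) will also discard a stopped car.

def should_brake(detection, ego_speed, trust_stationary):
    """Decide whether to brake for a detection.

    detection: dict with the object's absolute speed in m/s.
    trust_stationary: whether stationary returns are treated as real objects.
    """
    is_stationary = abs(detection["speed"]) < 0.5
    if is_stationary and not trust_stationary:
        return False  # discarded as presumed clutter, never braked for
    return ego_speed > detection["speed"]  # we are closing in on it

stopped_car = {"speed": 0.0}
print(should_brake(stopped_car, ego_speed=30.0, trust_stationary=False))  # False
print(should_brake(stopped_car, ego_speed=30.0, trust_stationary=True))   # True
```

With low-confidence sensors the filter is the lesser evil; with accurate ranging sensors you can afford to trust stationary returns, which is the difference being argued here.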
Tesla on FSD could easily pass the driving test that’s required for humans. That’s a nonsensical standard. Most people with a fresh license are horribly incompetent drivers.
So why don’t we check it? Right now we are blindly trusting the claims of companies.
What are these claims we’re blindly trusting, exactly? Do you have any direct quotes?
Have you used it? It’s not very good. It tries to run red lights, makes random swerves and inputs, and generally drives like someone on sedatives.
They’ve had to inject a ton of map data to try to make up for the horrendously low-resolution cameras, but “HD MaPs ArE a CrUtCh”, right?
No radar or lidar means the sun can blind it easily, and there’s a blind spot in front of the car where cameras cannot see.
Is what they’ve made impressive? Sure, but it’s nowhere near safe enough to be on public roads in customers’ cars. At all.
Or how about railroad crossings: Tesla can’t even tell the difference between a truck and a train. Trucks blipping in and out of existence, even changing direction, is totally normal for Tesla too.