AI is a ridiculous concept that many misinterpret.

So if human drivers randomly mow down pedestrians, bikers and children, they are not intelligent?
Fascinating argument. Sounds like you are well on your way to proving humans are not intelligent, since the accident rate with human drivers is higher than the accident rate of Tesla autodrive.

Context matters: Tesla autodrive is basically a glorified line-follower program with object avoidance. As a programmer/automation guy I'm familiar with the concept and have written basic code for robots. In fact, my Tucson hybrid has a fair highway-only autodrive feature with auto braking/adaptive cruise control.
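To illustrate what I mean by a "line follower with object avoidance", here is a minimal sketch of that control loop. The sensor values, gain, and distances are all made-up toy numbers, not anything from an actual vehicle:

```python
def steering_command(line_offset, obstacle_distance, kp=0.5, safe_distance=10.0):
    """Return (steering, brake) for one control step.

    line_offset: lateral distance from lane center in meters (+ = right of center)
    obstacle_distance: range to nearest detected object ahead, in meters
    kp: proportional steering gain (hypothetical value)
    """
    # Proportional correction back toward the lane center.
    steering = -kp * line_offset
    # Brake whenever an object is closer than the safe following distance.
    brake = obstacle_distance < safe_distance
    return steering, brake

print(steering_command(0.4, 25.0))  # drifting right, road clear
print(steering_command(0.0, 5.0))   # centered in lane, object too close ahead
```

That really is all a basic line follower with object avoidance amounts to: steer back toward the line, stop for obstacles.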

However, where AI/autodrive fails spectacularly is in real-world predictive responses. For example, a deer jumps out of the ditch into the middle of our lane and we cannot brake in time. What is the correct response? I know to brake hard until the last moment, then turn towards the deer's tail, because they seldom change direction. Even if the deer could turn 180 degrees, I would probably be past it by the time it did. To my knowledge there is no autodrive or AI intelligent enough to make these kinds of predictive responses in the real world yet.
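The heuristic above can be sketched as a simple decision rule. The deceleration figure and the stopping-distance formula are assumptions for illustration, not any real ADAS policy:

```python
def deer_response(distance_m, speed_ms, max_decel=8.0, deer_came_from="left"):
    """Decide between braking to a stop and brake-then-steer.

    distance_m: range to the deer (m), speed_ms: current speed (m/s),
    max_decel: assumed hard-braking capability (m/s^2, hypothetical).
    """
    # Kinematic stopping distance: v^2 / (2a).
    stopping_distance = speed_ms ** 2 / (2 * max_decel)
    if stopping_distance <= distance_m:
        return "brake to a stop"
    # Can't stop in time: brake hard, then steer toward the tail,
    # i.e. the side the deer came from, since it rarely reverses direction.
    return f"brake hard, then steer {deer_came_from}"

print(deer_response(60.0, 20.0))  # 25 m needed to stop, 60 m available
print(deer_response(15.0, 20.0))  # can't stop in 15 m, swerve toward the tail
```

Even this toy version shows the problem: the "steer toward the tail" branch depends on a behavioral prediction about deer, which is exactly the kind of knowledge current systems lack.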
 
Context matters: Tesla autodrive is basically a glorified line-follower program with object avoidance.
Agreed! Which is why ability to do that is not a great measure of intelligence.
However, where AI/autodrive fails spectacularly is in real-world predictive responses. For example, a deer jumps out of the ditch into the middle of our lane and we cannot brake in time. What is the correct response? I know to brake hard until the last moment, then turn towards the deer's tail, because they seldom change direction. Even if the deer could turn 180 degrees, I would probably be past it by the time it did. To my knowledge there is no autodrive or AI intelligent enough to make these kinds of predictive responses in the real world yet.
Also agreed. However, people fail just as spectacularly at that particular task. The goal for automated driving programs is, in general, to be better than people at the same task.
 
I agree with Billvon regarding the standard for cars. If it does a better job than humans, or if a system results in a better outcome than no system at all, what's the issue?

If a deer jumps across the road with no system you will probably hit the deer. My lowly Corolla is programmed to stop if a child steps out from behind a car into traffic. It also is programmed to stop if the car in front suddenly stops.

The spec is that in such a situation it should reduce the impact speed by 35 mph. If you are going less than 35 mph, there is likely to be no impact. If you are going faster than that, the impact speed is reduced by that amount, which is only positive regarding bodily injury and auto damage.
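The arithmetic behind that spec is simple enough to write down, assuming the quoted 35 mph reduction figure is accurate:

```python
def impact_speed(travel_mph, reduction_mph=35):
    """Speed remaining at impact after automatic emergency braking,
    given the claimed fixed speed reduction (35 mph per the spec above)."""
    return max(0, travel_mph - reduction_mph)

print(impact_speed(30))  # under 35 mph: no impact at all
print(impact_speed(50))  # impact still happens, but at only 15 mph
```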

It may not engage 100% of the time. When it does, it reacts much faster than a human can. It only makes things safer than not having such a system.

I know a guy with the same car; he was in traffic with his foot on the gas when the car in front of him suddenly stopped. His car automatically stopped so quickly that he didn't even have time to take his foot off the gas. There was no impact and no damage.

Tesla can do a lot more than that. I assume that it isn't "perfect" yet, but so what? What is the counter-argument? It's not smart enough? OK. No one says that any of this is a measure of intelligence.
 
Also agreed. However, people fail just as spectacularly at that particular task. The goal for automated driving programs is, in general, to be better than people at the same task.

I think we're on the same page, and my issue seems to relate more to intelligence and capability versus responsibility.

For example, if I hit someone, we know who to blame, but now many using autodrive are claiming they're not at fault. At which point we seem to be moving towards the no-fault dilemma. It's well enough for an insurance claim until we have to explain to a parent why that Tesla mowed down little Johnny in cold blood. Of course we all know where this is going: at some point a drunk is going to try claiming no fault, and so will the carmaker; it was the computer or AI.

I think it will eventually play out like most corporations: wanting all the rights of a person but none of the responsibilities. I mean, it's starting to sound like all those cheesy science fiction movies we said were unrealistic and could never happen, yet here we are.
 
Of course it's not going to turn out that way. If the law changed so that you could blame the AI, then the AI would have to have the same insurance as any other responsible driver/owner.

If AI mowed down little Johnny less often than humans, what would be the meaningful distinction of the driver being an AI? People are injured on roller coasters and ski lifts. There is still liability for the owner.
 
What do you think about this? I just discovered the AI art generators that can make realistic images to replace artists. If you want to know more details about it, click here.

Write a computer program which can mimic Rembrandt's painting style, along with his brush strokes, and you have a copycat AI Rembrandt.

Great for forgery

:)
 