Comedy or Tragedy? Autonomous Taxis By Howard Bloom

A series of accidents over the last few months can easily make you leery of autonomous cars: self-driving automobiles controlled by artificial intelligence.

Waymo is a self-driving car company owned by Google’s parent, Alphabet. CNN reported on Wednesday, February 14th, that sometime at the end of 2023 in San Francisco a Waymo self-driving taxi encountered a tow truck dragging a pickup truck. The pickup truck’s rear end was in the air, and its front wheels were on the ground. That dangling front end was nosing out of its own lane and into a neighboring turn lane.

Despite the twenty years that Waymo has put into its cars’ software, despite training on 40,000 scenarios, despite real-world experience from 20 million miles driven on public roads, and despite 20 billion miles driven in simulation, the software was confused and hit the dangling pickup’s front end.

The tow truck kept going. Then a few miles down the road a second Waymo cab hit exactly the same towed pickup truck.

So despite Waymo’s claim that its software is the most experienced driver on the road, with far more experience than a human driver can acquire in a lifetime, there are still situations the computer program doesn’t understand.

For example, last Saturday night was the Chinese New Year in San Francisco’s Chinatown. The streets were so thickly crowded that almost no car could have gotten through. A human driver would have known that and avoided the area.

But Waymo’s software was oblivious to the holiday. What’s worse, nearly everyone in those jubilant crowds was lighting fireworks.

A vandal from the crowd jumped on the hood of the Waymo car, shattered the windshield with a karate kick, and tossed in a lit firework. Others followed his example and threw their fireworks into the cab. The resulting flames burned out the self-driving taxi entirely.

Meanwhile a fire truck rushed to the scene and was blocked by, guess what? Another Waymo cab that had not pulled over far enough.

But that’s not all. Last week a Waymo taxi in San Francisco hit a bicycle rider. Meanwhile, also in San Francisco, a pedestrian was hit by the biggest menace on America’s roads, a real human driver. Then that pedestrian was hit again by an autonomous taxi from General Motors’ Cruise division. What’s worse, the Cruise cab dragged the poor victim 20 feet across the road.

So, to quote CNN, GM “lost its permits to test vehicles in California then voluntarily stopped testing throughout the country.”

Does this mean that self-driving cars are a menace? No. It means they are still learning.

On a normal day, human drivers in America have 17,250 accidents. Yes, 17,250. Every day. Autonomous vehicles do far, far better.

What’s more, after every accident, the self-driving software engineers go to work and upgrade their programs. Which means that autonomous vehicles, self-driving cars, are now twice as safe as human drivers. And they will get safer and safer as they continue to learn.

But there’s a disturbing extra reason this matters. Ukraine and Russia are working at full speed to employ self-driving artificial intelligence in another environment where human lives are at stake: war. They are racing to incorporate fully autonomous artificial intelligence into their killer drones. And the country that gets fully self-piloting artificial intelligence into its drones first will have an edge in battle.

This is where the nightmares of a global artificial intelligence Armageddon come in. How is a robot killer’s artificial intelligence going to tell the difference between friend and foe, between ally and enemy? Not to mention between civilian and fighter? For example in a situation like Gaza, where Hamas fighters deliberately dress like civilians, blend into civilian crowds, use ambulances for transportation, and have their headquarters in tunnels under hospitals, schools, mosques, and even United Nations buildings?

If killing machines are equipped with self-driving artificial intelligence, are they someday going to make mistakes and come after you and me?

References:
https://www.cnn.com/2024/02/14/business/waymo-recalls-software-after-two-self-driving-cars-hit-the-same-truck
https://www.reuters.com/business/autos-transportation/san-francisco-waymo-arson-sparks-fresh-debate-self-driving-cars-2024-02-13/
https://waymo.com/
https://gmauthority.com/blog/2023/09/cruise-avs-involved-in-fewer-accidents-than-human-driven-taxis-study-shows/

https://www.pbs.org/newshour/world/drone-advances-amid-war-in-ukraine-could-bring-fighting-robots-to-front-lines

______
Howard Bloom of the Howard Bloom Institute has been called the Einstein, Newton, and Freud of the 21st century by Britain’s Channel 4 TV. One of his seven books, Global Brain, was the subject of a symposium hosted by the Office of the Secretary of Defense that included representatives from the State Department, the Energy Department, DARPA, IBM, and MIT. His work has been published in The Washington Post, The Wall Street Journal, Wired, Psychology Today, and Scientific American. He does news commentary at 1:06 am Eastern Time every Wednesday night on 545 radio stations on Coast to Coast AM. For more, see http://howardbloom.institute.
