Driverless Car Accident Reports Make Unhappy Reading For Humans

As technology giants accelerate humanity towards a driverless car future, where we are conditioned to keep our eyeballs on our devices while algorithms take the wheel and navigate the vagaries of the open road, safety questions crash headlong into ethical and philosophical considerations.

Earlier this year Google blogged about the 11 “minor accidents” its driverless cars had been involved in over six years of testing — laying the blame for all 11 incidents squarely at the feet of the other, human drivers. Which sounds great for the technology on the surface. But in reality it underlines the inherent complexities of blending two very different styles of driving — and suggests that robot cars might actually be too cautious and careful.

Combine that cautious, by-the-book approach with human drivers’ tendency to take risks and cut corners, and driverless cars’ risk aversion starts to look like an accident waiting to happen (at least while human drivers are also in the mix).

Google is now trying to train its cars to drive “a bit more humanistically”, as a Google driverless car bod put it this summer, using a word that seems better suited to the lexicon of a robot. Which boils down to getting robots to act a bit more aggressively at the wheel. Truly these are strange days.

Autonomous vehicles navigating open roads guided only by algorithmic smarts is certainly an impressive technical achievement. But successfully integrating such driverless vehicles into the organic, reactive chaos of (for now) human-dominated roads will be an even more impressive feat — and we’re not there yet. Frankly, the technical progress achieved thus far, by Google and others in this field, may prove the far easier portion of what remains a very complex problem.

The last mile of driverless cars is going to require an awful lot of engineering sweat, plus regulatory and societal accord about acceptable levels of risk (including very sizable risks to a whole swathe of human employment). Self-driving car-makers accepting blanket liability for accidents is one way the companies involved are trying to accelerate the market.

As you’d expect, California has been at the forefront of fueling tech developments here. Its DMV is currently developing regulations for what it dryly dubs the “post-testing deployment of autonomous vehicles” — a process that, unsurprisingly given the aforementioned complexities, is lagging far behind schedule: no draft rules have been published yet, despite being slated to arrive at the start of this year.

The DMV has just published all the official accident reports involving autonomous vehicles tested on California’s roads, covering the period from last September to date, on its website. The data mostly pertains to Google’s driverless vehicles, with eight of the nine reports involving the Mountain View company’s robot cars. The ninth involves an autonomous vehicle made by Delphi Automotive.

The reports appear to support Google’s claim that, on the surface at least, human error by the drivers of the non-autonomous cars is causing the accidents. However the difficulties caused by the co-mingling of human and robot driving styles are also amply in evidence.

In one report, from April this year, a low-speed rear shunt occurred when a robot car — a Google Lexus in the midst of attempting a right turn at an intersection — applied the brakes to avoid an oncoming car after initially creeping forward. The human-driven car behind it, also trying to turn right and presumably encouraged by the Lexus creeping forward, then “failed to brake sufficiently” and collided with the rear of the Google Lexus.

In another report, from June this year, a Google Lexus traveling in autonomous mode was also shunted from behind at low speed by a human-driven car. In this instance the robot car was obeying a red traffic signal that was still showing for the lane it was occupying. The human driver behind was apparently spurred on to drive into the back of the stationary Lexus by a green light appearing — albeit for a left-turn lane, whereas both cars were actually occupying the straight-ahead lane.

A third report, from this July, details how another Google Lexus was crashed into from behind by a human driver — this time after decelerating to a stop because traffic had backed up ahead of a green-lit intersection. Presumably the human driver was paying more attention to the green traffic signal than to the changing road conditions immediately ahead.

Most of the accidents detailed in the reports occurred at very low speeds. But that might be more a consequence of the type of road-testing driverless cars are currently engaged in, given that the focus for makers right now is urban navigation and all its messy complexities. And Google’s cars being involved in the majority of the reports is likely down to the company clocking up the most driverless mileage, having been committed to the space for so many years.

Back in May Google said its 20+ self-driving cars were averaging around 10,000 self-driven miles per week. The fleet had clocked up almost a million miles over a six-year testing period at that point, so it has likely added a further 200,000 miles or so since then (some 20 weeks at 10,000 miles per week) — assuming rates of testing remained the same.

All the DMV’s Google-related accident reports pertain to this year, with six covering the first half of the year, including two in June and two in April.

There are currently 10 companies approved by the DMV to test driverless cars on California’s roads: Volkswagen Group of America, Mercedes-Benz, Google, Delphi Automotive, Tesla Motors, Bosch, Nissan, Cruise Automation, BMW and Honda.

Apple also apparently met with the DMV recently to discuss the department’s forthcoming driverless vehicle regulations — adding more fuel to rumors that Cupertino is also working on a (self-driving?) electric car.
