How can automated vehicles earn the trust of those who will ultimately use them?
The concept of real, tangible self-driving cars in our daily lives has captured the imaginations of many for years now. It’s an interest that stems from a long history of both artificial intelligence (AI) and the driverless vehicle appearing in science fiction, from Star Trek to Asimov and beyond. However, just because self-driving cars fascinate us doesn’t automatically mean they will be either safe or trusted any time soon.
In fact, according to Pew Research, 56% of those surveyed said they wouldn’t set foot in a self-driving car if given the chance. The top reason cited for this aversion was “a lack of trust or fear of giving up control to a machine.” The question that needs to be asked is whether this fear is grounded in reality.
Handling the Basics
Self-driving cars have done well, in general, as they’ve slowly made their entrance onto real-world roads. However, as with airplanes or any other novel (and feared) form of transportation, even a small fraction of the accidents caused by human drivers draws intense scrutiny when a machine is at fault.
This was the case when a self-driving car struck and killed an Arizona woman in 2018. Since then, the basic ability of self-driving cars to navigate the roads has been called into question yet again. Concerns have focused in particular on an autonomous vehicle’s ability to sense what’s going on around it at all times. While the sensors currently in use perform well most of the time, even the tiniest lapse can prove fatal, as the Arizona incident showed.
In addition, critics have been apprehensive about the software developers’ ability to write code that can be prepared for any and all incidents that may be encountered. These two things alone are critical factors that must be sorted out if autonomous vehicles are to become the norm.
Handling the Unusual
In recent years, multiple terrorist attacks have been carried out by driving vehicles into pedestrians. With that in mind, one less obvious concern about self-driving cars is the possibility that hackers could infiltrate their software and take over the controls. When a truck drove through Bastille Day crowds in Nice, France in 2016, it killed 86 people. If a single truck could wreak so much havoc and destruction, imagine the catastrophic repercussions if a terrorist organization were able to take control of a dozen vehicles at a time, at no physical risk or cost to themselves.
While terrorist attacks may be an extreme scenario, even the prospect of hacking into a self-driving system to hijack the Ferrari or Porsche parked outside and drive it straight to the thief makes security a critical component of development. Fortunately, many advances are being made toward increasingly robust authentication systems. These both strengthen security and simplify management of the many passwords we constantly juggle. Some of these concepts and improvements include:
- Multi-layered authentication to create multiple levels of security.
- Biometrics to tie your credentials to your fingerprints and other unique physical attributes.
- AI to help detect and block intrusion attempts before attackers break through.
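The first of these ideas, multi-layered authentication, can be illustrated with a minimal sketch. The code below is a hypothetical example (the function names and the one-time-code check are assumptions, not any real vehicle’s API): access is granted only if every independent layer passes, so compromising a single factor is not enough.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a key from the password with PBKDF2, a standard key-derivation function.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(hash_password(password, salt), stored)

def check_one_time_code(code: str, expected: str) -> bool:
    # Stand-in for a second factor, e.g. a time-based one-time code.
    return hmac.compare_digest(code, expected)

def authenticate(password: str, code: str, salt: bytes,
                 stored_hash: bytes, expected_code: str) -> bool:
    # Every layer must pass independently; failing any one layer denies access.
    return (check_password(password, salt, stored_hash)
            and check_one_time_code(code, expected_code))

salt = secrets.token_bytes(16)
stored = hash_password("correct horse battery staple", salt)
print(authenticate("correct horse battery staple", "492751", salt, stored, "492751"))  # True
print(authenticate("wrong password", "492751", salt, stored, "492751"))                # False
```

The point of the design is that the two checks are independent: a stolen password alone, or an intercepted one-time code alone, fails the combined gate.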
As these and other security measures are taken, the hope for self-driving cars that are generally immune to hackers becomes more and more of a possibility.
Handling the Dangerous
Here’s where the rubber truly hits the road. Remember the movie I, Robot? Will Smith’s character, Del Spooner, has a bitter vendetta against robots. At first, the aversion seems like a deep-seated, unreasonable prejudice, until it comes out later in the movie that Spooner’s life was saved during a car accident while a child was left to die. Why? Because the robot that saved him calculated that his chances of survival were higher than the child’s. The problem wasn’t a failure in the AI’s ability to function, but rather the complicated issue of cold, calculated decision-making in a genuine ethical dilemma.
To translate this to self-driving cars, what happens if an automated vehicle is confronted with a scenario where all choices lead to a negative outcome? One solution to head off these potential scenarios is to have human drivers behind the wheel of all self-driving cars as a second level of decision-making support. But, of course, that largely defeats the purpose of a “set it and forget it” mentality that self-driving cars offer. While the proposition of human “back-up drivers” may be a sufficient stop-gap measure, AI will need to develop further if it’s ever going to be trusted to handle these kinds of situations with a human level of sophistication and judgment.
Self-Driving Cars Are the Future
The truth is, no matter how many dangers face the evolution of the self-driving car, there’s little doubt that the autonomous phenomenon will eventually be the norm. In other words, the question, at this point, isn’t if but when they will finally break through the formidable wall of trust issues that developers currently face. With concerns like hackers, ethical dilemmas, and even basic sensor and programming issues still in the conversation, though, self-driving vehicles will doubtlessly continue to face an uphill battle for the foreseeable future.