By Lance Eliot, the AI Trends Insider
Imagine that you are driving your car and come upon a dog that has suddenly darted into the roadway. Most of us have had this happen. Hopefully, you were able to take evasive action. Assuming that all went well, the pooch was fine and nobody in your car got hurt either.
In a kind of Groundhog Day movie manner, let’s repeat the scenario, but we will make a small change. Are you ready?
Imagine that you are driving your car and come upon a deer that has suddenly darted into the roadway. Fewer of us have had this happen, though nonetheless, it is a somewhat common occurrence for those who live in a region that has deer aplenty.
Would you perform the same evasive actions when coming upon a deer as you would in the case of the dog that was in the middle of the road?
Some might claim that a deer is more likely to be trying to get out of the street and be more inclined to sprint toward the side of the road. The dog might decide to stay in the street and run around in circles. It is hard to say whether there would be any pronounced difference in behavior.
Let’s iterate this once again and make another change.
Imagine that you are driving your car and come upon a chicken that has suddenly darted into the roadway. What do you do?
For some drivers, a chicken is a whole different matter than a deer or a dog. If you were going fast while in the car and there wasn’t much latitude to readily avoid the chicken, it is conceivable that you would go ahead and ram the chicken. We generally accept the likelihood of having chicken as part of our meals, thus one less chicken is ostensibly okay, especially in comparison to the risk of possibly rolling your car or veering into a ditch upon sudden braking.
Essentially, you might be more risk-prone if the animal was a deer or a dog and be willing to put yourself at greater risk to save the deer or the dog. But when the situation involves a chicken, you might decide that the personal risk versus the harming of the intruding creature is differently balanced. Of course, some would vehemently argue that the chicken, the deer, and the dog are all equal and drivers should not try to split hairs by saying that one animal is more precious than the other.
We’ll move on.
Let’s make another change. Without having said so, it was likely that you assumed that the weather for these scenarios of the animal crossing into the street was relatively neutral. Perhaps it was a sunny day and the road conditions were rather plain or uneventful.
Adverse Weather Creates Another Variation on Edge Cases
Change that assumption about the conditions and imagine that there have been gobs of rain, and you are in the midst of a heavy downpour. Your windshield wiper blades can barely keep up with the sheets of water, and you are straining mightily to see the road ahead. The roadway is completely soaked and extremely slick.
Do your driving choices alter now that the weather is adverse?
Whereas you might have earlier opted to radically steer around the animal, any such maneuver now, while in the rain, is a lot dicier. The tires might not stick to the roadway due to the coating of water. Your visibility is reduced, and you might not be able to properly judge where the animal is, or what else might be near the street. All in all, the bad weather makes this an even worse situation.
We can keep going.
For example, pretend that it is nighttime rather than daytime. That certainly changes things. Imagine that the setting involves no other traffic for miles. After you’ve given that situation some careful thought, reimagine things and pretend that there is traffic all around you, cars and trucks in abundance, and heavy traffic on the opposing side of the roadway as well.
How many such twists and turns can we concoct?
We can continue to add or adjust the elements, doing so over and over. Each new instance becomes its own particular consideration. You would presumably need to mentally recalculate what to do as the driver. Some of the story adjustments might reduce your viable options, while others might widen the number of feasible options.
The combination and permutations can be dizzying.
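To get a rough feel for how quickly the scenario count balloons, here’s a minimal sketch in Python; the handful of scenario dimensions and their values are purely illustrative assumptions, not any official taxonomy of driving situations:

```python
from itertools import product

# Illustrative (hypothetical) scenario dimensions; real driving has far more.
animals = ["dog", "deer", "chicken", "turtle"]
weather = ["clear", "heavy rain", "fog", "snow"]
time_of_day = ["day", "dusk", "night"]
traffic = ["none", "light", "heavy", "heavy oncoming"]
road_surface = ["dry", "wet", "icy"]

# Every combination of these factors is its own distinct driving situation.
scenarios = list(product(animals, weather, time_of_day, traffic, road_surface))
print(f"Distinct scenarios from just 5 factors: {len(scenarios)}")
# 4 * 4 * 3 * 4 * 3 = 576, and every added factor multiplies the total.
```

Tack on another dozen dimensions (speed, road curvature, pedestrians, sensor visibility, and so on) and the count rockets into the millions, which is precisely the dizziness at issue.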
A newbie teenage driver is often taken aback by the variability of driving. They encounter one situation that they’ve not encountered before and go into a bit of a momentary panic mode. What to do? Much of the time, they muddle their way through and do so without any scrape or calamity. Hopefully, they learn what to do the next time that a similar setting presents itself and thus are less caught off-guard.
Experienced drivers have seen more and therefore are able to react as needed. That vastness of knowledge about driving situations does have its limits. As an example, there was a news report about a plane that landed on a highway because of in-flight engine troubles. I ask you, how many of us have seen a plane land on the roadway in front of us? A rarity, for sure.
These examples bring up a debate about the so-called edge or corner cases that can occur when driving a car. An edge or corner case is a reference to the instance of something that is considered rare or unusual. These are events that tend to happen once in a blue moon: outliers.
A plane landing on the roadway amid car traffic would be a candidate for consideration as an edge or corner case. A dog or deer or chicken that wanders into the roadway would be less likely to be construed as an edge or corner case; it would be a more common, or core, experience. The former instance is extraordinary, while the latter instance is somewhat commonplace.
Another way to define edge or corner cases is as the instances beyond the core or crux of whatever our focus is.
But here’s the rub. How do we decide what is on the edge or corner, rather than being classified as in the core?
This can get very nebulous and be subject to acrimonious discourse. Instances that someone claims are edge or corner cases might be more appropriately tagged as part of the core. Meanwhile, instances tossed into the core could arguably be more rightfully placed into the edge or corner case category.
One aspect that oftentimes escapes attention is that the set of core cases does not necessarily have to be larger than the set of edge cases. We just assume that would be the logical arrangement. Yet it could be that we have a very small core and a tremendously large set of edge or corner cases.
We can add more fuel to this fire by bringing up the concept of having a long tail. People use the catchphrase “long tail” to refer to circumstances in which a preponderance of something constitutes the central core, while a host of presumably ancillary elements tail off from it. You can mentally picture a large bunched area on a graph and then a narrow offshoot that goes on and on, becoming a veritable tail to the bunched-up portion.
This notion is borrowed from the field of statistics. There is a somewhat more precise meaning in a purely statistical sense, but that’s not how most people make use of the phrase. The informal meaning is that you might have lots of less noticeable aspects that are in the tail of whatever else you are doing.
A company might have a core product that is considered its blockbuster or main seller. Perhaps they sell only a relative few of those but do so at a hefty price each, and it gives them prominence in the marketplace. Turns out the company has a lot of other products too. Those aren’t as well-known. When you add up the total revenue from the sales of their products, it could be that all of those itty-bitty products bring in more dough than do the blockbuster products.
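A quick back-of-the-envelope sketch illustrates the arithmetic; the unit counts and prices below are made-up numbers chosen solely to make the point:

```python
# Hypothetical product mix: one blockbuster plus a long tail of small sellers.
blockbuster_units = 1_000      # relatively few units sold...
blockbuster_price = 5_000      # ...at a hefty price each
tail_product_count = 500       # lots of itty-bitty, lesser-known products
tail_units_each = 2_000        # modest sales per tail product...
tail_price = 10                # ...at a low price each

head_revenue = blockbuster_units * blockbuster_price              # $5,000,000
tail_revenue = tail_product_count * tail_units_each * tail_price  # $10,000,000

print(f"Blockbuster revenue: ${head_revenue:,}")
print(f"Long-tail revenue:   ${tail_revenue:,}")
# The quiet long tail out-earns the prominent blockbuster two to one.
```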
Based on that description, I trust that you realize that the long tail can be quite important, even if it doesn’t get much overt attention. The long tail can be the basis for a company and be extremely important. If the firm only keeps its eye on the blockbuster, it could end up in ruin if they ignore or undervalue the long tail.
That doesn’t always have to be the case. It could be that the long tail is holding the company back. Maybe they have a slew of smaller products that just aren’t worth keeping around. Those long-tail products might be losing money and draining resources away from the blockbuster end of the business.
Overall, the long tail ought to get its due and be given proper scrutiny. Combining the concept of the long tail with the concept of the edge or corner cases, we could suggest that the edge or corner cases are lumped into that long tail.
Getting back to driving a car, the dog or even a deer that ran into the street proffers a driving incident or event that we probably would agree is somewhere in the core of driving.
As for a chicken entering the roadway, well, unless you live near a farm, this would seem a bit more extreme. On a daily drive in a typical city setting, you probably will not see many chickens charging into the street.
So how will self-driving cars handle edge cases and the long tail that extends beyond the core?
Self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.
Some pundits fervently state that we will never attain true self-driving cars because of the long tail problem. The argument is that there are zillions of edge or corner cases that will continually arise unexpectedly, and the AI driving system won’t be prepared to handle those instances. This in turn means that self-driving cars will be ill-prepared to adequately perform on our public roadways.
Furthermore, those pundits assert that no matter how tenaciously those heads-down all-out AI developers keep trying to program the AI driving systems, they will always fall short of the mark. There will be yet another new edge or corner case to be had. It is like a game of whack-a-mole, wherein another mole will pop up.
The thing is, this is not simply a game; it is a life-or-death matter, since whatever a driver does at the wheel of a car can spell life or possibly death for the driver, the passengers, the drivers of nearby cars, pedestrians, and so on.
Here’s an intriguing question that is worth pondering: Are AI-based true self-driving cars doomed to never be capable on our roadways due to the endless possibilities of edge or corner cases and the infamous long-tail conundrum?
Before jumping into the details, I’d like to clarify what is meant when referring to true self-driving cars.
For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/
Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/
For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/
For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/
To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/
The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/
Self-Driving Cars And The Long Tail
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it would seem nearly self-evident that the number of combinations and permutations of potential driving situations is going to be enormous. We can quibble whether this is an infinite number or a finite number, though in practical terms this is one of those counting dilemmas akin to the number of grains of sand on all the beaches throughout the entire globe. In brief, it is a very, very, very big number.
If you were to try to program an AI driving system based on each possible instance, this indeed would be a laborious task. Even if you added a veritable herd of ace AI software developers, you can certainly expect this would take years upon years, likely many decades or perhaps centuries, and you would still be faced with the fact that there is one more unaccounted-for edge or corner case remaining.
The pragmatic view is that there would always be that last one that evades being preestablished.
Some are quick to offer that perhaps simulations would solve this quandary.
Most of the automakers and self-driving tech firms are using computer-based simulations to try and ferret out driving situations and get their AI driving systems ready for whatever might arise. The belief by some is that if enough simulations are run, the totality of whatever will occur in the real world will have already been surfaced and dealt with before self-driving cars enter the real world.
The other side of that coin is the contention that simulations are based on what humans believe might occur. As such, the real world can be surprising in comparison to what humans might normally envision will occur. Those computer-based simulations will always then fall short and not end up covering all the possibilities, say those critics.
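To loosely illustrate the critics’ contention, consider this minimal sketch (with invented event names) of how a simulator can only sample from the event space its human designers thought to specify; anything outside that space never gets generated, no matter how many runs occur:

```python
import random

# The simulator can only draw from events its human designers envisioned.
DESIGNED_EVENTS = ["dog in road", "deer in road", "stalled car", "jaywalker"]

def sample_simulated_scenario(rng: random.Random) -> str:
    """Sample one scenario from the human-specified event space."""
    return rng.choice(DESIGNED_EVENTS)

rng = random.Random(42)
observed = {sample_simulated_scenario(rng) for _ in range(100_000)}

# Even after 100,000 runs, an unanticipated real-world event can never
# appear in the output, since it was never in the space to begin with.
print("plane landing on highway" in observed)  # False, by construction
```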
Amid the heated debates about the use of simulations, do not get lost in the fray and somehow conclude that simulations are the final silver bullet, nor fall into the trap of believing that because simulations won’t reach the highest bar they should be utterly disregarded.
Make no mistake, simulations are essential and a crucial tool in the pursuit of AI-based true self-driving cars.
There is a floating argument that there ought not to be any public roadway trials of true self-driving cars until the proper completion of extensive and apparently exhaustive simulations. The counterargument is that this is impractical in that it would delay roadway testing on an indefinite basis, and that the delay means more lives lost due to everyday human driving.
An allied topic entails the use of closed tracks that are purposely set up for the testing of self-driving cars. By being off the public roadways, a proving ground ensures that the public at large is not endangered by whatever waywardness might emerge during driverless testing. The arguments surrounding the closed track or proving grounds approach involve tradeoffs similar to those mentioned when discussing the use of simulations (again, see my remarks posted in my columns).
This has taken us full circle and returned us to the angst over an endless supply of edge or corner cases. It has also brought us squarely back to the dilemma of what constitutes an edge or corner case in the context of driving a car. The long tail for self-driving cars is frequently referred to in a hand-waving manner. This ambiguity is spurred by the lack of a definitive agreement about what is indeed in the long tail versus what is in the core.
This squishiness has another undesirable effect.
Whenever a self-driving car does something amiss, it is easy to excuse the matter by claiming that the act was merely in the long tail. This disarms anyone expressing concern about the misdeed. Here’s how that goes. The contention is that any such concern or finger-pointing is misplaced since the edge case is only an edge case, implying a low-priority and less weighty aspect, and not significant in comparison to whatever the core contains.
There is also the haughtiness factor.
Those who blankly refer to the long tail of self-driving cars can have the appearance of one-upmanship, holding court over those who do not know what the long tail is or what it contains. With the right kind of indignation and tonal inflection, the haughty speaker can make others feel inadequate or ignorant when they “naively” try to refute the legendary (and notorious) long tail.
For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/
On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/
I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/
Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/
Conclusion
There are a lot more twists and turns on this topic. Due to space constraints, I’ll offer just a few more snippets to further whet your appetite.
One perspective is that it makes little sense to try and enumerate all the possible edge cases. Presumably, human drivers do not know all the possibilities and despite this lack of awareness are able to drive a car and do so safely the preponderance of the time. You could argue that humans lump together edge cases into more macroscopic collectives and treat the edge cases as particular instances of those larger conceptualizations.
You sit at the steering wheel with those macroscopic mental templates and invoke them when a specific instance arises, even if the specifics are somewhat surprising or unexpected. If you’ve dealt with a dog that was loose in the street, you likely have formed a template for when nearly any kind of animal is loose in the street, including deer, chickens, turtles, and so on. You don’t need to prepare beforehand for every animal on the planet.
The developers of AI driving systems can presumably try to leverage a similar approach.
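One plausible way to encode such templates in software, sketched here with entirely made-up categories and response parameters, is to map specific detected instances onto a macroscopic category and key the driving response to the category rather than to each individual instance:

```python
# Hypothetical driving-action templates keyed to macroscopic categories.
TEMPLATES = {
    "animal_in_roadway": {"brake": "firm", "swerve": "only_if_safe", "horn": True},
    "object_in_roadway": {"brake": "firm", "swerve": "only_if_safe", "horn": False},
}

# Specific instances roll up into the broader categories.
CATEGORY_OF = {
    "dog": "animal_in_roadway",
    "deer": "animal_in_roadway",
    "chicken": "animal_in_roadway",
    "turtle": "animal_in_roadway",
    "couch": "object_in_roadway",
}

def plan_response(detected: str) -> dict:
    # An unfamiliar creature (say, an emu) only requires the perception
    # system to classify it into a category; the template needn't change.
    category = CATEGORY_OF.get(detected, "object_in_roadway")
    return TEMPLATES[category]

print(plan_response("deer"))  # same template that handled the dog
```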
Some also believe that the emerging ontologies for self-driving cars will aid in this endeavor. You see, for Level 4 self-driving cars, the developers are supposed to indicate the Operational Design Domain (ODD) in which the AI driving system is capable of driving the vehicle. Perhaps the ontologies being crafted toward a more definitive semblance of ODDs would give rise to the types of driving action templates needed.
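To make the ODD idea concrete, a declaration might hypothetically look something like the following sketch; the field names and bounds are invented for illustration and do not reflect any standardized ontology:

```python
from dataclasses import dataclass, field

# A hypothetical, highly simplified ODD declaration for a Level 4 deployment.
@dataclass
class OperationalDesignDomain:
    region: str            # geofenced area the AI may operate within
    max_speed_mph: int     # speed ceiling for the domain
    weather_allowed: list = field(default_factory=lambda: ["clear", "light rain"])
    daylight_only: bool = True

def within_odd(odd: OperationalDesignDomain, weather: str, is_daytime: bool) -> bool:
    """Check whether the current conditions fall inside the declared ODD."""
    return weather in odd.weather_allowed and (is_daytime or not odd.daylight_only)

odd = OperationalDesignDomain(region="example geofenced district", max_speed_mph=45)
print(within_odd(odd, weather="heavy rain", is_daytime=True))  # False: outside ODD
```

The idea is that outside the declared ODD, the vehicle is not supposed to operate, which usefully narrows the space of edge cases the AI driving system must be prepared to handle.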
The other kicker is the matter of common-sense reasoning.
One viewpoint is that humans fill in the gaps of what they might know by exploiting their capability of performing common-sense reasoning. This acts as the always-ready contender for coping with unexpected circumstances. Today’s AI efforts have not yet been able to crack open how common-sense reasoning seems to occur, and thus we cannot, for now, rely upon this presumed essential backstop (for my coverage about AI and common-sense reasoning, see my columns).
Doomsayers would indicate that self-driving cars are not going to be successfully readied for public roadway use until all edge or corner cases have been conquered. In that vein, that future nirvana can be construed as the day and moment when we have completely emptied out and covered all the bases that furtively reside in the imperious long tail of autonomous driving.
That’s a tall order and a tale that might be telling, or it could be a tail that is wagging the dog and we can find other ways to cope with those vexing edges and corners.
Copyright 2021 Dr. Lance Eliot