
Cars, commercial trucks, and other vehicles equipped with complex onboard computers are reaching biological levels of complexity. Current high-end cars with autonomous features contain over 100 million lines of computer code. For comparison, all of Facebook’s software runs on only about 60 million lines, and many of the aircraft we fly, including the Boeing 787, and even the F-35 fighter jet, have less technological sophistication and fewer lines of computer code. In the future, it is at least possible that the learning and decision-making algorithms used by automated systems will even surpass human abilities.

Over the last several years, we have seen an increase in the availability and capabilities of advanced automotive electronic systems designed to assist drivers and avoid car accidents. Today’s autonomous vehicles employ a multitude of safety sensors, such as cameras, Lidar, GPS, and laser rangefinders, all integrated with onboard computer systems that constantly measure and analyze the surroundings and feed that information into sophisticated algorithms that must make split-second decisions in order to avoid a crash.
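
To make that pipeline a little more concrete, here is a minimal, hypothetical sketch in Python of the kind of sense-and-decide loop described above. Every name and number in it (SensorReading, plan_action, the braking distances) is an illustrative assumption, not any manufacturer’s actual software.

```python
# A minimal, hypothetical sketch of a "sense, analyze, decide" loop.
# All names and thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class SensorReading:
    source: str                  # e.g. "lidar", "camera", "gps"
    obstacle_distance_m: float   # distance to the nearest detected obstacle


def plan_action(readings: list[SensorReading], braking_distance_m: float) -> str:
    """Choose a response based on the closest obstacle reported by any sensor."""
    nearest = min((r.obstacle_distance_m for r in readings), default=float("inf"))
    if nearest <= braking_distance_m:
        return "emergency_brake"
    if nearest <= 2 * braking_distance_m:
        return "slow_down"
    return "maintain_speed"


if __name__ == "__main__":
    readings = [
        SensorReading("lidar", obstacle_distance_m=18.0),
        SensorReading("camera", obstacle_distance_m=22.5),
    ]
    # With a 25 m braking distance, the 18 m lidar reading forces a hard stop.
    print(plan_action(readings, braking_distance_m=25.0))  # -> emergency_brake
```

Real systems fuse far richer data and run far more sophisticated models, but the basic loop, sensors in, decision out, repeated many times per second, is the same.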

While the algorithms designed by humans are not perfect, rapid improvements are being made, and increasingly autonomous vehicles will only become more commonplace. It is important to address some of the legal, ethical, and professional questions raised by self-driving cars, including how they will affect the practice of law and society in general.

A Brief Primer on Robo-Ethics

The term “robo-ethics” was coined by roboticists to refer to the morality of how human beings design, construct, use, and treat robots and other artificially intelligent systems. It considers how artificially intelligent systems may be capable of harming humans. The field also considers how robots and autonomous or semi-autonomous systems may be used to benefit humans. As artificial intelligence in the automotive industry develops at a speed nearly beyond comprehension, there is some cause for concern regarding the moral behavior of humans as they design, construct, and use intertwined computer systems and algorithms in the quest to create an accident-free and fatality-free self-driving vehicle.

As systems approach, and perhaps eventually surpass, human levels of reasoning and critical thinking, we must thoroughly consider how a machine will interpret its instructions. For example, an improperly programmed machine might elevate a secondary goal or directive to a primary one. Consider the oft-stated thought experiment in which an autonomous, super-intelligent computer is tasked with solving a problem regarding the total amount of hydrogen in the universe. The machine elevates this goal above all else and begins work on the problem. Unfortunately, the computer decides that to determine the total, it must collect all the hydrogen in the universe – including the hydrogen contained within human beings, water, and all living things. Clearly, this would be catastrophic to all life, and perhaps to the machine itself, but it is not outside the realm of possibility for a hyper-intelligent autonomous machine.

From this thought experiment we can draw a few conclusions. First, absent learning algorithms, computers will only do what they are told to do, and mistakes in programming can have disastrous and far-reaching consequences. Second, once computers are able to learn, the information they acquire may affect how they interpret or go about completing their tasks. Finally, we can assume that machines will not have a “natural” sense of morality regarding the preservation of life. They may not see “good” people or “bad” people; they may simply see all things as collections of atoms that can be put to other purposes more compatible with the machine’s directives. Concerns like these still belong to the realm of science fiction and are likely decades away at a minimum, but other ethical considerations and quandaries relating to autonomous machines have already arrived.

Self-Driving Cars Will Have to Decide Who Lives and Who Dies in an Accident

A compelling issue arises when an autonomous vehicle is confronted with a situation in which a collision is imminent and unavoidable even when the vehicle complies with its programmed rules and algorithms. For example, consider how the software that controls an automated vehicle would fare when tested against a potential real-life situation.

Let us consider the following scenario: an autonomous vehicle is traveling down a narrow roadway approaching a narrow tunnel when, just before the vehicle enters the tunnel, a child attempts to run across the roadway but trips near the center of the lane, blocking the vehicle’s path. The autonomous vehicle has two options – hit and kill the child, or swerve into the tunnel wall and kill the vehicle’s operator. This scenario raises the question of whether an autonomous vehicle should choose to sacrifice its operator or the third party. Who should decide how the vehicle reacts or is programmed – the manufacturer or the operator? And if the system makes the decision, what criteria should it use to determine who lives and who dies? The age of the potential victims? The predicted severity of the survivor’s injuries? The legal and ethical implications of the decision are profoundly difficult and must be considered by those who design and control the algorithms behind autonomous vehicles. With current technology, it is impossible to put an autonomous vehicle on the road that gets you from point A to point B without confronting such a scenario.

Further complicating the ethical balancing, many experts suggest that if the technology is effective, it may be unethical not to introduce it. If a risk/benefit analysis makes clear that thousands of lives could be saved each year, would it not be unethical to withhold those benefits from society? If science and technology exist that could save many lives but may still prove unsafe, or even fatal, in certain instances, perhaps humanity should nevertheless opt for the greater good.

The Broad Economic Effects of Autonomous Self-Driving Vehicles

With the adoption of any new revolutionary technology, there will be problems for entrenched businesses that are unable or unwilling to alter their business model. Such an effect was seen in the music industry in the late 1990s and early 2000s. Today, streaming video over the Internet is causing similar difficulties for cable providers — many of whom are shifting their strategy to include content ownership and licensing. The widespread adoption of autonomous, self-driving vehicles will have similar effects on many industries.

As cars become safer, insurance premiums will drop, which will affect the bottom line and ultimately the profitability of many insurance companies. These companies depend on premiums paid by customers, priced in part on those customers’ accident records. Fewer car accidents, as a result of autonomous technology, will translate into lower premiums and reduced revenues. It is predicted that insurance companies such as State Farm, Allstate, Liberty Mutual, and GEICO might initially see a significant benefit from reduced accidents and accident liability, but in the long run they are likely to lose a large portion of the roughly $200 billion consumers spend every year on personal auto policies. Depending on the efficacy of self-driving vehicles, states may even drop mandatory automobile insurance.

The taxi and parking industries may also suffer. If a car can carry its owner from point A to point B and then return home under the direction of a computer, coming back later for pickup, there is no need to park in the city at all. Likewise, taxis are already under assault from ride-sharing apps like Uber and Lyft, and it does not take a stretch of the imagination to see those companies transitioning to self-driving vehicles as soon as practicable, since, after the initial investment and maintenance, there would be no need to pay a driver.

Automakers themselves may be significantly affected by self-driving vehicles. If people could use their smartphones to reliably summon a self-driving vehicle to their homes within a few minutes, would they still be motivated to own a car or truck?

The body shop business will also be impacted as there will be fewer accidents.

Airlines and railroads may also suffer, as travelers in self-driving cars could enjoy a similar experience with fewer security concerns and greater convenience.

Although there are many regulatory, legislative, and manufacturing obstacles to the widespread use of self-driving cars, as well as substantial concerns about privacy and hacking by cyberterrorists, it is certain that these vehicles are developing at warp speed. These vehicles, in some form, are already here, with more coming fast. Companies invested in old technology and practices will need to evolve or risk dying. This holds true whether the business in question is an automaker, an insurer, a taxi company, or even a personal injury lawyer in Philadelphia. Businesses that anticipate these changes and adjust their approach accordingly, however, are likely to reap immense benefits.

Were you severely injured in a car accident? Contact the Philadelphia car accident lawyers of The Reiff Law Firm for a free consultation at (215) 709-6940.