How Should Software Engineers Approach Difficult Ethical Obligations?

25 Apr 2020


Software Engineering Ethics

Ethics in the context of software engineering is a form of professional ethics practiced by software engineering teams to manage situations that bear on the health, safety, and welfare of the public. To do this, software engineers must hold themselves to high standards in the analysis, specification, design, development, testing, and maintenance of software. Professional or not, every software engineer is encouraged to adhere to the eight principles set out in the Software Engineering Code of Ethics and Professional Practice. Summarized, these principles direct software engineers to act in the best interest of the public, the client, and the employer. Their products and modifications must meet the highest professional standards. Managers should take an ethical approach to the management of development. Software engineers must be fair to their colleagues, promote ethical practices in their profession, and be supportive of one another.

These principles matter because software engineers hold a great deal of power when designing software and can choose to do good or bad with it. Software can do good or inflict harm, enable others to do good or inflict harm, or influence others to do good or inflict harm. To ensure their efforts are used for good, software engineers strive to keep software engineering a professional, beneficial, and respected profession in accordance with these principles. The principles do not cover every case that may arise, however, so individual principles should not be used to justify an error, an abuse of authority, or neglecting to perform an action one is obligated to perform. There are some, if not many, situations in which software engineers must use their own judgment, guided by ethics and morality, to make difficult decisions. One such situation is discussed in this technical essay.

Ethical Implications of An Autonomous Vehicle

What Is An Autonomous Vehicle?

First of all, the word "autonomous" may mislead some, because the vehicle isn't completely autonomous; that would imply it could make its own decisions without consulting the driver. An example would be the driver saying they want to go to the mall and the vehicle deciding to take them to the beach instead. The autonomous part is knowing when to switch lanes on the way to a destination, avoiding obstacles, abiding by speed limits, and making decisions to prevent accidents. All of this can happen without the driver's attention or interaction. To do it, the vehicle uses complex algorithms, deep neural networks, and a series of sensors and cameras, or even LIDAR, to detect its surrounding environment. It also knows where it is through satellite positioning (GPS).
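
To make that concrete, here is a minimal sketch of the sense-plan-act loop just described. It is purely illustrative: the `WorldModel` fields and the `plan` function are hypothetical stand-ins for what a real stack derives from its cameras, LIDAR, and GPS, not any actual vendor's API.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    obstacle_ahead: bool    # fused from camera/LIDAR detections
    left_lane_clear: bool   # fused from side-facing sensors
    speed_limit_mps: float  # posted limit at the GPS-derived position
    speed_mps: float        # current vehicle speed

def plan(world: WorldModel) -> str:
    """Pick a driving action with no driver interaction required."""
    if world.obstacle_ahead and world.left_lane_clear:
        return "change_lane_left"   # avoid the obstacle
    if world.obstacle_ahead:
        return "brake"              # no clear lane, so slow down
    if world.speed_mps > world.speed_limit_mps:
        return "ease_off_throttle"  # abide by the speed limit
    return "keep_lane"

# One tick of the loop; a real system runs this many times per second.
world = WorldModel(obstacle_ahead=True, left_lane_clear=True,
                   speed_limit_mps=29.0, speed_mps=27.0)
print(plan(world))  # -> change_lane_left
```

Running a loop like this many times per second is what lets the vehicle act without the driver's attention or interaction.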

How Should The Vehicle Handle Ethical Challenges?

When life or injury is on the line, the autonomous vehicle needs to make decisions based on the circumstances at hand, and it is up to the software engineer to teach the AI how to deal with these situations. The vehicle must arrive at the best decision as fast as possible, far faster than any human reaction, to mitigate potential harm. Studies have examined factors that could be weighed in such decisions, such as age, societal position, and the amount of harm to individuals, to see how an AI should attack this problem, and the results differ depending on the region where each study was conducted. If the vehicle had to decide between two possibilities that would harm life, some respondents would choose an old person over a young one, a political figure over an average joe, or one person over many.

This makes it a very difficult task for software engineers, because it's not like a war where there is a bad guy; everyone involved is innocent, and taking a life based on factors like those above is not really ethical. Life is invaluable and shouldn't be measured, and if the vehicle kills a pedestrian, who is responsible? I think this is a matter that would, unfortunately, need to be customized based on the public's vote. Being the one to decide is unfair, even if the decision abides by a code of ethics, because, as noted above, such codes don't really cover special cases like this one. This is why the companies developing software for autonomous cars are still scratching their heads about it. The only case that I believe should always have the same outcome, hard to swallow as it is, is the number of people: if the vehicle had to decide between one and many, it should always, unfortunately, go for the lone individual. But it doesn't necessarily need to come to that; the vehicle can have other safety interventions that mitigate injury.
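
As a sketch of where that "one versus many" rule would live in code, here is a hypothetical harm-minimizing chooser. The maneuvers and the counts of people endangered are made-up inputs standing in for the output of a real prediction system; counting people at all is exactly the value judgment questioned above, and the sketch only shows where such a judgment would sit.

```python
def choose_maneuver(candidates):
    """candidates: list of (maneuver_name, people_endangered) pairs."""
    # min() prefers any option that endangers no one, then fewer over more.
    return min(candidates, key=lambda c: c[1])

options = [
    ("swerve_left", 1),  # predicted to endanger one pedestrian
    ("stay_course", 3),  # predicted to endanger three pedestrians
    ("hard_brake",  0),  # predicted to endanger no one, when feasible
]
print(choose_maneuver(options))  # -> ('hard_brake', 0)
```

Note that a zero-harm option, when one exists, always wins here; that is the "other safety interventions" point above expressed as code.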

My Ethical Stance

Before writing this, I never really thought about the ethical implications of an autonomous vehicle and simply thought it would be awesome to have automated cars, but working through this case study has given me second thoughts. Even though I still don't like measuring life, I believe that if the vehicle has to pick a life to save, it should value quantity, age, or compliance with the law. If someone were crossing the street illegally versus a pedestrian on the sidewalk, it would have to pick against the one breaking the law, though it should always try to avoid the situation in the first place. But then you may ask: what if it was a kid crossing the street illegally, or many people crossing illegally, then what? To be frank, I am not even sure. I don't think I am qualified to justify the loss of life based on factors or circumstances. The vehicle should just try its best to prevent the accident in the first place by having a solid predictive-outcome algorithm that, based on its surroundings, can predict all the possible outcomes and mitigate or avoid any that include injury or death.
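
A closing sketch of that predictive-outcome idea, again hypothetical: roll simple constant-velocity forecasts forward in time and discard any steering choice predicted to bring the car too close to a pedestrian. Real systems use far richer motion models and uncertainty estimates; every name and number here is illustrative.

```python
def predict_positions(pos, vel, steps, dt=0.1):
    """Constant-velocity forecast: positions (x, y) over the horizon."""
    return [(pos[0] + vel[0] * t * dt, pos[1] + vel[1] * t * dt)
            for t in range(1, steps + 1)]

def is_safe(car_path, pedestrian_paths, min_gap=2.0):
    """True if the car never comes within min_gap metres of any pedestrian."""
    for car, peds in zip(car_path, zip(*pedestrian_paths)):
        for ped in peds:
            dist = ((car[0] - ped[0]) ** 2 + (car[1] - ped[1]) ** 2) ** 0.5
            if dist < min_gap:
                return False
    return True

# Candidate plans: keep course, veer left, or brake hard (velocities in m/s).
candidates = {"keep": (15.0, 0.0), "veer": (14.0, 3.0), "brake": (5.0, 0.0)}
# One pedestrian standing just off the lane, 20 m ahead.
pedestrian_paths = [predict_positions((20.0, 1.0), (0.0, 0.0), steps=20)]

safe = {name: vel for name, vel in candidates.items()
        if is_safe(predict_positions((0.0, 0.0), vel, steps=20), pedestrian_paths)}
print(safe)  # -> {'veer': (14.0, 3.0), 'brake': (5.0, 0.0)}
```

In a sketch like this, if every candidate came back unsafe, the planner would have to fall back to something like the harm-minimizing choice above, which is exactly the hard case this essay has been wrestling with.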