The Problem
Recent progress in Artificial Intelligence (AI), from data mining to computer vision and from natural language processing to robotics, has forced people to start thinking about the importance and complexity of morality in the design of AI. According to the Economist, almost half of all jobs could be automated by computers within two decades.[i] Many of these jobs are complex and involve judgements that lead to significant consequences.
The dilemma self-driving cars face is a good example. In the event of a brake failure, a self-driving car can either keep going straight, which would result in the death of pedestrians, or swerve to avoid hitting the pedestrians, which would result in the death of dogs crossing the street. How should the AI be programmed to make decisions for the self-driving car in situations like this? Drones used by the military to target and suppress terrorists are another well-debated example of the importance of morality in the design of AI. According to the New York Times, the Pentagon has put AI at the center of its strategy to maintain the United States' position as the world's dominant military power.[ii] The new weapons would offer speed and precision unmatched by any human while reducing the number (and cost) of soldiers and pilots exposed to potential death and dismemberment in battle. How do we make sure these drones make moral decisions on the battlefield?
As innovation in AI accelerates, we need to get ahead of the curve and build morality into the design of AI so that the gains of today do not come at the cost of remediation tomorrow. We should define morality before an AI does.
Potential Challenges
The major challenge of programming ethics into AI is that human ethical standards are imperfectly codified in law, and translating them into software requires all kinds of assumptions that are difficult to justify. Machine ethics can be corrupted, even by programmers with the best of intentions. For example, the algorithm operating a self-driving car could be programmed to adjust the buffer space it assigns to pedestrians in different districts based on the monetary settlements from previous accidents in each district. The assumption is that a bigger buffer in districts with higher settlement costs reduces the potential for expensive settlements. That seems reasonable, but it is possible that the lower settlements in certain districts were due to residents' lack of access to legal resources in poorer neighborhoods. The algorithm could therefore disadvantage these residents based on their income.
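To make this failure mode concrete, here is a minimal sketch of how such a buffer-adjustment rule might look in code. Every name and number below is a hypothetical illustration, not an actual self-driving system:

```python
# Sketch of the biased buffer-adjustment logic described above.
# All identifiers and figures are hypothetical illustrations.

# Average settlement paid out per past accident, keyed by district.
AVG_SETTLEMENT_BY_DISTRICT = {
    "district_a": 950_000,   # wealthy area, well-resourced plaintiffs
    "district_b": 120_000,   # poorer area, little access to legal counsel
}

BASE_BUFFER_M = 1.5          # baseline clearance around pedestrians, in meters
BUFFER_PER_DOLLAR = 1e-6     # extra meters of clearance per settlement dollar

def pedestrian_buffer(district: str) -> float:
    """Return the clearance the car keeps from pedestrians in a district.

    Because the buffer scales with past settlement amounts, pedestrians in
    districts with historically low settlements (often a proxy for lack of
    legal access, not for lower injury risk) receive less protective clearance.
    """
    settlement = AVG_SETTLEMENT_BY_DISTRICT.get(district, 0)
    return BASE_BUFFER_M + BUFFER_PER_DOLLAR * settlement

print(pedestrian_buffer("district_a"))  # 2.45 m
print(pedestrian_buffer("district_b"))  # 1.62 m
```

Nothing in this rule mentions income, yet the income bias comes along for free through the settlement data, which is exactly why well-intentioned objectives can still encode unfair outcomes.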
The Solution
We have established the need to teach AI a learned concept of morality. However, given the challenges, governments and regulators need to lead the effort to establish a global standard for machine ethics. These standards need to be clearly instituted and codified by legislatures. Yet governments lack the resources and talent in the field of AI, so they will need the private sector's involvement. At the same time, companies that are already developing products using AI, such as self-driving cars, have conflicts of interest in assisting the government. This poses a potential opportunity for us. We can build a product that crowd-sources human opinions on how machines should make decisions when faced with moral dilemmas, to help governments write this legislation. For example, we could crowd-source scenarios for self-driving cars, and the most appropriate responses to those scenarios, on our platform, and then contract with governments to evaluate our findings and implement them into the algorithms of self-driving cars looking to enter the market. Although the sales process for government entities can be lengthy, our role as a third party between the regulator and the companies developing AI products, together with the potential multi-year revenue stream from a mandated project, places us in a very good position.
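As an illustration of the platform side, the sketch below shows one way a crowd-sourced dilemma and its votes could be represented and aggregated. All class names, fields, and figures are hypothetical assumptions, not a finished design:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class DilemmaScenario:
    """One moral dilemma posed to the crowd, e.g. a brake-failure choice."""
    scenario_id: str
    description: str
    options: list[str]                      # possible actions the car could take
    votes: Counter = field(default_factory=Counter)

    def record_vote(self, option: str) -> None:
        """Record one participant's preferred action for this scenario."""
        if option not in self.options:
            raise ValueError(f"unknown option: {option}")
        self.votes[option] += 1

    def consensus(self) -> tuple[str, float]:
        """Return the most-chosen option and its share of all votes."""
        total = sum(self.votes.values())
        option, count = self.votes.most_common(1)[0]
        return option, count / total

# Usage: the brake-failure dilemma from "The Problem" section above.
scenario = DilemmaScenario(
    scenario_id="brake-failure-01",
    description="Brakes fail: continue straight toward pedestrians, "
                "or swerve toward dogs crossing the street?",
    options=["continue_straight", "swerve"],
)
scenario.record_vote("swerve")
scenario.record_vote("swerve")
scenario.record_vote("continue_straight")
print(scenario.consensus())  # ('swerve', 0.666...)
```

In practice the aggregation would need to be far more careful than a simple majority vote, for instance weighting respondent demographics, filtering low-quality responses, and surfacing disagreement rather than hiding it, but even a record this simple gives regulators a per-scenario summary of public opinion to work from.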
[i] http://www.economist.com/news/briefing/21594264-previous-technological-innovation-has-always-delivered-more-long-run-employment-not-less
[ii] https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html
Team Awesome Members:
Rachel Chamberlain
Joseph Gnanapragasam
Cen Qian
Allison Weil