Artificial Intelligence & Ethics
by Kelsey Medeiros, PhD

There’s been a lot of buzz around artificial intelligence (AI) this week as Elon Musk, founder of SpaceX and Tesla Inc., discussed the potential dangers involved with it and the need for proactive policies to protect against negative effects. Specifically, Musk talked about the “fundamental existential risk for human civilization” posed by AI. Although the benefits of AI may be widespread, the dangers may be as well. This creates an ethical dilemma as to how to move forward with this new technology. In his discussion, Musk argued that states and governments must be forward thinking, developing policies that account for, adjust to, and protect against the potential negative effects of AI rather than waiting for problems to arise.

Musk’s forward thinking about AI highlights an important process involved in ethical decision making: forecasting. When deciding how to respond to an ethical dilemma, such as the introduction of AI, we must consider the future consequences of potential decisions. In other words, we have to actively forecast what will happen if we make specific decisions. This includes consideration of both the good and the bad!

Thinking about short-term or immediate consequences is relatively straightforward, but forecasting becomes much more difficult when we have to consider the long-term consequences of a decision. With so many unknowns about the future, it is challenging to know how a decision will play out. For example, new technologies or global crises may reshape our world. Without knowing what these changes will be, it is difficult to know how a specific decision will unfold. Thus, when making any decision, but especially an ethical one, it is important to consider many different types of information and imagine how the decision will play out across multiple scenarios. Like Musk, it is important to think downstream about potentially dangerous consequences as well as positive implications.

Further complicating the forecasting process are our own biases. We each favor a particular way of thinking, our own biased way of viewing the world, and these biases shape how we see the outcomes of a decision. For example, if you favor thinking about the world in a simple way, you might consider a number of different outcomes but only at the surface level rather than the deep level needed for good decision making. Similarly, if you are inclined to see the world as static, you may assume it won’t change much in the next 10-20 years and fail to consider how your decision may play out if things do change. All of us have our biases, but it is important to recognize them so we can improve our forecasting and, ultimately, make better ethical decisions.

As we consider potential ethical decisions, including the future of AI as Musk has discussed, it is important that we check our biases and do our best to forecast both the long- and short-term implications of our decisions.