An interesting part of my research looks at how algorithms and the technology we use absorb some of the moral ideas that exist within our society. It's a really interesting area, because so many people think that artificial intelligence and algorithms are neutral, completely unbiased decision-making systems.
Take a military drone, for example, one that's circling overhead. The expectation is that the drone will be a better decision maker than we are: it won't hesitate, it can't be corrupted, it won't be swayed at the last minute. So we assume a lot of these tools are superior decision makers. But is that really true? The funny thing is that it's us who have to program a lot of this technology. So yes, these machines can make decisions that much faster.
But it's up to us to give them the parameters on which they'll decide one way or the other, and research has shown that we can sometimes transmit our biases to them without even being aware of it. A recent study showed that an unsupervised AI algorithm was picking up things like gender bias and race bias just from the text itself, without any prompting from the scientists: the way we structure sentences, the concepts we share, the language
we use. All of these things send cues to these machines, which they then learn from to help them make their decisions.

So when you take an example like autonomous cars, you think: okay, in the case of an emergency, if there's an accident and the system has to sacrifice either one person in a car or a bus full of school kids, you would want it to sacrifice the one person in the car rather than the bus full of children, and to be able to do so in a neutral manner, right?
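Before going further with the cars, that earlier finding, that associations in raw text alone can carry bias into a learned model, can be sketched in miniature. Everything here is invented for illustration: three-number "embeddings" standing in for the vectors an algorithm would learn from real text, compared with cosine similarity, which is one common way such studies quantify association.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Invented toy "embeddings" (real ones are learned from text and have
# hundreds of dimensions; these just mimic the geometry).
emb = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.3],
    "nurse":    [0.2, 0.8, 0.3],
}

def gender_lean(word):
    """Positive -> the word sits closer to 'he'; negative -> closer to 'she'."""
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

for word in ("engineer", "nurse"):
    print(word, round(gender_lean(word), 3))
```

If the text a model learns from pairs "engineer" with male pronouns more often than female ones, the learned vectors end up closer together and the association score comes out positive; nobody has to program the bias in explicitly.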
But what if there are a couple of other options? What if, instead of just one person in one car, there are actually three different cars: one has a Nobel Peace Prize winner, one has a convict, and one has an elderly man. Who is the system going to choose? The answer to that question depends on our values as a society. Do we value youth? Do we value intellectual contribution? Do we value productivity? All of these are going to be variables that these algorithms can analyze in a split second in order to make that call, and that's worth remembering when you look at the technology in your own life.
I want you to remember that every single piece of technology has beliefs embedded in it: we are creating these machines in our own image, with the image that we have of the world. So every time you see a social network, an algorithm, or an AI interface, take a step back and ask yourself: what belief systems were fed into this machine? How do the people who created it see the world? And how will that vision shape the decisions it makes?