Ethics in Technology Matters: Alexandria Ocasio-Cortez is Right, We Instill Our Biases in Technology

Some people are unhappy about what Alexandria Ocasio-Cortez is saying here. People prefer to imagine that software cannot have politics, intentionally or otherwise.

[Embedded tweet from Alexandria Ocasio-Cortez]

When I was earning my master’s degree, I took a number of courses on the ethics of technology and the history of technology in general. In one of my classes we learned that bridges, yes bridges, can have politics. There was an urban planner, Robert Moses, who was hired by New York City to design and build bridges and parkways. Given that much of NYC sits on islands and there’s a lot of water there, building bridges is a pretty big deal. Robert Moses was a racist. He also hated poor people. So, if you’re hired to build a bridge from one part of the city to another part with beautiful parks and outdoor spaces that you want white and rich people to use but not poor people, how would you do that?

If you build a bridge with traditional arches underneath and no overhead structure, any vehicle can cross. However, if you build a bridge with a maximum clearance height, then you can limit the types of vehicles that can cross it. If you build the bridge low enough, you can prevent buses from crossing it. Buses that would be carrying poor people of color.

It’s just a bridge, how can it be racist? A bridge is just a thing, something built by people. However, those people have biases and intentions, and these get built into the technology. While a bridge in general may not be racist, this one IS, because of the racism that went into building it.

If a bridge can have biases intentionally built into it, there is no doubt that software will have biases built into it as well. We’ve seen this time and again, for example with beauty-contest algorithms whose AI didn’t like dark-skinned women. In those cases the people building the training set of images had biases: they didn’t include a significant number of dark-skinned women in the images, and the model learned accordingly.
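To make that concrete, here is a toy sketch, entirely synthetic and not drawn from any of the systems mentioned here, of how a skewed training set turns into a skewed model: a classifier trained on data that barely includes one group quietly fails on that group, even though nothing in the code singles anyone out.

```python
# Toy sketch (synthetic data, not from any real system): a classifier trained
# on a sample that barely includes group B learns a decision rule that works
# for group A and quietly fails for group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two made-up features per example; the "correct" label depends on
    # feature 0, but the two groups sit in different parts of feature space.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] > shift).astype(int)
    return X, y

# Group A dominates the training data; group B is almost absent.
X_a, y_a = make_group(1000, shift=0.0)
X_b, y_b = make_group(20, shift=4.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Evaluate on fresh, equal-sized samples from each group.
X_a_test, y_a_test = make_group(500, shift=0.0)
X_b_test, y_b_test = make_group(500, shift=4.0)
print("accuracy on group A:", model.score(X_a_test, y_a_test))
print("accuracy on group B:", model.score(X_b_test, y_b_test))
```

Nobody in this toy example set out to harm group B; the harm comes entirely from who was and wasn’t in the data, which is the same pattern as the beauty algorithms and the soap dispensers.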

Soap dispensers fail to detect dark-skinned hands because the engineers working on them didn’t think to test the sensor on someone with dark skin.

 

[Embedded tweet showing a soap dispenser failing to detect a dark-skinned hand]

These aren’t intentional biases. It’d be difficult to imagine a group of engineers sitting around a room saying, “wouldn’t it be great if we prevented dark-skinned people from properly washing their hands? Mwahahahaha.” No, that’s not what happened. What happened is that the QA team was made up of people who look more like me. The dispenser worked perfectly for them, so QA passed! This isn’t an intentional bias, but it’s a bias nonetheless. It’s called availability bias: if the only people available look a certain way, you don’t think about the people who aren’t immediately available.

Everyone does it. More people need to be aware of the fact that there are people different from them, and for white people this is critical. It’s similar to a white person writing an article in a major newspaper about how racism has significantly declined.

It is time that organizations recognize this and create teams to ensure that ethics and biases are considered when developing and selling novel technologies – or, in the case of bridges, old technologies repurposed for modern uses.

Is AI going to kill us or bore us to death?

The interwebs are split over the question of whether AI is going to evolve into brutal killing machines or whether it will simply be just another tool we use. This isn’t a debate being had by average Joes like you and me; it’s being had by some pretty big intellectuals. Elon Musk thinks that dealing with AI is like summoning demons, while techno-optimist Kevin Kelly thinks that AI is only ever going to be a tool and never anything more than that. Finally, you have Erik Brynjolfsson, an MIT professor, who believes that AI will supplant humanity in many activities but that the best results will come from a hybrid approach (Kevin Kelly uses this argument at length in his article).

Personally, I think a lot of Kevin Kelly’s position is extremely naive. Believing that AI will ONLY ever be something boring and never something that can put us at risk is frankly shortsighted. Consider that Samsung, yes the company that makes your cell phone, developed a machine-gun sentry that could tell the difference between a man and a tree back in 2006. In the intervening 8 years, it’s likely that Samsung has continued to advance this capability; it’s in South Korea’s national interest, as these sentries have been deployed at the demilitarized zone between North and South Korea. Furthermore, with drones it’s only a matter of time before we deploy an AI that makes many of the decisions between bombing and not bombing a given target. Currently we use a heuristic, and there’s no reason why that couldn’t be developed into a learning heuristic in a piece of software. This software doesn’t even have to be in the driver’s seat at first. It could provide recommendations to the drone pilot and learn from the pilot’s choices, noting when it is overridden and when it is not. In fact, the pilot doesn’t even have to know what the AI is recommending for the AI to learn from the pilot’s choices.

AI isn’t going to be some isolated tool; it’s going to be developed in many different circumstances concurrently, by many organizations with many different goals. Sure, Google’s goal might be better search, but Google also acquired Boston Dynamics, which has done some interesting work in robotics, and it is developing driverless cars, which will need an AI. What’s to say the driverless-car AI couldn’t be co-opted by the government and combined with the drone pilot’s AI to drop bombs, or to “suicide” whenever it reaches a specific location? These AIs could be completely isolated from each other and still have the capability to be totally devastating. What happens when they are combined? They could be, at some point, through a programmer’s decision or through an intentional attack on Google’s systems. These are the risks of fully autonomous units.

We don’t fully understand how AI will evolve as it learns more. Machine learning is a bit of a Pandora’s box. There will likely be many unintended consequences, as with almost any new technology that’s introduced. However, the ramifications could be significantly worse, because an AI could have control over many different systems.

It’s likely that both Kevin Kelly and Elon Musk are wrong. However, we should assume that Musk is right and Kelly is wrong. Not because I want Kelly to be wrong and Musk to be right, but because we don’t understand complex systems very well. They very quickly get beyond our ability to follow what’s going on. Think of the stock market: we don’t really know how it will respond to a given company’s quarterly earnings, or even to results across a sector. There have been flash crashes, and there will continue to be, because we do not have a strong set of controls over high-frequency traders. If this is extended to a system that has the capability to kill or to intentionally damage our economy, we simply couldn’t rein it in before it caused catastrophic damage. Therefore, we must intentionally design in fail safes and other control mechanisms to ensure these things do not happen.
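To make “fail safes and other control mechanisms” slightly more concrete, here is a minimal sketch of one such mechanism, assuming nothing about any real system: a circuit breaker that vetoes individual out-of-bounds actions and, after repeated violations, shuts the automated agent down until a human intervenes. The class, parameters, and numbers are all invented for illustration.

```python
# Minimal, hypothetical sketch: a "circuit breaker" that sits between an
# automated agent and the outside world. It blocks any single action outside
# agreed bounds and shuts the agent down after repeated violations, so a
# human has to step back in. All names and thresholds are invented.
from dataclasses import dataclass, field

@dataclass
class CircuitBreaker:
    max_value: float              # hard limit on the size of any single action
    max_strikes: int = 3          # violations allowed before full shutdown
    strikes: int = field(default=0, init=False)
    tripped: bool = field(default=False, init=False)

    def check(self, proposed_action: float) -> bool:
        """Return True if the proposed action may proceed, False if blocked."""
        if self.tripped:
            return False
        if abs(proposed_action) > self.max_value:
            self.strikes += 1
            if self.strikes >= self.max_strikes:
                self.tripped = True   # stays tripped until a human resets it
            return False
        return True

# Usage: every action the automated system proposes passes through the breaker,
# e.g. order sizes from a high-frequency trading agent.
breaker = CircuitBreaker(max_value=100.0)
for order_size in [10.0, 50.0, 500.0, 20.0, 900.0, 800.0, 5.0]:
    if breaker.check(order_size):
        print(f"executing order of size {order_size}")
    else:
        print(f"blocked order of size {order_size} (tripped={breaker.tripped})")
```

The design choice matters more than the code: the shutdown logic lives outside the learning system, so the agent cannot learn its way around it.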

We must assume the worst, but rather than hope for the best, we should develop a set of design rules for AI that all programmers must adhere to, to ensure we do not summon those demons.

The known unknowns and the unknown unknowns of AI

I’m reading a book called “Robot Uprisings,” which is quite obviously about robots and how they could attack and take over the world. The most interesting thing about this collection of short stories isn’t the fact that there are uprisings, but the many different routes by which an AI could decide to revolt. The stories range from robots debating whether or not they should revolt, to an AI that we never figure out what to do with and that only revolts when we try to kill it.

I think these different scenarios really encapsulate the limitations of our imagination about what could happen with robots. The most terrifying thing is what we really don’t understand about robots or AI in general: what is being built without our knowledge in government labs, in universities, and in hacker labs. We’re debating the ethics of NSA and GCHQ espionage against their own citizens and the limits of rights in the digital space. We’re already using rudimentary “AI” in the form of heuristics and algorithms, and we as end users, the people impacted by these algorithms, rarely ask whether their very assumptions are ethical, free of bias, or anything along those lines. danah boyd argues that the Oculus Rift is sexist because the algorithms that control the 3D functionality are all designed by men for men. Agree with her or not, women get sick using the Rift.

If we can’t agree on the ethics of the programs already in use and the risks posed by the solutionism of the internet, then we’re in serious trouble when we actually create a thinking machine. Stephen Hawking argues that we would not just sit and wait for an alien species to visit Earth if we had advance warning, yet that is exactly what we’re doing with AI. We know it’s coming; we know that there will be something similar to a “Singularity” in the future. Our internet optimists are waiting breathlessly for it, but we don’t truly know the long-term impact of this technology or how it will shape our society.

It’s not just the risk of AI destroying our world and all of humanity. It’s also our lack of understanding of how our current algorithms are shaping conversations in the media and on social media. For instance, it’s fairly commonly known now that a lot of major news outlets use Reddit as a source to identify upcoming stories. TMZ, the Chive, and tons of other content sites mine it for memes and stories, while more serious news sources find interesting comments and use them to drive more interesting stories.

I believe the tweet below really does a good job of showing how little we value ethics in our society. That will really hamper our ability to understand the risks of AI. AI is going to transform our culture, and we don’t know what we don’t understand about the risks of this technology.