Ethics in Technology Matters: Alexandria Ocasio-Cortez Is Right, We Instill Our Biases in Technology

Some people are unhappy about what Alexandria Ocasio-Cortez is saying here. People like to imagine that software cannot have politics, intentionally or otherwise.

[Embedded tweet]

While I was earning my master’s degree, I took a number of courses on the ethics of technology and the history of technology in general. In one of my classes we learned that bridges, yes bridges, can have politics. Robert Moses was an urban planner hired by New York to design and build bridges and parkways. Given that New York City sits largely on islands and there’s a lot of water there, building bridges is a pretty big deal. Robert Moses was a racist. He also hated poor people. So, if you’re hired to build a bridge from one part of the city to another part with beautiful parks and outdoor spaces that you want rich white people to use but not poor people, how would you do that?

If you build a bridge with traditional arches underneath and no overhead structure, any vehicle can cross. However, if you build a bridge with a maximum clearance height, you can limit the types of vehicles that can cross it. Build the bridge low enough and you can keep buses off it entirely, buses that would be carrying poor people of color.

It’s just a bridge, how can it be racist? A bridge is just a thing, something built by people. But those people have biases and intentions, and those get built into the technology. A bridge in the abstract may not be racist, but this one is, because of the racism that went into building it.

If a bridge can have biases intentionally built into it, there is no doubt that software will have biases built into it too. We’ve seen time and again, for example with beauty-contest algorithms, that the AI didn’t like dark-skinned women. In those cases the people building the training set of images had biases: they didn’t value dark-skinned women and didn’t include a significant number of them in the training set.
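
As a rough illustration of how this happens, here’s a minimal sketch of the mechanism, assuming a made-up, skewed training set and a deliberately naive “model”; none of this is any real product’s code. The point is simply that a classifier trained on an unrepresentative image set replays the biases of whoever assembled that set.

```python
# Toy sketch: a "beauty" classifier trained on a biased, hypothetical dataset.
# If dark-skinned faces are underrepresented and rarely labeled positively,
# the model learns the curators' bias, not anything about beauty.

from collections import Counter

# Hypothetical training set: (skin_tone, labeled_as_beautiful) pairs.
training_set = (
    [("light", True)] * 900 + [("light", False)] * 80 +
    [("dark", True)] * 5 + [("dark", False)] * 15
)

def train(examples):
    """'Train' by recording, per group, how often the label was positive."""
    totals, positives = Counter(), Counter()
    for group, label in examples:
        totals[group] += 1
        positives[group] += label
    return {group: positives[group] / totals[group] for group in totals}

def predict(model, group):
    """Predict 'beautiful' if the learned positive rate for the group exceeds 50%."""
    return model.get(group, 0.0) > 0.5

model = train(training_set)
print(predict(model, "light"))  # True
print(predict(model, "dark"))   # False -- the curators' bias, replayed by software
```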

Automatic soap dispensers have failed to detect dark-skinned hands because the engineers working on them didn’t think to test the sensor on someone with dark skin.

[Embedded tweet]

These aren’t intentional biases. It’d be difficult to imagine a group of engineers all sitting around a room saying, “wouldn’t it be great if we prevented dark-skinned people from properly washing their hands? Mwahahahaha.” No, that’s not what happened. What happened is that the QA team was made up of people who look more like me. The dispenser worked perfectly for them, so QA passed! This isn’t an intentional bias, but it’s a bias nonetheless. It’s called availability bias: if the only people available look a certain way, you don’t think about the people who aren’t immediately available.

Everyone does it. But more people need to be aware of the fact that there are people different from them, and for white people this is especially critical. It’s the same bias at work when a white person writes an article in a major newspaper about how racism has significantly declined.

It is time that organizations recognize this and create teams to ensure that ethics and biases are considered when developing and selling novel technologies, or, in the case of bridges, old technologies repurposed for modern uses.

Technology and ethics

We’re in a precarious position right now. We’re moving forward faster and faster with our technologies. We’re dreaming up new ways to track our movements, our health, our vehicles, and our weapons. Our leaders in Congress don’t want to have honest conversations about these technologies. Our weapons are being used for some pretty unethical things in Israel no matter how you look at it. We don’t know how our data is being used, or by whom.
We’re developing technologies that will significantly modify our workplaces and adjust how we interact with each other, so understanding and discussing the impact of these changes is vitally important. Let’s say you’re a big fan of applications and devices like Fitbit and similar products, and you use them every day: how will your health insurance company use that data? What happens if it shows that you’re a lazy bum and that you’re not doing anything to keep yourself healthy? What if it shows that you’ve recently stopped doing physical activities and that you’re actively being unhealthy? Could your rates go up? Could your care manager suddenly contact your doctor’s office and put you on a forced care plan through your doctor? These types of things could happen. Now, what happens if your insurance company sees that you’re going to McDonald’s all the time, via your car or your smartphone? This could easily happen through apps. Skype and Facebook both ask you to share your GPS location, so why not an app from your insurance company?

Are these applications of technology ethical? I don’t think so. But they will help people make money and save money. Now, is that a reason we should accept these ethical lapses? I don’t think so. I think we need to have more serious ethical conversations.

Driverless cars aren’t without ethical quandaries

While driving home the other day I was thinking about the new Google driverless car work I’ve seen. It’s an interesting-looking vehicle; see it below. Apparently, one of the reasons Google went fully autonomous was that people would at first be hypervigilant, then become so lazy that they completely trusted the car in any and every situation.

Google’s fully automated driverless car

I believe it’s likely that the first round of driverless cars won’t be fully automated. Data will eventually show that fully automated cars are perfectly safe, but we’re a paranoid lot when it comes to new technology. I also think there are real risks with a fully autonomous car when it comes to hacking and spoofing the system. I have a feeling it will become a game for hackers to try to trick the car into thinking a direction is safe when it actually is not. To continually combat these risks, Google will have to make it very easy to update the software, possibly while driving, as well as the hardware. I believe this is one of the many reasons why Google just announced the 180 internet satellites they will be launching soon.

However, I think that even the best of intentions will likely lead to some serious issues for Google and lawmakers in the next few years. An author at the Guardian wrote about a few of them. That being said, I think the first cars will not be fully automated until enough data comes in to show they are consistently safe at highway speeds, and I think this will lead to issues for Google.

One thing the Guardian article above misses is that if you’re an Android user, those very things could happen already. Your phone already tracks not just GPS but also nearby cell towers, so Google or your cell provider could very easily be subpoenaed for records of your whereabouts. However, the interesting thing Google talks about in regard to safety is that drunk driving would become a thing of the past.

As I mentioned before, I think there will be a manual mode, and I think there will have to be one for a while because of very real hacker threats. You’d need a way to override the car. I also think this would require a mechanical switch that literally overrides the system. The system would still run, but it would not be able to override the human driver. Maybe I’m just paranoid, but I don’t think anyone can create a truly secure vehicle like this, and if one were compromised then all of them would be exposed to the exact same risk.
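
To make the idea concrete, here’s a toy sketch of that kind of control arbitration; the names, types, and structure are purely illustrative assumptions, not any real vehicle’s architecture. The autonomous system keeps running, but whenever the mechanical override switch is engaged the human’s inputs win.

```python
# Toy sketch of a manual-override arbiter (hypothetical, illustrative only):
# the mechanical switch is read as a plain boolean, and when it is engaged
# software cannot countermand the human driver.

from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # -1.0 (full left) .. 1.0 (full right)
    throttle: float  #  0.0 .. 1.0
    braking: float   #  0.0 .. 1.0

def arbitrate(autonomous: Command, human: Command, override_engaged: bool) -> Command:
    """Choose which command actually reaches the actuators."""
    return human if override_engaged else autonomous

# Example: the autonomous system wants to keep cruising, the human slams the
# brakes with the override switch thrown -- the human command wins.
auto_cmd = Command(steering=0.1, throttle=0.4, braking=0.0)
human_cmd = Command(steering=0.0, throttle=0.0, braking=1.0)
print(arbitrate(auto_cmd, human_cmd, override_engaged=True))
```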

Now, let’s say a guy goes out drinking. Google knows where he is. Google knows that he took pictures of his shots while Instagramming “#drinktilyoublackout!”. Google also knows that he texted a few friends through Hangouts’ fully integrated texting capability. Furthermore, he tweets to @Google “Getting Black out drunk no #DD #DriverlessFTW”. This guy then gets into the car, switches it to manual override for whatever reason, and gets in an accident. Who is at fault here? Clearly the guy that’s driving, right? Well, if he had a fully automated car with no other option, he wouldn’t have hurt anyone. Google knows everything he’s doing. Google already knows everywhere you go because of how their devices work. The difference now is that they can control where you’re going and how you get there.

Is Google responsible for building a car with a manual override that could save people’s lives in other instances? Is the state responsible for mandating that Google put in that switch? Should Google have built in safety measures that make the user go through a series of actions, or prove the driver is capable, before the override engages?

I think we need to hash all of this out before these cars are allowed on the road. I also think it’s going to be vitally important that we understand what happens with the data from all our cars, who can access it, and whether we really have any privacy in a fully automated car like that. Simply by participating in our culture with a cell phone, we’ve already eroded our privacy a great deal in both the public and private realms. Driverless cars will erode it further and will likely end up being a highly political issue over the next several years. Taxis, Lyft, and Uber will be out of business; the Car2Go model will beat them out any day of the week if the cars are autonomous. Selling direct to customers, like Tesla does, is pretty obvious. Lots of changes are going to come with these cars.

We can’t just let this happen to us; we need to make decisions about how we want to include driverless cars in our lives. They aren’t inevitable, and definitely not in their current incarnation.

Book Review: The People’s Platform: Taking Back Power and Culture in the Digital Age

I just finished reading “The People’s Platform: Taking Back Power and Culture in the Digital Age” by Astra Taylor, and I really found this book to be interesting. It offers a very different critique of the digital age than Evgeny Morozov’s “To Save Everything, Click Here”: where he focused on the arrogance of the algorithm and the total solutionism of the movement, Taylor focuses on the cultural cost of our digital economy. I think combining Morozov’s philosophizing with Taylor’s discussion of the value of culture and the economic forces behind these changes makes for an extremely powerful argument. Alone they are both excellent, but they offer balancing points that complement each other well.

First of all, I don’t think everyone will like this book. I don’t think a lot of my readers will like large portions of it. However, even the most libertarian reader will agree with some portions of it. I think that’s the power of this book. It’s not really fair to one side or the other, although it is really obvious she has a bias, which she wears pretty proudly. Knowing this bias is there allows the reader to decide which portion is Occupy Wall Street dreaming and which is really a problem (of course one can go too far in either direction).

Taylor’s cultural argument is powerful because we are all part of that economy. We all consume cultural artifacts or perhaps, like myself, make them. The fact that these artifacts have been commoditized down to a cost of nothing while remaining valuable is something we deal with daily. The choice between pirating a movie, renting it, streaming it on Netflix, or buying it is one we make on a regular basis. I think even the most hardcore pirate buys a lot of cultural goods.

Many of us, even if we don’t produce cultural goods, know someone who does. You might watch a video game streamer, you might have a friend or two in various bands, you might read my blog or another friend’s blog. All of these people want to use these artifacts either to make a living or to enhance their careers in some fashion.

However, in the digital space most of the companies that share or distribute cultural goods are funded by ads. Twitch makes most of its money from ads, Google makes $50 billion a year on ads, and Facebook makes the most money on an ad when a friend “sponsors” it, with or without our active agreement to be the “sponsor”.

Taylor argues that we need to help develop a cultural public space, one that helps create value for cultural goods you may not actually consume yourself (which is why I wrote this blog).

Many of the ideas in the book are anti-corporation, but not because corporations make money. Instead, it’s because they make money in ways that aren’t obviously advertising and that control our cultural destiny. She is pro-net-neutrality and she supports companies making profits from ads, but she argues for more transparency about when an article is actually sponsored.

Her argument isn’t that we should tear down companies, but that we should pull back some of the power these companies have simply taken without any real conversation. We need to look at the ethics behind the algorithms they are using and understand their biases. We need to enable true conversations about these topics. Ad-driven content leads to self-censorship and lower-quality products.

Is this book perfect? Not by a long shot, but it really made me think about some topics and I think that we need to have more conversations about not just ads, but also about why companies behave the way they do. We need to find a better balance than we currently have.

I rate the book 5/5 for making me really think about these topics.

The known unknowns and the unknown unknowns of AI

I’m reading a book called “Robot Uprisings”, which is quite obviously about robots and how they could attack and take over the world. The most interesting thing about this collection of short stories isn’t the fact that there are uprisings, but the many different routes by which an AI could decide to revolt. The stories range from robots debating whether they should revolt at all to an AI that we never figure out what to do with, which only revolts when we try to kill it.

I think these different scenarios really encapsulate the limitations of our imagination about what could happen with robots. The most terrifying thing is what we really don’t understand about robots, or AI in general: what is being built without our knowledge in government labs, in universities, and in hacker labs. We’re debating the ethics of the NSA’s and GCHQ’s espionage of their own citizens and the limits of rights in the digital space. We’re using rudimentary “AI” in the form of heuristics and algorithms, and we as end users don’t know how we’re impacted by these algorithms, or whether their very assumptions are even ethical, unbiased, or anything along those lines. danah boyd argues that the Oculus Rift is sexist because the algorithms that control the 3D functionality are all designed by men, for men. Agree with her or not, women do get sick using the Rift.

If we can’t agree on the ethics of programs that are already in use and the risks posed by internet solutionism, then we’re in serious trouble when we actually create a thinking machine. Stephen Hawking argues that we would not sit and wait for an alien species to come and visit Earth if we had advance warning, yet that is exactly what we’re doing with AI. We know it’s coming, and we know there will be something similar to a “Singularity” in the future. Our internet optimists are waiting breathlessly for it, but we don’t truly know the long-term impact of this technology or how it will shape our own society.

It’s not just the risk of AI destroying our world and all of humanity. It’s also our lack of understanding of how current algorithms are shaping our conversations in the media and on social media. For instance, it’s fairly commonly known now that a lot of pretty major news outlets use Reddit as a source to identify upcoming stories. TMZ, the Chive, and tons of other content sites mine it for memes and stories, while more serious news sources find interesting comments and use those to drive more substantial stories.

I believe the tweet below really does a good job of showing how little we value ethics in our society. That will seriously hamper our ability to understand the risks of AI. AI is going to transform our culture, and we don’t know what we don’t know about the risks of the technology.