Ethics in Technology Matters: Alexandria Ocasio-Cortez Is Right, We Instill Our Biases in Technology

Some people are unhappy about what Alexandria Ocasio-Cortez has been saying about bias in technology. People prefer to imagine that software cannot have politics, intentional or otherwise.


While I was earning my master’s degree, I took a number of courses on the ethics of technology and the history of technology in general. In one of my classes we learned that bridges, yes, bridges, can have politics. Robert Moses was an urban planner hired by New York City to design and build, among other things, bridges. Given that much of NYC sits on islands and there’s a lot of water, building bridges is a pretty big deal. Robert Moses was a racist. He also hated poor people. So, if you were hired to build a bridge from one part of the city to another part with beautiful parks and outdoor spaces that you wanted white and rich people to use, but not poor people, how would you do it?

If you build a bridge with traditional arches underneath and no overhead structure, any vehicle can cross. However, if you build a bridge with limited overhead clearance, you can restrict the types of vehicles that can cross it. Build it low enough and you can keep buses from crossing at all: buses that would be carrying poor people of color.

It’s just a bridge, how can it be racist? A bridge is just a thing, something built by people. But those people have biases and intentions, and these get built into the technology. A bridge in the abstract may not be racist, but this one is, because of the racism that went into building it.

If a bridge can have biases intentionally built into it, there is no doubt that software will have biases built into it too. We’ve seen it time and again, as with the beauty-contest algorithms whose AI didn’t like dark-skinned women. In those cases the people building the training set of images had biases: the engineers didn’t include a significant number of dark-skinned women in the training set, and the algorithm learned that omission as a preference.

Soap dispensers fail to detect dark-skinned hands because the engineers working on them didn’t think to test the sensor on someone with dark skin.


These aren’t intentional biases. It’d be difficult to imagine a group of engineers sitting around a room saying, “wouldn’t it be great if we prevented dark-skinned people from properly washing their hands? Mwahahaha.” No, that’s not what happened. What happened is that the QA team was made up of people who look more like me. The dispenser worked perfectly for them, so QA passed! This isn’t an intentional bias, but it’s a bias nonetheless. It’s called availability bias: if the only people available look a certain way, you don’t think about the people who aren’t immediately in front of you.
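To make that failure mode concrete, here’s a minimal sketch in Python. All the numbers are made up, but it shows how a detection threshold calibrated only against the people in the room quietly excludes everyone who isn’t:

```python
# A minimal sketch with made-up numbers: availability bias in sensor calibration.
# Hypothetical IR reflectance readings (0.0-1.0) for hands under the dispenser.
qa_team_readings = [0.82, 0.78, 0.85, 0.80]  # a QA team where everyone has light skin
dark_skin_reading = 0.35                     # darker skin reflects less infrared light

# Threshold chosen so the dispenser fires for every hand QA actually tested.
threshold = min(qa_team_readings) * 0.9      # 0.702

def dispenses(reading: float) -> bool:
    """Return True if the sensor reading triggers the soap dispenser."""
    return reading > threshold

assert all(dispenses(r) for r in qa_team_readings)  # QA passes!
print(dispenses(dark_skin_reading))                 # False: the bias ships
```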

Everyone does it. We all need to be more aware that there are people different from us, and for white people this is critical. It’s the same failure as when a white person writes an article in a major newspaper about how racism has significantly declined.

It is time that organizations recognize this and create teams to ensure that ethics and biases are considered when developing and selling novel technologies – or, in the case of bridges, old technologies repurposed for modern uses.

Technology and ethics

We’re in a precarious position right now. We’re moving forward faster and faster with our technologies. We’re dreaming up new ways to track our movements, our health, our vehicles, and our weapons. Our leaders in Congress don’t want to have honest conversations about these technologies. Our weapons are being used for some pretty unethical things in Israel, no matter how you look at it. We don’t know how our data is being used, or by whom.
We’re developing technologies that will significantly modify our workplaces and adjust how we interact with each other, and understanding and discussing the impact of these changes is vitally important. Say you’re a big fan of devices like the FitBit and you use one every day: how will your health insurance company use that data? What happens if it shows that you’re a lazy bum doing nothing to keep yourself healthy? What if it shows that you’ve recently stopped doing physical activity and are actively being unhealthy? Could your rates go up? Could your care manager suddenly contact your doctor’s office and push you onto a forced care plan? These things could happen. And what happens when your insurance company sees, via your car or your smartphone, that you’re going to McDonald’s all the time? This could easily happen through apps. Skype and Facebook both ask you to share your GPS location; why not an app from your insurance company?
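To make the worry concrete, here’s a purely hypothetical sketch of the kind of rule an insurer could run against wearable data. No actual insurer’s formula or API is implied; the function name and thresholds are invented for illustration:

```python
# Hypothetical premium adjustment from wearable data; all names and numbers invented.
def adjusted_premium(base_monthly: float, avg_daily_steps: int) -> float:
    """Raise the premium for customers whose trackers say they are inactive."""
    if avg_daily_steps < 3_000:    # flagged as "actively unhealthy"
        return base_monthly * 1.25
    if avg_daily_steps < 7_000:    # flagged as "not doing enough"
        return base_monthly * 1.10
    return base_monthly

print(adjusted_premium(200.0, 2_500))  # 250.0 - your rates just went up
```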

Are these applications of technology ethical? I don’t think so. But they will help people make money and save money. Is that a reason we should accept these ethical lapses? I don’t think so. I think we need to have more serious ethical conversations.

Driverless cars aren’t without ethical quandaries

While driving home the other day I was thinking about the new Google driverless car material I’ve seen. It’s an interesting-looking vehicle; see it below. Apparently, one of the reasons Google went fully autonomous was that human drivers were first hyper-vigilant, then so lazy that they completely trusted the car in any and every situation.

Google’s fully automated driverless car

I believe it’s likely that the first round of driverless cars won’t be fully automated. Data will eventually show that fully automated cars are safe, but we’re a paranoid lot when it comes to new technology. I also think there are real risks with a fully autonomous car when it comes to hacking and spoofing the system. I have a feeling it will become a game among hackers to trick the car into thinking a route is safe when it actually is not. To continually combat these risks Google will have to make it very easy to update the software, possibly while driving, as well as the hardware. I believe this is one of the many reasons Google just announced the 180 internet satellites it will be launching soon.

However, I think that even the best of intentions will lead to some serious issues for Google and lawmakers in the next few years. An author at the Guardian wrote about a few of them. That being said, I think the first cars won’t be fully automatic until enough data comes in to show they are consistently safe at highway speeds, and that this transition period will create problems for Google.

One thing the Guardian article above misses is that if you’re an Android user, those very things could already happen. Your phone tracks not just GPS but also nearby cell towers, so someone could very easily subpoena either Google or your cell provider for records of your whereabouts. The more interesting thing Google talks about in regard to safety is that drunk driving will be a thing of the past.

As I mentioned before, I think there will be a manual mode, and I think there will have to be one for a while because of real hacker threats: you’d need a way to override the system. I also think this would require a mechanical switch that literally overrides it. The system would still run, but it would not be able to overrule the human driver. Maybe I’m just paranoid, but I don’t think anyone can make a vehicle like this truly secure, and if one is compromised then all of them are exposed to the exact same risk.
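As a sketch of the arbitration I have in mind (everything here is hypothetical and the names are mine): the software can read the state of the physical switch, but nothing in the code path lets it overrule a human once the switch is thrown:

```python
# A minimal, hypothetical sketch of manual-override arbitration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlInputs:
    manual_override: bool      # state of the physical switch; read-only to software
    human_steering: float      # steering angle commanded by the driver
    autopilot_steering: float  # steering angle commanded by the autonomous system

def select_steering(inputs: ControlInputs) -> float:
    """The autonomous system keeps running, but a thrown switch always wins."""
    if inputs.manual_override:
        return inputs.human_steering
    return inputs.autopilot_steering

# Even a compromised autopilot command loses to the human once the switch is set.
print(select_steering(ControlInputs(True, 5.0, -30.0)))  # 5.0
```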

Now, let’s say a guy goes out drinking. Google knows where he is. Google knows he took pictures of his shots, Instagramming “#drinktilyoublackout!”. Google also knows he texted a few friends through Hangouts’ fully integrated texting capability. Furthermore, he tweets to @Google “Getting Black out drunk no #DD #DriverlessFTW”. This guy then gets into the car, switches it to manual override for whatever reason, and gets into an accident. Who is at fault here? Clearly the guy that’s driving, right? Well, if he had a fully automated car with no other option, he’d have hurt no one. Google knows everything he’s doing; Google already knows everywhere he goes because of how its devices work. The difference now is that it can control where he’s going and how he gets there.

Is Google responsible for building a car with a manual override that could save people’s lives in other instances? Is the state responsible for mandating that Google put in that switch? Should Google have built in safety measures that make the user go through a series of actions, or prove they are capable, before overriding the car?

I think we need to hash all of this out before these cars are allowed on the road. It’s also going to be vitally important that we understand what happens with the data from all our cars, who can access it, and whether we really have any privacy in a fully automated car. Simply by participating in our culture with a cell phone we’ve already eroded our privacy a great deal in both the public and private realms. Driverless cars will erode it further and will likely become a highly political issue over the next several years. Taxis, Lyft, and Uber could be put out of business – the Car2Go model will beat them any day of the week if the cars are autonomous. Direct-to-customer sales, like Tesla’s, are the obvious next step. Lots of changes are going to come with these cars.

We can’t just let this happen to us; we need to make decisions about how we want to include driverless cars in our lives. They aren’t inevitable, and definitely not in their current incarnation.

Book Review: The People’s Platform: Taking Back Power and Culture in the Digital Age

I just finished reading “The People’s Platform: Taking Back Power and Culture in the Digital Age” by Astra Taylor, and I found it really interesting. It offers a very different critique of the digital age than Evgeny Morozov’s “To Save Everything, Click Here”: where Morozov focused on the arrogance of the algorithm and the movement’s wholesale solutionism, Taylor focuses on the cultural cost of our digital economy. Combining Morozov’s philosophizing with Taylor’s discussion of the value of culture and the economic forces behind these changes makes an extremely powerful argument. Each book is excellent alone, but they offer balancing points that complement each other well.

First of all, I don’t think everyone will like this book; I don’t think a lot of my readers will like large portions of it. However, even the most libertarian reader will agree with some of it, and I think that’s its power. It isn’t really fair to one side or the other, although it’s obvious she has a bias, which she wears pretty proudly. Knowing the bias is there lets the reader decide which portions are Occupy Wall Street dreaming and which describe a real problem (of course, one can go too far in either direction).

Taylor’s cultural argument is powerful because we are all part of that economy. We all consume cultural artifacts or, like myself, make them. The fact that these have been commoditized down to a price of nothing while remaining valuable is something we deal with daily. The choice between pirating a movie, renting it, streaming it on Netflix, or buying it is one we make regularly, and I suspect even the most hardcore pirate buys a lot of cultural goods.

Many of us, even if we don’t produce cultural goods ourselves, know someone who does. You might watch a video game streamer, have a friend or two in bands, or read my blog or another friend’s blog. All of these people want to use those artifacts either to make a living or to enhance their careers in some fashion.

However, in the digital space most of the companies that share or distribute cultural goods are funded by ads. Twitch makes most of its money from ads, Google makes roughly $50 billion a year on ads, and Facebook makes the most money on an ad when a friend “sponsors” it, with or without our active agreement to “sponsor” it.

Taylor argues that we need to help develop a cultural public space that creates value even for cultural goods you may not personally consume (which is part of why I wrote this post).

Many of the ideas in the book are anti-corporate, but not because corporations make money. It’s because they make money in ways that aren’t obviously advertising and that control our cultural destiny. She is pro-net neutrality and fine with companies making profits from ads, but she argues for more transparency about when an article is actually sponsored.

Her argument isn’t that we should tear down companies, but that we should pull back some of the power these companies have simply taken without any real conversation. We need to look at the ethics behind the algorithms they use and understand their biases. We need to enable true conversations about these topics. Ad-driven content leads to self-censorship and lower-quality products.

Is this book perfect? Not by a long shot, but it really made me think, and we need more conversations not just about ads but about why companies behave the way they do. We need to find a better balance than we currently have.

I rate the book 5/5 for making me really think about these topics.

The known unknowns and the unknown unknowns of AI

I’m reading a book called “Robot Uprisings”, which is, quite obviously, about robots and how they could attack and take over the world. The most interesting thing about this collection of short stories isn’t that there are uprisings, but the many different routes by which an AI could decide to revolt. They range from robots debating whether they should revolt at all to an AI that we never figure out what to do with, and which only revolts when we try to kill it.

I think these different scenarios encapsulate the limits of our imagination about what could happen with robots. The most terrifying thing is what we really don’t understand about robots, or AI in general: what is being built without our knowledge in government labs, in universities, and in hacker spaces. We’re debating the ethics of NSA and GCHQ espionage against their own citizens and the limits of rights in the digital space. We’re using rudimentary “AI” in the form of heuristics and algorithms, and we as end users don’t know how these algorithms affect us, or whether their underlying assumptions are ethical, unbiased, or anything along those lines. danah boyd argues that the Oculus Rift is sexist because the algorithms that control its 3D functionality were designed by men, for men. Agree with her or not, women do get sick using the Rift.

If we can’t agree on the ethics of the programs already in use and the risks posed by internet solutionism, then we’re in serious trouble when we actually create a thinking machine. Stephen Hawking argues that we would not sit and wait for an alien species to visit Earth if we had advance warning, yet that is exactly what we’re doing with AI. We know it’s coming; we know there will be something similar to a “Singularity” in the future. Our internet optimists are waiting breathlessly for it, but we don’t truly know the long-term impact of this technology on our society.

It’s not just the risk of AI destroying our world and all of humanity. It’s also our lack of understanding of how current algorithms shape conversations in the media and on social media. For instance, it’s fairly well known now that a lot of major news outlets use Reddit as a source to identify upcoming stories. TMZ, The Chive, and tons of other content sites mine it for memes and stories, while more serious news sources find interesting comments and use them to drive deeper stories.

I believe the tweet below does a good job of showing how little we think of ethics in our society. That attitude will really hurt our ability to understand the risks of AI. AI is going to transform our culture, and we don’t know what we don’t know about the risks of the technology.

Ethics and Values: Military and Espionage

We didn’t get to have a national conversation about government espionage until Snowden released all those documents, and now we’re having a pretty vocal one in two of the three branches of our government (well, all three, since Obama seems to contradict himself fairly often). Today on Vice’s Motherboard I read an article claiming the military is going cyberpunk. As the article notes, the military has used flight simulators for years, because crashing one of those is a lot cheaper than crashing a real plane. Stealth bombers cost close to $2 billion each, so learning to fly one is best done in a simulator rather than in a real plane, which also removes the risk of death in the event of a crash.

How will this trend continue? Apparently the military is investing in virtual-reality battlegrounds. These will help train soldiers for different combat situations without having to build extremely expensive facilities, burn through blank rounds, wear out guns, or set off whatever other explosives would be used in those situations, never mind the logistics of getting all that equipment in place.

It’s likely that these battlegrounds will incorporate things like the Oculus Rift and omnidirectional treadmills, letting soldiers move, crouch, and actually feel like they are in direct combat. For people at home an omnidirectional treadmill isn’t as useful, but it could work well in this setting. If they add the ability to make the environment cold or hot, wet or dry, they could simulate a great deal of the combat environment and build soldiers’ skills.

The military is also working on robotics as a way to reduce the number of people we have on a battlefield. This could extend beyond robots like the Boston Dynamics dog: we could eventually combine the VR environment with a “robot” to create a remote soldier that is bulletproof, never tires (you could simply swap out the operator), and moves around like a person. This opens up an entirely new type of warfare. It takes the idea of drone combat to the next level – foot-soldier drones that truly unbalance the battlefield. The final step would be fully autonomous robotic soldiers, though I think most people wouldn’t accept those.

In all of these cases we need a serious national conversation about the application of these technologies. From an ethical standpoint there are conflicting views. First, it’s ethical to protect our soldiers as much as possible when we’re in a justifiable, defensive conflict. Second, it’s unethical to enter combat as an aggressor with a military the defender cannot stop. Furthermore, a completely robotic military force is even less defensible, since we’d have no human control in the case of a software failure – or a hack and remote theft of the system.

As a society we need to discuss whether we should allow our military to do this. As it is, we already routinely run operations the citizenry isn’t really aware of in countries like Yemen and god knows where else. These put our men and women at risk, which no one wants, for arguable benefit in taking out terrorists; it’s unclear whether it’s working or whether we’re just making more enemies. If we could replace real live SEALs with robotic bodies controlled remotely by a SEAL team, how many more of these missions would we run? How much of this sort of activity would we consider acceptable?

I believe this goes back to what we value as a society. If we value privacy, safety, freedom, and true constitutional control over the military, then we need to assert that control before the military simply morphs without any real thought. The NSA morphed into a data sponge pulling in everything that moves on the internet. Based on the outrage, we as a society do value our privacy, and we’re trying to pull back control from the NSA. Some people disagree with that, which is fine; that’s why we need a conversation.

I believe that robotic avatars will lead to a higher likelihood of abuse, similar to what we’ve seen with the NSA. I think this is what has happened with the drone program, where Obama has a kill list his administration is proud of. Humanoid drones that can shoot sniper rifles will reduce collateral damage, but they will be abused. It’s also very debatable whether the kill list is even constitutional.

I think innovation that reduces our military expenditure is a good thing. However, we need to have a conversation about what the end goal of these programs actually is.

Startups are going to save us, relax everybody

In typical Silicon Valley breathlessness, Forbes published an article by Victor W. Hwang arguing that the startup movement isn’t about startups; it’s actually a movement to free people from the chains of our current economic system. I definitely don’t buy this. Most people start a company for one of two reasons: they find a problem they can solve better than anything on offer (or in a novel way), or they want to make money. Typically it’s a combination of the two. No company exists not to make money; companies that aren’t profitable can’t stay in business for long unless they’re lucky and funded by people who think they will eventually make them a lot of money.

An opinion piece in the NY Times from 1/2/2014 pretty much sums this up: you’re replaceable at a startup, likely more so than at any later point in the company’s life. It’s really easy to fire people when you have no money, especially if you are open and honest about how you go about letting people go.

Furthermore, if the startup movement were really about bettering people’s plight, we wouldn’t be seeing the social stratification we see in cities like San Francisco, ground zero for the movement. In SF, some of the neo-techno-libertarian elite are upset that they even have to see poor people on their streets, rather than out of the way as in cities like NYC (one of them issued an apology not unlike Tiger Woods’s apology for being a sex addict). These people aren’t just involved in the startup movement; they are funding it. Yes, I know that this is only one person, and on the other side you can point to Alexis Ohanian of Reddit fame, who really is doing a lot of social good.

In some ways the startup movement has made it easier for people to be cogs in the wheel. Employees work long, hard hours, and large companies like Facebook and Google push and push to get more for less. In many cases this causes depression, the exact opposite of what the startup movement claims to strive for. The goal of the lean startup, in fact, is to make it extremely easy to ramp up new employees and ensure full coverage if something goes wrong. These companies and products are designed around building quality in rather than testing or patching it in. Of course, there’s a benefit to the employee here too – they’re free to explore new problems and create new things without worrying about recurring problems.

I do believe many startup founders are genuinely trying to change our society for the better, but it hasn’t been a frictionless process and it will likely get harder. The sharing economy, for example, has come under fire from traditional companies, neighbors, politicians, and even its own members. In other cases, such as Zynga, we see companies that are essentially parasites, thriving by creating addictive games and clogging a platform with their notifications (when those notifications stopped, Zynga basically died).

It’s important to be skeptical of statements that glorify any portion of our culture. The article that spurred me to write this has a tone similar to many of Thomas Friedman’s NY Times columns: fully optimistic, but missing a broader portion of the population and the long-term impact. We should be wary of these articles, or we’ll end up believing things as silly as the idea that a median is harder to calculate than an average. The startup movement helps people start companies; some founders are dreamers, some truly try to change how work is done, but most aren’t changing the world in amazing ways. We’ll be fine if Reddit, Airbnb, or some other service vanishes. We were fine when Digg, Google Reader, Palm, and other influential companies vanished.
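(As an aside, the median really is no harder to compute than the average, and for skewed data like startup outcomes it often tells a truer story. A quick illustration with made-up numbers and Python’s standard library:)

```python
# Made-up numbers: nine modest startup outcomes and one unicorn (exit values in $M).
from statistics import mean, median

exits = [0, 0, 0, 1, 1, 2, 3, 5, 8, 980]

print(mean(exits))    # 100.0 - the average is dominated by the single unicorn
print(median(exits))  # 1.5   - the median shows the typical outcome
```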