Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin
My rating: 5 of 5 stars
This book builds on the research in Algorithms of Oppression: How Search Engines Reinforce Racism and Dark Matters: On the Surveillance of Blackness, so I definitely recommend reading those two books first. I'm not alone in that: in one of the talks I've watched Benjamin give, she explicitly mentions those books as influences. I really enjoyed this book; it brought together ideas from my own master's degree, including the complexity of how technology is used. In one class we specifically discussed Robert Moses's bridges in New York (despite this being taught in the Netherlands), which were designed to exclude the poor by preventing buses from passing under them. In this book she discusses these bridges and how they can ensnare the very people the design was expected to benefit: a bus full of rich white kids, returning from a trip to Europe, went under one of the bridges, the driver hit the top of the overpass, and six people were seriously injured.
She modernizes these examples by describing how algorithms are created to approximate details about people, such as determining their ethnicity in order to provide "targeted services." Due to historical redlining, the Jim Crow era practice of creating whites-only enclaves in the suburbs and in portions of cities, the zip code has become a reliable indicator of ethnicity and race. She gives the example of Diversity, Inc., which creates ethnic and racial classifications for companies that are hiring. They look at people's names and assess their ethnicity; however, because of the history of slavery, many African Americans have white-sounding names, like Sarah Johnson. To "correctly" identify Sarah's ethnicity, the company uses her zip code to assign her a race.
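To make the mechanics concrete, here is a minimal Python sketch of how such a proxy classifier might work. Everything in it is invented for illustration: the function name, the lookup tables, and the probabilities are my own stand-ins, not anything from Benjamin's book or a real vendor.

```python
# Hypothetical sketch of a name-plus-zip-code race classifier.
# All tables and numbers below are invented for illustration only.

# Surname alone is ambiguous for many African American families,
# since surnames like "Johnson" are common across racial groups.
SURNAME_PROBABILITIES = {
    "johnson": {"white": 0.58, "black": 0.36, "other": 0.06},
}

# Zip code demographics carry the footprint of redlining, so they
# "resolve" the ambiguity. These proportions are made up.
ZIP_DEMOGRAPHICS = {
    "60621": {"white": 0.03, "black": 0.94, "other": 0.03},
    "60614": {"white": 0.81, "black": 0.05, "other": 0.14},
}

def infer_race(surname: str, zip_code: str) -> str:
    """Return the 'most likely' race label, falling back to zip code
    when the surname is ambiguous. The fallback is exactly the step
    that turns residential segregation into an algorithmic input."""
    name_probs = SURNAME_PROBABILITIES.get(surname.lower(), {})
    if name_probs and max(name_probs.values()) > 0.90:
        return max(name_probs, key=name_probs.get)
    zip_probs = ZIP_DEMOGRAPHICS.get(zip_code, {"unknown": 1.0})
    return max(zip_probs, key=zip_probs.get)

# The same "Sarah Johnson" gets a different label depending only on
# where she lives.
print(infer_race("Johnson", "60621"))  # black
print(infer_race("Johnson", "60614"))  # white
```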
Overall, I found a lot of the examples in this book very illuminating. Benjamin finds the approach to design favored in Silicon Valley wanting and exclusionary: it is primarily focused on empathizing in order to make money, which in many cases means empathizing with whiteness. Furthermore, Benjamin argues that empathy can lead to skewed results, such as body camera video generating empathy for police officers even when they are killing Black people over crimes that aren't capital offenses, or no crime at all.
As an engineer, I took this book as a warning. That we need to understand how data impacts those around us. That data which might seem harmless to me could cause serious harm to someone else. That algorithms which seem to be doing good could quickly be turned into something bad. Facial recognition is a great example: Facebook tags people in photos without consent, and this can be exploited by law enforcement. Furthermore, since facial recognition software is so inaccurate, it can misclassify a person as the wrong sex, as the wrong person, or, in extremely bad past cases, as an animal.
Furthermore, engineers have a responsibility to ensure our work is used to create more equity in the world. Benjamin highlights a few different organizations that are working to ensure justice and equity for everyone. Maybe it's time that software engineers and developers bear responsibility for this the same way a civil engineer must ensure a bridge is safe.
I recommend that anyone who works at a social media company read this book. Anyone building algorithms for banking, insurance, hiring, or housing needs to understand that algorithms aren't objective. They are only as objective as our history, and our history has been neither objective nor equitable. We must change that.
The known unknowns and the unknown unknowns of AI
I’m reading a book called “Robot Uprisings,” which is quite obviously about robots and how they could attack and take over the world. I think the most interesting thing about this collection of short stories isn’t the fact that there are uprisings, but the many different routes by which an AI could come to revolt. There’s a broad range, from robots debating whether they should revolt at all, to an AI that we never figure out what to do with and that only revolts when we try to kill it.
I think these different scenarios really encapsulate the limits of our imagination about what could happen with robots. The most terrifying thing is what we really don’t understand about robots, or AI in general: what is being built without our knowledge in government labs, in universities, and in hacker spaces. We’re debating the ethics of the NSA’s and GCHQ’s espionage on their own citizens and the limits of rights in the digital space, yet we’re already relying on rudimentary “AI” in the form of heuristics and algorithms. As end users, or as the people impacted by these algorithms, we rarely question whether their underlying assumptions are ethical or free of bias. danah boyd argues that the Oculus Rift is sexist because the algorithms that control its 3D functionality were all designed by men, for men. Agree with her or not, women get sick using the Rift.
If we can’t agree on the ethics of programs already in use, or on the risks posed by internet solutionism, then we’re in serious trouble when we actually create a thinking machine. Stephen Hawking argues that we would not sit and wait for an alien species to visit Earth if we had advance warning, yet that is exactly what we’re doing with AI. We know it’s coming; we know there will be something similar to a “Singularity” in the future. Our internet optimists are waiting breathlessly for it, but we don’t truly know the long-term impact of this technology on how it will shape our own society.
It’s not just the risk of AI destroying our world and all of humanity. It’s also our lack of understanding of how current algorithms are shaping conversations in the media and on social media. For instance, it’s fairly commonly known now that a lot of major news outlets use Reddit as a source to identify upcoming stories. TMZ, the Chive, and tons of other content sites mine it for memes and stories, while more serious news sources find interesting comments and use them to drive deeper stories.
I believe the tweet below really does a good job of showing how lowly we regard ethics in our society, and that will seriously hamper our ability to understand the risks of AI. AI is going to transform our culture, and we don’t know what we don’t know about the risks of the technology.
Experts and Algorithms
I’m currently reading To Save Everything, Click Here by Evgeny Morozov. I find the book interesting because it really pushes back against what he calls “Internet-centrism,” which he essentially defines as the belief that something is good simply because it’s on the internet: Bitcoin is good because it’s a digital currency, the LA Times having an algorithm write the article on the most recent earthquake is good, online book publishing is good because it destroys traditional “gatekeepers.” One of his arguments is that because we don’t understand the underlying biases of an algorithm, we can’t truly tell whether the algorithm is actually better than a subjective opinion. An example he uses to make this point is a comparison between traditional food critics and Yelp reviews. Yelp uses an algorithm to determine the best restaurants, while a critic draws on experience, repeat visits, and an underlying body of knowledge to judge the quality of an establishment. We can learn a critic’s biases (Indian over French, say) by reading their critiques over time; with an algorithm, we just never see what it filters out (see the filter bubble).
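Morozov’s point about hidden biases is easy to demonstrate. The Python sketch below (with entirely invented restaurants, scores, and weights) shows how two equally plausible ranking formulas surface different “best” restaurants; a user who only sees the final list has no way to tell which weighting was used.

```python
# Toy illustration of hidden bias in a ranking algorithm.
# Restaurants, scores, and weights are all invented.

restaurants = [
    # (name, average_stars, review_count, recent_activity)
    ("Chez Nous",    4.8,  35, 0.2),   # tiny but excellent
    ("Curry Corner", 4.5, 800, 0.9),   # popular and buzzing
    ("Pizza Palace", 4.2, 300, 0.6),
]

def rank(w_stars: float, w_popularity: float, w_recency: float) -> list[str]:
    """Rank restaurants by a weighted mix of quality, popularity,
    and recency. The weights are the hidden editorial choice."""
    def score(r):
        _, stars, count, recent = r
        return w_stars * stars + w_popularity * (count / 1000) + w_recency * recent
    return [name for name, *_ in sorted(restaurants, key=score, reverse=True)]

# A "quality first" site and a "popularity first" site disagree about
# the best restaurant, and neither ranked list reveals why.
print(rank(1.0, 0.1, 0.1))  # Chez Nous first
print(rank(0.2, 1.0, 0.5))  # Curry Corner first
```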
Interestingly, Morozov’s view somewhat contradicts Daniel Kahneman’s Thinking, Fast and Slow, which argues that an expert should only be trusted, especially on subjective questions, when there is a great deal of immediate feedback on their decisions. Otherwise an algorithm is more effective and will definitely get you well beyond the 50% accuracy of most experts. Kahneman’s argument rings true to me, not surprisingly: I have a strong background in analytics through my undergrad, my master’s, and my job experience with Six Sigma, all of which rely on models and algorithms to predict specific behavior. These models can be applied to both people and processes. I’ve felt that experience is always good for helping interpret the results of an analysis, but in many ways the analysis forces you to question preconceived notions about a topic in which you might be an expert.
I do think that these two systems can live well together. If we don’t know what algorithms Facebook, Twitter, Google, or others are actually using to provide us information, we can’t truly be sure what biases have been introduced. I think Netflix provided us with a great example of the power and the weaknesses of algorithms at the same time. They offered a million dollars to anyone who could make a better algorithm than theirs. The group that won actually used an aggregate of algorithms: they selected the five best algorithms and combined them. These were tested against what people actually wanted to watch and how they rated the results, so it was algorithms guided by the results and continually improved. However, Netflix has a different objective than Facebook or Google: it lets you enter your preferences and then offers a suite of selections to make you happy. Google doesn’t allow you to modulate your search criteria beyond your initial search term.
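Here is a minimal sketch of that blending idea, with toy predictors standing in for the real Netflix Prize entries. The key move is that the blend weights are fit against what people actually rated, which is the “guided by the results” step described above.

```python
# Toy illustration of blending several recommendation algorithms.
# The three "algorithms" are fake predictors invented for this sketch.
import numpy as np

rng = np.random.default_rng(42)

# Star ratings used to evaluate the blend (toy data).
true_ratings = rng.uniform(1.0, 5.0, size=100)

# Three imperfect predictors: each is the truth plus its own bias and noise.
predictions = np.stack([
    true_ratings + rng.normal(0.5, 0.8, 100),   # systematically too generous
    true_ratings + rng.normal(-0.3, 0.6, 100),  # slightly pessimistic
    true_ratings + rng.normal(0.0, 1.2, 100),   # unbiased but very noisy
])

def rmse(pred: np.ndarray) -> float:
    """Root mean squared error against the actual ratings."""
    return float(np.sqrt(np.mean((pred - true_ratings) ** 2)))

# Fit blend weights by least squares against the actual ratings.
weights, *_ = np.linalg.lstsq(predictions.T, true_ratings, rcond=None)
blend = weights @ predictions

for i, p in enumerate(predictions):
    print(f"algorithm {i}: RMSE = {rmse(p):.3f}")
print(f"blend:       RMSE = {rmse(blend):.3f}")  # beats every single model
```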
Experts have a role, but they need to display humility and a willingness to learn. Algorithms have a role, but they need to be tested for biases, and in many cases we must forcefully push back against them. If we only hear our own opinions, how can we learn and grow? If we are never challenged, how can we be empathetic toward other people? Both of these lower the quality of our lives, and we don’t even realize it.