The known unknowns and the unknown unknowns of AI

I’m reading a book called “Robot Uprisings,” which is quite obviously about robots and how they could attack and take over the world. The most interesting thing about this collection of short stories isn’t the fact that there are uprisings, but the many different routes by which an AI could decide to revolt. The scenarios range from robots debating whether they should revolt at all, to an AI we never figure out what to do with and that only revolts when we try to kill it.

I think these different scenarios really encapsulate the limitations of our imagination about what could happen with robots. The most terrifying thing is what we really don’t understand about robots, or AI in general: what is being built without our knowledge in government labs, in universities, and in hacker labs. We’re debating the ethics of NSA and GCHQ espionage against their own citizens and the limits of rights in the digital space. We’re using rudimentary “AI” in the form of heuristics and algorithms, and we as end users, or the people impacted by these algorithms, have no way of knowing whether their underlying assumptions are even ethical, free of bias, or anything along those lines. danah boyd argues that the Oculus Rift is sexist because the algorithms that control the 3D functionality are all designed by men, for men. Agree with her or not, women get sick using the Rift.

If we can’t agree on the ethics of programs already in use, or on the risks posed by the solutionism of the internet, then we’re in serious trouble when we actually create a thinking machine. Stephen Hawking argues that we would not sit and wait for an alien species to come and visit Earth if we had advance warning, yet that is exactly what we’re doing with AI. We know it’s coming, and we know there will be something similar to a “Singularity” in the future. Our internet optimists are waiting breathlessly for it, but we don’t truly know the long-term impact this technology will have on our society.

It’s not just the risk of AI destroying our world and all of humanity. It’s also our lack of understanding of how current algorithms are shaping our conversations in the media and on social media. For instance, it’s fairly well known now that many major news outlets use Reddit as a source to identify upcoming stories. TMZ, the Chive, and tons of other content sites mine it for memes and stories, while more serious news sources find interesting comments and use those to drive more interesting stories.

I believe the tweet below does a good job of showing how little we value ethics in our society. That will seriously hamper our ability to understand the risks of AI. AI is going to transform our culture, and we don’t know what we don’t understand about the risks of the technology.

Experts and Algorithms

I’m currently reading To Save Everything, Click Here by Evgeny Morozov. I find the book interesting because it pushes back against what he calls “Internet-centrism,” which he essentially defines as the belief that anything is good simply because it’s on the internet. For instance, Bitcoin is good because it’s a digital currency, or the LA Times having an algorithm write the article about the most recent earthquake, or online book publishing (good because it destroys traditional “gatekeepers”). One of his arguments is that because we don’t understand the underlying biases behind an algorithm, we can’t truly tell whether the algorithm is actually better than a subjective opinion. An example he uses to argue this point is a comparison between traditional food critics and Yelp reviews. Yelp uses an algorithm to determine the best restaurants, while a critic uses experience, repeat visits, and an underlying knowledge set to judge the quality of an establishment. We can learn what biases the critic has (Indian over French, say) by reading his critiques over time; with an algorithm, we simply never see what it has decided we don’t want to see on the web (see the filter bubble).
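To make that concrete, here is a toy sketch, not Yelp’s actual method, of how unstated choices inside a ranking formula work like a critic’s hidden preferences. The recency and power-user weights below are purely hypothetical assumptions; change them and a different restaurant comes out on top.

```python
# Toy ranking sketch (not Yelp's real algorithm): the weights below are
# hypothetical choices, and they quietly determine which restaurant "wins".

def score(reviews, recency_weight=2.0, power_user_weight=1.5):
    """Weighted average star rating with two hidden assumptions baked in."""
    total, weight_sum = 0.0, 0.0
    for stars, days_old, reviewer_review_count in reviews:
        w = 1.0
        if days_old < 30:                  # assumption: recent reviews count more
            w *= recency_weight
        if reviewer_review_count > 100:    # assumption: prolific reviewers count more
            w *= power_user_weight
        total += stars * w
        weight_sum += w
    return total / weight_sum

# Each review is (stars, days_old, reviewer's total review count); data is made up.
bistro = [(5, 10, 150), (3, 400, 5), (3, 365, 2)]
curry_house = [(4, 200, 20), (4, 250, 10), (4, 300, 8)]

print("bistro:", round(score(bistro), 2))            # 4.2 with the default weights
print("curry house:", round(score(curry_house), 2))  # 4.0 with the default weights
# Set both weights to 1.0 and the ranking flips: the bias lives in choices
# the end user never sees.
```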

Interestingly, this somewhat contradicts Daniel Kahneman’s Thinking, Fast and Slow, which argues that the only time an expert should be trusted, especially on something subjective, is when there’s a great deal of immediate feedback on a decision. Otherwise an algorithm is more effective and will get you well beyond the roughly 50% accuracy of most experts. Kahneman’s argument rings true to me, not surprisingly. I have a strong background in analytics through my undergrad, my master’s, and my job experience with Six Sigma, all of which rely on models and algorithms to predict specific behavior. These models can be applied to both people and processes. I’ve felt that experience is always good for helping interpret the results of an analysis, but in many ways the analysis forces you to question preconceived notions about a topic in which you might be an expert.

I do think these two systems can live well together. If we don’t know what algorithm Facebook, Twitter, Google, or others are actually using to provide us information, we can’t truly be sure what biases have been introduced. Netflix gave us a great example of the power and the weaknesses of algorithms at the same time. They offered a million dollars to anyone who could make a better recommendation algorithm than theirs. The group that won actually used an aggregate of algorithms: they selected from the five best algorithms and combined them. These were tested against what people actually wanted to watch and how they rated the results, so it was algorithms guided by real feedback and continually improved. However, Netflix has a different objective than Facebook or Google: it lets you enter your preferences and then offers a suite of selections to make you happy. Google doesn’t allow you to modulate your search criteria beyond your initial search term.
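As a rough illustration of that ensemble idea, here is a minimal sketch of blending several imperfect predictors, weighted by how well the combination matches ratings people actually gave. The three “models,” the ratings, and the least-squares blend are my own made-up stand-ins, not the winning team’s actual method.

```python
# Minimal ensemble sketch: blend several predictors and fit the blend weights
# to observed ratings. All numbers here are invented for illustration.
import numpy as np

# Predicted ratings for six (user, movie) pairs from three hypothetical models.
model_a = np.array([3.8, 2.9, 4.4, 3.1, 4.9, 2.2])
model_b = np.array([4.1, 3.2, 4.0, 2.8, 4.6, 2.5])
model_c = np.array([3.5, 3.0, 4.6, 3.3, 5.0, 2.0])
predictions = np.stack([model_a, model_b, model_c], axis=1)

# Ratings those users actually gave: the feedback that guides the blend.
actual = np.array([4.0, 3.0, 4.5, 3.0, 5.0, 2.0])

# Least-squares blend: find the weights that best fit the observed ratings.
weights, *_ = np.linalg.lstsq(predictions, actual, rcond=None)
blended = predictions @ weights

rmse = lambda p: np.sqrt(np.mean((p - actual) ** 2))
print("blend weights:", np.round(weights, 2))
print("best single model RMSE:", round(min(rmse(model_a), rmse(model_b), rmse(model_c)), 3))
print("blended RMSE:", round(rmse(blended), 3))
```

On new data a blend isn’t guaranteed to win, but on the feedback it was fit to it can’t do worse than any single model it contains, which is roughly why combining algorithms worked so well in the Netflix contest.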

Experts have a role, but they need to display humility and a willingness to learn. Algorithms have a role, but they need to be tested for biases, and in many cases we must forcefully push back against them. If we only hear our own opinions, how can we learn and grow? If we are never challenged, how can we be empathetic with other people? Both of these lower the quality of our lives, and we don’t even realize it.