I’m currently reading To Save Everything, Click Here by Evgeny Morozov. I find the book interesting because it pushes back against what he calls “Internet Centrism,” which he essentially defines as the assumption that something is good because it’s on the internet. For instance: Bitcoin is good because it’s a digital currency; the LA Times using an algorithm to write an article about the most recent earthquake; or online book publishing (good because it destroys traditional “gatekeepers”). One of his arguments is that because we don’t understand the underlying biases behind an algorithm, we can’t truly tell if the algorithm is actually better than a subjective opinion. An example he uses to argue this point is a comparison between traditional food critics and Yelp reviews. Yelp uses an algorithm to determine the best restaurants, while a critic uses experience, repeated visits, and an underlying knowledge set to judge the quality of an establishment. We can learn what biases the critic has (Indian over French, say) by reading his critiques over time; with an algorithm, we just never see what it decides we don’t want to see on the web (see the filter bubble).
Interestingly, this somewhat contradicts Daniel Kahneman’s Thinking, Fast and Slow, which argues that an expert should be trusted, especially on something subjective, only when there’s a great deal of immediate feedback on a decision. Otherwise an algorithm is more effective and will get you well beyond the 50% accuracy of most experts. Kahneman’s argument rings true to me, not surprisingly. I have a strong background in analytics through my undergrad, my master’s, and job experience with Six Sigma. All of these rely on models and algorithms to predict specific behavior, and those models can be applied to both people and processes. I’ve felt that experience is always good for helping interpret the results of an analysis, but in many ways the analysis forces you to question preconceived notions about a topic in which you might be an expert.
I do think these two systems can live well together. If we don’t know what algorithm Facebook, Twitter, Google, or others are actually using to provide us information, we can’t truly be sure what biases have been introduced. I think Netflix gave us a great example of both the power and the weaknesses of algorithms. They offered a million dollars to anyone who could build a better recommendation algorithm than theirs. The group that won actually used an aggregate of algorithms: they selected the five best algorithms and combined them. These were tested against what people actually wanted to watch and how they rated the results, so it was algorithms guided by outcomes and continually improved. However, Netflix has a different objective than Facebook or Google: it lets you enter your preferences and then offers a suite of selections to make you happy. Google doesn’t allow you to modulate your search criteria beyond your initial search term.
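The blending idea behind the winning Netflix Prize entry can be sketched in a few lines of Python. To be clear, this is a toy illustration, not the actual Prize-winning code: the model predictions and ratings below are invented, the weights are just an equal split, and the real entries blended far more predictors with learned weights. The point it shows is that when different models make different mistakes, averaging them can score better than any single one.

```python
def rmse(predicted, actual):
    """Root-mean-square error, the metric the Netflix Prize scored on."""
    return (sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)) ** 0.5

def blend(predictions, weights):
    """Weighted average of several models' predictions for the same items."""
    return [sum(w * preds[i] for w, preds in zip(weights, predictions))
            for i in range(len(predictions[0]))]

# Hypothetical predicted star ratings from three different models for four films
model_a = [4.4, 1.6, 5.2, 2.8]
model_b = [3.6, 2.4, 4.8, 3.2]
model_c = [4.2, 2.2, 4.6, 3.1]
actual  = [4.0, 2.0, 5.0, 3.0]

# Equal-weight blend: the individual errors partly cancel each other out
blended = blend([model_a, model_b, model_c], [1 / 3, 1 / 3, 1 / 3])

for name, preds in [("a", model_a), ("b", model_b), ("c", model_c)]:
    print(f"model {name}: {rmse(preds, actual):.3f}")
print(f"blend:   {rmse(blended, actual):.3f}")  # lower than any single model here
```

The design point matters for the bias argument above: a blend is only as good as the feedback loop that scores it, so the “guided by results” part is what kept the combined algorithms honest.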
Experts have a role, but they need to display humility and a willingness to learn. Algorithms have a role, but they need to be tested for biases, and in many cases we must forcefully push against them. If we only hear our own opinions, how can we learn and grow? If we are never challenged, how can we be empathetic with other people? Both of these lower the quality of our lives, and we don’t even realize it.