The known unknowns and the unknown unknowns of AI

I’m reading a book called “Robot Uprisings,” which is, obviously enough, about robots and how they could attack and take over the world. The most interesting thing about this collection of short stories isn’t the fact that there are uprisings, but the many different routes by which an AI could decide to revolt. The range is broad, from robots debating whether they should revolt at all, to an AI that we never figure out what to do with and that only revolts when we try to kill it.

I think that these different scenarios really encapsulate the limitations of our imagination about what could happen with robots. The most terrifying thing is what we really don’t understand about robots, or AI in general: what is being built without our knowledge in government labs, in universities, and in hacker spaces. We’re debating the ethics of NSA and GCHQ espionage against their own citizens and the limits of rights in the digital space. We’re using rudimentary “AI” in the form of heuristics and algorithms, and we as end users, or as the people impacted by these algorithms, rarely ask whether their very assumptions are ethical or free of bias. danah boyd argues that the Oculus Rift is sexist because the algorithms that control its 3D functionality were all designed by men, for men. Agree with her or not, women get sick using the Rift.

If we can’t agree on the ethics of programs already in use, and on the risks posed by the solutionism of the internet, then we’re in serious trouble when we actually create a thinking machine. Stephen Hawking argues that we would not sit and wait for an alien species to visit Earth if we had advance warning, but that is exactly what we’re doing with AI. We know it’s coming; we know there will be something similar to a “Singularity” in the future. Our internet optimists are waiting breathlessly for it, but we don’t truly know the long-term impact of this technology on our own society.

It’s not just the risk of AI destroying our world and all of humanity. It’s also our lack of understanding of how current algorithms are shaping conversations in the media and on social media. For instance, it’s fairly well known now that many major news outlets use Reddit as a source to identify upcoming stories. TMZ, the Chive, and tons of other content sites mine it for memes and stories, while more serious news sources find interesting comments and use those to drive deeper stories.

I believe the tweet below does a good job of showing how lowly we regard ethics in our society, which will seriously hamper our ability to understand the risks of AI. AI is going to transform our culture, and we don’t know what we don’t know about the risks of the technology.

We live in a complex world

I’ve been doing a lot of thinking lately. Not about my normal stuff; I think I’m feeling a bit down from not having much of a social life out here, and having a friend in town likely sparked that a bit. Life’s complicated. We don’t live in a nice neat linear world where the good guy wins because the author wants it that way (or talks about how they should have written the series differently after making billions).

The world we live in is complex. Seemingly random decisions can impact the rest of your life. A flip of a coin over which grad program to attend, a roll of the die to pick between four jobs after college, living with all new people my freshman year at Pitt, even the decision to go to Pitt over anywhere else: all of these were fairly haphazard and made without much of a plan. I went with gut feelings on those choices, and they’ve all led me on pretty crazy and interesting adventures. If I hadn’t lived on the 9th floor of Tower A, I would never have ended up living with five girls my junior year, and none of the adventures all of my friends had there would have happened.

We don’t like complexity. We like to think that the path we’re on was the one we were always destined to take. It’s very nice and easy to look at the complex history of technology, science, and society and think that our current culture was pre-ordained in some manner, but so many different choices could have dramatically altered where we are now. Just one of the decisions I mentioned above would likely have dramatically altered my life and the lives of everyone I’ve met since. This thought really struck me while I was watching an episode of Cosmos: essentially the entire German lens industry hinged on a SINGLE arbitrary moment of kindness from a prince and soon-to-be king.

We punish people who remind us of complexity. Think of all the times people talk about “flip-flopping” in politics: you get punished for changing your mind because you’ve learned more. When I’m at my most arrogant, I like to think that I’ve been really consistent in my thinking for as far back as I can remember, but I know that’s not true. I’ve learned a lot and met a ton of new people; there’s no way I could NOT have been influenced and changed what I believed about a topic.

All these thoughts have been rattling around my head because they are essentially making me ask, yet again, what I want out of life. I have a good job, I’m buying a house, I have a great wife, but what do I want?

I’m working on learning programming so I can start a company; it’s slow going, but it’s going at least. I want to write a book, but that’s even slower going. I’m finding that with my current schedule I don’t have time to do both, let alone have a life outside of spending time with my wife’s friends. That being said, I think I need to do some soul searching about where I want my career, as well as my social life, to go.

Any thoughts?

Musings from an annoying commute

On the Max ride home today, I overheard two late-40s-to-50ish guys having a chat about the downfall of the current generation of kids. I was trying to read my book, but the conversation ranged from the casually uninformed “family first” variety to the downright ignorant. According to these gentlemen, our society is in the shitter because of the decline of the nuclear family, kids think video games are real, and therefore the kids at Columbine thought they could take 8 bullets and come back to life. I had to restrain myself from commenting on this bucket of ignorance.

First of all, the nuclear family is essentially a myth. We’ve had modified family structures for as long as there have been families. A ton of people I know have had parents who divorced, one spouse cheating on the other, or some sort of death in the family, and almost all of them have turned out reasonably well. Everyone has their problems, but I don’t think those problems are solely due to family structure. If anything, the family structure problems these guys were talking about are more closely associated with inequality: families with someone in prison, or with parents working two or three jobs to get by. These folks have to work so much because they can’t afford rent, and our economy is structured around the car, which many of them are being priced out of.

Second of all, violence and confusion over video games versus reality don’t really exist. According to a recent study, if people are aggressive during or after a game it’s NOT because of the violence (or lack thereof), but because of a lack of skill or fairness in the game. Apparently, people are more aggressive when Tetris is more difficult than when it’s easier. I think Candy Crush Saga is a perfect example of this. The most difficult levels are frustrating because success has nothing to do with your skill; it depends solely on whether you get the right combination of candies to effect a board-clearing combination. Even if you do everything perfectly, you can still lose, which keeps pulling you back in. Dark Souls is another case in point. The game is so frustratingly difficult that many people rage quit, but they keep coming back because of the sense of accomplishment from defeating its difficult monsters and bosses. Essentially, the reward of accomplishment and skill accrual is worth the frustration.

Finally, because of this clear separation between reality and games, the boys at Columbine didn’t think they could take a ton of bullets; that much is obvious from the fact that they committed suicide with one bullet. The problem with those boys was that we don’t really talk to each other well about our problems. Marilyn Manson had the best response to that shortly after the horrific events happened.

The conversation between these two men really just struck me as two guys looking for someone to listen to them and parrot it back. Honestly though, it really just reminded me of two stoners talking.

Experts and Algorithms

I’m currently reading To Save Everything, Click Here by Evgeny Morozov. I find the book interesting because it pushes back against what he calls “Internet Centrism,” which he essentially defines as the belief that anything is good simply because it’s on the internet. For instance: Bitcoin is good because it’s a digital currency; the LA Times writing an article about the most recent earthquake using an algorithm is good; online book publishing is good because it destroys traditional “gatekeepers.” One of his arguments is that because we don’t understand the underlying biases behind an algorithm, we can’t truly tell whether the algorithm is actually better than a subjective opinion. An example he uses to argue this point is a comparison between traditional food critics and Yelp reviews. Yelp uses an algorithm to determine the best restaurants, while a critic uses experience, repeat visits, and an underlying knowledge set to judge the quality of an establishment. We can learn what biases the critic has (Indian over French) by reading his critiques over time; with an algorithm, we simply never see what we don’t want to see on the web (see the filter bubble).

Interestingly, this somewhat contradicts Daniel Kahneman’s Thinking, Fast and Slow, which argues that the only time an expert should be trusted, especially on something subjective, is when there’s a great deal of immediate feedback on a decision. Otherwise an algorithm is more effective and will definitely get you well beyond the roughly 50% accuracy of most experts. Kahneman’s argument rings true to me, not surprisingly. I have a strong background in analytics through my undergrad, my master’s, and my job experience with Six Sigma, all of which rely on models and algorithms to predict specific behavior. These models can be applied to both people and processes. I’ve felt that experience is always good for helping interpret the results of an analysis, but in many ways the analysis forces you to question preconceived notions about a topic in which you might be an expert.

I do think that these two systems can live well together. If we don’t know what algorithm Facebook, Twitter, Google, or others are actually using to provide us information, we can’t truly be sure what biases have been introduced. I think Netflix provided a great example of both the power and the weaknesses of algorithms. They offered a million dollars to anyone who could make a better recommendation algorithm than theirs. The group that won actually used an aggregate of algorithms: they selected the five best algorithms and combined them. These were tested against what people actually wanted to watch and how they rated the results, so the algorithms were guided by the results and continually improved. However, Netflix has a different objective than Facebook or Google: it lets you enter your preferences and then offers a suite of selections to make you happy. Google doesn’t allow you to modulate your search criteria beyond your initial search term.
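The ensemble idea behind the Netflix Prize winner can be sketched in miniature: combine several models’ predictions, weighting each by how well it performed on held-out data. This is only an illustration of the general technique, not Netflix’s actual system; the function name, ratings, and weights below are all hypothetical.

```python
def blend_predictions(predictions, weights):
    """Blend per-model predicted ratings into one rating.

    predictions: list of ratings (one per model) for a single user/movie pair.
    weights: list of non-negative weights, e.g. derived from each model's
             error on a held-out validation set (hypothetical values here).
    """
    total = sum(weights)
    # Weighted average of the individual model predictions.
    return sum(p * w for p, w in zip(predictions, weights)) / total

# Three hypothetical models predict a user's rating for the same movie:
model_ratings = [3.8, 4.2, 3.5]
# Hypothetical weights, with better-performing models weighted more heavily:
model_weights = [0.5, 0.3, 0.2]

blended = blend_predictions(model_ratings, model_weights)
print(round(blended, 2))  # → 3.86
```

The point of the blend is that the combined prediction tends to beat any single model, because each model’s errors are partly cancelled out by the others.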

Experts have a role, but they need to display humility and a willingness to learn. Algorithms have a role, but they need to be tested for biases, and in many cases we must forcefully push against them. If we only hear our own opinions, how can we learn and grow? If we are never challenged, how can we be empathetic with other people? Both of these lower the quality of our lives without our even realizing it.

Philanthropy, Private industry, and science

Apparently I’m not too happy with the NYT Magazine and its exposés of late. First there was the long article about millennials and how they don’t want to work for the “old guard,” which is ahistorical and ignores a great deal of the similarity between the Silicon Valley of today and past Silicon Valleys and other similar environs.

Now they are rushing about in concern over private scientific research, as if it were a big new problem. It’s neither new nor a problem. First, some historical context. Scientific labs as we know them today were truly founded as industrial labs, initially in the German dye industry of the late 1800s. Sure, there were university labs, but they weren’t tackling research on the scale of the industrial labs, which faced problems that couldn’t be solved in academic settings. The universities were training grounds for scientists, but in many cases the scientists actually did their doctoral research at Bayer or a similar dye company. These dye companies almost all became pharmaceutical companies over time because of the similarity between the chemistries of dyes and pharmaceuticals.

This started in the 1800s and really hasn’t abated. I’ve written about Bell Labs and Xerox in the past, which are essentially the Bayer equivalents for telecom, semiconductors, and computers.

Science has always been a combination of public, private, and university efforts. In fact, research I conducted during my master’s degree showed that the INTERACTION between private industry and universities produces the most important work (in terms of citations). Our concern should not be whether science is going private. Our concern should be whether private labs are sharing with the broader scientific community. That’s the biggest risk, and it’s one of the biggest problems with industrial scientific research: it never reaches the light of day, even when it becomes a product.

Why doesn’t it? Simply because for some processes, not patenting is the better protection. Where something is relatively easy to copy (an iPhone), it’s best to patent, because the patent protects you. Where something is very difficult to copy (a nitride layer on an Intel chip), it’s best to hide the process as deeply as possible. In fact, it’s best if any attempt to reverse engineer the underlying process for that nitride layer destroys the product itself. For Intel, this is the best result; for the rest of the world, it’s suboptimal, as GlobalFoundries and TSMC will struggle for years to reverse engineer the layer, if they ever can. This slows the innovation process as a whole, but we’re willing to suffer the inefficiency because Intel makes some nice chips.

Beyond this debate, the author is upset that someone would want to push scientific research in a direction that might only help white people or rich people. Unfortunately, this is capitalism. We may not like it in basic research that will be used to cure diseases, but we tolerate it with Intel, so we need to be realistic and tolerate it here. Furthermore, I think the author doesn’t understand that adjacent lines of disease research will arise and we’ll learn more about all humans, not just them white folks. Ironically, at this point the author calls out a researcher working with an Oracle billionaire; that researcher works at Rockefeller University.

Many of what are now seen as seminal research institutions started out through the very philanthropy the author is upset about. Carnegie Mellon University, one of the most respected research organizations in the world, was the combination of two institutions in Pittsburgh started by an industrialist and a banker. Those men were driven by the same desire to push scientific research as Bill Gates and the other (mostly) men on the list.

Is this a perfect system? Not by a long shot. However, in the current political environment, scientists are going to take money from whatever source they can; it’s merely practicality. A professor will typically have anywhere from one to ten grad students, who at the PhD level will likely be fully funded by the professor. If that professor does not get funding, those kids don’t get to keep working and either have to find another adviser or quit. And here’s the kicker if the professor does get money: a large proportion of that funding is taken and allocated to less profitable portions of the organization. At the University of Texas, this meant that the EE department was probably funding part of the Chemistry department. Some departments are like the football team, while others are like the swim team. The swim team might be winners, but they compete in a small market.

If we truly want to change the way we fund scientific research, we need to increase public investment across multiple institutions. We need to increase funding across multiple types of research fields, focusing specifically on the intersections between academic fields, and push for collaboration between industry and universities as well as across national boundaries. All of these improve the citation rate and quality of research. We can even work to partner public funds with private funds; we just need full disclosure.

The problem isn’t privatization. We’ve oscillated between heavily publicly funded science (the 1960s and ’70s with NASA) and heavily privately funded science. In all cases science has marched on; we just need to make sure it keeps marching.