Capitalism vs. Robots – which is more terrifying?

In an article that recently resurfaced on Reddit, famed astrophysicist Stephen Hawking argues that we should fear capitalism more than robots. The timing is interesting: we’re in an election cycle in which the two populist candidates are opposites in many regards, especially in terms of democratic socialism vs. crony capitalism (Sanders vs. Trump). It matters in the broader context of emerging technology as well, since many technology leaders, such as Elon Musk, have expressed fear of AI, while other leaders are running full steam ahead toward more and more automation.

Hawking isn’t the only person thinking about the economy and technology, though. Warren Buffett just released Berkshire Hathaway’s annual report with some pretty stark warnings about the future of capitalism in action at the corporate level, indicating that innovation does have a dark side. While he’s speaking as a manager, economists are looking into this too: in The Second Machine Age, the authors argue that the best is still to come, because man and machine work best together, not separately.

Unfortunately, this will only push the ceiling up on the skills required for jobs, rather than expanding opportunities. Uber is a perfect example. Driving for Uber isn’t difficult because of skill requirements; it’s difficult because it’s boring and relatively tiring. Uber has been pushing its prices down over a months-to-years process, and that will continue through the introduction of “autonomous” cars, or robot cars. At that point a large number of low-skilled workers will find themselves out of a job, including people I know and probably people you know. This has been Uber’s plan for a long time, as the company understands that people are its biggest cost and risk, especially in light of the mass shooting in Michigan.

Uber isn’t the only major company looking to replace workers like this. In fact, it’s likely that a lot of white-collar jobs are going to go this route as well, including in industries that notoriously relied on people who then made unethical decisions, such as the financial industry. We’ve all heard of high-frequency trading (HFT), which is basically a set of algorithms that make decisions about buying and selling stocks based on micro-trends. This is going to keep expanding into newer areas. It’s been well remarked that most brokers are no better than a coin flip (The Black Swan; The Drunkard’s Walk; and Thinking, Fast and Slow all reference this), so it’s highly likely that algorithms will do better than people at picking winners and losers on the stock market. It’s also likely that those algorithms will have access to more data, faster, than any person could ever analyze and act upon.
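To make that concrete, here’s a toy sketch of the kind of rule-based trend-following logic involved. It’s a simple moving-average crossover and purely illustrative: real HFT systems operate on microsecond order-book data, not a handful of prices, and the windows and tick values below are made up.

```python
from collections import deque

def moving_average(prices, window):
    """Simple average over the last `window` prices."""
    recent = list(prices)[-window:]
    return sum(recent) / len(recent)

def trade_signal(prices, short_window=5, long_window=20):
    """Toy micro-trend rule: buy when the short-term average crosses
    above the long-term average, sell when it falls below."""
    if len(prices) < long_window:
        return "hold"  # not enough data to judge a trend yet
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

# Stand-in for a live tick feed; real systems consume order-book
# updates at microsecond resolution.
ticks = deque(maxlen=100)
for price in [101.0, 101.2, 100.9, 101.5, 102.1]:
    ticks.append(price)
    print(price, trade_signal(ticks, short_window=2, long_window=3))
```

The point is that nothing in the decision loop requires a human; the “judgment” is just arithmetic over a data stream, which is exactly why it scales faster than any broker can.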

This interaction between capitalism and automation creates huge risks for the economy. A few years ago there was a “flash crash,” caused largely by the high-frequency trading I mentioned above. As more and more of the financial industry comes under the purview of robo-traders, these sorts of events will become more likely. Yet these institutions have pushed most of the risk onto the public while retaining the bulk of the profits from these robots.

As these trends continue across industries, companies’ local optimization toward automation and more robots will gradually push people out of jobs faster than new categories of jobs can open up. I think it’s likely we’ll see more companies go the route of Uber, using tools like Amazon’s Mechanical Turk to get processes started before they invest effort and energy into automating them. Once a process is shown to be successful, the effort to remove the human element will keep increasing until those workers are out of a job. What we will eventually see is white-collar migratory workers going from one type of tech job to another, only to be replaced by automation in the long run.
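As a sketch of that pattern: route tasks to humans first, harvest their answers as training data, then cut over to a model once it’s accurate enough. Everything here, the task router, the thousand-example cutoff, the 95% threshold, and the stub functions, is hypothetical, not any real company’s pipeline.

```python
# Hypothetical sketch of the "humans first, automate later" pattern.
# Thresholds and stubs are illustrative assumptions.

labeled_examples = []      # (task_input, human_answer) pairs collected over time
ACCURACY_THRESHOLD = 0.95  # cut humans out once the model clears this
MIN_TRAINING_SIZE = 1000

def ask_human(task_input):
    """Stand-in for posting the task to a crowd platform such as
    Amazon Mechanical Turk and waiting for a worker's answer."""
    return input(f"Classify this item: {task_input!r} > ")

def model_predict(task_input):
    """Stand-in for a classifier trained on `labeled_examples`."""
    return "placeholder-label"

def model_accuracy():
    """Stand-in for evaluating the model on held-out human answers."""
    return 0.0  # no model trained yet in this sketch

def handle_task(task_input):
    if len(labeled_examples) >= MIN_TRAINING_SIZE and model_accuracy() >= ACCURACY_THRESHOLD:
        # The model is good enough: the human is out of the loop.
        return model_predict(task_input)
    answer = ask_human(task_input)
    labeled_examples.append((task_input, answer))  # workers train their replacement
    return answer
```

The grim detail is in the append: every task a worker completes is also a training example that brings their own replacement closer.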

The impact on the economy in the long run, and on the human condition in the short term, will be catastrophic, as our current institutions are not designed to handle this sort of change in the nature of labor. The incentives for this behavior have been in place for decades and have been pushing bad actors to be worse, such as Turing Pharmaceuticals’ CEO price-gouging dying patients, simply because the market could support it.

The known unknowns and the unknown unknowns of AI

I’m reading a book called “Robot Uprisings,” which is quite obviously about robots and how they could attack and take over the world. The most interesting thing about this collection of short stories isn’t the fact that there are uprisings, but the many different routes by which an AI could decide to revolt. The stories range from robots debating whether they should revolt at all to an AI we never figure out what to do with, which revolts only when we try to kill it.

I think these different scenarios really encapsulate the limits of our imagination about what could happen with robots. The most terrifying thing is what we really don’t understand about robots or AI in general: what is being built without our knowledge in government labs, in universities, and in hacker labs. We’re still debating the ethics of the NSA’s and GCHQ’s espionage on their own citizens and the limits of rights in the digital space. We’re using rudimentary “AI” in the form of heuristics and algorithms, and we as end users rarely question how these algorithms affect us, or whether their underlying assumptions are even ethical or free of bias. danah boyd argues that the Oculus Rift is sexist because the algorithms that control its 3D functionality were all designed by men, for men. Agree with her or not, women get sick using the Rift.

If we can’t agree on the ethics of programs already in use, and on the risks posed by the solutionism of the internet, then we’re in serious trouble when we actually create a thinking machine. Stephen Hawking argues that we would not sit and wait for an alien species to come and visit Earth if we had advance warning, but that is exactly what we’re doing with AI. We know it’s coming; we know there will be something similar to a “Singularity” in the future. Our internet optimists are waiting breathlessly for it, but we don’t truly know the long-term impact of this technology or how it will shape our society.

It’s not just the risk of AI destroying our world and all of humanity. It’s also our lack of understanding of how current algorithms are shaping conversations in the media and on social media. For instance, it’s fairly common knowledge now that many major news outlets use Reddit as a source to identify upcoming stories. TMZ, the Chive, and tons of other content sites mine it for memes and stories, while more serious news sources find interesting comments and use them to drive deeper stories.
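As a rough illustration of how easy this mining is, here’s a minimal sketch that pulls Reddit’s public “rising” listing and filters for posts gaining traction. The JSON endpoint is Reddit’s real public interface, but the score threshold and ranking heuristic are arbitrary assumptions for the example.

```python
import requests

# Reddit serves its listings as JSON; /rising surfaces posts that are
# gaining traction quickly, which is what a story-mining outlet wants.
URL = "https://www.reddit.com/r/all/rising.json?limit=25"

def trending_story_leads(min_score=500):
    """Return rising posts whose score suggests a story may be breaking.
    The 500-point threshold is an arbitrary illustrative heuristic."""
    resp = requests.get(URL, headers={"User-Agent": "story-scout/0.1"})
    resp.raise_for_status()
    posts = resp.json()["data"]["children"]
    leads = [
        (p["data"]["score"], p["data"]["num_comments"],
         p["data"]["title"], p["data"]["permalink"])
        for p in posts
        if p["data"]["score"] >= min_score
    ]
    return sorted(leads, reverse=True)

for score, comments, title, link in trending_story_leads():
    print(f"{score:>6} pts / {comments:>5} comments  {title}")
```

A few dozen lines like these, run on a schedule, are enough to let an algorithm quietly decide which stories a newsroom even considers.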

I believe the tweet below does a good job of showing how lowly we regard ethics in our society. That attitude will seriously hurt our ability to understand the risks of AI. AI is going to transform our culture, and we don’t know what we don’t understand about the risks of the technology.

Ethics and Values; Military and Espionage

We didn’t get to have a national conversation about government espionage until Snowden released all those documents, and now we’re having a pretty vocal one in two of the three branches of our government (well, all three, since Obama seems to contradict himself fairly often). Today on Vice’s Motherboard I read an article claiming the military is going cyberpunk. As the article notes, the military has used flight simulators for years, because crashing one of those is a lot cheaper than crashing a real plane. Stealth bombers cost close to $2 billion each, so learning to fly one is best done in a simulator rather than in a real plane, which also removes the risk of death in the event of a crash.

How will this trend continue? Apparently the military is investing in virtual-reality battlegrounds. These will help train soldiers for different combat situations without having to build extremely expensive facilities, fire blank rounds, wear down weapons, or expend the other ordnance those exercises would require, never mind the logistics of getting all that equipment into place.

It’s likely that these battlegrounds will incorporate things like the Oculus Rift and omnidirectional treadmills, which would let soldiers move, crouch, and actually feel like they are in direct combat. For people at home an omnidirectional treadmill isn’t going to be as useful, but it could work well in this kind of setting. If they add the ability to make the environment hot or cold, and wet or dry, they could simulate a great deal of the physical environment as well and build soldiers’ skills further.

The military is also working on robotics as a way to reduce the number of people we have on a battlefield. This could extend beyond robots like Boston Dynamics’ robot dog: we could eventually combine the VR environment with a “robot” to create a remote soldier that is bulletproof, never tires (you could simply swap out the operator), and moves around like a person. This opens up an entirely new type of warfare. It takes the idea of drone combat to the next level: foot-soldier drones that truly make the battlefield imbalanced. Of course, the final step would be fully autonomous robotic soldiers, but I think most people wouldn’t accept those.

In any of these cases we need a serious national conversation about the application of these technologies. From an ethical standpoint there are conflicting views. First, it’s ethical to protect our soldiers as much as possible when we’re in a justifiable, defensive conflict. Second, it’s unethical to enter combat as an aggressor whose military the defender has no hope of stopping. Furthermore, a completely robotic military force is even less defensible, as we would have no human control in the case of a software failure, or of a hack and remote theft of the system.

As a society we need to have a conversation about whether we should allow our military to do this. As it is, we already routinely run operations the citizenry isn’t really aware of in countries like Yemen and god knows where else. These put our men and women at risk, which no one wants, for the arguable benefit of taking out terrorists; it’s unclear whether it’s working or whether we’re just making more enemies. If we were able to replace real, live SEALs with robotic bodies controlled remotely by a SEAL team, how many more of these missions could we run? How much more of this sort of activity would we consider acceptable?

I believe this goes back to what we value as a society. If we value privacy, safety, freedom, and true constitutional control over the military, then we need to make sure we establish that control before the military simply morphs without any real deliberation. The NSA morphed into a data sponge pulling in everything that moves on the internet. Judging by the outrage, we do value our privacy as a society, and we’re trying to pull back control from the NSA. Some people disagree with that, which is fine; that’s why we need a conversation.

I believe that having robotic avatars will lead to a higher likelihood of abuse, similar to what we’ve seen with the NSA. I think this is what happened with the drone program, where Obama has a kill list the administration is proud of. Humanoid drones that can shoot sniper rifles would reduce the amount of collateral damage, but they would be abused. It’s also very debatable whether the kill list is even constitutional.

I think that innovation that reduces our military expenditure is a good thing. However, we need to have a conversation about what the end goals of these programs are.