Is AI going to kill us or bore us to death?

The interwebs are split over the question of whether AI is going to evolve into brutal killing machines or whether AI will simply be another tool we use. This isn’t a debate among average Joes like you and me; it’s being waged by some pretty big intellectuals. Elon Musk thinks that dealing with AI is like summoning demons, while techno-optimist Kevin Kelly thinks that AI is only ever going to be a tool and never anything more than that. Finally, you have Erik Brynjolfsson, an MIT professor who believes that AI will supplant humanity in many activities, but that the best results will come from a hybrid approach (an argument Kevin Kelly also uses at length in his article).

Personally, I think a lot of Kevin Kelly’s position is extremely naive. Believing that AI will ONLY ever be something boring, and never something that can put us at risk, is frankly short-sighted. Consider that Samsung, yes, the company that makes your cell phone, developed a machine-gun sentry back in 2006 that could tell the difference between a man and a tree. In the intervening eight years, it’s likely that Samsung has continued to advance this capability; it’s in their national interest, as these sentries are deployed at the demilitarized zone between North and South Korea. Furthermore, with drones it’s only a matter of time before we deploy an AI that makes many of the decisions between bombing and not bombing a given target. Currently we use a heuristic, and there’s no reason that couldn’t be developed into a learning heuristic for a piece of software. This software doesn’t even have to be in the driver’s seat at first. It could provide recommendations to the drone pilot and learn from the pilot’s choices, noting when it is overridden and when it is not. In fact, the pilot doesn’t even have to know what the AI is recommending; the AI could still learn from the pilot’s choices.
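
To make that concrete, here is a minimal sketch of how such a shadow-mode advisor could learn from a pilot’s choices. Everything here is hypothetical: the feature names, the toy perceptron, and the simple rule standing in for the pilot’s judgment.

```python
# A toy sketch of an advisor that silently learns from a pilot's
# strike/no-strike decisions. Feature names are invented for illustration.
import random

FEATURES = ["target_confidence", "civilian_proximity", "visibility"]

class StrikeAdvisor:
    def __init__(self):
        # One weight per feature, starting neutral.
        self.weights = {f: 0.0 for f in FEATURES}
        self.bias = 0.0

    def recommend(self, obs):
        # Positive score -> recommend strike; negative -> recommend hold.
        score = self.bias + sum(self.weights[f] * obs[f] for f in FEATURES)
        return score > 0

    def learn(self, obs, pilot_struck, lr=0.1):
        # Perceptron-style update: nudge weights toward the pilot's choice.
        error = (1 if pilot_struck else -1) - (1 if self.recommend(obs) else -1)
        for f in FEATURES:
            self.weights[f] += lr * error * obs[f]
        self.bias += lr * error

advisor = StrikeAdvisor()
for _ in range(1000):
    obs = {f: random.random() for f in FEATURES}
    # A stand-in for the human decision the software shadows.
    pilot_struck = obs["target_confidence"] > 0.8 and obs["civilian_proximity"] < 0.2
    advisor.learn(obs, pilot_struck)

print({f: round(w, 2) for f, w in advisor.weights.items()})
```

Nothing in this loop requires the pilot to ever see the recommendation; the software learns silently from the approvals and overrides, which is exactly what makes the quiet accumulation of capability so easy to miss.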

AI isn’t going to be some isolated tool; it’s going to be developed concurrently, in many different circumstances, by many organizations with many different goals. Sure, Google’s goal might be better search, but Google also acquired Boston Dynamics, which has done some interesting work in robotics, and it is developing driverless cars, which will need an AI. Who’s to say the driverless AI couldn’t be co-opted by the government and combined with the drone pilot’s AI to drop bombs, or to “suicide” whenever it reaches a specific location? These AIs could be completely isolated from each other and still have the capability to be totally devastating. What happens when they are combined? They could be, at some point, through a programmer’s decision or through an intentional attack on Google’s systems. These are the risks of fully autonomous units.

We don’t fully understand how AI will evolve as it learns more. Machine learning is a bit of a Pandora’s box. There will likely be many unintended consequences, as with almost any new technology that’s introduced. However, the ramifications could be significantly worse, because an AI could have control over many different systems.

It’s likely that both Kevin Kelly and Elon Musk are wrong. However, we should act as if Musk is right and Kelly is wrong. Not because I want Kelly to be wrong and Musk to be right, but because we don’t understand complex systems very well; they very quickly get beyond our ability to understand what’s going on. Think of the stock market. We don’t really know how it will respond to a given company’s quarterly earnings, or even to results across a sector. There have been flash crashes, and there will continue to be, as we do not have a strong set of controls over high-frequency traders. If this dynamic extends to a system with the capability to kill, or to intentionally damage our economy, we simply couldn’t rein it in before it caused catastrophic damage. Therefore, we must intentionally design in fail safes and other control mechanisms to ensure these things do not happen.
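
To make “fail safes” less abstract, here is a minimal sketch of the kind of control mechanism I mean, modeled loosely on the circuit breakers exchanges adopted after flash crashes. The window size and threshold below are invented for illustration and are not real exchange rules.

```python
# A minimal circuit-breaker sketch: halt automated activity when the
# price moves more than a threshold within a short window of ticks.
from collections import deque

class CircuitBreaker:
    def __init__(self, window=60, max_move=0.05):
        self.max_move = max_move            # e.g. a 5% swing triggers a halt
        self.prices = deque(maxlen=window)  # only the most recent ticks
        self.halted = False

    def on_tick(self, price):
        self.prices.append(price)
        lo, hi = min(self.prices), max(self.prices)
        if lo > 0 and (hi - lo) / lo > self.max_move:
            self.halted = True              # stop the machines; let humans look
        return not self.halted

breaker = CircuitBreaker()
for price in [100.0, 100.5, 99.8, 94.0, 93.5]:  # a sudden ~6% drop
    if not breaker.on_tick(price):
        print(f"Trading halted at {price}")
        break
```

The pattern, an independent monitor with the authority to halt the system, applies just as well to an autonomous weapon or a driverless car fleet as it does to a trading algorithm.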

We must assume the worst, but rather than hope for the best, we should develop a set of design rules for AI that all programmers must adhere to, to ensure we do not summon those demons.

What’s the difference between Ma Bell and Comcast?

If you were born in the 80’s or before, you know that Ma Bell was the only phone company in town. Born any later than that, and you were born into a world without a single telecommunications monopoly. That’s right: there was a point in our collective history when there was only one phone company. There are rules in place that prevent something similar from happening with Comcast, but we’ve been there before. However, I believe there are critical differences. AT&T knew they were a monopoly, and they were a state-sanctioned monopoly. They did everything in their power to keep prices down to avoid being broken up. AT&T actually had a broader monopoly than Comcast could ever hope to have: they made the phones that worked on the line, they made all the telecom technology behind it, and they designed the services that ran on it. This is something called a natural monopoly, which I’ve written about before. One of Comcast’s founders has declared Comcast a natural monopoly.

The biggest difference between Comcast and AT&T, back in the day, was that AT&T did everything it could to keep the government happy. Was it perfect? No, clearly not; there were shady business practices. But we as a society benefited greatly from Bell Labs, which to this day is still one of the greatest research facilities that ever existed. If it weren’t for Bell Labs, our current way of life would be very different. I highly suggest checking out the book on it.

Comcast claims to be pushing innovation with their X1 Xfinity platform, but that’s not really true; it’s simply a new operating system pushing content. Voice activation isn’t innovation, and if that’s your main selling point then you’re in serious trouble. As I mentioned yesterday, the Netflix deal is a major concern; the Verge is saying the internet is fucked and that we need to be contacting the FCC daily to un-fuck it.

I’m not entirely sure that the FCC can fix it. Congress has greatly hamstrung the FCC in dealing with internet companies, and furthermore, their solution of calling the internet a utility won’t work. If you aren’t aware, we’ve had big pushes to deregulate the utility industry, which unfortunately hasn’t really lowered rates in many cases or over the long run. I think it’s fair to say the same is true in the telecom industry. The long-term impact of the AT&T breakup has been a collection of conglomerates that continually raise prices as well as “fees” which, like baggage fees, are hidden from the advertised price of the service. So treating the internet like a utility isn’t going to work. What we need to do is treat it like a road.

Everyone that uses a car on the road is taxed based on use (gasoline taxes), and everyone pays for a portion of the maintenance through other local taxes too. No, these aren’t perfect, and they are going to come under pressure from hybrid and electric cars, so new models are being proposed. One option is toll roads (which rarely work); another is some sort of black box in the car to measure mileage (which no one wants).

Essentially, it’s pay-for-bandwidth-consumed: if you’re a heavy consumer of bandwidth you’d pay more, but the rates need to be realistic, and the goal would be to cover expenses while continually improving the service and making it cheaper. Which brings me back to AT&T. The president of Bell Labs had one mantra: anything could be tested, but only if it could lead to a “better, cheaper, or both” network. A public internet, run like a road and funded to continually get cheaper, better, more secure, and faster, is the only way to truly un-fuck the internet. It’s not likely to happen, because it’s not a capitalist response. However, the internet these days is similar to public transportation: its goal isn’t to make money, its goal is to enable economic activity. If we think of it that way, then we can see the long-term benefit to the whole economy rather than to singular actors.
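
As a back-of-the-envelope sketch, road-style billing for the internet could look something like the code below. The base fee and per-gigabyte rate are invented for illustration; in practice they would be set, like gasoline taxes, to cover maintenance and steady improvement rather than to maximize profit.

```python
# A minimal sketch of road-style, usage-based internet billing.
# The base fee and per-GB rate below are hypothetical placeholders.

def monthly_bill(gb_used: float, base_fee: float = 5.00,
                 rate_per_gb: float = 0.02) -> float:
    """Bill like a gasoline tax: a small flat share of upkeep
    plus a per-unit charge on actual consumption."""
    return base_fee + gb_used * rate_per_gb

# A light browser and a heavy streamer pay in proportion to use.
for user, gb in [("light user", 50), ("heavy streamer", 800)]:
    print(f"{user}: ${monthly_bill(gb):.2f}")
```

The design choice that matters is transparency: one published rate tied to consumption, with no hidden “fees” bolted on after the fact.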

Evolution and Innovation

Apparently I published this before I meant to. Anyway, today Techdirt published a discussion on copying, innovation, and evolution. Basically, a biologist argued that we are evolutionarily predisposed to copy and to use group learning to develop new tools. What this means is that instead of developing something out of the blue, we first see what someone else has done, copy whatever they did, and then, in a parasitic way, make marginal improvements on the original. In this view, we’re nothing but freeloading copiers that make things a little better.

Techdirt completely disagreed with this point of view. They argued that copying something, or a part of something, doesn’t mean you’re freeloading. You can add so much to what you copied that it simply becomes part of a larger whole.

Anyone who follows my writing knows that I support Techdirt’s perspective. This comes from several different arguments. The first is the evolution of technology. If you ignore some of the human motivation behind the changing technology itself and focus on the selection process, you can see that technology changes through incremental adjustments. These changes are selected by the market or, in primitive societies, by the end result of an improvement: spears that last longer mean less energy expended on making new spears; spears that can be thrown farther mean less danger from the animal being killed; sharper shovels mean less energy spent gathering food, and more food. This selection process is a very natural one. Additionally, there would have been some specialization of skills even at that point in our history. Some people would have been better at making spears, and in a collaborative environment, because there were no patents and sharing was best for everyone, many people could experiment with new spear designs. This innovation, while based on copying, is a very real form of innovation that likely led to gradual improvement over a great deal of time.

The second argument supporting innovation-after-copying comes from Cesar Hidalgo, who argues that by looking at what countries currently produce, you can see a relationship with their innovative ability. By looking at which technologies they import and export, you can see how well they have developed scientifically and in manufacturing. For example, you can expect more advanced products to come out of a country if it got into producing fertilizer early in modern times. Fertilizer production typically leads to a general chemical industry, which can lead to pharmaceuticals and semiconductors. Why? A strong base in chemistry built on fertilizers can be expanded into drugs and into the materials underlying semiconductors.

How do new countries move into these fields? Essentially, through a knowledge transfer from a country that is already doing it. This can happen in two ways. The first is the easy way: have a multinational company set up a manufacturing facility, and then an R&D facility, in your country. This allows a direct flow of knowledge about how to manufacture the product, which increases the rate of copying. It lets the country be a fast follower, though it will still take significant time for it to eventually innovate on that technology. Adding an R&D facility increases that rate further, because local scientists will have already been trained to innovate in the field. They will already have been doing research in that industry and will more easily be able to innovate if a spin-off is created (or if the state nationalizes that part of the multinational). The second way is much slower: repatriating knowledge workers. This is essentially what happened in Taiwan and India, where educated Indians and Taiwanese returned from the US, created spin-offs, and became professors at local universities. It isn’t always successful.

Saudi Arabia is trying to develop a third way: recruiting experts from around the world to build up its own universities and companies. This is having mixed results so far, and education and industry need to pay attention to these attempts to see how well they play out in the long run.


Copying is extremely important in education and is required to develop new industries in a country. Technology evolves through copying previous technology, recombining it with new learning from other fields, and experimenting within the current field. Without copying there cannot be innovation. The more people participating in an economy where innovation through copying is rewarded, the greater our culture and our technological evolution will be. Biology needs to take a lesson from evolutionary economics.

Can technology save us? A wrap-up

In my last three posts I’ve asked whether technology can save us from many of our own problems. I discussed several technologies for each topic: water, energy, and food. These are not, by any stretch of the imagination, all the technologies out there; they are just the ones I’m aware of at this point, and I wouldn’t say I’ve done an exhaustive search either. I hope I’ve made it obvious that technology alone cannot save us. We need to make a concerted effort to change the status quo, and that won’t be easy to do.

We have some major problems adopting new technologies. First, there are incumbent interests that have no desire to see the current energy regime change. Then there are problems of ownership: why should the US invent new ways to extract water when Mexico is the country that will suffer? How do we know that a given technology is going to be the best, or even good enough, for our needs? What happens if all our best efforts actually make things worse?

These aren’t easy questions to answer. We have to make a choice as a society about what constitutes a good investment in research. In one Urban Time article I posit that the EU could overtake the US in scientific research in the upcoming decades. This should terrify people: scientific research is what has driven the US economy since the 40’s, and to some extent earlier. The shifts in capitalism have pushed company goals toward shorter and shorter returns on investment and less visionary aims. The ability of companies to experiment, and to use government funds to trial new energy systems, has floundered.

This should be cause for concern. We’ve seen the results of poorly managed technology in the past few years: a software glitch that caused Toyotas to accelerate out of control, flash crashes on the stock markets caused by high-frequency traders, and failures of other complex systems like Fukushima. We don’t always design proper controls into our technologies to protect us from them.

Personally, I’m optimistic about the future of technology and what it can do for us. However, there are plenty of sci-fi authors out there who are very pessimistic. I love reading dystopian-future and post-apocalyptic books as much as (or more than) anyone, and we need to realize that without proper controls on our technology and on the production of our material goods, those outcomes could happen.

Technology alone cannot save us from ourselves. We may be able to use technology as a tool to fix problems we’ve created, but we have to do the dirty work. Technology doesn’t design and make itself (yet).

Is software a technology?

I saw an interesting comment on r/technology (a subreddit devoted to all things technology) today, where the author complained that too many web- and software-related articles were being posted on the site. Since the site is user driven, the choice of content can be influenced by questions and comments like this; in fact, they can change the shape of the entire community and how its members interact with each other. For instance, r/fitness tested allowing text-based submissions only, with no external links, which fundamentally changed the discourse in that community. Anyway, this made me sit back and think about whether software or websites should be considered technology in the way that a computer or a keyboard is.

According to the Google dictionary the following is the definition of technology:

tech·nol·o·gy/tekˈnäləjē/

Noun:
  1. The application of scientific knowledge for practical purposes, esp. in industry: “computer technology”; “recycling technologies”.
  2. Machinery and equipment developed from such scientific knowledge.


I believe that software could fall into the first category of technology. Wikipedia says: Technology is the making, usage, and knowledge of tools, machines, techniques, crafts, systems, or methods of organization in order to solve a problem or perform a specific function.


Again, this could easily be applied to software, specifically because of the word “techniques.” However, I think we need to tread carefully here, because both of these definitions would also include all of mathematics as a form of technology. Why does this definition matter? Well, you are able to patent technologies, but you are not able to patent mathematical algorithms or techniques. If someone were able to prove that P=NP in a mathematical proof, it couldn’t be patented. However, put that same proof into a piece of software and it suddenly becomes patentable, and could then make someone very rich.
I think there’s another fundamental cognitive difference as well. Despite the fact that people say Android phone technology or Apache web server technology, it feels different than saying internal combustion technology. I think the main difference is physicality: combustion technology requires manual labor and a set of tools and skills that are all physical entities, whereas with software, anyone with a computer can learn how to program. That doesn’t mean there won’t be a set of people who are better at it or more likely to pick it up than others; I’m basically self-taught in both SQL Server and VB.Net. The fact that software can be copied perfectly an infinite number of times also changes how it should be treated.
I think these differences mean we should actually treat software differently. It is a technology, but a technology more closely related to mathematics and logic than to the other sciences.

Technological Adjacency

Two days ago I talked about technological convergences; yesterday I discussed how firms can enable them. Today I’m going to talk about technological adjacencies. First, though, why do we care about these? There are a couple of reasons. At the micro level, meaning you, understanding how technological adjacencies work can help you identify other industries where your skill set applies. Does understanding ceramics only help in making durable dishware, or can it be used in the semiconductor industry too? It turns out it can: ceramics are great insulators and are used on many different types of tools for manufacturing semiconductors. A step above, at the firm level, being able to produce ceramics can allow a company that used to make only dishware to move into other types of technologies, like those for semiconductors. This shift can eventually open up an entirely new market and allow for continued growth. However, as I mentioned yesterday, this doesn’t always work, and it can leave a company weaker than it was before the shift into the new industry. Finally, technological adjacencies can help spur regional and national growth.

Companies aren’t the only entities that can be viewed as having specific capabilities; regions and countries typically have specialties too. Pittsburgh used to be the world’s major hub for steel. However, steel collapsed there in the 70’s and 80’s. Now Pittsburgh has turned itself into a medical and biomedical hub. Because of the steel industry, Pittsburgh already had two world-class universities and a number of great universities, and after the crash of steel these became the main drivers of the economy. The firms they spawned helped to rebuild the area.

As I mentioned above, technological adjacencies are fairly simple to find after the fact; they are difficult to see ahead of time. It’s hard to know what is a good bet for a company and what is not. This is why it’s important to have an R&D branch that is allowed to explore the adjacent technology spaces around your major technologies. If you don’t, there could be some great markets you’re missing out on.

Enabling Technological Convergences

In my last post I discussed technological convergences. I didn’t really discuss anything groundbreaking or earth-shattering; we all know these things happen, even if we never really make a note of it. A more interesting question, though, is why some companies, like Apple and Blackberry, succeed at creating technologies that converge, while others, like Microsoft and Rio (an early MP3 player maker), either fail to create them or create converging technologies that then fail.

One of the first reasons is the culture of the company. To create a totally different product that could shake up the core business, firms may have to do something called “corporate venturing.” This is where a company takes people who normally work on the major product, puts them in a different area, secludes them, and allows them to create a new product. Whatever leadership structure develops, develops; it really doesn’t matter if it matches the rest of the firm. Essentially, these people are put in the position of starting a new company. Apple famously did this with the original Macintosh program; it was called a skunk works. Of course, recombining the two portions of the company creates huge problems, but good management can figure out how to deal with them.

Another piece required for a firm to successfully move into a new product space is the ability to identify the market need. This one is pretty obvious, but it still needs mentioning. In many cases it’s really obvious that there’s a product space and that someone should fill it; when companies don’t move into it, there must be some sort of reason.

One of those reasons comes down to firm capabilities. Every firm has something at its core that it’s best at. I would argue that Microsoft is best at taking advantage of a virtual monopoly on a platform and moving in new directions within that platform; Internet Explorer and the Office Suite are the best examples of this. Microsoft has also tried to do the same with servers and other peripheries. This is why Microsoft has had difficulty moving into other platform positions: they have failed (or had mixed results at best) over and over again with phone OSes, because those don’t rely on their dominant platform.

Another company that is an R&D powerhouse in energy but has failed at almost everything outside its major focus is Shell. As a major energy company, you’d expect Shell to be moving into other types of energy production to make massive amounts of money in the transition from fossil fuels to renewables. You’d actually be right: they have tried, and failed. Aside from a failed solar business, Shell has a moderately successful wind program. Between the two, it actually makes sense why solar failed and wind is doing well.

First, wind is closer to extracting material from the ground than making energy from the sun is. Now hang on, I know, but Shell has to maintain offshore oil rigs in tough conditions, and understanding how to build a wind farm out in the ocean has some similarities. Shell doesn’t actually make the windmills themselves; they buy the windmills and put them together to harvest energy. Solar panels, on the other hand, Shell was trying to make itself. Intel would be a significantly better solar panel producer than Shell. Why? Because solar panels are semiconductors; you make them with similar machines. The technologies are adjacent to each other.

What’s technological adjacency? It’s when you are able to take your current skills and, with some research, apply them to a related technological field. I’ll discuss this more in my next post.