Review: The Innovators

Written by Steve Jobs biographer Walter Isaacson, this book takes a look at how we got to now in the computing world. Starting with Ada Lovelace and ending with the pair of Sergey Brin and Larry Page, Isaacson covers dozens of the people who have enabled our world to be what it is today. Throughout his research he came to a counterintuitive result: the lone inventor didn’t exist. He found that the most successful companies and organizations actually had a mixture of personalities and passions that led to their success. His history is full of the men and women who made computing happen. Interestingly, women were prominent in the history of computing until the 70s and 80s, when they faded from it. This is unfortunate, as many of their contributions are still crucial to software development today. COBOL, subroutines, and most of the rules around software development were developed by women working either in the Navy or at some of the institutions that partnered with the military.

Much of this book was not new to me, as I had already read histories of Bell Labs and Xerox PARC, but a great deal of it was still news to me. He didn’t focus exclusively on the US, even though a great deal of the history of computing took place here; he also explored the impact of the Germans, the British, and, to a lesser extent, the Chinese.

If you’re a fan of computers and like the history of technology, this book is definitely for you. Isaacson discusses the culture of the organizations that allowed technology to flourish (Intel) and the types of environments where it did not (Shockley Semiconductor). He shows how no company created anything in a vacuum and how many ideas are, in fact, independently co-created.

Technology is a messy, collaborative thing that could have turned out very differently with just a few changes. Without collaboration, risk taking, and some big personalities, we wouldn’t have the computers we have today.

Net Neutrality Vs. Title II – They Aren’t the Same

Since Title II passed, I’ve seen a lot of articles that either indicate buyer’s remorse or come from people who were always against Title II and are gloating that it’s going to be overturned. For example, Wired ran an op-ed yesterday that used major points from Commissioner Pai’s dissent against using Title II. Title II is clearly a divisive issue; the guys over at KBMOD, where I also write, are completely divided over its supposed benefits. I sincerely hope that when we look back at this debate we see it as a confusing bit of history because nothing happened: the Internet didn’t change and remained an open platform for everyone to use easily and equally.

Net Neutrality and Title II are not the same thing. Title II is an old law, originally written in 1934 to regulate a single monopoly in the hope of creating more competition. It wasn’t successful, but the legacy of Title II played an important role in the creation and development of the Internet. Title II was the policy regime under which ARPANET was developed. Whenever a scientist at MIT wanted to use a graphically powerful computer in Utah, Title II was in full effect on the data system carrying that traffic. Furthermore, Title II was the law of the land for all of dial-up Internet, which was actually a very good thing. The fact that there was local-loop unbundling meant that you could buy Internet service from a company other than your phone company. It was also likely, given how low the costs were, that these ISPs didn’t have to pay many of the taxes that the phone company you used to reach the Internet did. We already know that Title II has fostered, and can foster, a culture of innovation.

Net Neutrality is different from Title II: it was the architectural approach the initial designers took when creating the Internet. There were a few key reasons for this: it was easier, it required less computing power, and the majority of the early pioneers believed in what became the Open Source movement. Early on, it was the exception rather than the norm for scientists to patent their computer research. That’s likely because most of these researchers were mathematicians and physicists who came from a military background (WWI and WWII and all), so they weren’t used to patenting, both because of their academic training and because of the secrecy required by the war effort.

In the 70s, providing preferential treatment to one packet of data over another would have required tools that simply would have prevented the data from arriving at its destination in a timely fashion. Remember, this was a time when the personal computer didn’t exist and computing used mainframes and terminals to do the work (interestingly, we’re going back to that a bit with the cloud). The routers would have had to be mainframes themselves to decode the data and figure out what type of data it was before sending it on to its next location. This was seen as a waste of computing power as well as an invasion of privacy. The point of packets was to help keep the data safe and secure as much as to maximize capacity on the lines connecting the computers.
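To make that distinction concrete, here is a minimal, purely illustrative sketch (the packet layout and the forward_neutral/forward_with_dpi names are my own invention, not anything from the era): a neutral router only reads the destination in the header, while a prioritizing router has to open the payload and classify the traffic before it can decide how to treat it.

```python
# Illustrative only: a toy contrast between "neutral" forwarding and
# deep packet inspection. The packet layout and function names are
# hypothetical, not drawn from any real router.

def forward_neutral(packet):
    """A neutral router: looks only at the header to pick the next hop."""
    return packet["header"]["destination"]

def forward_with_dpi(packet):
    """A prioritizing router: must also open the payload and classify the
    traffic before deciding how to treat it -- extra work on every packet."""
    destination = packet["header"]["destination"]
    if b"VIDEO" in packet["payload"]:   # crude content classification
        priority = 0                    # favored traffic jumps the queue
    else:
        priority = 1                    # everything else waits
    return destination, priority

packet = {
    "header": {"destination": "utah-mainframe"},
    "payload": b"VIDEO frame 0001 ...",
}
print(forward_neutral(packet))    # routing needs only the header
print(forward_with_dpi(packet))   # prioritization needs the payload too
```

The extra per-packet classification work is exactly what the early designers avoided, both to save scarce computing power and to stay out of the contents of other people’s traffic.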

One of the largest complaints about implementing Title II is that there’s not enough economic evidence to support it. I believe that to be true to some extent; it’s hard to forecast something that’s happening as it’s happening, especially since the FCC was unlikely to get legal access to the Netflix-Comcast/Verizon deals to verify equal (or perhaps preferred) access to their lines. Netflix clearly showed that Comcast and Verizon were intentionally causing issues they could easily resolve, and they resolved them immediately after they got paid. With Comcast and Verizon planning to foreclose the video streaming market in this fashion and violating the spirit of Net Neutrality, some sort of regulation was needed to prevent this foreclosure.

I would have rather not had any sort of regulation go into effect. However, I believe that the actions Comcast and Verizon are taking are anticompetitive and anti-consumer. Time Warner Cable supposedly makes a 97% profit margin on its broadband service, which isn’t a surprise when you have a local monopoly/duopoly for broadband.

Could there have been a better way? Yes, the FCC could have taken action that would have forced increased competition, such as setting a goal for every city in the US to have no fewer than three broadband providers and providing assistance to municipalities that wanted to develop their own networks to meet that goal. Ironically, the one provision that would help with that, local-loop unbundling, was not included in the Title II rule. It would reduce the cost of a new ISP entering the market, since the ISP wouldn’t have to build its own network; having to build out a network is what has slowed Google Fiber down considerably.

New FCC Rules and Competition

A friend retweeted the tweet below today, and it got me thinking about the broader context of the FCC rules that passed last Thursday.

Two things struck me about this tweet. First, it’s disappointing that the author doesn’t understand Title II better, considering he co-founded the EFF. Second, Title II as implemented was designed to do nothing about ISP competition. As I wrote on KBMOD this week, Net Neutrality has no provision for “unbundling,” which would promote competition amongst ISPs at the local level. Unbundling, according to Wikipedia, is a regulation that requires existing line owners (such as Comcast) to open up their lines to anyone that wants to sell cable, Internet, or telephony access. Unbundling, under a much more restrictive Title II, is the only reason that AOL was successful as a business model. Since this provision of Title II was forborne, Title II will not, in fact, promote competition among ISPs at all.

Instead, the FCC, at least in my opinion, looked at the Internet as a general purpose platform technology. They were looking to ensure competition ON the technology, not between technology carriers. For example, the FCC wants to see as much competition as possible between companies like Netflix, Amazon Prime Video, Hulu, and Comcast’s Xfinity service. However, they want to make sure that Comcast cannot foreclose the video delivery market by leveraging its existing monopoly in telecommunications. That means Comcast could otherwise create rules or an environment where Netflix cannot compete and Comcast customers MUST use the Xfinity service because the alternatives don’t function well (foreclosure is what got Microsoft in trouble with web browsers).

The FCC did enact a rule that will impact competition at the local level, though. It’s a limited rule because it affects only Tennessee and North Carolina: it preempts state law by stating that it is legal for municipalities to develop their own broadband networks. Broadband build-out is prohibitively expensive for an entrepreneur, but with the backing of a municipality willing to share the risk and the reward, it might be possible to build out a broadband network on a limited scale. Municipalities aren’t the ideal solution to this; it would be far preferable for other businesses to move into these areas and build new broadband networks, but unless they have a massive amount of money, like Google, that’s unlikely to happen. A bridge between the two is a public-private partnership, where private enterprise, which has the telecommunications expertise, partners with a municipality, which has the demand and financial support, to build a network.

With the ruling on municipal broadband being so limited, it’s not going to make much of an initial impact; however, it’s likely that other municipalities will try to jump on the bandwagon and overrule laws at the state level (as a note, I’m not going to argue whether this is something they have the authority to do; I’m just looking at the potential impact of the rule).

Big Data is Coming to Get You

Big data is what high-tech companies call collecting massive amounts of data about their users. For Google, this includes all the trips you’ve taken, the places you’ve driven, your email (if you use Gmail), your searches, your Google Now preferences, the articles you’ve posted to Google+, your pictures, and the list goes on. The idea is to use algorithms to mine this data for useful tidbits about user habits so products and services can be recommended just as you need them. These data can tell companies a great deal about users, including who their friends are.

However, what isn’t clear is who owns the data. Companies assume they own it, which, because you agreed to their terms of service (even if you didn’t read them), is true. However, the recent re-categorization of fitness apps and trackers as medical devices has thrown a wrench in the works. Data associated with medical devices is typically assumed to be personal health information, which is protected under HIPAA. That means companies can’t really sell it AND you are able to control what happens with the data; it’s the reason doctors are required to share information with other healthcare professionals.

I believe this is just the first step toward making our data more portable. In Europe you can already request a transcript of all the data Facebook collects about you; however, that doesn’t mean you have control over what FB does with that data. Obama is pushing to increase the privacy of personal information, but that will only work if companies feel they have a stake in protecting data, or face a penalty if they don’t. When a company such as Apple or Google has an effective monopoly on your data (through lock-in effects), its incentive to fully respect privacy is reduced because of the cost of switching to another monopoly.

Restrictions Can Drive Innovation

As a Lean process improvement guy, as well as someone who really loves reading about innovation, I’ve always taught my students that regulations, limitations, and restrictions on processes, equipment, and activities offer us an opportunity to innovate around those rules. The way I describe it is that rules place you in a box, but within that box you can move up, down, and diagonally and develop some really interesting ideas because of what you can’t do. You don’t focus on what you can’t do so much as on how to work around it and what you CAN do.

I saw a picture of a great discussion about how gluten-free diets are forcing at least one chef to be more innovative in his cooking. I’d post it, but the image is so large it’d take up the entire post, so I linked it above. Essentially the chef had a dish with polenta on it and a base that was all glutenous flour, but he figured out a way to make it all polenta, which actually created a unique dining experience that he felt offered a superior taste.

Another area where regulation has been the root of many innovations is the financial sector. It complains the most about regulations because they’re “bad for the economy” or something like that. However, CDOs and everything else that caused the last collapse were created in response TO regulations. Banks figured out how to work around the regulations and make even more money than before. In fact, many banks started to follow suit because they weren’t able to post profits as high and were getting hit by Wall Street for underperforming comparatively.

This is one of the reasons why I personally don’t see value in fighting regulation other than to shape it in one direction or the other. The companies that are able to exploit a regulation best are going to end up being first to market or extremely fast followers, meaning they will make a great deal of money and likely dominate the market. If you look at regulation as a “disruptor” and an opportunity to disrupt around it, you’re going to do really well as a business.