Net Neutrality Vs. Title II – They Aren’t the Same

Since Title II passed I’ve seen a lot of articles that either show buyer’s remorse or come from people who were always against Title II and are now gloating that it will be overturned. For example, Wired ran an Op-Ed yesterday that leaned heavily on Chairman Pai’s dissent against using Title II. Title II is clearly a divisive issue; the guys over at KBMOD, where I also write, are completely split over its supposed benefits. I sincerely hope that when we look back at this debate we see it as a confusing bit of history because nothing happened: the Internet didn’t change and remained an open platform that everyone can use easily and equally.

Net Neutrality and Title II are not the same thing. Title II is an old law, originally written in 1934, meant to regulate a single monopoly in the hope of creating more competition. It wasn’t successful at that, but its legacy played an important role in the creation and development of the Internet. Title II was the policy regime under which ARPANET was developed: whenever a scientist at MIT wanted to use a graphically powerful computer in Utah, Title II was in full effect on that data connection. Furthermore, Title II was the law of the land for all of dial-up Internet, which was actually a very good thing. Local-loop unbundling meant you could buy Internet service from a company other than your phone company, and, given how low their costs were, those ISPs likely avoided many of the taxes the phone company paid on the line you used to reach the Internet. We already know that Title II can foster a culture of innovation, because it has.

Net Neutrality is different from Title II: it was the architectural approach the initial designers took when creating the Internet. There were a few key reasons for this: it was easier, it required less computing power, and the majority of the early pioneers believed in what later became the Open Source movement. Early on it was the exception rather than the norm for scientists to patent their computer research, likely because most of these researchers were mathematicians and physicists who came out of a military background (WWI and WWII and all); their training hadn’t emphasized patents, and the secrecy required by the war effort discouraged them further.

In the 70s, giving one packet of data preferential treatment over another would have required tools that simply prevented the data from arriving at its destination in a timely fashion. Remember, this was before the personal computer existed, when computing was done on mainframes and terminals (interestingly, we’re going back to that a bit with the cloud). The routers would have had to be mainframes themselves to decode each packet and figure out what type of data it carried before sending it on to its next location. That was seen as a waste of computing power as well as an invasion of privacy. The point of packets was to keep the data safe and secure as much as it was to maximize capacity on the lines connecting the computers.
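To make that concrete, here is a minimal, purely illustrative sketch of what a “neutral” router does. The names (`Packet`, `ROUTING_TABLE`, `forward`) are invented for illustration and don’t come from any real network stack; the point is only that forwarding reads the destination in the header and never looks at the payload, which is exactly why prioritizing one kind of traffic would have demanded far more computing power.

```python
# Illustrative sketch only: a toy "router" that forwards packets using nothing
# but the destination field in the header, never inspecting the payload.
# All names here are hypothetical, not from any real networking stack.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # sender address
    dst: str        # destination address -- the only field a neutral router reads
    payload: bytes  # opaque application data (video, email, anything)

# Static next-hop table keyed by destination address.
ROUTING_TABLE = {
    "mit.edu": "hop-east",
    "utah.edu": "hop-west",
}

def forward(packet: Packet) -> str:
    """Pick the next hop from the header alone.

    A best-effort router treats every payload identically; prioritizing one
    kind of traffic over another would mean parsing the payload bytes, which
    is exactly the extra work (and privacy intrusion) the design avoided.
    """
    return ROUTING_TABLE.get(packet.dst, "default-gateway")

print(forward(Packet("mit.edu", "utah.edu", b"opaque bytes")))  # -> hop-west
```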

One of the largest complaints about implementing Title II is that there isn’t enough economic evidence to support it. I believe that is true to some extent; it’s hard to forecast something while it is happening, especially since the FCC was unlikely to get legal access to the Netflix–Comcast/Verizon deals that ensured equal (or perhaps preferred) access to their lines. Netflix clearly showed that Comcast and Verizon were intentionally causing congestion they could easily resolve, and that they resolved it immediately after they got paid. With Comcast and Verizon planning to foreclose the video streaming market in this fashion, in violation of the spirit of Net Neutrality, some sort of regulation was needed to prevent that foreclosure.

I would have preferred that no regulation go into effect at all. However, I believe the actions Comcast and Verizon are taking are anticompetitive and anti-consumer. Time Warner Cable supposedly makes a 97% margin on its broadband service, which isn’t a surprise when you have a local monopoly or duopoly on broadband.

Could there have been a better way? Yes. The FCC could have taken action to force increased competition, something like setting a goal that every city in the US have no fewer than three broadband providers and providing assistance to municipalities that want to build their own networks to meet that goal. Ironically, the one provision that would help with this, local-loop unbundling, was not included in the Title II rule. It would reduce the cost for a new ISP to enter the market, since the entrant wouldn’t have to build its own network; the need to do exactly that has slowed Google Fiber down considerably.

Is AI going to kill us or bore us to death?

The interwebs are split over whether AI will evolve into brutal killing machines or simply remain another tool we use. This isn’t a debate among average Joes like you and me; it’s being had by some pretty big intellectuals. Elon Musk thinks that dealing with AI is like summoning demons, while techno-optimist Kevin Kelly thinks that AI is only ever going to be a tool and never anything more. Finally, you have Erik Brynjolfsson, an MIT professor, who believes AI will supplant humanity in many activities but that the best results will come from a hybrid approach (Kevin Kelly also uses this argument at length in his article).

Personally, I think a lot of Kevin Kelly’s position is extremely naive. Believing that AI will ONLY be something boring and never something that can put us at risk is frankly short-sighted. Consider that Samsung, yes, the company that makes your cell phone, developed a machine-gun sentry that could tell the difference between a man and a tree back in 2006. In the intervening eight years, Samsung has likely continued to advance this capability; it’s in the national interest, as these sentries are deployed at the demilitarized zone between North and South Korea. Furthermore, with drones it’s only a matter of time before we deploy an AI that makes many of the decisions about whether or not to bomb a given target. Currently we use a heuristic, and there’s no reason that couldn’t become a learning heuristic inside a piece of software. The software doesn’t even have to be in the driver’s seat at first: it could provide recommendations to the drone pilot and learn from which ones are overridden and which are not. In fact, the pilot doesn’t even have to see the recommendation; the AI could still learn from the pilot’s choices, as in the sketch below.
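As a thought experiment only, here is a tiny Python sketch of that “shadow mode” learning loop. Everything in it is invented for illustration (the features, the `score` and `learn_from_pilot` functions, the thresholds); it just shows how a recommender could quietly learn a human’s policy from the decisions the human actually makes.

```python
import math
import random

# Purely illustrative sketch of the "shadow mode" loop described above:
# a hypothetical recommender scores each situation, the human decides,
# and the model updates from the human's choice -- whether or not the
# human ever sees the recommendation. This reflects no real system.

weights = [0.0, 0.0, 0.0]   # one weight per hand-picked feature
LEARNING_RATE = 0.1

def score(features):
    """Logistic score in [0, 1]; higher means 'recommend acting'."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def learn_from_pilot(features, pilot_acted):
    """One online gradient step toward the pilot's actual decision."""
    error = score(features) - (1.0 if pilot_acted else 0.0)
    for i, x in enumerate(features):
        weights[i] -= LEARNING_RATE * error * x

# Simulated sessions: the model silently learns the pilot's policy.
for _ in range(1000):
    features = [random.random(), random.random(), 1.0]  # last term is a bias
    pilot_acted = features[0] > 0.7                      # stand-in for human judgment
    learn_from_pilot(features, pilot_acted)

print(score([0.9, 0.2, 1.0]), score([0.1, 0.2, 1.0]))  # high vs. low confidence
```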

AI isn’t going to be some isolated tool; it’s going to be developed concurrently, in many different circumstances, by many organizations with many different goals. Sure, Google’s goal might be better search, but Google also acquired Boston Dynamics, which has done some interesting work in robotics, and it is developing driverless cars, which will need an AI. What’s to say the driverless AI couldn’t be co-opted by a government and combined with the drone pilot’s AI to drop bombs, or to “suicide” whenever it reaches a specific location? These AIs could be completely isolated from each other and still be totally devastating. What happens when they are combined? That could happen at some point, through a programmer’s decision or through an intentional attack on Google’s systems. These are the risks of fully autonomous units.

We don’t fully understand how AI will evolve as it learns more. Machine learning is a bit of a Pandora’s box. There will likely be many unintended consequences, as with almost any new technology, but the ramifications could be significantly worse because the AI could have control over many different systems.

It’s likely that both Kevin Kelly and Elon Musk are wrong. Even so, we should act as if Musk is right and Kelly is wrong, not because I want Kelly to be wrong and Musk to be right, but because we don’t understand complex systems very well; they quickly get beyond our ability to follow what’s going on. Think of the stock market. We don’t really know how it will respond to a given company’s quarterly earnings, or to earnings across a sector. There have been flash crashes and there will continue to be, because we don’t have a strong set of controls over high-frequency traders. Extend that across a system with the capability to kill or to intentionally damage our economy, and we simply couldn’t manage it before it caused catastrophic damage. Therefore, we must intentionally design in fail-safes and other control mechanisms to ensure these things do not happen.
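To show what I mean by fail-safes, here is a hypothetical Python sketch of a hard-coded gate that sits between an automated system and the actions it wants to take. The names and limits (`gate`, `MAX_ORDER_VALUE`, `REQUIRE_HUMAN_ABOVE`) are made up; the point is only that the check is fixed by humans and can’t be learned around.

```python
# Hypothetical sketch of a "fail safe" gate: every action an automated system
# proposes must pass hard-coded, human-written limits before it executes.
# The constraint names and thresholds here are invented for illustration.

class FailSafeViolation(Exception):
    pass

MAX_ORDER_VALUE = 1_000_000      # e.g., cap on a single automated trade
REQUIRE_HUMAN_ABOVE = 100_000    # anything bigger needs explicit sign-off

def gate(action, human_approved=False):
    """Allow, escalate, or refuse an automated action.

    The gate is deliberately dumb: it does not learn, so the system it wraps
    cannot negotiate its way around the limits.
    """
    if action["value"] > MAX_ORDER_VALUE:
        raise FailSafeViolation("hard limit exceeded; action dropped")
    if action["value"] > REQUIRE_HUMAN_ABOVE and not human_approved:
        return "escalate-to-human"
    return "execute"

print(gate({"value": 50_000}))     # -> execute
print(gate({"value": 500_000}))    # -> escalate-to-human
try:
    gate({"value": 5_000_000})
except FailSafeViolation as e:
    print(e)                       # -> hard limit exceeded; action dropped
```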

We must assume the worst, but rather than merely hoping for the best, we should develop a set of design rules for AI that all programmers must adhere to, to ensure we never summon those demons.