Net Neutrality Vs. Title II – They Aren’t the Same

Since Title II passed, I’ve seen a lot of articles that either indicate buyer’s remorse or come from people who were always against Title II and are gloating that it will be overturned. For example, Wired ran an op-ed yesterday that used major points from Chairman Pai’s dissent against using Title II. Title II is clearly a divisive issue; the guys over at KBMOD, where I also write, are completely divided over its supposed benefits. I sincerely hope that when we look back at this debate we see it as a confusing bit of history because nothing happened: the Internet didn’t change and remained an open platform for everyone to use easily and equally.

Net Neutrality and Title II are not the same thing. Title II is an old law, originally written in 1934, that regulated a single monopoly in the hope of creating more competition. It wasn’t successful, but the legacy of Title II played an important role in the creation and development of the Internet. Title II was the policy regime under which ARPANET was developed: whenever a scientist at MIT wanted to use a graphically powerful computer in Utah, Title II was in full effect on that data connection. Furthermore, Title II was the law of the land for all of dial-up Internet, which was actually a very good thing. Local-loop unbundling meant that you could buy Internet service from a company other than your phone company. It was also likely, given how low their costs were, that these ISPs didn’t have to pay many of the taxes the phone company did. We already know that Title II has fostered, and can foster, a culture of innovation.

Net Neutrality is different from Title II: it was the architectural approach the initial designers took when creating the Internet. There were a few key reasons for this: it was easier, it required less computing power, and the majority of the early pioneers believed in what became the Open Source movement. Early on, it was the exception rather than the norm for scientists to patent their computer research. Most of these researchers were mathematicians and physicists who came from a military background (WWI and WWII and all), so between their educational culture and the secrecy required by the war effort, patenting wasn’t their habit.

In the 70s, providing preferential treatment to one packet of data over another would have required tools that simply prevented the data from arriving at its destination in a timely fashion. Remember, this was a time when the personal computer didn’t exist and computing was done on mainframes and terminals (interestingly, we’re going back to that a bit with the cloud). Routers would have had to be mainframes themselves to decode each packet and figure out what type of data it was before sending it to its next location. This was seen as a waste of computing power as well as an invasion of privacy. The point of packets was to help keep the data safe and secure as much as to maximize capacity on the lines connecting the computers.
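As a rough illustration of that design choice (this is not ARPANET code; the packet layout and field names are invented), a neutral router only has to read a packet’s header, while a prioritizing router must also decode the payload to classify the traffic:

```python
# Hypothetical sketch: header-only forwarding vs. payload inspection.
# The packet structure and field names are made up for illustration.

def forward_neutral(packet):
    # A neutral router reads only the header to pick the destination.
    return packet["header"]["destination"]

def forward_with_inspection(packet):
    # A prioritizing router must also decode and classify the payload:
    # far more work per packet, and a privacy problem besides.
    payload = packet["payload"].decode("utf-8", errors="ignore")
    priority = "high" if "video" in payload else "normal"
    return packet["header"]["destination"], priority

pkt = {"header": {"destination": "utah-mainframe"},
       "payload": b"video frame 001"}
print(forward_neutral(pkt))           # destination only
print(forward_with_inspection(pkt))   # destination plus a traffic class
```

On 1970s hardware, the second function’s extra decode-and-classify step per packet was exactly the kind of cost the designers refused to pay.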

One of the largest complaints about implementing Title II is that there isn’t enough economic evidence to support it. I believe that to be true to some extent; it’s hard to forecast something while it’s happening, especially since the FCC was unlikely to get legal access to the Netflix deals with Comcast and Verizon for equal (or maybe preferred) access to their lines. Netflix clearly showed that Comcast and Verizon were intentionally causing congestion they could easily resolve, and they did resolve it immediately after they got paid. With Comcast and Verizon planning to foreclose the video streaming market in this fashion and violating the spirit of Net Neutrality, some sort of regulation was needed to prevent that foreclosure.

I would rather not have had any regulation go into effect at all. However, I believe the actions Comcast and Verizon are taking are anticompetitive and anti-consumer. Time Warner Cable supposedly makes a 97% profit margin on its broadband service, which isn’t a surprise when you have a local monopoly or duopoly on broadband.

Could there have been a better way? Yes. The FCC could have taken action to force increased competition, such as setting a goal for every city in the US to have no fewer than three broadband providers and providing assistance to municipalities that wanted to build their own networks to meet that goal. Ironically, the one provision that would help with that, local-loop unbundling, was not included in the Title II rule. Unbundling would reduce the cost for a new ISP entering the market, since it wouldn’t have to build its own network, something that has slowed Google Fiber down considerably.

New FCC Rules and competition

A friend retweeted the Tweet below today, and it got me thinking about the broader context of the FCC rules that passed last Thursday.

Two things struck me about this tweet. First, it’s disappointing that the author doesn’t understand Title II better, considering he co-founded the EFF. Second, Title II as implemented was designed to do nothing about ISP competition. As I wrote on KBMOD this week, the Net Neutrality rule has no provision for “unbundling,” which would promote competition among ISPs at the local level. Unbundling, according to Wikipedia, is a regulation that requires existing line owners (such as Comcast) to open up their lines to anyone who wants to sell cable, Internet, or telephony access. Unbundling, under a much more restrictive Title II, is the only reason AOL was successful as a business model. Since this provision of Title II was forborne, Title II will not, in fact, promote competition among ISPs at all.

Instead, the FCC, at least in my opinion, looked at the Internet as a general-purpose platform technology. It was looking to ensure competition ON the technology, not between technology carriers. For example, the FCC wants to see as much competition as possible between companies like Netflix, Amazon Prime Video, Hulu, and Comcast’s Xfinity service. However, it wants to make sure that Comcast cannot foreclose the video delivery market by leveraging its existing monopoly in telecommunications. Foreclosure would mean Comcast creating rules or an environment where Netflix cannot compete and Comcast customers MUST use the Xfinity service because the alternatives don’t function well (foreclosure is what got Microsoft in trouble with web browsers).

The FCC did enact a rule that will impact competition at the local level, though. It’s a limited rule because it affects only Tennessee and North Carolina: it preempts state law by declaring it legal for municipalities to develop their own broadband networks. Broadband build-out is prohibitively expensive for an entrepreneur setting up a network, but with the backing of a municipality willing to share the risk and the reward, it might be possible to build a broadband network on a limited scale. Municipalities aren’t the ideal solution; it would be significantly preferable for other businesses to move into these areas and build new broadband networks, but unless they have a massive amount of money, like Google, that’s unlikely to happen. A bridge between the two is a public-private partnership, where a private enterprise, which has the telecommunications expertise, partners with a municipality, which has the demand and the financial support, to build a network.

With the ruling on municipal broadband being so limited, it’s not going to make much of an initial impact. However, it’s likely that other municipalities will try to jump on the bandwagon and overrule laws at the state level. (As a note, I’m not going to argue whether this is something the FCC has the authority to do; I’m just looking at the potential impact of the rule.)

Rule changes for network build-out grants

Recently there has been a serious debate between the FCC and the major telecoms about the minimum speed that counts as broadband. There’s clearly a strong disagreement between most customers and their ISPs. For the most part, rural ISPs are pretty terrible; if you live outside a major city, it’s unlikely you’ll have very fast Internet service. For a country of our size and population, an extremely large portion of our population does have access to the Internet, but we don’t have the deepest Internet penetration in the world, which, for a country of our wealth, is something of a shame. We’ve been investing through government grants since the mid-90s and haven’t seen the return on investment we expected. We paid for companies like Verizon and Comcast to build out our network, and by “we” I mean the taxpayers. We’re paying for them to get rich off of grants.

Internet Population and Penetration

Smaller countries like the Netherlands and the UK have significantly greater penetration. Sure, they have smaller populations than we do, but they also have significantly faster Internet speeds across the board, including in rural areas. South Korea has speeds an order of magnitude higher than ours, despite the fact that we’re a significantly richer country.

The FCC has now made one of its most positive moves in a really long time. As of today, the FCC has decided that the minimum speed for broadband must be 10 Mbps, a huge step in the right direction. This raises the minimum threshold a company’s investment must meet to earn a grant from 4 Mbps to 10 Mbps. This is the right direction for our country, and I’m really excited about the possibilities. It means the FCC is starting to understand that the telecoms don’t fully have our best interests in mind when they make their arguments. We’ll see what happens in the upcoming months.
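Back-of-the-envelope arithmetic shows what the new floor means in practice. Here’s the time to download a hypothetical 50 GB game at the old 4 Mbps minimum versus the new 10 Mbps one (using decimal units, so 1 GB = 8,000 megabits):

```python
# Rough arithmetic only: time to complete a 50 GB download at each
# broadband floor. 1 GB is treated as 8,000 megabits (decimal units).

def download_hours(size_gb, speed_mbps):
    megabits = size_gb * 8000            # GB -> megabits
    return megabits / speed_mbps / 3600  # seconds -> hours

print(f"50 GB at  4 Mbps: {download_hours(50, 4):.1f} hours")   # ~27.8
print(f"50 GB at 10 Mbps: {download_hours(50, 10):.1f} hours")  # ~11.1
```

Even at the new minimum, a big download is still an overnight affair, which is why 10 Mbps is a step in the right direction rather than the finish line.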

More than two sides, the complexity of a story

In a lot of my writing, I typically focus on one aspect of the story. For example, in my writing about Ferguson I really focused on the wrong I believed the police were doing. I didn’t really touch on the violence some protesters did to the community (contained to the first few days) or the violence they committed against the police. I didn’t ignore it personally, or as I was thinking through the articles; I just didn’t discuss it because it didn’t fit the story I was trying to outline. That’s perfectly fine; you can’t fit everything into any given story. However, that omission wasn’t support for those actions. I abhor that behavior, and I think it really hurt the protesters’ message.

The past few days, we’ve had some pretty serious leaks: over 100 celebrities have had their nude images exposed. The suspected culprit is iCloud. The iPhone, like most Android phones, has the option to automatically back up your photos to online storage. Apparently, there was a vulnerability in the Find My iPhone service that allowed a person to try as many times as they wanted to access an account. This meant that brute-force methods for cracking an account’s login would eventually work. It might have taken days or longer, depending on the algorithm used, but eventually it would have succeeded; there was no way for it not to. Essentially, the approach runs through as many permutations as possible for the login, and it could even be run concurrently on multiple systems to test guesses in parallel. It’s pretty horrible that someone was able to sneak into iCloud and steal these pictures; however, it’s also incumbent on the users of these systems and the owners of the systems to ensure these simple lapses don’t happen.
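To sketch the idea (this is not Apple’s code; the weak password and the check function are hypothetical stand-ins), an endpoint that never locks out failed attempts can be beaten by pure enumeration:

```python
import itertools
import string

# Hypothetical sketch of a brute-force attack on a login endpoint
# that allows unlimited attempts. The 4-letter lowercase password
# and the check function stand in for a real service.

SECRET = "abcz"

def check_login(guess):
    return guess == SECRET

def brute_force(max_len=4):
    # With unlimited attempts, just enumerate every permutation of
    # lowercase letters until one works; only time stands in the way.
    for length in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=length):
            guess = "".join(combo)
            if check_login(guess):
                return guess
    return None

print(brute_force())  # finds "abcz" after roughly 19,000 guesses
```

A service that locks the account or throttles after a handful of failed attempts makes this enumeration impractical, which is exactly the kind of simple lapse described above.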

The users of these services bear a responsibility for understanding what happens to their data once it leaves their phones. This is true for any user, not just the famous. The famous probably should have someone help them with their security settings, since it’s unlikely many of them have the desire or the knowledge to do it on their own. Not that this is any different for much of the rest of the population: they are as vulnerable as the famous, but aren’t targets simply because they’re uninteresting.

In both cases, it’s fully acceptable to be upset by both sides of the story. It’s not impossible to say that police violence and militarization are bad and that the criminal element of the Ferguson protests is bad too. It’s also fine to say that you shouldn’t hack and that the people who develop and use these systems are accountable as well. In most of our stories, complexities are withheld or ignored because there’s an angle the writer is going for, the story would take too long, or the writer has a low opinion of the readers. In my case, I was going for a specific angle with the Ferguson stories because I assumed it was obvious to the reader that the violence committed by the protesters was both known and understood to be a terrible wrong. Not mentioning it did make the police seem less rational than they were behaving, though.

In the case of the leaks, most of the attention has been put on the leaker and the people enjoying the leaks; however, it’s important to keep in mind that the companies have a responsibility to keep that data safe.

Net Neutrality, Let your Voice be heard

The FCC is currently taking comments on the net neutrality issue. Please contact them. The agency is completely overwhelmed with feedback on Net Neutrality, but even so, more voices might help tip the scales that are pretty obviously stacked against us, like the scales used to weigh whether the witch weighs the same as a duck in Monty Python and the Holy Grail. One of the most important things about net neutrality is the scope of what the ISPs actually own in this debate. They are dictating the terms of the debate through money: they make the most, they charge the most, and they have monopolies. This cartoon really helps explain what the ISPs actually own (click here or the picture to see the full comic).

Economix comix depiction of Net Neutrality

That’s right: if you live off a street marked “road not maintained by such and such county” (lots of them where I live), THAT’s the portion the ISP maintains. That’s why a lot of these arguments are over “the last mile,” basically the stretch of line from the ISP’s facilities, which connect to the backbone of the Internet, to your house. In other industries that stretch might be maintained by one company, but any company can use it. Think back to when you had a modem: you could have that service provided by ANYONE. That’s why AOL got so big; they offered free time to just about everyone, and anyone could sign up because the last mile wasn’t exclusively controlled by your phone company and had to be shared. DSL still has that requirement; only cable and FiOS don’t, because they were classified as an “information service” rather than as common carriers. The highway above is a common carrier.

If you’d like to see this changed, please go to the FCC and comment (if you can). The link is here: http://www.fcc.gov/comments. Click on 14-28 and try to leave a comment.

It’s up to us to fight for net neutrality. I’ve left at least two comments. I’ve signed several petitions. I’ve donated to mayone.us, all because one of these alone isn’t enough. I’ve contacted my Senator, and I know he supports Net Neutrality. If your company is an Internet company or uses a large amount of bandwidth on a regular basis, see if it will come out in support of Net Neutrality. It’s the only way we’ll win; we need overwhelming support.

Driverless cars aren’t without ethical quandaries

While driving home the other day I was thinking about Google’s new driverless car, pictured below. Apparently, one of the reasons Google went fully autonomous was that people would at first be hypervigilant, then become so lazy that they completely trusted the car in any and every situation.

Google’s fully automated driverless car

I believe it’s likely that the first round of driverless cars won’t be fully automated. Data will eventually show that fully automated cars are perfectly safe, but we’re a paranoid lot when it comes to new technology. There are also definite risks with a fully autonomous car when it comes to hacking and spoofing the system; I have a feeling it will become a game among hackers to trick the car into thinking a direction is safe when it actually isn’t. To continually combat these risks, Google will have to make it very easy to update the software, possibly while driving, as well as the hardware. I believe this is one of the many reasons Google just announced the 180 Internet satellites it will be launching soon.

However, I think that the best of intentions will likely lead to some serious issues for Google and lawmakers in the next few years; an author at the Guardian wrote about a few of them. That said, I think the first cars will not be fully automatic until enough data comes in to show they are safe at highway speeds consistently, and I think this will lead to issues for Google.

One thing the Guardian article above misses is that if you’re an Android user, those very things could happen already. Your phone tracks not just GPS but also nearby cell towers, so one could very easily subpoena either Google or your cell provider for records of your whereabouts. However, the interesting thing Google talks about in regard to safety is that drunk driving will be a thing of the past.

As I mentioned before, I think there will be a manual mode, and I think there will have to be one for a while because of definite hacker threats; you’d need an override. I also think this would require a mechanical switch that literally overrides the system: the system would still run, but it would not be able to overrule the human driver. Maybe I’m just paranoid, but I don’t think anyone can create a truly secure vehicle like this, and if one is compromised, all of them are under the exact same risk.

Now, let’s say a guy goes out drinking. Google knows where he is. Google knows that he took pictures of his shots, Instagramming “#drinktilyoublackout!” Google also knows that he texted a few friends through Hangouts’ fully integrated texting. Furthermore, he tweets to @Google, “Getting black out drunk no #DD #DriverlessFTW”. This guy then gets into the car, switches it to manual override for whatever reason, and gets in an accident. Who is at fault here? Clearly the guy that’s driving, right? Well, if he had a fully automated car with no other option, he wouldn’t have hurt anyone. Google knows everything he’s doing; Google already knows everywhere you go because of how its devices work. The difference now is that it can control where you’re going and how you get there.

Is Google responsible for building a car with a manual override that could save people’s lives in other instances? Is the State responsible for mandating that Google put in that switch? Should Google have built in safety measures that make the user go through a series of actions or prove the driver is capable of overriding the car?

I think we need to hash all of this out before these cars are allowed on the road. It’s also going to be vitally important that we understand what happens with the data from all our cars, who can access it, and whether we really have any privacy in a fully automated car. Simply by participating in our culture with a cell phone, we’ve already eroded our privacy a great deal in both the public and private realms. Driverless cars will erode it further and will likely become a highly political issue over the next several years. Taxis, Lyft, and Uber could be put out of business; the Car2Go model will beat them any day of the week if the cars are autonomous. Direct-to-customer sales, like Tesla’s, are a pretty obvious fit. Lots of changes are going to happen because of these cars.

We can’t just let this happen to us, we need to make decisions about how we want to include driverless cars in our lives. They aren’t inevitable and definitely not in their current incarnation.

Comcast and regulation

I believe the 300 GB data cap Comcast is tossing around is tomorrow’s version of the “640K ought to be enough” prediction. In five years, when Comcast claims it will implement it, 300 GB will be woefully small. As it stands, many games are 50 GB and will only grow, as will the size of the movies we stream and the other services that will develop over the next five years.
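A rough monthly tally makes the point. The streaming figure is an approximate public estimate (HD video at roughly 3 GB per hour), and the 50 GB game size comes from the text above:

```python
# Back-of-the-envelope monthly usage against a 300 GB cap. The
# per-hour streaming figure is an approximation, not a measurement.

CAP_GB = 300

usage_gb = {
    "two 50 GB game downloads": 100,
    "HD streaming, 2 hrs/day at ~3 GB/hr": 2 * 3 * 30,
    "updates, browsing, cloud backups": 50,
}

total = sum(usage_gb.values())
print(f"Estimated use: {total} GB vs. a {CAP_GB} GB cap")
print("Over the cap!" if total > CAP_GB else "Under the cap")
```

A fairly ordinary household blows through the cap today; with game sizes and streaming quality growing, the cap only gets tighter from here.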

Comcast’s arrogant attitude toward its customers can only be described economically in one way: market failure. If we had a strongly competitive telecom market, Comcast would not be able to dictate prices this way. We know this is true because we can see prices AND speeds that are significantly better elsewhere in the world.

This market failure has two other results: pushing regulation that prevents competition, and preventing regulation that would stop the foreclosure of another market. I’ll start with the regulation that prevents competition.

In my last blog I mentioned an idea called a public-private partnership: a municipality working with a private enterprise to spread the risk of implementing a local high-speed network when one of the big players won’t. Comcast and other telecoms have pushed, successfully, to make these partnerships illegal in a few states. This means a small rural community can’t develop its own fiber network if Comcast won’t do it for them, and a big city like New York couldn’t either. This type of regulation only hurts competition and helps Comcast control the market. In the US these partnerships have worked well; Provo, Utah, sold its network to Google.

The other way Comcast is using this market failure is to push the idea that net neutrality is regulation. It is, a bit, because it prevents Comcast from using its monopoly to foreclose another market. This is what Microsoft got in trouble for with Internet Explorer: leveraging the Windows monopoly to push out other browsers. In the EU, the ruling against Microsoft helped other browsers immediately. Comcast will likely try a similar tactic with its Xfinity platform by never having it count against your data cap, pushing people to its platform and squeezing out Netflix.

The play to get Netflix to pay them is a long-term one: it hurts Netflix now, but it will essentially fund further development of Xfinity. Don’t forget, Xfinity will likely get Universal content earlier because Comcast owns that content. This will give the platform a distinct advantage over Netflix. It’s pretty obvious to everyone that the future of in-home entertainment is streaming content; hence Google looking to buy Twitch.

Comcast is using the anti-regulation faction to fight net neutrality while leveraging that same group’s anti-government sentiment to prevent novel forms of competition, exploiting customers and moving into new markets. This is a dangerous problem because Comcast will keep doing this to push out competitors.