New FCC Rules and competition

A friend retweeted the tweet below today, and it got me thinking about the broader context of the FCC rules that passed last Thursday.

Two things struck me about this tweet. First, it's disappointing that the author doesn't understand Title II better, considering he co-founded the EFF. Second, Title II as implemented was designed to do nothing about ISP competition. As I wrote on KBMOD this week, Net Neutrality has no provision for "unbundling," which would promote competition among ISPs at the local level. Unbundling, according to Wikipedia, is a regulation that requires existing line owners (such as Comcast) to open up their lines to anyone that wants to sell cable, internet, or telephony access. Unbundling, under a much more restrictive Title II, is the only reason that AOL was successful as a business model. Since this provision of Title II was forborne, Title II will not, in fact, promote competition among ISPs at all.

Instead, the FCC, at least in my opinion, looked at the Internet as a general purpose platform technology. It was looking to ensure competition ON the technology, not between technology carriers. For example, the FCC wants to see as much competition as possible between companies like Netflix, Amazon Prime Video, Hulu, and Comcast's Xfinity service. However, it wants to make sure that Comcast cannot foreclose on the video delivery market by leveraging its existing monopoly in telecommunications. Foreclosure would mean Comcast creating rules or an environment where Netflix cannot compete and Comcast customers MUST use the Xfinity service because the alternatives don't function well (foreclosure is what got Microsoft in trouble with web browsers).

The FCC did enact a rule that will impact competition at the local level, though. It's a limited rule because it affects only Tennessee and North Carolina: it preempts state law by declaring that it is legal for municipalities to develop their own broadband networks. Broadband build-out is prohibitively expensive for an entrepreneur setting up a network alone, but with the backing of a municipality willing to share the risk and the reward, it might be possible to build out a broadband network on a limited scale. Municipalities aren't the ideal solution to this; it would be significantly preferable if other businesses moved into areas and built new broadband networks. However, unless they have a massive amount of money, like Google, that's unlikely to happen. A bridge between the two is a public-private partnership, where private enterprise, which has the telecommunications expertise, partners with a municipality, which has the demand and financial support, to build a network.

With the ruling on municipal broadband being so limited, it's not going to make much of an initial impact. However, it's likely that other municipalities will try to jump on the bandwagon and overrule laws at the state level (as a note, I'm not going to argue whether this is something the FCC has the authority to do; I'm just looking at the potential impact of the rule).

Rule changes for network build-out grants

Recently there has been a serious debate between the FCC and the major telecoms about the minimum speed that qualifies as broadband. It's pretty obvious that there's a strong disagreement between most customers and their ISPs. For the most part, rural ISPs are pretty terrible; if you live outside of a major city, it's unlikely that you'll have very fast internet service. For a country of our size and population, we have an extremely large portion of our population with access to the internet, but we don't have the deepest internet penetration in the world, which, for a country of our wealth, is something of a shame. We've been investing through government grants since the mid-1990s, and we haven't seen the return on investment we'd expect as investors. We paid for companies like Verizon and Comcast to invest in our network, and I mean we as in the taxpayers. We're paying for them to get rich off of grants.

Internet Population and Penetration

Smaller countries like the Netherlands and the UK have significantly greater penetration. Sure, they have smaller populations than we do, but they also have significantly faster internet speeds across the board, including in rural areas. South Korea has speeds an order of magnitude higher than ours, despite the fact that we're a significantly richer country.

This is one of the most positive moves the FCC has made in a really long time. As of today, the FCC has decided that the minimum speed for broadband must be 10 Mbps, which is a huge step in the right direction. It raises the minimum threshold a company must meet to earn a build-out grant from 4 Mbps to 10 Mbps. This is the right direction for our country, and I'm really excited about the possibilities. It means that the FCC is starting to really understand that the telecoms don't fully have our best interests in mind when they make their arguments. We'll see what happens in the upcoming months.

Comcast and regulation

I believe that the 300 GB data cap that Comcast is tossing around is tomorrow's version of Microsoft's famous 640K prediction. In five years, when Comcast claims it plans to implement it, 300 GB will be woefully small. As it stands, many games are 50 GB and will likely only grow, as will the size of the movies we stream and whatever other services develop in the next five years.
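To put the cap in perspective, here's a quick back-of-the-envelope projection. The 20% annual growth rate for game sizes is my own illustrative assumption, not an industry figure:

```python
# How many 50 GB game downloads fit under a 300 GB monthly cap today,
# and how big games might be in five years if sizes keep growing.
# The 20% annual growth rate is an illustrative assumption.

CAP_GB = 300
GAME_GB_TODAY = 50
ANNUAL_GROWTH = 0.20  # assumed, for illustration only

games_per_month_today = CAP_GB / GAME_GB_TODAY
game_gb_in_5_years = GAME_GB_TODAY * (1 + ANNUAL_GROWTH) ** 5

print(games_per_month_today)         # 6.0 full game downloads eat the cap
print(round(game_gb_in_5_years, 1))  # ~124.4 GB per game
print(CAP_GB // game_gb_in_5_years)  # only 2.0 whole downloads per month
```

Even under this modest growth assumption, the cap goes from "six games a month" to "two games a month" before you've streamed a single movie.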

Comcast's arrogant attitude towards its customers can only be described economically in one way: market failure. If we had a strong, competitive telecom market, Comcast would not be able to dictate prices in this way. We know this is true because we can see prices AND speeds that are significantly better elsewhere in the world.

There are two other results of this market failure: pushing regulation that prevents competition, and blocking regulation that would prevent foreclosure of another market. I'll start with the regulation preventing competition.

In my last blog I mentioned an idea called a public-private partnership: a municipality working with a private enterprise to spread the risk of implementing a local high-speed network because one of the big players won't. Comcast and other telecoms have pushed to make these partnerships illegal, and have succeeded in a few states. This means a small rural community can't develop its own fiber network if Comcast doesn't do it for them; it also means a big city like New York couldn't either. This type of regulation only hurts competition and helps Comcast control the market. In the US these partnerships have worked well; Provo, Utah, sold its network to Google.

The other way that Comcast is using this market failure is to push the idea that net neutrality is burdensome regulation. It is regulation, to be fair, but only in that it prevents Comcast from using a monopoly to foreclose another market. This is what Microsoft got in trouble for with Internet Explorer: leveraging the monopoly of Windows to push out other browsers. In the EU, the ruling against Microsoft helped other browsers almost immediately. Comcast will likely try a similar tactic with its Xfinity platform by never having it count against your data cap, pushing people to its platform and squeezing out Netflix.

The play to get Netflix to pay them is a long-term play: it hurts Netflix now, but it will essentially fund further development of Xfinity. Don't forget, Xfinity will likely get Universal content earlier, since Comcast owns that content. This will give their platform a distinct advantage over Netflix. It's pretty obvious to everyone that the future of in-home entertainment is streaming content. Hence Google's reported interest in buying Twitch.

Comcast is using the anti-regulation faction to fight net neutrality while leveraging that same group's anti-government sentiment to block novel forms of competition, all in order to exploit customers and move into new markets. This is a dangerous problem, because Comcast will keep using this playbook to push out other competitors.

Looming battle: Content providers vs. service providers

In my last post about the PS4, I discussed how the PS4 is a long-term play and that, over time, the product will move away from playing games directly on the PS4 towards streaming them from servers to the user. This was an argument to counter many PC gamers' disdain for the system's specs. Sure, the specs aren't great, but they are a huge advancement over the PS3, which still plays new games rather well.

Most of the feedback I got on the article basically went, "well, that's great and all, but the infrastructure isn't there for this in the US." This is extremely valid feedback. AOL still records $500 million in revenue from dial-up connections. The US rates among the worst in the developed world for internet speeds and penetration. Of course, there's the argument that our country is so much larger; well, the EU as a whole tops us, and while it's not uniform across the EU, it still makes a valid comparison. The other thing to remember is that the console won't just come out in the US. Many of these features will work better in Korea and Japan than in the US. Sony has typically released different features by region, and will likely experiment with the sharing features in Japan before rolling them out to the US, where Sony knows it will have infrastructure difficulties.

This discussion raises additional concerns, though: infrastructure isn't just about the lines in the ground, but also the structure of the service providers that allow access. In the US, not only do the quality and speed of the connection vary wildly, but we also have more restrictions on the amount of data we can download than other countries. A typical family ends up buying the internet two or three times at a minimum (smartphone access per family member, plus the main house connection). Each of these connections likely has a different maximum for downloading or uploading, with fees for going over it.

This creates a lot of difficulties, as we don't always know how many bits a specific file will use as we access it. In many cases, it likely drives consistent underutilization of the service due to excessive fees, and dissatisfaction for the users hitting the cap. Americans are starting to cut the cord in record numbers; my wife and I don't have TV, just cable internet, and I have a lot of options without cable. This is going to increase users' frustration with caps. I typically watch live streaming video in 720p while my wife surfs the net and watches a show on Hulu.

I have absolutely no idea how much bandwidth is being consumed on a typical night. There is no easy way for me to measure this or to plan for getting close to a cap. Furthermore, both my wife and I use our phones to access the internet, listen to music, watch videos, and play games. Again, all of these use bandwidth and likely push us against our cellular plan. Sure, there are meters for these, but they are notoriously inaccurate.
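Since there's no easy way to measure it directly, the best most of us can do is a rough estimate from typical bitrates. The bitrates below are my own assumptions for illustration, not measured values:

```python
# Rough estimate of one evening's household data use from assumed bitrates.

MBPS_720P_LIVE = 3.0  # assumed bitrate for a 720p live stream
MBPS_HULU_SHOW = 2.5  # assumed bitrate for an HD show on Hulu
MBPS_BROWSING = 0.5   # assumed average for general web surfing

HOURS_PER_NIGHT = 3
total_mbps = MBPS_720P_LIVE + MBPS_HULU_SHOW + MBPS_BROWSING

# megabits -> gigabytes: seconds * Mbps / 8 bits per byte / 1000 MB per GB
gb_per_night = HOURS_PER_NIGHT * 3600 * total_mbps / 8 / 1000
gb_per_month = gb_per_night * 30

print(round(gb_per_night, 1))  # ~8.1 GB per evening
print(round(gb_per_month, 1))  # ~243.0 GB per month
```

Even with these conservative numbers, two simultaneous streams put a household uncomfortably close to a 300 GB cap before counting game downloads, phone tethering, or cloud backups.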

This issue will be further exacerbated by the proliferation of cloud services like Dropbox, video sharing on YouTube, new streaming services appearing all the time, and the eventual goal of offloading computing power to the cloud. Measuring these services will be extremely difficult, and planning for how much data they will require will be absurdly difficult at best for the average user. It is likely that these services will push users over their usage caps on a monthly basis.

I think that we need to start looking for another solution. Google Fiber is a start, and it would make sense for Netflix, Amazon, Dish Network, Microsoft, Intel, and other content providers to join a consortium that introduces a new service provider to attack the incumbents. I have heard that Dish is currently working on creating its own system with Google or some other company; I think this could potentially shake up the industry and give users more options. There are going to be a wealth of new services that require more and more bandwidth and higher speeds. If these content providers want users to be able to access and enjoy their services, they need to challenge the status quo to enable their customers.

Ubiquitous free high speed wireless: Society

In my last post I discussed the impact of ubiquitous free high-speed wireless internet on the computing industry. In this post I'll discuss some of the societal changes. In some ways the societal changes may be smaller, at first, than we'd anticipate.

First, we've seen how much people have jumped on playing with their phones in public spaces. I fully expect this trend to continue and, in fact, to accelerate. Simple-to-play games like Angry Birds will become more advanced and will likely look better. People will do more work on their phones and will likely begin using video calls in public, which will be annoying, but it's going to happen.

There may be a wave of apps that try to increase the amount of social interaction between players. This doesn't mean that we'll have an increase in in-person social interaction, but there will likely be an increase in virtual social interaction, which, for some people, is significantly better than what would happen otherwise.

I think that ubiquitous internet will have a mixed impact on the ability to do work. As it is, a lot of people already spend a great deal of time working from home off the clock. This will likely increase, but I think there will be a trade-off, as people will, hopefully, be able to work more easily while commuting on trains and buses. People will also begin to work in places like cafes more than they currently do.

There will be other changes as new devices and applications are created to take advantage of the high-speed internet; many of these changes will only emerge as those devices are developed.

I would like to be completely optimistic that greater access to the internet will lead to a larger amount of user-created content, and that the increase in wireless internet will increase personal engagement in political and social activities, but I don't think it will. I think there will be a small increase, because a larger number of people who weren't able to participate before will be able to.

I think that a high level of engagement in social networks, content creation, and other activities will take some time to occur, largely because of mindset: a lot of people have no desire to become involved in these things. I would like to imagine that these changes will happen overnight. However, they will not. People will need time to understand how to exploit this infrastructure. It will take time for unique social experiments to develop using the network. Some people will understand immediately how to create new tools for the new environment, but it will take many established firms time to fully exploit it.

It will also take people time to adapt to the change. It's not obvious in what ways the average user will exploit this technology. In many ways it will just increase the amount of general web browsing going on; video viewing will increase as well.

In this series I've looked at how our government, business, computing, and social environments will change based on ubiquitous free wireless internet. There will be immediate changes and longer-term changes that currently fall into the realm of science fiction. Device makers and app developers will have a new world to exploit because of increases in computing power, both local and remote. Creating novel ways to use this power is what will drive the next phase of our economy.

Ubiquitous free high speed wireless

One of the people I follow on Twitter posed an interesting question: what would happen if there were free broadband wireless all over Europe? I sent them my 140-character answer but felt really unsatisfied by that, so I'm going to devote some blog space to it over the next few days, because I think there would be a lot of changes. I'm going to break this into a few sections; I haven't worked out all of them, but there will be governmental, business, computing, and social changes. This structure loosely follows some of the structure within Lawrence Lessig's Code 2.0, which also argued there are four structures that impact community building on the internet. It is written in the US context, but can be applied in other countries.

I’m going to start with Governmental changes.

One of the first things that will happen will be further encroachment on users' ability to be anonymous and use pseudonyms online. Initially the requirement to log in will be used to track which areas have the highest usage rates and the like, but it could become an incredibly powerful tool to police copyright abuse by users of the network. IP addresses would go out the window as an enforcement tool for nearly any online abuse. For instance, the safest place to download a movie would be on the train: you'd be changing IP addresses frequently, and it would be very difficult to track a single user from one IP address to the next.

To deal with these problems, there would have to be strict oversight to protect users of the network from invasions of privacy by the government and by third-party users of the network. Currently, the US government has a significantly heavy hand in collecting data from ISPs, cloud services, and social networks, covering both European and US data. This would need to be prevented.

Paying for and managing this network would need to be determined as well. One route could be to put a tax on advertisements displayed within an IP address range; since IPs are distributed by region, this would be technically possible. Google just announced that it made $9.7 billion, and nearly all of that is from ads (99% was ad revenue in 2008). Putting a modest tax on this revenue would help pay for the network. Assuming the infrastructure would need to be rolled out and continually upgraded, I would expect at least $2-3 billion in annual investment to be required; I'm basing this on how much Verizon Wireless and AT&T invest in their networks annually. This would of course change based on the amount of capacity required (a lot) and the technology used (Wi-Fi, WiMAX, LTE).
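As a rough sanity check on those numbers, here's the implied tax rate. Treating the $9.7 billion as quarterly revenue is my own assumption (the period isn't specified above), as is using the midpoint of the $2-3 billion cost estimate:

```python
# Implied ad-tax rate needed to fund the network from the figures above.
# Assumption: the $9.7B announcement is quarterly revenue, so annualize by 4.

AD_REVENUE_QUARTERLY_B = 9.7   # $ billions, from the cited announcement
annual_ad_revenue_b = AD_REVENUE_QUARTERLY_B * 4
network_cost_b = (2 + 3) / 2   # midpoint of the $2-3B annual estimate

tax_rate = network_cost_b / annual_ad_revenue_b
print(f"{tax_rate:.1%}")  # ~6.4% of ad revenue
```

If those assumptions are even roughly right, a single-digit percentage tax on ad revenue covers the annual investment, which is arguably "modest" in the sense used above.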

Since this will effectively kill the business model of telecoms like T-Mobile and KPN, they could be used to help manage the network. Governments aren't the best at managing these networks, and these old companies would be best suited to do it; alternatively, an organization could be created from their former employees.

Finally, the network would have to be net neutral. Otherwise, any reduction in access to a portion of the web would effectively be government censorship. This means the internet would be free as in free beer and free as in free speech, which would ensure the most positive results from the free internet on the business side and improve users' ability to participate in democracy.

Biggest changes? Management of the network, increased privacy concerns, paying for the network, and copyright owners' influence on data controls.

In my next blog I’ll discuss how this would change the business environment.