Looming battle: Content providers vs. service providers

In my last post about the PS4, I discussed how the PS4 is a long-term play and that over time the product will move away from running games directly on the PS4 toward using servers to stream the game to the user. This was an argument to counter many PC gamers' disdain for the system's specs. Sure, the specs aren't great, but they are a huge advancement over the PS3, which can still play new games rather well.

Most of the feedback I got on the article basically went, "Well, that's great and all, but the infrastructure isn't there for this in the US." This is extremely valid feedback. AOL still records $500 million in revenue from dial-up connections. The US rates among the worst in the developed world for internet speeds and penetration. Of course there's the argument that our country is so much larger, but the EU as a whole tops us; speeds aren't uniform across the EU, yet it's still a valid comparison. The other thing to remember is that the console won't just come out in the US. Many of these features will work better in Korea and Japan than in the US. Sony has typically released different features by region and will likely experiment with the sharing features in Japan before rolling them out to the US, where Sony knows it will have infrastructure difficulties.

This discussion raises additional concerns, though: infrastructure isn't just about the lines in the ground, but also the structure of the service providers that allow access. In the US, not only do the quality and speed of connections vary wildly, we also have more restrictions on the amount of data we can download than many other countries. A typical family ends up buying internet access two or three times at a minimum (a smartphone data plan per family member plus the main home connection). Each of these connections likely has a different maximum for downloading or uploading, with fees for exceeding it.

This creates a lot of difficulties, as we don't always know how much data a specific file will use as we access it. In many cases, it likely drives consistent under-utilization of the service due to excessive fees and dissatisfaction among users who hit the cap. Americans are starting to cut the cord in record numbers; my wife and I don't have TV, just cable internet, and I still have plenty of viewing options without cable. This is going to increase users' frustration with caps. I typically watch live streaming video in 720p while my wife surfs the net and watches a show on Hulu.

I have absolutely no idea how much bandwidth is being consumed on a typical night. There is no easy way for me to measure this or plan for getting close to a cap. Furthermore, both my wife and I use our phones to access the internet, listen to music, watch videos, and play games. Again, all of these use bandwidth and likely push us up against our cellular plan's limit. Sure, there are meters for these, but they are notoriously inaccurate.
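To show why planning around a cap is so hard, here is a back-of-the-envelope sketch of a household's monthly usage. The bitrates, hours, and the 250 GB cap are assumptions for illustration, not measurements from my own connection.

```python
# Rough estimate of monthly household data use versus a cap.
# All bitrates, hours, and the cap below are assumptions for illustration only.

GB_PER_HOUR = {
    "720p live stream": 1.5,       # assumes roughly a 3-4 Mbps stream
    "Hulu show": 1.0,
    "music streaming": 0.1,
    "web browsing": 0.05,
}

HOURS_PER_NIGHT = {
    "720p live stream": 3,
    "Hulu show": 2,
    "music streaming": 1,
    "web browsing": 2,
}

CAP_GB = 250  # a common cable cap at the time; adjust for your own plan

nightly_gb = sum(GB_PER_HOUR[k] * HOURS_PER_NIGHT[k] for k in GB_PER_HOUR)
monthly_gb = nightly_gb * 30

print(f"Estimated nightly use:  {nightly_gb:.1f} GB")
print(f"Estimated monthly use:  {monthly_gb:.1f} GB of a {CAP_GB} GB cap")
print(f"Headroom before fees:   {CAP_GB - monthly_gb:.1f} GB")
```

Even a rough estimate like this, which already lands within sight of the cap, leaves out game downloads, OS updates, and cloud backups, which is exactly why hitting the limit comes as a surprise.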

This issue will be further exacerbated by the proliferation of cloud services like Dropbox, video sharing on YouTube, new streaming services appearing all the time, and the eventual goal of offloading computing power to the cloud. Measuring the usage of these services will be extremely difficult, and planning for how much data they will require will be absurdly difficult at best for the average user. It is likely that these services will push users over their usage caps on a monthly basis.

I think we need to start looking for another solution. Google Fiber is a start, and it would make sense for Netflix, Amazon, Dish Network, Microsoft, Intel, and other content providers to join a consortium that introduces a new service provider to attack the incumbents. I have heard that Dish is currently working on creating its own system with Google or some other company; this could potentially shake up the industry and give users more options. There is going to be a wealth of new services that require more and more bandwidth and higher speeds. If these content providers want users to be able to access and enjoy their services, they need to challenge the status quo to enable their customers.

PS4 may not be as bad as everyone thinks

The PS4 was announced yesterday, 2/20/2013, and was immediately pummeled by the media and on social networks. I think this might be a touch premature. Why? I'll list out a few different reasons and let the reader decide if I'm off my rocker.

First, streaming to the PS Vita. Commentators have already compared this to the Nvidia Shield, and while the comparison is accurate, I think it misses part of the point behind this capability. The true purpose is to get people used to the idea of streaming a video game from one system to another. We are used to doing this with video already, but we aren't used to truly playing something that is entirely run on a different system than the one we're interacting with.

Second, play while downloading. This feature again helps us get used to the idea of streaming a game from a server. Sony acquired Gaikai a while back, which enables running a game on the server. Initially offering server-side play only while a game downloads is a very safe way for Sony to test server-side system requirements, manage capacity by limiting the number of concurrent users, and develop an understanding of how gameplay feels when thousands of people are playing the same game over an internet connection.

Third, console gaming systems have always had lower specs than bleeding-edge gaming PCs. However, the platform is stable and encourages developers to figure out new ways of exploiting the technology; they don't have to worry about continually changing systems. On top of that, developers will eventually begin to exploit the combination of the CPU and GPU using OpenCL and figure out new ways to eke more out of less using that technology.
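For readers unfamiliar with what "exploiting the combination of the CPU and GPU using OpenCL" looks like, here is a minimal sketch using PyOpenCL. It is a generic vector-add example that runs on whatever OpenCL device is available, not PS4 code, and it assumes an OpenCL runtime is installed.

```python
import numpy as np
import pyopencl as cl

# Set up an OpenCL context and command queue (picks an available CPU or GPU device).
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Two input vectors prepared on the host (CPU side).
a = np.random.rand(50_000).astype(np.float32)
b = np.random.rand(50_000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# A trivial kernel: each work item adds one pair of elements on the device.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

# Copy the result back to the host and verify it.
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```

The same kernel code runs on a CPU or a GPU, which is the appeal on fixed hardware: developers can keep shifting work to whichever part of the chip has headroom.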

Fourth, in five years it won't matter what is under the hood of the PS4. Not because no one will be playing it, but because Sony will have acclimated users to streaming over the Vita. Sony will have acclimated users to streaming from a server while games download. Sony will have determined the server requirements to host all games and stream them to the PS4. It's likely there will be some experimental games that allow playing either client side or server side, but eventually there will be a game that is server side only. It will be a big game, and it will begin to push all other games to the server. At this point Sony will have optimized the PS4's hardware to display higher-quality gameplay coming over the internet.

The PS4 is not just the next console for Sony; it could well be the "last" console for Sony, as it develops new ways for users to access games and continually "upgrade" their console while the server-side technology for game streaming matures. This of course eliminates the need to tie a disc to a specific system and even removes the need to download any content. You buy a shortcut and you can play immediately.

So, is the PS4's hardware going to kill the PS4? No. The hardware on this system isn't the point; the goal is to enable access to games that will be streamed from the cloud.

Ubiquitous free high speed wireless: Computing

In my last two blogs, Government and Business, I discussed some of the impacts of ubiquitous high-speed wireless internet on our society. In this post I'll look at the future of the computing industry. I think this industry will go one of two ways, or perhaps both at the same time. The first route is obvious and is already happening; the second will probably begin as a backlash to the first.

The obvious route is cloud computing. As I've said, we're already going down this route. The best example of the speed of this transformation is the Amazon Kindle Fire. Basically, we will be using less powerful (though still improving) devices and pushing the more processor-intensive applications out onto a server in the cloud, most likely one owned by some private organization. Amazon's Fire is a great example of this because it can browse websites at a much faster rate than current network speeds would otherwise allow. Even with high-speed internet this may continue to matter, because the server fits the website to your screen and delivers it even faster than the high-speed network alone could.

However, many people are skeptical of cloud computing. There is a sense of a loss of ownership: you become locked in to a specific firm to provide the required services. End-user license agreements change frequently, and your true ownership of the data and information you place on their servers can change unexpectedly and in ways that aren't in users' favor. Additionally, both Google and Microsoft have acknowledged that all data on their cloud servers is subject to the US Patriot Act. This raises privacy concerns for the EU and for firms using cloud services.

I think these concerns will drive another type of cloud computing: something like a personal cloud. It will be similar to working with both a desktop and a laptop at the same time and remoting into the desktop from the laptop, but it will be done seamlessly and transparently. The ownership of the data will be clearly yours, and the arrangement will effectively take a phone or low-power tablet and turn it into a fully powered desktop computer. This way the cloud won't be "out there"; it will be easily controlled by the end user. You won't have to worry about the Patriot Act, a company going under, changing rates, or other issues like that.
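A crude sketch of that personal-cloud idea: a low-power device hands a heavy job to your own desktop over SSH and gets the result back. The hostname and the command here are hypothetical placeholders, and a real seamless version would hide all of this behind the operating system.

```python
import subprocess

# Hypothetical example: offload a heavy job from a tablet or laptop to your own
# desktop ("my-desktop.local") over SSH, rather than to a third-party cloud.
# The hostname and the command are placeholders for illustration.

def run_on_my_desktop(command: str) -> str:
    """Run a shell command on the user's own desktop and return its output."""
    result = subprocess.run(
        ["ssh", "my-desktop.local", command],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    # e.g., let the desktop's CPU/GPU do the work instead of the tablet's.
    print(run_on_my_desktop("ffmpeg -version | head -n 1"))
```

The point is that the "server" is a machine you own, so the questions about data ownership and the Patriot Act never come up.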

Both of these changes will be disruptive within the computing industry. The Kindle Fire is on the cutting edge of this, and I fully expect Amazon to create additional applications that run on the Amazon cloud system; there's no reason not to expect this. It will shift how apps are developed. It will also change who is in the game of creating computers. Dell, for example, will continue to have a major hold over both servers and personal computers; however, as we move away from laptops to tablets and phones over time, Dell is going to fail in this market. It has been unsuccessful at every attempt to enter these markets. There will be a shift in the players in the market.

These systems will only work with a ubiquitous internet connection. They will become more effective as network speed and capacity increase, and users will become more willing to use them as their reliability improves.

In my opinion these changes will fundamentally alter the way we look at computers, the way we interact with them, and how we feel about using them. Today they are everywhere, but in the next few years I expect them to become even more prevalent as we are able to offload demanding applications from our phones onto powerful servers.

In my next blog I’ll discuss some overall societal changes.