New FCC Rules and competition

A friend retweeted the Tweet below today, and it got me thinking about the broader context of the FCC rules that passed last Thursday.

Two things struck me about this tweet. First, it’s disappointing that the author doesn’t understand Title II better, considering he co-founded the EFF. Second, Title II as implemented was never designed to do anything about ISP competition. As I wrote on KBMOD this week, Net Neutrality has no provision for “unbundling,” which would promote competition among ISPs at the local level. Unbundling, according to Wikipedia, is a regulation that requires existing line owners (such as Comcast) to open up their lines to anyone that wants to sell cable, internet, or telephony access. Unbundling, under a much more restrictive Title II, is the only reason that AOL was successful as a business model. Since this provision of Title II was forborne, Title II will not, in fact, promote competition between ISPs at all.

Instead, the FCC, at least in my opinion, looked at the Internet as a general purpose platform technology. They were looking to ensure competition ON the technology, not between technology carriers. For example, the FCC wants to see as much competition as possible between companies like Netflix, Amazon Prime Video, Hulu, and Comcast’s Xfinity service. However, they want to make sure that Comcast cannot foreclose on the video delivery market by leveraging its existing monopoly in telecommunications. Foreclosure here means that Comcast could create rules or an environment in which Netflix cannot compete, so that Comcast customers MUST use the Xfinity service because the alternatives don’t function well (foreclosure is the thing that got Microsoft in trouble with web browsers).

The FCC did enact a rule that will impact competition at the local level, though. It’s a limited rule because it affects only Tennessee and North Carolina: it preempts state law by declaring that it is legal for municipalities in those states to build their own broadband networks. Broadband build-out is prohibitively expensive for an entrepreneur setting up a network alone, but with the backing of a municipality willing to share the risk and the reward, it might be possible to build out a broadband network on a limited scale. Municipalities aren’t the ideal solution; it would be significantly preferable for other businesses to move into an area and build new broadband networks, but unless they have a massive amount of money, like Google, that’s unlikely to happen. A bridge between the two is a public-private partnership, where private enterprise, which has the telecommunications expertise, partners with a municipality, which has the demand and financial support, to build a network.

With the ruling on municipal broadband being so limited, it’s not going to make much of an initial impact. However, it’s likely that other municipalities will try to jump on the bandwagon and overrule laws at the state level. (As a note, I’m not going to argue whether this is something the FCC has the authority to do; I’m just looking at the potential impact of the rule.)

Privacy and Public Places

Privacy is a tricky thing. There’s privacy in your home, expectations of privacy around mail, privacy related to digital devices, privacy in your car, and privacy in even more public places, and for each of them we have a different understood or assumed level of privacy. These may differ from person to person, but generally we assume that in certain places we’re pretty safe from being eavesdropped on. Furthermore, even though we often talk in person or on our phones in public, we expect those conversations to be relatively safe from being overheard, because most people simply don’t care about what we’re saying.

In public there are some clear rules about what is free for the police to inspect and what is not. For example, a police officer can listen to your conversations in public if they have the right equipment, and the police can photograph you whenever you’re walking around in public. Another place that is mostly public is actually your car: anything clearly visible on the seats through the windows is considered to be in plain view. However, if something is in your trunk or glove box, a police officer cannot search it unless you give them permission, they have probable cause, or they have some sort of warrant.

Recently the police and FBI have been using something called a “stingray,” which effectively performs a man-in-the-middle attack between your cell phone and your cell phone provider. The FBI believes, according to recent filings, that a stingray is something it should be able to use in public without a warrant. The argument is that since the person on the cell phone is speaking in public, they should have no expectation of privacy.
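To make the man-in-the-middle idea concrete, here is a toy sketch (invented class names, nothing like the real cellular protocol): phones attach to whichever “tower” advertises the strongest signal and do not authenticate it, so a rogue tower that quietly relays traffic can log everything passing through.

```python
# Toy model of an IMSI-catcher-style man-in-the-middle.
# The phone picks the strongest tower; a rogue tower that
# relays to the real carrier can intercept without detection.

class Tower:
    def __init__(self, name, signal):
        self.name = name
        self.signal = signal

    def handle(self, message):
        return f"{self.name} delivered: {message}"

class RogueTower(Tower):
    def __init__(self, upstream):
        # Advertise a stronger signal than the legitimate tower.
        super().__init__("rogue", upstream.signal + 10)
        self.upstream = upstream
        self.log = []

    def handle(self, message):
        self.log.append(message)              # intercept a copy
        return self.upstream.handle(message)  # relay so the call still works

def attach(towers):
    # Phones simply take the best signal -- towers are not authenticated.
    return max(towers, key=lambda t: t.signal)

carrier = Tower("carrier", signal=50)
stingray = RogueTower(carrier)

tower = attach([carrier, stingray])
reply = tower.handle("hello")
print(tower.name)    # the phone unknowingly attached to the rogue tower
print(stingray.log)  # the intercepted plaintext
```

The key point the sketch shows is that the call still goes through normally, which is why the person on the phone has no way to notice the interception.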

I think this raises a lot of concerns. First, even if the stingray is deployed in a “public” place, there are definitely nearby places where you can expect privacy. For instance, if you live above a row of bars, the bulk of the people hit by the stingray would likely be in a public place, but you would not be. Even areas that are mostly park still contain spots that are private or might even be residential. For this argument to be even close to realistic, the FBI would have to be 100% certain that every person possibly impacted is in a public place.

Personally, I don’t think this argument will fly. In technology and methodology it is very similar to GPS trackers on cars, or more closely, to the GPS information from cell phones. Even when you are using a third-party application or technology, you still have an expectation of privacy, and I believe that should hold in this instance as well. You expect your communication to be secure between your phone and the cell phone provider, without anyone listening in.

I seriously hope the FBI loses this one. I find the idea of a technology that intercepts my cell phone calls on their way to the provider terrifying, and if a similar technology were used by anyone other than the authorities, they would be charged with computer fraud and likely put in jail for a very, very long time.

Uber might be crashing back to Earth

Last Friday Uber decided to start operating in Portland. I know, it’s a little surprising that Uber or any of the other rideshare taxi apps weren’t already in the city. Portland had told Uber it could not operate there, but Uber decided to thumb its nose at that, similarly to what it has done in other cities. Even though Uber was recently valued at $40 billion, it has had some serious issues lately, like the rape of a woman in Delhi while the company was operating there illegally. Furthermore, as I mentioned in my last article, the company has used the data it collects to smear women journalists.

Portland has decided to sue Uber over its illegal operation within the city. The city is following Nevada in suing the company rather than trying to fine its drivers; Uber has since ceased operations in Nevada due to an injunction against operating in the state. This appears to be the only route that works effectively, as Uber is still operating in Delhi despite the citywide ban on the service. Uber has also been banned in Spain, Thailand, and parts of the Netherlands. The biggest blow, however, is that both San Francisco and Los Angeles are suing the company for false advertising related to its fees and background checks.

These responses should not come as much of a surprise to anyone who has watched the company over the past few years. Uber is part of the Silicon Valley culture of moving fast and breaking things. The problem is that incumbents are incumbents for a reason, and they have the ear of government. That’s not to say they deserve to be incumbents, or that incumbency makes them worthy of respect, but you need to understand that the cards are stacked against you. In cases where you want to go in and intentionally ruffle feathers, you must have strong safeguards in place to protect your customers, and be public about how you protect them. Uber should welcome background check audits, privacy audits, and driver safety audits whenever it enters a new market. These should all be huge features the company brags about, letting people look under the hood and actually see.

I think it’s time companies like Uber start treating our data as if it were Personal Health Information, which is protected by the Health Insurance Portability and Accountability Act (aka that HIPAA agreement you sign at the doctor’s office). Under HIPAA the default is to not share personal information about a patient, and if someone is caught looking at the data without just cause, it typically results in a firing and a fine for the organization. Similar action must be taken at Uber to show they are a good steward of our data. Since Uber isn’t covered by HIPAA, the government won’t be taking that money, but the company could instead donate funds to a good cause at a rate similar to a HIPAA violation.
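The HIPAA-style posture described above boils down to two mechanisms: access is denied by default, and every access attempt is recorded for later audit. A minimal sketch of that posture (all names here, `DataStore`, `AccessDenied`, the record IDs, are invented for illustration):

```python
# Sketch of "default deny plus audit trail" access control for rider data.

class AccessDenied(Exception):
    pass

class DataStore:
    def __init__(self):
        self._records = {}
        self._grants = set()   # (employee, record_id) pairs with just cause
        self.audit_log = []    # every access attempt, allowed or not

    def put(self, record_id, data):
        self._records[record_id] = data

    def grant(self, employee, record_id, reason):
        self._grants.add((employee, record_id))
        self.audit_log.append(("grant", employee, record_id, reason))

    def read(self, employee, record_id):
        allowed = (employee, record_id) in self._grants
        self.audit_log.append(("read", employee, record_id, allowed))
        if not allowed:        # the default is no access
            raise AccessDenied(f"{employee} has no just cause for {record_id}")
        return self._records[record_id]

store = DataStore()
store.put("trip-42", {"rider": "alice", "route": "downtown"})
store.grant("support-agent", "trip-42", "rider-initiated dispute")

print(store.read("support-agent", "trip-42")["rider"])  # allowed, and logged
try:
    store.read("curious-exec", "trip-42")               # denied, and logged
except AccessDenied:
    pass
```

The audit log is the part that enables the “caught looking without just cause” consequence: the denied read is recorded even though no data was returned.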

In some respects Uber is exhibiting the effects of a company growing too large too fast, without designing the processes to support its business activities properly. For Uber to be a successful long-term company, it needs to figure out how to appease city governments, by over-protecting its users, even while it breaks existing rules. If the company can be trusted, governments will be more willing to accept it pushing boundaries.

Is AI going to kill us or bore us to death?

The interwebs are split over the question of whether AI will evolve into brutal killing machines or simply remain just another tool we use. This isn’t a debate among average Joes like you and me; it’s being had by some pretty big intellectuals. Elon Musk thinks that dealing with AI is like summoning demons, while techno-optimist Kevin Kelly thinks that AI is only ever going to be a tool and never anything more. Finally, you have Erik Brynjolfsson, an MIT professor, who believes that AI will supplant humanity in many activities but that the best results will come from a hybrid approach (Kevin Kelly also uses this argument at length in his article).

Personally, I think a lot of Kevin Kelly’s position is extremely naive. Believing that AI will ONLY ever be something boring, never something that can put us at risk, is frankly short-sighted. Consider that Samsung, yes, the company that makes your cell phone, developed a machine-gun sentry that could tell the difference between a man and a tree back in 2006. In the intervening eight years, it’s likely that Samsung has continued to advance this capability; it’s in the national interest, as these sentries were deployed at the demilitarized zone between North and South Korea. Furthermore, with drones it’s only a matter of time before we deploy an AI that makes many of the decisions between bombing and not bombing a given target. Currently we use a heuristic, and there’s no reason it couldn’t be developed into a learning heuristic in software. This software wouldn’t even have to be in the driver’s seat at first. It could provide recommendations to the drone pilot and learn from the pilot’s choices, noting when it is overridden and when it is not. In fact, the pilot wouldn’t even have to know what the AI was recommending, and the AI could still learn from the pilot’s choices.
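The learn-from-the-pilot loop described above is just online supervised learning. A minimal sketch, with invented feature names and a toy perceptron-style update rule: the model scores a target, the human decides, and the model nudges its weights toward the human’s decision, whether or not its recommendation was ever shown.

```python
# Toy recommender that learns from a pilot's strike/hold decisions.

def recommend(weights, features):
    # Linear score; True means "recommend strike", False means "hold".
    score = sum(weights[k] * features[k] for k in weights)
    return score > 0

def learn_from_pilot(weights, features, pilot_strike, lr=0.1):
    # Perceptron-style update: only adjust when the model disagrees
    # with the pilot, nudging the score toward the pilot's choice.
    if recommend(weights, features) != pilot_strike:
        sign = 1 if pilot_strike else -1
        for k in weights:
            weights[k] += sign * lr * features[k]
    return weights

weights = {"armed": 0.0, "near_civilians": 0.0}

# The pilot consistently holds fire near civilians; the model
# absorbs that pattern from the decisions alone.
for _ in range(20):
    learn_from_pilot(weights, {"armed": 1.0, "near_civilians": 1.0}, pilot_strike=False)
    learn_from_pilot(weights, {"armed": 1.0, "near_civilians": 0.0}, pilot_strike=True)

print(recommend(weights, {"armed": 1.0, "near_civilians": 1.0}))  # False
print(recommend(weights, {"armed": 1.0, "near_civilians": 0.0}))  # True
```

Note that nothing in the loop requires the pilot to see the recommendation; the model trains on the pilot’s decisions either way, which is exactly the silent-apprentice scenario in the paragraph above.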

AI isn’t going to be some isolated tool; it’s going to be developed in many different circumstances concurrently, by many organizations, with many different goals. Sure, Google’s goal might be better search, but Google also acquired Boston Dynamics, which has done some interesting work in robotics, and it is developing driverless cars, which will need an AI. Who’s to say the driverless AI couldn’t be co-opted by a government and combined with the drone-pilot AI, to deliver bombs or to “suicide” whenever it reaches a specific location? These AIs could be completely isolated from each other and still have the capability to be devastating. What happens when they are combined? That could happen at some point, through a programmer’s decision or through an intentional attack on Google’s systems. These are the risks of fully autonomous units.

We don’t fully understand how AI will evolve as it learns more. Machine learning is a bit of a Pandora’s box. There will likely be many unintended consequences, as with almost any new technology, but the ramifications could be significantly worse, because an AI could have control over many different systems.

It’s likely that both Kevin Kelly and Elon Musk are wrong. Even so, we should act as if Musk is right and Kelly is wrong, not because I want Kelly to be wrong and Musk to be right, but because we don’t understand complex systems very well; they very quickly get beyond our ability to understand what’s going on. Think of the stock market. We don’t really know how it will respond to a given quarterly earnings report from a company, or even across a sector. There have been flash crashes, and there will continue to be, as we do not have a strong set of controls over high-frequency traders. If this is extended across a system that has the capability to kill or intentionally damage our economy, we simply couldn’t manage it before it caused catastrophic damage. Therefore, we must intentionally design in fail safes and other control mechanisms to ensure these things do not happen.
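The fail safes called for above can borrow directly from the stock market’s own fix for flash crashes: a circuit breaker. A minimal sketch (the class, limits, and order format are all invented for illustration): hard, human-written limits sit outside the learned system, and any violation halts everything until a human resets it.

```python
# Sketch of a circuit-breaker fail safe wrapped around an autonomous system.

class CircuitBreaker:
    def __init__(self, max_actions_per_tick, hard_limits):
        self.max_actions = max_actions_per_tick
        self.hard_limits = hard_limits  # fixed predicates, never learned
        self.tripped = False

    def allow(self, actions):
        if self.tripped:
            return []                   # halted until a human resets it
        if len(actions) > self.max_actions:
            self.tripped = True         # runaway rate, like a flash crash
            return []
        safe = [a for a in actions if all(ok(a) for ok in self.hard_limits)]
        if len(safe) != len(actions):
            self.tripped = True         # any forbidden action halts everything
            return []
        return safe

breaker = CircuitBreaker(
    max_actions_per_tick=100,
    hard_limits=[lambda order: abs(order["size"]) <= 1000],
)
print(breaker.allow([{"size": 10}]))     # normal action passes through
print(breaker.allow([{"size": 50000}]))  # oversized action trips the breaker
print(breaker.tripped)                   # True: system stays halted
```

The design choice that matters is that the limits are ordinary fixed code, outside the learning loop, so the autonomous system cannot adjust them no matter what it learns.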

We must assume the worst, but rather than hope for the best, we should develop a set of design rules for AI that all programmers must adhere to, to ensure we do not summon those demons.

FBI doubling down on encryption horrors

Last week I wrote about how the Washington Post was being irresponsible by arguing that phone encryption poses a greater risk than benefit for citizens, because the BAD GUYS and evil people would take advantage of it. Only a few days ago the director of the FBI doubled down on these statements, saying that phone encryption “will take us to a very dark place.” Yet in the fear-mongering examples he provided, cell phone data provided no help to investigators, nor would encryption have been any sort of hindrance to the investigations.

Phone encryption will more likely force governments and the police to actually get warrants to search phones. As with passwords, courts can order a suspect to hand over encryption keys; in cases where the police don’t have enough evidence for a court order, they are expected to crack the encryption on their own, with their own computer experts. This will likely lead to something of an arms race between police and encryption writers, but that has already been happening for years.

I think this is about something bigger than phones, though. Once average computer users have been educated about encryption on their phones and lose their fear of it, they will likely look into encrypting their computers, or expect them to come encrypted. Since phones are such easy targets for theft and hacking, it makes sense for encryption to start there. But with the massive data breaches at companies lately, it’s likely that Microsoft will begin encrypting its operating system, and eventually consumers will expect it on their personal computers. Laptops and tablets are extremely easy to steal. Encryption makes the theft far less valuable: the thief has to completely wipe the computer and cannot extract any data that might be used for identity theft.
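The reason a stolen encrypted laptop is worthless is worth seeing concretely. Below is a toy illustration only, a SHA-256 counter keystream, NOT real disk crypto (real full-disk encryption uses vetted ciphers such as AES-XTS): without the key, the bytes a thief reads off the drive are noise.

```python
# Toy stream cipher for illustration -- do not use for real data.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudorandom byte stream from the key and a counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"SSN: 123-45-6789"
on_disk = xor_crypt(b"user passphrase", secret)

print(secret in on_disk)                             # False: plaintext never hits the disk
print(xor_crypt(b"user passphrase", on_disk))        # the right key recovers the data
print(xor_crypt(b"thief guess", on_disk) == secret)  # False: a wrong key yields garbage
```

A thief who pulls the drive only ever sees `on_disk`; wiping the machine and reselling it is the best they can do, which is exactly the deterrent described above.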

The final effect might be that users get true end-to-end encryption, which will make it much more difficult for the FBI, CIA, and NSA to spy on ordinary Americans. The end result of phone encryption might actually be that Americans have dramatically improved privacy overall, from other Americans, from businesses, and from governments (not just the American government).

This is why the FBI is terrified.