Big Data is Coming to Get You

"Big data" is what high-tech companies call the practice of collecting massive amounts of data about their users. For Google, this includes the trips you've taken, the places you've driven, your email (if you use Gmail), your searches, your Google Now preferences, articles you've posted to Google+, your pictures, and the list goes on. The idea is to use algorithms to mine this data for useful tidbits about user habits so products and services can be recommended just as you need them. These data can tell companies a great deal about the user, including who their friends are.

However, what isn't clear is who owns the data. Companies assume they own it, and because you agreed to their terms of service (even though you didn't read them), that assumption holds. However, the recent re-categorization of fitness apps and trackers as medical devices has thrown a wrench in the works. Data associated with medical devices is typically assumed to be Personal Health Information, which is protected under HIPAA. That means companies can't really sell it, AND you get to control what happens with the data. It's also why doctors are able to share your information with other healthcare professionals.

I believe that this is just the first step toward making our data more portable. In Europe you can already request a transcript of all the data Facebook collects about you, though that does not mean you have control over what Facebook does with it. Obama is pushing to increase the privacy of personal information, but that will only work if companies feel they have a stake, or face a penalty, when they do not adequately protect data. Whenever a company like Apple or Google holds an effective monopoly over your data (through lock-in effects), its incentive to fully respect privacy is reduced, because switching to another monopoly is costly.

Privacy and Public Places

Privacy is a tricky thing. There's privacy in your home, expectations of privacy around mail, privacy related to digital devices, privacy in your car, and privacy in even more public places, and for each of these we have different understood or assumed levels of privacy. These may differ from person to person, but in certain places we generally assume we're pretty safe from being eavesdropped on. Furthermore, even though we often talk, or talk on our phones, in public, we expect to be relatively safe from being overheard, because most people simply don't care what we're saying.

In public there are some fairly clear rules about what is free for police to inspect and what is not. A police officer can listen to your conversations if they have the right equipment, and the police can photograph you whenever you're walking around in public. Another place that is mostly public is your car: anything clearly visible on the seats through the windows is considered in plain view. However, a police officer cannot search your trunk or glove box unless you give them permission, they have probable cause, or they have some sort of warrant.

Recently the police and FBI have been using a device called a "StingRay," which is effectively a man-in-the-middle attack between your cell phone and your cell provider. According to recent filings, the FBI believes a StingRay is something they should be able to use in public without requiring a warrant. They argue that since the person on the cell phone is speaking in public, that person has no expectation of privacy.

I think this raises a lot of concerns. First, even if the StingRay is deployed in a "public" place, there are definitely nearby places where you can expect privacy. If you live above a row of bars, for instance, the bulk of the people hit by the StingRay would likely be in a public place, but you would not be. Even areas that are mostly park still have spots that are private or even residential. For this argument to be even close to realistic, the FBI would have to be 100% certain that every person possibly affected is in a public place.

Personally, I don't think this argument will fly. In both technology and methodology this is very similar to GPS trackers on cars, or, even more closely, to GPS information from cell phones. Even when you are using a third-party application or technology, you still have an expectation of privacy, and I believe that should hold in this instance as well. You expect your communication between your phone and your cell provider to be secure, with no one listening in.

I seriously hope the FBI loses this one. The idea of a technology like this intercepting my cell phone calls on their way to the provider is terrifying, and if anyone other than the authorities used a similar technology, they would be charged with computer fraud and likely jailed for a very, very long time.

Uber might be crashing back to Earth

Last Friday Uber decided to start operating in Portland. I know, it's a little surprising that Uber or any of the other ridesharing apps weren't already in the city. Portland had told Uber it could not operate there, but Uber decided to thumb its nose at that, as it has done in other cities. Even though Uber was recently valued at $40 billion, it has had some serious issues lately, like the rape of a woman in Delhi while the company was operating there illegally. Furthermore, as I mentioned in my last article, its executives have talked about smearing women journalists with the data Uber collects.

Portland has decided to sue Uber over its illegal operation within the city. The city is following Nevada in suing the company rather than trying to fine its drivers; Uber has since ceased operations in Nevada due to an injunction against the company operating in that state. This appears to be the only route that works effectively, as Uber is still operating in Delhi despite a citywide ban on the service. Uber has also been banned in Spain, Thailand, and parts of the Netherlands. The biggest blow, however, may be that both San Francisco and Los Angeles are suing the company for false advertising related to its fees and background checks.

These responses should not come as much of a surprise to anyone who has been watching the company over the past few years. Uber is part of the Silicon Valley culture of moving fast and breaking things. The problem is that incumbents are incumbents for a reason, and they do have the ear of government. That's not to say they deserve to be incumbents, or that incumbency itself is worthy of respect, but you need to understand that the cards are stacked against you. If you want to go into a market and intentionally ruffle feathers, you must have strong safeguards in place to protect your customers, and be public about how you protect them. Uber should welcome background-check audits, privacy audits, and driver-safety audits whenever it enters a new market. These should all be huge features the company brags about, letting people look under the hood and actually see them.

I think it's time companies like Uber start treating our data as if it were Personal Health Information, which is protected by the Health Insurance Portability and Accountability Act (aka that HIPAA form you sign at the doctor's office). Under HIPAA the default is to not share personal information about a patient, and if someone is caught looking at the data without just cause, it typically results in a firing and a fine for the organization. Similar measures at Uber would show it is a good steward of our data. Since the government won't be collecting those fines, Uber should instead donate the funds to a good cause at a rate similar to a HIPAA violation.

In some respects Uber is exhibiting the effects of a company growing too large too fast without designing the processes needed to support its business activities. For Uber to be a successful long-term company, it needs to figure out how to appease city governments by over-protecting its users even while breaking existing rules. If the company can be trusted, governments will be more willing to accept it pushing boundaries.

Is AI going to kill us or bore us to death?

The interwebs are split over whether AI will evolve into brutal killing machines or simply remain another tool we use. This isn't a debate among average Joes like you and me; it's being had by some pretty big intellectuals. Elon Musk thinks that developing AI is like summoning demons, while techno-optimist Kevin Kelly thinks AI will only ever be a tool and nothing more. Finally, there's Erik Brynjolfsson, an MIT professor who believes AI will supplant humanity in many activities but that the best results will come from a hybrid approach (an argument Kevin Kelly also uses at length in his article).

Personally, I think much of Kevin Kelly's position is extremely naive. Believing that AI will ONLY ever be something boring, never something that can put us at risk, is frankly short-sighted. Consider that Samsung, yes, the company that makes your cell phone, developed a machine-gun sentry back in 2006 that could tell the difference between a man and a tree. In the intervening eight years, Samsung has likely continued to advance this capability; it's in South Korea's national interest, as these sentries are deployed at the demilitarized zone between North and South Korea. Furthermore, with drones it's only a matter of time before we deploy an AI that makes many of the decisions between bombing and not bombing a given target. Currently we use a heuristic, and there's no reason that couldn't be developed into a learning heuristic in software. This software wouldn't even have to be in the driver's seat at first: it could provide recommendations to the drone pilot and learn from when it is overridden and when it is not. In fact, the pilot wouldn't even have to see what the AI recommends; the AI could still learn from the pilot's choices.
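To make that learn-from-the-operator idea concrete, here is a toy sketch in Python of a purely advisory system that trains on a human's yes/no decisions, whether or not its recommendations are ever shown. Everything here is hypothetical and invented for illustration: the `AdvisorAI` class, the stand-in `operator_policy`, and the two-number "target features" are not any real system, just a minimal perceptron-style learner.

```python
import random

class AdvisorAI:
    """Toy advisor that learns to mimic a human operator's yes/no decisions."""

    def __init__(self, n_features, lr=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = lr

    def recommend(self, features):
        # Current recommendation: 1 = act, 0 = hold off.
        score = sum(w * x for w, x in zip(self.weights, features)) + self.bias
        return 1 if score > 0 else 0

    def observe(self, features, operator_decision):
        # Perceptron-style update toward the operator's actual choice,
        # regardless of whether the recommendation was shown to them.
        error = operator_decision - self.recommend(features)
        if error != 0:
            self.weights = [w + self.lr * error * x
                            for w, x in zip(self.weights, features)]
            self.bias += self.lr * error

def operator_policy(features):
    # Stand-in for the human: acts only when the first signal is strong.
    return 1 if features[0] > 0.5 else 0

random.seed(0)
advisor = AdvisorAI(n_features=2)
for _ in range(500):
    x = [random.random(), random.random()]
    advisor.observe(x, operator_policy(x))

# After watching 500 decisions, the advisor largely agrees with the human.
agreement = sum(
    advisor.recommend(x) == operator_policy(x)
    for x in ([random.random(), random.random()] for _ in range(200))
) / 200
print(f"agreement with operator: {agreement:.0%}")
```

The point of the sketch is the asymmetry it illustrates: `observe` never needs the pilot to know the advisor exists, yet the advisor steadily absorbs the pilot's policy.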

AI isn't going to be some isolated tool; it's going to be developed concurrently, in many different circumstances, by many organizations with many different goals. Sure, Google's goal might be better search, but Google also acquired Boston Dynamics, which has done some interesting work in robotics, and it is developing driverless cars, which will need an AI. What's to stop a driverless-car AI from being co-opted by a government and combined with a drone pilot's AI to drop bombs, or to "suicide" when it reaches a specific location? These AIs could be completely isolated from each other and still be totally devastating. What happens when they are combined? They could be, at some point, through a programmer's decision or through an intentional attack on Google's systems. These are the risks of fully autonomous units.

We don't fully understand how AI will evolve as it learns more. Machine learning is a bit of a Pandora's box. There will likely be many unintended consequences, as with almost any new technology. The ramifications, however, could be significantly worse, because an AI could have control over many different systems.

It's likely that both Kevin Kelly and Elon Musk are wrong. Even so, we should act as if Musk is right and Kelly is wrong. Not because I want Kelly to be wrong and Musk to be right, but because we don't understand complex systems very well; they very quickly get beyond our ability to understand what's going on. Think of the stock market. We don't really know how it will respond to a given company's quarterly earnings, or even to earnings across a sector. There have been flash crashes, and there will continue to be, since we do not have strong controls over high-frequency traders. If we extend this across a system that has the capability to kill, or to intentionally damage our economy, we simply couldn't rein it in before it caused catastrophic damage. Therefore, we must intentionally design in fail-safes and other control mechanisms to ensure these things do not happen.

We must assume the worst, and rather than merely hope for the best, we should develop a set of design rules for AI that all programmers must adhere to, to ensure we do not summon those demons.

What can Interstellar Teach us about the tragedy of the Commons? (spoilers)

This post contains some minor spoilers for the movie Interstellar. If you don't want to read any spoilers, stop reading now.

The tragedy of the commons describes how a common good can be destroyed when individuals maximize their own utility without proper communication and planning. What does that mean? Imagine a group of ranchers sharing a field. They have all agreed beforehand on a set number of cattle. One rancher decides to make some additional money by buying just ONE more head of cattle and letting it graze on the shared grass. No immediate harm comes of it, but since he increased his herd, everyone else does the same. Eventually the land cannot sustain all the extra cattle, and the next year cattle start to die of starvation, crashing the local economy.
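The rancher story can be sketched as a toy simulation. To be clear, the numbers and the "overgrazing degrades the land" rule below are illustrative assumptions I made up for the sketch, not a real ecological model.

```python
def simulate_commons(n_ranchers=5, start_per_rancher=2, capacity=30, years=12):
    """Each year every rancher adds 'just ONE more' head of cattle.

    Once the herd exceeds the pasture's capacity, overgrazing permanently
    degrades the land by the overshoot, and starvation culls the herd down
    to whatever the degraded land can still feed.
    """
    herd = n_ranchers * start_per_rancher
    history = []
    for _ in range(years):
        herd += n_ranchers                       # everyone adds one more head
        if herd > capacity:
            overshoot = herd - capacity
            capacity = max(0, capacity - overshoot)  # overgrazed land shrinks
            herd = capacity                      # starvation culls the herd
        history.append(herd)
    return history

print(simulate_commons())
# → [15, 20, 25, 30, 25, 20, 15, 10, 5, 0, 0, 0]
```

The herd grows right up to capacity, then each "one more head" eats into the land itself, and within a few more years the shared pasture supports nothing at all: individually rational choices, collective ruin.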

According to Stephen Gardiner, climate change represents a tragedy of the commons. Instead of ranchers, though, we have our great-grandparents' decisions impacting our climate today. Climate change effectively started during the Industrial Revolution, and our actions will impact future generations. Since those future generations have no voice in the conversation, it's hard for us to put off current needs in favor of future ones. This is further exacerbated by the fact that we struggle to improve conditions even for our own children, let alone for some faceless grandchild or great-grandchild down the road.

Interstellar offers a glimpse into why this is so difficult. First, there are clearly gaps in education: Interstellar exaggerates what a lot of school boards are currently doing, going to the extreme of teaching that the Apollo missions were faked as a propaganda tool to bankrupt the Soviet Union. Second, Matthew McConaughey's character is one of the few forward-thinking individuals; he knows we are continually leaving worse and worse conditions for our children, and as a farmer he can see how poorly we're fighting the blight that is killing our crops. Third, the time dilation he experiences near the black hole allows him, while still young, to see the full effects of his generation's decisions on his children. He is utterly impotent to do anything about it, but he knows the choices they made have fully doomed his children. Finally, and I think most impactful, is the scene where Murph is dying. He sees his grandchildren and great-grandchildren and doesn't even acknowledge them. He did everything he could for Murph but has no interest in seeing how all of this affected his child's children. And Murph doesn't seem to want him to bridge that divide: rather than have him build a relationship with the world as it is, she pushes him to reunite with a crewmate who came from the same "world" as he did.

All of this indicates that we have a serious tragedy-of-the-commons problem. Education is required to have any hope of combating the tragedy of the commons for climate change. We must figure out a way to see past the here and now and create a seriously forward-looking plan. And we cannot simply rely on a few forward-thinking people, because even they are limited in how far they can look into the future.

This is a serious concern, because we now have a leader of the environmental committee in the US Congress who doesn't accept the evidence presented by scientists. Furthermore, lawmakers seem to think that not being scientists excuses them from understanding what scientists are saying about climate change.

We cannot expect some “they” to come and allow us to rescue ourselves with “their” help. We have to figure this out on our own. We’re failing miserably right now.

Another book that does a good job outlining these intergenerational problems is The Forever War.