CISPA and the problem with volunteering data

So, CISPA, the Cyber Intelligence Sharing and Protection Act, is the newest cyber bill on the block. There is an important difference between it and the earlier bills, though. SOPA and PIPA were mandatory: the government could simply act. Under CISPA, companies can voluntarily filter material, possibly based on threat information the government provides. That is a worrying arrangement, and internet companies seem to like this law. Facebook and Microsoft are outright supporting it, and it is unclear to the public whether Google is as well.

Under this law, the government and internet companies can voluntarily share information about cyber threats and suspicious activity online. The problem with voluntary sharing programs, however, is that they can turn into “voluntary” programs. What do I mean? If the government is not required to give threat information to all parties that could be affected by some sort of attack, it could decide to give information only to the companies that share information back. Additionally, the government could punish companies, like Twitter, that fight it over privacy issues by withholding information from them.

These are fairly obvious problems with this type of law. It assumes that each event is independent and that previous actions have no consequences, which is a faulty premise. If you view this as an iterated prisoner’s dilemma, it becomes obvious that with repeated interactions the best strategy for a company is almost always to share. This will likely lead to sharing even in cases where there is doubt over whether the company should share or not. Companies will err on the side of security over privacy, because the future benefits outweigh any punishment their users can enact on them.
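The repeated-game intuition can be sketched with a toy simulation. All of the payoff numbers and the government's strategy here are hypothetical, invented purely to illustrate why repetition pushes companies toward sharing:

```python
# Toy model of the repeated "sharing game" described above.
# Payoffs are made up: intel from the government is worth 3 to the
# company; withholding saves it 1 in user-trust/privacy costs.

def government_move(company_history):
    """Tit-for-tat: the government shares threat intel only if the
    company shared in the previous round (and shares in round one)."""
    if not company_history or company_history[-1] == "share":
        return "share"
    return "withhold"

def company_payoff(company_move, gov_move):
    payoff = 3 if gov_move == "share" else 0
    if company_move == "withhold":
        payoff += 1
    return payoff

def run(company_move, rounds=10):
    """Total payoff for a company that plays the same move every round."""
    history, total = [], 0
    for _ in range(rounds):
        gov = government_move(history)
        total += company_payoff(company_move, gov)
        history.append(company_move)
    return total

print(run("share"), run("withhold"))  # 30 13 -- always sharing wins
```

Even though withholding earns a small bonus each round, the company that withholds gets cut off from intel after round one, so over any reasonably long horizon sharing dominates.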

This type of pseudo quid pro quo affects the US government in other ways as well, including lobbying. It is likely that this information exchange will be invoked by companies whenever future laws are being negotiated. They will be able to say, “you need to respect our rights to X, look how friendly we’ve been with the government,” and then show a list of times they voluntarily handed over data. This was a tactic Ma Bell used to keep its monopoly as long as it did. Because the company was providing the government with extra public goods (military research), the government was willing to overlook the fact that the company was a monopoly that perhaps should be broken up.

CISPA is a dangerous law, and we need to weigh it carefully before accepting it. We need to pressure internet companies to step away from it. If it passes, we also need a better understanding of when companies hand over data willingly and for what reasons. And we should be notified any time a company hands our data over to the government, for any reason.

Are we talking past each other with the net neutrality debate?

I started reading (yes, another book) “Internet Architecture and Innovation” on my flight to Portland Tuesday night. It’s going to be a really interesting read, if you like the internet, economics and innovation, of course. One of the first parts discusses the history of the internet and a design principle called end-to-end. This principle governs what must happen when something is transmitted. There are two interpretations of the same principle, though, which complicates things. In one version only the endpoints can “talk” to each other and interpret the information. This isn’t exactly literal: if I’m on Skype, the data doesn’t travel only between my PC and yours, it passes through many machines, but the idea is that only your PC and mine know we are Skyping. In the second interpretation, some intermediaries might also know that we are Skyping, through something called deep packet inspection, where a router is able to read the information it processes. Both versions are still called end-to-end, which is obviously a problem.

Here’s another easy example: one version would require equal upload and download speeds, the other doesn’t. Say you have a picture and want to upload it. Under the first version, uploading it would take the same time as downloading it back to your PC the next day. We know this doesn’t happen.
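To put rough numbers on it (the speeds and file size here are made up but typical of an asymmetric home connection), the same file moves several times slower in one direction than the other:

```python
# Hypothetical asymmetric home connection: 25 Mbit/s down, 5 Mbit/s up.
photo_bits = 50 * 8 * 10**6             # a 50 MB photo, in bits

download_s = photo_bits / (25 * 10**6)  # time to download, seconds
upload_s = photo_bits / (5 * 10**6)     # time to upload, seconds

print(download_s, upload_s)  # 16.0 80.0 -- uploading takes 5x longer
```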

Until reading this book I really thought that the internet was truly designed in an equal and neutral manner. However, this isn’t the case. The two design principles result in internets that look very different, and we would expect the internet to evolve differently depending on which understanding was applied.

It’s obvious that the first option is better for consumers: the network behind the internet stays a neutral, “dumb” pipe. Why is that better? Because no one would be able to intercept your data, throttle the speed at which you get your information, or cap your downloads. This is bad for network owners, because they can’t easily charge for or filter specific content. They simply become a pipe that information flows through.

The differences in the incentives and contexts in which the design rules are applied drive this discussion. Since the participants believe they are talking about the same thing, there is confusion over the disconnect. This leads to another obvious problem: our clueless elected officials. They don’t understand how the internet works at the simplest level, let alone the subtleties of the minute differences in this argument. It is no wonder they have tried to make backdoor deals to get this topic to go away.

This also explains the confusion within the internet community over how the telecoms can say that the internet wasn’t developed as a neutral platform. In a way they are correct; in other ways they are wrong. It is just a matter of what was being discriminated against. Before it was upload versus download speeds; now it could be content. To them that is no different. For us, it matters a whole lot more.

Book Review: The Idea Factory, a history of Bell Labs

Yeah, I know I’ve just been doing book reviews.

This book was amazing. I had no idea of all the different things Bell Labs produced from the mid-1920s until the 1970s and later. The book focuses on the high point of Bell Labs’ innovation run, following the careers of several scientists who were, at the time, famous and prominent: people such as Mervin Kelly (who vastly improved the vacuum tube and was a long-running director, VP and president of the Labs), William Shockley (co-inventor of the transistor), Walter Brattain (co-inventor of a different kind of transistor), Claude Shannon (founder of information theory) and John Pierce (a pioneer of passive and active communication satellites). There were many others besides these, but each of them had a significant impact on how our modern society works.

The book does an excellent job of explaining some of the basics of how the research was conducted, what work needed to be done to make an idea work at an experimental level, the method of turning an invention into an innovation or a full product, and the goal of each of these inventions. Mervin Kelly was famous for saying that for a new technology to be implemented in AT&T’s network it must be “better or cheaper or both.” This prevented a great many frivolous technologies from being put into the telephone network. Additionally, it was required to ensure that AT&T was always able to work toward reducing rates for subscribers, as it was a “natural” monopoly.

This was a time when research was done to ensure that the network would be operational for 30 years without malfunction. This required huge investments in quality control and meant that additional costs were built into the network for redundancy and protection. In fact, Statistical Process Control was invented at Bell Labs to ensure proper quality.
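As a sketch of what Statistical Process Control does in practice: the core SPC tool, the Shewhart control chart, flags any measurement that falls outside three standard deviations of the process mean. The sample data below is invented purely for illustration:

```python
from statistics import mean, pstdev

# Invented measurements of some manufactured part (arbitrary units).
samples = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.0]

center = mean(samples)      # the chart's center line
sigma = pstdev(samples)
ucl = center + 3 * sigma    # upper control limit
lcl = center - 3 * sigma    # lower control limit

# Any point outside the 3-sigma band signals the process is "out of control"
# and worth investigating before it produces defects.
out_of_control = [x for x in samples if x < lcl or x > ucl]
print(center, out_of_control)
```

Here every point sits inside the band, so the process would be judged in control; a drifting machine would eventually push points past `ucl` or `lcl` long before products started failing in the field.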

How did all of this work? Well, two factors were at play. First, Bell Labs was able to hire the best and brightest to work on interesting problems. Second, the scientists had a continually evolving project that always needed more innovation. These two, combined with the freedom to explore, allowed the scientists to delve into both basic and applied research. In some cases they did not know how or why something would work, but felt that it would improve the quality of the telephone network.

One of AT&T’s goals was to create a coast-to-coast network with universal service. This required the company to figure out how to address signal decay over long distances. To do so, it developed the vacuum tube repeater, which significantly increased the distance a voice call could travel. Manufacturing a tube, however, was extremely difficult and expensive. Bell Labs felt there had to be a different way to build a repeater. Over the next 20 years, they investigated off and on (with a break for WWII) how to make semiconductors work as a repeater. Bell Labs was capable of making this sort of investment because it had a guaranteed revenue stream and a mandate to continually improve the network. Together, these allowed the Labs to do work they otherwise would not have been able to pursue.

This is a very different model of innovation than we currently have in any organization. Universities come close, but they fall short in that professors are continually required to apply for more money and seek permission from someone to pursue their work. Bell Labs was much more relaxed about this.

This innovation method is also very different from some of the historic efforts in the US, such as the Manhattan Project or the Moon landing. Those were single goals, which allowed the focus of a great group of minds. There was never any intention of keeping those minds together for the next big project. Bell Labs had the ability to do this.

There are some organizations that should be able to do something like this. The National Labs are one, but there is no direct business need, so even that doesn’t exactly work. An organization like TNO in the Netherlands, which focuses on more practical matters, could increase the amount of basic research it conducts in various areas. TNO is structured differently than the US National Labs, because it is expected to work closely with both industry and universities. This gives each of those groups a strong business focus and could serve as a pipeline from basic research into business activities for the companies that work with TNO. At this point, however, TNO does not perform these activities.

I give this book a 4.5/5. It was extremely well written, well organized and dealt with some amazing subject matter.

Book review: Consent of the Networked by Rebecca MacKinnon

I just finished Consent of the Networked today. The title, of course, is a play on the idea of the consent of the governed: governments are only able to govern with the express permission of the people they govern. We have seen recently with the Arab Spring that it is possible to reject a government and show that the governed do not consent.

The book starts with a discussion of how the internet is different from traditional governments. As most people are aware, the internet is international, operated by many different actors including individuals, governments and companies, and has some rules and norms of its own that differ from the physical world’s.

Because of the internet’s diverse set of stakeholders, the way we (the average person) experience it differs based on the country you live in, the network you are using and the relationship between your government and businesses from other countries. Then toss in advocates who use the internet to promote democracy (or who are pro-government) and human rights experts, and we have a very messy situation that will likely lead to more and more conflict.

Some of these conflicts are unsurprising, such as countries like China, Iran and pre-revolution Egypt and Tunisia wanting greater and greater control of their internet and networks. The US State Department doesn’t want that, which puts the countries in great disagreement over the future of the internet. However, this is not the only source of conflict. There is conflict within the US itself.

The State Department is pushing for more circumvention tools and techniques to make it possible to get around firewalls. Tor, which I’ve talked about in the past, is one of these. However, the US legislature is pushing for more control, better access to what data is flowing, and ways to block it. These laws, SOPA, PIPA and now CISPA, all attempt to control the internet in the name of IP or cybersecurity. Yet they are methods that allow censorship and control of the internet. The US is not the only country pursuing such laws: the UK has, and the EU parliament is still considering ACTA.

MacKinnon also points out that these actions help to validate countries like China. In some cases the support comes from artists like Bono, or from the RIAA, when they say they want the same abilities as China for blocking access to content. However, such laws can only do what companies are capable of providing to governments, consumers and other agencies.

Copyright laws would be useless if companies had not created ways to inspect data and then stop its transfer. Some of this comes in the form of filters and blockers for parents, which can be applied at the national level. Cisco and other major Western companies sell the equipment that countries like China use for their firewalls and censorship.

These are not the only ways businesses are complicit with repressive regimes (in many cases the equipment is essentially off the shelf). MacKinnon also describes cases in which Yahoo and other companies handed personal information over to these regimes. In some cases this has led to the death of the person whose information was requested. Of course, this isn’t just China: the same companies hand data over in the US and other democracies.

At this point, human rights groups and other rights groups have become more active around the world on internet matters. A large portion of the book deals with these problems through a human rights perspective. I believe this is a good way to look at them: it levels the field across socio-economic levels, and it begins with the assumption that protection of data should be universal. It frames her argument that netizens should engage and be active in addressing these issues.

She argues that we can’t expect the next CEO of Facebook to be as benevolent as Zuckerberg has, sort of, been. Netizens need to pressure companies and governments for better clarity on what our data is being used for, how long it is stored and why it is collected. This is important because we “consent” by clicking I Accept without reading, and with no control over changes in the contract. Anger at changes Facebook has made has led to reversals, so as a group we have the ability to effect change at companies. We have also seen what collective action can do to governments in light of the SOPA and ACTA discussions.

These matters are important because they affect all of us. This book does an excellent job of explaining what is at stake. It provides a perspective from the developing world and from people living under dictatorships. It highlights the fine line we are currently treading, and how countries like the US and UK could easily slip from democracy into digital dictatorships where the views of a select few are paid a great deal of attention and the rest are ignored and censored.

Overall, I give this book 4/5. At times it was somewhat repetitive, but that was to ensure the point was made. This book should be read by any cyber activist, development scholar or student of dictatorships.

Can technology save us? A wrap up

In my last three posts I’ve asked whether technology can save us from many of our own problems. I’ve discussed several technologies for each topic: water, energy and food. These are not all of the technologies out there, by any stretch of the imagination; they are simply the ones I’m aware of at this point, and I wouldn’t say I’ve done an exhaustive search either. I hope I have made it obvious that technology alone cannot save us. We need to make a concerted effort to change the status quo, and that won’t be easy to do.

We have some major problems adopting new technologies. First, we have incumbent interests that have no desire to see the current energy regime change. We also have problems of ownership of technical problems: why should the US invent new ways to extract water when Mexico is the country that will suffer? How do we know that a given technology is going to be the best, or even good enough, for our needs? What happens if all our best efforts actually make things worse?

These aren’t easy questions to answer. We have to choose, as a society, what constitutes a good investment in research. In one Urban Times article I posit that the EU could overtake the US in scientific research in the coming decades. This should terrify people: scientific research is what has driven the US economy since the 40’s, and to some extent earlier. The shifts in capitalism have pushed company goals toward shorter and shorter returns on investment and less visionary aims. The ability to experiment within companies, and to use government funds to experiment with deploying new energy systems, has floundered.

This should be cause for concern. We’ve seen the results of poorly managed technology in the past few years: a software glitch that caused Toyotas to accelerate out of control, flash crashes on the stock markets from high-frequency traders, and failures in complex systems like Fukushima. We don’t always design proper controls into our technologies to protect us from them.

Personally, I’m optimistic about the future of technology and what it can do for us. However, there are plenty of sci-fi authors out there who are very pessimistic. I love reading dystopian-future and post-apocalyptic books as much as (or more than) anyone, but we need to realize that without proper controls on our technology and the production of our material goods, those outcomes could happen.

Technology alone cannot save us from ourselves. We may be able to use technology as a tool to fix problems we’ve created, but we have to do the dirty work. Technology doesn’t design and make itself (yet).