On Being the Product

Today I read and reposted a few articles about users being the final product for several companies: Facebook, Twitter, Google (in various forms, including Plus), Yelp, and the list goes on. Personally, I think the claim that we are only the product is a bit of a simplification. There is no doubt that we are the product; however, it’s also a matter of to whom we are the product. For instance, my blog, which I post to Facebook, Twitter, and Google Plus, allows others to be consumers of my content: the people who are my friends, my followers, or in my circles. We are not merely products for companies; we are products for other people as well.

We consume what our friends put out there, and we have habits and preferences for how we’d like to consume that information. However, we’re running into a bidirectional problem: we’re losing control over what information we share, and we’re losing control over how we consume it. In Tom Anderson’s (of MySpace fame) post about the changes to Facebook, he mentions something called seamless sharing, where you have to do nothing and it’s instantly shared. This, to me, raises all sorts of privacy concerns. In this TED talk, the speaker addresses the problem of the filtering algorithms in Google and Facebook.

I think it’s very clear that Facebook realizes we’re also consumers of the information. Without our work as the product, posting links, pictures, and statuses, there would be no Facebook; but without us as consumers, reading posts and clicking related links, there would be no Facebook either. The product we are to everyone who isn’t a fellow consumer comes down to our network, what the people in that network are interested in, and whatever information is automatically shared with Facebook through our web browsers.

We need to be aware that this trend is going to continue. We as users and consumers need to fight for control over our data and for the right to decide what we share and when we share it. This gets back to the points in my earlier blog posts about pseudonyms and truly being anonymous on the web. If you are interested in knowing at least some of the information you’ve shared on Facebook over the years, in some countries you are able to download a copy of your Facebook history. I haven’t done so yet, but I plan to. If it is not available in your country, push for the rights to your data.

While Facebook is using you as a product, you should still have the right to demand the information it has on you and is selling to third parties. Being the product isn’t fun; however, it’s nothing new. We’ve been the product for years and have never really complained. The difference now is that the information about you personally has never been better, and it is only going to get better the more you give them. For free.

They’ve Gone Plaid!! or, CERN Finds a Faster-Than-Light Particle

Yes, CERN has claimed that the speed of light has been broken by neutrinos. What exactly does that mean, and why is it a big deal? First, why is breaking the speed of light a big deal? According to the theory of special relativity, the speed of light is the maximum speed at which anything can travel. Because of the famous equation E=mc², it would take an infinite amount of energy to accelerate an object with mass to, let alone beyond, the speed of light.
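To see why, here’s a quick sketch (my own illustration, not from the original post) of the relativistic kinetic energy E = (γ − 1)mc², where γ = 1/√(1 − v²/c²). As v approaches c, γ diverges, and so does the energy required:

```python
import math

C = 299_792_458.0  # speed of light, m/s
M = 9.109e-31      # electron mass, kg (an illustrative choice of particle)

def kinetic_energy(v):
    """Relativistic kinetic energy (gamma - 1) * m * c^2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * M * C ** 2

# The energy needed grows without bound as v -> c
for fraction in (0.5, 0.9, 0.99, 0.999999):
    print(f"v = {fraction}c -> E = {kinetic_energy(fraction * C):.3e} J")
```

Each step closer to c costs disproportionately more energy, which is why nothing with mass can be pushed all the way to light speed.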

What is a neutrino? A neutrino is one of the elementary particles, part of the basic building blocks of atomic theory. Neutrinos carry no electric charge, which makes them different from the electron, and because they interact so weakly with other matter they are able to pass through it almost undisturbed. That is also why they require very special detection mechanisms. Neutrinos also have mass, tiny but nonzero.
This is why detecting a neutrino moving faster than the speed of light is a big deal. Either the neutrino has always traveled faster than light, or it was somehow accelerated to a speed greater than light’s, which requires infinite energy under our current model of physics. Since we’re talking about a particle accelerator here, it can be assumed that a collision created the neutrino, and we know that an infinite amount of energy cannot have been put into the system.
Now that we understand what is going on, what is at stake here? A particle that travels faster than the speed of light completely shifts our understanding of subatomic particles. Actually, it obliterates it. We would have no clear understanding of what is going on at these particle scales.
Could this be the greatest finding of the 21st century? If it holds up, many physicists believe it would be. Are people just accepting these findings? No. There is a great deal of skepticism, and it’s not just from the broader community. The scientists presenting the results are essentially issuing a challenge to the scientific community to show that they are wrong! By their own analysis, the results are statistically significant.
Are other scientists going to test these results to verify them? Well, there are only two other places in the world that might have the capability: Fermilab near Chicago and a Japanese lab that was damaged by the earthquake and tsunami. However, Fermilab’s equipment isn’t sensitive enough to detect the difference in speeds. Basically, the speed difference is so small that it falls within the margin of error of the detection equipment.
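To get a feel for just how small the effect is, here’s a back-of-the-envelope calculation. The baseline and timing figures are approximations I’m supplying from news coverage of the result, not from this post: the neutrinos reportedly arrived roughly 60 nanoseconds early over a roughly 730 km flight path.

```python
C = 299_792_458.0      # speed of light, m/s
BASELINE = 730_000.0   # approximate source-to-detector distance, m (assumed)
EARLY = 60e-9          # approximate reported early arrival, s (assumed)

light_time = BASELINE / C          # flight time at exactly the speed of light
neutrino_time = light_time - EARLY # measured neutrino flight time
fractional_excess = light_time / neutrino_time - 1.0

print(f"light travel time:  {light_time * 1e3:.3f} ms")
print(f"fractional excess:  {fractional_excess:.2e}")
```

The neutrinos would be faster than light by only a few parts in 100,000: resolving a ~60 ns difference inside a ~2.4 ms flight time is exactly the kind of precision most labs’ timing equipment can’t guarantee.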
What does all this mean for me, though? Well, for us non-physicists, life goes on as normal. We can’t suddenly travel faster than light. However, this is a case of good science at work, and we should pay serious attention to what happens here. This type of science happens all the time at a smaller scale; the evidence for evolution, for example, was built the same way. Some extraordinary claim is made, which requires extraordinary data to support it, and is then tested by other people. If the claim withstands additional scrutiny, it is accepted. In cases where the claim is so extraordinary that even the people making it don’t really buy it, it is the duty of the scientific community, and the larger community, to give them the support they need to determine the validity of the results.
Here are some comments from British CERN physicist Brian Cox.

Technocrats and Technology II

In my previous post I outlined some of the problems facing the energy sector in determining the best course of action in the wake of the Fukushima reactor disaster. One proposed solution was to create a group of experts to determine the best mixture of technologies and sources of energy. However, there are clear flaws in this methodology. First, there’s the problem of trust in these experts. Second, there’s an obvious lack of input from the general public. Third, there are problems with selecting the technologies themselves.

As I mentioned yesterday, experts can claim many different things, and using the right language can make something incredible sound credible. When these experts put out information or opinions, how can we trust them? Can we be sure they aren’t on the payroll of big oil or big coal? If they are university professors, how can we be sure they aren’t part of some global-warming conspiracy? I think it’s obvious there will be influence from oil and coal. This is to be expected, and the goal should actually be to welcome them into the discussion. We should include them, giving their opinions, obvious bias and all, the same weight as any other expert’s on the panel; the difference is that we want it known that they will be rooting for oil and coal. Why? Because we can more easily and critically analyze their economic data knowing for sure where it comes from. The same goes for a scientist heavily pushing solar or wind energy: we should know that they support it so we can have an honest discussion.

Public participation is a huge problem as well. Without proper support from local groups, agencies, and governments, a promising energy program can be killed. “Not In My Back Yard” (NIMBY) is always a hugely successful counterattack against green energy programs; people don’t want giant windmills overlooking a beautiful landscape or oceanscape they cherish. Understanding these concerns and getting public input into the process can lead to greater social acceptance of a plan. Making it clear where the information is coming from will also improve the tone of the conversation; without clarity about information sources, public opinion can quickly turn against a project.

Finally, what technologies should we use? Public opinion and vested interests in legacy technologies are very difficult to overcome, especially when a technology like solar is more expensive than coal power and has a less consistent energy profile. Among the solar technologies, how do we select the best one? How do we pick the right nuclear power plants? There are many different technologies out there competing, and it is not clear which ones a government plan should invest in. We are likely to pick a loser. However, we still need to choose something. I have previously mentioned some ways to select technologies, and I’ll discuss that more in my next post.

Technocrats and Technology

On my way back from Oktoberfest, which was awesome, my fellow car passengers and I discussed Germany’s decision to phase out nuclear energy over time. We all felt that this was an incredibly poor long-term decision, a knee-jerk reaction to the nuclear disaster at Fukushima. However, it raised some other questions about how to enact energy policy choices, as well as other technology and science policies. We mostly focused on energy, as that was the topic of interest, but the questions really do spill over into most science and technology policy at the national level.

The obvious solution, to most engineers, is to set up a panel of experts and have them come up with the best choices for energy sources. Sadly, there are some flaws in this line of thinking. First, who selects these experts? Let’s use the US as a model country in this regard. There would be a huge battle over which experts should be included on the panel. If it had to be split 50/50 between experts selected by the Republicans and the Democrats, we’d most likely get a group of lobbyists for the oil and gas industries from the Republicans and a mixture of wind and solar experts from the Democrats. Nuclear energy might be left off the radar completely, even though there are plenty of reactor technologies out there that are hugely safer than the Fukushima reactors.

Additionally, nuclear energy has a stigma associated with it due to Three Mile Island, Chernobyl, and now Fukushima. It doesn’t matter that coal is just as destructive, or that oil and natural gas extraction cause almost immediate negative impacts on the local environment. Why? Because these are huge job-creating industries that have also been legitimized over the past 100-plus years in many regions. In Pennsylvania, for example, where Three Mile Island resides, coal is a way of life for many people; it’s an occupation many have worked at all their lives. There are still nuclear facilities in the state, but local residents view them with much more skepticism, distrust, and fear.

Many engineers are technocrats at heart: they believe that technology can solve a huge number of issues and that technology experts should be making many of the policy decisions related to technology. These technocrats are viewed with skepticism by the broader public. In many cases there are huge debates over the sources of the data and the reports that accompany these experts. In the case of GMOs, even when the public is given information from both sides, it is not trusted. Why? Because people have lost faith in their governments and believe there are scientific conspiracies to enact dangerous practices.

In my next post I’ll discuss more issues with these topics and go into some detail on cases where large differences in views were eventually overcome.

Productivity Gains from Fiber Networks

This is going to be a bit of a random post based on a seminar I attended today before a meeting with two of my professors. The idea is that an increase in internet speed will lead to productivity gains at corporations, which of course will lead to a growing economy. First, why do we care about this? Well, in small countries like New Zealand and the Netherlands, as well as in South Korea and Japan, where broadband penetration is extremely high, there is discussion of using public money to build fiber networks.

What’s the difference? Broadband basically means anything faster than dial-up internet; if you have DSL, ADSL, or cable, you have a broadband connection. By penetration, I mean that a high number of users across many different areas have access to broadband connections: there are enough providers that the majority of users can reach the internet at rates high enough to stream video and download pictures reasonably quickly. What is fiber, though? Fiber-optic networks, because that’s what they are, use lasers to communicate information rather than electrons. On a cable line, changes in voltage indicate a one or a zero, whereas with fiber the light is either on or off (a one or a zero), which allows data to be transmitted at a much higher rate.
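To put the tiers in perspective, here’s a rough comparison of how long a 100 MB file takes to download at each one. The connection speeds are typical illustrative values I’ve chosen, not figures from the seminar:

```python
FILE_SIZE_BITS = 100 * 8 * 10**6  # a 100 MB file, in bits

# Illustrative connection speeds in bits per second (assumed, not from the talk)
speeds = {
    "dial-up (56 kbps)": 56_000,
    "ADSL (8 Mbps)":     8_000_000,
    "fiber (100 Mbps)":  100_000_000,
}

for name, bps in speeds.items():
    seconds = FILE_SIZE_BITS / bps
    print(f"{name:>18}: {seconds:,.1f} s")
```

The dial-up-to-broadband jump takes the download from about four hours to under two minutes, while broadband-to-fiber only takes it from under two minutes to seconds, which fits the study’s finding (below) that the first transition is where the big productivity gain appeared.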

Based on productivity data and information about different firms, the study indicated that the largest productivity gain was seen in the shift from dial-up to broadband. It also indicated that firms using fiber and firms using broadband saw no difference in productivity; a follow-up study found that any difference there was related to firm size and industry. This basically showed that there is no reason to subsidize fiber: the government should not try to force telecoms to lay fiber networks, and individual firms that require fiber should pay for the investment themselves. The study also suggested that, for most firms, the applications that would require fiber networks may simply not exist.

Personally, and the author agrees with me, I think this leads to a chicken-and-egg problem. If there is no fiber network, how do you create an application requiring fiber with a wide enough audience, when there are few customers able to use it? It also puts a large burden on firms that require the network, especially smaller ones. Larger firms are better positioned to afford both the cost of running a fiber line to their office and the equipment to utilize it, though in many cases they will still have to deal with the old equipment from the broadband system they used.

All in all, I felt it was a very interesting talk that discussed various problems with trying to get the government to subsidize the creation of a broadband network. The author also suggested that if you are trying to stimulate the economy by building such a network, there might be better targets for the subsidy dollars.

I’ll try to post next week. I’m heading to Munich tomorrow morning.