Apple v Samsung: iJury

As most of you are aware, Apple crushed Samsung in its suit. Every one of Apple's patents was upheld, and Samsung owes Apple just a touch over $1 billion. This is going to do a great deal to chill innovation. Many people are pointing out that these patents and the idea of copying aren't new, and that Apple itself has stolen a great deal. In one discussion, an author at the Urban Times seemed to argue that the theft of these ideas is more honest than copying and that Apple was a better company for doing so. Well, there's a major flaw in that idea: the theft of an idea is essentially copying the idea; the only difference is that you act as if it was always yours and that you never copied anyone.

One author thinks that one billion dollars is a small price to pay to be the second-largest mobile manufacturer in the world. I understand the thinking: Samsung copied a great deal from Apple, and it only cost them a fraction of what it could have. However, this is a short-sighted view. The manner in which Apple has attacked Samsung isn't going to stop and will likely intensify. The ruling in San Jose wasn't the only one that came in yesterday. In Korea, a judge ruled that both companies were infringing each other's patents and banned both companies' products from being imported into the country, while also finding that Samsung didn't copy. In the UK, a judge likewise said that Samsung didn't copy and wasn't cool enough to be confused with an iAnything, ordering Apple to post a notice to that effect on its website.

The idea that Apple's design for the phone's home screen is unique is a bit absurd. Apple simply changed the way the buttons looked, but extremely similar interfaces had existed for years. I had a Sony CLIÉ PDA in 2001 and 2002, and some of the way that product looked was similar to the iPhone. Apple repackaged things extremely well. Judge Koh did not allow Samsung to present all of its prior-art evidence to the jury, which certainly didn't help Samsung's case (Samsung released it to the public, though).

The other major issue with this case is the idea that laypeople can really understand patents. Patents are difficult to understand, written in legalese, and intended to be so broad that they can be interpreted in many different ways. I've read through several patents, and quite frankly they are confusing and in many cases don't convey the information they are required to convey (how to manufacture or build whatever is patented).

For a patent to be valid, it must meet only three conditions: novelty, which means that nothing like it has been done before; non-obviousness, which originally meant that an expert in the field wouldn't see it as a natural extension of previous work (now it must merely be non-obvious to a layperson); and finally the possibility of industrial application, which means the technology must be useful in some way. Many of Apple's patents do not meet the threshold for the first two, novelty or non-obviousness. Of course, people who disagree will argue that these patents only seem obvious in hindsight because Apple did such a good job of inventing them. I disagree, primarily because many of the patents are reapplications of ideas from the computer to the smartphone.

I’m extremely worried about the future of innovation in light of this ruling. I think that there will be serious repercussions and whatever comes out of this will be terrible for consumers.

Finally, check out this video discussing what Apple has invented:

Europe in the Driver’s Seat

Today I woke up to wonderful news: CERN has discovered the Higgs boson, the so-called "God particle," and the EU Parliament has voted against ACTA. This is a great day for science and for freedom of expression.

What do these mean? Well, the Higgs particle is supposed to be the particle that gives everything else mass. It is the actual building block that everything in our universe is supposedly built upon. Why do I say supposedly? Well, the discovery comes with 5 sigma confidence. That is really good, but in many cases they like to have 9 sigma. What does that mean in layman's terms? Most testing looks for a probability of less than 5% that the result could happen by pure happenstance, or random error; in other words, 95% of your data bear out the hypothesis you're trying to test. That happens at around 2 sigma, where sigma represents a standard deviation. Most products are made with safety specifications around 2 sigma, maybe 3 sigma (99.73%). The values we're talking about here are so high that at 5 or 9 sigma you're getting into lottery-winning (or plane-crash, for that matter) territory. With requirements that strict, you actually run a greater risk of missing a real signal than of accepting one that isn't there: you are being so demanding of your data that something that really is the signal gets thrown out of your data set.
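To put rough numbers on these thresholds, here is a minimal sketch in Python using SciPy (my tooling choice for illustration, not anything CERN uses) that converts a sigma level into the probability of a chance fluctuation:

```python
from scipy.stats import norm

# One-sided tail probability: the chance that pure random noise
# fluctuates at least k standard deviations above the mean.
for k in [2, 3, 5, 9]:
    p = norm.sf(k)  # survival function, 1 - CDF
    print(f"{k} sigma: p = {p:.3e}  (confidence {100 * (1 - p):.10f}%)")
```

At 5 sigma the chance of a pure fluke comes out to roughly one in 3.5 million, which is where the lottery comparison comes from.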

Does this change my daily life? No, not at all. We won't be able to do anything functional at this level for more than a century, if ever. We're still working out the results of Einstein's theories and how to apply them. We haven't really gotten quantum computing working, or any of the other cool things we're working on (teleporting light and particles, for example). However, it does give us a greater understanding of how the universe works, and we've had to develop a lot of new technologies to detect these particles. Those technologies could be very useful in the future for completely unrelated applications.

ACTA is a very different story. I've talked about it in the past and mentioned how much of a risk it was to the openness of the internet and to our society as a whole. The largest political body in Europe has decided to reject ACTA, and the vote wasn't even remotely close. Our hard work has paid off and the treaty is effectively dead. In the US it hasn't been ratified by the legislative branch, so it is really only going to be between the US and Morocco, which isn't going to be very effective. This is fantastic news and I'm extremely excited about it.

Unfortunately, we can't just take a break; we have to keep working on the main reason these laws are even brought up for a vote in the first place. The USTR is currently negotiating the TPP, which is starting to be viewed in a similar light to ACTA. I believe that we're on the right path to stopping these types of legislation and treaties.

Way to go, Europe, on two major fronts.

Evolution and Innovation

Apparently I published this before I meant to. Anyway, today Techdirt published a discussion on copying, innovation, and evolution. Basically, a biologist argued that we are evolutionarily predisposed to copy and to use group learning to develop new tools. What this means is that instead of developing something out of the blue, we first have to see what someone else has done, copy whatever they did, and then, in a parasitic way, make marginal improvements on the original. We're nothing but freeloading copiers that make things a little better.

Techdirt completely disagreed with this point of view. They argued that simply copying something or a part of something doesn’t mean you’re freeloading. You can add a great deal to something to the point that whatever you copied simply becomes a part of a larger whole.

Anyone should know from my writing that I support Techdirt's perspective. This comes from several different arguments. The first is the evolution of technology. If you ignore some of the human motivation behind the changing technology itself and focus on the selection process, you can see that technology changes through incremental adjustments. These changes are selected by the market, or in primitive societies by the end result of an improvement: spears that last longer mean less energy expended on making new spears; spears that can be thrown farther mean less danger from the animal being killed; sharper shovels mean less energy spent gathering food, and so more food. This selection process is a very natural one. Additionally, there would have been some specialization of skills even at that point in our history. Some people would have been better at making spears, and in a collaborative environment, because there were no patents and sharing benefited everyone, many people could experiment with new spear designs. This innovation, while based on copying, is a very real form of innovation that likely led to gradual improvement over a great deal of time.

The second argument that supports innovation after copying comes from César Hidalgo, who argues that by looking at what countries currently produce, you can see a relationship with their innovative ability. By looking at which technologies they import and export, you can see how far they have developed scientifically and in manufacturing. For example, you can expect to see more advanced products come out of a country that got into producing fertilizer very early in modern times. That typically leads to a general chemical industry, which can lead to pharmaceuticals and semiconductors. Why? A strong base in chemistry built on fertilizers can be expanded into drugs and into the chemical processes underlying semiconductors.

How do new countries move into these fields? Essentially, through a knowledge transfer from a country that is already doing it. This can happen in two ways. The first is the easy way: have a multinational company set up a manufacturing facility, and then an R&D facility, in your country. This allows a direct flow of knowledge about how to manufacture the material, which increases the rate of copying. It would allow the country to be a fast follower, but it will still take significant time before it can innovate on that technology itself. Adding an R&D facility increases this rate, because local scientists will already have been trained to innovate in that field. They will already have been doing research in that industry and will more easily be able to innovate if a spin-off is created (or if the state nationalizes that part of the multinational). The second way is much slower: repatriating knowledge workers. This is essentially what has happened in Taiwan and India, where educated Taiwanese and Indians returned from the US, created spin-offs, and became professors at local universities. This isn't always successful.

Saudi Arabia is trying to develop a third way: recruiting experts from around the world to build up its own universities and companies. This is having mixed results so far, and education and industry need to pay attention to these attempts to see how well they play out in the long run.


Copying is extremely important in education and is required to develop new industries in a country. Technology evolves through copying previous technology, recombining it with new learning from other fields, and experimenting within the current field. Without copying there cannot be innovation. The more people participating in an economy where innovation through copying is rewarded, the greater our culture and our technological evolution will be. Biology needs to take a lesson from evolutionary economics.

A bit remiss

Sorry, dear readers, I've been very bad about writing any blogs lately. I've had some pretty big changes in the past two months, as you all know. I've moved back from the Netherlands to the US, did some consulting work, and just started a job at AMD. Consequently, I haven't been able to post as much as I have in the past. Big changes have been happening in my life.

Because of these changes I wasn't able to pay enough attention to the CISPA fiasco that just occurred in the US. This law is a terrible step in the direction of data tyranny, and I'm not even being hyperbolic about this. I wrote about the risks of having a voluntary data sharing program, and in my review of Consent of the Networked I discussed the different data and government regimes out in the "wild." These concerns are valid, and we need to be aware of what's going on. Now, I have to say we pretty much blew our collective internet protest load with the SOPA/PIPA protests, which is actually a problem. I would hazard that in many ways CISPA is as bad as or worse than SOPA, yet I didn't see as much chatter about CISPA on reddit, Twitter, Google+, or Facebook as I did about SOPA.

I think there are a few reasons for this. First, the majority of people were able to clearly understand the risks associated with SOPA. Those risks are pretty straightforward and understandable, and they affect us tomorrow, not in some future time period; in many ways SOPA-like actions can already happen today. This makes it extremely obvious why SOPA/PIPA are terrible laws and should be opposed at many levels. Second, with CISPA coming so quickly after the SOPA/PIPA protests, there was likely something of a protest overload, or disbelief that another law as bad as or worse than SOPA could come through so quickly, especially given the language used at the time of SOPA. It would have broken the Internet; how could anything be worse than that? Third, there was more support from large companies for this law than for SOPA, and apparently that matters more than we realized. During SOPA we were able to push Wikipedia, Facebook, and other large companies to protest; in this case, Facebook and Microsoft supported the law while Google sat on the sidelines saying nothing.

From this standpoint, people who weren't happy with CISPA but didn't understand its importance likely didn't do anything about it. Whenever a fantastic website like Wikipedia blacks out in protest of a law, though, it gets people who are merely on the fence to actually do something.

CISPA and SOPA are both bad, but in very different ways. CISPA is something of an abstraction of risk. Losing your privacy, when so many people already voluntarily give up so much information about themselves on Facebook and Twitter, might not seem like as big of a deal. The second abstraction is a lack of understanding of the impact of the data sharing. It's unclear what exactly the Feds would do with the data once they have it, and it's unclear how the data would be shared within the government. However, it is likely that the data would be shared throughout the government, including the military, which many privacy experts say essentially legalizes military spying on US civilians. The third problem is that many people feel that if you aren't doing anything wrong, you don't have anything to worry about. This is a fallacy: even people who are doing nothing wrong can get in trouble. I've discussed cases where people were fired for posting drunken pictures on Facebook. Additionally, this type of law represents the biggest of the big government that we can imagine. There's no reason why the government needs to know what we're doing in this level of detail.

It’s going to be a long and difficult fight to keep our internet free. However, it’s something that we must do and I believe we can do it. We will just need to keep vigilant and work together to ensure that our internet stays our internet.

The possible effects of a faster Fourier Transform

This weekend my friend Ryan sent me a link to a note in Technology Review where they talked about a group of researchers at MIT who showed off a new algorithm to speed up the calculation of the Fast Fourier Transform (FFT). The FFT is a method to decompose a signal into a sum of sines and cosines of different frequencies, allowing you to study the signal itself or to compress the information it carries. A famous example of its application is music, since the MP3 is the end result of this kind of transform. I have to admit that I was really surprised by the MIT development, so I looked for other notes or papers about it and found this IEEE Spectrum note that talks a little bit more about the results the group has achieved. The basic idea of the new algorithm is to exploit the fact that some signals are "sparse," i.e. they contain just a relatively small number of frequency components that are significant, and therefore the number of operations needed to calculate the FFT can be reduced considerably for these kinds of signals.
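To illustrate what "sparse in frequency" means, here is a minimal NumPy sketch using the ordinary dense FFT (the MIT algorithm itself isn't published yet, so this only shows the sparsity the new method would exploit):

```python
import numpy as np

fs = 1024                        # sampling rate, one second of signal
t = np.arange(fs) / fs
# A frequency-sparse signal: just two tones, 50 Hz and 120 Hz
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)               # the classic (dense) FFT
mag = np.abs(X)

# Only a handful of the coefficients carry real information
significant = np.sum(mag > 0.01 * mag.max())
print(f"{significant} significant coefficients out of {mag.size}")  # -> 2
```

The dense FFT spends the same effort on every one of those coefficients; the sparse version, as I understand it, concentrates the work on the few that matter.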

Certainly there are a lot of applications where the traditional FFT will still be needed, but we can think of many examples where the sparse FFT will be useful, since a lot of phenomena are sparse in frequency: the sound of your guitar (every string is actually tuned to a specific frequency), wave propagation (a wave normally has just a small number of dominant frequencies), and we can even study images as two-dimensional signals with sparse frequency content. Even though we will have to wait until the paper describing the algorithm is published, we can already imagine several fields that could benefit from this new technique in terms of speed and even in the amount of energy required for certain applications.

Imagine the improvement of having a faster algorithm for sparse signals in Particle Image Velocimetry (PIV), a technique researchers use to see and study the streamlines of a fluid. Really small particles, called tracers, are seeded into the fluid, and by comparing the average movement of those particles in consecutive images it is possible to estimate the instantaneous velocities in the fluid. Not one but several FFTs must be calculated to estimate the particle velocities for a single image pair, so the speed of PIV depends almost directly on the time the FFT takes to compute. Since PIV images are normally captured in such a way that just the particles are visible while the fluid appears as part of the background, those images are likely to present just a small number of dominant frequencies, and the use of the sparse FFT would result in faster PIV sensors. Those faster sensors may in turn enable other improvements, like PIV-based control for high-speed flows, allowing researchers to come up with new experiments that are not feasible right now.
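To make the FFT dependence concrete, here is a toy sketch of the core PIV correlation step, written in NumPy from the general description above rather than from any particular PIV package:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    # Cross-correlate two interrogation windows via the convolution
    # theorem: two forward FFTs and one inverse FFT per window pair.
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.ifft2(fa * np.conj(fb)).real
    # The location of the correlation peak is the average particle shift.
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap indices so that shifts can come out negative.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

# Toy usage: a synthetic tracer image shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
frame = (rng.random((64, 64)) > 0.97).astype(float)  # sparse "particles"
moved = np.roll(frame, (3, 5), axis=(0, 1))
print(piv_displacement(moved, frame))                 # expect [3, 5]
```

Every interrogation window in every image pair pays for three FFTs, so a speed-up in the transform itself would translate almost directly into a faster PIV measurement.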

It is easy to see that fewer calculations for the FFT means less computation power needed, and therefore devices used to process large data series, like meteorological or oceanographic data, will be able to crunch even more data than they currently do with the same computational power, which may result in faster models, for example. Weather prediction models, for instance, may be able to grow more sophisticated, since the data processing needed to feed the forecast would require fewer computations with the sparse FFT.

We don't know yet if the sparse FFT will also speed up the inverse FFT, or whether the quality of the reconstruction of the original signal will be preserved, but if we assume both, we could also expect positive effects in video and audio streaming systems, since data compression may improve without losing quality or definition, allowing end users to receive smaller data packages that reconstruct a high-quality song or video. Even with music players, if the sparse FFT can be used to compress and reconstruct audio data without losing quality, we may in the future find music players that are even smaller and have longer battery life without sacrificing the quality of our musical experience, since an increase in the speed of the FFT and its inverse means fewer computations, and therefore less hardware and less energy consumption.
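Under those assumptions, the compression idea can be sketched in a few lines of NumPy: transform, keep only the largest coefficients, and rebuild the signal with the inverse transform (the ordinary FFT stands in here for the sparse one, and the signal and threshold are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(4096) / 4096
# A frequency-sparse "audio" signal: a couple of tones plus a little noise
x = (np.sin(2 * np.pi * 220 * t) + 0.4 * np.sin(2 * np.pi * 440 * t)
     + 0.01 * rng.standard_normal(t.size))

X = np.fft.rfft(x)
keep = 8                                    # coefficients to keep
idx = np.argsort(np.abs(X))[:-keep]         # indices of all the smaller ones
X_compressed = X.copy()
X_compressed[idx] = 0                       # discard everything else

x_rebuilt = np.fft.irfft(X_compressed, n=x.size)
err = np.linalg.norm(x - x_rebuilt) / np.linalg.norm(x)
print(f"kept {keep} of {X.size} coefficients, relative error {err:.3f}")
```

The fewer coefficients you have to compute, store, and transmit, the smaller the data package, which is where the bandwidth and battery savings would come from.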

We don't yet know for sure the exact effect this new algorithm will have on technology, but since the FFT is used in almost every device that processes or transmits data today, we can safely predict that the sparse FFT will boost some research fields and motivate improvements in devices and technologies we use day to day. This may be a small snowball that ends up turning into a huge avalanche in the coming months or years.