A bit remiss

Sorry, dear readers, I’ve been very bad about writing any blogs lately. As you all know, I’ve had some pretty big changes in the past two months: I moved back from the Netherlands to the US, did some consulting work, and just started a job at AMD. Consequently, I haven’t been able to post as much as I have in the past.

Because of these changes I wasn’t able to pay enough attention to the CISPA fiasco that just occurred in the US. This law is a terrible step in the direction of data tyranny, and I’m not even being hyperbolic about that. I wrote about the risks of having a voluntary data sharing program, and in my review of Consent of the Networked I discussed the different data and government regimes out in the “wild.” These concerns are valid, and we need to be aware of what’s going on. Now, I have to say we pretty much blew our collective internet protest load with the SOPA/PIPA protests, which is actually a problem. I would hazard that in many ways CISPA is as bad as or worse than SOPA, yet I didn’t see nearly as much chatter about it on reddit, twitter, Google+, or Facebook as I did about SOPA.

I think there are a few reasons for this. First, the majority of people were able to clearly understand the risks associated with SOPA. Those risks are straightforward and immediate: they affect us tomorrow, not in some future time period, and in many ways SOPA-like actions can already happen today. This makes it extremely obvious why SOPA/PIPA were terrible laws that should be opposed at many levels. Second, with CISPA coming so quickly after the SOPA/PIPA protests, there was likely something of a protest overload, or disbelief that another law as bad as or worse than SOPA could come through so soon. Especially given the language used at the time of SOPA: it would have broken the Internet, so how could anything be worse than that? Third, there was more support from large companies for this law than for SOPA, and apparently that matters more than we realized. We were able to push Wikipedia, Facebook, and other large companies to protest SOPA. In this case, however, Facebook and Microsoft supported CISPA, while Google sat on the sidelines saying nothing.

From this standpoint, I think people who weren’t happy with CISPA but didn’t understand its importance likely didn’t do anything about it. However, when a fantastic website like Wikipedia blacks out in protest against a law, it gets people who are merely on the fence to actually do something.

CISPA and SOPA are both bad, but in very different ways. CISPA is something of an abstraction of risk. Losing your privacy when so many people already voluntarily give up so much information about themselves on Facebook and Twitter might not seem like as big a deal. The second abstraction is a lack of understanding of the impact of the data sharing. It’s unclear what exactly the Feds would do with the data once they have it, and it’s unclear how data sharing would occur within the government. It is likely, however, that the data would be shared throughout the government, including the military, which many privacy experts say essentially legalizes military spying on US civilians. The third problem is that many people feel that if you aren’t doing something wrong you don’t have anything to worry about. This is a fallacy: even people who are doing nothing wrong can get in trouble. I’ve discussed cases where people were fired for posting drunken pictures on Facebook. Additionally, this type of law represents the biggest of big government that we can imagine. There’s no reason why the government needs to know what we’re doing in this level of detail.

It’s going to be a long and difficult fight to keep our internet free. However, it’s something that we must do, and I believe we can do it. We will just need to stay vigilant and work together to ensure that our internet stays our internet.

The possible effects of a faster Fourier Transform

This weekend my friend Ryan sent me a link to a note in Technology Review about a group of researchers at MIT who showed off a new algorithm to speed up the calculation of the Fast Fourier Transform (FFT). The FFT is a method to decompose a signal into a sum of sines and cosines of different frequencies, allowing us to study the signal itself or to compress the information it carries. A famous example of its application is music, since the MP3 format is built on this kind of frequency-domain transform. I have to admit I was really surprised by the MIT development, so I looked for other notes or papers about it and found this IEEE Spectrum note where they talk a little bit more about the results the group has achieved. The basic idea of the new algorithm is to exploit the fact that some signals are “sparse”, i.e. they contain just a relatively small number of frequency components that are significant, and therefore the number of operations needed to calculate the FFT can be reduced considerably for these kinds of signals.
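To make the idea of sparsity concrete, here is a quick numpy sketch. It uses the ordinary FFT (not the MIT algorithm), and the three tone frequencies are just made-up values for illustration; the point is simply that a signal built from a few sinusoids ends up with only a handful of significant frequency coefficients.

```python
import numpy as np

# Illustration of frequency-domain sparsity with a plain numpy FFT:
# a signal made of only three tones has only a few significant coefficients.
n = 4096                      # number of samples (1 second at 4096 Hz)
t = np.arange(n) / n
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.5 * np.sin(2 * np.pi * 120 * t)
          + 0.25 * np.sin(2 * np.pi * 300 * t))

spectrum = np.fft.rfft(signal)
magnitude = np.abs(spectrum)

# Count how many coefficients carry more than 1% of the peak magnitude.
significant = np.sum(magnitude > 0.01 * magnitude.max())
print(f"{significant} significant coefficients out of {magnitude.size}")
```

Running this reports just three significant coefficients out of a couple of thousand, which is exactly the kind of structure the sparse FFT is designed to exploit.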

Certainly there are a lot of applications where the traditional FFT will still be needed, but we can think of plenty of examples where the sparse FFT will be of use, since many phenomena are sparse in frequency: the sound of your guitar (each string is actually tuned to a specific frequency), wave propagation (a wave normally has just a small number of dominant frequencies), and we can even study images as two-dimensional signals with sparse frequency content. Even though we will have to wait until the paper describing the algorithm is published, we can already imagine several fields that could benefit from this new technique in terms of speed and even the amount of energy required in certain applications.

Imagine the improvement of having a faster algorithm for sparse signals in Particle Image Velocimetry (PIV), a technique researchers use to see and study the streamlines of a fluid. Really small particles, called tracers, are seeded into the fluid, and by comparing the average movement of those particles in consecutive images it is possible to estimate the instantaneous velocities in the fluid. Not one but several FFTs must be calculated to estimate the velocity of the particles for a single image pair, so PIV processing speed depends almost directly on the time the FFT takes to compute. Since PIV images are normally captured in such a way that only the particles are visible while the fluid appears as part of the background, those images are likely to have just a small number of dominant frequencies, and the use of the sparse FFT could result in faster PIV sensors. Those faster PIV sensors may in turn enable other improvements, like PIV-based control for high speed fluids, allowing researchers to come up with new experiments that are not feasible right now.
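To see why the FFT dominates PIV processing time, here is a hedged sketch of the FFT-heavy core of a typical PIV estimator: cross-correlating two interrogation windows via the FFT to find the most likely particle displacement. The window contents and the displacement are synthetic; a real PIV pipeline repeats this step for every window in every image pair.

```python
import numpy as np

# Cross-correlation of two PIV interrogation windows via the convolution theorem.
rng = np.random.default_rng(0)
window_a = rng.random((64, 64))                    # "particle" pattern at time t
true_shift = (3, -5)                               # displacement in pixels
window_b = np.roll(window_a, true_shift, axis=(0, 1))  # same particles at t + dt

# FFT both windows, multiply one by the conjugate of the other, inverse FFT:
# this yields the circular cross-correlation map in one shot.
corr = np.fft.ifft2(np.fft.fft2(window_a).conj() * np.fft.fft2(window_b)).real

# The peak of the correlation map gives the displacement estimate
# (wrapping indices above half the window size back to negative shifts).
peak = np.unravel_index(np.argmax(corr), corr.shape)
estimate = [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]
print("estimated displacement:", estimate)         # expect [3, -5]
```

Every interrogation window needs at least two forward FFTs and one inverse FFT, so any speedup in the transform itself propagates almost directly to the whole measurement.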

It is easy to understand that fewer calculations for the FFT means less computational power is needed, and therefore the devices used to process large data series, like meteorological or oceanographic data, will be able to crunch even more data than they do now using the same computational power, which may result in faster models, for example. Weather prediction algorithms, for instance, could become more sophisticated, since the data processing needed to feed the forecast would require fewer computations with the sparse FFT.

We don’t know yet whether the sparse FFT will also speed up the inverse FFT, or whether the quality of the reconstruction of the original signal will be preserved, but if we assume so, we could also expect positive effects in video and audio streaming systems, since data compression could improve without losing quality or definition, allowing end users to receive smaller data packages that still reconstruct a high quality song or video. Even with music players, if the sparse FFT can be used to compress and reconstruct audio data without losing quality, we may in the future find music players that are even smaller and have longer battery life without sacrificing the quality of our musical experience, since an increase in the speed of the FFT and its inverse means fewer computations, and therefore less hardware and less energy consumption.
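The compression idea itself is easy to sketch with the ordinary numpy FFT pair (again, not the sparse algorithm, and with an invented test tone): keep only the strongest frequency components, reconstruct with the inverse FFT, and check how much of the signal survives.

```python
import numpy as np

# Toy "compression" of a noisy two-tone signal: keep only the largest FFT
# coefficients, zero the rest, and reconstruct with the inverse FFT.
n = 8192
t = np.arange(n) / n
audio = (np.sin(2 * np.pi * 440 * t)             # A4
         + 0.4 * np.sin(2 * np.pi * 880 * t)     # first harmonic
         + 0.05 * np.random.randn(n))            # a little noise

spectrum = np.fft.rfft(audio)
keep = 16                                        # coefficients to retain
threshold = np.sort(np.abs(spectrum))[-keep]
compressed = np.where(np.abs(spectrum) >= threshold, spectrum, 0)

reconstructed = np.fft.irfft(compressed, n)
error = np.linalg.norm(audio - reconstructed) / np.linalg.norm(audio)
print(f"kept {keep} of {spectrum.size} coefficients, relative error {error:.3f}")
```

Because the signal is sparse in frequency, a handful of coefficients reproduces it with only a small relative error, which is the basic intuition behind hoping the sparse FFT helps on the compression and reconstruction side as well.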

We don’t yet know for sure the exact effect this new algorithm will have on technology, but since the FFT is used in almost any device that processes or transmits data today, we can safely predict that the sparse FFT will boost some research fields and motivate improvements in devices and technologies we use day to day. This may be a small snowball that ends up as a huge avalanche in the coming months or years.