Tracking the right metric

Last week I wrote about the Facebook IPO and how, for the company, the shift to tracking stock price as a metric was a big deal. The focus has shifted from what Facebook was and could be to the broader public, to how all of its actions impact the company's stock price. Today Forbes published an article about how what you measure shapes later choices. One thing it didn't mention is how frequently a measure or metric is reported. All of these things matter.

Looking at Facebook, I think it's rather clear why Zuckerberg has publicly stated that he doesn't care about the company's stock price. Stock price is continually reported, and when major milestones are passed, positive or negative, everyone talks about it. Apparently, Facebook dipped below $30/share today. Is this the end of the world? No, but it does mean that a lot of people have lost a lot of money.

Let's look at stocks. Do they truly reflect the value of a company? I personally don't think so. So many factors shift the price of a given stock in a week that it's impossible for the underlying value of the company to fluctuate in the same manner. However, the price of a stock does affect what a business is able to do. Companies can leverage their stock value for loans and interest rates, which means a company can suddenly gain or lose market capital if the market swings for reasons completely unrelated to it and investors sell off their stock.

Despite the fact that, at best, there's a loose correlation between the actual value of a company and the price of its stock, CEOs are held accountable to this metric by investors. Maybe some CEOs do ignore it, as Zuckerberg plans to (I've heard Jeff Bezos at Amazon does). However, when a metric is continually reported and discussed, it will likely change behavior even if the CEO does their best to ignore it. And even if the CEO succeeds in ignoring it, in many cases the board or the investors will not. They may take serious action if the CEO does not work to ensure that their metric, stock value, continues to increase.

However, this may drive the wrong behavior. Tracking the wrong metric may mean answering the wrong question. What increases our stock price may not be the same as what keeps our company competitive. A company that reduces its workforce to cut expenses at the end of the year may be seriously hampering its ability to compete over the next few years. The change will likely bolster the stock's performance in the near term, but lead to greater drops in the medium or long term.

Company management should not be measured on stock price alone, and neither should a company. As much as I dislike Facebook and Mark Zuckerberg, Facebook is a company that has more value than simply its bottom line. It is able to create new networks and new places for activists to work. Is this likely to continue? I don't know. Could another company come along and beat them at it? Definitely. That's why Facebook bought Instagram and will likely buy other companies that could threaten its market space.

Facebook, IPO and valuing a company

This week we've been hearing about the debacle that was the Facebook IPO, which has revealed that some of the underwriters for the IPO were doing shady things. Matt Taibbi believes this indicates that there are essentially two markets: one for the insiders and one for the schmucks, the everyday investors.

Why is this important? Well, based on the discussions I've read online, there's a lot of concern about the validity of the whole IPO process, the methods used to value companies, and how investors think about companies. Facebook's valuation was heavily debated before the final IPO price of $38/share, partially driven by two articles that came out. The first mentioned that GM was pulling its account because "Facebook ads don't work." The other reported that researchers found that 44% of Facebook users will NEVER click an ad. This research is important because part of the valuation is based on the conversion rate of ad views to ad clicks. On average, Facebook was only able to earn around $4.34 per user. A valuation of $100 billion puts the lifetime earning potential per user at $100 (at 1 billion users). This is pretty low, but at the same time, if only 560 million users ever click an ad, the people who do click need to be earning Facebook roughly $200 each.
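To make the arithmetic explicit, here's a quick back-of-the-envelope check of those figures. The numbers come straight from the reports mentioned above; the calculation is just a sanity check:

```python
# Back-of-the-envelope check: a $100B valuation spread over ~1B users,
# with 44% of users never clicking an ad.
valuation = 100e9
users = 1e9
never_click = 0.44

per_user = valuation / users            # implied lifetime value per user
clickers = users * (1 - never_click)    # the 560M users who ever click an ad
per_clicker = valuation / clickers      # burden carried by each ad-clicker

print(per_user)     # → 100.0
print(per_clicker)  # ≈ 178.6, i.e. roughly $200 per clicking user
```

So the "roughly $200" figure is a slight round-up of about $179 of lifetime revenue that each ad-clicking user would need to generate.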

MIT Technology Review discusses how this is an unsustainable growth model for Facebook. Essentially, Facebook will begin to drive down the cost per view for its advertisers to try to increase total revenue. This falls into the race-to-the-bottom mentality that crushes industries. Advertisers will be able to say to any website: why should we pay you x per ad when we only pay Facebook y, and there's no way you can get us more views than Facebook? The only way a site could command more revenue is by showing data for higher click-through and conversion rates than Facebook. That might be tough. The Review article argues that this will eventually kill Facebook and a lot of ad-driven website business models.

The other aspect of the IPO is a shift in the way that business and technology media report on Facebook. Coverage has moved from all the non-business activities to focusing solely on this aspect of Facebook. This will likely soften over time, but I believe these considerations will now be discussed in any article related to Facebook. It will be difficult for Facebook to remain a haven for activists if there are potential suits over people being activists. There will be an increase in risk aversion among the "owners" of the company as investors exert their influence.

Zuckerberg has said that he plans on doing what is best for the long term and will try to ignore the demands of investors. He might be able to do that because he still owns 57% of the voting rights for the company. However, it will be difficult for him to avoid the influence of the media discourse. Even if he gets all his news from his friends on Facebook, there will likely be articles posted that give him news about the company, and things that he probably won't want to read.

Essentially, discussions will shift from the privacy risks for users to how changes to Facebook will impact investors' bottom lines. I don't think this is healthy for businesses, for consumers of Facebook, or for the general public. There are other things companies do, unrelated to investors, that are important for society as a whole. The Facebook coverage really indicates that we don't look at businesses in a long-term, sustainable manner. We need to change this if we want to save capitalism.

Religion, Morality and political stances

This morning on KUT (the local NPR station) there was an interview between the KUT host and the author of a book arguing that religion has been playing a larger role in the public forum in the United States and that people are basing their political stances more and more on religion. I am skeptical of this for several reasons. First, the morality these stances are based on is sometimes dubious at best, even within the religious context. Second, some of these moral stances aren't actually based on the teachings of the specific religion, but are much more cultural in origin than religious.

Let's look at the first issue. There are many issues we can examine to test the validity of a moral stance. How about the death penalty? Many Christians (not all) strongly support the death penalty. This stance clearly violates one of the Ten Commandments (thou shalt not kill). Supporting this type of policy is not congruent with that belief. In addition, it conflicts with the belief that all life is sacred, which is the argument against abortion. I personally don't agree with either stance: I'm against the death penalty and pro-choice (by which I mean I support a woman's right to choose whether she wants to be pregnant or not).

I arrived at these moral stances outside of the Christian framework. I find that life is sacred since we only have one. Ending a person's life for whatever reason is a horrible thing. It destroys everything that they are and could be; it destroys their potential. Now, some people may think this is OK in the case of people who are beyond help, but who defines "help"? Or perhaps it's OK to kill people who are committing horrible crimes against other people and can never be reformed. Well, first, there's a lot we need to look at as to why they were doing what they were doing. We should investigate what changes we can make and what sort of environment we want them living in after we've given up on them.

In terms of abortion, it's a trickier matter than the death penalty. However, women should have control over their own bodies and over when, and if, they ever want to have children. Sure, killing a fetus is killing a possibility, but every time a person has sex there are thousands of possibilities destroyed by a condom or other birth control. It's just a matter of timing and why you chose to stop the pregnancy. In some cases the baby can destroy the potential of the mother or could end up being a huge drain on society. These can cause larger issues than if the fetus had been aborted when the woman wanted it to be.

Issues of morality may not be easy, but there are also moral stances that happen to conform to a specific outlook on life. In the case of gay marriage, this is more a cultural issue than a religious one. The very book that proponents of the ban quote as the reason for denying this right is ignored on a routine basis (eating shellfish is a killable offense). Marriage has long been sanctioned by the state and has a level of cultural normalcy that has moved it beyond the realm of religion alone. In some states it's possible to be married through time spent living together and having it approved by a Justice of the Peace. Marriage cements a relationship in your own mind, in the mind of your community, and with the state. A civil union doesn't carry the same feeling of importance; it smacks of differences in rights and demotes a person to second-class citizenship.

There are definitely some policy stances that could easily be seen as rooted in religious beliefs, such as supporting welfare, turning the other cheek, being a pacifist, and giving your money to the poor and needy. However, there are many people who are against abortion and against welfare. Such wildly different stances for a Christian smack of a cultural belief structure driving many of these policy positions rather than the religious beliefs themselves. This doesn't mean you aren't a Christian, or that you have to be against abortion and for welfare, but it does mean you should be honest about the source of your morality in regard to your policy stances. You need to look inward and really investigate why you stand for something and why you're against something else. Look closely enough and you may find that it's due to your social and cultural influences rather than your religious beliefs.

Is Scientism the problem?

I just finished reading an article in The New Republic which argues that history and the humanities are knowledge too. At times it felt like the author was yelling at his brother, begging to be noticed. Personally, I feel that in general the author is correct: history and the humanities do play an important role and can be considered knowledge. However, the author makes one glaring mistake. He equates the unified "theory of everything" in physics with a theory of literally everything, when it typically means a combination of all the physical laws within physics, both particle and cosmic, which would then extend into chemistry and likely into biology. This type of theory of everything would stop there. It couldn't really incorporate natural selection, as the functioning of chemicals in a specific manner does not necessarily mean a truer understanding of evolution. It would be able to explain how phenotypes change with genotypes, but not why one genotype/phenotype pair was selected over another, without an understanding of the specific environments at the time. A true theory of everything at that level would essentially be a simulation of the universe. It would be impossible to model in a series of equations beyond the fundamental laws of physics.

For the evolution of biological systems you have to understand the natural history of the world in which the organisms develop and evolve. This is why, when you read Sagan, Dawkins, or any other biologist or cosmologist, they argue that if you rewound the tape of history you'd get a different present day. Some things may have happened just slightly differently, and you'd have no humans. Understanding the history of our world allows us to understand where its future is going.

In the same way, history does matter. There are branches of economics, such as evolutionary economics, that use complexity models and work to ensure that the history of events is included in their models. The major difference between typical theories of history and psychology and the newer models of economics and complex physical systems is that we're able to test the latter using simulations. It is likely that in the future we'll be able to do the same thing with history. This will give us a deeper understanding of why our societies have developed as they have. One heavily contested aspect of evolution, mentioned in the article, is cultural inheritance, which is where the theory of memes came from. This approach doesn't suggest that one type of people is better than another, or that one lifestyle is better than another; it simply says that, in the environment the culture resides in, it is more capable of surviving than others. This goes deeper, too, down to smaller niches within the culture and how well they adapt to their environment.

Another aspect the author discusses is the difference in the acceptability (or perhaps the perception) of radical paradigm shifts in science compared to the humanities and history. He mentions Freud in psychology and Galileo in physics specifically. He argues that Galileo was able to make changes in physics because he tackled an "easy" problem with a minimal level of complexity: he went after the theory of gravity and how objects fall at the same rate, while Freud went after the entirety of the human psyche. I agree there is a difference in complexity. However, the key difference between Galileo and Freud is that Galileo was better able to explain the state of the world, and when new scientific theories were produced they continued to explain what Galileo found, with more accuracy, and expanded on it. When Freud was discredited, it was more like discrediting alchemy than going from Newtonian physics to relativistic physics.

The key difference between many theories in the humanities and those in the rest of science is the lack of a continuum between two major theories. Yes, relativistic physics completely obliterated the Newtonian picture and created a new world (universe) view, but it solved the same problems, or proved that many of the old problems were only problems because the theory wasn't complete enough.

The key thing to remember, in either science or the humanities, is that all models are wrong, but some are useful. Freud was wrong in how he looked at the human psyche, but his models allowed other theories to be tested and used, and likely helped spawn neuroscience and the bridging between neuroscience and many psychological problems.

Continual improvement, Innovation and Modularity

I've been reading Internet Architecture and Innovation, which has gotten me thinking a great deal about systems architecture and innovation (shocking, I know), but it has also gotten me thinking about continual improvement. The perspective van Schewick takes on innovation in a system is actually based on stock options. If you aren't aware, there are two types of options, each used in a different circumstance: an option to sell at a certain price or an option to buy at a certain price. This idea has been used in innovation theory for a while; it's called real options, i.e., taking financial options and applying them to a similar situation in real life. The difference is that it's a go/no-go choice instead of buy/sell. In terms of innovation, it is the choice between pursuing a new innovation in a system or not. For example, let's say you have a watch and you are trying to improve its timekeeping. Using the real options approach, you could figure out how much return you'd need on your investment in the innovation, per watch, and from that figure out how many different types of crystals you could afford to test to improve the timing mechanism. Another example could be a car, where you're trying to reduce the drag on the car, which could dramatically change the full shape of the car, whereas with a watch you may only be changing the crystal.
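As a rough illustration of the go/no-go logic, here is a minimal sketch. All of the candidate names, costs, and probabilities are hypothetical numbers invented for the example; the point is only that each cheap test buys an option you exercise when its expected value is positive:

```python
# Hypothetical go/no-go "real options" sketch for the watch example.
# Each candidate crystal is a cheap experiment; we only "exercise the
# option" (take it toward production) when the expected payoff of doing
# so exceeds the cost of running the test.
candidates = {
    "quartz A": {"test_cost": 500, "payoff": 40_000, "success_prob": 0.15},
    "quartz B": {"test_cost": 500, "payoff": 40_000, "success_prob": 0.25},
    "sapphire": {"test_cost": 2_000, "payoff": 55_000, "success_prob": 0.02},
}
production_cost = 8_000  # cost to put a successful candidate into production

results = {}
for name, c in candidates.items():
    # Expected value of buying the option: chance of success times the
    # net payoff, minus the price of the experiment itself.
    ev = c["success_prob"] * (c["payoff"] - production_cost) - c["test_cost"]
    results[name] = ev
    print(f"{name}: EV = {ev:,.0f} -> {'go' if ev > 0 else 'no go'}")
```

With cheap tests, even low-probability candidates can be worth trying; the expensive long shot (sapphire here) is the one that gets a no-go.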

Essentially, this means you have two different ways of innovating within a system: change the full system (the car) or change a single module of the system (the watch). Reducing the drag on a car could require a full system overhaul, because you'll be changing the size of the front end, which could impact the maximum size (or shape) of the engine, or the maximum headroom of the vehicle. So you could have a radically different looking vehicle from model to model. In fact, we can see this if we look at the evolution of the car (below). This kind of change is extremely expensive and requires a huge amount of work. It's not likely that a company would pursue multiple designs beyond the drawing board or initial mockups. It would simply be too expensive to build multiple fully functional prototypes.
[Image: Evolution of the Lamborghini]
With watches, you could have the exact same watch with several different materials to ensure it keeps proper time. There have been several radical innovations in watches, including the wristband and digital displays. However, if the watch is not digital, changes to some parts of the watch are extremely easy to test and compare on the market. For instance, many pocket watches use rubies to keep the metal pieces in a watch from rubbing against each other. In this case it's possible to test many different gems to protect the components; it's also extremely cheap, and if something fails completely it would never move into production. You could test hundreds of types of gems (sizes, or whatever) at a significantly lower cost than testing many different full-system designs.

So what's the difference between the two? In one case we're changing a full system; in the other, a module within the full system. Of course, changing the gear structure of a watch would require a full redesign, but there are many parts that can be changed independently. In many respects this can happen with a car as well, but there are limitations.

This modularity allows designers to innovate on separate aspects of the product without decreasing the quality of the overall system. The same idea can be applied in other business settings in terms of rapid and continual improvement processes. Many business processes are systems that integrate many different groups and aspects. Splitting a system into modular components allows continual improvement on many different aspects of the system at the same time. This modularity decreases the cost of improving individual aspects of the system and allows for more improvement projects throughout the system.

Why would the costs be lower? Well, as I mentioned with the watch, it's cheaper to test different components for the gems, timekeeping crystal, and face glass than to test a change in drag for a car. The change in drag could require changes to seat heights, a new design for the windshield, and possibly an entirely new chassis. If the reduced-drag design works, you may have to redesign all of these other components. With the watch, finding out that the new glass face doesn't work wouldn't impact which crystal works best. This reduces the cost of testing the improved system.
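To put rough numbers on it (these figures are invented purely for illustration), compare the cost of screening many module variants against building the same number of full-system prototypes:

```python
# Hypothetical costs: swapping a gem into an existing watch movement
# versus building a fully functional car prototype.
module_test_cost = 200        # try one gem variant in an existing movement
system_test_cost = 250_000    # build one functional full-car prototype
variants = 100                # number of design variants to evaluate

modular_total = variants * module_test_cost   # total cost of modular tests
system_total = variants * system_test_cost    # total cost of system tests
ratio = system_total // modular_total
print(modular_total, system_total, ratio)
```

Under these made-up numbers the full-system route costs over a thousand times more, which is why modular systems can afford to explore far more of the design space.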

A bit remiss

Sorry, dear readers, I've been very bad about writing any blogs lately. I've had some pretty big changes in the past two months, as you all know. I've moved back from the Netherlands to the US, did some consulting work, and just started a job at AMD. Consequently, I've not been able to post as much as I have in the past. Big changes have been happening in my life.

Because of these changes I wasn't able to pay enough attention to the CISPA fiasco that just occurred in the US. This law is a terrible step in the direction of data tyranny, and I'm not even being hyperbolic about this. I wrote about the risks of having a voluntary data sharing program, and in my review of Consent of the Networked I discussed the different data and government regimes out in the "wild." These concerns are valid. We need to be aware of what's going on. Now, I have to say we pretty much blew our collective internet protest load with the SOPA/PIPA protests, which is actually a problem. I would hazard that in many ways CISPA is as bad as or worse than SOPA; however, I didn't see as much chatter about CISPA on reddit, Twitter, Google+, or Facebook as I did about SOPA.

I think there are a few reasons for this. First, the majority of people were able to clearly understand the risks associated with SOPA. Those risks are pretty straightforward and understandable, and they affect us tomorrow, not in some future time period. In many ways, SOPA-like actions can already happen today. This makes it extremely obvious why SOPA/PIPA were terrible laws and should have been opposed at many levels. Second, with CISPA coming so quickly after the SOPA/PIPA protests, there was likely something of a protest overload, or disbelief that another law as bad as or worse than SOPA could come through so quickly, especially given the language used at the time of SOPA. It would have broken the Internet; how could anything be worse than that? Third, there was more support from large companies for this law than for SOPA. Apparently that actually matters more than we realized. We were able to push Wikipedia, Facebook, and other large companies to protest SOPA. In this case, however, Facebook and Microsoft supported the law, while Google sat on the sidelines saying nothing.

From this standpoint, people who weren't happy with CISPA but didn't understand its importance likely didn't do anything about it. However, whenever a website as prominent as Wikipedia blacks out in protest of a law, it gets people who are on the fence to actually do something about it.

CISPA and SOPA are both bad, but in very different ways. CISPA is something of an abstraction of risk. Losing your privacy, when so many people already voluntarily give up so much information about themselves on Facebook and Twitter, might not seem like as big a deal. The second abstraction is a lack of understanding of the impact of the data sharing. It's unclear what exactly the Feds would do with the data once they have it, and it's unclear how the data would be shared within the government. However, it is likely that the data would be shared throughout the government, including the military, which many privacy experts say essentially legalizes military spying on US civilians. The third problem is that many people feel that if you aren't doing anything wrong, you don't have anything to worry about. This is a fallacy, as even people who are doing things that aren't wrong can get in trouble; I've discussed cases where people were fired for posting drunken pictures on Facebook. Additionally, this type of law represents the biggest of big government that we can imagine. There's no reason the government needs to know what we're doing in this level of detail.

It’s going to be a long and difficult fight to keep our internet free. However, it’s something that we must do and I believe we can do it. We will just need to keep vigilant and work together to ensure that our internet stays our internet.

The possible effects of a faster Fourier Transform

This weekend my friend Ryan sent me a link to a note in Technology Review about a group of researchers at MIT who showed off a new algorithm to speed up the calculation of the Fast Fourier Transform (FFT). The FFT is a method to decompose a signal into a sum of sines and cosines of different frequencies, allowing one to study the signal itself or to compress the information it carries. A famous example of its application is music, because the MP3 is the end result of this kind of frequency-domain processing. I have to admit that I was really surprised by the MIT development, and I looked for other notes or papers about it. I found this IEEE Spectrum note where they talk a little bit more about the results the group has achieved. The basic idea of the new algorithm is to exploit the fact that some signals are "sparse", i.e., they contain just a relatively small number of frequency components that are significant, and therefore the number of operations needed to calculate the FFT can be reduced considerably for this kind of signal.
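Frequency sparsity is easy to see numerically. Here is a minimal sketch using NumPy's standard (dense) FFT; the MIT algorithm itself isn't published yet, so this only illustrates the property it exploits:

```python
import numpy as np

# A signal that is "sparse" in frequency: of the 513 frequency bins an
# rfft of 1024 samples produces, only 3 carry significant energy.
n = 1024
t = np.arange(n) / n
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.5 * np.sin(2 * np.pi * 120 * t)
          + 0.25 * np.sin(2 * np.pi * 300 * t))

spectrum = np.abs(np.fft.rfft(signal))
# Count the bins holding more than 1% of the peak magnitude.
significant = int(np.sum(spectrum > 0.01 * spectrum.max()))
print(significant)  # → 3
```

A sparse FFT algorithm gets its speedup by locating those few significant bins directly rather than computing all 513 of them.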

Certainly there are a lot of applications where the traditional FFT will still be needed, but we can think of a lot of examples where the sparse FFT will be of use, since many phenomena are sparse in frequency: the sound of your guitar (every string is tuned to a specific frequency), wave propagation (a wave normally has just a small number of dominant frequencies), and we can even study images as two-dimensional signals with sparse frequency content. Even though we will have to wait until the paper describing the algorithm is published, we can already imagine several fields that could benefit from this new technique, in terms of both speed and the amount of energy required in certain applications.

Imagine the improvement of having a faster algorithm for sparse signals in Particle Image Velocimetry (PIV), a technique researchers use to see and study the streamlines of a fluid using really small particles inside it, called tracers; by comparing the average movement of those particles across consecutive images, it is possible to estimate the instantaneous velocities in the fluid. Not one but several FFTs must be calculated to estimate the velocity of the particles for a single image pair, so the speed of PIV depends almost directly on the time the FFT takes to calculate. Since PIV images are normally captured in such a way that just the particles are visible while the fluid appears as part of the background, those images are likely to present just a small number of dominant frequencies, and the use of the sparse FFT would result in faster PIV sensors. Those faster PIV sensors may also enable other improvements, like PIV control for high-speed fluids, allowing researchers to come up with new experiments that right now are not feasible.
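To see where the FFTs come in, here is a minimal sketch of the FFT-based cross-correlation at the heart of PIV interrogation. This is a synthetic example: we shift a random "particle image" by a known offset and recover that offset; real PIV adds windowing and sub-pixel peak fitting on top of this:

```python
import numpy as np

# Synthetic PIV interrogation: shift an image patch by a known offset
# and recover that offset from the cross-correlation peak.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))  # particles moved by (3, 5)

# Cross-correlation via the convolution theorem: two forward FFTs plus
# one inverse FFT, instead of an O(N^2) spatial correlation.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dy, dx)  # → 3 5
```

Every interrogation window in every image pair needs this trio of FFTs, which is why a faster FFT on (frequency-)sparse images would translate so directly into faster PIV.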

It is easy to understand that fewer calculations for the FFT will mean less computational power needed, and therefore the devices used to process large data series, like meteorological or oceanographic data, will be able to crunch even more data than they do now using the same computational power, which may result in faster models, for example. Weather prediction algorithms, for instance, may become more sophisticated, since the data processing needed to feed the forecast would require fewer computations with the sparse FFT.

We don't know yet if the sparse FFT will also result in an increase in speed for the inverse FFT, or whether the quality of the reconstruction of the original signal will be undiminished, but if we assume so, we could also expect positive effects in video and audio streaming systems, since data compression may get better without losing quality or definition, allowing end users to receive smaller data packages that reconstruct a high-quality song or video. Even with music players, if the sparse FFT can be used to compress and reconstruct audio data without losing quality, we may in the future find music players that are even smaller and have longer battery life without sacrificing the quality of our musical experience, since an increase in the speed of the FFT and its inverse means fewer computations, and therefore less hardware and less energy consumption.

We don't know for sure yet what exact effect this new algorithm will have on technology, but since the FFT is used in almost any device that processes or transmits data, we can predict with some confidence that the sparse FFT will boost some research fields and motivate improvements in some of the devices and technologies we use day to day. This may be a small snowball that ends up as a huge avalanche in the upcoming months or years.