Big Data is Coming to Get You

Big data is the term high tech companies use for the massive amounts of data they collect about their users. For Google, this includes the trips you’ve taken, the places you’ve driven, your email (if you use Gmail), your searches, your Google Now preferences, the articles you’ve posted to Google+, your pictures, and the list goes on. The idea is to use algorithms to mine this data for useful tidbits about user habits so that products and services can be recommended just as you need them. These data can tell companies a great deal about the user, including who their friends are.
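
As a toy illustration of the kind of mining involved, here is a minimal sketch of a co-occurrence recommender in Python. Everything in it is hypothetical: the place names, the histories, and the scoring are my own assumptions rather than anything Google has described, but it shows how "people who visit X also visit Y" falls out of the raw data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical user histories: each list is the places one user has visited.
histories = [
    ["coffee_shop", "bookstore", "gym"],
    ["coffee_shop", "bookstore", "bakery"],
    ["gym", "bakery", "coffee_shop"],
]

# Count how often each pair of places shows up in the same user's history.
co_occurrence = Counter()
for places in histories:
    for a, b in combinations(sorted(set(places)), 2):
        co_occurrence[(a, b)] += 1

def recommend(visited, top_n=2):
    """Suggest places that frequently co-occur with ones the user already visits."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a in visited and b not in visited:
            scores[b] += count
        elif b in visited and a not in visited:
            scores[a] += count
    return [place for place, _ in scores.most_common(top_n)]

print(recommend({"coffee_shop"}))  # places most often visited alongside coffee_shop
```

Real systems are obviously far more sophisticated, but the raw material is exactly the kind of activity data described above.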

However, what isn’t clear is who owns the data. Companies assume they own it, which, because you agreed to their terms of service (even though you didn’t read them), is true. However, the recent re-categorization of fitness apps and trackers as medical devices has thrown a wrench in the works. Data associated with medical devices is typically assumed to be Personal Health Information, which is protected under HIPAA. That means companies can’t really sell it AND that you are able to control what happens with the data. It’s the reason why doctors are required to share information with other healthcare professionals.

I believe that this is just the first step toward making our data more portable. In Europe you can already request a transcript of all the data Facebook collects about you, although that does not mean you have control over what Facebook does with that data. Obama is pushing to increase the privacy of personal information, but that will only work if companies feel they have a stake, or face a penalty, if they do not adequately protect data. Whenever a company like Apple or Google holds an effective monopoly over your data (through lock-in effects), its incentive to fully respect privacy is reduced, because of the cost of switching to another monopoly.

Restrictions Can Drive Innovation

As a Lean process improvement guy, as well as someone who really loves reading about innovation, I’ve always taught my students that regulations, limitations, and restrictions on processes, equipment, and activities offer us an opportunity to innovate around those rules. The way I describe it is that rules place you in a box, but within that box you can move up, down, and diagonally, and develop some really interesting ideas because of what you can’t do. However, you don’t focus on what you can’t do so much as on how you can work around it and what you CAN do.

I saw an image of a great discussion about how gluten-free diets are forcing at least one chef to be more innovative in his cooking. I’d post it, but the image is so large it’d take up the entire post, so I linked it above. Essentially the chef had a dish with polenta on it and a base that was all gluten-containing flour, but he figured out a way to make it all polenta, which actually created a unique dining experience that he felt offered a superior taste.

Another area where regulation has been the root of many innovations is the financial sector. They complain the most about regulations because they’re “bad for the economy” or something like that. However, CDOs and everything else that caused the last collapse were developed in response TO regulations. Banks figured out how to work around the regulations and make even more money than before. In fact, many banks started to follow suit because they weren’t able to post profits as high and were getting hit by Wall Street for comparatively underperforming.

This is one of the reasons why I personally don’t see value in fighting regulation, other than to shape it in one direction or the other. The companies that are able to exploit a regulation best are going to end up being first to market or extremely fast followers, meaning they will make a great deal of money and likely dominate the market. If you look at regulation as a “disruptor” and an opportunity to disrupt the regulation itself, you’re going to do really well as a business.

The Innovation Machine – A “How To” Guide for Innovation Management

As many of my blog readers know, I’m an innovation reading junkie. I’ve read many of the books on how to manage creating an innovation from an individual’s perspective, or even, at a high level, how to run an innovation project. However, this is the first book that looks at things in a very systematic manner using a lot of case studies. The Innovation Machine by Rolf-Christian Wentz is a fantastic introduction to a series of case studies of the most innovative companies in the world.

Books like The Innovator’s Dilemma are a lot more prescriptive about what a business should do or how a given business has been disrupted. Typically they focus on the smaller entrants that enter a market and beat the incumbents. The Innovation Machine, on the other hand, looks at the incumbents and analyzes what the organization did culturally to enable innovation. I believe books like The Innovator’s Method and The Lean Startup address a different need: how to take an innovative idea to market. This book touches on those things, but looks at how the whole organization can enable those lean startups within the organization and use its size to maximize the results.

The Innovation Machine also touches on the portfolio management aspect, as well as some of the best ways to fund projects, staff projects (2 people is best, a small room is next, anything else is doomed to fail), and finally how to integrate the project teams back into the larger business as a whole. No other book that I’ve read has really discussed how to do this. All these topics are covered with clear case studies of some of the most innovative companies: Google, Toyota, GE, P&G, SC Johnson, BMW, Microsoft, Whirlpool, and a litany of others. The stories are referenced as he details the concepts that were leveraged by the companies in his case studies.

I believe that this book is a must-read for any CEO or leader that values innovation, especially since he calls out the massive differences between managing incremental innovation and disruptive innovation and gives very clear, practical examples and methods for managing them separately. I believe these are powerful and will help me more easily identify the projects I work on as disruptive or incremental.

AMD, What Are You Doing?

The past few months haven’t been kind to AMD. First, Lisa Su, the company’s first female CEO, ousted Rory Read. Now three leaders have left: General Manager John Byrne, CMO Colette LaForce, and Chief Strategist Rajan Naik. Furthermore, it’s pretty clear that the two remaining long-term leaders, CTO Mark Papermaster and Devinder Kumar, were essentially bribed to stay with restricted stock. This is on top of delays in their desktop, graphics, and mobile chipsets, and layoffs.

I think it’s pretty clear that AMD no longer has a coherent strategy. While I was working there, AMD was starting to put out some cool stuff that could really define the future of computing. Their APUs were best in class and could have been deployed in a lot of really cool applications. However, those never appear to have materialized, and now Intel is starting to attack the SoC market. While Intel’s Iris graphics are way behind AMD in pure power, I think they’re going to play a serious role in the coming years, especially since Intel is leveraging a similar enough design that it is able to use OpenCL, the Open Computing Language that AMD championed.
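
To make that last point concrete, here is a minimal OpenCL sketch using the pyopencl bindings; the kernel and variable names are my own, and the takeaway is simply that the same OpenCL code runs unchanged on AMD, Intel, or any other compliant device.

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()   # picks whatever OpenCL device is available: AMD, Intel, NVIDIA, or CPU
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself is plain OpenCL C and is compiled at runtime for whichever device was chosen.
program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)   # same result regardless of which vendor's hardware ran it
```

That portability is exactly why championing an open standard doesn’t guarantee AMD a lasting advantage once competitors’ hardware can run it too.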

Another area of concern for AMD fans is that John Byrne, shortly before his departure, announced at CES that AMD was steering clear of the IoT phenomenon, which I found surprising considering that their strategy, only a year and a half ago, was to conquer the embedded computing space. Since they restructured again (that’s about 4 times in the past 4 years), they have clearly decided to forego that space. IoT chipsets are likely going to be a disruptive technology for computing. For instance, there’s this computer you can dock and upgrade every year for about $200, while Intel released a full Windows computer on an HDMI stick for $150. In the past I wrote that I thought the dockable phone that turns into a full computer would be the long-term future; these are the incremental steps to get us there.

AMD clearly doesn’t see these spaces as the future. They are looking at where the market is now and not truly planning for what’s next. I was excited when AMD announced the partnership with GizmoSphere, hoping it could compete head to head with the Raspberry Pi, but AMD is clearly failing to embrace that movement, even though those devices would be powering the IoT and the maker movement. On the other hand, Intel is rushing to embrace these groups and sees these people as a way to attack Qualcomm, Samsung, ARM, and Apple’s designs.

Low power is going to be vital for the future, excepting a smaller and smaller niche of applications. In these low-power applications, graphics chips aside, AMD is getting crushed. Even in the graphics space AMD is starting to flounder with poor quality, as @NipnopsTV reported with his roughly year-old 7970 card.

All of this should be a concern for AMD fans. The company is not investing in the disruptive technologies hitting its industry, its market cap is only $2.06B, and its shares are at $2.66. They may be positioning themselves to get bought, could be at risk of a hostile takeover for their IP, or could be pushed into bankruptcy, since their IP might be worth more than the company operating as it is. Look at Nortel as an example, where its IP was sold for $4.5B while everything else was just ditched.

Could we eventually see a Samsung R290 and a Samsung Kaveri processor? Samsung gobbled up a ton of AMD’s engineers in 2013, so it definitely could happen.

Researchers Have “Solved” Poker and What It Could Mean

Today, Chezz pointed me to a really interesting article. Apparently researchers have figured out how to pretty much guarantee a win in “Heads Up Limit Hold ‘Em” poker. This is the poker equivalent of beating chess masters head to head, like Deep Blue did in the ’90s, or what Watson did more recently on Jeopardy! The difference with those earlier feats, though, is that all players in those games have the same basic information. In chess, all the information needed to inform any move, current or future, is available at a glance at the board. Jeopardy! is a little different because it’s knowledge based, but the answer is there for everyone at the same time; coming up with the question is about what you know.

In poker it’s different, because initially you know only 2 cards out of the 52 in the deck, and as the play continues you learn more. So you’re dealing with imperfect information about which action to take, and that matters, because addressing that uncertainty is exactly what you have to do as a player. The researchers developed a learning system that was able to determine the best course of play, and with the experience they gave the program, they effectively created an unbeatable computer.
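
To give a sense of how a system like this can teach itself from self-play, here is a minimal, hypothetical sketch of regret matching, a basic building block behind the counterfactual-regret family of methods used for results like this one, applied to a toy game (rock-paper-scissors) rather than to poker. The game, the function names, and the parameters are my own illustration, not the researchers’ code.

```python
import random

# Toy two-player zero-sum game: rock-paper-scissors.
ACTIONS = ["rock", "paper", "scissors"]
BEATS = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}

def payoff(a, b):
    """+1 if a beats b, -1 if a loses to b, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a, b) in BEATS else -1

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0.0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)
    return [p / total for p in positive]

def train(iterations=100_000):
    regrets = [[0.0] * 3, [0.0] * 3]        # one regret table per player
    strategy_sums = [[0.0] * 3, [0.0] * 3]  # running sum of strategies played
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        actions = [random.choices(range(3), weights=s)[0] for s in strategies]
        for p in range(2):
            opp = ACTIONS[actions[1 - p]]
            mine = ACTIONS[actions[p]]
            for i in range(3):
                # Regret = how much better action i would have done than what we actually played.
                regrets[p][i] += payoff(ACTIONS[i], opp) - payoff(mine, opp)
                strategy_sums[p][i] += strategies[p][i]
    total = sum(strategy_sums[0])
    return [s / total for s in strategy_sums[0]]  # average strategy approximates the equilibrium

if __name__ == "__main__":
    print(train())  # converges toward roughly [0.333, 0.333, 0.333]
```

Over many iterations the average strategy converges toward the equilibrium mix (one third each in rock-paper-scissors); the poker work applies the same kind of regret bookkeeping to every decision point in the hold ’em game tree, which is what makes the computation so enormous.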

However, the solved game is limited to a 1 vs. 1 situation with a limit on how much the players are able to bet at any given point; the bets come in fixed increments tied to the blinds. These constraints, I’m sure, will eventually be generalized to handle any number of players and then any number of betting options, such as no limit.

Once this happens, I think these learning systems could have a dramatic impact on a great many things. First, trading is essentially poker, and the companies likely to leverage this first will be those that deal in high frequency trading. This will make the computers act very differently than they do now, and with these new learning algorithms built into them, it could dramatically reshape our stock markets (more than they have been reshaped to this point). Second, these systems could be used to “help” with negotiations in any number of situations. I’m thinking initially of diplomatic situations where there is a great deal at stake on the table, most of it known, but the information is incomplete. In these cases a computer could augment the capabilities of the diplomat in a way that wouldn’t have been possible in the past, which could either increase or reduce the likelihood of a war depending on what the goals of the computer are. What does “winning” mean in those cases? Setting those clear boundaries will be important, which is why having a person there to augment the machine is crucial: they would provide that feedback over the course of the negotiations.

Finally, and this one is by far the biggest stretch, it might become possible to plan for, or react to, a great deal of the actions of economic entities. This means governments could leverage these applications to help determine where best to invest, as well as what to buy, to help truly manage the economy. The central bank could change dramatically.

None of these situations is going to happen overnight. Most likely we’re 2-3 years from multiplayer no limit hold ’em and 5 years from more monetizable uses of this application. Rest assured these algorithms will be used in a business at some point; Watson and Deep Blue have been repurposed to make IBM money. Expect something similar, and I think these are all very realistic applications these researchers could pursue. What do you think?