Technological Convergences

Convergences happen in all sorts of ways. They happen in books and book series, where a good author can plan to have plotlines converge at a specific time and place. In the series I just finished, the Malazan Book of the Fallen, the author manages to bring two totally unrelated characters together in really unexpected ways. It happens in films too; Crash and 21 Grams are two great examples. It happens in technology as well. Most of the time, we as consumers never even see it happening. When we look back, though, we realize it was incredibly obvious that it would happen. Two great examples of this happened with cell phones.

MP3 players have been wildly popular since they came out in the late '90s. Napster and easy-to-rip CDs made them incredibly useful and provided hours of great listening. Around the same time, cell phones were becoming smaller and more popular. Not unexpectedly, phone manufacturers decided it would be useful to put a music player onto the phone. These were clunky and really only used when people didn't have a better MP3 player handy. Apple had created a great MP3 player and realized, like the phone manufacturers, that users only wanted to carry one of these devices. That was one of the reasons that drove them to make the iPhone: a great interface and a good music experience. At that point they already had the music infrastructure and the loyal fan base to be sure of a high number of sales.

Around the same time as the MP3 boom, businessmen were starting to use Personal Digital Assistants (PDAs). The PDA was a replacement for the calendar and phone book. It also provided a few applications that allowed some work on documents, and it could queue up emails to send when the PDA was synced with the computer. It was obvious that this would be a great device to connect to some sort of network beyond plugging it in. Blackberry used to make two-way pagers and figured out a way to send emails and other useful data over the pager network, producing one of the earliest smartphones. Eventually Microsoft and Palm got into the phone manufacturing game for the same reason: people didn't want to carry two devices, a PDA and a phone. If you put them both together you'd have a better product and would sell more.

These two technologies converged on a similar product: smartphones. The two types of phones initially had very different sets of users. Since the iPhone, however, there has been a further convergence of these phones into general-purpose phones. Blackberry, while still catering to the business side, is shifting to compete directly with the iPhone because business users want the apps that the iPhone has. Palm has vanished from the market, unable to compete, and Android has appeared as the first phone OS built from a PC operating system. Android is a distribution of Linux; it doesn't run well on PCs, but Microsoft and Apple are moving in the direction of merging their mobile and PC OSes (sure, it's a Mac, but it uses Intel, so there's no difference besides the OS).

If we look back at these convergences, aside from the new competitors and firm failures, they appear pretty obvious. Why wouldn't these companies move into these market spaces? I'll discuss some of that in my next blog post.

Amazon’s Silk

Interesting read on Tech Dirt about Amazon.com's Silk browser. They note that it's a copyright infringement suit waiting to happen. If you're too lazy to read the article, basically Silk copies whatever website you go to onto its servers so it can send you a compressed version of it. For instance, if a website you're on has a 3 MB picture, Silk will send you a 50 kB picture instead. This does a few things. First, it will help relieve congestion on cell networks because smaller pieces of information are being sent. Second, it will save you data if you don't have an unlimited data package. Finally, it could violate copyright. Why? Because it's copying everything from a website and then sending you the information from a different source. Not only that, but it is effectively altering the pictures it sends you. I'm not sure if there have been any copyright cases based on degrading the quality of a picture, but for all intents and purposes it's altering the picture. It probably should fall under fair use, but you never know; someone will probably try to sue over it.
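
Just to make the mechanics concrete, here's a rough Python sketch of the kind of server-side image recompression a proxy like Silk might do. This is purely my illustration, not Amazon's actual code; it assumes the Pillow imaging library, and parameters like max_width and jpeg_quality are made-up values.

```python
# Rough sketch of proxy-side image recompression (illustrative only).
# Requires the Pillow library: pip install Pillow
from io import BytesIO
from PIL import Image

def recompress_image(original_bytes, max_width=600, jpeg_quality=40):
    """Shrink and recompress an image so less data goes over the cell network."""
    img = Image.open(BytesIO(original_bytes))
    if img.width > max_width:
        ratio = max_width / img.width
        img = img.resize((max_width, int(img.height * ratio)))
    out = BytesIO()
    img.convert("RGB").save(out, format="JPEG", quality=jpeg_quality)
    # The result is often a small fraction of the original file size,
    # which is also why it arguably counts as "altering" the picture.
    return out.getvalue()
```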

There are some other issues to consider too. The browser has predictive capabilities based on aggregate user actions. This is actually fairly similar to what Facebook is doing, but so far there are no implications for ads with Amazon (at this point we don't know if they store individual user statistics). The example they give on the website is that if you go to NYTimes.com and a high percentage of users then click on the business section, Amazon will pre-load that section onto its servers. This could have an impact on big websites' server loads as well: if Amazon predicts incorrectly, the origin site gets hit at least twice for a single visit, once for the prefetch and again when the user actually clicks somewhere else.
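
Here's a toy sketch of how aggregate-based prediction could work: count where users tend to go next from a given page and prefetch only the most common destinations. Again, this is my guess at the general idea, not Amazon's algorithm; the function names and the thresholds are invented for illustration.

```python
# Toy aggregate-based prefetch prediction (not Amazon's actual method).
from collections import Counter, defaultdict

next_page_counts = defaultdict(Counter)  # current page -> Counter of next pages

def record_transition(current_url, next_url):
    """Record one observed navigation from current_url to next_url."""
    next_page_counts[current_url][next_url] += 1

def pages_to_prefetch(current_url, top_n=2, min_share=0.3):
    """Return the most common next pages, if enough users actually go there."""
    counts = next_page_counts[current_url]
    total = sum(counts.values())
    if total == 0:
        return []
    return [url for url, c in counts.most_common(top_n) if c / total >= min_share]

# Example: most users go from the NYTimes front page to the business section.
for _ in range(80):
    record_transition("nytimes.com", "nytimes.com/business")
for _ in range(20):
    record_transition("nytimes.com", "nytimes.com/sports")
print(pages_to_prefetch("nytimes.com"))  # ['nytimes.com/business']
```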

Another interesting consideration is ad revenue. Let's say users of some website, I don't know, KBMOD.com, always visit a particular YouTube account after reading the front page, let's go with InfiniteSadd, which would then auto-play the video on top. This would of course have the ad pop up on the bottom. The question I have is whether, in these situations, this would count as a view or click, or whether the ad networks would start to filter out views and click-throughs coming from Silk. The situation I presented is unlikely, as there's no direct link from KBMOD to InfiniteSadd's user profile, but it's easy to imagine that it could work that way.

I'd really like to know more about the user statistics that Silk will be collecting. Since the browser is going to be on their Fire device (who knows, it could also come as an update for older Kindles), Amazon will know who you are and what you are browsing, and may actually keep that information in your account to predict your behavior better. I don't see any reason why they couldn't collect that data. I would imagine it's technologically feasible to use the larger aggregate dataset for websites you don't frequent, while for your most commonly visited websites Amazon would have enough of your own usage history to figure out where you're going next.

I think the browser is a great idea. However, I can also see this turning into another way for Amazon to better target your recommendations. If you are on your Fire and they see where you go, then they will also know what other products you might be interested in that you haven't bought through Amazon before. If they know what interests you, they can put those items into your "Silk-based recommendations." There hasn't been any talk of that yet, but since they are selling the device at a loss they need you to buy a decent amount of product to get a return on their investment. I've seen two estimates of the loss per unit: $50 and $10.

Keep your eyes open for news on this; it could become a copyright and privacy issue before long.

Future of Employment II

Yesterday I talked a little bit about the future of employment. Apparently this isn't the most interesting topic, but it's important. The Slate series ends with some startling research showing that even scientists could eventually be replaced. I think we are a long way from that happening. In my opinion, the first thing machines will do in R&D is replace humans in the creation of incremental innovations. In fact, to some extent computers already do. Computers do a great deal of CPU, DRAM, and flash design. Typically, these are incremental innovations: they build on a current technology and make improvements. Humans are still required for the radical innovations, such as a new chipset, a new calculation methodology, or what have you.

Even some advanced R&D work could be improved by computers. Researchers have to read a great many papers to keep up with the state of the art. As the Slate series points out, this is a form of data mining, and lawyers are already using automated programs to find specific words in documents. There's actually a branch of Science and Technology Studies that focuses on word analysis. Researchers use similar programs, dump a set of papers into them, and figure out what verbal connections exist between the papers. This is a way of creating maps of knowledge: through citations and similar word usage you can see whether a specific theory is prevalent or not. How would this apply to R&D? You could put in the materials you're using, the problems you're seeing, and a bunch of papers that might be related, and see what comes out. It could suggest new materials, new designs, things of this nature. For this to work as described, though, we're still a ways off.
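
As a small illustration of the word-analysis idea, here's a sketch that scores how strongly a handful of papers are connected by shared vocabulary, which is the raw material for a knowledge map. It assumes the scikit-learn library, and the example abstracts are made up.

```python
# Minimal sketch of vocabulary-based connections between papers.
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    "Thin film solar cells using copper indium gallium selenide absorbers",
    "Defect passivation in CIGS thin film photovoltaic absorbers",
    "High frequency trading and liquidity in equity markets",
]

# Weight each word by how distinctive it is, then compare the papers.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(papers)
similarity = cosine_similarity(tfidf)

# Papers 0 and 1 share far more vocabulary than either does with paper 2,
# which is the kind of link a knowledge map would draw.
print(similarity.round(2))
```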

What does this mean in the long run? That no position is safe. I don't think this will happen in our lifetime, though. People are much too conservative to leave everything to computers; it simply won't be accepted. Even in our generation there's too much distrust. It's going to take one or two more generations for there to be enough trust in computing and technology for more control to shift to machines. Sure, some companies will be on the cutting edge in accepting these changes; others will be laggards.

If computers can do everything, why do we need any jobs at all? Isn't the guy from CNN right? I disagree. People will always want to work. People need to work. I'm not saying this because I'm hoping there won't be a robotic takeover or anything, but because people will not allow it to happen. In general, people like to feel in control. Even if you aren't the bus driver, knowing that a person you can relate to is driving makes you feel more in control. Leaving everything to computers requires a level of surrender, and many people will simply refuse to give up that much control. We won't have fancy self-driving cars for this very reason. People love to feel in control of where they go. It doesn't matter if they would be safer, save money, and get places faster; they would rail against the change because they lose control.

Would we leave the future of our economy in the hands of machines? You could argue that some companies already have. Take, for instance, the May flash crash on Wall Street. It has been attributed to high-frequency trading algorithms following their programmed logic; it wiped out about $1 trillion in wealth, most of which was later restored.

In much of my research on academic spin-offs and technology incubators there is an important component related to tacit knowledge: the know-how of the inventor of a technology. This is something we'd lose if all of our work were robotized; in that respect it's no different from outsourcing. In development economics and innovation theory, the ability to create copycat technologies is a precursor to a country developing its own technologies in that field. I think this is something we must keep in mind when discussing the reality of full automation. Without tacit knowledge and hands-on experience with both the devices and the machines that build them, it's very difficult to develop improvements to either.

I think we'll have many legacy jobs hanging around for a long time, simply because we need them to continue growing economically. Otherwise, we'll stagnate and keep producing the same technologies.

Technocrats and Technology II

In my previous post I outlined some of the problems facing the energy sector in determining the best course of action in the wake of the Fukushima reactor disaster. One of the solutions was to create a group of experts to determine the best mixture of technologies and sources of energy. However, there are clear flaws with this approach. First, there's the problem of trust in these experts. Second, there's obviously a lack of input from the general public. Third, there are problems with selecting the technologies themselves.

As I mentioned yesterday, experts can claim many different things, and using the right language can make something that's incredible sound credible. When these experts put out information or opinions, how can we trust it? Can we be sure they aren't on the payroll of big oil or big coal? If these experts are university professors, how can we be sure they aren't part of some global warming conspiracy? I think it's obvious there will be influences from oil and coal. These are to be expected, and the goal should actually be to welcome them into the discussion. We should attempt to include them; however, we need to give their opinions, obvious bias and all, the same weight as any other expert's on the panel. The difference is that we want it to be known that they are rooting for oil and coal. Why? Because we can more critically analyze their economic data when we know for sure where it comes from. The same goes for a scientist who is heavily pushing solar or wind energy. We should know that they support it so we can have an honest discussion.

Public participation is a huge problem as well. Without proper support from local groups, agencies, and governments, a promising energy program can be killed. "Not In My Back Yard" (NIMBY) is always a hugely successful counterattack against green energy programs. People don't want giant windmills overlooking the beautiful landscape or oceanscape they cherish. Understanding these concerns and getting input from the public into the process can lead to greater social acceptance of a plan. Making it clear where the information is coming from will also improve the tone of the conversation. Without that clarity about information sources, public opinion can quickly turn against a project.

Finally, what technologies should we use? Public opinion and vested interests in legacy technologies are very difficult to overcome, especially when a technology like solar is more expensive than coal power and has a less consistent energy profile. Among the solar technologies, how do we select which is best? How do we pick the right nuclear power plant designs? There are many different technologies out there competing, and it is not clear which of them a government plan should invest in. We are likely to pick a loser. However, we still need to choose something. I have previously mentioned some ways to select technologies; I'll discuss more of that in my next post.

Technocrats and Technology

On my way back from Oktoberfest, which was awesome, my fellow car passengers and I discussed Germany's decision to phase out nuclear energy over time. We all felt that this was an incredibly stupid long-term decision and agreed that it was a knee-jerk reaction to the nuclear disaster at Fukushima. However, it raised some other questions about how to enact energy policy choices, as well as other technology and science policies. We mostly focused on energy, as that was the topic of interest, but it really does spill over into most science and technology policy at the national level.

The obvious solution to most engineers is to set up a panel of experts and have them come up with the best choices for energy sources. Sadly, there are some flaws in this line of thinking. First, who selects these experts? Let's use the US as a model country in this regard. There will be a huge battle over which experts should be included on the panel. If it has to be split 50/50 between experts selected by the Republicans and the Democrats, we'll most likely get a group of lobbyists for the oil and gas industries from the Republicans and a mixture of wind and solar experts from the Democrats. Nuclear energy may be left off the radar completely, even though there are plenty of technologies out there that are hugely safer than the Fukushima reactors.

Additionally, nuclear energy has a stigma associated with it due to Three Mile Island, Chernobyl, and now Fukushima. It doesn't matter that coal is just as destructive, or that oil and natural gas extraction cause almost immediate negative impacts on the local environment. Why? Because these are huge job-creating industries and have been legitimized over the course of the past 100+ years in many regions. For example, in Pennsylvania, where Three Mile Island is located, coal is a way of life for many people; it has been an occupation many have worked at all their lives. There are still nuclear facilities in the state, but they are viewed by local residents with much more skepticism, distrust, and fear.

Many engineers are technocrats of a sort: they believe that technology can solve a huge number of issues and that technology experts should be making many of the policy decisions related to technology. These technocrats are viewed with skepticism by the broader public. In many cases there are huge debates over the sources of the data and the reports these technology experts rely on. In the case of GMOs, even when the public is given information from both sides, it is not trusted. Why? Because people have lost faith in their governments and believe there are scientific conspiracies to enact practices that are dangerous.

In my next blog post I'll discuss more issues with these topics, and go into some detail on cases where large differences in views were eventually overcome.