Science, evidence, and paradigms

Last night was a big debate between Bill Nye the Science Guy and creationist Ken Ham. It was meant to help inform people about the science supporting evolution and how it refutes the “science” behind creationism. One of the key questions during the debate was what would be required to convince Bill Nye that creationism was true and evolution was false. His answer was, essentially, “evidence.” While this is the ideal answer for a scientist, I find it unlikely. This, of course, isn’t a popular opinion. It’s not that Bill Nye doesn’t believe he would change his mind; it’s that actually doing so, or doing so quickly, is unlikely. People aren’t purely rational. In a purely rational world, yes, that’s exactly what would happen. Even scientists have a serious problem with this. Scientists can suffer from the same sort of denial as global warming denialists; however, the impact is largest inside their own field rather than outside it.

How do we know this is true? According to Thomas Kuhn, whenever theories are incommensurate it’s unlikely that a leading theorist in that field will switch to the new theory or paradigm. What does this mean? Well, if we think about scientific theories in terms of technology it becomes easier to understand. Let’s look at jets and propellers for airplanes. It was clear in the early ’50s that jet engines were the way to go, but not all companies decided to pursue that type of engine. Instead, these companies decided to continually tweak the capabilities of props. A similar reaction happened with sail technology and steam engines: at first, sail was still more effective than steam, and it took years before steam would catch up to, let alone surpass, sail.

This happens with scientific theories as well. Flaws start to appear that the theory cannot easily explain. For example, under the geocentric model, planets would seem to track backwards across the sky over time and then begin to move forward again. Theories developed in which these planets traced small circles (epicycles) over the course of their normal revolution around the Earth. The mathematics for this theory became increasingly complex and seemingly less realistic. The heliocentric approach reduced the complexity, eliminated the epicycles, and allowed for the eventual creation of Newtonian physics. However, when Newtonian physics started to break down and Einstein proposed relativity, it was largely ignored for decades. Essentially, it took until that generation retired for relativity to finally be accepted by the broader scientific community. This happens to scientific theories on a regular basis.

In fact, there are some pretty serious debates going on about the full mechanics of evolution. The original bases of the theory still hold – heredity, competition/selection pressure, and variation – but the nuances are being debated. For instance, Richard Dawkins’s theories have started to fall a bit out of favor, as we’re learning that some things we do in our lives affect how our genes are expressed. Those changes could be inherited, which could change the next generation – an idea that is Lamarckian to the core. However, Dawkins will likely not accept a different theory than the one he’s devoted his life to. So, while to some extent it’s true that scientists will and do change their minds, it’s more likely that science will change while individual scientists take significantly longer, if they ever change at all.

Lenovo bought Motorola – now what?

Well, it’s done: Lenovo has completed its assimilation of IBM’s x86 business and has now moved into the US mobile market by buying Motorola Mobility. This move makes a lot of sense for both Lenovo and Google (as well as IBM). From the Innovator’s Dilemma perspective, we have a company focused on building products at a good price, with all sorts of efficiencies, buying what was a much lower-priority source of revenue for Google and IBM.

What else does this mean? Well, it’s likely that Lenovo will begin to experiment with unique products to take advantage of their full technology stack. They are one of the few companies with successful lines of business in laptops, servers, and now smartphones. They will obviously move quickly to fill any gap they have in the tablet space, where they already have offerings. Now they can take mature technologies from Motorola Mobility and implement them in their broader product portfolio. In the sale, Google kept most of the patents, but it is sharing them with Lenovo. This is a big deal for Lenovo, as they can use those technologies at no cost, which means they can experiment with them in unique products.

For Google, it’s a good move because they no longer need to manage a legacy manufacturing and design firm that they never really integrated. This is especially true since they were already partnering with other companies to develop their Nexus lineup. It simply made no sense to hold on to a company like Motorola when they weren’t planning on truly using it. Plus, Google still makes most of its money from ads and from its ecosystem. They do better if there’s strong competition within their operating system ecosystem. In fact, having Lenovo as a partner will likely help them more than owning Motorola ever did.

Lenovo will be able to put more money into their smartphones and will likely offer unique products that mix their laptop and server capabilities with their phones. What could we see? Well, perhaps a fully docking phone/tablet that truly replaces a laptop and maybe even a desktop. With their server expertise, they may even figure out how to sell a smartphone that connects to a “personal server,” giving you access to more power and storage while on the go.

I’m excited to see stronger competition for Samsung and Apple in Lenovo, and I look forward to interesting products from the company that compete and push the market in a new direction.

Technology-obsessive culture leads to product worship

Apparently, today marks 30 years since the Macintosh computer was introduced. All over the internet there was a big masturbatory fest over this great achievement. Honestly, I don’t really give two shits. Quite frankly, I don’t think it really changed everything – and similarly, I don’t think the iPhone did. In both cases the technology was already in the market; it just required the right type of interface or marketing. It’s well known that there were a lot of similarities between the work being done at Xerox PARC and at Apple. In fact, Steve Jobs went to visit and learned a lot about what the computer gods of Xerox were doing. Did he steal ideas from there? No, but I’m sure his ideas were enhanced and improved because of his visit – much the way his ideas for the iPhone were enhanced and improved by all the competition, including the Palm Pilot, BlackBerry, Windows Mobile, and so on.

Apple was the first to market with really easy-to-use printing interfaces as well as typefaces. However, around the same time Apple came out with their product, Adobe – itself a spin-off from Xerox – was developing a similar product. Similarly, Microsoft Office was developed by an ex-Xerox employee.

Did the Macintosh change things? It’s likely more from a design perspective than anything, as both Windows and Apple’s operating system were similar to the Xerox operating system. So what happened – why did Apple succeed and change things and not Xerox? Because Xerox didn’t know what to do with what they had. Apple, coming from a different perspective, with a different cost structure and a different corporate culture, was able to move into the market with competition only from IBM. IBM was a business-first company and didn’t really understand the market they were helping to develop. This is why IBM wasn’t able to dominate the market the way they did in the minicomputer and mainframe days – in fact, IBM has now completely exited the x86 market. Because of IBM’s business decisions we now have Microsoft and Intel (and others, of course).

We idolize the great personalities and the beginnings of a new technology. But the movement and the technology weren’t created by Apple, even though they get the credit. Apple did do great work, and they helped to shape an early portion of the computer age, but the introduction of a specific product only marks a specific point in the total arc of that technology. Computing went racing on by; new ways to interface with computers have emerged since, and some were even invented before the Macintosh was released.

The Macintosh was certainly a high mark at the time and was a great introduction for many people to the greater opportunity of computing. It allowed more people to access computers. I know I used a Macintosh in elementary school, but at home we never owned a Mac – we only ever owned PCs while I was growing up. The Mac was already on the way out by the early ’90s, which was a fairly fast decline for the time, considering the quick ramp of computing since then.

Should we honor the Mac? No more than we should honor the first Palm, BlackBerry, or Android phone. I fully expect the iPhone will be honored as much or more in three years, when it hits ten.

Ethics and Values; Military and Espionage

We didn’t get to have a national conversation about government espionage until Snowden released all those documents, and now we’re having a pretty vocal one in two of the three branches of our government (well, all three, since Obama seems to contradict himself fairly often). Today on Vice’s Motherboard I read an article claiming the military is going cyberpunk. As the article notes, the military has used flight simulators for years, because crashing in one of those is a lot cheaper than crashing a real plane. Stealth bombers cost close to $2 billion each, so learning how to fly one is best done in a simulator rather than a real plane – plus it reduces the risk of death in the event of a crash.

How will this trend continue? Apparently the military is investing in virtual reality battlegrounds. These will help train soldiers for different combat situations without having to build extremely expensive facilities, use blank rounds, wear out guns, or expend any of the other explosives that would be used in those situations – never mind the logistics of getting the equipment there and all that.

It’s likely that these battlegrounds will incorporate things like the Oculus Rift and omnidirectional treadmills. This will allow soldiers to move, crouch, and actually feel like they are in direct combat. For people at home an omnidirectional treadmill isn’t going to be as useful, but it could work well in this type of situation. If they add the ability to make the environment hot or cold and wet or dry, they could simulate a great deal of the real environment and build soldiers’ skills.

The military is also working on robotics as a way to reduce the number of people we have on a battlefield. This could, of course, extend beyond robots like the Boston Dynamics dog: we could eventually mix the VR environment with a “robot” to create a remote soldier that is bulletproof, never tires (you could simply swap out the operator), and moves around like a person. This opens up an entirely new type of warfare. It takes the idea of drone combat and moves it to the next level – foot-soldier drones that truly make the battlefield imbalanced. Of course, the final step would be fully autonomous robotic soldiers – but I think most people wouldn’t accept those.

In any of these cases we need to have a serious national conversation about the application of these technologies. From an ethical standpoint there are conflicting views. First, it’s ethical to protect our soldiers as much as possible when we’re in a justifiable, defensible conflict. Second, it’s unethical to enter combat as an aggressor when your military cannot be stopped by the defender. Furthermore, if we’re talking about a completely robotic military force, using it is even less defensible, as we don’t have any human control in the case of a software failure – or a hack and remote theft of the system.

As a society we need to have a conversation about whether we should allow our military to do this. As it is, we already routinely have operations that citizens aren’t really aware of in countries like Yemen and god knows where else. These put our men and women at risk, which no one wants, for the arguable benefit of taking out terrorists – and it’s unclear if it’s working or if we’re just making more enemies. If we were able to replace real, live SEALs with robotic bodies controlled remotely by a SEAL team, how many more of these missions could we run? How much of this sort of activity would we consider acceptable?

I believe this goes back to what we value as a society. If we value privacy, safety, freedom, and true constitutional control over the military, then we need to make sure we control this before the military just morphs without any real thought. The NSA morphed into a data sponge pulling in everything that moves on the internet. As a society, based on the outrage, we do value our privacy, and we’re trying to pull back control from the NSA. Some people disagree with that, which is fine – that’s why we need a conversation.

I believe that having robotic avatars will lead to a higher likelihood of abuse – similar to what we’ve seen with the NSA. I think this is what’s happened with the drone program, where Obama has a kill list that the administration is proud of having. Having more humanoid drones that can shoot sniper rifles would reduce the amount of collateral damage, but it would be abused. It’s also very debatable whether the kill list is even constitutional.

I think that innovation for reducing our military expenditure is a good thing. However, I think we need to have a conversation about what the end goal of these programs should be.

Is Net Neutrality regulation commie nonsense?

Network Economy

Regulation’s a bad thing, right? Personally, I think there are instances where regulation is an amazingly good thing that drives innovation. We also need to be cautious about who is saying regulation is good or bad. Back in the ’90s we’d hear that regulating in any way to prevent acid rain would cripple business and kill our economy. This clearly didn’t happen: rain is mostly acid-free now, and we have more productive manufacturing than ever. We also hear that reporting CEO pay relative to median rather than average worker pay is so complicated that a place stacked full of MBAs can’t figure it out. Then there are regulations that pick winners, like Solyndra, and turn out to be disasters. These cause higher taxes and are actual drains on the economy (personally I’m on the fence about experimenting with new technologies and having the government support them, but that’s me).

What about the FCC “regulating” net neutrality? I think it’s important to look at how this all started. First, I’ll start with a bit of history of the telecoms, then move to how the internet was designed, and end with comparisons to other monopolies.

AT&T has been described as a natural monopoly. This was partially helped by the US government, because the government wanted coast-to-coast telephony and selected AT&T as the standard for that activity. This gave AT&T incredible market strength, but the position was also extremely fragile, as the company was continually under threat of being broken up for being a monopoly (which it was). To do everything they could to avoid this, the geniuses at Bell Labs continually designed ways to keep costs down, improve quality, and make everything better. They also had some government deals that helped them a lot (military contracts for telecom work, like the first communications satellite). The value of AT&T’s network grew every time a person joined the network.

The fact that one person joins Network A over Network B can further impact the growth of that network. Let’s say Person A is friends with five people and is already on Network A. If they are really good friends and A is known for making good decisions, it’s likely that those five people will join A on Network A. The value increases by more than simply five, because all five of those people can now talk to each other as well as to every other person they know on Network A. Now, people who are less close to Person A but are good friends with Person A’s friends will also likely join Network A. This sort of cascade effect continues to happen, and the way the value grows is known as Metcalfe’s law.
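Here’s a rough sketch in Python of how that math works (the member counts are purely illustrative): the value of a network scales roughly with the number of possible connections between members, n(n-1)/2, so each new member adds more value than the previous one did.

```python
# Illustrative sketch of Metcalfe's law: a network's value grows with the
# number of possible pairwise connections, n * (n - 1) / 2.

def potential_connections(members: int) -> int:
    """Number of distinct pairs of members who can talk to each other."""
    return members * (members - 1) // 2

for members in (5, 6, 10, 11, 100, 101):
    added = potential_connections(members) - potential_connections(members - 1)
    print(f"{members:>3} members -> {potential_connections(members):>4} possible "
          f"connections (+{added} from the newest member)")
```

Joining a network of 100 people immediately creates 100 new possible connections, while joining a network of 5 creates only 5, which is why the cascade toward the bigger network reinforces itself.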

When AT&T was forced to break up, all of that interoperability remained. Instead of one big monopoly there were regional ones. As we’ve seen over time, those same regional operators have slowly rejoined into two Bells versus the non-Bells. Splitting AT&T was certainly a type of regulation, but it did spur some interesting competition for a time.

How the Internet was designed:

The internet was originally designed to operate in layers, with many different applications on top. Essentially, the bottom of the stack was the Internet Protocol, which was agnostic to the type of information being sent across it. At the time, the most efficient transport was Ethernet, so there was no need to be concerned with the underlying medium. Over time there would be some concern, but that was largely addressed by the protocol.

The applications on either end that needed to exchange information would hand it down to the layer below to send out: a web browser to the OS, to the network driver, to IP, across the internet, and then back up through the network driver and the OS to the web server application. Across this entire process, the actual data being sent was unknown to any of the nodes in between the application layers. (If you’re interested in this, check out Internet Architecture and Innovation.)
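To make the layering concrete, here’s a minimal sketch (the host and port are purely illustrative) of an application handing raw bytes to the transport layer. Everything below the application neither knows nor cares whether those bytes are a web page, a video frame, or something else:

```python
# Minimal sketch: the layers below the application move opaque bytes.
# example.com and port 80 are used purely for illustration.
import socket

HOST = "example.com"
PORT = 80

# Only the applications at each end know this is an HTTP request.
payload = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(payload)              # handed to the OS, then IP, as raw bytes
    response = b""
    while chunk := sock.recv(4096):    # bytes come back the same way
        response += chunk

print(response.split(b"\r\n")[0])      # only the application interprets them
```

Deep packet inspection, which comes up below, is precisely a middle node opening up those otherwise opaque bytes to guess what the applications at the ends are doing.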

Of course, the companies providing the bandwidth did not want to find themselves in a role similar to the one they had after the breakup of AT&T, when they were forced to become “dumb pipes” for whatever people wanted to send across their networks. To prevent this, they created capabilities like deep packet inspection and other tools to identify what content was being shipped across their lines. This was also the beginning of the violation of “true” net neutrality.

Why were they dumb pipes? Because they were defined as common carriers to increase competition among landline providers and ISPs; the telephone companies had no choice. This led to the explosion of ISPs like AOL, CenturyLink, and so on. What has happened since? Broadband lines have been ruled not to be “common carriers,” meaning the data across those lines can be treated however the companies that own them want.

Why is this bad in a network economy?

In a network economy, one player being able to fully control anything and everything can be very bad for the consumer if there is no other option. Now, you could argue that there are options, but in most cases, because of other monopoly rules, there are few openings for a new ISP.

A perfect example where a network monopoly isn’t a big deal is smartphones. The iOS App Store is a natural monopoly within a network. The more people used the iPhone, the more valuable it became, and the more app developers built apps. It never became a problem that Apple regulates the entire experience BECAUSE there were other networks you could shift to, such as BlackBerry, webOS, Windows (whatever mobile version you want to include), and, of course, Android. All of these ecosystems offer very different options for devs. Additionally, within Android there are competing app stores, which further benefits the consumer. If there were no other competitors to iOS and its App Store, the constraints that Apple puts on their product would likely be viewed as very anti-competitive and a type of “foreclosure.”

Market foreclosure is using one monopoly to enable another monopoly. Now, regardless of whether you think it should have happened or not, it did. Microsoft was hit for using its Windows OS to foreclose on the internet browser market and was looking to do the same with its music player. The result was that Microsoft was required to offer other browsers when a new version of Windows launched, which helped reduce IE’s market share.

How does this apply here? Comcast is already trying to do the same with Netflix in the streaming video business. Comcast owns the content (Universal, NBC, etc.), the connection (Comcast Cable ISP), the rules (data caps), and the decision of whether or not to charge for access to their network. Eliminating the rules of net neutrality tilts the table in Comcast’s direction to a degree from which Netflix may never recover. If Netflix – at one point the single largest source of internet traffic – had to pay for every bit it streamed in order to provide an enjoyable streaming experience, it would be bankrupt in very short order.

I get that the Comcasts of the world don’t want to be dumb pipes; they own the content, and content is king. However, not every ISP owns content (Verizon/AT&T), so they aren’t at such an advantage over companies like Netflix. That’s where something like AT&T’s sponsored data plan comes in, which would essentially level the table with Comcast. We, as end users, wouldn’t see any benefit out of this. It’s not as if our subscription fees would drop or we’d magically get faster internet. This is simply rent-seeking behavior and bad for the economy overall. Only true new competition could deliver lower prices or faster service, and changing these rules has zero impact on that competition.

What it does do, though, is negatively impact the creation of new businesses that want to stream video or provide a novel product that requires high bandwidth and equal treatment of their traffic. Removing the protections of net neutrality dramatically increases the cost of streaming – money that could otherwise go into building that startup’s infrastructure. Think of the growth problems Twitch.tv has had. My subscription fees pay for the growth of the network I subscribe to, whether that’s something like Twitch or Comcast. Anything beyond that goes to shareholders and CEOs.

Could we develop other options, like a mesh network? It’s possible, but for that to work it would have to be a public/private venture. Most citizens aren’t going to help create one and likely don’t have the technical savvy to do so. To further complicate the issue, many ISPs are actually pushing to make it illegal for cities to create their own ISPs.

In many cases regulation is bad for business. However, in cases like net neutrality, it’s returning the net to its roots and enabling much stronger competition based on the merits of the company providing the service, not the arbitrary whim of the network owner.