Tech and Art

Last night I asked for a writing prompt, not for my blog, but for my planned creative writing stream on Twitch.tv. Instead of a fictional writing prompt, I got one requesting I write about the intersection of technology and art. This is a pretty interesting space, to be honest, as there are folks who are building crazy things for Burning Man, Soak in Oregon, and just for fun.

The laser reflecting on the windmill is pretty interesting. I haven’t seen anything quite like this before. When I used to drive between Austin and Santa Fe on a regular basis, the windmills in east Texas always got me excited, even when it was just the flashing light on the top. The elegance of the blades juxtaposed with the barren landscape was really a great sight to behold.

This gif also brought to mind another Dutch technologist/artist, though. This creator uses a form of machine evolution to create super interesting “animals” that walk along beaches without wandering into the ocean, moving around more efficiently with each generation.
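
Out of curiosity, here’s a minimal sketch of the kind of evolutionary search that sort of work relies on: keep a population of candidate designs, score them, keep the best, and mutate the survivors. Everything in it, the five-segment leg representation and the fitness function especially, is an invented stand-in rather than the artist’s actual method.

```python
import random

POP_SIZE, GENERATIONS, MUTATION = 30, 100, 0.1

def fitness(legs):
    # Hypothetical stand-in: reward leg designs whose segments sum to a
    # target "stride length". Real work would score a simulated gait.
    return -abs(sum(legs) - 10.0)

def mutate(legs):
    # Nudge each segment length a little, keeping it positive.
    return [max(0.1, seg + random.gauss(0, MUTATION)) for seg in legs]

# Start with random five-segment leg designs.
population = [[random.uniform(0.5, 3.0) for _ in range(5)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]           # selection
    offspring = [mutate(random.choice(survivors))     # variation
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print(max(population, key=fitness))
```

Even a toy loop like this shows why the results can look organic: nobody designs the final walker directly, it gets selected for.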

A book I read a number of years ago, called “Design-Driven Innovation,” talks about how using art along with an understanding of how people use objects allows a great deal of innovation in our products. What might seem useless today, such as a laser on a windmill, may actually help pave the way for new energy transmission methodologies, or perhaps another way to enhance the amount of energy a windmill actually creates.

I’ll close this with my thoughts about an event in Eindhoven, The Netherlands that I really loved: the Glow Festival, which really makes sense in a city largely built upon the successes of Philips. It is a festival where the entire city center is turned into a series of light-art exhibits, combining the aesthetics of the old city with modern lights. I really enjoyed it, and if you’re living in Europe I strongly suggest you check it out!

Capitalism vs. Robots – which is more terrifying?

In an article that recently resurfaced on Reddit, famed astrophysicist Stephen Hawking argues that we should fear capitalism more than robots. The timing of this is somewhat interesting, given that it’s an election cycle and the two populist candidates are opposites in many regards, especially in terms of Democratic Socialism vs. Crony Capitalism (Sanders vs. Trump). In the broader context of emerging technology this is important as well, as many other technology leaders, such as Elon Musk, have expressed fear of AI, while other leaders are running full steam ahead towards more and more automation.

Hawking isn’t the only person thinking about the economy and technology, though. Warren Buffett just released Berkshire Hathaway’s annual report with some pretty stark warnings about the future of capitalism in action at the corporate level, indicating that innovation does have a dark side. While he’s speaking as a manager, there are economists looking into this too: in the book The Second Machine Age, the authors argue that the best is still to come, because man and machine work best together, not separately.

Unfortunately, this will only push up the ceiling on the skills required for jobs, rather than expanding opportunities. A perfect example of this is Uber. Being an Uber driver isn’t difficult because of skill requirements; it’s a boring job that is relatively tiring. Uber has been pushing down its prices over a process spanning months and years, and that will continue through the introduction of “autonomous” cars, or robot cars. At that point a large number of low-skilled workers will find themselves out of a job, including people I know and probably people you know. This has been Uber’s plan for a long time, as the company understands that people are its biggest cost and risk, especially in light of the mass shooting in Michigan.

Uber isn’t the only major company looking to replace workers like this. In fact, it’s likely that a lot of white-collar jobs are going to go this route as well, including in industries that notoriously relied on people who then made unethical decisions, such as the financial industry. We’ve heard of High Frequency Trading (HFT), which is basically a set of algorithms making decisions to buy and sell stocks based on microtrends. However, this is going to continue to expand into newer areas. It’s been well remarked that most brokers are no better than a coin flip (The Black Swan; The Drunkard’s Walk; and Thinking, Fast and Slow all reference this), so it is highly likely that algorithms will do better than people at picking winners and losers on the stock market. It’s also likely that those algorithms will have access to more data, faster, than any person could ever analyze and act upon.
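
To make “algorithms making decisions based on microtrends” concrete, here’s a toy moving-average crossover in Python. It’s a sketch only: real HFT systems act on order-book data at microsecond timescales, and all the numbers below are invented.

```python
from collections import deque

def signals(prices, short_window=3, long_window=8):
    # Emit a naive BUY/SELL signal depending on whether the short-term
    # average price sits above or below the long-term average.
    short_q, long_q = deque(maxlen=short_window), deque(maxlen=long_window)
    for price in prices:
        short_q.append(price)
        long_q.append(price)
        if len(long_q) == long_window:  # wait until both windows are warm
            short_avg = sum(short_q) / len(short_q)
            long_avg = sum(long_q) / len(long_q)
            yield price, ("BUY" if short_avg > long_avg else "SELL")

# Invented tick data, purely for illustration.
ticks = [100.0, 100.2, 99.9, 100.5, 100.8, 101.0, 100.7, 101.3, 101.6, 101.2]
for price, action in signals(ticks):
    print(price, action)
```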

This interaction between capitalism and automation creates huge risks for the economy. A few years ago there was a “flash crash” that was basically caused by the HFT algorithms I mentioned above. As more and more portions of the financial industry come under the purview of robo-traders, these sorts of events are going to become more likely. Yet these institutions have still pushed most of the risk onto the public, while retaining the bulk of the profits from these robots.

As these trends continue across industries, the local optimization of companies to automate and create more robots is going to push people out of jobs at an ever more rapid pace, faster than new categories of jobs can open up. I think it likely that we’ll see more companies going the route of Uber: using tools like Amazon’s Mechanical Turk to get processes started before they invest effort and energy into automating them. Once those processes are shown to be successful, the effort to remove the human element will continually increase until those workers are out of a job. What we will eventually see is a white-collar migratory worker going from one type of tech job to another, only to be replaced by automation in the long run.

The impact on the economy in the long run, and on the human condition in the short term, will be catastrophic, as our current institutions are not designed to handle this sort of change in the type of labor. The incentives for this behavior have been in place for decades and have been pushing bad actors to be worse, such as Turing Pharmaceuticals’ CEO price-gouging dying patients, because the market could support it.

Continual Computing Innovation

Several years ago, I wrote a few blog posts about where I felt the future of computing was heading. The main premise focused on high-speed internet, essentially a mesh network with the speed of Google Fiber. I feel pretty good that some of the things I predicted have come true, like the personal cloud in some form or another. The other key component that is slowly starting to come true is the phone that you can plug in and use as a computer. HP announced another version of this at Mobile World Congress, to pretty bad press. I’m not convinced I agree with TechCrunch’s prognosis, but I also don’t plan to run out and buy one of these phones.

I believe that for HP’s product to be successful it really needs to build on the Surface Book. A phone, even a large phone, is nice for some applications, but definitely won’t have enough power for others, especially graphically demanding applications. I’m sure we’re getting to the point where tablets, in the right form factor, are becoming laptop replacements. I don’t think phones are there yet. What needs to happen is that the dock for the HP V3 needs to be able to enhance the performance of the phone. Furthermore, the switching between docked and undocked needs to be seamless, to the point that the user doesn’t suffer major slowdowns when opening the applications expected to be used in phone mode (texting, email, etc.). I think we’re finally getting to the point where we’re going to see more and more of these products.

Another, more interesting advance in cell phones is the LG G5, which has the ability to add new hardware components aftermarket. Right now the portion that comes off removes the battery, but I have a feeling there will be an aftermarket version that changes how that works. I believe this phone has serious potential for an attachment that docks similarly to the HP V3, but with the ability to increase the performance of the phone. This is going to be a big deal as more and more people look into turning Android into a successful desktop (or maybe laptop) operating system. I can imagine a monitor with the capability to dock with a phone of one kind or another being developed to further support this ecosystem. It would be easiest to create a dock with a USB-C adapter, given the amount of data USB-C can carry and the fact that it is already being used for graphics amplification.

I think that over the next few years, innovation around the blurring of smartphones with these applications is going to increase. We’ve seen it to some degree with Smart TVs, albeit mostly unsuccessfully compared to set-top boxes like Apple TV and Roku. However, with the way phones plow through batteries and the continually evolving use cases of smartphones and computers, this convergence is going to continue, and within the next 3-5 years we’ll have at least one company offering an all-inclusive suite of products built around the smartphone as a replacement for the computer, especially if we’re able to get unlimited 5G on said phone (5G speeds are expected to be in the range of fiber).

Net Neutrality Vs. Title II – They Aren’t the Same

Since Title II passed, I’ve seen a lot of articles that either indicate buyer’s remorse or that have always been against Title II and are gloating that it’s going to be overturned. For example, Wired had an op-ed yesterday that used major points from Commissioner Pai’s dissent against using Title II. Title II is clearly a divisive issue, as the guys over at KBMOD, where I also write, are completely divided over its supposed benefits. I sincerely hope that when we look back at this debate, we see it as a confusing bit of history because nothing happened: the Internet didn’t change and remained an open platform for everyone to use easily and equally.

Net Neutrality and Title II are not the same thing. Title II is an old law, originally written in 1934 to regulate a single monopoly with the hope of creating more competition. It wasn’t successful, but the legacy of Title II played an important role in the creation and development of the Internet. Title II was the policy regime under which ARPANET was developed: whenever a scientist at MIT wanted to use a graphically powerful computer in Utah, Title II was in full effect on that data connection. Furthermore, Title II was the law of the land for all of dial-up Internet, which was actually a very good thing. The fact that there was local-loop unbundling meant that you could get Internet service from a company other than your phone company. It is also likely, given how low their costs were, that those ISPs didn’t have to pay many of the taxes that the phone company, whose line you used to reach the Internet, did. We already know that Title II has fostered, and can foster, a culture of innovation.

Net Neutrality is different from Title II: it was the architectural approach the initial designers took in creating the Internet. There were a few key reasons for this: it was easier, it required less computing power, and the majority of the early pioneers believed in what became the Open Source movement. Early on, it was the exception rather than the norm for scientists to patent their computer research. That is likely because most of these researchers were mathematicians and physicists who came from a military background (WWI and WWII and all), so they weren’t used to patenting, both because of their educational background and because of the secrecy required while contributing to the war effort.

To provide preferential treatment to one packet of data over another would have required tools that simply would have prevented the data from arriving at its destination in a timely fashion in the ’70s. Remember, this was a time when the personal computer didn’t exist and computing used mainframes and terminals to do the work (interestingly, we’re going back to that a bit with the cloud). This means the routers would have had to be mainframes themselves to decode the data and figure out what type of data it was before sending it to its next location. This was seen as a waste of computing power as well as an invasion of privacy. The point of packets was to help keep the data safe and secure as much as to maximize capacity on the lines connecting the computers.
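
Here’s a small, hypothetical sketch of that asymmetry: neutral forwarding needs only a header lookup, while preferential treatment forces the router to open and classify the payload first. The addresses and queue names are made up for illustration.

```python
# All names and addresses here are invented for illustration.
packet = {"src": "mit.edu", "dst": "utah.edu", "payload": b"...video frames..."}

routing_table = {"utah.edu": "interface-2"}  # hypothetical next hop per destination

def forward(pkt):
    # Neutral routing: one cheap header lookup; the payload is never read.
    return routing_table[pkt["dst"]]

def classify_then_forward(pkt):
    # Preferential treatment: the router must also open and classify the
    # payload (deep packet inspection) before choosing a queue -- far more
    # work per packet, and a privacy problem, which is why 1970s hardware
    # couldn't have done it at line speed.
    kind = "video" if b"video" in pkt["payload"] else "other"
    queue = "priority" if kind == "video" else "best-effort"
    return routing_table[pkt["dst"]], queue

print(forward(packet))                # interface-2
print(classify_then_forward(packet))  # ('interface-2', 'priority')
```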

One of the largest complaints about implementing Title II is that there’s not enough economic evidence to support it. I believe that to be true to some extent; it’s hard to forecast something as it’s happening, especially since the FCC was unlikely to get legal access to the Netflix-Comcast/Verizon deals for equal (or maybe preferred) access to their lines. Netflix clearly showed that Comcast and Verizon were intentionally causing issues they could easily resolve, and they did resolve them immediately after they got paid. With Comcast and Verizon planning to foreclose the video streaming market in this fashion, violating the spirit of Net Neutrality, some sort of regulation was needed to prevent that foreclosure.

I would rather not have had any sort of regulation go into effect. However, I believe that the actions Comcast and Verizon are taking are anticompetitive and anti-consumer. Time Warner Cable supposedly makes a 97% margin on its broadband service, which isn’t a surprise when you have a local monopoly/duopoly on broadband.

Could there have been a better way? Yes, the FCC could have taken action to force increased competition, something like setting a goal for every city in the US to have no fewer than three broadband providers and providing assistance to municipalities that wanted to build their own networks to meet that goal. Ironically, the one provision that would help with that, local-loop unbundling, was not included in the Title II rule; it would reduce the cost for a new ISP entering the market, since they wouldn’t have to build their own network, the requirement that has slowed Google Fiber down considerably.

New FCC Rules and competition

A friend retweeted the Tweet below today, and it got me thinking about the broader context of the FCC rules that passed last Thursday.

Two things struck me about this tweet. First, it’s disappointing that the author doesn’t understand Title II better, considering he co-founded the EFF. Second, Title II as implemented was designed to do nothing about ISP competition. As I wrote on KBMOD this week, the Net Neutrality order has no provision for “unbundling,” which would promote competition amongst ISPs at the local level. Unbundling, according to Wikipedia, is a regulation that requires existing line owners (such as Comcast) to open up their lines to anyone that wants to sell cable, internet, or telephony access. Unbundling, under a much more restrictive Title II, is the only reason AOL was successful as a business model. Since this provision of Title II was forborne, Title II will not, in fact, promote competition among ISPs at all.

Instead, the FCC, at least in my opinion, looked at the Internet as a general-purpose platform technology. They were looking to ensure competition ON the technology, not between technology carriers. For example, the FCC wants to see as much competition as possible between companies like Netflix, Amazon Prime Video, Hulu, and Comcast’s Xfinity service. However, they want to make sure that Comcast cannot foreclose on the video delivery market by leveraging its existing monopoly in telecommunications. That means Comcast could otherwise create rules or an environment where Netflix cannot compete and Comcast customers MUST use the Xfinity service because the alternatives don’t function well (foreclosure is what got Microsoft in trouble over web browsers).

The FCC did enact a rule that will impact competition at the local level, though. It’s a limited rule because it impacts only Tennessee and North Carolina: it preempts state law by declaring it legal for municipalities to develop their own broadband networks. Broadband buildout is prohibitively expensive for an entrepreneur setting up a network; however, with the backing of a municipality willing to share the risk and the reward, it might be possible for an entrepreneur to build out a broadband network on a limited scale. Municipalities aren’t the ideal solution; it would be significantly preferable for other businesses to move into areas and build new broadband networks, but unless they have a massive amount of money, like Google, that’s unlikely to happen. A bridge between the two is a public-private partnership, where private enterprise, which has the telecommunications expertise, partners with a municipality, which has the demand and financial support, to build a network.

With the ruling on municipal broadband being so limited, it’s not going to make much of an initial impact; however, it’s likely that other municipalities will try to jump on that bandwagon and overrule laws at the state level (as a note, I’m not going to argue whether this is something they have the authority to do; I’m just looking at the potential impact of the rule).

When Piracy is Easy, How Do You Compete?

Popcorn Time is something I’ve been hearing about for a while now, but I’d never really looked into it. Effectively, it’s a tool that gives you an easy-to-use user interface to find torrents for your favorite TV shows and movies. Torrents, by the way, are a type of file distribution and download methodology: effectively, you get tiny bits and pieces of the file from a large number of different users across the internet. This makes the individual files harder to track, prevents them from being easily removed from the web, and helps spread the bandwidth load across the multiple users. In the days of Kazaa, you downloaded directly from a single peer; now you’re downloading from multiple users, so if one goes offline or throttles the bandwidth they’re sending the file with, it has minimal impact.
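
For the curious, here’s a simplified model of that piece-based, multi-peer download in Python. Real BitTorrent clients discover peers via trackers or DHT and verify pieces against SHA-1 hashes from the .torrent file; the peers below are just dictionaries simulated in memory.

```python
import hashlib
import random

# Split a "file" into fixed-size pieces and record each piece's hash,
# the way a .torrent file does.
file_data = b"example content " * 64
PIECE_SIZE = 64
pieces = [file_data[i:i + PIECE_SIZE] for i in range(0, len(file_data), PIECE_SIZE)]
piece_hashes = [hashlib.sha1(p).hexdigest() for p in pieces]

# Three simulated peers; each piece is seeded to two of them, so no single
# peer going offline can stall the download.
peers = [{} for _ in range(3)]
for index, piece in enumerate(pieces):
    for peer in random.sample(peers, k=2):
        peer[index] = piece

# Download: fetch each piece from any peer that has it, verifying its hash.
assembled = [None] * len(pieces)
for index, expected in enumerate(piece_hashes):
    for peer in random.sample(peers, k=len(peers)):
        data = peer.get(index)
        if data is not None and hashlib.sha1(data).hexdigest() == expected:
            assembled[index] = data
            break

print("download complete:", all(a is not None for a in assembled))
```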

Torrents are what’s called “piracy” and live on The Pirate Bay and any number of other sites that share those files. Since those sites don’t have to follow the strict contracting that Netflix, Comcast, Hulu, HBO, and other streaming services do, you have access to the movies you want whenever you want them. For instance, Netflix recently lost access to The Avengers, probably because of the cost of keeping it in their library and Disney trying to create artificial scarcity for the legal product. You can find extremely high-quality torrents out there to watch it if you can’t stream it as part of a service you already pay for. In fact, I’m sure it’s on Popcorn Time right now.

Because of these differences, and the historic complexity and risks of downloading a torrent, Netflix had positioned itself as a way to prevent piracy. Now that might not be the case, as Netflix is beginning to see Popcorn Time as a legitimate threat to its business model. I’m not surprised that Netflix sees risk here, and I think this is a good thing for Netflix: it means they are expecting their business to be disrupted and can take proactive steps to address it.

What can they do to keep their business afloat and continue to fight piracy? Well, since they are essentially seen as a cash cow on two fronts, by ISPs and by content producers (the MPAA and TV companies), they need to clearly articulate how much piracy was reduced once content was put onto Netflix, and then show the increase in piracy after that content was pulled from Netflix for contractual reasons. If Netflix can’t afford to keep it in their library, then with an easy-to-use app like Popcorn Time the content will simply be pirated, which means any revenue the artificial scarcity was hoping to drive, or to extract from Netflix at an elevated price, goes out the window while the content is still consumed.

In some cases piracy will happen regardless, but if the trend continues where people switch back and forth between cord cutting and going back to cable because of the rising costs of apps, then apps like Popcorn Time will become more popular, because they can completely replace Hulu, Amazon Prime Video, HBO Go, Netflix, etc. You could be a cord cutter with this, pay for one app to get your live sports, and be good to go. Content producers will begin to lose out again, because they are trying to squeeze the companies that provide easy, relatively cheap access to their content. I’d rather not go back to that, but if my costs keep rising because the companies I choose to support can’t afford the content I want, then I’d have no choice.

AMD, What Are You Doing?

The past few months haven’t been kind to AMD. First, Lisa Su, the company’s first female CEO, ousted Rory Read. Now three leaders have left, including General Manager John Byrne, CMO Colette LaForce, and Chief Strategist Rajan Naik. Furthermore, it’s pretty clear that the two remaining long-term leaders, CTO Mark Papermaster and CFO Devinder Kumar, were sort of bribed to stay with restricted stock. This is on top of delays in their desktop, graphics, and mobile chipsets, and layoffs.

I think it’s pretty clear that AMD no longer has a clear strategy. While I was working there, AMD was starting to put out some cool stuff that could really define the future of computing. Their APUs were best in class and could have been deployed in a lot of really cool applications. However, those never appear to have materialized, and now Intel is starting to attack the SoC market. While Intel’s Iris graphics chipset is way behind AMD in pure power, I think it’s going to play a serious role in the coming years, especially since Intel is leveraging a similar enough design that they are able to use the Open Computing Language (OpenCL) that AMD championed.

Another area of concern for AMD fans is that John Byrne, shortly before his departure, announced at CES that AMD was steering clear of the IoT phenomenon, which I found surprising considering that their strategy, only a year and a half ago, was to conquer the embedded computing space. Since they restructured again (that’s about four times in the past four years), they have clearly decided to forgo that space. IoT chipsets are likely going to be a disruptive technology in computing. For instance, there is a computer you can dock and upgrade every year for about $200, while Intel released a full Windows computer on an HDMI stick for $150. In the past I wrote that I thought the dockable phone that turns into a full computer would be the long-term future, but these are the incremental steps to get us there.

AMD clearly doesn’t see these spaces as the future. They are looking at where the market is now rather than truly planning for what’s next. I was excited when AMD announced the partnership with GizmoSphere, hoping it could compete head to head with the Raspberry Pi, but AMD is clearly failing to embrace that movement, even though those devices would be powering the IoT and the maker movement. On the other hand, Intel is rushing to embrace these groups and sees these people as the way to attack Qualcomm, Samsung, ARM, and Apple’s designs.

Low power is going to be vital for the future, excepting a smaller and smaller niche of applications, and in those applications, graphics chips aside, AMD is getting crushed. Even in the graphics space AMD is starting to flounder with poor quality, as @NipnopsTV reported with his roughly year-old 7970 card.

All of this should concern AMD fans. The company is not investing in the disruptive technologies hitting its industry, its market cap is only $2.06B, and its shares are at $2.66. They may be positioning themselves to get bought, could be at risk of a hostile takeover for their IP, or could be pushed into bankruptcy, since their IP might be worth more than the company operating as it is. Look at Nortel as an example: its IP was sold for $4.5B while everything else was just ditched.

Could we eventually see a Samsung R290 and a Samsung Kaveri processor? They gobbled up a ton of AMD’s engineers in 2013, so it definitely could happen.