Using Tools to Enable Deep Work

I read an interesting article about programming today. The author argues that learning to program is easy; it's working "deep" for long periods of time that is difficult. I think this is a really insightful way of looking at mastering skills. It's easy to jump to the next email or ping when you're learning, because you're afraid to fail at learning. When learning becomes difficult, people have a harder time staying focused, even when they have an incentive (a paycheck, or paying someone) to learn.

This can be exacerbated by not having a good environment to learn in or a good teacher. A bad teacher who isn't willing to give you examples you can learn from in a constructive environment is wasting everyone's time. If you're teaching yourself, though, you're mostly going to be relying on Google searches and maybe a few books here and there. The best way to learn, then, is to give yourself an interesting project related to something you care immensely about. I'm not an expert programmer, but I know that I've learned most successfully when I had a clear objective and the right tools in front of me to dig into the problem I was trying to solve.

There are tools out there that make this sort of work easier and others that make it harder. Git and version-control systems like it are tools that, once you learn them, make deep work easier because they eliminate the fear of mistakes. If you screw up badly enough, you can simply roll back to where you were. Breaking your project into chunks becomes much more important so you can work on individual pieces without risking the entire project.
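As a minimal sketch of what that safety net looks like in practice (scripted in Python here purely for illustration; the branch and commit names are hypothetical, and you'd normally just type the git commands):

```python
import subprocess

def git(*args):
    """Run a git command, raising an error if it fails."""
    subprocess.run(["git", *args], check=True)

# Checkpoint the working state before attempting a risky change.
git("add", "-A")
git("commit", "-m", "checkpoint before risky refactor")

# Do the experiment on a throwaway branch.
git("checkout", "-b", "experiment")
# ... hack away without fear; if the experiment goes badly ...
git("checkout", "main")
git("branch", "-D", "experiment")  # discard it; the checkpoint is untouched
```

Because every checkpoint is recoverable, a failed experiment costs you a branch, not the project.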

Then there are tools like Slack that, apparently, can be a real detriment to deep work. A breakup letter on this topic has been getting some attention, but I think it focuses on the wrong problem. Slack isn't the issue here; the person doing the work and/or the work environment caused the problem. "Breaking up" with Slack is like breaking up with a hammer because you're unsuccessfully screwing in screws. The tool is not at fault: it's doing what it was designed to do, and you're applying it incorrectly. Yes, in this case it is not the right tool for the job, but that's because you've done a poor job defining the problem you're trying to solve with it.

At my company, I think we've come up with a pretty good solution to this. We don't use Slack but its competitor HipChat, which is pretty similar overall. With the right tools integrated together, you can create rooms for specific features. These rooms tie together Bitbucket, Jira, and HipChat (yes, we went all in on Atlassian), which means you can see all the information you need about the problem the feature you're working on is trying to solve. We've started using this to pull the voice of the customer (me, in this case; I'm not a developer) earlier into the process, so that I can quickly give the developer the feedback they need. This lets the developer meet my acceptance criteria on the strength of that quick feedback and then get back to the deep work of actually writing the software.

Can it be disruptive in some cases? Yes, but only when people aren't using it correctly, and we work with them to change their behavior before it becomes a problem. Slack, Jira, Bitbucket, et al. are only tools, designed to reduce the burden of working with remote team members so we can get down to the nuts and bolts of deep programming work.

Take a look in the mirror if you're struggling with learning programming or with using a tool like Slack. You're the problem; create a structure around how you work and how your team works. Use your hammer on nails, not screws.

Values and Metrics Drive Emergent Strategies

This is part of my ongoing series on Lean Disruption, where I write about combining Innovation, Lean, Lean Startup, Agile, and Lean product development methodologies.

Clay Christensen argues that there are two types of strategies corporate leaders engage with: deliberate and emergent. Porter's Five Forces analysis is an attempt to use tools to pull emergent strategies, driven by a changing environmental landscape, into the corporation's deliberate strategy. A deliberate strategy is the strategy that leaders have vocalized and intentionally invested money and resources into. Emergent strategies, on the other hand, are strategies that develop through metrics and actual organizational behavior. While a leader may intentionally push resources in one direction, another metric that carries much more weight with lower-level managers may require those resources to be redeployed in a different context, resulting in a different strategy than what executives had originally planned.

Hoshin Kanri (Policy Deployment) plays a role in the Toyota Production System similar to the Five Forces: leaders start with their stated 3-5 year goals and turn those into annual goals, projects, and finally the metrics by which those projects will be measured. However, the process isn't done after a single meeting. The policy is reviewed monthly and, if conditions change enough, can be completely reworked or modified based on what is emerging. This matters because changes can feed all the way back to the 3-5 year goal level: if serious issues are occurring in multiple projects associated with a goal, such as a lack of resource commitment, that goal must be re-evaluated, or other changes must be made to incentivize resource commitment to those projects.

The Lean Startup and Agile approaches are likely the most closely tied to emergent strategy development. The Lean Startup approach values experimentation and customer engagement above all else, which can initially result in a great deal of change in project and corporate strategy. In Running Lean, the author uses the Lean Canvas as a tool to maximize the power of emergent strategy development and to smooth the transition from emergent to deliberate strategy. In many cases that transition is relatively easy anyway; you can watch it occur as a corporate leader iterates through versions of the Lean Canvas, with fewer and fewer changes to the Canvas each time.

Agile similarly promotes engagement with customers and uses iteration to eliminate uncertainty. In this way Agile is closer to Lean Startup than to traditional project management, and it leads to emergent products where the customer need is truly met, which over time results in a deliberate strategy to maximize the resulting product. The product likely still falls within the leaders' initial deliberate strategy, but it may be very different from what leadership had initially wanted or planned. This is the best of both worlds: leaders get a product that fits their strategy but serves market needs more effectively than anything they could have planned.

The values and metrics an organization uses to manage its work heavily influence the direction any project or product develops. Tighter control over metrics, with less flexibility for innovation, leads to products more tightly aligned with deliberate strategies. However, this can come at the cost of less innovative ideas and poor market fit. When something might significantly change the direction of the company, the best way for that product to survive is to move it into a skunkworks or protected space where funding is secure and staffing levels are appropriately aligned. This allows the metrics and values to coalesce around the product, the customer, and the market's needs.

Values in an Agile/Lean/Innovative company

This is part of my Lean Disruption series, where I'm looking at Lean, Agile, Innovation, and Lean Startup.

None of these methodologies can be adopted for free. They require a great deal of introspection on the firm's part. Understanding how processes interact with people and values is vital to adopting any one of these approaches, let alone a combination of them.

Metrics are one of the best examples of how stated values can conflict with the values used in decision making, in how resources are handled, and in how processes are structured. The famous saying "you manage what you measure" is right in a lot of ways. Many companies claim that they value customer satisfaction, yet many of them do nothing with the satisfaction surveys they collect. Comcast is the most obvious example. Comcast doesn't really value customer satisfaction, because it measures its customer support representatives on how much they can upsell to the customer any time they are on the phone. This shapes the processes its customer support must use: rather than designing processes to enable single-call resolution, the processes are designed to enable selling more products. The employees, the "resources," are rated on this, and if they don't meet those goals they are unlikely to do well. Judging by the Verge's Comcast Confessions series, most of those employees do not feel valued. All of this points to Comcast's true values being retention at all costs and more revenue per user, measured in churn and ARPU (Average Revenue Per User) respectively.
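For concreteness, here is a minimal sketch of how those two metrics are usually computed; all the figures below are invented for illustration.

```python
# Invented example figures for one quarter.
subscribers_at_start = 1_000_000
subscribers_lost = 25_000        # cancellations during the quarter
total_revenue = 120_000_000.00   # revenue for the same quarter

# Churn: the fraction of customers who left during the period.
churn_rate = subscribers_lost / subscribers_at_start

# ARPU: revenue divided by a rough average subscriber count for the period.
average_subscribers = subscribers_at_start - subscribers_lost / 2
arpu = total_revenue / average_subscribers

print(f"Churn: {churn_rate:.1%}")   # Churn: 2.5%
print(f"ARPU: ${arpu:,.2f}")        # ARPU: $121.52
```

Notice that neither number says anything about whether the customer was satisfied; that's exactly the gap between stated values and measured values.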

[Image: the Agile Manifesto, from ITIL's blog]

For a company to adopt an Agile approach to developing software, the paradigm of what the organization values must radically change to align with the Agile Manifesto. In most software development, the concepts on the right side of the Manifesto are what a Project Management Office actually values. The concepts on the left are typically considered only at the beginning or the end of the project, or not at all: a working product is treated only as the end goal, while customer collaboration is included only at the beginning, when gathering requirements.

Switching from the right to the left creates massive cultural upheaval in an organization, as power is shifted down and out. It shifts down to the team level: where managers used to make all the important decisions, Product Owners, Scrum Masters, and developers now make those decisions together with the customers. Power shifts out through increased collaboration with the customer. Customer centricity forces the company to understand what the customer really wants and to respond more quickly as that understanding of their needs changes. This does mean that the "requirements" change; however, in many cases the uncertainty in a technology, an interface, or some other aspect made it impossible to properly articulate the actual need until there was an example in front of the customer.

With these value changes there must be process changes that properly reflect the way the new values require work to be completed. Where single-call resolution is the most important metric reflecting a true value of customer satisfaction, processes must be built to enable it: training, information repositories, and the authority to truly address customer needs at a single point of contact. In software development, rapid iteration with continual feedback is the process that must be built.

These changes are not free and require true commitment from leaders across the organization. Without that commitment, any adoption of these frameworks is doomed to failure.

When we buy something, do we control anything?

In its new routers, Comcast has decided to enable a second WiFi signal that is public and separate from your network, but still uses your connection. Initially you could turn off the second network fairly easily; however, Comcast has started to make that much more difficult. This raises a question in my mind: if you're paying for a service, shouldn't you be able to control what happens with that service inside your house? It also raises the concern that the second network will count against your data cap in the areas that have caps, and Comcast plans to expand those caps even though we hate them.

Similarly, Uber has done some pretty horrible things around the data privacy of its users, and Facebook has conducted experiments on its users and what they are shown. In Uber's case you buy the service; with Facebook, you pay for it by seeing ads. In each case, you do not control anything done with your data once you enter the agreement to use their services.

Apple has been accused of, and admitted to, deleting songs added to an iPod by a non-iTunes service. To my mind this is even more problematic than Amazon deleting something from your Kindle, because the iPod is a physical object you own that was only updated when you connected it to your computer. Furthermore, Apple was deleting things you owned, without your consent, from a product you own, because it didn't want competitors' content on a device in its ecosystem. Many people likely didn't notice, because you can have so many songs on the device, but I'm sure some were confused.

Then there is the "licensing" that happens whenever you buy software, even a physical copy. Companies like Autodesk have sued over the right to resell that "license": Autodesk sued, and won, against someone selling its physical disks, which is pretty insane, but the company wanted to protect its product and claimed the resale violated its licenses.

In all of these cases, a company is doing something with a service you purchased without your consent or input. Effectively, you don't really control the stuff you buy. Even though we all feel like we own everything we buy, we really don't. We don't have control over the services we purchase, and this is going to get worse over time. It will get worse because software is eating the world and is now showing up in more traditional industries, like mining equipment manufacturer Joy Mining. Michael Porter wrote a lengthy article about how software is seriously affecting the future of competition; he argues that software will be everywhere and that companies need to build the internal capability to create it. As users of these new technologies, we need to understand how companies use our data and what control we actually have over the services and products we buy.

Is AI going to kill us or bore us to death?

The interwebs are split over whether AI is going to evolve into brutal killing machines or simply remain just another tool we use. This isn't a debate among average Joes like you and me; it's being had by some pretty big intellectuals. Elon Musk thinks that dealing with AI is like summoning demons, while techno-optimist Kevin Kelly thinks that AI will only ever be a tool and never anything more. Finally, you have Erik Brynjolfsson, an MIT professor who believes AI will supplant humanity in many activities but that the best results will come from a hybrid approach (an argument Kevin Kelly also uses at length in his article).

Personally, I think a lot of Kevin Kelly's position is extremely naive. Believing that AI will ONLY ever be something boring, never something that can put us at risk, is frankly short-sighted. Consider that Samsung, yes, the company that makes your cell phone, developed a machine-gun sentry that could tell the difference between a man and a tree back in 2006. In the intervening eight years, Samsung has likely continued to advance this capability; it's in the national interest, as these sentries are deployed at the demilitarized zone between North and South Korea. Furthermore, with drones it's only a matter of time before we deploy an AI that makes many of the decisions between bombing and not bombing a given target. Currently we have a heuristic; there's no reason it couldn't be developed into a learning heuristic in software. This software doesn't even have to be in the driver's seat at first. It could provide recommendations to the drone pilot and learn from when it is overridden and when it is not. Actually, the pilot doesn't even have to know what the AI is recommending, and the AI could still learn from the pilot's choices.
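To make that "shadow mode" idea concrete, here is a toy sketch of an advisor that learns from the pilot's engage/don't-engage decisions. Every name and the feature encoding here are invented for illustration; a real system would be vastly more complicated.

```python
import numpy as np

class ShadowAdvisor:
    """Toy online logistic-regression learner. It predicts what the pilot
    will decide and updates from the pilot's actual choice, whether or not
    its recommendation was ever shown."""

    def __init__(self, n_features: int, learning_rate: float = 0.1):
        self.w = np.zeros(n_features)
        self.lr = learning_rate

    def _prob_engage(self, x: np.ndarray) -> float:
        return 1.0 / (1.0 + np.exp(-self.w @ x))

    def recommend(self, x: np.ndarray) -> bool:
        # The recommendation the pilot may or may not ever see.
        return self._prob_engage(x) > 0.5

    def observe(self, x: np.ndarray, pilot_engaged: bool) -> None:
        # Gradient step toward the pilot's actual decision.
        error = float(pilot_engaged) - self._prob_engage(x)
        self.w += self.lr * error * x

# Invented features, e.g. [target_confidence, civilian_proximity, visibility]
advisor = ShadowAdvisor(n_features=3)
x = np.array([0.9, 0.1, 0.8])
advisor.observe(x, pilot_engaged=False)  # learns even when never consulted
```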

AI isn't going to be some isolated tool; it's going to be developed in many different circumstances, concurrently, by many organizations with many different goals. Sure, Google's goal might be better search, but Google also acquired Boston Dynamics, which has done some interesting work in robotics, and it is developing driverless cars, which will need an AI. What's to say the driverless AI couldn't be co-opted by the government and combined with the drone pilot's AI to drop bombs, or to "suicide" whenever it reaches a specific location? These AIs could be completely isolated from each other and still have the capability to be totally devastating. What happens when they are combined? They could be, at some point, through a programmer's decision or through an intentional attack on Google's systems. These are the risks of fully autonomous units.

We don't fully understand how AI will evolve as it learns more. Machine learning is a bit of a Pandora's box. There will likely be many unintended consequences, as with almost any new technology, but the ramifications could be significantly worse because the AI could have control over many different systems.

It's likely that both Kevin Kelly and Elon Musk are wrong. However, we should act as if Musk is right and Kelly is wrong, not because I want Kelly to be wrong and Musk to be right, but because we don't understand complex systems very well. They very quickly get beyond our ability to understand what's going on. Think of the stock market: we don't really know how it will respond to a given company's quarterly earnings, or even to earnings across a sector. There have been flash crashes and there will continue to be, as we do not have strong controls over high-frequency traders. If this is extended to a system with the capability to kill or to intentionally damage our economy, we simply couldn't rein it in before it caused catastrophic damage. Therefore, we must intentionally design in fail-safes and other control mechanisms to ensure these things do not happen.
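What might such a fail-safe look like at the code level? Here's one heavily simplified sketch, with every name and limit invented for illustration: a hard gate between what a system proposes and what it is allowed to execute, defaulting to inaction.

```python
from dataclasses import dataclass
from typing import Callable

# Invented example: actions software must never take without a human decision.
IRREVERSIBLE_ACTIONS = {"fire_weapon", "large_trade"}

@dataclass
class Proposal:
    name: str
    estimated_cost: float
    execute: Callable[[], None]

def guarded_execute(proposal: Proposal,
                    max_cost: float,
                    human_approves: Callable[[Proposal], bool]) -> bool:
    """Run an AI-proposed action only if it passes hard limits,
    and never run an irreversible action without human sign-off."""
    if proposal.estimated_cost > max_cost:
        return False  # fail-safe tripped: hard limit exceeded, do nothing
    if proposal.name in IRREVERSIBLE_ACTIONS and not human_approves(proposal):
        return False  # human vetoed; the default is always inaction
    proposal.execute()
    return True
```

The point isn't this particular gate; it's that the safe default has to be designed in up front, not bolted on after the system surprises us.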

We must assume the worst, and rather than hoping for the best, we should develop a set of design rules for AI that all programmers must adhere to, ensuring we do not summon those demons.