Demystification versus Understanding

I am getting really interested in distinctions between types of knowledge these days. I think these distinctions are very important to the invisible structure of mental models.

One distinction whose importance I have increasingly come to appreciate is the one between demystification and understanding. Both are types of appreciative knowledge. I define them as follows:

  • To demystify something is to understand it to a level where you no longer feel anxious about your ignorance.
  • To understand something is to have the same priorities as experts regarding that something.

The latter is in fact an implicit chicken-egg definition of expertise.

There is a shallow sense in which I can come across as very “knowledgeable.” Very few important things utterly mystify me that do not also mystify everybody else. When I encounter a new idea, I usually have some way to parse it. I am rarely at a loss over what to make of it. But this knowledge is only slightly deeper than the knowledge of a librarian who knows how to classify a book on any subject in a catalog.

So what is real understanding? Why is having the same priorities as experts a good test?

[Read more…]

Fertile Variables and Rich Moves

Engineers and others attracted to comprehensive systems views often fail in a predictable way: they translate all their objectives into multi-factor optimization models and trade-off curves, which then yield spectacularly mediocre results. I commented on this pathology as part of a recent answer to a Quora question about choosing among multiple job offers, and I figured I should generalize that answer.

Why is this a failure mode? Optimization is based on models, and  this failure mode has to do with what you have left out of your model (either consciously or due to ignorance or a priori unknowability). If there are a couple of dozen relevant variables and you build a model that uses a half-dozen, then among those chosen variables, some will have more coupling to variables you’ve left out than others. Such variables serve as proxies for variables that aren’t represented in your model. I’ll overload a term used by statisticians in a somewhat related sense and call these variables fertile variables. Time is a typical example. Space is another.  Money is a third, and particularly important because ideological opinions about it often blind people to its fertile nature. Physical fitness is a fourth.
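To make the omitted-variable mechanism concrete, here is a deliberately toy sketch in Python. All of the variable names, weights, and numbers are invented for illustration (they are not from the post): a two-variable model scores some job offers, while the true payoff also depends on omitted variables that happen to ride along with one modeled variable, which is what makes that variable fertile.

```python
# A toy, hypothetical sketch of "fertile variables": names and numbers are
# invented for illustration, not taken from the post.
#
# The true payoff of a choice depends on many variables; our model scores
# only two of them. "free_time" is coupled to several omitted variables
# (health, learning, optionality), so it acts as a proxy -- a fertile
# variable -- while "title" is coupled to almost nothing we left out.

offers = [
    # salary, title, free_time on a 0-1 scale
    {"name": "A", "salary": 0.9, "title": 0.9, "free_time": 0.2},
    {"name": "B", "salary": 0.7, "title": 0.4, "free_time": 0.9},
]

def modeled_score(offer):
    # The half-dozen-variable model, shrunk to two variables for brevity.
    return 1.0 * offer["salary"] + 0.5 * offer["title"]

def true_score(offer):
    # Omitted variables ride along with free_time in this toy world,
    # which is exactly what makes free_time fertile.
    omitted = 1.5 * offer["free_time"]  # health + learning + optionality
    return modeled_score(offer) + omitted

print(max(offers, key=modeled_score)["name"])  # "A": the model's pick
print(max(offers, key=true_score)["name"])     # "B": the actually better offer
```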

Fertile variables feed powerful patterns of action based on what I will call rich moves. 

[Read more…]

Time Lensing

We all experience lenses and fun-house mirrors from an early age. Some people wear glasses, while others have very acute vision, better than 20/20. Some are colorblind, while presumably others are more sensitive to color differences. We know that there are birds and animals that see space very differently from us.

So we are used to the idea that our perception of space depends on how we see. We are used to the idea that if how we see space by default isn’t good enough, we can buy and use telescopes and microscopes to change how we see.

Time is actually a very similar dimension and exhibits exactly the same phenomena, but our intuitions around time are far worse.

For example, if you are angry or sick, time can seem to pass much more slowly than if you are having fun or are healthy. Alcohol generally slows down the perception of time passing (a drunken hour seems longer than a sober one).  Coffee speeds it up.

Various meditative practices or extraordinary situations (like being involved in a major fire, being on a battlefield, etc.) can make time appear to almost stand still, or make hours seem like minutes.

There has been some systematic study of these things (such as the early work of Ornstein, which I’ve referenced in Tempo), but in general, the phenomenology of time perception is largely unstudied. It is just hard to study under laboratory conditions. But it is not hard to study in your own life.

It is useful to think of yourself as going through life with varying kinds of time lenses stuck between your consciousness and the universe. Sometimes you are experiencing time through a microscope or telescope. Sometimes in a convex mirror. You can deliberately put on different types of time glasses for different purposes (coffee, alcohol, music). You can learn mindfulness meditation — the equivalent of getting Lasik surgery for your time-eyes.

The value of gaining some conscious control over your time-perception is that you can experience reality at different levels of resolution, both external reality and your own thoughts. Sometimes it is useful to see all the pores in your time-skin, just as it is useful to see your hair roots in a magnifying mirror while shaving.

If you are a computer science or information theory geek, you can think of consciousness as having a sort of raw bit-rate, and your time-lens as being able to experience that stream at a certain sampling rate and resolution.
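If that analogy appeals to you, here is a minimal sketch of it in Python. The signal, the sampling rates, and the labels of one rate as “everyday awareness” and another as a “time microscope” are all invented for illustration.

```python
# A loose sketch of the sampling-rate analogy above; purely illustrative.
import math

def raw_experience(t):
    # Stand-in for the raw bit-stream of consciousness: a slow swell plus
    # a fast wiggle that only shows up at high sampling rates.
    return math.sin(2 * math.pi * t) + 0.3 * math.sin(20 * math.pi * t)

def experience(sample_rate_hz, duration_s=1.0):
    n = int(sample_rate_hz * duration_s)
    return [raw_experience(i / sample_rate_hz) for i in range(n)]

coarse = experience(sample_rate_hz=8)   # everyday awareness: fast detail is lost
fine = experience(sample_rate_hz=200)   # a "time microscope": the wiggle appears

print(len(coarse), len(fine))
```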

But I am not particularly enamored of the idea of developing strong time-vision for its own sake. So long as I am wearing time-lenses appropriate to the task at hand, I am fine. I don’t need an electron microscope when a hand-held magnifying glass will do.

Routine, but Cannot be Automated

The hardest kind of activity to get organized is stuff that is routine, but cannot be automated. This is stuff that has trivial meta-content, but non-trivial work content. Even GTD struggles in this department.

Trivial meta-content means it is not hard to plan or schedule this stuff, or figure out and create the necessary enabling pre-conditions. Non-trivial content means the actual work is hard and cannot be automated.

Blogging is an example for me, so I’ll use that. If I had to put it in my organization system, it would simply be “write blog post” as a weekly calendar reminder. No biggie. But I cannot put the work itself on autopilot.

Small business book-keeping is another. It seems simple enough to just put your receipts in a shoebox, and update your books based on your invoices, credit card statements, receipts and bank balances every month. All you need is an Internet connection and your shoebox. But the work itself cannot be automated.

Why is this stuff hard to get organized? Is it fundamentally hard? What are the consequences if you don’t keep up?

[Read more…]

Live Life, Not Projects

I first encountered the concept of arrival fallacies in Gretchen Rubin’s book The Happiness Project. Which goes to show that you should occasionally attempt to learn from people who are very unlike yourself (Greg Rader has a nice post about this from a few months ago). If you’ve been following my writing for any length of time, you probably know by now that I am deeply suspicious of the very idea of happiness, and its pursuit. The Rubins of the world rarely get on my radar.

An arrival fallacy in the sense of Rubin is any pattern of thinking that fits the template, I’ll be happy when ______ (Rubin credits Tal Ben-Shahar’s book Happier, which I haven’t read, for the concept).

The idea generalizes beyond happiness to any sort of goal-driven behavior. You could use templates like I’ll be ready to ____ once _____. Or I’ll really understand life when ________. Call the first template Type A (happiness fallacies), and the other two Type B (readiness fallacies) and Type C (enlightenment fallacies) respectively. There are probably other common types, but we’ll stick to three.

Let’s make up a list of examples of each type, for reference, before trying to understand arrival fallacies more deeply.

[Read more…]

Steer, Ready, Fire

I like various permutations and adaptations of the phrase ready, aim, fire to think about decision-making between the extremes of pure contemplation and pure action. Playing around with this phrase led me to this 2×2 (I seem to be thinking a lot in 2×2 form these days). I’ll connect the dots in a minute.

 

Aiming versus Feedback

The apparently logical sequence ready, aim, fire describes a feedforward model. You get your mind in the right place, then you figure out how to be effective (aim can map to waterfall planning at any level), then you take action.

The phrase ready, fire, aim, preferred by the action-oriented in uncertain and dynamic environments, is a response to the analysis-paralysis that can happen if you try to get to ideal starting conditions and perfect information before starting to act.

The absurdity of aiming after firing can only be resolved via appeal to the logic of iteration and feedback. You converge on the successful course of action through feedback from failed actions. This works well as a motto for startup types and others who believe in the release early and often, and fail fast approach to projects.

Then there is the phrase, ready, fire, steer. I am not sure who came up with that one, but I’ve heard it attributed to Paul Saffo.  This replacement of aim with steer suggests that real-time feedback and control can be continuous. It is the logical limit of iterating faster and faster. Heat-seeking or radar-guided missiles are perfect examples.

The Role of “Fire”

The variant ready, fire, steer made me wonder about why fire is even necessary. Within your basic firearm metaphor, firing gives you all your momentum (kinetic energy) in one big dose. Of course, you also have whatever positional advantages (potential energy) you possess.  It maps well to situations like getting investment in a startup, coming into a trust fund, or using a rocket to launch satellites.

But there are also cars and airplanes, with more continuous energy-generation models. There are also renewable-energy models like sailing ships, and models that create a net energy surplus, like a solar car that generates more energy than it needs.

These don’t need a fire step. You could do with just ready, steer thinking (or ready, start, steer if you insist). A lot of bootstrapped business models would qualify, as you use tiny or zero cash investments to get started, and nurture cash flows slowly to get where you want. You may be accumulating a surplus of cash or attention that you can conserve for later use.

It takes a lot more foresight to work without the boost of a fire stage, but in return you get more control and efficient use of resources, in cases where the fire represents borrowed energy, provided on terms that you don’t like.

In fact, you can often dispense with ready as well. The idea that you need a ready state, independent of informational preparedness, is more psychological fiction than reality. While you are contemplating doing anything, your readiness level changes over time, even before you adopt any sort of intention. As you process relevant information, your situation awareness may improve or degrade in quality, and you may become more or less oriented.

Ready really only matters in situations where there are decisive go/no-go thresholds defined by irreversible (or very expensive to reverse) actions, such as quitting your job or getting married, but ready as an internal state doesn’t really capture that. You’ll never be really ready. But as a continuously-changing state, your readiness may cross a minimum threshold associated with a given irreversible decision.  That threshold is set by external conditions.

This means that you start steering the moment even a tiny amount of readiness bubbles up into your consciousness. After that, the feedback process that is steer automatically moves your readiness level along.

So steer is really at the heart of it all. Continuous feedback control of energy, using information.

Ready is useful to add in where there is an important, unavoidable and irreversible decision inside the decision process.
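Here is a deliberately toy sketch of that picture in code. Every number, threshold, and update rule below is invented for illustration: steering is a continuous feedback loop, readiness is just a continuously-changing state, and fire is a one-shot commitment gated by an externally set irreversibility threshold.

```python
# Toy model of steer / ready / fire; all values are made up for illustration.
import random

IRREVERSIBILITY_THRESHOLD = 0.8   # set by external conditions, not by mood
readiness = 0.1                   # continuously-changing internal state
position, target = 0.0, 10.0      # where steering has gotten you, and the goal

for step in range(100):
    # Steer: continuous feedback control -- small corrections using information.
    error = target - position
    position += 0.1 * error + random.uniform(-0.2, 0.2)

    # Readiness drifts as information is processed; it can degrade as well.
    readiness = max(0.0, min(1.0, readiness + random.uniform(-0.05, 0.1)))

    # Fire: a one-shot, irreversible commitment, gated by the external threshold.
    if readiness >= IRREVERSIBILITY_THRESHOLD:
        print(f"fired at step {step}, position {position:.1f}")
        break
```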

Creating an Opening

Fire can actually come at the end as well, and this is the case that interests me the most these days.

In cases where you maneuver for an opening starting from unfavorable conditions (ready, steer), you could be accumulating a surplus capacity for action while waiting for a good opportunity to use it.

This could be a purely passive wait, or you could be actively trying to engineer an opening through “set up” moves.

This accumulating surplus might be money, information, a slowly-grown marketing asset like a blog, or a degree earned at night school. Or it might simply involve waiting and watching for environmental conditions trending in a certain direction to hit a threshold.

Within a large corporation, this could be a matter of making specific allies and accumulating a strong position around a currently unattractive business asset (such as a dog of a product that people think cannot do well in the future, or a sales region that nobody wants) and waiting for, or engineering, a way to work it.

For example, there was an optimal window of time for streaming video businesses to be launched, based on falling bandwidth costs. If you were in that business, you’d have been wise to adopt a ready, steer hold-and-accumulate strategy, waiting for your moment to fire.

Today, the emerging sector of 3D printing is in the wait zone for many people: once the technology becomes sufficiently cheap and some basic technology to exploit it has emerged (such as stable, cheap and easy to use software for generating designs), a lot of people are going to jump in.

Bootstrapping to Big

Since ready has to do with crossing externally-determined irreversibility thresholds more than being in some mystic state of perfect readiness, the steer, ready, fire sequence is great for maneuvering to create an opening, and then triggering an irreversible action that requires a burst of informed energy. This is what is typically referred to as a go-big-or-go-home moment.

One application of steer-ready-fire thinking is bootstrapped businesses that intend to grow big at the right time. These days, we’ve somehow bought into the illusion that bootstrapping is for lifestyle businesses and that you need professional investors to go big.

This is obviously false. If you steer to ready with sufficient foresight, carefully build cash-flow assets, and wait for or create the right opening, you can bootstrap and go big. Many big businesses before the 1940s were grown in precisely this fashion. Before investment banking became a big business in its own right in America in the 1870s (and later, the sub-sector of venture capital in the post-World War II era), big fortunes — including those of the two biggest Robber Barons, Vanderbilt and Rockefeller — were built through this sort of bootstrapped, leveraged model. There were times when Rockefeller in fact had more capacity to move the markets from the outside than his famous finance contemporary, J. P. Morgan, had on the inside.

Stepping back a bit, what’s common to all these approaches to thinking about decision processes is the interplay of energy and information in some abstract sense (where energy can be money or marketing potential for instance, in our running startup sector example). Acting with either too much or too little information, given your energy levels, is inefficient.  Having neither information nor energy is of course a stable situation.

Mindfulness is when energy and information dance together well. Note that you don’t necessarily have to keep them balanced at a specific moment. You can store both. So you might wait for energy to catch up with information, or vice-versa. Or you can accumulate both and unleash a ferocious burst of mindful action driven by a store of heavily-informed energy.

Sudden Actions, Entropy and OODA

That last part (accumulating both energy and information to enable sudden movements) took me a while to get to. For a long time, I was unable to reconcile sudden, high-power movements with the idea of mindfulness because I was fixated on the thought that mindful actions are necessarily smooth actions. They needn’t be. Jerky movements have a role to play in our world.

But there is a deeper level at which “slow” and “smooth” matter. This is where an abstract notion of entropy is relevant. Slow, smooth actions cause low increases in entropy. Quick, jerky actions cause high increases in entropy. Unfortunately, you cannot always work with low-entropy behavior because there is a lot of messiness in the outside world — the world that you don’t completely control. The smaller and more closed your world, the more you can approach the idea of working purely with slow, low-entropy actions.

This is why readiness is best thought of in relationship to irreversible-action thresholds determined by external conditions. In thermodynamics, isentropic processes (those that don’t increase entropy) are reversible. Entropy-increasing processes are not.

When you unleash a sudden action, entropy will increase. In decision-making terms, it means you’ll trigger action that is so fast that you cannot process the information being generated by feedback, so it will effectively act as noise. But there are situations where you know enough to know that this chaos you are unleashing will mostly favor you. This is reflected in the attitude that “I think it will all work itself out.” Eventually, when the dust settles, you will be able to get back to a more mindful engagement with the situation.

And of course, there will always be net entropy increases even after the dust settles. Being mindful about this realization is the same as accepting the inevitability of death.

Of course, this extended thermodynamic metaphor needs to be applied carefully in abstract situations, but I believe the correspondence is a very close one. The metaphor, together with the interplay of ready, fire, aim and steer in various permutations and combinations, is one approach to understanding how Boyd’s OODA model really works.

Smart Money and Dumb Money

You can extrapolate this sort of thinking to larger groups and organizations, and think about how energy (usually money in the human world) and information are distributed within an organization and the environment it operates in. You can talk about whether energy drives information or vice-versa.

In larger systems of people, power distributions often emerge out of the interplay of energy and information. Smart money represents information in control of energy. Dumb money represents the converse situation.

In the world of dumb money, entrepreneurs must chase investors. In the world of smart money, investors court entrepreneurs. Why?

In entrepreneurship, smart money is often used to refer to investment from people who can also provide information and advice. This is actually not particularly smart money. If an investor holds all the cards — money and information — what exactly does the entrepreneur bring to the table besides talent? That sort of relationship defines employment, not investment. Truly valuable information comes from unlikely places. Information from well-known sources, such as seasoned investors or former entrepreneurs, is unlikely to be particularly special or exclusive.  It is in fact likely to be common knowledge — it will help you lower costs of doing business, but not provide a competitive advantage.

A collaboration between a party with too much energy, and one with too much information, is fraught with tension. It is very hard to merge the two in mindful ways. One party is impatient and the other party is frustrated. Meetings between parties with unbalanced and complementary assets, who are also mindful about what they have and what they need, are quite rare.

The result is that power dynamics are triggered while things are sorting themselves out. This is one reason I advocate a slightly evil philosophy. Engaging the world outside your personal control means dealing with all this. Trying to be purely good is like trying to work with just smooth, slow, isentropic actions. It is just not workable when there are transient openings and irreversibility thresholds in the environment.

So it isn’t just individuals who have to gradually become more mindful decision-makers, gradually lowering the amount of sloth, impatience and frustration in their thinking. Organizations have to do it too. I can think of many frustrated, slothful or impatient organizations and groups, ranging in size from married couples to Fortune 500 companies and entire nations.

 

A Pilgrimage through Stagnation and Acceleration

Gregory Rader at onthespiral.com just posted an interesting synthesis of some of the ideas we’ve been discussing here lately. He’s taken elements of a couple of my recent posts, thrown in other ideas, and come up with a deeper explanation of why mindful learning curves, thrust, drag and 10x effects behave the way they do. He zeroes in on the idea of latent drag/lurking drag (drag that’s waiting to kick in) as the central meta-problem, and gets to several interesting insights.

But, suppose you have perfected the art of schedule management…have you permanently defeated the scourge that is drag?

Of course not.  Ultimately drag is anything that distracts you from thrust work.  Biological needs are sources of drag.  You surely know at least a few people who periodically engage in near-manic bouts of creative effort, largely by ignoring their needs to eat, sleep, or maintain decent hygiene.

Venkat focuses on schedule management because it is an obvious limiting factor.  Schedule management, for many people is the low hanging fruit.  However, alleviating one source of drag will only enable a temporary period of productive acceleration before another, previously latent source of drag emerges as a limiting factor.

For some relevant context, Greg is big on CrossFit training, and I suspect a lot of his thinking is informed by analogies to that transformation process. Read the whole post: A Pilgrimage through Stagnation and Acceleration (and the comment I’ve posted on stuff like moving bottlenecks and weakest-link dynamics).

Thrust, Drag and the 10x Effect

If you are only used to driving cars, it is hard to appreciate just how huge a force drag can be. The reason is that drag increases as the square of speed, so an object will experience 100 times the drag at 300 mph as it does at 30 mph. Not 10 times.
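As a quick worked version of that arithmetic, using the standard quadratic drag law with the speeds from the paragraph above:

$$F_d \propto v^2 \quad\Rightarrow\quad \frac{F_d(300\ \text{mph})}{F_d(30\ \text{mph})} = \left(\frac{300}{30}\right)^2 = 100.$$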

In Physics Can Be Fun, the Soviet popular-science writer Ya. Perelman provided a dramatic example of the consequences of drag. With drag, a typical long-range artillery shell travels 4 km. Without drag, the same shell would travel 40 km.

Or 10x further. Which brings me to the famous 10x effect in software engineering.

If you haven’t heard of it, the 10x effect is the anecdotal observation that great programmers aren’t just a little more productive than average ones (like 15-20%). They tend to be 10 times more productive. A similar effect can be found in other kinds of creative information work.

Can you transform yourself into a 10x person? If you meet certain qualifying conditions (by my estimate, maybe 1 in 4 people do), I think you can.

[Read more…]

Daemons and the Mindful Learning Curve

Humans naturally think about their own behaviors in terms of peak and trough performance levels, rather than means or medians. Without any performance tracking, we know our limits in a variety of domains. Each time we attempt a performance episode in any skilled domain, these limits change, yielding a learning curve. The curve has a characteristic tempo within each episode, depending on the quality of that episode, as well as a tempo across episodes that is correlated with quality. I prefer this model of the learning curve, which I made up, to the usual smooth, S-shaped one with a plateau at the end. I’ll explain why in a bit. This picture is related to what I called the Freytag Staircase in the book, but is not quite the same.

[Read more…]