Data is Eating Clocks

It struck me recently that Marc Andreessen’s now-famous observation that software is eating the world has a special case that is particularly interesting for students of the history of the industrial revolution.

Data is eating clocks.

Until about fifteen years ago, I wore a watch. One day I lost it and never replaced it. The only time I look at a clock these days is when I have to catch a train or plane. I only think about the date when I have to sign a legal document. Most of the time, the day of the week matters more.

The clock was both a motif for the industrial revolution and a critical piece of technology driving it. Every small town in Europe gradually acquired a village clock tower. In the US, standardized time zones emerged to coordinate the schedules of the transcontinental railroads.

One reason precise time-keeping was so important in the industrial age is that when data is scarce, synchronization becomes critical to many activities. If you don’t know where your friend is, you have to set a precise time and place to meet: “let’s meet at Starbucks at 10:30.” But if you can text, you can coordinate in much looser ways: “I’ll text you when I am close to downtown and we can figure out where to meet.”

Behavior becomes more responsive to real-time situational details, and more robust to delays. Synchronization, a fragile coordination technique, becomes less necessary.
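
As a toy illustration (the scenario and every number here are my own invention, not from any study), here is a sketch of why loose, message-driven coordination is more robust to delays than a fixed rendezvous:

    import random

    random.seed(42)

    # Both friends aim to arrive "around 10:30" but are randomly late.
    # With a synchronized plan, the earlier arrival waits idle for the
    # later one. With loose text-based coordination, the meeting point
    # is settled only once both are nearby, so idle time shrinks to a
    # small messaging overhead.

    def idle_time_fixed_plan(delay_a, delay_b):
        return abs(delay_a - delay_b)  # hours spent waiting at Starbucks

    def idle_time_texting(delay_a, delay_b):
        return 0.05                    # ~3 minutes of "where are you?" texts

    trials = [(abs(random.gauss(0, 0.5)), abs(random.gauss(0, 0.5)))
              for _ in range(10_000)]
    fixed = sum(idle_time_fixed_plan(a, b) for a, b in trials) / len(trials)
    loose = sum(idle_time_texting(a, b) for a, b in trials) / len(trials)
    print(f"average idle time, fixed rendezvous: {fixed:.2f} hours")
    print(f"average idle time, texting:          {loose:.2f} hours")

The point is not the numbers but the shape of the trade: the fixed plan’s cost grows with the variance of the delays, while the texting cost stays flat.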

Interestingly enough, Chet Richards, a close associate of John Boyd, told me that Boyd hated the idea of synchronization, which was antithetical to his conception of maneuver warfare. Synchronization, however, was central to network-centric warfare, a doctrine often viewed as opposed to maneuver warfare.

I think the human world is increasingly going to become liberated from clocks and calendars. This is the literal manifestation of atemporality. Clocks will remain extremely important to coordination between artificial technologies, however. Cellphones, satellites, data centers: all need very precise clocks to talk to each other properly.
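
How precise? As a minimal sketch, here is the offset-and-delay arithmetic at the heart of NTP-style clock synchronization (the timestamps are invented for illustration):

    def ntp_offset_and_delay(t1, t2, t3, t4):
        """Classic NTP clock-sync arithmetic.
        t1: client sends request (client clock)
        t2: server receives request (server clock)
        t3: server sends reply (server clock)
        t4: client receives reply (client clock)
        """
        offset = ((t2 - t1) + (t3 - t4)) / 2  # estimated client clock error
        delay = (t4 - t1) - (t3 - t2)         # round-trip network delay
        return offset, delay

    # Invented timestamps (seconds): a client clock running ~0.8 s slow.
    offset, delay = ntp_offset_and_delay(100.000, 100.850, 100.860, 100.110)
    print(f"estimated offset: {offset:+.3f} s, round-trip delay: {delay:.3f} s")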

Going by the increasing importance of clocks to the inner workings of technology, the artificial world is apparently going through an industrial revolution of its own.

The Examined Life

A useful idea for people interested in narrative-driven decision making is the Socrates quote: the unexamined life is not worth living.

Fair enough, but how do you actually apply this insight? Clearly you need an element of living to provide fodder for the examining. You cannot be born and raised in a dark sensory-deprivation chamber and do any useful examining (in fact, horrendous medieval experiments along these lines generally destroyed the unfortunate victims).

How do you balance examining versus living?

Here’s a quick primer. It’s more subtle than you might think.

[Read more…]

Annealing the Tactical Pattern Stack

Human behaviors are complicated things. They are easy to describe, as fragments of narratives, but hard to unpack in useful and fundamental ways. In Tempo, I offered a model of behavior where universal tactics (universal in the sense of arising from universally shared conceptual metaphors, and being enacted in domain-specific ways) form a basic vocabulary, and are enacted through basic decision patterns, which are like basic sentence structures in language.

I suggested that there are four basic kinds of tactical pattern: reactive, deliberative, procedural and opportunistic. These can be conceptualized via a 2×2 where the x-axis represents the locus of the information driving the action (inside or outside your head) and the y-axis represents whether the information has high or low visibility (i.e., whether it is explicit and in awareness, or part of the frame/background, below awareness).

While writing the book, I tried to figure out whether these behaviors also form a natural hierarchy of sorts. I was unable to make up my mind, so I did not include the idea in the book. Now I think I have a good model. The stack looks like this (the simplicity is deceptive):

[Figure: the tactical pattern stack]

Why? And how should you understand this diagram?

[Read more…]

Demystification versus Understanding

I am getting really interested in distinctions between types of knowledge these days. I think these distinctions are very important to the invisible structure of mental models.

One distinction whose importance I have come to appreciate increasingly is the one between demystification and understanding. Both are types of appreciative knowledge. I define them as follows:

  • To demystify something is to understand it to a level where you no longer feel anxious about your ignorance.
  • To understand something is to have the same priorities as experts regarding that something.

The latter is in fact an implicit chicken-and-egg definition of expertise.
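
As an aside, one way to make that test concrete (an illustrative operationalization of my own, not something from the book) is to compare how you and an expert rank the same list of concerns, using rank correlation:

    from itertools import combinations

    def kendall_tau(rank_a, rank_b):
        """Rank correlation: +1 means identical priorities, -1 reversed."""
        concordant = discordant = 0
        for x, y in combinations(rank_a, 2):
            a = rank_a.index(x) - rank_a.index(y)
            b = rank_b.index(x) - rank_b.index(y)
            if a * b > 0:
                concordant += 1
            else:
                discordant += 1
        n = len(rank_a)
        return (concordant - discordant) / (n * (n - 1) / 2)

    # Hypothetical priority orderings over the same five concerns.
    expert = ["correctness", "interfaces", "performance", "style", "tooling"]
    novice = ["style", "tooling", "performance", "correctness", "interfaces"]
    print(f"agreement with the expert: {kendall_tau(expert, novice):+.2f}")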

There is a shallow sense in which I can come across as very “knowledgeable.” Very few important things utterly mystify me that do not also mystify everybody else. When I encounter a new idea, I usually have some way to parse it. I am rarely at a loss over what to make of it. But this knowledge is only slightly deeper than the knowledge of a librarian who knows how to classify a book on any subject in a catalog.

So what is real understanding? Why is having the same priorities as experts a good test?

[Read more…]

Positioning Moves versus Melee Moves

My general philosophy of decision-making de-emphasizes the planning/execution distinction. But I am not an agility purist. Nobody is. You can think of the Agility Purist archetype as a useful abstraction. This mythical kind of decision-maker believes that a mind and personality that is sufficiently prepared for a particular domain (say programming or war or biochemistry) needs no preparation for specific situations or contingencies. This magical being can jump into any active situation in that particular domain and immediately start acting effectively.

At the other extreme, you have an equally mythical Planning Purist archetype, who has thought through every possible contingency all the way through to the end, and can basically hit “Start” and reach a successful outcome without further thinking. In fiction, this is best represented by jewelry heist capers based on long, involved and improbably robust sequences of moves, as in Ocean’s Eleven or The Italian Job. A few token things go wrong, but overall, these narratives play out like Rube Goldberg machines.

Clearly, reality lives somewhere in the middle. But planning vs. execution is not always a good pair of trade-off variables to create reality out of these two asymptotic myths. That distinction only works when there are a lot of known, hard temporal constraints or formal logical constraints (socks before shoes) in play. These actually help simplify things and make planning/execution a useful model.

When there is none of this temporal structure (what David Allen calls “a hard landscape”) and everything is rather fluid and chaotic, I find it useful to think in terms of a different distinction: positioning moves vs. melee moves. I learned of this distinction from Alfred Thayer Mahan’s The Influence of Sea Power Upon History. Here’s a brief primer.

[Read more…]

Stress Failures versus Decay Failures

There is a rich history to the idea that the state of your personal environment reflects the state of your mind. So a cluttered office reflects a cluttered mind, for instance. This is why I made the connection explicit and foundational in Tempo by assuming that designed environments are primarily projections of mental models, created via codification and embedding into field-flow complexes (the big brother of systems and processes).

Clutter is the most obvious manifestation of the mind-environment mapping, but I want to comment on a less-appreciated one: brokenness. 

There are environments where things are in a constant state of disrepair and brokenness. What do such broken environments reveal about the mental models that created them?

Brokenness implies a physical failure in the past.

There are two major sources of failure: operational stress and decay.

Operational stress failure happens when a heavily used system is subjected to a rare loading condition that breaks it.

Decay failure, on the other hand, happens when a rarely used system degenerates internally through disuse, until a common loading condition is enough to break it.
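
The two failure modes are easy to caricature in a few lines of simulation (a sketch with invented load and strength numbers, purely to illustrate the distinction):

    import random

    random.seed(7)

    def first_failure(strength_on_day, days=10_000):
        """Return the first day a random daily load exceeds strength."""
        for day in range(days):
            load = random.expovariate(1.0)  # typical load ~1, rare spikes
            if load > strength_on_day(day):
                return day
        return None

    # Stress failure: a heavily used, maintained system keeps its full
    # strength, and only a rare extreme load breaks it.
    stressed = first_failure(lambda day: 8.0)

    # Decay failure: a disused system loses strength steadily, until an
    # utterly ordinary load is enough to break it.
    decaying = first_failure(lambda day: max(8.0 - 0.01 * day, 0.0))

    print(f"stress failure on day: {stressed}")
    print(f"decay failure on day:  {decaying}")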

An environment that is in a constant state of brokenness because operational failures are coming in faster than repairs can be made is in a state of war. One that is in a constant state of brokenness because things are decaying and collapsing is in a state of atrophy.

Neither is sustainable. A state of war must eventually lead to victory or defeat. This kind of brokenness requires stepping back to rethink mental models and modify field-flow complexes. If the rare loading condition is truly rare (Katrina, for example), you might need to rethink your insurance model. If a once-rare loading condition is suddenly common, you need to redesign the whole thing operationally.

Atrophy happens either because nothing is happening in your life (so you need to get some action going) or because you built useless/non-functional environments. A state of atrophy is also not sustainable. It can turn into gangrene. You must either excise the decaying portions to protect the healthy portions, or start subjecting them to stress so that they start to regenerate.

Healthy environments aren’t unbroken ones. They are environments where different things get broken as time progresses, repair is mostly able to keep up, and the brokenness does not spiral out of control. The variety in what breaks down suggests that your mental models, as well as the environment, are evolving in a healthy way. If the same thing keeps breaking down, there is something stupid in your thinking.

Repair must also be able to keep up. If it overtakes breakage to the point that your environment is routinely in a state of perfection, you are not doing enough. If, on the other hand, brokenness accumulates to the point where you are constantly fighting fires, you need to upgrade capabilities all around.

Appreciative versus Manipulative Mental Models

In my early training in mathematical and computational modeling, an idea was drilled into my head by many teachers: make your models as simple as possible. But somehow, I’ve always resisted this urging. I’ve instinctively gravitated to greater complexity; even intractable complexity. Sometime later in my career, I encountered the slightly more refined principle: start with the simplest model of the problem that you don’t know how to solve. 

Still, I did not like the advice. Even with Einstein’s credibility behind it (“as simple as possible, but no simpler”), something about the advice seemed wrong to me.

A few years ago, I found the key clue to the simplicity principle. A work colleague offered the principle: how you model something depends on what you want to do with the model. 

[Read more…]

Time Lensing

We all experience lenses and fun-house mirrors from an early age. Some people wear glasses, while others have very acute vision, better than 20/20. Some are colorblind, while presumably others are more sensitive to color differences. We know that there are birds and animals that see space very differently from us.

So we are used to the idea that our perception of space depends on how we see. We are used to the idea that if how we see space by default isn’t good enough, we can buy and use telescopes and microscopes to change how we see.

Time is actually a very similar dimension and exhibits exactly the same phenomena, but our intuitions around time are far worse.

For example, if you are angry or sick, time can seem to pass much more slowly than if you are having fun or are healthy. Alcohol generally slows down the perception of time passing (a drunken hour seems longer than a sober one). Coffee speeds it up.

Various meditative practices or extraordinary situations (like being involved in a major fire, being on a battlefield, etc.) can make time appear to almost stand still, or make hours seem like minutes.

There has been some systematic study of these things (which I’ve referenced in Tempo, such as the early work of Ornstein), but in general, the phenomenology of time perception remains understudied. It is just hard to study in laboratory conditions. But it is not hard to study in your own life.

It is useful to think of yourself as going through life with varying kinds of time lenses stuck between your consciousness and the universe. Sometimes you are experiencing time through a microscope or telescope. Sometimes in a convex mirror. You can deliberately put on different types of time glasses for different purposes (coffee, alcohol, music). You can learn mindfulness meditation — the equivalent of getting Lasik surgery for your time-eyes.

The value of gaining some conscious control over your time-perception is that you can experience reality at different levels of resolution, both external reality and your own thoughts. Sometimes it is useful to see all the pores in your time-skin, just as it is useful to see your hair roots in a convex mirror while shaving.

If you are a computer science or information theory geek, you can think of consciousness as having a sort of raw bit-rate, and your time-lens as being able to experience that stream at a certain sampling rate and resolution.
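
If you want to push the geek analogy further, here is a toy rendering of it (entirely my own illustrative construction): the same raw stream, experienced through lenses with different sampling rates and resolutions.

    import math

    def experience(stream, step, levels):
        """Sample a raw stream every `step` ticks and quantize each
        sample into one of `levels` buckets: a crude model of a
        time-lens with a given rate and resolution."""
        lo, hi = min(stream), max(stream)
        width = (hi - lo) / levels
        return [min(int((s - lo) / width), levels - 1)
                for s in stream[::step]]

    # The raw "bit stream" of a day: a slow trend plus fast flicker.
    raw = [math.sin(t / 20) + 0.3 * math.sin(t) for t in range(200)]

    print("fine lens:  ", experience(raw, step=5, levels=16)[:10])
    print("coarse lens:", experience(raw, step=25, levels=4)[:10])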

But I am not particularly enamored of the idea of developing strong time-vision for its own sake. So long as I am wearing time-lenses appropriate to the task at hand, I am fine. I don’t need an electron microscope when a hand-held magnifying glass will do.

The Second Most Important Archetype in your Life

In Tempo, I distinguished between two broad classes of archetypes: generic ones that have names and explicit descriptions, which apply loosely to many people, and specific ones that apply to just one person, and may be only implicitly recognized based on characteristic behaviors.

The more intimately and personally you know somebody, the more you need a specific and implicit archetype. This means that your self-archetype is the one that has to be the most specific. At least if you agree that self-awareness is generally a good thing to seek.

This does not mean that a specific archetype needs to be detailed. It can still be an impressionistic thumbnail sketch that is no more than a characteristic shrug or turn of phrase. It merely needs to be one of a kind: sui generis.

Your self-archetype is arguably the most important archetype in your life. It can be either specific or generic, and either a thumbnail or very detailed. But most often it is specific and detailed. It is sometimes useful to compute with a very generic, thumbnail self-archetype, to break out of toxic self-absorption.

What do you think is the second most important archetype? Hint: it is not necessarily the one that maps to your significant other.

[Read more…]

Live Life, Not Projects

I first encountered the concept of arrival fallacies in Gretchen Rubin’s book The Happiness Project. Which goes to show that you should occasionally attempt to learn from people who are very unlike yourself (Greg Rader has a nice post about this from a few months ago). If you’ve been following my writing for any length of time, you probably know by now that I am deeply suspicious of the very idea of happiness, and its pursuit. The Rubins of the world rarely get on my radar.

An arrival fallacy in the sense of Rubin is any pattern of thinking that fits the template, I’ll be happy when ______ (Rubin credits Tal Ben-Shahar’s book Happier, which I haven’t read, for the concept).

The idea generalizes beyond happiness to any sort of goal-driven behavior. You could use templates like I’ll be ready ____ once _____. Or I’ll really understand life when ________.  Call the first template Type A (happiness fallacies), and the other two Type B (readiness fallacies) and Type C (enlightenment fallacies) respectively. There are probably other common types, but we’ll stick to three.

Let’s make up a list of examples of each type, for reference, before trying to understand arrival fallacies more deeply.

[Read more…]