When Monitoring a Behavior Makes it Worse

I’ve been doing some idle life-logging experimentation for the past two months with a Google Form and some simple Matlab analysis (project repo here). It was partly motivated by trying to operationalize some of my thinking around habit formation and falling off the wagon/getting back on, and partly by the vague idea that I might in the future unbundle Tempo into small idea chunks and rebundle those into an app, instead of writing a second edition.

In the two months, I learned a few lessons, big and small. Some were about the right way to log and analyze behaviors; others were about the nature of behaviors and habits themselves. All very interesting.

But perhaps the most interesting thing I learned was about the very idea of monitoring behaviors (with anything from a diary to an app): if you measure a behavior, it generally gets worse before it gets better, if it gets better at all.

Kinda like if you think too much about how you work the brake and accelerator while driving, you’ll suddenly start fumbling/jerking awkwardly like a student driver.

I think there are two things going on here:

  1. When you bring up any habit for conscious inspection with a tool, you regress from unconscious competence to conscious incompetence (see shu-ha-ri). This happens because most of your later mastery is unconscious, and paying conscious attention to what you’re doing suspends the unconscious parts.
  2. When the habit is a creative habit, there is an additional factor. For an uncreative habit, feedback of error via inspection or monitoring triggers dumb corrective actions. If you’re drifting out of your lane and your fancy new car beeps, you just steer back in. But if your monitoring is telling you that your “hit rate” for successful blog posts as a fraction of all blog posts is falling, there is no obvious action you can take to fix it. So being sensitized to the gap just increases anxiety, which makes performance worse.

The first is a manageable problem in a thoughtfully designed tool that foregrounds and manages the trade-off by setting the right expectations: “warning: this logging/monitoring app will make things worse before it makes them better, like any skill-learning aid.”

The second is a much more serious one. When the right response to a feedback error is a creative action, the tradeoff is between knowing more about the “stuck” situation versus heightened anxiety that prevents you from doing much with the data. Arguably, in this regime, the right way to handle the tradeoff is to turn off the monitoring and go open-loop for a while, trusting creative play behaviors to generate an event that unsticks you.

I think this is why common self-improvement goals like weight loss run aground once you hit the existing homeostasis point. If your body’s set-point is, say, 150 lbs and you are at 155 lbs due to too much Thanksgiving and Christmas over-eating, the diet-and-exercise routine in response to what you see on the scale every day is enough to get you back to 150 lbs. But if you’re hovering in the noise zone around 150 lbs (say +/- 2 lbs) and want to move the set-point itself to 140 lbs, you need a creative lifestyle shift.

Watching the scale daily is not helpful in achieving this goal. You need something else.
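
To make the set-point picture concrete, here is a toy sketch (the feedback rule and all the numbers are mine, purely illustrative, not a physiological model): reacting to the daily scale reading is enough to correct a post-holiday bump back to 150 lbs, but the same scale-watching rule aimed at 140 lbs stalls partway, because nothing in the loop moves the set-point itself.

```python
import random

SET_POINT = 150.0   # lbs; the body's homeostatic set-point (illustrative number)
NOISE = 2.0         # lbs; day-to-day fluctuation band around the true weight
GAIN = 0.2          # fraction of the visible gap corrected per day via diet and exercise
PULL = 0.1          # strength of the homeostatic drift back toward the set-point

def simulate(start_weight, target, days=90, seed=1):
    """Toy model: daily feedback on the scale reading, fighting a fixed homeostatic pull."""
    random.seed(seed)
    weight = start_weight
    for _ in range(days):
        reading = weight + random.uniform(-NOISE, NOISE)  # what the scale shows today
        weight -= GAIN * (reading - target)               # react to the visible gap
        weight += PULL * (SET_POINT - weight)             # body drifts back toward its set-point
    return round(weight, 1)

print(simulate(155.0, target=150.0))  # post-holiday bump: feedback pulls you back to ~150
print(simulate(150.0, target=140.0))  # scale-watching toward 140: you stall a few pounds
                                      # short, because nothing in this loop moves the
                                      # set-point itself
```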

I am not entirely sure about how to approach this interesting problem, but for starters, I think it’s useful to segment tools and behavior modification projects into two kinds: sustaining projects (no set points need to move, no creativity needed) and disruptive projects (set points need to move, creative insights needed). They are two very different regimes of behavior modification, and inspection/feedback/monitoring tools work very differently in the two regimes.

I believe we fall off the wagon when we have to shift between these regimes.

 

Time, Money and Bandwidth

The NYT has an interesting piece on the psychology of poverty, No Money, No Time:

My experience is the time equivalent of a high-interest loan cycle, except instead of money, I borrow time. But this kind of borrowing comes with an interest rate of its own: By focusing on one immediate deadline, I neglect not only future deadlines but the mundane tasks of daily life that would normally take up next to no time or mental energy. It’s the same type of problem poor people encounter every day, multiple times: The demands of the moment override the demands of the future, making that future harder to reach.

When we think of poverty, we tend to think about money in isolation: How much does she earn? Is that above or below the poverty line? But the financial part of the equation may not be the single most important factor. “The biggest mistake we make about scarcity,” Sendhil Mullainathan, an economist at Harvard who is a co-author of the book “Scarcity: Why Having Too Little Means So Much,” tells me, “is we view it as a physical phenomenon. It’s not.”

“There are three types of poverty,” he says. “There’s money poverty, there’s time poverty, and there’s bandwidth poverty.” The first is the type we typically associate with the word. The second occurs when the time debt of the sort I incurred starts to pile up.

Worthwhile perspective on time, decision-making and scarcity of cognitive resources. Similar in spirit to the research on decision fatigue I reblogged a while back.

Is Decision-Making Skill Trainable?

I shared an article a while back on decision fatigue. The article came up again in a recent discussion, and another idea was raised, this time from the fitness/training world: Acute Training Load vs. Chronic Training Load

“ATL – Acute Training Load represents your current degree of freshness, being an exponentially weighted average of your training over a period of 5-10 days…

CTL – Chronic Training Load represents your current degree of fitness as an exponentially weighted average of your training over a 42-day period. Building your CTL is a bit like putting money in your savings account. If you don’t put much in you won’t be able to draw much out at a later date.”

This seems like a very fertile idea to me.  The language here is very control-theoretic, and the idea seems to be basically about separating time scales of training in a useful way. It also seems to relate to what I think of as the raise the floor/raise the ceiling ways of increasing performance, which I talked about in the context of mindful learning curves.
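
For the control-theoretically inclined, here is a minimal sketch of the time-scale separation (the exponential-smoothing form and the 7/42-day constants are my assumptions about how these numbers are typically computed, not something taken from the quoted source):

```python
def ewma(daily_loads, time_constant):
    """Exponentially weighted average of daily training loads: each day the average
    moves a fraction 1/time_constant of the way toward that day's load."""
    avg = 0.0
    for load in daily_loads:
        avg += (load - avg) / time_constant
    return avg

# Hypothetical daily training loads (arbitrary units): six weeks of a weekly pattern,
# followed by a week off.
loads = [60, 0, 80, 50, 0, 100, 40] * 6 + [0] * 7

atl = ewma(loads, time_constant=7)    # acute: fast average, recent days dominate
ctl = ewma(loads, time_constant=42)   # chronic: the slow "savings account" of fitness

print(f"ATL (fast, ~7-day memory):  {atl:.1f}")
print(f"CTL (slow, ~42-day memory): {ctl:.1f}")
# The week off craters ATL but barely dents CTL: the same training history,
# read at two different time scales.
```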

The interesting question, as a friend of mine put it, is whether decision-making skill (and therefore decision-fatigue limits) responds to training the way our bodies do. I don’t mean this in the sense of gaining experience. That of course happens. I mean, being able to go for longer before performance degrades.

I think the jury is still out on that one.

Resilient Like a Fox

Last week, I was at the LIFT Conference in Geneva to speak on “resilience.” I did my 20-minute talk from a very Tempo-esque angle, using the fox and hedgehog archetypes that I talked about in Chapter 3 of the book. My thinking on using archetypes to analyze complex themes has been slowly getting more sophisticated, and I hope to do a stronger treatment of the idea in a future edition.

I’ve embedded the talk below, and you can also get to it via this link. You may also want to check out some of the other talks. If you’re based in Europe, I highly recommend the LIFT conference. It is unusually well designed and choreographed.

I’ll be developing this fox/hedgehog theme further in my upcoming talk at ALM Chicago in March.

Should You Count Near-Misses as Successes or Failures?

Wired has an excellent article on research on near-misses:

It is the paradox of the close call. Probability wise, near misses aren’t successes. They are indicators of near failure. And if the flaw is systemic, it requires only a small twist of fate for the next incident to result in disaster. Rather than celebrating then ignoring close calls, we should be learning from them and doing our very best to prevent their recurrence. But we often don’t.

“People don’t learn from a near miss, they just say, ‘It worked, so let’s do it again,’” Dillon-Merrill says. Other studies have shown that the more often someone gets away with risky behavior, the more likely they are to repeat it; there is a sort of invincibility complex. “For ego protection reasons, we like to assume that past events are a product of what we controlled rather than chance,” Tinsley adds.

This reminds me of similar research on driving accidents mentioned in Tom Vanderbilt’s Traffic, and of a device that teaches young drivers using near-misses, which are far more common than the drivers themselves realize.

HT: Jordan Peacock

Fertile Variables and Rich Moves

Engineers and others attracted to comprehensive systems views often fail in a predictable way: they translate all their objectives into multi-factor optimization models and trade-off curves, which then yield spectacularly mediocre results. I commented on this pathology as part of a recent answer on Quora to a question about choosing among multiple job offers, and I figured I should generalize that answer.

Why is this a failure mode? Optimization is based on models, and  this failure mode has to do with what you have left out of your model (either consciously or due to ignorance or a priori unknowability). If there are a couple of dozen relevant variables and you build a model that uses a half-dozen, then among those chosen variables, some will have more coupling to variables you’ve left out than others. Such variables serve as proxies for variables that aren’t represented in your model. I’ll overload a term used by statisticians in a somewhat related sense and call these variables fertile variables. Time is a typical example. Space is another.  Money is a third, and particularly important because ideological opinions about it often blind people to its fertile nature. Physical fitness is a fourth.
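
To make the proxy effect concrete, here is a deliberately crude sketch using the job-offer setting (the scoring functions, the numbers, and the “hidden” variables are all invented for illustration, not a recommended model): salary is coupled to unmodeled things like savings buffer and optionality, so a trade-off model that scores it only at face value can rank the offers backwards.

```python
# Toy illustration of a fertile variable: the true value of an offer depends on variables
# the trade-off model never sees, and those hidden variables are coupled to one modeled
# variable (salary) far more than to the other (prestige). All numbers are made up.

offers = {
    #            salary  prestige
    "Offer A": (90_000,   9),
    "Offer B": (120_000,  5),
}

def modeled_score(salary, prestige):
    """What the explicit trade-off model optimizes: a weighted sum of visible factors."""
    return salary / 10_000 + 2 * prestige

def true_value(salary, prestige):
    """What actually matters: the visible factors plus hidden ones (savings buffer,
    optionality, slack) that ride along with salary -- the coupling the model left out."""
    hidden = 0.5 * (salary / 10_000) ** 1.5
    return modeled_score(salary, prestige) + hidden

for name, (salary, prestige) in offers.items():
    print(f"{name}: modeled {modeled_score(salary, prestige):5.1f}, "
          f"true {true_value(salary, prestige):5.1f}")
# The model ranks A above B; the true values rank B above A, because salary quietly
# drags the hidden variables along with it. Salary is behaving as a fertile variable.
```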

Fertile variables feed powerful patterns of action based on what I will call rich moves. 

[Read more…]

Analysis-Paralysis and The Sensemaking Trap

Analysis-paralysis is when you get into a loop of continuous analysis that prevents you from breaking on through to the “other side” where action can begin. I am beginning to get a handle on the problem, but it is not going to make much sense to you unless you’ve read the book. So this is in the advanced/extra-credit department. Perhaps after some more thought I’ll be able to capture this idea in a simpler way.

In the Double Freytag model of narrative decision-making, analysis-paralysis corresponds to getting stuck in the sense-making phase. Why does this happen?

[Read more…]

Thinking in a Foreign Language

This is an idea that simply refuses to go away. Ever since the Sapir-Whorf hypothesis was debunked in its original naive form, the idea that language shapes thought keeps popping up. Now the behavioral economists have weighed in to show that decision-making changes when you switch languages. The research is reported in a Wired article, Thinking in a Foreign Language.

This looks like it is primarily about the mere fact of shifting gears to a different language causing greater deliberation. But I strongly suspect there are going to be patterns related to mental model construction and use in the to and from languages as well (i.e., specific ordered language pairs, (A, B), will likely have measurable and characteristic effects on the nature of decision-making).

You’d need more subtle tests for that though.

The researchers next tested how language affected decisions on matters of direct personal import. According to prospect theory, the possibility of small losses outweighs the promise of larger gains, a phenomenon called myopic loss aversion and rooted in emotional reactions to the idea of loss.

The same group of Korean students was presented with a series of hypothetical low-loss, high-gain bets. When offered bets in Korean, just 57 percent took them. When offered in English, that number rose to 67 percent, again suggesting heightened deliberation in a second language.

To see if the effect held up in real-world betting, Keysar’s team recruited 54 University of Chicago students who spoke Spanish as a second language. Each received $15 in $1 bills, each of which could be kept or bet on a coin toss. If they lost a toss, they’d lose the dollar, but winning returned the dollar and another $1.50 — a proposition that, over multiple bets, would likely be profitable.

When the proceedings were conducted in English, just 54 percent of students took the bets, a number that rose to 71 percent when betting in Spanish. “They take more bets in a foreign language because they expect to gain in the long run, and are less affected by the typically exaggerated aversion to losses,” wrote Keysar and colleagues.

The researchers believe a second language provides a useful cognitive distance from automatic processes, promoting analytical thought and reducing unthinking, emotional reaction.
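
As an aside, the “likely profitable” claim in the quoted passage is easy to check (the back-of-the-envelope below is mine, not from the study): each $1 bet pays +$1.50 or -$1.00 on a fair coin, an expected value of +$0.25 per bet, so betting all fifteen dollars leaves you ahead roughly 70 percent of the time.

```python
import random

# Expected value of a single $1 bet from the quoted experiment:
# a fair coin, win +$1.50 or lose the $1.00 staked.
ev_per_bet = 0.5 * 1.50 + 0.5 * (-1.00)
print(f"Expected value per $1 bet: {ev_per_bet:+.2f} dollars")   # +0.25

# Rough simulation (details mine, not from the study): how often does betting
# all fifteen $1 bills leave you with more than the $15 you started with?
random.seed(0)
trials = 100_000
ahead = sum(
    1
    for _ in range(trials)
    if sum(1.50 if random.random() < 0.5 else -1.00 for _ in range(15)) > 0
)
print(f"Chance of coming out ahead after 15 bets: {ahead / trials:.0%}")   # ~70%
```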

 

Hacking Grand Narratives

Grand narratives are probably the most frequently mentioned subject in reactions I get to Tempo, even though I carefully restricted myself to individual narratives in the book. Apparently the urge to apply narrative models to collectives is irresistible. Several readers have gone ahead and sort of hacked the narrative models I discuss in Tempo, and applied them to grand narratives. To be frank, I don’t completely understand most of these attempts. I know of applications to unconventional crisis response, the political process in Honduras, the history of Western art, and the history of debt/finance.

But as I’ve mentioned in previous posts, I am treading carefully here. I’ve learned something from each hacking attempt people have told me about (do share if you’ve tried this sort of thing), and I’ve made two experimental attempts myself: applying the model to 19th-century American business/technology history and, on a smaller scale, to software projects. I am starting a third experiment: applying narrative analysis to wannabe-Silicon-Valley tech hubs like Boulder and Las Vegas. But overall, I am not satisfied that my models (or anyone else’s) are good enough yet.

But let me try and lay out the problem here, and have you guys weigh in.

[Read more…]

Trigger Narratives and the Nuclear Option

We use the phrase nuclear option rather casually as an everyday metaphor for highly consequential, irreversible and consciously triggered decisions. But chances are, you’ve never actually considered how the actual nuclear option is managed. The turning of this one little key (the picture is of an actual nuclear trigger) is easily the most analyzed decision in history. The design of the decision process around it is one of the greatest feats of narrative engineering ever accomplished. That the trigger has (knock on wood) not been pulled since World War II is an engineering accomplishment comparable to the Moon landing.

The nuclear option is the most extreme example of a special kind of decision narrative that I call a trigger narrative: one built around a major decision that requires an explicit triggering action after all the preparation is done. Think of proposing marriage, submitting a manuscript to an editor, or issuing a press release. Not all major decisions are framed by trigger narratives, but for those that are, the nuclear trigger narrative has much to teach.

[Read more…]