
Archive for the ‘Examples’ Category

Human knowledge in the linear domain does not transfer properly to the complex domain

Nassim Nicholas Taleb is a brilliant thinker, philosopher and writer.  I have read and enjoyed Fooled by Randomness as well as Black Swan and am looking forward to his most recent, Antifragile (reviewed in The Economist here).

Taleb had an article in Foreign Affairs in 2011 which hit on some of his main arguments (as applied to international relations) – The Black Swan of Cairo.  Black Swan is Taleb’s term for an unpredicted (and essentially unpredictable) event that disrupts the status quo.

Complex systems that have artificially suppressed volatility tend to become extremely fragile, while at the same time exhibiting no visible risks. In fact, they tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface. Although the stated intention of political leaders and economic policymakers is to stabilize the system by inhibiting fluctuations, the result tends to be the opposite. These artificially constrained systems become prone to “Black Swans”—that is, they become extremely vulnerable to large-scale events that lie far from the statistical norm and were largely unpredictable to a given set of observers.

Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.

[snip]

Humans simultaneously inhabit two systems: the linear and the complex.  The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as “tipping points.” Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.

[snip]

Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans’ sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities.  Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.

Taleb also mentions, but does not elaborate on, the important issue of “the illusion of local causal chains—that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect.”  There is a tendency to see the last event as the “cause” of something when in fact it is sometimes simply the catalyst of a systemic readjustment, i.e. the straw that broke the camel’s back.  It wasn’t the straw per se, but the cumulative weight that preceded it.
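The catalyst-versus-cause distinction can be sketched in a toy simulation (purely illustrative; nothing here comes from Taleb's article, and every name and number is invented): identical grains accumulate until a hidden threshold is crossed, so the final grain is physically indistinguishable from all the others.

```python
import random

def sand_pile_collapse(threshold=1000, trials=2000, seed=42):
    """Toy model: identical grains accumulate one at a time and the pile
    collapses when the cumulative load crosses a hidden breaking point.
    Blaming the final grain confuses the catalyst with the cause
    (the accumulated structure)."""
    rng = random.Random(seed)
    collapse_points = []
    for _ in range(trials):
        limit = rng.gauss(threshold, threshold * 0.1)  # hidden breaking point
        load = 0
        while load < limit:
            load += 1  # every grain adds the same tiny weight
        collapse_points.append(load)
    return collapse_points

points = sand_pile_collapse()
# The last grain is the same as the first; only the accumulated
# load at collapse differs from trial to trial.
print(min(points), max(points))
```

The spread between the earliest and latest collapses is entirely a property of the pile's structure, never of the last grain added.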

There are echoes of Stephen Jay Gould and Niles Eldredge’s theory of punctuated equilibrium, in which they argued that evolution is not a smooth, continual process but rather one that proceeds in fits and starts.

Taleb is arguing that our efforts to ensure near-term tactical stability are often at odds with desirable system evolution over the long run.  It is a classic trade-off decision.  He has observed many times that good tactical intentions often end up unintentionally leading to catastrophic strategic outcomes.  Forest fire management is an example.  Nobody wants forest fires, and for decades the strategy was simple fire suppression: keep fires from happening, and put them out as fast as possible when they do.

Fire suppression did reduce fires in the near term, but it regrettably allowed forests to accumulate much greater fuel loads than they would carry under natural conditions, where lightning-strike fires periodically clear dead brush.  The result has been increasingly frequent, vast, and intense wildfires beyond control.  A strategy for achieving near-term stability (fewer wildfires) has ended up worsening the situation in the long run.

When making a strategic decision, it is important to consider the historical context.  Has the existing system evolved over time, and therefore acquired some base level of stability, or has it existed in an unnatural state of artificial stability with all variance suppressed?  If it is the latter, then any new action may have unanticipated consequences that have little to do with the intended plan of action and arise simply from the cumulative weight of avoided evolution.

Go away and think

Wise counsel on understanding the history and context of any fact or assumed causation.  When solving problems, we often want to start with a clean slate, unencumbered by history.  Even where that is possible, it is not always wise.  Only when we understand the nature of the status quo do we begin to be in a position to intelligently change it.  From G.K. Chesterton in The Thing (1929), Ch. IV: The Drift From Domesticity.

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable.  It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.

The power of storytelling versus the power of the story

Is there a difference?  More than we usually realize.

Nassim Nicholas Taleb describes the issue nicely in The Black Swan:

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

We are primed for making sense of the world, to see patterns in nature and in data.  Success often depends on accurate anticipation of what will happen next.  We look for patterns in things to use as a means of forecasting.  But sometimes that beneficial habit betrays us.  We see patterns where there are none.

Kaiser Fung, a statistician, makes the observation that in most news articles there is a point where the reporter transitions from reporting the story to telling a story: a shift from the agreed facts to an imputation of cause that is not there.  It is the point where we move from facts to speculation, and it is important to distinguish the two.

Fung offers an example from a Los Angeles Times article, Who’s teaching L.A.’s kids?  The article starts out reporting the results from “a statistical approach known as value-added analysis, which rates teachers based on their students’ progress on standardized tests from year to year.”   For example, the first finding reported is:

Highly effective teachers routinely propel students from below grade level to advanced in a single year. There is a substantial gap at year’s end between students whose teachers were in the top 10% in effectiveness and the bottom 10%. The fortunate students ranked 17 percentile points higher in English and 25 points higher in math.

After a series of factual findings, the reporter then transitions.

Nevertheless, value-added analysis offers the closest thing available to an objective assessment of teachers. And it might help in resolving the greater mystery of what makes for effective teaching, and whether such skills can be taught.

On visits to the classrooms of more than 50 elementary school teachers in Los Angeles, Times reporters found that the most effective instructors differed widely in style and personality. Perhaps not surprisingly, they shared a tendency to be strict, maintain high standards and encourage critical thinking.

But the surest sign of a teacher’s effectiveness was the engagement of his or her students — something that often was obvious from the expressions on their faces.

The transition is subtle and not obvious unless you are looking for it.  In a matter of column inches we have gone from the certainty of “There is a substantial gap at year’s end between students whose teachers were in the top 10% in effectiveness and the bottom 10%” (an objective and empirical finding) to the speculation that “the surest sign of a teacher’s effectiveness was the engagement of his or her students.”  The performance gap is measured empirically and objectively, but the proposed cause of the gap (engagement) is not.  Instead, the reporter attributes the outcomes to a cause, engagement, which he claims to ascertain from “the expressions on their faces.”  We have moved away from the factual story to the entertainment of speculative storytelling.

It seems obvious when pointed out but people most often go with the flow without recognizing the phase change.  It is important to acknowledge that the reporter might be correct, that the teacher’s capacity to engage students might indeed be the cause of their improved results.  But that is an undocumented and unproven assumption which we need to treat differently from the documented facts.

Accurate facts are the lifeblood of good decision-making.  Without them, everything is faith and hope.  Not bad things in themselves but as the adage has it, Hope is not a strategy.

Most routine problems arise not because we don’t know what needs to be done but because we don’t do what we know needs to be done.  The fault lies not in ignorance of the facts but in will.  And so we tell stories.  We link the facts together in a sequence from beginning to end, from A leading to Z, each set of facts causing the next action.  Tied together, the individual facts begin to create a sense of inevitability, increasing our confidence and our willingness to act.

But a well told story is not the same thing as an accurate story.  An accurate story means we know a series of facts AND we know the causal relationship between those facts and the final outcome we are trying to attain.  Most often what we know are some select facts but then we fill in the blanks about causes with what makes most sense to us.   These unacknowledged assumptions are often the genesis of both failure and unintended consequences.

Those filled-in blanks covering missing facts and unknown causations are assumptions that undermine our decision-making.  Some facts simply can’t be known and we do have to make assumptions.  But we have to do it consciously with a plan to mitigate the consequences if our assumption turns out to be wrong.  It is the unrecognized assumption, usually created in the storytelling process, which ends up sinking the ship.

The most common form of error is the classic logical fallacy of post hoc ergo propter hoc (after this, therefore because of this).  We see one thing correlated with another and we leap to the conclusion that the one thing caused the other.  You go into a wealthy neighborhood and see many luxury cars.  You assume that luxury cars must cause wealth.  Put that way, it is patently absurd, but that is the logic that underpinned US housing policy in the latter two decades of the 20th century and which ended so disastrously in 2008.  The argument was in part that homeownership ought to be encouraged through relaxed mortgage requirements and other programs because many positive social outcomes (lower mortality and morbidity, higher educational attainment, lower teen pregnancy, higher savings rates, higher incomes, etc.) were correlated with home ownership.  It was assumed that home ownership fostered desirable traits such as diligence, planning, and responsibility.  If you increased home ownership, you would increase those traits and therefore correspondingly increase life expectancy, educational attainment, and the rest.

In hindsight, the error is obvious; the traits themselves facilitate both homeownership and the other desirable life outcomes.  Classic post hoc ergo propter hoc.
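The homeownership error can be sketched as a toy confounding model (purely illustrative; the variable names and effect sizes are invented): a latent trait drives both homeownership and the life outcome, ownership itself has zero causal effect, and yet owners still score measurably higher.

```python
import random

def simulate(n=10_000, seed=0):
    """Toy confounding model: a latent trait ('diligence') drives both
    homeownership and the life outcome; ownership itself contributes
    nothing to the outcome."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        diligence = rng.random()
        owns_home = diligence + rng.gauss(0, 0.2) > 0.5
        outcome = diligence + rng.gauss(0, 0.2)  # note: no ownership term
        rows.append((owns_home, outcome))
    return rows

rows = simulate()
mean = lambda xs: sum(xs) / len(xs)
owners = [y for own, y in rows if own]
renters = [y for own, y in rows if not own]
gap = mean(owners) - mean(renters)
# Owners outscore renters even though ownership causes nothing;
# a policy that hands out mortgages moves owns_home, not diligence.
print(round(gap, 2))
```

The gap is real and measurable, which is exactly why the fallacy is seductive: the data are not wrong, only the causal reading of them.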

Once you know the facts and have an acceptable level of confidence in how the facts are linked together to yield the outcome you are seeking, you then have a basis for a good decision.

Getting people to accept that decision is in part a storytelling exercise.  People respond to stories.  Inspired storytelling that follows established facts is likely to yield good outcomes.  Inspired storytelling in the absence of facts or despite facts is a recipe for disaster.

So when someone comes to you with a proposition, a proposal or an argument, look for the hinge point where they shift from telling the story to storytelling, where they depart from facts and venture into the realm of assumptions and speculation.  Likewise, when presenting a recommendation to a diverse audience, ask yourself how grounded in facts is your argument, and how much are you asking them to take on the faith of a shared assumption.  The more diverse the audience, the greater the number of different stakeholders, the more certain it is that someone will call you on your own assumptions.

Decision Clarity Consulting tools and methodologies help distinguish convincing storytelling from convincing facts and help bring the two into alignment with one another.

Trials and tribulations of evidence based decision-making

Take Back Your Pregnancy by Emily Oster.

The key to good decision making is evaluating the available information—the data—and combining it with your own estimates of pluses and minuses. As an economist, I do this every day. It turns out, however, that this kind of training isn’t really done much in medical schools. Medical school tends to focus much more, appropriately, on the mechanics of being a doctor.

When I asked my doctor about drinking wine, she said that one or two glasses a week was “probably fine.” But “probably fine” isn’t a number. In search of real answers, I combed through hundreds of studies—the ones that the recommendations were based on—to get to the good data. This is where another part of my training as an economist came in: I knew enough to read the numbers correctly. What I found was surprising.

The key problem lies in separating correlation from causation. The claim that you should stop having coffee while pregnant, for instance, is based on causal reasoning: If you change nothing else, you’ll be less likely to have a miscarriage if you drink less coffee. But what we see in the data is only a correlation—the women who drink coffee are more likely to miscarry. There are also many other differences between women who drink coffee and those who don’t, differences that could themselves be responsible for the differences in miscarriage rates.

Well worth a read.  Her experience is not dissimilar to that of the science writer Stephen Jay Gould when he was diagnosed with an uncommon malignant cancer.  He documented his exploration of the facts and statistics of his diagnosis (which carried a median survival time of eight months) in an article, The Median Isn’t the Message.  Understanding what the real facts are is a task in itself.
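Gould's point can be sketched with a toy right-skewed survival distribution (illustrative numbers only, loosely keyed to his eight-month median; the distribution is an assumption, not his data): the median says nothing about the long right tail in which an individual patient may well fall.

```python
import random

def survival_times(n=10_000, seed=1):
    """Toy right-skewed survival distribution with a median near
    eight months: a long right tail pulls the mean far above the
    median, which was Gould's point about his own prognosis."""
    rng = random.Random(seed)
    return [8.0 * rng.lognormvariate(0, 1.0) for _ in range(n)]

times = sorted(survival_times())
median = times[len(times) // 2]
mean = sum(times) / len(times)
share_past_two_years = sum(t > 24 for t in times) / len(times)
# Median near 8 months, mean well above it, and a substantial
# fraction of patients surviving far beyond the median.
print(round(median, 1), round(mean, 1), round(share_past_two_years, 2))
```

"Eight months median survival" and "most patients die around eight months" are very different statements; in a skewed distribution the second does not follow from the first.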

On a separate note, Oster alludes to decision-making being the evaluation of available information.  That is part of it but I would suggest it is actually the iterative balance of four activities.

1)  Intentions – What are your goals, how will you measure them, what are the parameters that cannot be exceeded, and how will you know when you are successful?

2) Evidence – What does the evidence actually say?

3) Estimates – For the critical knowledge needed to make the decision and which is not available, what are the best available estimates and what risks are associated with those estimates?

4) Forecasts – Once causal analysis is completed, what are the actions necessary to achieve the desired outcomes and what are the forecasts of both required resources and effort as well as forecasts of outcomes?

Intentions, Evidence, Estimates, Forecasts – that seems both fuller and more accurate than the evaluation of available information alone.

None of those things are observable

An interesting article, The STEM Crisis Is a Myth by Robert N. Charette.

For years I have been hearing about the STEM shortage, but every time I look at the employment and salary numbers, all I see is a market functioning normally.  People with STEM degrees working in a STEM capacity in a STEM field are always in demand, and their average compensation, as the article points out, has been fairly steady.

Granted, there are emergent fields, unique circumstances, and pressing needs that will suddenly create temporary demand for a very particular STEM skill set, but the market functions: more people move into or specialize in the hot area, and pretty soon things are back to normal.  Granted, too, the best people in any STEM field can command very high premiums over the novice.  You might not like having to pay $150,000 for an experienced ERP implementation manager, and you might wish they were cheaper, but that does not necessarily mean there is a shortage of experienced ERP implementation managers.

I think this is once again an issue of particular advocates wanting to use the coercive force of government to achieve their own objectives.  Engineers too expensive?  Issue more green cards; the increased supply will reduce the cost.

Given all of the above, it is difficult to make a case that there has been, is, or will soon be a STEM labor shortage. “If there was really a STEM labor market crisis, you’d be seeing very different behaviors from companies,” notes Ron Hira, an associate professor of public policy at the Rochester Institute of Technology, in New York state. “You wouldn’t see companies cutting their retirement contributions, or hiring new workers and giving them worse benefits packages. Instead you would see signing bonuses, you’d see wage increases. You would see these companies really training their incumbent workers.”

“None of those things are observable,” Hira says. “In fact, they’re operating in the opposite way.”

Read the whole thing.

If you want Wal-Mart to have a labor force like Trader Joe’s and Costco, you probably want them to have a business model like Trader Joe’s and Costco

From Why Wal-Mart Will Never Pay Like Costco by Megan McArdle.

In other words, Trader Joe’s and Costco are the specialty grocer and warehouse club for an affluent, educated college demographic. They woo this crowd with a stripped-down array of high quality stock-keeping units, and high-quality customer service. The high wages produce the high levels of customer service, and the small number of products are what allow them to pay the high wages. Fewer products to handle (and restock) lowers the labor intensity of your operation. In the case of Trader Joe’s, it also dramatically decreases the amount of space you need for your supermarket … which in turn is why their revenue per square foot is so high. (Costco solves this problem by leaving the stuff on pallets, so that you can be your own stockboy).

Both these strategies work in part because very few people expect to do all their shopping at Trader Joe’s, and no one expects to do all their shopping at Costco. They don’t need to be comprehensive. Supermarkets, and Wal-Mart, have to devote a lot of shelf space, and labor, to products that don’t turn over that often.

Wal-Mart’s customers expect a very broad array of goods, because they’re a department store, not a specialty retailer; lots of people rely on Wal-Mart for their regular weekly shopping. The retailer has tried to cut the number of SKUs it carries, but ended up having to put them back, because it cost them in complaints, and sales. That means more labor, and lower profits per square foot. It also means that when you ask a clerk where something is, he’s likely to have no idea, because no person could master 108,000 SKUs. Even if Wal-Mart did pay a higher wage, you wouldn’t get the kind of easy, effortless service that you do at Trader Joe’s because the business models are just too different. If your business model inherently requires a lot of low-skill labor, efficiency wages don’t necessarily make financial sense.

That’s not the only reason that the Trader Joe’s/Costco model wouldn’t work for Wal-Mart. For one thing, it’s no accident that the high-wage favorites cited by activists tend to serve the affluent; lower income households can’t afford to pay extra for top-notch service. If it really matters to you whether you pay 50 cents a loaf less for generic bread, you’re not going to go to the specialty store where the organic produce is super-cheap and the clerk gave a cookie to your kid. Every time I write about Wal-Mart (or McDonald’s, or [insert store here]), several people will e-mail, or tweet, or come into the comments to say they’d be happy to pay 25 percent more for their Big Mac or their Wal-Mart goods if it means that the workers are well paid. I have taken to asking them how often they go to Wal-Mart or McDonald’s. So far, no one has reported going as often as once a week; the modal answer is a sudden disappearance from the conversation. If I had to guess, I’d estimate that most of the people making such statements go to Wal-Mart or McDonald’s only on road trips.

However, there are people for whom the McDonald’s Dollar Menu is a bit of a splurge, and Wal-Mart’s prices mean an extra pair of shoes for the kids. Those people might theoretically favor high wages, but they do not act on those beliefs — just witness last Thanksgiving’s union action against Wal-Mart, which featured indifferent crowds streaming past a handful of activists, most of whom did not actually work for Wal-Mart.

If you want Wal-Mart to have a labor force like Trader Joe’s and Costco, you probably want them to have a business model like Trader Joe’s and Costco — which is to say that you want them to have a customer demographic like Trader Joe’s and Costco. Obviously if you belong to that demographic — which is to say, if you’re a policy analyst, or a magazine writer — then this sounds like a splendid idea. To Wal-Mart’s actual customer base, however, it might sound like “take your business somewhere else.”

There is a correct answer to that question, but it’s unlikely we’ll ever know what it was.

From Why Do Education and Health Care Cost So Much? by Megan McArdle.  A great example of the challenges related to causal density.  We may accurately identify all the causes of an outcome but still not be able, because of poor understanding of the relationships between root causes, to predict outcomes.  Absent accurate prediction, we don’t really understand the nature of a problem at all.

So how do we explain health care and college cost inflation? Well, health care economist David Cutler once offered me the following observation: In health care, as in education, the output is very important, and impossible to measure accurately. Two 65-year-olds check into two hospitals with pneumonia; one lives, one dies. Was the difference in the medical care, or their constitutions, or the bacteria that infected them? There is a correct answer to that question, but it’s unlikely we’ll ever know what it was.

Similarly, two students go to different colleges; one flunks out, while the other gets a Rhodes Scholarship. Is one school better, or is one student? You can’t even answer these questions by aggregating data; better schools may attract better students. Even when you control for income and parental education, you’re left with what researchers call “omitted variable bias” — a better school may attract more motivated and education-oriented parents to enroll their kids there.

So on the one hand, we have two inelastic goods with a high perceived need; and on the other hand, you have no way to measure quality of output. The result is that we keep increasing the inputs: the expensive professors and doctors and research and facilities.

I would quibble with McArdle.  There are actually two problems.  It is true that it is hard to measure education and health outcomes and that is a challenge.  But even if we were able to measure with great precision and accuracy, that is still not the same as forecasting.  Measuring is a predicate to forecasting.

If we precisely and accurately measure our initiating action X, we want to know with some level of accuracy and certainty that X will lead to Y, the outcome we desire.  If we cannot predict the outcome, it means we don’t understand the relationship between and among the various causes.

Solution to low public transportation utilization

From Mobility for the Poor: Car-Sharing, Car Loans, and the Limits of Public Transit by Jeff Khau.  An example of the importance of establishing the difference between correlation and causation; of the directionality of causation; of context; of root cause analysis; and of goal definition.

Theoretically, one could look at this graph and make a seemingly legitimate argument that, in order to increase utilization of public transportation, one ought to increase the average commute time.  It is a good exercise in critical thinking to spot the fallacy in such an interpretation.
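The fallacy can be made concrete with a toy model (all numbers invented): if transit trips are simply slower than driving, cities with a high transit share will show long average commutes, producing a strong correlation whose causal arrow runs from ridership to commute time, not the reverse.

```python
import random

def cities(n=500, seed=3):
    """Toy model: transit mode share is set by each city's geography;
    commute time is downstream of it because transit trips are slower.
    Lengthening commutes would therefore not raise ridership."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        share = rng.random()                         # transit mode share, 0..1
        commute = 20 + 25 * share + rng.gauss(0, 5)  # minutes; share drives time
        data.append((share, commute))
    return data

data = cities()
mean = lambda xs: sum(xs) / len(xs)
s = [a for a, _ in data]
c = [b for _, b in data]
ms, mc = mean(s), mean(c)
cov = mean([(a - ms) * (b - mc) for a, b in data])
corr = cov / (mean([(a - ms) ** 2 for a in s]) ** 0.5 *
              mean([(b - mc) ** 2 for b in c]) ** 0.5)
# Strong positive correlation, but intervening on commute time
# (e.g. adding congestion) would leave share untouched in this model.
print(round(corr, 2))
```

The scatter plot alone cannot distinguish this model from its reverse; only knowledge of the mechanism settles the direction of causation.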

To analyze does not necessarily mean to produce useful information

From Reviewing the Movies: Audiences vs. Critics by Catherine Rampell.

It is a fair and interesting question, or set of questions.  Do audiences and critics assess movies in different ways?  If so, in what ways do they differ?  Which views, audience or critics, provide a better forecast of future performance?  These questions apply to art, sports, books, etc.  There are answers to some of them.  The general informed public and specialists do tend to review things differently.  The general informed public tends to factor in more context and larger macro considerations than specialists do, and tends to be the better forecaster.  Nate Silver covers a lot of this in The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t.

What is notable is that Rampell asks a legitimate question and has an idea of how to answer it.  Her error is to use the information that is available (the Rotten Tomatoes database) rather than the information that is needed.  There is a fairly detailed critique of her analysis in the comments.  The article serves as an example of selection bias (the distortion of a statistical analysis resulting from the method of collecting samples), information bias (the tendency to seek information even when it cannot affect action), and possibly mere exposure bias (the tendency to express undue liking for things merely because of familiarity with them).


Almost all Americans devoutly believe that the liberal, market principles on which our country is built will triumph around the world

From Bambi Meets Godzilla In The Middle East by Walter Russell Mead.  Read the whole thing.

The end of history, the phrase political scientist Francis Fukuyama used to describe the historical implications of the Cold War, is to American political philosophy what the Second Coming is to Christians. In the end, almost all Americans devoutly believe, the liberal, market principles on which our country is built will triumph around the world. Asia, Africa, South America, the Middle East and even Russia will some day become democratic societies with market economies softened by welfare states and social safety nets. As a nation, we believe that democracy is both morally better and more practical than other forms of government, and that a regulated market economy offers the only long term path to national prosperity. As democracy and capitalism spread their wonder-working wings across the world, peace will descend on suffering humanity and history as we’ve known it will be at an end.

[snip]

It seems misanthropic to doubt that a particular country isn’t on the road to freedom and prosperity, and it also seems like heresy against our national creed. That tendency is reinforced among our policy elite and chattering classes. The “experts” ought to know better and be more skeptical, but they are often more naive and more dogmatic than the American people at large. It is often the best educated and connected who are most confident, for example, that political science maxims work better than historical knowledge and reflection when it comes to analyzing events and predicting developments. When democratic peace theory or some other beautiful intellectual system (backed with regressions and statistically significant correlations in all their austere beauty) adds its weight to the national political religion, a reasonable faith can morph into blind zeal. Bad things often follow.

What Americans often miss is that while democratic liberal capitalism may be where humanity is heading, not everybody is going to get there tomorrow. This is not simply because some leaders selfishly seek their own power or because evil ideologies take root in unhappy lands. It is also because while liberal capitalist democracy may well be the best way to order human societies from an abstract point of view, not every human society is ready and able to walk that road now. Some aren’t ready because, like Haiti, they face such crippling problems that having a government, any government, that effectively enforces the law and provides basic services across the country is beyond their grasp. Some aren’t ready because religious or ethnic tensions would rip a particular country apart and cause civil war. Some aren’t ready because the gap between the values, social structures and culture of a particular society makes various aspects of liberal capitalism either distasteful or impractical. In many places, the fact that liberal democratic capitalism is historically associated with western imperialism and arrogance has poisoned the well. People simply do not believe that this foreign system will work for them, and they blame many of the problems they face on the countries in Europe and North America who so loudly proclaim the superiority of a system they feel has victimized them.

[snip]

Americans need to face an unpleasant fact: while American values may be the answer long term to the Middle East’s problems, they are largely irrelevant to much that is happening there now. We are not going to stop terrorism, at least not in the short or middle term, by building prosperous democratic societies in the Middle East. We can’t fix Pakistan, we can’t fix Egypt, we can’t fix Iraq, we can’t fix Saudi Arabia and we can’t fix Syria. Not even the people who live in those countries can fix them at this point; what has gone wrong is so deeply rooted and so multifaceted that nothing anybody can do will turn them into good candidates for membership in the European Union anytime soon. If we could turn Pakistan into Denmark, the terrorists there would probably settle down—but that isn’t going to happen on any policy-relevant timetable. We must deal with terrorism (and our other interests in the region) in a world in which the basic conditions that breed terrorists aren’t going away.

Decision Clarity Consulting has no affiliation with the Scheier Group, owner of the DECISION CLARITY®, and the Scheier Group does not sponsor, endorse, or control any material on this site. Decision Clarity Consulting, in accordance with an agreement with Scheier Group, performs no work with non-profit or government entities.