Author Archive

Multi-causal problems almost always have multi-causal solutions

From Seven Deadly Sins of Science Reporting by Avi Roy & Anders Sandberg.  While it deals specifically with science reporting, it applies equally well to argument formulation and communication.  My summary:

1)  Nothing is ever proven – You can only provide information that increases the probability that the argument is correct (a toy sketch of this idea follows the list below).

2)  Nothing is in itself inherently bad – It is all about proportionality.  Everything is deadly when too concentrated and everything is safe when sufficiently dilute.

3)  There are no silver bullets – Multi-causal problems almost always have multi-causal solutions.

4)  Personal behavioral traits cannot be traced to your genes – There are no behaviors associated with single genes.

5)  Simple actions trump simple solutions – Longevity is contextually determined.  Not smoking, exercising, sleeping regularly, eating balanced and moderate meals, and maintaining a positive mental attitude outweigh the effects of drinking red wine, practicing yoga, eating fish, etc.

6)  Past performance does not predict future outcomes – A study from a prestigious university is only as good as the quality of the study, not the prestige of the university.

7)  The plausibility of a story is not necessarily correlated with the truthfulness of the story – Simple explanations of complex problems are usually wrong explanations.
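Point 1 deserves a moment of concreteness.  Here is a minimal sketch of Bayesian updating, my illustration rather than the authors'; the prior and likelihoods are hypothetical numbers chosen for demonstration.

```python
# Toy Bayesian update: evidence never "proves" a claim, it only shifts
# the probability that the claim is correct. All numbers are hypothetical.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

belief = 0.50  # start agnostic about the claim
# Suppose each supporting study is twice as likely to appear
# if the claim is true as if it is false.
for study in range(1, 6):
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
    print(f"After study {study}: P(claim) = {belief:.3f}")
# The probability climbs toward 1 but never reaches it: nothing is ever proven.
```

Five studies take the belief from 0.50 to roughly 0.97, never to certainty; that is the whole of point 1 in code.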

Causal density, trade-offs and unsatisfactory outcomes

Here is an interesting juxtaposition of articles.  The first, Death of an Adjunct by Daniel Kovalik, came out in September 2013 and chronicled the tribulations and death of Margaret Mary Vojtko, an 83-year-old adjunct professor at Duquesne University.  It seems to tell an appalling tale of neglect and the exploitation of a vulnerable old woman by a heartless educational bureaucracy.

But then there is Death of a Professor by L.V. Anderson, covering exactly the same ground and arriving at an entirely different, and much more nuanced, conclusion.

Kovalik is a lawyer for the union seeking to organize adjunct professors, and his article is plainly an exercise in rhetoric to advance his cause.  Given how broadly it was republished, it was a reasonably effective one.  Kovalik is basically arguing that had there been a union representing Vojtko’s interests, she would not have died in circumstances of near poverty.

I think the juxtaposition of the two articles highlights 1) the importance of providing the full dossier of evidence and 2) the importance of problem definition.

Anderson does not significantly contradict anything in Kovalik’s rendition of events.  All she does is provide a more complete context.  In doing so, she pretty much demolishes his argument.  With the more complete picture she provides, it is clear that Vojtko’s penurious circumstances were the result of a range of decisions on her part (causal density), most of which could not have been addressed through the existence of a union.  This is not to deny the tragedy of the story but rather to point out the obvious – things are not always the way they seem, and few human issues have simple answers.

In fact, the latter point is illustrated by a follow-up article written by Anderson, Why Adjunct Professors Don’t Just Find Other Jobs.  She answers her own question in three parts.  First, they still hope to find a track to a tenured position.  Given the terrible working conditions of the adjunct professor, this is a testament to the value of the tenured position.   Second, there are scheduling challenges to finding jobs outside of academia (I didn’t say that Anderson’s reasons were good reasons).  Third, they really, really like teaching rather than anything else they might do.

So if your concern is the financial well-being of adjunct professors, then Anderson’s second article suggests that little can be done.  Adjunct professors will do just about anything in the hope of getting on a tenure track, they aren’t all that interested in non-academic jobs, and they prefer teaching, even at low rates of pay, to just about any alternative.  And there are a lot of them.  Given limited tenured positions and an increasing supply of individuals freely seeking those positions, there is little room to change the economic circumstances of adjunct professors.  Excessive supply and limited demand inherently mean that compensation will be low.  You can either increase demand (increase the number of tenured positions) or reduce the supply of adjunct professors – neither action being particularly feasible in a free market environment where people are free to make their own choices.

What are the decision-making lessons from these related articles?  I would argue there are three.  1) Work hard to get the complete picture; the emotionally satisfying story is often not the complete story.  2) Problem definition is critical.  3) Causal density means that there are many human problems, tragedies even, which do not have acceptable solutions.

As an exercise, what systemic actions could have been taken, consistent with free citizens and the law, that could have precluded the actual outcome?

Human knowledge in the linear domain does not transfer properly to the complex domain

Nassim Nicholas Taleb is a brilliant thinker, philosopher and writer.  I have read and enjoyed Fooled by Randomness as well as The Black Swan and am looking forward to his most recent, Antifragile (reviewed in The Economist here).

Taleb had an article in Foreign Affairs in 2011 which hit on some of his main arguments (as applied to international relations) – The Black Swan of Cairo.  Black Swan is Taleb’s term for an unpredicted (and essentially unpredictable) event that disrupts the status quo.

Complex systems that have artificially suppressed volatility tend to become extremely fragile, while at the same time exhibiting no visible risks. In fact, they tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface. Although the stated intention of political leaders and economic policymakers is to stabilize the system by inhibiting fluctuations, the result tends to be the opposite. These artificially constrained systems become prone to “Black Swans”—that is, they become extremely vulnerable to large-scale events that lie far from the statistical norm and were largely unpredictable to a given set of observers.

Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.

[snip]

Humans simultaneously inhabit two systems: the linear and the complex.  The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as “tipping points.” Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.

[snip]

Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans’ sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities.  Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.
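Taleb’s sandpile image can be made concrete.  Below is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile model, the standard formalization of that analogy; this is my illustration, not code from Taleb, and the grid size, threshold, and avalanche cutoff are arbitrary choices.

```python
import random

# Bak-Tang-Wiesenfeld sandpile: drop grains one at a time; any cell
# holding 4 or more grains "topples", shedding one grain to each neighbor.
SIZE, THRESHOLD = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple(x, y):
    """Resolve all topplings triggered at (x, y); return the avalanche size."""
    unstable, size = [(x, y)], 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        size += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:  # edge grains fall off the table
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    return size

random.seed(0)
for grain in range(20000):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    size = topple(x, y)
    if size > 100:  # most grains do nothing; a few trigger system-wide cascades
        print(f"grain {grain}: avalanche of {size} topplings")
```

Most grains land without incident; occasionally one sets off a cascade that rearranges a large fraction of the pile.  Blaming that grain, rather than the accumulated structure, is exactly the policy error Taleb describes.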

Taleb also mentions, but does not elaborate on, the important issue of “the illusion of local causal chains—that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect.”  There is a tendency to see the last event as the “cause” of something when in fact it is sometimes simply the catalyst of a systemic readjustment, i.e. the straw that broke the camel’s back.  It wasn’t the straw per se, but the cumulative weight that preceded it.

There are echoes here of Stephen Jay Gould and Niles Eldredge’s Punctuated Equilibrium, in which they argued that evolution is not a smooth, continual process but one that proceeds in fits and starts.

Taleb is arguing that our efforts to ensure near-term tactical stability are often at odds with desirable system evolution over the long run.  It is a classic trade-off decision.  He has observed many times that good tactical intentions often end up unintentionally producing catastrophic strategic outcomes.  Forest fire management is an example.  Nobody wants forest fires, and for decades the strategy was simple fire suppression: keep fires from happening and put them out as fast as possible when they do.

In suppressing near-term fires, regrettably, forests have accumulated much greater fuel loads than they would under natural conditions, where lightning-strike fires periodically clear dead brush.  The result has been increasingly frequent, vast and intense wildfires beyond control.  A strategy for achieving near-term stability (fewer wildfires) has ended up worsening the situation in the long run.

When making a strategic decision, it is important to consider the historical context.  Has the existing system evolved over time and therefore acquired some base level of stability, or has it existed in an unnatural state of artificial stability with all variance suppressed?  If it is the latter, then any new action may have unanticipated consequences, arising not from the intended plan itself but simply from the cumulative weight of avoided evolution.


Go away and think

Wise counsel on understanding the history and context of any fact or assumed causation.  When solving problems, we often want to start with a clean slate, unencumbered by history.  Even where that is possible, it is not always wise.  Only when we understand the nature of the status quo do we begin to be in a position to intelligently change it.  From G.K. Chesterton in The Thing (1929), Ch. IV: The Drift From Domesticity.

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable.  It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.

The power of storytelling versus the power of the story

Is there a difference?  More than we usually realize.

Nassim Nicholas Taleb describes the issue nicely in The Black Swan:

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

We are primed for making sense of the world, to see patterns in nature and in data.  Success often depends on accurate anticipation of what will happen next.  We look for patterns in things to use as a means of forecasting.  But sometimes that beneficial habit betrays us.  We see patterns where there are none.

Kaiser Fung, a statistician, makes the observation that in most news articles, there is a point where the reporter transitions from reporting the story to telling a story.  They shift from the agreed facts to an imputation of cause that is not there.  It is the point where we transition from facts to speculation and it is important to distinguish the two.

Fung offers an example from a Los Angeles Times article, Who’s teaching L.A.’s kids?  The article starts out reporting the results from “a statistical approach known as value-added analysis, which rates teachers based on their students’ progress on standardized tests from year to year.”   For example, the first finding reported is:

Highly effective teachers routinely propel students from below grade level to advanced in a single year. There is a substantial gap at year’s end between students whose teachers were in the top 10% in effectiveness and the bottom 10%. The fortunate students ranked 17 percentile points higher in English and 25 points higher in math.

After a series of factual findings, the reporter then transitions.

Nevertheless, value-added analysis offers the closest thing available to an objective assessment of teachers. And it might help in resolving the greater mystery of what makes for effective teaching, and whether such skills can be taught.

On visits to the classrooms of more than 50 elementary school teachers in Los Angeles, Times reporters found that the most effective instructors differed widely in style and personality. Perhaps not surprisingly, they shared a tendency to be strict, maintain high standards and encourage critical thinking.

But the surest sign of a teacher’s effectiveness was the engagement of his or her students — something that often was obvious from the expressions on their faces.

The transition is subtle and not obvious unless you are looking for it.  In a matter of column inches we have gone from the certainty of “There is a substantial gap at year’s end between students whose teachers were in the top 10% in effectiveness and the bottom 10%” (an objective and empirical fact) to the speculation that “the surest sign of a teacher’s effectiveness was the engagement of his or her students.”  The performance gap is measured empirically and objectively, but the proposed cause of the gap (engagement) is not.  Instead, the reporter attributes the outcomes to a cause, engagement, which he claims to ascertain from “the expressions on their faces.”  We have moved from the factual story to the entertainment of speculative storytelling.

It seems obvious when pointed out, but people most often go with the flow without recognizing the phase change.  It is important to acknowledge that the reporter might be correct, that the teacher’s capacity to engage students might indeed be the cause of their improved results.  But that is an undocumented and unproven assumption which we need to treat differently from the documented facts.
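For concreteness, here is a rough sketch of the kind of year-over-year computation that “value-added analysis” refers to, with invented data and a deliberately naive method; the Times’ actual model was more elaborate.  The point is what the data can and cannot contain.

```python
# A naive value-added score: average change in students' test percentiles
# over a year, grouped by teacher. Data are invented for illustration;
# the real LA Times analysis used a more sophisticated statistical model.
from statistics import mean

# (teacher, percentile at start of year, percentile at end of year)
records = [
    ("Teacher A", 40, 62), ("Teacher A", 55, 71), ("Teacher A", 30, 49),
    ("Teacher B", 45, 41), ("Teacher B", 60, 55), ("Teacher B", 35, 38),
]

gains_by_teacher = {}
for teacher, start, end in records:
    gains_by_teacher.setdefault(teacher, []).append(end - start)

for teacher, gains in gains_by_teacher.items():
    print(f"{teacher}: mean gain {mean(gains):+.1f} percentile points")
# The gap between teachers is measurable from test records alone.
# "Student engagement" as its cause appears nowhere in the data --
# that attribution is where reporting ends and storytelling begins.
```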

Accurate facts are the lifeblood of good decision-making.  Without them, everything is faith and hope.  Not bad things in themselves but as the adage has it, Hope is not a strategy.

Most routine problems arise not because we don’t know what needs to be done but because we don’t do what we know needs to be done.  The fault lies not in ignorance of the facts but in will.  And so we tell stories.  We link the facts together in a sequence from beginning to end, from A leading to Z, each set of facts causing the next action.  Tied together, the individual facts begin to create a sense of inevitability, increasing our confidence and our willingness to act.

But a well told story is not the same thing as an accurate story.  An accurate story means we know a series of facts AND we know the causal relationship between those facts and the final outcome we are trying to attain.  Most often what we know are some select facts but then we fill in the blanks about causes with what makes most sense to us.   These unacknowledged assumptions are often the genesis of both failure and unintended consequences.

Those filled-in blanks covering missing facts and unknown causations are assumptions that undermine our decision-making.  Some facts simply can’t be known and we do have to make assumptions.  But we have to do it consciously with a plan to mitigate the consequences if our assumption turns out to be wrong.  It is the unrecognized assumption, usually created in the storytelling process, which ends up sinking the ship.

The most common form of error is the classical logical fallacy of post hoc ergo propter hoc (after this, therefore because of this).  We see one thing correlated with another and we leap to the conclusion that the one thing caused the other.  You go into a wealthy neighborhood and see many luxury cars.  You assume that luxury cars must cause wealth.  Put that way, it is patently absurd, but that is the logic that underpinned US housing policy in the last two decades of the 20th century and which ended so disastrously in 2008.  The argument was in part that homeownership ought to be encouraged through relaxed mortgage requirements and other programs because many positive social outcomes (lower mortality and morbidity, greater educational attainment, lower teen pregnancy, higher savings rates, higher incomes, etc.) were correlated with home ownership.  It was assumed that home ownership fostered desirable traits such as diligence, planning, and responsibility.  If you increased home ownership, you would increase those traits and therefore would correspondingly increase life expectancy, educational attainment, and so on.

In hindsight, the error is obvious: the traits themselves facilitate both homeownership and the other desirable life outcomes.  Classic post hoc ergo propter hoc, or, more precisely, classic confounding.
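A small simulation makes the confounding mechanism explicit.  The setup is mine and deliberately extreme: a single latent trait (“diligence”) drives both homeownership and a good life outcome, while homeownership itself has no causal effect at all.

```python
import random

# Confounding in miniature: a latent trait raises both the chance of
# owning a home and the chance of a good outcome. Ownership itself
# has NO causal effect anywhere in this simulation.
random.seed(1)

def simulate(n=100_000):
    owner_outcomes, renter_outcomes = [], []
    for _ in range(n):
        diligence = random.random()                 # latent trait in [0, 1]
        owns_home = random.random() < diligence     # trait -> ownership
        good_outcome = random.random() < diligence  # trait -> outcome
        (owner_outcomes if owns_home else renter_outcomes).append(good_outcome)
    return (sum(owner_outcomes) / len(owner_outcomes),
            sum(renter_outcomes) / len(renter_outcomes))

own_rate, rent_rate = simulate()
print(f"good outcomes among owners:  {own_rate:.1%}")   # ~66.7%
print(f"good outcomes among renters: {rent_rate:.1%}")  # ~33.3%
# Ownership strongly "predicts" good outcomes, yet handing everyone a
# house would change nothing: the correlation is entirely the trait.
```

The observed gap is large and perfectly real, yet acting on it (subsidize ownership to improve outcomes) would accomplish nothing.  That is the housing-policy error in twenty lines.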

Once you know the facts and have an acceptable level of confidence in how the facts are linked together to yield the outcome you are seeking, you then have a basis for a good decision.

Getting people to accept that decision is in part a storytelling exercise.  People respond to stories.  Inspired storytelling that follows established facts is likely to yield good outcomes.  Inspired storytelling in the absence of facts or despite facts is a recipe for disaster.

So when someone comes to you with a proposition, a proposal or an argument, look for the hinge point where they shift from telling the story to storytelling, where they depart from facts and venture into the realm of assumptions and speculation.  Likewise, when presenting a recommendation to a diverse audience, ask yourself how grounded in facts your argument is and how much you are asking the audience to take on the faith of a shared assumption.  The more diverse the audience, the greater the number of different stakeholders, the more certain it is that someone will call you on your own assumptions.

Decision Clarity Consulting tools and methodologies help distinguish convincing storytelling from convincing facts and help bring the two into alignment with one another.

Trials and tribulations of evidence-based decision-making

Take Back Your Pregnancy by Emily Oster.

The key to good decision making is evaluating the available information—the data—and combining it with your own estimates of pluses and minuses. As an economist, I do this every day. It turns out, however, that this kind of training isn’t really done much in medical schools. Medical school tends to focus much more, appropriately, on the mechanics of being a doctor.

When I asked my doctor about drinking wine, she said that one or two glasses a week was “probably fine.” But “probably fine” isn’t a number. In search of real answers, I combed through hundreds of studies—the ones that the recommendations were based on—to get to the good data. This is where another part of my training as an economist came in: I knew enough to read the numbers correctly. What I found was surprising.

The key problem lies in separating correlation from causation. The claim that you should stop having coffee while pregnant, for instance, is based on causal reasoning: If you change nothing else, you’ll be less likely to have a miscarriage if you drink less coffee. But what we see in the data is only a correlation—the women who drink coffee are more likely to miscarry. There are also many other differences between women who drink coffee and those who don’t, differences that could themselves be responsible for the differences in miscarriage rates.

Well worth a read.  Her experience is not dissimilar to that of the science writer Stephen Jay Gould when he was diagnosed with an uncommon malignant cancer.  He documented his exploration of the facts and statistics of his diagnosis (a median survival time of eight months) in an article, The Median Isn’t the Message.  Understanding what the real facts are is a task in itself.
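Gould’s statistical point fits in a few lines: for a right-skewed survival distribution, a median of eight months is entirely consistent with a meaningful fraction of patients living for years.  The distribution below is hypothetical, chosen only to have an eight-month median.

```python
import math
import random
from statistics import mean, median

# A hypothetical right-skewed survival distribution with an 8-month
# median, echoing "The Median Isn't the Message". A lognormal whose
# underlying normal has mean ln(8) has median exactly 8.
random.seed(2)
survival = [random.lognormvariate(math.log(8), 1.0) for _ in range(100_000)]

print(f"median survival: {median(survival):5.1f} months")  # ~8
print(f"mean survival:   {mean(survival):5.1f} months")    # ~13, pulled up by the tail
beyond_two_years = sum(s > 24 for s in survival) / len(survival)
print(f"surviving beyond two years: {beyond_two_years:.1%}")  # ~14%
# The single number "median = 8 months" conceals a long right tail --
# which is exactly the fact Gould needed to read his own prognosis.
```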

On a separate note, Oster alludes to decision-making being the evaluation of available information.  That is part of it but I would suggest it is actually the iterative balance of four activities.

1)  Intentions – What are your goals, how will you measure them, what are the parameters that cannot be exceeded, and how will you know when you are successful?

2) Evidence – What does the evidence actually say?

3) Estimates – For the critical knowledge needed to make the decision and which is not available, what are the best available estimates and what risks are associated with those estimates?

4) Forecasts – Once causal analysis is complete, what actions are necessary to achieve the desired outcomes, and what are the forecasts of the resources and effort required as well as of the outcomes themselves?

Intentions, Evidence, Estimates, Forecasts – that sounds better and more accurate.

If you have to forecast, forecast often.

Various quotes from economist Edgar R. Fiedler.

The herd instinct among forecasters makes sheep look like independent thinkers.

If you have to forecast, forecast often.

The things most people want to know about are usually none of their business.

A cardinal principle of Total Quality escapes too many managers: you cannot continuously improve interdependent systems and processes until you progressively perfect interdependent, interpersonal relationships.

If all the economists were laid end to end, they’d never reach a conclusion.

If you’re ever right, never let ’em forget it.

Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.

Lao Tzu, 6th Century BC Chinese Poet

Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.

Measuring expert opinion

The value and dangers of forecasting.  Everybody’s An Expert: Putting predictions to the test by Louis Menand.

Well documented and researched.
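Menand’s piece reviews Philip Tetlock’s long-running study of expert predictions, and a standard instrument in that line of work is the Brier score, the mean squared error between forecast probabilities and what actually happened.  A minimal sketch, with invented forecasts:

```python
from statistics import mean

def brier(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    0.0 is perfect; always answering 50% scores exactly 0.25."""
    return mean((f - o) ** 2 for f, o in zip(forecasts, outcomes))

# Five invented events; outcome is 1 if the event happened.
outcomes = [1, 0, 1, 1, 0]
confident_expert = [0.9, 0.8, 0.95, 0.2, 0.1]  # bold, and sometimes badly wrong
hedger = [0.5] * 5                              # never commits to anything

print(f"confident expert: {brier(confident_expert, outcomes):.3f}")  # 0.261
print(f"hedger:           {brier(hedger, outcomes):.3f}")            # 0.250
# Confident misses are punished heavily: here the bold expert actually
# loses to a perpetual hedger, roughly the pattern Menand's review describes.
```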

My business is to teach my aspirations to conform themselves to fact, not to try and make facts harmonize with my aspirations

From Thomas Huxley:

My business is to teach my aspirations to conform themselves to fact, not to try and make facts harmonize with my aspirations.
