Air University Review, May-June 1971

Decision-Making

Major General Glenn A. Kent

I am not so sure that analysis as a credible ingredient in decision-making will necessarily have a brilliant future. For a variety of reasons I believe the influence of analysis may be near its zenith and decline is in the offing. The watchword for the day is “Beware.” Don’t look now but your credibility is showing.

In mathematical language, while the first derivative for extrapolation into the future of the stature and credibility of operations research may now be positive, forces are at work that affect the higher derivatives. In time, if not corrected, these second and third derivatives will make the curve of influence turn downward. The purpose of this article is to describe these subtle but insidious forces and suggest corrective action.
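To put the metaphor in symbols (a purely illustrative sketch; the “curve of influence” $f(t)$ is not something anyone has measured): if $f'(t_0) > 0$ today but the corrective forces make $f''(t) \le -c < 0$ for $t > t_0$, then $f'(t) \le f'(t_0) - c\,(t - t_0)$, so the slope turns negative once $t > t_0 + f'(t_0)/c$ and the curve itself turns downward thereafter.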

First, decision-makers are becoming increasingly annoyed that different analysts get quite different answers to seemingly the same problem. Analyses are allegedly for the purpose of illumination. Still, at times, the light has a green tinge, or a deep blue tinge, or a light blue tinge, or a purple tinge. Sometimes the light comes out pure black. Seldom do analysts produce illumination with pure white brilliance. So the decision-maker becomes wary—as well he should—of this biased or shaded illumination. There must be something wrong when quantification of some particular problem produces such radically different results. In the blind rush to be worthy advocates, analysts enthusiastically engage in practices that border on perjury. The naïve exclaim that the answers appear to have been known ahead of time. The calloused inquire whether there is another way.

There is no easy fix. A common suggestion—in the interest of objective analysis—is to establish joint organizations for analysis or have analyses done by people who are “above service bias.” This sounds good, but the theory is better than the practice: it is merely substituting one form of parochialism for another. To be more pointed, the illumination on problems by the services will predictably reflect their own color. The illumination afforded by Joint Chiefs of Staff (JCS) studies has a way of coming out black because it goes through all of the filters. Those by the Office of the Secretary of Defense (OSD) come out purple, which may or may not be a better (or wiser) color than green, deep blue, or light blue. All too often the analyses are conducted in the context of a preconceived position. They become papers for “advocacy” as distinct from papers for “illumination.” The quantification is shaped, twisted, and tortured to establish the “validity” of some particular point. But decision-makers want the facts.

Analyses by OSD and “think” organizations do not escape this plague. For one thing, their analyses are not so subject to critical review by nonbelievers as are analyses from the services. Whatever objectivity is achieved by the services does not necessarily stem from basic purity but rather from fear of rebuttal. One could get a single answer to a particular problem by never having more than one analyst work on it. But while that would dispose of the problem of getting different answers, the nagging concern about parochialism would remain. Such a measure may clear up the symptom, but it does not cure the disease.

Aside from bias and preconception, there is another reason analysts get different answers to seemingly the same problem. There is too little discipline about our analysis business. Not all of us handle interactions the same way. True, different analysts may use the same formula in describing the interaction of a bomb against a target, but once you get much beyond that simple stage there is little agreement.
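For concreteness, the sort of agreed formula the author presumably has in mind is the standard single-shot kill probability of a weapon of lethal radius $R_L$ delivered with circular error probable $CEP$ against a point target (one commonly used version; which exact form any given study used is an assumption here):
\[
P_K \;=\; 1 - \left(\tfrac{1}{2}\right)^{\left(R_L/CEP\right)^{2}}.
\]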

If one is inclined to believe that great strides are being made in understanding the universal truths about interactions, let me note some of our basic deficiencies. In the tactical area, there is no consensus on the formulas or simulations that describe the interaction between such things as two aircraft in a dogfight, soldiers in a firefight, or aircraft attacking soldiers. Even in strategic matters—where we think we do very well on how to model the problem—the ICBM versus ABM interaction is still a confuser for many important situations. Different groups of people get different results because they do the calculations on different models (codes). But few are clear on the basic differences between the models. There is little attempt to determine which model or code is best suited to the real-life problem at hand.

As far as broader issues in the tactical area are concerned, only feeble progress has been made in our understanding of the logic of how to allocate resources among ground troops, close air support, artillery, counterair, and interdiction. But such an understanding is central to an informed allocation of resources to achieve the best overall military posture. There is not even a consensus on the right measure of merit. More discipline should be introduced into the system.

In addition to being parochial and somewhat undisciplined at times, analysts are not even good illuminators. The mystique behind analysis has been torn away. Decision-makers are beginning to realize—as well they should—that if an analysis is done correctly and presented succinctly, it should be clear to nonanalysts. No longer can analysts hide behind some obscure explanation, nor can they, to close off all discussion, say to the decision-maker, “It’s really quite complicated”—with the clear implication that only card-carrying analysts should understand.

The decision-maker knows that analysts have become quite adept at getting one bar higher than another. So he is quite cautious about making any decision without a better overall grasp. But he is having problems in getting that grasp, particularly when the analyst is not endowed with the basic understanding in the first place.

Even if the analyst and the decision-maker belong to the same parochial group and, accordingly, “know” and agree on the right answer, we are not home free. The decision-maker still faces the problem of selling the right answer to skeptics higher up. He feels the need of something more persuasive than that “one bar is higher than another.” He needs the problem collapsed so that the bone structure is clean. He needs a “gut” argument; it is awkward to talk learnedly about linear regressions over an early breakfast with the Chief.

Too many analyses seem constructed as though their purpose were to convince friendlies that the position they already hold is a good one. The cons to the position are carefully avoided lest we shake the abiding faith in our own righteousness: “Don’t put in the cons, or the Chief may not buy our position.” “Don’t bring up so-and-so; it will only open Pandora’s box.” But to be a persuasive advocate, the Chief needs to know all about the cons and the counters to these cons. Skeptics have a very nasty habit and a diabolical instinct to focus on the poorer aspects of any proposition, as distinct from the better aspects. One point in all this is that even in the business of advocacy it pays to be honest.

It is with some trepidation that I approach my next point: writing reports and giving briefings that do not leave the reader or the briefee in a state of complete frustration. But it does have a place in my overall theme. Packaging is important in other endeavors; the business of analysis is no exception. I am not going to dwell on the fixes; mainly they come under the heading of discipline—discipline in describing charts, in labeling charts, and in the vernacular. If the analyst invents new terms, all right; but he should announce that he is doing so and stick with them, not invent a new vernacular on each page and chart. There are no problems in this respect that murder sessions and good editing will not cure. Decision-makers are reluctant to admit that they do not understand some chart, particularly when everyone else in the room has assumed a knowing look. But if the analyst-briefer’s charts display strange abbreviations designed primarily to cue him on what to talk about next, then the decision-maker may get tired of reading them, since he gets no message. The worst fate of an analyst is not to be contested but to be ignored.

Yet of all the analyst’s sins, the one that will finally hurt his profession most is the blurring of “analysis” on the one hand and “position-taking” on the other. By failing to distinguish between the two, the analyst compromises a very useful tool. Analysts should be recruited because they have the talent to dissect problems—to collapse seemingly complicated problems to much simpler terms. They are to be graded on impeccable logic and correct arithmetic. They are to be graded on how elegantly and simply they are able to model some problem. One recruits such people from those who have been educated in economics, logic, and mathematics. One looks for people who have exhibited an uncommon ability to think and to explain. Position-takers, on the other hand, are graded on how many times their position is accepted by the Big Chief. Position-takers are recruited from people who have a good background of experience and possess intangibles such as “mature judgment.” Of course, the respective talents of these two different groups are not necessarily mutually exclusive. But, on the other hand, they are not necessarily coupled. Carried to the extreme, one could even suggest that the Pentagon stop the present practice of recruiting analysts to practice position-taking.

It is probably permissible, although somewhat dangerous, to allow analysts to take a position. But, I submit, these are two quite different functions, and it is time we recognized they are different and acted accordingly. The position that is to be taken invariably hinges on far more factors than the analyst can include in his model. The analysis (the study itself) should not contain conclusions and recommendations. In the vernacular of “Completed Staff Work,” the analysis is a subset of the “Factors Bearing on the Problem.” But the operative word is “subset,” as distinct from the whole set.

If the analyst feels compelled to announce his position to the world, then he should do so in a covering letter, not within the confines of the document that is allegedly an analysis. All of this is intended to get analysts into a frame of mind that promotes at least a modicum of objectivity and relieves the reader of the unwanted burden of separating analysis from position-taking. If the analyst makes, as part of his analysis, the recommendation that we should buy A rather than B, then he is apt to go back through the analysis and turn every single input to the “buy A” position. He does this because he has been burned in the past by some reviewer who made the deathless charge that “The conclusions and recommendations were not supported in the body of the analysis.” If the unfortunate analyst had not fouled up in the first place by including a “position,” he would not have been open to the charge at all.

Another aspect of this matter has to do with approving analyses. If the analyst insists on practicing position-taking and including an announcement of his position in the body of his report or briefing, then approval of his report hinges mainly on whether someone agrees with his position. Thus his report can be approved at one level, disapproved at the next, then reapproved at the next. People are apt to get mixed up on two separate questions: (1) Did he as an analyst do a good job in exposing the problem? (2) What course of action is going to be taken? If things are kept straight and separate, then the report will be distributed on the basis that it was a professional job; what is going to be done about the whole problem is quite a different question and sometimes very messy.

That these are separate questions is illustrated by the following recent case. The question (problem) had to do with how many FB-111s should be procured. The analysts from OSD and the Air Force were able to agree on an analysis; that is, they were able to agree on a measure of merit and agree that we were in the presence of the right question. Further, we agreed on how to do the calculations that showed how the agreed measure of merit varied as a function of the number of FB-111s procured. Actually we did not accomplish this professional agreement at first, but after the decision-makers became frustrated in trying to get a feel for the problem, we did come to an agreement on an analysis. Predictably, the analysis, when finished, showed that as we procured more FB-111s we did better, but with ever diminishing returns. Further, we were able to agree on how costs increased as a function of increased force levels. This is analysis (facts), and we could agree. But when it came to position-taking, there tended to be a slight divergence. The Air Force looked at that analysis and proclaimed: “All that increase in capability for such a small increase in budget.” Personnel from OSD looked at the same analysis and exclaimed: “All that increase in budget for such a small increase in capability.” Who is nearer the truth is indeterminate. It is strictly a matter of judgment—a judgment based on many more factors than were included in the analysis and a judgment to be made by decision-makers.

The Air Force should try to do more of this kind of analytical preparation for decision-making. We should have analyses conducted jointly by analysts who are inclined to different positions. The steps are straightforward: first, agree on the relevant measures of merit; second, agree on the factors that affect these measures of merit; third, agree on the form of the equations that describe exactly how the measure of merit is affected by each factor (hopefully, eventually, perhaps, we can get this from the “Book of Standard Practice”); fourth, agree on the numerics—on what values to assign the inputs (the factors); and finally, agree on how to present the results.

There should be agreement at least through the third step. This allows the calculations to be made. Agreement may not be reached on the values (the numerics) of all the inputs, but the results for different numerics can be shown: “If assumption X is used, this is the answer; alternatively, if assumption Y is used, this is the answer.” In this way it is crystal clear why different results are achieved—different inputs were used. At present, all too often it is not known why different results are attained—one group used Code 99 and the other 007, and they talked right by each other.
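As a sketch of how such a jointly agreed calculation might be organized (the model form, factor names, and every number below are invented purely for illustration and are not drawn from any actual study), the agreed equations can be coded once and then exercised under each side's numerics:

import math

def capability(n_aircraft, survival_rate, targets_per_sortie, target_base):
    # Agreed form of the measure of merit: expected fraction of the target
    # base covered, with diminishing returns as the force grows.
    expected_kills = n_aircraft * survival_rate * targets_per_sortie
    return 1.0 - math.exp(-expected_kills / target_base)

def cost(n_aircraft, unit_cost, fixed_cost):
    # Agreed form of the cost model: fixed program cost plus unit flyaway cost.
    return fixed_cost + n_aircraft * unit_cost

# The fourth step is where the analysts part company: they agree on the
# equations above but not on the numerics below (all values hypothetical).
assumptions = {
    "X": dict(survival_rate=0.85, targets_per_sortie=2.0, unit_cost=9.0),
    "Y": dict(survival_rate=0.70, targets_per_sortie=1.5, unit_cost=12.0),
}
TARGET_BASE = 400.0   # agreed input
FIXED_COST = 300.0    # agreed input

# The final step: present both answers side by side, so it is crystal clear
# that different results come from different inputs, not from different models.
for label, a in assumptions.items():
    for n in (0, 50, 100, 150, 200):
        cap = capability(n, a["survival_rate"], a["targets_per_sortie"], TARGET_BASE)
        c = cost(n, a["unit_cost"], FIXED_COST)
        print(f"assumption {label}: force={n:3d}  capability={cap:4.2f}  cost={c:6.1f}")

Whatever one thinks of these particular forms, the point is only that the model is fixed before the argument over the numbers begins, so any divergence in the results can be traced to the inputs.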

There are surely shortcomings and pitfalls in doing analyses jointly and thinking about a “Book of Standard Practice.” But we should at least keep analyses from being the principal confuser in the decision-making process. In the past, analysts were safe and serene in their sheltered life. Now the word is out that analysts can quantify almost anything, and they are suddenly in the limelight with an edict to produce or perish.

So beware. Watch that credibility.

Hq United States Air Force


Contributor

Major General Glenn A. Kent (M.S., California Institute of Technology; M.S., University of California) is Assistant Chief of Staff, Studies and Analysis, Hq USAF. He has spent most of his career in research and development assignments relating to atomic and special weapons, plans, strategic and defensive systems, analysis, development plans, and concept formulation. General Kent is a graduate of the Air War College, and when he was a Fellow of the Center for International Affairs, Harvard University published his thesis, “On the Interaction of Opposing Forces Under Possible Arms Agreements” (1963).


