Approved for public release; distribution is unlimited.

Published: 1 March 2010
Air & Space Power Journal - Spring 2010

THE MERGE

Improving Cost-Effectiveness in the Department of Defense

Col Drew Miller, PhD, USAFR*

*The author is an individual mobilization augmentee to the commandant of cadets, US Air Force Academy. He has advised and participated in resource-management decision making for the Department of Defense as a consultant with the Institute for Defense Analyses (a federally funded research and development center) and as a program manager in the DOD’s Business Management Modernization Program. He has also worked as a manager in Corporate Planning and Development at ConAgra and as vice president and president in several midsize firms.

In signing the latest and largest Department of Defense (DOD) budget, which cut the Air Force’s F-22 and the Army’s Future Combat Systems, Pres. Barack Obama proclaimed, “We can’t build the 21st-century military we need unless we fundamentally reform the way our defense establishment does business.”1 Reforming the DOD and achieving more businesslike cost-effectiveness—long-standing goals for the DOD—are often rejected as impossible since the military is not a for-profit business. Lack of a profit bottom line, however, does not prevent the DOD from attaining the cost-effectiveness that many nonprofit government and military organizations enjoy. Management consultant Peter Drucker called one nonprofit organization, the Salvation Army, “by far the most effective organization in the U.S.” He noted that “no one even comes close to it with respect to clarity of mission, ability to innovate, measurable results, dedication, and putting money to maximum use.”2

One of the biggest hurdles to improving the DOD’s cost-effectiveness is the lack of a simple, consistently used means of decision making. The department needs a standard decision support system (DSS) akin to business’s profit-and-loss spreadsheet to replace the current practice of decision making without clear criteria. Another great plague on military cost-effectiveness is our “stovepiped” approach to planning, programming, and budgeting, which keeps changing the basis for analysis and imposes no chain of accountability, no penalty paid either for cost overruns or for poor performance relative to plans. Donald Rumsfeld, former secretary of defense, estimated that 25 percent of the DOD’s spending is wasted.3

The Air Force and DOD can improve their cost-effectiveness with some practical reforms, including adopting and consistently using a military counterpart to business’s ubiquitous profit-and-loss spreadsheet. Doing so would realize the tremendous benefit that businesses enjoy from having a common, widely used, and understood means of analysis and decision making. By employing a simple multiple criteria decision making (MCDM) tool such as the one developed by RAND, which uses commercial spreadsheet software, the DOD could reap the advantages of improved analysis, enhanced accountability for results, and more cost-effective resource management.4

Nearly all corporate business decisions are made using a spreadsheet, the “ubiquitous” “core piece of software” that “is utterly pervasive . . . [and] integral to the function and operation of the global financial system.”5 The familiarity and constant use of the income-statement spreadsheet as a common language and tool enable much better analysis and decision making in business organizations. The DOD, however, has no common format or DSS, using only PowerPoint as a support tool. The department makes its decisions on billion-dollar programs with a horrible lack of consistency and quality, without adequate transparency of estimates and analysis, and with no record of the criteria and rationale used to provide accountability. Unsurprisingly, this results in poor cost-effectiveness.

The most important factor in cost-effectiveness for business is not profit but a system that enables real accountability and consequences for measurable results. Profit, return on investment (ROI), discounted cash flow, or net present value are just the chosen metrics for making decisions and tracking results. To establish cost-effective management and decision making, the DOD can use corresponding means to business’s nine enablers of cost-effectiveness (see table).

Table. Enablers of cost-effectiveness: business practice and its Department of Defense counterpart

1. Business: Profit, the bottom line on profit and loss, the common metric.
   DOD: Capability, the key criterion, the common metric; reach maximum mission capability, ideally with the flexibility to apply across many scenarios and mission areas.

2. Business: ROI (adding in the balance sheet the consequence of asset costs).
   DOD: Capability ROI, the basis for programming, budgeting, and acquisition decisions.

3. Business: Accurate cost data.
   DOD: Activity-based costing.

4. Business: Spreadsheet as the common format for decision making.
   DOD: MCDM DSS as the common DOD format.

5. Business: “Lines of business” and “profit centers” with profit/ROI goals and a responsible, accountable manager.
   DOD: Capability delivery group, a fully costed DOD operating unit or agency that is the basis for planning, budgeting, and operations.

6. Business: A chain of accountability, with the same organizational entity/manager responsible for planning, budgeting, and operational/fiscal execution.
   DOD: Capability delivery group, a fully costed DOD operating unit or agency that is the basis for planning, budgeting, and operations.

7. Business: Operating company/division competition for resources (corporate or investor funding).
   DOD: Capability delivery groups, compared with the MCDM DSS, compete for funding in the planning, programming, and budgeting process.

8. Business: Accurate, near-real-time profit/ROI reports.
   DOD: Improved budget execution and performance reporting using the MCDM DSS.

9. Business: Consequences for achieving ROI objectives (or not): for the firm, success and survival; for the individual, bonuses and job retention.
   DOD: Consequences for outcomes, achieving Capability ROI promises and MCDM DSS claims: performance weighed in the next round of budget competition for the capability delivery group; personal ratings and bonuses.

The DOD has been working for a decade to adopt “capabilities-based planning,” a common framework for planning and managing resources. Many resource-management processes in the Pentagon now use, or are trying to use, capabilities as the equivalent of a profit bottom line.

Most enablers of cost-effectiveness have been used successfully in some parts of the DOD. Others, such as capability delivery groups (CDG) and “Capability ROI,” would be new. Achieving cost-effectiveness requires all nine enablers; however, without an integrating framework like Capability ROI, managed via a common MCDM DSS, pulling all the enablers together in the DOD is probably impossible.

Despite the optimization goals of planning, programming, budgeting, and execution (PPBE), the DOD’s current resource management optimizes neither military capability nor cost-effectiveness. As Maj Gen (now Gen) Stephen Lorenz, the Air Force’s former budget chief, explained, “It rewards advocates who are the most adept at articulating increases in spending but sometimes punishes programs that can produce savings. Even worse, it lacks fundamental measures of value on which to base decisions. . . . Management processes currently in place provide little incentive to reduce costs and only limited accountability for those costs.”6

Better DOD Decision Making and Cost-Effectiveness Require a Standardized, Consistently Used, Multiple Criteria Decision Support System

Having just one widely used DSS facilitates better analysis and decision making and improves accountability because promised results are clearly recorded in a format that everyone can understand. A business will also use the same software throughout the organization, maintaining a very limited, controlled family of applications. The DOD, however, with more than 4,000 different business information-technology systems, loses billions annually through this wasteful lack of standardization.7 Losses from poor decision making due to the lack of a standard DSS and a cost-effective management framework are probably much higher.

Although the DOD cannot use an income statement as its primary DSS, the DynaRank MCDM DSS has the flexibility to cover a wide variety of objectives and can be used in all of the department’s resource-management decision making. A very flexible tool, DynaRank can support the systems approach (assuring consideration of a broad range of alternative programs and strategies), decision analysis (a structured, disciplined analysis), and game theory (consideration of adversaries’ reactions), weighing cost as a criterion, and then using operations-research models and simulations for measuring performance. The DynaRank spreadsheet accommodates application of a wide range of analysis techniques and information across a diverse range of decision criteria.8 Developed by RAND analysts Dr. Paul Davis and Dr. Richard Hillestad, DynaRank has been used to address the highest-level strategy/major-force decisions down to decisions on what system best accomplishes a mission.

The DynaRank “scorecard” (fig. 1) shows four options for carrying out the mission of prompt global strike. Options or alternatives appear in rows. Goals, objectives, and criteria to consider (organizable in a hierarchy, with high-level goals on top and lower-level objectives, along with the criteria that measure them, below) appear in columns. The ratings entered in the cells may be detailed, objective data where appropriate or simply subjective judgments on a scale of one to five. Users can vary the weights applied to the criteria, and the differing opinions on ratings, to conduct sensitivity analysis, seeking options that consistently rank highest in aggregate score.

This illustrative MCDM scorecard compares four options (in rows) for improving the capability of prompt global strike. The columns list criteria for comparing the options, starting at the top with five high-level criteria: elimination of the current threat (destruction of a target), dissuasion of future threats (likely effect of this force change on an opponent’s actions), strategic agility (preference for options that not only help with prompt global strike for countering threats from weapons of mass destruction but also have value in other situations), political/diplomatic acceptability, and cost. These high-level criteria are then broken down into subobjectives or measurable criteria scored on a 0–100 scale.

The small number next to each criterion is its weight. Here, seven “points” appear at the top criteria level, so cost, with a weight of two, provides about 30 percent of the total score. Users can also adjust weights on how the lower-level criteria add up; for example, this analysis places more emphasis on destroying hardened, deeply buried targets. The MCDM scorecard then adds the weighted criteria scores and ranks the options. In this example, the option for enhanced B-2 bomber modifications scored highest, and the conventional-warhead ICBM lowest. The shades of gray help identify high- and low-ranking scores. Clearly, the conventional ICBM option did poorly in the “strategic agility” area. However, changing the weight on strategic agility from three to zero (eliminating it as a criterion) alters the scores but not the rankings of options: the bomber is still first, and the conventional ICBM last. This reflects the real value of the MCDM DSS approach—testing different views, ratings, and criteria weights to identify consistently superior options.
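The weighted-sum-and-rank mechanics described above can be sketched in a few lines. The option names below mirror the ones in figure 1, but the criteria weights and all 0–100 scores are invented for the sketch, not taken from the RAND analysis.

```python
# Illustrative MCDM scorecard: options in rows, weighted criteria in
# columns, weighted-sum scoring, and a weight-sensitivity check.
# All weights and scores are hypothetical stand-ins for figure 1's data.

WEIGHTS = {  # seven "points" at the top level, cost carrying two of them
    "threat_elimination": 1.0,
    "dissuasion": 0.5,
    "strategic_agility": 3.0,
    "political_acceptability": 0.5,
    "cost": 2.0,
}

OPTIONS = {  # criterion scores on a 0-100 scale (higher is better)
    "enhanced B-2 modifications": {"threat_elimination": 80, "dissuasion": 70,
                                   "strategic_agility": 90,
                                   "political_acceptability": 80, "cost": 70},
    "SSGN conversion":            {"threat_elimination": 70, "dissuasion": 60,
                                   "strategic_agility": 60,
                                   "political_acceptability": 70, "cost": 60},
    "hypersonic missile":         {"threat_elimination": 85, "dissuasion": 75,
                                   "strategic_agility": 50,
                                   "political_acceptability": 60, "cost": 60},
    "conventional ICBM":          {"threat_elimination": 90, "dissuasion": 60,
                                   "strategic_agility": 20,
                                   "political_acceptability": 40, "cost": 50},
}

def total(scores, weights):
    """Weighted sum of an option's criterion scores."""
    return sum(weights[c] * s for c, s in scores.items())

def rank(options, weights):
    """Options ordered best-first by weighted total."""
    return sorted(options, key=lambda o: total(options[o], weights), reverse=True)

baseline = rank(OPTIONS, WEIGHTS)
# Sensitivity check: zero out strategic agility and re-rank.
no_agility = rank(OPTIONS, {**WEIGHTS, "strategic_agility": 0.0})
```

In this sketch, as in the article’s example, zeroing the agility weight reshuffles the middle scores but leaves the bomber option first and the conventional ICBM last, illustrating what a consistently superior option looks like under sensitivity analysis.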

Though simple, the MCDM DSS can “contain” and weigh the effects of very detailed analysis. For example, the estimates of collateral damage under the criterion of current threat elimination come from detailed models of likely civilian casualties. Execution time for the criterion of prompt response may be based on detailed studies or general estimates. The MCDM DSS can also use subjective judgments for criteria such as the views of key allies or world opinion. And just as with an Excel spreadsheet in business, the DOD decision maker can “drill down” to find out what analysis and data generated a score.

The DynaRank MCDM DSS is not intended to “model” a decision or “compute” an answer outright. Rather, it is a flexible, capable tool designed to help decision makers consider the objectives of a decision and analyze (as well as shape and alter) alternatives to best meet those objectives. RAND’s Davis and Hillestad call for using this tool “in a dialogue with decisionmakers, allowing them to select the emphasis on criteria, observe the implications, and iterate the weighting . . . to study the implications of emphasis.”9 Proper selection of goals, objectives, and evaluation criteria is vital to using this DSS successfully. All parties involved need to question the criteria, weights, and metrics, which is feasible when the format is common and the process familiar to everyone. The GPS program’s use of MCDM, described in the accompanying sidebar, shows how a careful, transparent process can prove very helpful. By contrast, one of the key criticisms the Government Accountability Office leveled at the Army’s now-cancelled Future Combat Systems program was that the contractor, not the Army, developed the objectives and evaluation criteria.

Successful Use of MCDM in the Department of Defense

Military planners needed to choose between a variety of options with varying performance and costs for new global positioning system (GPS) technology. They also had to consider commercial users of GPS who had some conflicting objectives. What could have been a very difficult and contentious decision-making process ended up yielding a unanimous, amiable decision as a result of using an objective, quantifiable MCDM approach.

Using “Value Focused Thinking” and the “Analytic Hierarchy Process” (both elements of MCDM), they identified key performance measures for comparing alternatives. For each high-level “goal,” they identified second-tier “functions” and third-tier “tasks,” ending up with 48 measures. Different GPS customers were interviewed to obtain their recommended weights for criteria. Each group had 100 points to allocate to the various functions that the GPS system would perform.

This approach offered the following advantages:

• Identified the measures and data most important to collect and analyze

• Persuaded people through analysis

• Highlighted some counterintuitive results and trade-offs

• Mitigated bias by focusing the decision on performance measures

Key players said that the results were “surprising and gratifying” and key to getting different GPS user communities to agree unanimously on an alternative.

See Lt Col Lee Lehmkuhl, Maj David Lucia, and Col James Feldman, “Signals from Space: The Next-Generation Global Positioning System,” Military Operations Research 6, no. 4 (2001): 5–18.
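The sidebar’s weight-elicitation step lends itself to a short sketch. The community names, functions, and point allocations below are invented for illustration; the actual study used 48 measures and more stakeholder groups.

```python
# Hypothetical sketch of 100-point weight elicitation: each user community
# splits 100 points across system functions, and the allocations are
# averaged into a consensus weight vector. All names and figures invented.

ALLOCATIONS = {
    "military_aviation": {"accuracy": 40, "anti_jam": 35, "integrity": 15, "availability": 10},
    "civil_aviation":    {"accuracy": 25, "anti_jam": 5,  "integrity": 45, "availability": 25},
    "surveying":         {"accuracy": 60, "anti_jam": 5,  "integrity": 15, "availability": 20},
}

def consensus_weights(allocations):
    """Equal-voice average of each community's 100-point allocation."""
    functions = next(iter(allocations.values()))
    n = len(allocations)
    return {f: sum(a[f] for a in allocations.values()) / n for f in functions}

weights = consensus_weights(ALLOCATIONS)  # averaging preserves the 100-point total
```

Because every community spends exactly 100 points, the averaged consensus vector still sums to 100, so the elicited weights plug directly into a weighted-sum scorecard.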

After this “dynamic” look at (1) the different weights on objectives, (2) the review of ratings where disagreement or uncertainty occurs (perhaps rated by different groups), and (3) changes to some of the assumptions or ratings, users can save the MCDM scorecard selected as best and use it to document the decision and the performance results expected. This is precisely how the income statement is used in business. It is not just for planning a line of business and then submitting a budget for it. Once approved, the “plan” income statement is not left buried in a PowerPoint briefing (à la the DOD) but is compared frequently to the “actuals” to see how managers execute the plan—and to hold them accountable for results.

I saw the damage done by the DOD’s lack of a ubiquitous DSS tool like the business income spreadsheet when I tried to use DynaRank in the Pentagon: a three-star officer told me that it was too complex and detailed for senior executives and that there was no time to teach them how the DynaRank DSS worked. In business, however, executives routinely examine far more complex financial models and spreadsheets, usually “drilling down” into details to probe for bad assumptions and better understand key issues. Instead of multicriteria analysis, staffers briefed and leaders approved a multi-billion-dollar program change based on multicolor PowerPoint slides. The lack of a standard DSS greatly hinders the rigorous analysis and decision-making reviews common in business—and vital for instituting cost discipline and effectiveness. Some “operations research” analysts in the DOD do use MCDM, though (see the sidebar above).

RAND analysts have used MCDM for decades to brief senior decision makers; the author and his partner used the DynaRank MCDM tool to analyze a difficult issue in Iraq and brief Gen David Petraeus. His staff warned that it was too much and too technical to brief, but we insisted. General Petraeus concurred with the analysis, noted that he liked the methodology, and asked us to publish the study.10

The DOD’s diversity and enormous size do not preclude use of a common DSS—just as businesses in diverse industries all use a profit-and-loss system with the same basic format. Clearly, the DOD’s resource-management DSS must incorporate multiple objectives and criteria, but the DynaRank MCDM offers plenty of flexibility to handle the department’s diversity of issues.

An MCDM DSS would allow the Office of the Secretary of Defense (corporate) to dictate certain objectives and criteria that everyone must consider, while leaving different services and agencies at lower levels the flexibility to add criteria relevant to them. For example, the DOD policy of “cost as an independent variable” was adopted to encourage more attention to “cost-performance tradeoffs to achieve savings.”11 The MCDM DSS example for prompt global strike (fig. 1) showed two cost metrics included as independent variables, or criteria, for the decision. Yet the DOD has never really made such cost trade-offs, owing to problems noted later in this article and to the lack of a DSS like DynaRank that facilitates and can force the consideration of cost as an independent variable. Absent a mandated DSS with mandated use of cost as a criterion in MCDM, a bureaucracy (actually a collection of often-competing bureaucracies) like the DOD won’t change.

This MCDM tool is also a great way to help the department consider risk. It allows adding (and mandating) the consideration of several different defense-planning scenarios and types of conflict. (See the four different scenarios depicted in fig. 1.) The resulting scorecards are also an exceptional tool for doing sensitivity analysis—comparing alternatives across “worst case” as well as “expected” cost and performance estimates.
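One hedged way to operationalize the risk and sensitivity checks just described: score each option per defense-planning scenario, then compare expected (average) and worst-case (minimum) performance. The scenario names and scores below are invented for illustration.

```python
# Scenario-robustness sketch: an option that scores well on both its
# average and its minimum across planning scenarios is preferred to a
# high-peak, brittle one. All figures are illustrative.

SCENARIO_SCORES = {
    "option_A": {"major_war": 80, "regional_crisis": 70, "wmd_threat": 60, "gray_zone": 75},
    "option_B": {"major_war": 95, "regional_crisis": 40, "wmd_threat": 85, "gray_zone": 30},
}

def expected(scores):
    """Equal-likelihood average across scenarios."""
    return sum(scores.values()) / len(scores)

def worst_case(scores):
    """Minimum score across scenarios, the 'worst case' column."""
    return min(scores.values())
```

In this toy comparison, option_A beats option_B on both the expected score (71.25 versus 62.5) and the worst case (60 versus 30) despite B’s higher peak in a major war, the kind of trade-off the scorecard surfaces.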

Despite the DOD’s size and diversity, we can use a flexible DSS consistently in resource management. Governments run on standard systems.12 DynaRank could work in the Joint Capabilities Integration and Development System (JCIDS) and in PPBE processes as both the main analysis tool and DSS to make key resource-management decisions. The DynaRank MCDM DSS would provide a record to track performance, support accountability, do program evaluation, and learn from mistakes. This is especially important in the DOD since both military and civilian leaders tend to change every two to three years.

A common, consistently used, widely understood MCDM DSS offers many other advantages:

• Identification of issues and areas of disagreement that do not matter. Using an MCDM DSS to input opposing views on ratings and criteria often shows that irreconcilable differences are irrelevant to the decision because some scores change but not the overall ranking.

• Elimination of “groupthink.”13 When users write down subjective ideas and fuzzy criteria in a DSS and rate them, they find it easier to “see” questionable assumptions and to challenge them by questioning the rating rather than questioning some forceful speaker in a verbal discussion or PowerPoint briefing.

• Reduction of common human errors in decision making such as the “danger inherent in all analytic approaches, . . . the tendency to close on strategies prematurely: to skip past the creative but uncomfortable stage of inventing new models or strategies.”14 The discipline, transparency, and rigor that a DSS helps to impose are critical because of the innate tendency to judge probabilities, emotions, and irrationality poorly—overreacting to new information and a host of other inadequate “seat of the pants” decision-making practices.15

• Enabling of faster interagency decision making during a crisis.16 Different agencies can independently rate the courses of action and then compare results wherever major disagreements in ratings or criteria weights arise.

• Simplification of conducting audits, checking decisions, and doing program evaluations by having objectives, criteria, ratings, and alternatives clearly laid out.

• Assistance in breaking down “stovepipes” that lead to duplication and waste. The common language, software, and formats enable better collaboration and information sharing. Business rarely starts with a blank spreadsheet.

The DOD may never achieve the cost-effectiveness of a Procter and Gamble or General Electric, but it can do much better than its current situation. To bring focus on costs and capabilities and to improve the state of analysis and decision making, the department needs to mandate use of one MCDM tool as a standard DSS for resource-management decision making.

Implementing Cost-Effective, Capabilities-Based Management
in the Department of Defense

The Planning, Programming, and Budgeting System has changed little since Robert McNamara served as secretary of defense, though it is now called PPBE, the added “E” standing for “execution”—a goal of devoting more attention to how we actually spend money and, hopefully, reach desired objectives. Different offices may manage different parts of PPBE in a series of stovepiped processes, but there must be a single basis for analysis and decisions—a single entity for PPBE. The current potpourri of thousands of program elements and constantly changing programming constructs—and then budgeting and execution by operational units—precludes accountability, hinders analysis and decision making, and yields poor cost-effectiveness. This article proposes a new construct for conducting PPBE that would provide the consistency and accountability needed for cost-effective resource management in the DOD.

The department shifted to capabilities-based planning in 2001 to emphasize building more flexible forces with a better likelihood of success in responding to a wide range of uncertain future threats. A 2002 report to the DOD’s Senior Executive Council noted many problems in implementing capabilities-based management due to the lack of a DSS and an integrated framework.17 In 2004 the Joint Defense Capabilities Study (the “Aldridge Study”) called for a “‘capabilities culture’ that simultaneously considers costs and needs.”18 DOD decisions may affect dozens of military capabilities and deal with over 100 types of DOD organizations and military units, all budgeted via thousands of program elements—the building blocks of the PPBE process. To integrate capability-management efforts and drive cost-effectiveness improvements, the DOD needs decision making based on Capability ROI—analyzing, deciding upon, and tracking issues via the MCDM scorecard based on capability delivery groups.

CDGs, a new construct, would allow PPBE to replace the ineffective practice of budgeting by 6,000 program elements (primarily weapon systems) and hundreds of organizational budgeting entities. Operational units such as the fighter wing (fig. 2) would serve as the basis of CDGs, with the costs of headquarters and supporting elements allocated to them using activity-based costing. The DOD would plan, program, and budget by CDGs, which are built on the primary operational units that actually execute operations and the budget. Doing so would forge a much better link between capabilities planning, programming, budgeting, and performance reporting, and CDGs would be a much better entity for data feedback on budget execution and accountability.19
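The activity-based-costing step, allocating headquarters and support costs to operating units, reduces to a proportional split over a cost driver. The cost pool, driver, and unit names below are hypothetical.

```python
# Minimal activity-based-costing sketch: a support activity's cost pool
# is allocated to capability delivery groups in proportion to each CDG's
# consumption of the activity's cost driver (here, flying hours).
# Pool size, driver, and CDG names are invented for illustration.

def allocate(cost_pool, driver_usage):
    """Split cost_pool across CDGs proportional to cost-driver usage."""
    total = sum(driver_usage.values())
    return {cdg: cost_pool * use / total for cdg, use in driver_usage.items()}

# e.g., a $12M depot-maintenance pool split by flying hours consumed:
shares = allocate(12_000_000, {"fighter_wing_A": 3000, "fighter_wing_B": 1000})
# fighter_wing_A bears $9M of the pool, fighter_wing_B $3M
```

Because the shares always sum back to the original pool, every support dollar lands in exactly one CDG’s fully costed budget, which is what makes CDG-versus-CDG cost comparison meaningful.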

Capabilities-based management using Capability ROI focuses on fulfilling capability needs established through the JCIDS process while balancing risks across mission areas. The early part of this process now largely ignores cost-effectiveness; cost considerations arise later, at the “analysis of alternatives” stage, often limited to expensive options designed without cost-effectiveness in mind. The lack of an easy-to-use, widely understood, spreadsheet-like DSS makes it difficult to consider cost issues early in JCIDS. To design more cost-effective capability options, the DOD must involve budget and cost experts earlier in the JCIDS process, using the common MCDM DSS with cost as a criterion and focusing on a Capability ROI “bottom line” from the start.

CDGs would compete in PPBE program/budget reviews, with the Office of the Secretary of Defense selecting CDGs with the best ROI to fill priority gaps. CDGs would serve as a line of business with real accountability and strong incentives to minimize costs to compete better against other CDGs and win funding (fig. 3).

The federal government is pushing performance reporting, but setting goals and measuring outcomes are not enough to produce good management in the DOD because there is no chain of accountability to hold a specific organization or its leader responsible for results. Rather, we have

• numerous plans at many organizational levels,

• program analysis by weapon system or for some capability,

• a budget-by-component organization structure that often does not match a weapon system / program or operational unit that delivers the capability, and

• failure to weigh/consider/budget for all the associated support/infrastructure costs that should rise or fall, based on changes in other DOD organizations’ use of them.

So when results are disappointing, what do we blame?

• Bad plans?

• Wrong programming, analysis, or promises?

• Inadequate budget?

• Poor execution of great plans and budgets?

Unless the same organization is the basis for the planning, programming, budgeting, and operational/budget execution, there can be no accountability for delivering promised results. Instead, we have a budget-focused process with incentives to overpromise on performance and fully spend every penny budgeted.

The DOD’s reliance on PowerPoint and seat-of-the-pants decision making, with planning, programming, and budgeting by different organizations and managers, yields no accountability and poor cost-effectiveness. Decisions made and “documented” via a PowerPoint briefing do not leave a usable “scorecard” to compare plans and promises to results. A weapon system doesn’t operate on its own, and the mishmash of new acquisition programs, old program elements, and operating organizations yields confusion and cover.20 It’s an ideal system for avoiding blame. One organization plans, others acquire equipment and systems, another office does the budgeting, and then different operating units execute the budget. Combining this diffusion of functional and operational decision makers with the lack of a standard, consistent spreadsheet portrayal of “this is what you promised to do / this is what you’ve actually done” allows perfect deniability. Former DOD officials—including Anthony Cordesman, now with the Center for Strategic and International Studies—note that the department “has been locked into a ‘liar’s contest’ at the level of defense contractors, program managers, every military service, and the Office of the Secretary of Defense where no one is really held accountable.”21

Many pieces of a capabilities-based management system for the DOD are developing, but without CDGs and an overarching, common tool for analysis and decision making like DynaRank, they won’t survive or achieve integration and cost-effectiveness. Air Force Materiel Command (AFMC) and US Air Forces in Europe (USAFE) have successfully used activity-based cost accounting and capabilities-based planning and budgeting.22 AFMC succeeded in lowering unit costs and persuading “cost centers” to spend less than their budgets in pursuit of better cost-effectiveness.23 USAFE implemented capabilities-based programming.24 But without a commonly used DSS, a consistent basis for analysis and accountability at all stages of PPBE, and all nine enablers of cost-effectiveness, these isolated efforts will not succeed.

Building accountability and incentives to save budget rather than fully spend a budget won’t happen unless spending is tied to the operating entity (CDG) that competes in the PPBE process, using Capability ROI. It is critical that we base the costs used in CDG competition in PPBE not on what the CDG claims it can deliver in cost-effectiveness performance, but on what it actually cost the CDG to perform in the past year of budget execution. This will give operating units / CDGs the incentive not to “use it or lose it” but to underspend their budgets to attain a higher Capability ROI and position themselves for better success in upcoming CDG budget competitions. Supporting services and agencies will also have incentives to cut spending and lower costs or face operating-unit customers that reject their support (and thus allocated costs) as too expensive for the capability they add.
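The incentive described above can be illustrated with a toy Capability ROI ratio: capability delivered per dollar, computed from actual prior-year execution costs rather than claimed budget figures. The metric’s exact form and every number below are hypothetical.

```python
# Toy "Capability ROI" sketch: capability points per $M actually spent in
# the prior execution year. Underspending while holding capability constant
# raises the ratio carried into the next budget competition. Illustrative
# only; the article does not prescribe this exact formula.

def capability_roi(capability_score, actual_cost_millions):
    """Capability points per $M of actual (not claimed) spending."""
    return capability_score / actual_cost_millions

# A CDG that delivered a capability score of 85:
roi_if_underspent = capability_roi(85, 90.0)    # spent $90M of a $100M budget
roi_if_full_spend = capability_roi(85, 100.0)   # spent every budgeted dollar
```

Under this rule the underspending CDG shows the higher ROI, reversing the “use it or lose it” incentive: hoarding and spending every last dollar now worsens, rather than protects, a unit’s standing in the next competition.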

Committing to a process of CDGs and Capability ROI as the basis for DOD resource management, using a common DynaRank DSS, matters more than waiting for a perfect, precise, or complete system. It will be difficult to set up and calculate capabilities-based ROI across hundreds of the DOD’s CDGs. But if we roll out and enforce a standard analysis framework and MCDM DSS, the huge analysis, planning, programming, and budgeting staffs in the department can do this. Results will improve rapidly as they share lessons learned from using a common approach and DSS.

Finally, we must dispel the belief that the DOD is simply too big, complex, and diverse to lend itself to cost-effective management. It surely is big, but multinational corporations with more diverse lines of business than the DOD’s and operations in over 100 countries manage to consolidate business systems and use common DSS approaches cost-effectively. The DOD has multiple objectives and criteria to consider in its decisions, and an MCDM DSS is clearly essential. RAND’s Dr. Paul K. Davis concludes that “it is possible to go from the high concepts of grand strategy down to the nitty-gritty issues of economic choice using one intellectual framework. There is no guarantee that this process of working up and down the ladder of choice will be easy. But it is both feasible and desirable—given strong management, good will, and participation by senior leaders of the defense community.”25

This MCDM DSS, using CDGs and a Capability ROI framework for decision making, will never perfectly capture all the issues and information that might be considered in DOD resource management. Nor will it stop the political interference and problems injected by congressmen who push their favorite weapon systems and pork projects. But explicitly laying out and organizing the multiple objectives, showing the uncertainties, using a common approach for measuring different types of military capability, estimating metrics even when very subjective, and recording the final rationale for the decision in a spreadsheet for accountability will improve analysis and cost-effectiveness. The overall approach outlined here offers a good way to do the “dynamic” analysis that RAND’s Davis and Hillestad advocate to let decision makers take the intuitive shortcuts and simplifications they use to make difficult prioritization, risk, and trade-off decisions. The MCDM DSS, CDG, and Capability ROI approach proposed in this article is not perfect, but no other contender exists now, other than business as usual. It is vital that we pick some improved approach and require everyone in the DOD to use it for analysis and decision making. With widespread and common use, great improvement in the system will come over time.

Many successful chief executive officers have worked in the Pentagon. Some generals have been hired to manage and improve logistics in business.26 We have very capable, dedicated military and civilian employees working these issues, but they are fatally handicapped by the lack of a common DSS such as the business income statement and spreadsheet. Efforts to make the DOD cost-effective will not succeed until the structure and framework for decision making are transformed into a consistent process, with the same organizational entity used in all steps of planning, programming, budgeting, and execution (PPBE). Our fighting men and women and our taxpayers need to recover some of the estimated 25 percent of DOD spending now wasted. It is long past time for rigorous use of an MCDM tool as part of a Capability ROI–managed, accountable, and cost-effective Department of Defense. ✪

[Photograph: Bagram Airfield, Afghanistan]


Notes

1. William Matthews, “Obama’s Partial Victory: Minus Some ‘Wasteful Projects,’ U.S. Spending Bill Remains Massive,” Defense News, 2 November 2009, http://www.defensenews.com/story.php?i=4354391 (accessed 14 December 2009).

2. Quoted in Robert A. Watson and Ben Brown, The Most Effective Organization in the U.S.: Leadership Secrets of the Salvation Army (New York: Crown Business, 2001) (see book cover).

3. See NBC Nightly News, 10 September 2001, http://icue.nbcunifiles.com/icue/files/icue/site/pdf/3551.pdf (accessed 14 December 2009).

4. A not-for-profit corporation or “think tank,” RAND developed the original Planning, Programming, and Budgeting System methodology and MCDM tools for the DOD’s use. See Richard J. Hillestad and Paul K. Davis, Resource Allocation for the New Defense Strategy: The DynaRank Decision-Support System (Santa Monica, CA: RAND, 1998), http://www.rand.org/pubs/monograph_reports/MR996/ (accessed 14 December 2009); and Paul K. Davis and Paul Dreyer, RAND’s Portfolio Analysis Tool (PAT): Theory, Methods, and Reference Manual (Santa Monica, CA: RAND, 2009), http://www.rand.org/pubs/technical_reports/2009/RAND_TR756.sum.pdf (accessed 14 December 2009).

5. Russ Banham and Sam Knox, “When Do Companies Outgrow Their Spreadsheets?” CFO Research Services, 1 June 2004; Robert Kugel, “A Better Spreadsheet Alternative,” Intelligent Enterprise, 20 May 2005, http://intelligent-enterprise.informationweek.com/showArticle.jhtml;jsessionid=D321T4AWB2TRLQE1GHRSKH4ATMY32JVN?articleID=163106301 (accessed 14 December 2009); and Grenville J. Croll, “The Importance and Criticality of Spreadsheets in the City of London,” European Spreadsheet Risks Interest Group, 2005, [4], http://arxiv.org/pdf/0709.4063v2 (accessed 14 December 2009).

6. Maj Gen Stephen R. Lorenz, Lt Col James A. Hubert, and Maj Keith H. Maxwell, “Linking Resource Allocation to Performance Management and Strategic Planning: An Air Force Challenge,” Aerospace Power Journal 15, no. 4 (Winter 2001): 34–35, http://www.airpower.au.af.mil/airchronicles/apj/apj01/win01/win01.pdf (accessed 14 December 2009).

7. “Strategic Planning and Budgeting Domain Overview Briefing,” DOD Business Management Modernization Program, August 2004.

8. See Hillestad and Davis, Resource Allocation for the New Defense Strategy.

9. Ibid., 38.

10. Mason Brooks and Drew Miller, “Inside the Detention Camps: A New Campaign in Iraq,” Joint Force Quarterly, 1st Quarter 2009, 129–33, http://www.ndu.edu/inss/Press/jfq_pages/editions/i52/25.pdf (accessed 14 December 2009).

11. David S. C. Chu and Nurith Berstein, “Decisionmaking for Defense,” in New Challenges, New Tools for Defense Decisionmaking, ed. Stuart E. Johnson, Martin C. Libicki, and Gregory F. Treverton (Santa Monica, CA: RAND, 2003), 26, http://www.rand.org/pubs/monograph_reports/MR1576/MR1576.ch1.pdf (accessed 14 December 2009).

12. “Limitation of Systems,” Bob Behn’s Public Management Report 3, no. 9 (May 2006): 1–2, http://www.ksg.harvard.edu/thebehnreport/May2006.pdf (accessed 14 December 2009).

13. See Irving L. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascos, 2d ed., rev. (Boston: Houghton Mifflin, 1982).

14. Henry Mintzberg, The Rise and Fall of Strategic Planning: Reconceiving Roles for Planning, Plans, Planners (New York: Free Press, 1994), 185.

15. For a good summary of these common decision-making errors, see Herbert A. Simon and Associates, “Report of the Research Briefing Panel on Decision Making and Problem Solving,” in Committee on Science, Engineering, and Public Policy, Research Briefings 1986: Report of the Research Briefing Panel on Decision Making and Problem Solving (Washington, DC: National Academy Press, 1986), 17–35, http://www.netlibrary.com/urlapi.asp?action=summary&v=1&bookid=14732 (accessed 15 December 2009).

16. Drew Miller, Institute for Defense Analyses, Using Decision Trees as a Decision Support System for EBO Analysis and Rapid COA Analysis and Selection, 2003. (Available from the author at drmiller@drewmiller.com)

17. “Report to the Senior Executive Council on Streamlining Decision Process,” item no. 20, Defense Planning Guidance, 2004–9, October 2002.

18. Department of Defense, Joint Defense Capabilities Study: Improving DoD Strategic Planning, Resourcing and Execution to Satisfy Joint Capabilities (Washington, DC: Department of Defense, January 2004), B-12, http://www.worldcat.org/wcpa/oclc/56134236?page=frame&url=http%3A%2F%2Fpurl.access.gpo.gov%2FGPO%2FLPS52753%26checksum%3Df2bfe7764aba17f961c9c53aed657ee2&title=&linktype=digitalObject&detail= (accessed 15 December 2009).

19. Drew Miller, “Using DynaRank for Effects-Based Operations and Capabilities-Based Tradeoff Analysis” (presentation to the Military Operations Research Society Symposium, US Military Academy, West Point, NY, June 2005).

20. Ibid.

21. Anthony H. Cordesman and Paul S. Frederiksen, Is Defense Transformation Affordable? Cost Escalation in Major Weapons Programs (Washington, DC: Center for Strategic and International Studies, 27 June 2006), 4, http://www.comw.org/rma/fulltext/0606cordesman.pdf (accessed 15 December 2009).

22. Michael Barzelay and Fred Thompson, Efficiency Counts: Developing the Capacity to Manage Costs at Air Force Materiel Command (Arlington, VA: IBM Center for the Business of Government, August 2003), http://www.businessofgovernment.org/pdfs/barzelay_report_fms.pdf (accessed 15 December 2009); Sean Wilson, USAFE/A5PE, “USAFE POM Process,” PowerPoint Briefing, 2003; and Robert Millman, USAFE/A5PE, “FY07 APOM Capability Program Deliverables,” PowerPoint Briefing, 2003.

23. Barzelay and Thompson, Efficiency Counts.

24. Lorenz, Hubert, and Maxwell, “Linking Resource Allocation”; and Philip J. Candreva, “Accounting for Transformation,” Armed Forces Comptroller 49, no. 4 (Fall 2004): 11, http://web.ebscohost.com/ehost/viewarticle?data=dGJyMPPp44rp2%2fdV0%2bnjisfk5Ie41%2beK8%2bTnjqzj34HspOOA7enyWK%2borUmzpbBIrq%2beSa6ws064qrY4v8OkjPDX7Ivf2fKB7eTnfLujtUqvr7NRr6a1PurX7H%2b72%2bw%2b4ti7hfLepIzf3btZzJzfhruntEi3qrFKs5zkh%2fDj34y75uJ%2bxOvqhNLb9owA&hid=101 (accessed 15 December 2009).

25. Paul K. Davis, “Uncertainty-Sensitive Planning,” in New Challenges, New Tools, 155, http://www.rand.org/pubs/monograph_reports/MR1576/MR1576.ch5.pdf (accessed 15 December 2009).

26. For example, Lt Gen William Gus Pagonis, USA, director of logistics during the Gulf War of 1991, was hired by Sears in 1993 to fix its logistics system. He retired from Sears in 2004 as president of its logistics services.


The conclusions and opinions expressed in this document are those of the author, cultivated in the freedom-of-expression academic environment of Air University. They do not reflect the official position of the US government, the Department of Defense, the United States Air Force, or Air University.
