Cost Growth in Major Defense Acquisition: Is There a Problem? Is There a Solution?



Author: William D. O’Neil

Cost growth in defense acquisition is both a problem in its own right and part of the larger phenomenon of programs that fail to perform as intended or desired. It is a limited but persistent phenomenon, which has not improved in any material respect over at least the past four decades; nor is it unique to defense, and it can flow from a variety of causes. A limited group of similar remedies has repeatedly been tried, but has achieved very little success due to lack of clear analysis of underlying causes. Research points to a corrective technique, “taking the outside view” or “reference class forecasting,” with clear promise for attacking the root problems.


The reasons for concern about cost growth in terms of its influence on Department of Defense (DoD) programs were succinctly reviewed by Mark F. Cancian (2010). In this article, I address cost growth in defense acquisition both as a problem in its own right and as a part of the larger phenomenon of programs that fail to perform as intended or desired. I show in turn that: (a) it is a limited but persistent phenomenon, which has not improved in any material respect over at least the past four decades; (b) it is not unique to defense; (c) cost growth may flow from a variety of causes—including errors in the management or contracting process—but defects in the original concept are a very common cause; (d) a limited group of similar remedies have repeatedly been tried but achieved very little success due to lack of clear analysis of underlying causes; and (e) research by social and management scientists points to a corrective technique, “taking the outside view” or “reference class forecasting,” which has a sound theoretical basis and a limited but significant record of success in nondefense applications as well as specific defense areas. I conclude that reference class forecasting and its supporting analysis and data collection bases should be more widely adopted in defense acquisition, and particularly in early evaluation and delineation of technical issues.

A Limited, But Persistent Problem

In the United States, the modern era of concern about defense program cost and results can fairly be said to have started in the late 1960s and early 1970s. Congress began demanding Selected Acquisition Reports (SARs) to provide much better and more comprehensive reporting of the costs of Major Defense Acquisition Programs (MDAPs) (Cancian, 2010). DoD instituted reforms, including establishment of the Cost Analysis Improvement Group (CAIG) (Srull, 1998, pp. 5–17), presently a statutory constituent of the Office of Cost Assessment and Program Evaluation.
Congress has repeatedly revised the laws governing MDAPs, while DoD has gone through more than a dozen substantively different generations of its 5000-series acquisition regulations since the first versions were issued in July 1971 (Ferrara, 1996). The Obama Administration followed its predecessors in instituting a spectrum of reforms and initiatives aimed at acquisition improvement, and one of the incoming president’s early acts was to sign into law the Weapon Systems Acquisition Reform Act of 2009 (Pub. L. 111-23).

The Statistical Record of Cost Growth in DoD

Examination of successive annual SARs shows that when significant cost growth does occur, its full magnitude rarely is apparent for several years following program initiation, and frequently not for 10 years or more—even leaving aside growth from increased ultimate production quantities. Thus, it will be years before the real results of these new initiatives can be objectively assessed. Indeed, assessments are difficult to make well even long after the fact. But the best and most comprehensive assessment of MDAP cost growth to date has concluded that, up through programs that started officially as late as the mid-1990s, none of the reforms since the first batch in the early 1970s had any major overall effect in reducing cost growth. That assessment, a study by Dr. David L. McNicol (2005, pp. 18–19), former chairman of the CAIG and now with the Institute for Defense Analyses (IDA), deals principally with procurement, with very limited detail on development. While the study does not include more recent results that might reflect reforms undertaken early in the 2000s, I will show that the limited data available give no indication of improvement.

This is only one of a number of analyses that attempt to determine trends in defense acquisition cost growth. Others of relatively recent date (Arena, Leonard, Murray, & Younossi, 2006; Christensen, Searle, & Vickery, 1999; Sipple, White, & Greiner, 2004; Smirnoff & Hicks, 2008) employ various statistical techniques, but all work from the historical SAR database extending back to December 1969, with its many analytical pitfalls. Hough (1992) identified the most notable problems as: (a) failure of some programs to use a consistent baseline cost estimate, (b) exclusion of some significant elements of cost, (c) exclusion of certain classes of major programs, (d) constantly changing preparation guidelines, (e) inconsistent interpretation of preparation guidelines across programs, (f) unknown and variable funding levels for program risk, (g) cost sharing in joint programs, and (h) reporting of effects of cost changes rather than their root causes.

McNicol (2005) used a variety of approaches to avoid or mitigate the effects of these pitfalls. He started with data refined by adjusting all values to constant 2000 price levels and constant quantities, pruning entries not really relevant to rigorous and consistent statistical analysis of cost growth, employing a refined categorization of individual cost increases to distinguish meaningful trends, and further adjusting the data to account for decisions to change requirements or budgets. Then he used the standard econometric technique of Ordinary Least Squares Regression analysis of panel data to estimate the magnitude and significance of a wide variety of causative influences on cost growth. While all of the analyses agree that over time no major change in cost growth has resulted from numerous reform efforts, McNicol (2005) best and most rigorously isolated the specifics; accordingly, I largely follow his lead in analysis of causes.
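
As a rough illustration of the kind of pooled regression involved (this is not McNicol’s actual model, data set, or variable list), the following Python sketch regresses program cost-growth percentages on a few hypothetical explanatory factors; every figure and column name in it is invented for illustration.

    # Illustrative pooled OLS regression of cost growth on candidate explanatory
    # factors. The data frame and columns are hypothetical; McNicol's actual data
    # set and model specification are not reproduced here.
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical panel of MDAPs: one row per program.
    programs = pd.DataFrame({
        "cost_growth_pct": [5.0, 62.0, -3.0, 18.0, 110.0, 9.0],  # growth vs. baseline
        "emd_years":       [4,   9,    5,    6,    11,    5],    # length of EMD
        "reform_era":      [0,   0,    1,    1,    1,     0],    # 1 = post-reform baseline
        "fixed_price_dev": [0,   1,    0,    0,    1,     0],    # 1 = price-competitive development
    })

    y = programs["cost_growth_pct"]
    X = sm.add_constant(programs[["emd_years", "reform_era", "fixed_price_dev"]])

    model = sm.OLS(y, X).fit()
    print(model.summary())  # coefficients, t-statistics, and fit diagnostics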

One other major study has examined the SAR data with similar care to provide clear insight into root causes (Bolten, Leonard, Arena, Younossi, & Sollinger, 2008), but it limits its scope to fewer than one-third as many programs as McNicol (2005) covered, making McNicol’s the superior choice for purposes of this study.

After pruning, McNicol was left with 138 MDAPs that passed their Milestone II or Milestone B (marking formal approval as programs and entry into engineering and manufacturing development [EMD], and approval of a baseline cost estimate) between the beginning of 1970 and the end of 1997. At the most summary level, his data are plotted in the Figure as the solid line showing the distribution of average procurement unit cost (APUC) variance from baseline estimate. While few programs exactly met their initial procurement cost estimates, three-quarters of them came reasonably close. It is the smaller number of very high-growth programs, representing roughly one-quarter of the whole, that contributed the great bulk of overall cost growth.

Figure. Procurement Unit Cost Growth of MDAPs Initially Approved Between 1970 and 1997, and Program Acquisition Unit Cost Growth of Those Approved Between 1998 and 2006.

Extending the Statistical Record

McNicol’s (2005, p. 45) data did not extend past the end of 1997. Regrettably, resources to update the data set he currently uses, which would permit reanalysis, have not been forthcoming. Using raw gross data from the most recent SAR summary tables (DoD, 2009a), however, I have calculated the program acquisition unit cost (PAUC) variances for those programs with baselines between 1998 and 2006 and plotted the distribution of these as the series of discrete green squares in the Figure. These points represent only those programs that had their initial development estimates in this period, have nonzero procurement quantity, and have a minimum of 3 years in EMD since the initial development estimate—all for the greatest possible consistency with the series from McNicol (2005). The most notable remaining gross-level inconsistency is that the PAUC data include development and military construction costs rather than solely procurement costs as detailed in McNicol (2005), and development costs on the whole are known to show higher cost growth (McNicol, 2005, p. 17). But the effect of this is mitigated because, in general, procurement cost outweighs development cost by 4:1 (McNicol, 2005, p. 4).
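
For readers who wish to reproduce the gross-level calculation from the published SAR summary tables, the arithmetic is simply the percentage change of the current unit-cost estimate from the baseline estimate, with both stated in the same base-year dollars. The sketch below shows that calculation; the figures in it are invented, not drawn from any SAR.

    # Percentage variance of program acquisition unit cost (PAUC) from baseline.
    # Figures are illustrative only; real values come from the SAR summary tables,
    # with both estimates expressed in the same base-year dollars.
    def pauc_variance_pct(baseline_pauc: float, current_pauc: float) -> float:
        """Percent growth of the current PAUC estimate over the baseline estimate."""
        return 100.0 * (current_pauc - baseline_pauc) / baseline_pauc

    # Hypothetical examples (base-year $M per unit).
    print(round(pauc_variance_pct(55.0, 58.0), 1))   # modest growth: 5.5 percent
    print(round(pauc_variance_pct(12.0, 32.0), 1))   # high-growth outlier: 166.7 percent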

Clearly, the two distributions plotted in the Figure show the same general character, with that for the more recent period having generally higher growth in the upper quartile. A two-sample, two-tailed Kolmogorov-Smirnov test cannot reject the hypothesis that the 1998–2006 sample is drawn from the same distribution as the 1970–1997 sample, even at a lenient 20 percent significance level (p = 0.638). Because of the differences in the two data sets, we must not read too much into this result, but the statistical test reveals no evidence even hinting at secular improvement in control of cost growth, at least through 2006.
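
The comparison itself is straightforward to reproduce with standard statistical tools. The sketch below applies the two-sample Kolmogorov-Smirnov test to two hypothetical arrays of unit-cost-growth percentages standing in for the 1970–1997 and 1998–2006 samples; the numbers are invented for illustration.

    # Two-sample Kolmogorov-Smirnov test of whether two cost-growth samples could
    # plausibly come from the same underlying distribution. The arrays below are
    # illustrative stand-ins, not the actual SAR-derived samples.
    from scipy import stats

    growth_1970_1997 = [2, -5, 12, 30, 75, 8, 0, 140, 22, 4, 55, -3]
    growth_1998_2006 = [6, 0, 25, 90, 10, -2, 160, 33]

    result = stats.ks_2samp(growth_1970_1997, growth_1998_2006)
    # A large p-value means the test cannot reject the hypothesis that both
    # samples come from the same distribution, i.e., no detectable change.
    print(f"KS statistic = {result.statistic:.3f}, p = {result.pvalue:.3f}")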

In both the earlier and later samples, we see that roughly three-quarters of the included MDAPs have reasonably satisfactory cost growth histories, with no more than 30 percent growth and average growth near zero. Excessive cost growth affects only a minority of programs.

To obtain a statistically consistent sample, the results shown in the Figure put aside programs that are terminated early, that are radically restructured, or that follow significantly nonstandard development paths. Recent examples include the Army Future Combat System and Navy Littoral Combat Ship. Such programs often have high cost growth and thus cannot be neglected in considering effects and cures, but their omission does little to affect the overall statistical picture. Some of the 1990s-era programs shown could well experience further cost growth, since the most seriously troubled programs tend to involve considerable extensions in development. Two notable examples are the 4-year slip in the schedule for completion of the F-35 Joint Strike Fighter EMD (DoD, 2010) and the approximately 9-year slip in the schedule for the Space Based Infrared System (SBIRS) High (DoD, 2008).

Public discussions of defense cost growth often make it seem like a problem unique to DoD, but this gives a distorted impression that impedes accurate understanding and effective correction. In fact, complex programs throughout government and private industry are very prone to cost growth (Flyvbjerg, Bruzelius, & Rothengatter, 2003; Lovallo & Kahneman, 2003; Merrow, Phillips, & Myers, 1981; New York Times, 2011).

The Futility of Relying on Price Competition

Every incoming DoD administration has made efforts to improve the management of acquisition, with control of cost growth usually a prominent declared objective. But to a very great extent, lack of accurate diagnosis of causes has undermined these initiatives. Notably, a review of a pair of foundational studies of defense acquisition performed half a century ago by Merton J. Peck and Frederic M. Scherer of the Harvard Business School reveals significant issues still largely unaddressed by intervening management efforts (Peck & Scherer, 1962; Scherer, 1964). In particular, Peck and Scherer (1962) argued at length that price competition—a wide favorite for controlling costs—is bound to be largely ineffective in major defense system acquisition, and very likely counterproductive.

Nevertheless, officials have repeatedly emphasized price competition in acquisition. They have advocated price competition under a variety of banners, with the common element being an attempt to include a firm commitment regarding production of at least the initial lots as an important element in selecting the development contractor, thus transferring the risk of cost growth to the contractor.

In principle this seems sound and businesslike. Cases exist where it has seemed to work reasonably well, but only in limited circumstances. The six cases of this approach that were covered in McNicol (2005) all had especially high cost growth, putting them in the upper quartile, as shown in the Figure. A more recent example is SBIRS High, which attempted a modified version of this strategy and has suffered especially great cost growth, with more than 175 percent reported (DoD, 2010).

Attempts to transfer the risks of cost growth to the contractors fail in much the same way that the nation’s banking system collapsed in 2008, and for broadly parallel reasons. Even though the remaining major defense contractors are at little risk of being allowed to go out of business, the fortunes of their individual business units can fluctuate a great deal. Their managers can and fairly frequently do suffer diminished career prospects and even job loss when things go wrong—a powerful negative motivation. But they face a painful dilemma. If they promise too much, then they may come to regret it in a few years. Yet, if they promise too little, they will lose out at once to a competitor. In such circumstances, the incentives weigh heavily on the side of accepting future risks rather than immediate ones, for one can always hope for some redemptive development in the meantime.

The critical faculties of the corporate leaders who must ultimately approve the offer are blunted by the knowledge that they command an organization too big and vital to be allowed to fail. A program filled with problems may cause pain, but not corporate destruction. And like their subordinate business unit managers, they may well hope for some future deliverance.

No plausible threats of retribution for distant problems, however dire, can go far to offset these mechanisms. In principle the government can reject offers deemed unrealistic, as it does when offerors omit some significant element or make a demonstrable error. But a source selection authority (SSA) cannot simply substitute his or her own judgment for the contractor’s regarding prospective improvement or advances in development or production. Even at best, attempting to distinguish degrees of realism among competing proposals is, in many cases, fraught with unforeseen difficulties.

If the contractor is to be held responsible, the government must allow it much autonomy and authority. In programs where price competition is not central, the government may step in and provide essential assistance and direction when a contractor encounters difficult problems, but this is inconsistent with holding the contractor responsible. Individual case studies of such programs often show contractors running into trouble while responsible officials hesitate to intervene. Most detailed case studies contain sensitive information and remain unpublished, but this effect can be clearly seen in Whittle (2010) and Younossi et al. (2008).

Other Inadequate Explanations and Solutions

Sometimes problems may be solved, or at least improved, without thoroughly analyzing their causes. After four decades of failed attempts, however, we have to question how long it might take to make much progress against cost growth and its companion problems through cut-and-try.

Some usual suspects can be dismissed from the lineup at once on the basis of strong alibis. These include:

Profiteering. Defense contractors are not noted for high profit rates, and executive compensation is not a major expense in this industry.

From the government’s perspective, the function of profits is to permit industry to raise the capital it needs to serve government needs. Contracting policy is shaped in various ways to minimize the levels of profits necessary, and analysis shows that in general this is achieved efficiently. Profitability could not be significantly lower without impairing industry’s ability to meet government needs (Arnold, 2008, pp. 13–15).

Lack of incentives to economize or reduce inefficiency. Throughout the history of American defense contracting, concerns have repeatedly been expressed that in the absence of immediate and direct competitive pressures at every stage, firms would lack incentives to economize (Holley, 1964). Close analysis by Arnold, McNicol, and Fasana (2009), however, showed that on the whole, government contracting officers make quite effective use of legally permitted contract incentives to motivate performance.

Experience in working within or close to defense industry firms and government acquisition organizations reveals many areas of apparent inefficiency or waste—ill-motivated or poorly qualified personnel, idle resources, deteriorated equipment, bureaucratic busywork, minor peculation, and a host of others. Yet on the whole, the experience is not noticeably different in nondefense industry. Where it has been possible to make more or less direct comparisons, they have revealed no systematic deficiency in defense-related efficiency (Besselman, Arora, & Larkey, 2000; Kelley & Watkins, 1998). The pattern in which a relatively small proportion of programs account for virtually all of MDAP cost growth cannot be explained by industry inefficiencies unless they are somehow specific to particular programs.

Requirements creep. Requirements changes do occur and they contribute to cost growth. But the cost data set used by McNicol (2005) adjusted for requirements changes; thus, they did not contribute to the pattern of cost growth seen in the Figure.

Technology risk. Another usual suspect is in fact more commonly implicated in major cost growth: excessive technology risk. Public Law 111-84 (Armed Forces, 2009) requires certification at the time of program initiation that “the technology in the program has been demonstrated in a relevant environment.” This corresponds to what the Department of Defense (2009b) defined as Technology Readiness Level Six (TRL 6). The Government Accountability Office (GAO), in its periodic assessments, regularly emphasizes technology readiness, which it cites as a major factor in determining the prevalence and seriousness of cost growth. Levels of technology maturity at program initiation have been rising in recent years, which the GAO sees as an encouraging sign for future control of cost growth (GAO, 2009, pp. 16–17).

But cost growth is by no means consistently a result of low technology maturity. The Expeditionary Fighting Vehicle (EFV) program is a notable example. More than a quarter of a century of focused technology development efforts preceded program approval in 2000, including the construction of a series of functional prototype vehicles. All but one of the program’s critical technologies met TRL 6, and the remaining one has not caused prohibitively expensive problems. Nevertheless, the engineering prototypes functioned so badly that testing had to be abandoned, and EMD had to be started over again. Planned procurement has been cut more than 43 percent, objectives for performance and reliability have been scaled back substantially, scheduled initial operational capability has been slipped by approximately 9 years, and the estimate of APUC has risen by 168 percent (DoD, 2008).

In the EFV as in many other high-growth programs, the fundamental problem is not technology per se but failure to work out and recognize in advance many of the implications of the design choices that were made at the time of program initiation. We can trace a high proportion of the problems in the current and former “leaders” in cost growth to variations on this theme. Program managers and engineers laid confident plans to achieve performance and schedule goals without recognizing what they truly involved. This can be clearly seen in a few published program case studies (Coulam, 1977; Whittle, 2010; Younossi et al., 2008), but other studies remain unpublished due to sensitivity.

The Origins of Flawed Plans

How can this be? How can experienced and well-qualified managers and engineers repeatedly fail to lay realistic plans? How can acquisition officials repeatedly overlook such faults, often bending or setting aside established policies to do so? Modern research in social and management sciences provides answers, involving patterns of behavior at both the individual and group level.

At the individual level, the key factor is the planning fallacy. This is a concept growing out of the work of Daniel Kahneman and Amos Tversky (1977, 1982). The phrase refers to the pervasive human tendency to hold “the conviction that a current project will go as well as planned even though most projects from a relevant comparison set have failed to fulfill their planned outcomes.” Controlled experiments have repeatedly validated the phenomenon (Buehler, Griffin, & Peetz, 2010).

Management and social scientists have explored the planning fallacy’s operations and implications specifically in business (Lovallo & Kahneman, 2003; Flyvbjerg, Lovallo, & Kahneman, 2003) and major infrastructure projects (Flyvbjerg, Garbuio, & Lovallo, 2009). Many of the problems they found in particular cases traced to faulty decisions related to the planning fallacy (Buehler, Griffin, & Peetz, 2010).

At the group level, the scenario these studies present as typical involves individuals and groups competing to secure adoption of their proposals for a new program. They are driven to make unrealistic promises in much the same manner as firms competing for contracts, as I have already argued. That is, the groups that make the most optimistic promises gain an advantage, so long as their optimism does not excite outright incredulity. Their optimism is fostered by their own planning fallacies, and once decision makers have bought into a proposed program, they too are drawn into the planning fallacy.

Explicit strategic deception may possibly be involved at one level or another, deliberately calculated to gain advantage over competing proposals (Flyvbjerg, Holm, & Buhl, 2002; LaBerge, 1982). But very unrealistic plans can come into being and gain approval without Machiavellian calculation, particularly in a cascade of multiple levels of decision with associated multiple layers of planning fallacy. In defense acquisition, my experience suggests that this is far more common than calculated deception.

The planning fallacy appears to be a given fact of the innate workings of human thought. It is extremely difficult to see it in ourselves, and the practically minded people who predominate in decisions regarding acquisition programs seem particularly resistant to such introspection. But we can see it outside of ourselves, if we are able to look dispassionately, and that offers an important clue about what might be done to mitigate its ill effects (Buehler et al., 2010).

Kahneman and his colleagues suggest what they call taking the outside view, or reference class forecasting, founded in a process of analyzing data from the results of prior programs or efforts that correspond as closely as possible to what is planned. Even though the correspondence is not exact, this procedure provides a more reliable guide to results than forecasting directly on the basis of detailed program plans (Flyvbjerg, 2008).
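
In practice, taking the outside view amounts to locating the new program within the empirical distribution of outcomes from a reference class of comparable past efforts and applying the uplift implied by a chosen confidence level. The sketch below illustrates that mechanic with an invented reference class of cost-growth outcomes and an invented point estimate; it is not a prescribed DoD procedure.

    # Reference class forecasting, in miniature: take the empirical distribution of
    # cost growth from comparable completed programs and apply the uplift implied
    # by a chosen confidence level to the new program's "inside view" estimate.
    # The reference-class values and the point estimate are hypothetical.
    import numpy as np

    reference_class_growth_pct = np.array(
        [0, 3, 5, 8, 12, 15, 20, 28, 40, 65, 90, 150])  # outcomes of past programs

    inside_view_estimate = 1200.0   # planners' point estimate, $M (hypothetical)
    confidence = 0.80               # desired confidence of not overrunning

    # Growth uplift at the chosen percentile of the reference class.
    uplift_pct = np.percentile(reference_class_growth_pct, confidence * 100)
    outside_view_estimate = inside_view_estimate * (1 + uplift_pct / 100.0)

    print(f"Uplift at the {int(confidence * 100)}th percentile: {uplift_pct:.0f} percent")
    print(f"Outside-view estimate: ${outside_view_estimate:,.0f}M")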

The Role of the CAIG

This sounds much like what the CAIG has been doing in a sophisticated and rigorous way for the past four decades, using what it terms the parametric method. As the CAIG’s first director described it, “The parametric approach does not rely on a detailed description of the ‘inputs’ to the system, but rather considers system ‘output’ characteristics such as speed, thrust, etc. Historical defense system cost experience is used to develop relationships between such output characteristics and system costs. These empirical relationships are then used to project a portion or all of the costs of a new system” (Srull, 1972).
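
As a small illustration of the mechanics of the parametric method (not of any actual CAIG cost estimating relationship), the sketch below fits a power-law relationship between a single output characteristic and unit cost by least squares on logarithms, then applies it to a new system; all of the data points are invented.

    # Parametric cost estimating in miniature: fit a power-law cost estimating
    # relationship (CER), cost = a * characteristic**b, to historical systems and
    # use it to project a new one. The data points are hypothetical.
    import numpy as np

    # Historical systems: output characteristic (e.g., empty weight, klb) and
    # unit cost (base-year $M).
    characteristic = np.array([10.0, 14.0, 20.0, 27.0, 35.0])
    unit_cost      = np.array([22.0, 30.0, 45.0, 61.0, 80.0])

    # Linear regression in log space yields the exponent b and the scale factor a.
    b, log_a = np.polyfit(np.log(characteristic), np.log(unit_cost), 1)
    a = np.exp(log_a)

    new_characteristic = 30.0
    projected_cost = a * new_characteristic ** b
    print(f"CER: cost = {a:.2f} * x^{b:.2f}")
    print(f"Projected unit cost for x = {new_characteristic}: ${projected_cost:.1f}M")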

In some cases, the CAIG may make early estimates using analogies with generally similar systems, but there too it seeks rigor through the use of structured and objectively evaluated selection of analogues. In either event, it is pursuing the “outside view,” as Tversky and Kahneman, and those who have followed them, have recommended (Buehler et al., 2010; Flyvbjerg, 2008).

Establishment of the CAIG was followed by a large, swift improvement in agreement between official estimates and actual costs, even though acquisition officials were not required to accept its recommendations and only rarely did in full (McNicol, 2005). Viewed from outside the CAIG, it seemed clear to me at the time that the knowledge that estimates were reviewed led to increased attention to cost estimating by program managers and the sponsoring organizations, and increased willingness to adopt the CAIG’s parametric methods. This was fostered by its active efforts to share its data and methods. Thus, the CAIG brought a measure of cooperative-competitive synergy to cost estimation.

Effective Measures

Impressive as it is, the record of the CAIG (and of its methods in other hands) has limitations. DoD treats CAIG estimates as sensitive management information and does not release them, but based on seeing many over the past four decades, it seems clear to me that they are usually more accurate than (and higher than) the Service estimates, but also sometimes significantly inaccurate. Unpublished case histories of some high-growth programs show costs growing well beyond even CAIG forecasts. Even a very intensive examination with full access has failed to find enough relevant data to permit a comprehensive statistical analysis of the CAIG’s historical accuracy, but does make it clear that there are incidents of substantial underestimation (McNicol, Tyson, Hiller, Cloud, & Minix, 2006).

Its authors remark:

The estimates prepared by cost estimators are crucially dependent on technical and programmatic assumptions over which they have little or no say. There are some gray areas; cost estimators should recognize—and provide corrections for in their estimates—some types of unrealistic program assumptions and some likely execution problems. But, without trying to fix the boundaries of these exceptions, it is clear that they are exceptions—cost estimators generally are not equipped to do engineering analyses of proposed programs or to assess the capabilities of potential contractors. (McNicol, 2005, p. 19)

The unpublished case studies suggest this as a very significant cause of serious underestimates. In most of these cases, it was possible to know that the technical assumptions were optimistic, and this was pointed out by at least some observers at the time. While no comprehensive survey has been conducted, in confidential interviews CAIG personnel have told me that in some cases they had reservations, but ultimately lacked a strong basis for questioning confident assertions by program managers or other official advocates. Thus, while no basis exists for assessing the incidence of such situations, we can be sure it is not zero.

This relates to what then-Deputy Secretary of Defense David Packard (1970) emphasized four decades ago: Cost growth is closely related to technical problems including schedule slippage, quality problems, and inability to meet baseline requirements (Flyvbjerg et al., 2003). Faulty initial engineering plans and concepts are not the root of all cost growth, but are involved in much of it.

These problems can be attacked by an approach comparable to that which the CAIG uses—taking an outside view, using reference class forecasting of technical factors as well as the costs that depend on them. The basic techniques for parametric analysis of engineering characteristics are well established and have been used by engineers for at least 250 years in the early design phases of systems of many kinds (Vincenti, 1990, pp. 138–141). They are a great deal like the techniques used by the CAIG in that they do not depend on highly detailed information about the system design or particular technologies. Those who apply them must have appropriate broad technical knowledge and judgment, but do not need deep expertise in the particular systems.
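
A comparable check on technical plans can be sketched in the same spirit. The example below regresses a key design outcome on top-level characteristics of past systems and compares a proposed design’s claim against the historical trend; the systems, figures, and claimed value are all hypothetical, and the technique shown is a generic least-squares fit rather than any specific DoD method.

    # Engineering parametric analysis in miniature: regress a design outcome (here,
    # vehicle empty weight) on top-level characteristics of comparable past systems,
    # then ask whether a proposed design's claim falls within historical experience.
    # All figures are hypothetical.
    import numpy as np

    # Historical systems: payload (tons), range (km), resulting empty weight (tons).
    payload = np.array([2.0, 3.0, 3.5, 4.5, 5.0])
    rng = np.array([300.0, 400.0, 350.0, 500.0, 450.0])
    empty_weight = np.array([9.0, 12.5, 13.0, 17.5, 18.0])

    # Least-squares fit: empty_weight = c0 + c1*payload + c2*range
    X = np.column_stack([np.ones_like(payload), payload, rng])
    coef, *_ = np.linalg.lstsq(X, empty_weight, rcond=None)

    proposed = np.array([1.0, 4.0, 600.0])   # proposed design: 4 t payload, 600 km range
    predicted = proposed @ coef
    claimed = 12.0                           # empty weight claimed by the program (tons)
    print(f"Historical trend predicts about {predicted:.1f} t; the program claims {claimed} t")
    # A claim far below the historical trend is a warning flag deserving early scrutiny.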

When one examines program development histories closely, as I often have, it becomes apparent that there are cases in which the problems were such that even thorough engineering parametric analysis might not have identified them, but many more in which it should have—if it were tried. Unless and until it is tried in a systematic way by competent personnel, there will be no way to be sure. But given the historical evidence regarding the value of engineering parametric analysis generally, together with the modern evidence regarding the importance of the “outside view,” it seems that a thorough trial is called for—and all the more so since the cost of such efforts is so small compared to the costs of even one badly conceived or executed program.



Author Biography

Mr. William D. O’Neil has more than 40 years of experience in defense acquisition, and has held executive acquisition-related positions in the Office of the Secretary of Defense, Lockheed Corporation, and the Center for Naval Analyses (CNA). He is presently a consultant dealing with acquisition issues for the Institute for Defense Analyses as well as a CNA Senior Fellow. He holds degrees from the University of California, Los Angeles, in mathematics and quantitative management science.

(E-mail address: w.d.oneil@pobox.com)


References

Arena, M. V., Leonard, R. S., Murray, S. E., & Younossi, O. (2006). Historical cost growth of completed weapon system programs (Report No. TR-343). Santa Monica, CA: RAND Corp.

Armed Forces, 10 U.S.C., Pub. L. 111-84, §2366b(a)(3)(D) (2009).

Arnold, S. (2008, Fall). Does DoD profit policy sufficiently compensate defense contractors? IDA Research Notes. Retrieved from https://www.ida.org/upload/research%20notes/rn_fall2008_profit.pdf

Arnold, S. A., McNicol, D. L., & Fasana, K. G. (2009). Can profit policy and contract incentives improve defense contract outcomes? IDA Paper P-4391 (Revised). Alexandria, VA: Institute for Defense Analyses.

Besselman, J., Arora, A., & Larkey, P. (2000). Buying in a businesslike fashion—and paying more? Public Administration Review, 60(5), 421–434.

Bolten, J. G., Leonard, R. S., Arena, M. V., Younossi, O., & Sollinger, J. M. (2008). Sources of weapon system cost growth: Analysis of 35 major defense acquisition programs (Document No. MG-670-AF). Santa Monica, CA: RAND Corp.

Buehler, R., Griffin, D., & Peetz, J. (2010). The planning fallacy: Cognitive, motivational, and social origins. In M. P. Zanna & J. M. Olson (Eds.), Advances in Experimental Social Psychology, 43(C), 1–62.

Cancian, M. F. (2010). Cost growth: Perception and reality. Defense Acquisition Review Journal, 17(3), 389–403.

Christensen, D. S., Searle, D. A., & Vickery, C. (1999). The impact of the Packard Commission’s recommendations on reducing cost overruns on defense acquisition contracts. Acquisition Review Quarterly, 6(3), 251–262.

Coulam, R. F. (1977). Illusions of choice: The F-111 and the problem of weapons acquisition reform. Princeton, NJ: Princeton University Press.

Department of Defense. (2008). Summary explanations of significant SAR cost changes as of Dec. 31, 2007. Department of Defense announces Selected Acquisition Reports [News release]. Retrieved from http://www.globalsecurity.org/military/library/news/2008/04/mil-080407-dod01.htm

Department of Defense. (2009a). Selected Acquisition Report (SAR) summary tables as of December 31, 2009. Retrieved from http://www.acq.osd.mil/ara/2009%20DEC%20SAR.pdf

Department of Defense. (2009b). Technology readiness assessment (TRA) deskbook. Washington, DC: Office of the Director, Defense Research and Engineering.

Department of Defense. (2010). Summary explanations of significant SAR cost changes as of Dec. 31, 2009. Department of Defense announces Selected Acquisition Reports [News release]. Retrieved from http://www.defense.gov/releases/release.aspx?releaseid=13425

Ferrara, J. (1996). DoD’s 5000 documents: Evolution and change in defense acquisition policy. Acquisition Review Quarterly, 3(2), 109–130.

Flyvbjerg, B. (2008). Curbing optimism bias and strategic misrepresentation in planning: Reference class forecasting in practice. European Planning Studies, 16(1), 3–21.

Flyvbjerg, B., Bruzelius, N., & Rothengatter, W. (2003). Megaprojects and risk: An anatomy of ambition. Cambridge: Cambridge University Press.

Flyvbjerg, B., Garbuio, M., & Lovallo, D. (2009). Delusion and deception in large infrastructure projects: Two models for explaining and preventing executive disaster. California Management Review, 51(2), 170–193.

Flyvbjerg, B., Holm, M. S., & Buhl, S. (2002). Underestimating costs in public works projects: Error or lie? Journal of the American Planning Association, 68(3), 279–295.

Flyvbjerg, B., Lovallo, D., & Kahneman, D. (2003). Delusions of success. Harvard Business Review, December, 121–122.

Government Accountability Office. (2009). Defense acquisitions: Assessments of selected weapon programs (GAO Report No. 09-326SP). Washington, DC: Author.

Holley, I. B., Jr. (1964). Buying aircraft: Matériel procurement for the Army Air Forces. Washington, DC: Office of the Chief of Military History.

Hough, P. G. (1992). Pitfalls in calculating cost growth from Selected Acquisition Reports (Document No. N-3136-AF). Santa Monica, CA: RAND Corp.

Kahneman, D., & Tversky, A. (1977). Intuitive prediction: Biases and corrective procedures (PTR-1042-77-6). McLean, VA: Decisions and Designs, Inc.

Kahneman, D., & Tversky, A. (1982). Intuitive prediction: Biases and corrective procedures. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Kelley, M. R., & Watkins, T. A. (1998). Are defense and non-defense manufacturing industries really all that different? In G. I. Susman & S. O’Keefe (Eds.), The Defense Industry in the Post-Cold War Era: Corporate Strategies and Public Policy Perspectives. Oxford: Pergamon Press.

LaBerge, W. B. (1982). Defense acquisition: A game of liar’s dice? Concepts: The Journal of Defense Systems Acquisition Management, 5(1), 56–63.

Lovallo, D., & Kahneman, D. (2003). Delusions of success: How optimism undermines executives’ decisions. Harvard Business Review, 81(7), 29–36.

McNicol, D. L. (2005). Cost growth in major weapon procurement programs (2nd ed., IDA Paper No. P-3832). Alexandria, VA: Institute for Defense Analyses.

McNicol, D. L., Tyson, K. W., Hiller, J. R., Cloud, H. A., & Minix, J. A. (2006). The accuracy of independent estimates of the procurement costs of major systems (P-3989). Alexandria, VA: Institute for Defense Analyses.

Merrow, E. W., Phillips, K. E., & Myers, C. W. (1981). Understanding cost growth and performance shortfalls in pioneer process plants (Report No. R-2569-DOE). Santa Monica, CA: RAND Corp.

New York Times (2011, March 27). Boeing 787 Dreamliner. Retrieved from http://topics.nytimes.com/topics/news/business/companies/boeing_company/787_dreamliner/index.html

Peck, M. J., & Scherer, F. M. (1962). The weapons acquisition process: An economic analysis. Boston: Division of Research, Graduate School of Business Administration, Harvard University.

Scherer, F. M. (1964). The weapons acquisition process: Economic incentives. Boston: Division of Research, Graduate School of Business Administration, Harvard University.

Sipple, V., White, E., & Greiner, M. (2004). Surveying cost growth. Defense Acquisition Review Journal, 11(1), 78–91.

Smirnoff, J. P., & Hicks, M. J. (2008). The impact of economic factors and acquisition reforms on the cost of defense weapon systems. Review of Financial Economics, 17(1), 3–13.

Srull, D. (Ed.). (1998). The cost analysis improvement group: A history. McLean, VA: Logistics Management Institute.

Srull, D. W. (1972). Parametric cost estimating aids DoD in systems acquisition decisions. Defense Management Journal, 8, 2–5.

Vincenti, W. G. (1990). What engineers know and how they know it: Analytical studies from aeronautical history. Baltimore, MD: Johns Hopkins University Press.

Whittle, R. (2010). The dream machine: The untold history of the notorious V-22 Osprey. New York: Simon & Schuster.

Younossi, O., Lorell, M. A., Brancato, K., Cook, C. R., Eisman, M., Fox, B., et al. (2008). Improving the cost estimation of space systems: Past lessons and future recommendations (Report No. MG-690-AF). Santa Monica, CA: RAND Corp.
