By Samuel Mark Borowski
At least four times since World War II, competitive prototyping has been encouraged, if not mandated, as a preferred approach to major systems acquisition in the Department of Defense. Its repeated encouragement is due in part to its description as a best practice by organizations like the Government Accountability Office, RAND, and specially formed task forces like the Packard Commission. In most instances, competitive prototyping is presented as a tool for stoking creative thought, improving decision-making, and producing better acquisition outcomes. In other instances, its value has been questioned. In practice, competitive prototyping has not always delivered on its promises. Its mixed results have been attributed in part to widespread confusion over the meaning of terms and over how prototyping should be pursued on a competitive basis. Building off lessons learned, this paper provides an overview of prototyping accompanied by a description of how competitive prototyping has been, and could better be, practiced within the Department of Defense. The terms “prototype” and “prototyping” are defined, and an approach to competitive prototyping is explained. Law and regulation make brief appearances, but both give way to recorded experience. Throughout, lessons learned, best practices, and other considerations are highlighted to better position the Defense Department to implement the competitive prototyping requirements of the Weapon Systems Acquisition Reform Act of 2009.
The debate over prototyping reaches back over half a century—not so much over whether prototyping is good, but over when it provides value. At Milestone C, prototyping as a prerequisite to a low-rate initial production decision is well accepted. The large commitments of capital that accompany a production award warrant some assurance that the technology to be produced will deliver as promised. Prototyping provides that assurance. At Milestones A and B, however, prototyping has not gained much traction, especially when cheaper alternatives seem to be available. Paper competitions coupled with systems analysis, models and simulations, and other estimation techniques have been the preferred alternative at these earlier stages of the life cycle. These methods are thought to be both cheaper and less time consuming than prototyping, and therefore more cost effective. The critics say otherwise.
Critics ask: if prototyping is good enough to support a production decision, why not use it earlier to justify a formal program start at Milestone B or a comparison of alternatives at Milestone A? Time and time again, they say, paper competitions and capability estimates have shown major systems acquisition to be plagued more by the unknown risks of systems development than the known ones. Experience with the F-111 and C-5A aircraft illustrates this point: all the analysis in the world cannot reveal what one does not know. Prototyping can.
The response to these criticisms has been that, while prototyping may provide value, there is too much change early in the life cycle to make prototyping worthwhile (Drezner & Huang, 2009, pp. 11–12). Changes in technology, performance objectives, and operational concepts prior to Milestone B marginalize the value of prototyping in the early stages of the life cycle. Besides, the response goes, without completing a detailed design, prototyping can provide little knowledge beyond what other acquisition techniques provide. And going through detailed design prior to Milestone B is just setting oneself up to do it all over again in Engineering and Manufacturing Development. The debate, therefore, is over whether prototyping early in the life cycle can ever be cost effective.
A similar debate surrounds the use of competition. All would agree competition is good; not all would agree it is always sensible. Competition for competition’s sake has never been the goal. The goal is to get a better value. Competition may be a means to this end, but, in the defense market, inviting more competition invariably entails more costs. When these costs exceed their expected returns, competition no longer makes sense. As with prototyping, then, the debate is over how competition should be approached so that it provides enough value to warrant its costs.
Recently, the debates over early prototyping and competition have converged in the context of the Weapon Systems Acquisition Reform Act of 2009 (“WSARA”). WSARA mandates competitive prototyping for major systems acquisitions up to Milestone B and compels it to be a continued consideration throughout the life cycle. In some ways, competitive prototyping’s resurgence as a part of WSARA should be no surprise. In the last fifty years, the hallmark of competitive prototyping’s ascendance has been the threat of shrinking defense budgets. With shrinking defense budgets on the horizon and sequestration looming, this is no less true today. But its prevalence as a means for effective reform is counterintuitive. Competitive prototyping not only requires more development dollars up front, it takes more time, and its success in the Department of Defense has been mixed.
The issue facing the Defense Department is how to deal with the additional costs of competitive prototyping so that better acquisition outcomes can follow. Fortunately, the lessons from previous periods of competitive prototyping reforms provide some clues. They suggest both how prototyping can remain cost effective and how competition can be sensibly pursued. Reintroducing these lessons and building from them in ways applicable to today’s acquisition environment is the first step to implementing WSARA’s competitive prototyping reforms. It is also the first step towards obtaining better acquisition outcomes. After all, competitive prototyping does not guarantee such outcomes will follow; it only makes them possible. The goal is to make them possible using fewer dollars than before.
While competitive prototyping can be more valuable than paper competitions, it can also merely confirm what is already known. The key to uncovering more value and making prototyping more cost effective lies in understanding what is meant by the terms “prototype” and “prototyping.” Despite fifty years of intermittent prototyping in the Department of Defense, settled definitions for these terms have not yet emerged. Conceptually, they are easy to comprehend if not always to explain. A dictionary can capture the terms’ everyday meaning, but no reference explains how they relate to the Defense Department’s acquisition process with its various milestones and decision points. The leap from everyday practice to the highly specialized Defense Department procurement process is too great. In the acquisition lexicon, there is a void.
In the past, this void in terminology has divided the policy of competitive prototyping from its practice in ways that have frustrated realization of its promise (Drezner, 1992, p. 2; Reed et al., 1994, p. 22). Practitioners have not understood how prototyping should be approached. Policy-makers have struggled to organize principles around prototyping from which better outcomes can emerge. To meet the challenges of WSARA, this void should be filled. Settling on definitions for the terms “prototype” and “prototyping” in a way that instructs those who make policy as much as it guides those who must build a prototype is the first step in doing so.
Prototypes Are Test Articles.
Whereas paper studies estimate a technology’s capabilities, prototyping demonstrates those capabilities through testing. Test articles are designed, constructed, and tested to demonstrate the capabilities of some technology or system. In its simplest terms, the test article is the prototype, and as a test article, it can take many forms and represent various states of maturity depending on the aims of the test (Office of Management and Budget [OMB], 2006, p. 42). Whether the test article represents a concept, subsystem, or end item that is full scale, fully capable, or something much less mature, all are forms of prototypes (Tyson et al., 1991, p. 13). The process of using these test articles to demonstrate capabilities is the practice of prototyping (Drezner & Huang, 2009, pp. 4–5).
Prototyping’s emphasis on technology demonstration is one reason it has been popular during periods of falling defense budgets. With fewer procurement dollars to spend, there is less appetite for risky expenditures on unproven technologies. To warrant greater investment, technology must prove itself, and more than just in operational terms (National Research Council, 2001, p. 57). It also must prove to be affordable. As the President’s Blue Ribbon Commission on Defense Management, or Packard Commission, famously put it, prototyping “should allow us to fly—and know how much it will cost—before we buy” (President’s Blue Ribbon Commission on Defense Management [Packard Commission], 1986, p. 57).
But knowing what to fly to justify what to buy has been a recurring difficulty. The prevailing wisdom has vacillated between prototyping something that is production representative and something that is less sophisticated (Defense Science Board, 1978, p. 20). Of the two, prototyping a production representative test article is the more conservative approach. Building production representative prototypes in advance of every major program start allows a full understanding of a technology’s costs and benefits. With a production representative approach to prototyping, risks can be contained, fixed-price contracts can follow, and programmatic success becomes more likely. Ostensibly, these are the goals of every prototyping and development effort as it nears production (OMB, 2006, p. 47). Rarely, however, is such an approach cost effective, especially on a competitive basis prior to Milestone B (Defense Science Board, 1978, p. 53).
Prior to Milestone B, prototyping requires one to be selective, and being selective is where the benefits and difficulties of prototyping lie. It may not be cost effective to build a production representative prototype prior to Milestone B, but building something less sophisticated may be. Whether it is or not depends on whether one can selectively design, construct, and evaluate a prototype in ways that provide more reliable information than paper studies and analysis can provide. The key to remaining cost effective is to invest no more capability in the prototype than is required to further the prototype’s primary purpose (Drezner & Huang, 1991, p. 19). The key to making prototyping more reliable than paper studies is to target those capabilities paper studies struggle to estimate accurately. Perfecting both these aspects of prototyping in a test article of limited capability is extremely difficult (Reed et al., 1994, p. 8). Doing both, however, is essential to realizing a prototype’s full potential and serving its ultimate end: to generate information and guide future decisions (Drezner, 1992, p. vi).
Prototypes Guide Decisions.
The Defense Department’s multi-phased acquisition process has many decision points, each corresponding to an individual phase. At different decision points, the degree and types of knowledge required to support a particular decision vary. But having sufficient knowledge at each point is essential to enabling better acquisition outcomes (U.S. Government Accountability Office [GAO], 2011, March, p. 175). Where insufficient knowledge exists, resources are committed when not enough about the technology is known. Technical risk is underestimated; cost increases and schedule slips follow (GAO, 2008, September, p. 2). This has been the downside of basing decisions solely on paper studies. They tend to underestimate what is not already known.
Prototyping enables better acquisition outcomes by improving the reliability of available information. Prototyping injects an early dose of realism into the assumptions and conclusions at the core of previous studies and analysis, thereby making them more useful (Reed et al., 1994, p. 9; Tyson et al., 1991, pp. 33–34). Realism comes through demonstrated capabilities. As more capabilities are demonstrated, more becomes known, and the more justification there is for the decisions made (GAO, 2001, October, pp. 7–8). But the more capabilities are added, the more costs will be incurred, and the more closely one must evaluate whether the information being provided is worth the extra costs. There is a line where prototyping’s costs begin to exceed its returns (National Research Council, 2011, p. 36). For prototyping to be a productive exercise, it must stay on the positive side of that line. In practice, this requires prototyping with a particular end in mind, investing only in activities that support this end, and then using the information that results to chart a better course.
Charting a better course through the early stages of the acquisition life cycle does not require all the capabilities of a final system to be embedded in a prototype. A production representative prototype at Milestone B is not only overkill, it borders on waste (Defense Science Board, 1978, p. 53). A prototype need have no more capability than is necessary to support the next series of decisions (Defense Science Board, 1978, p. 88). Ensuring the prototypes are more valuable than paper studies, though, requires that certain capabilities be targeted. Namely, prototypes should target the areas where paper studies are weakest: areas of high technical risk that are essential to system success (Drezner, 1992, p. 9).
This targeting is essential to uncovering the unknowns that plague acquisition programs based on paper and to making prototyping worthwhile. It is also essential to reducing risk in advance of the next phase and positioning an acquisition program to capture efficiencies later on (Tyson et al., 1991, pp. 5, 13). These benefits make prototyping more cost effective and more desirable than limiting oneself to paper alone. The Air Force’s Advanced Tactical Fighter program and the Navy’s A-12 attack aircraft program provide contrasting examples of these dynamics (Tyson et al., 1991, p. 33).
During the Advanced Tactical Fighter’s prototype phase, a number of fixes for the YF-22 prototype were identified early and incorporated at lower cost as part of the next phase. The Navy’s A-12 program took a different approach; its early system design was based almost entirely on paper. As Full Scale Development ramped up—the predecessor of today’s Engineering and Manufacturing Development—a number of technical problems emerged that doomed any hope of successfully implementing the paper design. To a certain degree, the Advanced Tactical Fighter program encountered comparable problems, but not all were technical ones. Problems with funding, work sharing among contractors, and an unstable industrial base hindered efforts to capitalize on promised efficiencies (Younossi, Stem, Lorell & Lussier, 2005, pp. 13–21). Thus, while the Advanced Tactical Fighter program enjoyed a successful prototype phase, it shows how even a strong start can be overwhelmed by other issues down the road (Drezner & Huang, 2009, pp. 17–18). Prototyping may enable better acquisition outcomes, but it does not guarantee they will follow.
The goal with prototyping is to make better outcomes possible, and demonstrating areas of high technical risk is essential to reaching this goal (National Research Council, 2011, p. 130). Demonstrating areas of high technical risk is also essential to making prototyping more cost effective. When these areas are demonstrated through prototyping, problems can be addressed early, when expenditure rates are lower, and without risking the success of the next phase (Tyson et al., 1991, p. 5). In development, problems always emerge, and when development is based solely on untested analysis and estimates of a design, problems tend to emerge later in development, when expenditure rates are higher. Prototyped programs encounter similar problems, but the problems tend to be identified earlier and can be fixed more cheaply, as in the case of the YF-22. Capturing this efficiency, an example of cost avoidance, bolsters prototyping’s cost effectiveness. Capturing enough such efficiencies that the extra development dollars invested in prototyping are recouped later is what makes prototyping worthwhile (Defense Science Board, 1968, p. 6; Defense Science Board, 1993, p. 13). Sometimes these efficiencies result in reduced cost; most of the time they result in reduced risk.
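The cost-avoidance logic described here can be restated as a simple break-even condition. The formalization below is an illustrative sketch only; the symbols are introduced here for convenience and are not drawn from the cited sources:

\[
C_{\text{proto}} \;<\; \sum_{i} p_i \left( C_i^{\text{late}} - C_i^{\text{early}} \right)
\]

where \(C_{\text{proto}}\) is the added cost of the prototype phase, \(p_i\) is the probability that problem \(i\) surfaces during prototyping rather than later in development, and \(C_i^{\text{late}} - C_i^{\text{early}}\) is the cost avoided by fixing problem \(i\) early instead of late. On this view, prototyping remains cost effective only while the expected avoided costs on the right exceed the prototyping investment on the left, which is why targeting areas of high technical risk, where both \(p_i\) and the avoided costs are largest, matters so much.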
The Air Force’s Close Air Support program, the program that led to the highly successful A-10 aircraft, provides an example of prototyping’s ability to reduce risk and avoid costs. During flight test, the designers of one prototype identified a flaw in wing design, while the designers of the other prototype realized the benefits of one critical technology were not worth its costs (Smith et al., 1981, pp. 49, 56, 58). Fixes were identified and adjustments made so that, moving into the next phase, risks for both designs were reduced in ways that also avoided costs. In the testing of both designs, the prototypes served as risk reduction tools.
When areas of high technical risk are not addressed through prototyping, the exercise is unlikely to reduce risk or to yield much gain. During early development of the Army’s Brilliant Anti-Armor Submunition (“BAT”), for example, prototypes were constructed and tested with the highest technical risk components excluded from the design (Reed et al., 1994, p. 1). When the program moved into the next phase, those components became the major risk areas. Without demonstrating the areas of high technical risk that were essential to system success, the prototype’s ability to reduce risk was marginalized. As an acquisition strategy, prototyping did not provide much value.
For any given prototype used within the Defense Department’s acquisition life cycle, the areas of highest technical risk appropriate for demonstration will vary. A prototype need only provide enough sophistication to address those risks that are most relevant to the next series of decisions (Drezner & Huang, 2009, p. 5). These risks tend to vary by phase of the acquisition life cycle. Early in the life cycle, areas of high technical risk relate to technology development. As one nears Milestone B and moves into Engineering and Manufacturing Development, risks associated with systems development—such as risks in the areas of integration, manufacturability, producibility, and operational suitability—come to the fore (National Research Council, 2011, p. 131). As such, a prototype’s maturity should vary depending on where it falls within these phases (Office of Technology Assessment [OTA], 1992, June, p. 61). What this means as a matter of acquisition practice is that, in terms of reducing risk through technology demonstration, all prototypes are not created equal. It also means that not all risks are suitable for reduction through early prototyping. Some risks, like those appearing later in the life cycle, simply cannot be reduced without a prototype resembling the final design.
Regardless of the risks that may or may not be reduced by a particular prototype, not to be lost is the fact that all prototypes produce information. This information can and should be used to guide a full range of decision-making, from decisions occurring in the context of a specific acquisition to those on which the acquisition is based (Drezner, 1992, p. 4; Drezner & Huang, 2009, p. 9). In the acquisition community, for instance, prototyping can assist in determining whether the benefits of a new technology outweigh its risks, and thus warrant further investment (Packard Commission, 1986, pp. 55–56). Prototyping can also be useful for evaluating the merits of a particular design approach, or in the science and technology community for guiding the transition of technology out of the laboratory (Defense Science Board, 1987, p. 46). Prototypes can also be used in the requirements community to evaluate operational concepts and needs (Smith et al., 1981, p. 37). The information prototyping provides is valuable; all of these communities should use it.
Prototyping Leads to Change.
The ability of a wide community of users to capitalize on the knowledge prototyping provides is another benefit of prototyping, but it does not always result in efficiency. In the course of acquisition decision-making, some degree of change is expected to result from prototyping. Indeed, change is what prototyping is all about (Perry, 1972, p. 7). When acquisition decisions are based solely on paper studies, one would expect change to be equally inevitable. It is. The difference is that with paper studies, the need for change is not recognized until greater capital investments have been made and major funding committed. At that point, it becomes more costly to undo what has been done (Smith et al., 1981, p. 58).
Prototyping allows resources to be committed incrementally until the merits of a technology are better understood. When the merits are not there, or are simply not worth the costs, prototyping gives reason to change course (Perry, 1972, p. 7; Reed et al., 1994, p. 2; Tyson et al., 1991, p. 11). In this way, by allowing for change, prototyping provides a hedge against uncertainty (Drezner, 1992, p. 8). Sometimes the uncertainty lies in a technology’s maturity. Sometimes it lies in something more fundamental, such as an operational concept, requirement, or threat. In each case, prototyping provides an opportunity to change as more of what was uncertain becomes known (Drezner & Huang, 2009, p. 11; Drezner, 1992, pp. 8, 14, 74).
But in providing the opportunity to change, prototyping has a weakness. Too much change marginalizes the value of prototyping, making it no more useful than the cheaper, less reliable paper studies it supplants. The more a prototype resembles a system’s final configuration, the less change it can tolerate and still provide the expected returns. When the prototype and final configuration item closely resemble each other, minor changes can be accommodated. Major changes, such as those associated with an operational concept or a fundamental approach, cannot (Drezner & Huang, 2009, p. 20).
The experiences of the Air Force’s strategic airlift C-X program underscore this point: too much change in operational concept can marginalize the value of prototyping. The predecessor to the C-X program was the Air Force’s Advanced Medium Short Take-Off and Landing Transport (“AMST”) program. The AMST program constructed and tested two full-scale representations of aircraft that emphasized tactical airlift. Over time, however, the Air Force came to see strategic airlift as more important and cancelled the AMST program in favor of the strategic-oriented C-X. The C-X program ultimately led to the development of the C-17 Globemaster III cargo aircraft, but the fundamental change in operational concept between the two programs marginalized the value of the full-scale AMST prototypes (Battershell, 1995, p. 220). The expected benefits of prototyping never emerged. The C-17, for all its low-risk technology and preceding AMST prototype phase, still encountered significant challenges going forward (Battershell, 1995, p. 223).
The Air Force’s first-generation Advanced Medium-Range Air-to-Air Missile (“AMRAAM”) program provides another example of too much change, while at the same time illustrating the perils of too little (Tyson et al., 1991, p. 34). In its first generation, AMRAAM was to provide a capability similar to another air-to-air missile of the day, just with shorter range and in a smaller package. To meet the desired form factor, the prototypes used solid-state electronics instead of the conventional tube technology found in its predecessor. The technology could not perform, and when the program entered Full Scale Development, the design reverted to the already proven tube technology. In making the transition, program personnel relied on optimistic assumptions that the program could still meet the desired date for initial operational capability, without acknowledging the significant step back they were taking. AMRAAM essentially started over, using paper studies to support those optimistic assumptions. Like the C-X program, it suffered significant delays and cost overruns. Requirements should have been changed to reflect the new start (Tyson et al., 1991, p. 21). They were not, and difficulties followed.
Hence, while prototyping allows and even encourages some amount of change, a limit must be imposed if prototyping investments are to be preserved (National Research Council, 2011, p. 130). There is no bright-line rule here. Rather, the scope of allowable change appears to vary in inverse proportion to the scale and sophistication of the prototypes. When the change is significant and the prototypes relatively mature, proceeding without a new prototype phase may be no different than starting off with a paper design (Drezner & Huang, 2009, p. 12; Smith et al., 1981, p. 39). This is one reason the XV-15 tiltrotor prototype was not an effective precursor to the MV-22 Osprey. Though the technology’s feasibility had been demonstrated, its ability to meet an operational need had not, especially the advanced operational needs of the Marine Corps. Building the MV-22 Osprey on the basis of the XV-15 prototype was little more than starting from a paper design (Whittle, 2010, pp. 108–115).
With prototyping, change may be expected, change may even be encouraged, but not all changes can be accommodated. The amount of allowable change has to be limited to ensure that the prototypes, as a tool for enabling better acquisition outcomes, survive the decision-making process.
“Prototype” and “Prototyping” Defined.
A test article used for gathering knowledge to guide decision-makers who instigate change—these are the primary attributes of a prototype. Couple them with the general guidelines provided above and the following workable definitions for the terms “prototype” and “prototyping” emerge:
- A “prototype” is a test article designed to demonstrate areas of high technical risk that are essential to system success. A prototype need not be a full system; in scope and scale, it is tailored to support a series of decisions, and as such, can represent a concept, subsystem, or end item according to the decisions to be made. Rather than reflect the final design, prototypes are built with the expectation that, as decisions are made, change will follow.
- “Prototyping” is the practice of testing prototypes, of appropriate scope and scale, for the purpose of obtaining knowledge about some requirement, capability, or design approach. The knowledge obtained informs a decision-making process the output of which results in some degree of change. The degree of allowable change is bounded, in inverse proportion, by the scope and scale of the prototype.
These definitions are variations of ones first introduced by RAND (Drezner, 1992, p. 9). Over the years, they have been refined in different ways to emphasize various aspects of prototyping believed to be important. The refinements incorporated here do the same. They emphasize aspects of prototyping that, in the multi-faceted decision-making process of today’s acquisition environment, tend to be missed. Also emphasized are aspects of prototyping that preserve it as a cost-effective alternative to cheaper, less reliable acquisition methods.
Unlike other definitions, these definitions do not exclude production representative prototypes. Rather, they embrace them with the caveat that, with prototypes of a production representative caliber, the scope of allowable change is much smaller, and the range of available decisions narrower, than for less sophisticated prototypes appearing earlier in the life cycle. Among such less sophisticated prototypes are “demonstrator prototypes” and “advanced development prototypes,” both of which appear between Milestones A and B.
For practitioners charged with prototyping in advance of Milestone B, the distinctions between these two types of prototypes have been confounding (Defense Science Board, 1978, p. 54). Both are properly considered prototypes, but each resides on a different side of the development divide. Demonstrator prototypes are more closely associated with technology development (OTA, 1992, p. 54); advanced development prototypes are more closely associated with systems development (OTA, 1992, p. 60). On visual inspection, their differences are not intuitively obvious. But the latter is much more sophisticated and better suited to informing Milestone B. The former is much less so, but is also much cheaper to design and build. Knowing when to use the latter rather than the former requires a great deal of judgment. Better policies about the two would also provide a great deal of help, which is part of the reason these definitions and their refinements are reintroduced here.
Their aim is to provide practitioners a rudimentary approach to prototyping so that it remains a viable alternative to cheaper methods. They also provide a baseline upon which more thorough policies can build. More thorough policies are needed if better acquisition outcomes are to follow. Having working definitions is just a first step in that direction.
Competitive Prototyping and its Dilemmas.
When compared to paper studies, prototyping presents an early dilemma: it takes more time and it costs more money, at least in the short term (Defense Science Board, 1993, p. 13). As such, prototyping is justified only to the degree it allows for better materiel solutions or future savings. The objective is to spend more in development so that better systems can be fielded more quickly and for less overall cost (Packard Commission, 1986, p. xxiii). In practice, prototyping should be a leveraged investment (Tyson et al., 1991, p. 5). The dilemma is to prototype in ways that provide a positive return.
When prototypes are evaluated on a competitive basis, the early dilemma of prototyping is exacerbated. Rather than fund one prototyping effort, the Government must fund two or more. Added to the tough decision of determining what to prototype are the tougher decisions of determining how competition will be pursued and the resulting prototypes evaluated. In most cases, the evaluation will support a future down-select decision, but until complete, the funding and management of multiple contractors puts incredible pressure on the procurement budget. Rarely does the addition of a contractor team come with a proportional increase in dollars.
Consequently, for competitive prototyping to be a leveraged investment, certain trade-offs are necessary. Development dollars must be allocated to support the right mix of activities and appropriately controlled so that a positive return might result. Competition, meanwhile, must be harnessed in a way that allows better performing systems to emerge at lower costs. In competitive prototyping, decisions are no longer limited to just those associated with the prototype. With competition, there are additional challenges.
Dealing with Budgetary Pressures.
If prototyping is more costly at the front end, then prototyping on a competitive basis is costlier still. Not only must multiple prototypes be designed and tested as part of a competition, but with competition, the prototypes are usually more advanced (Tyson et al., 1991, p. 30). In an austere funding environment where resources are scarce, this additional demand places enormous pressure on the research and development budget. It also creates significant risk for industry in ways that undermine the value of prototyping and hinder competition. The challenge is to find ways to channel these budgetary pressures effectively, not only for the success of the prototypes, but also for the success of the Defense Department.
To channel budgetary pressures effectively requires dealing with risk effectively by allocating resources to what matters most. As Milestone B nears, this means devoting more resources to developing a military system and less to developing technology (Reed et al., 1994, p. 44). Historically, the aggressive performance goals of military systems have frustrated attempts to allocate resources this way. To meet performance objectives, immature technology is embraced on the hope that it can be matured at the same time a system is developed (GAO, 2006, September, p. 7). Rare is the program that can successfully develop a system and mature technology at the same time (GAO, 1999, July, p. 3; GAO, 2006, September, p. 22). WSARA seems to have taken this lesson to heart by emphasizing more mature technology at Milestone B. Using more mature technology frees resources for purposes of systems development, which is no small feat when the system is for military use, even when the technology is relatively mature (Battershell, 1995, p. 220).
Dealing with risk effectively requires making room for more mature technology. In practice this means introducing flexibility in performance objectives, or setting more modest ones, so that the risk profile will align with available funding. Flexibility creates an outlet by which the trade-offs can be made to release some of the budgetary pressure. It also obviates less desirable means by which the pressure can be released: through contributions from contractor independent research and development budgets (Defense Science Board, 1978, p. 53).
When competition is introduced to prototyping, tapping contractor independent research and development budgets is incredibly enticing. By tapping these funds, the total budget for prototyping can be increased, more risk can be accommodated, and more performance can be chased. In the case of the Advanced Tactical Fighter program, the total budget for its prototype phase is reported to have been $5 billion (OTA, 1992, June, p. 71). At least $2 billion came from contractor independent research and development funds (Gertler, 2009, p. 3). When compared to competitively prototyped programs where the performance goals were much more modest, the difference in total costs is striking. For the Close-Air-Attack-Support and Lightweight Fighter prototypes, the total budget for prototyping was more than an order of magnitude less (Tyson, Nelson, Om & Palmer, 1989, p. VIII-2).
Competitive prototyping’s ability to absorb large amounts of private capital is not unique to aircraft procurements (OTA, 1992, June, p. 64). It is a by-product of competition. It is not irrational for contractors to take a considerable amount of financial risk in development in hopes of winning a production award. Development is not thought of as a profitable venture in military procurement; production is (Perry, 1972, p. 10). Thus, there is an incentive for contractors to take large financial risks in development to win the more lucrative production contract likely to follow. For those in the Defense Department always on the search for better performance, there is an equal incentive to let them do so.
Independent research and development budgets are the primary resource a contractor has for improving its competitive posture in a prototyping phase (U.S. General Accounting Office, 1974, March, p. 5). When that is not enough, pooling resources through teaming, such as occurred in the Advanced Tactical Fighter program, is another means of responding to a prototype’s aggressive demands (Defense Science Board, 1978, p. 53; Younossi et al., 2005, p. 17). Over the short term, teaming and collaboration among development teams can be a good thing (OTA, 1992, June, p. 65). It can spur innovation and increase the flow of ideas. But when few can afford to compete and when failing to win the competition threatens the viability of the industrial base, neither the Defense Department’s nor industry’s interests are well served (Defense Science Board, 1978, p. 53). Managing and controlling these effects is a part of competitive prototyping.
This is not to say contractors should not share in the cost of a prototype’s development, nor is it to say prototyping is bad for industry. In the 1990s, a robust strategy of prototyping and limited production was presented as a means for preserving the industrial base after the Cold War (OTA, 1992, June, p. 51). Independent research and development, meanwhile, is a valuable tool for expanding the capabilities of a military system. But the pressures of competition, the inescapable lure of a multi-billion dollar defense market, and the combination of aggressive performance goals, immature technology, and inadequate funding, can create a swirling vortex for private capital from which some competitors may never emerge. It may also create an environment that some competitors purposely avoid, all of which is counterproductive to preserving the industrial base and future chances of competition.
At the same time, when too much private capital contributes to the prototype’s development, the prototype’s value as a tool for reducing risk and providing better information is limited (Reed et al., 1994, p. 53). At the extreme end, when private funding is wholly responsible for a prototype, the resulting test article has been described as a tool best suited for marketing, not uncovering risks (Drezner, 1992, p. 50). Defense Department procurements do not operate at this extreme. But at the extreme lies the risk of what may happen when a prototyping effort is inadequately funded and controlled. For all its success, the Navy’s Joint Standoff Weapon (“JSOW”) program encountered this risk as a result of its competitive prototyping effort (Reed et al., 1994, p. 12). For JSOW, much of the prototyping was done outside of the contract and on the contractors’ dime. The result: much of the technical risk was carried into Engineering and Manufacturing Development. The prototypes, though useful to some degree, were described as being more suitable for attracting media attention than positioning the program for the next phase.
To counter these effects, the Government should absorb most of the cost pressures by funding the majority of the competitive prototyping effort. This only makes sense given that the purpose of the prototypes is to inform the Government’s process of decision-making. After all, the process is a treacherous one. Requirements change. Programs are cancelled. Funds are put on hold. Delays follow. All these things are likely outcomes in major systems acquisition. When they occur, they limit the ability of prototyping investments to provide a positive return. They also are all traceable to the Government. If the Government is responsible for marginalizing a prototyping investment, then the Government should bear the costs for doing so.
This is not only fair; it also introduces discipline to the decision-making process and makes it more likely that resources will be allocated in ways that provide the best return. With only a limited supply of resources, the Government can decide how best to align the risk profile with the budget. Introducing flexibility in performance requirements so trade-offs can be made is one way of doing this. Adopting an austere development environment that focuses, with laser-like intensity, on the objectives of the prototyping phase is another (Tyson et al., 1991, p. 19). Austerity controls costs and allows resources to be allocated to the most important activities, such as those that complement the practice of prototyping (Drezner & Huang, 2009, p. 19-20). Traditional forms of systems analysis and models and simulations, for example, are complementary to prototyping because they extend the range of knowledge prototyping provides (Drezner & Huang, 2009, p. 6). Rather than eschewed, such activities should be pursued.
In addition to allocating resources effectively, if the Government is to garner a positive return on its prototyping investments, its financial commitment cannot be open-ended. A sensible cap must be placed on the costs of a prototyping phase to preserve the promise of some expected return. Some rules of thumb—such as 25% of estimated Engineering and Manufacturing Development costs, 10% of estimated total acquisition costs, and 5% of estimated life-cycle costs—have been proposed (Tyson et al., 1991, p. 38). To make these limits meaningful, they have been accompanied by suggestions to contract for prototypes on a fixed-price basis (Smith et al., 1981, p. 22-23). Contracting for competing prototypes on a fixed-price basis without absorbing large amounts of private capital, in turn, counsels contracting on a best-efforts basis (Drezner, 1992, p. 54). Reining in production expectations is also important. Some have suggested that when prototyping competitively, the program’s future should be left wholly in doubt: no production run should be promised or expected (Drezner & Huang, 2009, p. 20). It may also be best to limit how much private capital can be contributed to the prototyping phase. All these steps limit how much private capital can shape the prototypes’ baseline and obscure the bottom line.
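To make the arithmetic of these rules of thumb concrete, they can be applied to a hypothetical program. The sketch below is illustrative only; the function name and all dollar figures are invented for this example, not drawn from the sources cited above.

```python
# Illustrative sketch: applying the three proposed caps (Tyson et al., 1991)
# to hypothetical cost estimates. All figures are invented for illustration.

def prototyping_caps(emd_estimate, acquisition_estimate, lifecycle_estimate):
    """Return candidate cost caps for a prototyping phase (same units as inputs)."""
    return {
        "25% of EMD estimate": 0.25 * emd_estimate,
        "10% of total acquisition estimate": 0.10 * acquisition_estimate,
        "5% of life-cycle estimate": 0.05 * lifecycle_estimate,
    }

# Hypothetical program estimates, in millions of dollars.
caps = prototyping_caps(emd_estimate=2_000,
                        acquisition_estimate=10_000,
                        lifecycle_estimate=30_000)
for rule, cap in caps.items():
    print(f"{rule}: ${cap:,.0f}M")

# A conservative planner might treat the smallest of the three as the binding cap.
binding_cap = min(caps.values())
```

As the hypothetical figures show, the three rules can yield very different ceilings, so which cap binds depends heavily on a program's cost structure.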
These are all ways in which resources might be effectively allocated in a competitive prototyping effort so that the prototype’s status as a risk reduction tool can be preserved. Also preserved is the likelihood of capturing a positive return on prototyping investments without distorting capital allocations in the industrial base. When overzealous commitments of private capital are averted, more productive allocations can be made across the industrial base (Defense Science Board, 1978, p. 53). This in turn preserves the competitive landscape of the military arms market, and, over the long term, this is a net benefit for the Department of Defense. After all, competition may offer plenty of value, but it relies on having enough players to take the field.
With the Government bearing most of the cost pressures for competitive prototyping, other temptations exist besides the allure of private capital. Competition may be a means by which better performing systems can be obtained at lower cost, but competition adds its own costs (OMB, 2006, p. 21). In competitive prototyping, the costs come in the form of having to build two or more prototypes instead of one. The temptation is to dispense with competition on the grounds that its benefits will not outweigh its costs. Then, the extra resources competition will require can be redirected to other priorities, such as attaining better performance goals. The challenge is to approach competition creatively so competitive pressures can yield better results. This requires creativity in how objectives are framed, how the benefits of competition are tallied, and how competition is pursued.
On the Defense Department’s acquisition ledger, benefits are usually tallied in terms of objectives for performance, cost, schedule, and risk (Drezner, 1992, p. 59). When these objectives are defined at the system level, this can mean competition will be pursued likewise. In this approach, the prototypes are full-scale, but not necessarily fully capable, representations of the final system design. The Air Force’s Close-Air-Attack-Support, Lightweight Fighter, and Advanced Tactical Fighter programs are all examples of the full-scale competitive prototyping approach. In each case, objectives for the prototypes were defined at the system level, and then evaluated on that basis in a competitive fly-off.
How objectives for a competitive prototyping effort are defined has a large impact on competitive prototyping’s costs. How objectives are defined determines how they will be evaluated, and how they will be evaluated determines how competition will be pursued (Smith et al., 1981, p. 38). When objectives are defined at the system level, they can be particularly costly to address as part of a competition, especially when they are very specific. Evaluating system-level objectives related to maintenance, operating, and supportability costs, for example, requires prototypes that closely approximate the final design. The Army took this approach in its competition for the Utility Tactical Transport Aircraft System, and as a result, had to field two production representative prototypes to support the competition (Smith et al., 1981, p. 12). This required a considerable financial commitment and limited the ability to realize efficiencies over the course of development (Smith et al., 1981, p. 17). Instead, the Army had to realize efficiencies over a much longer term. In the case of the Utility Tactical Transport Aircraft System, it was the entire life cycle.
Tallying the benefits of competition over the long term is probably the best approach, especially for systems likely to be in the field for several decades. But with smaller development budgets, the resources necessary to evaluate production representative designs as part of a competitive effort are not likely to be there. Instead, competition must be approached in a different way, such as at a lower scale or with prototypes having fewer capabilities. What this means is defining system-level objectives with much less specificity, or reducing those objectives to something that can be competed at a lower scale, such as a subsystem.
When system-level objectives can be reduced to terms of subsystem performance, pursuing competition will be cheaper. At the same time, being able to capture system-level returns at lower costs will likely make competition more attractive. Development of the AIM-54 Phoenix missile is one example where system-level performance largely hinged on a subsystem-level competition (Defense Science Board, 1978, p. 52). Rather than devote resources to a full system competition, resources were targeted at the subsystem that mattered most. Aircraft have also been highlighted as systems where better performance at the subsystem level can lead to outsized, system-level returns. For this reason, competition at the subsystem level has been highly encouraged (Defense Science Board, 1968, p. 6).
One of the best-documented examples of competitive prototyping at the subsystem level is the cannon competition held during the Air Force’s Close-Air-Attack-Support program. Besides the competition between airframes, the Close-Air-Attack-Support program pursued two other competitions to attain its life-cycle cost and performance goals (Smith et al., 1981, p. 46). The first competition was for the cannon. The second was for the ammunition. The Air Force managed the first. In the second, the Air Force worked through the winning cannon contractor as a surrogate. For the aircraft that later became the A-10, the gun system was critical to providing the desired operational capability. But given the cannon’s high rate of fire, cost objectives were placed on the ammunition so that the capability would remain affordable (Jacques & Strouble, 2010, p. 34–35). To meet these cost objectives, the Air Force directed that a competition be held at the subcontractor level for ammunition. Additional requirements directing how the competition was to be pursued and how the results were to be evaluated were also levied. The result: savings garnered over the life cycle in the form of an eighty percent reduction in the unit cost of ammunition (Jacques & Strouble, 2010, p. 44).
Successfully pursuing competition at the subcontractor level requires the Government to intervene in a relationship it would often leave alone. To meet the Government’s needs, additional requirements related to managing a subcontractor must be levied on the prime (Reed et al., 1994, p. 17). The prime contractor must be informed of the Government’s requirements, including source selection criteria, to ensure the Government’s priorities are what shape the competition. It is also not far-fetched to condition any down-select decision on the Government’s pre-approval (Reed et al., 1994, p. 18). Implementing these additional controls will undoubtedly lead to more development costs, but not implementing them may undermine the objectives of having a competition between subcontractors. In the case of the Close-Air-Attack-Support program, directing the prime was surely more expensive, but the gains it reaped have been garnered over the long term.
Framing objectives in non-system-specific terms, such as in terms of capability needs rather than materiel needs, is another way to let competition work, and at lower cost. The idea behind this approach is that, by allowing competition to work on a wider scale without being confined to a subset of materiel solutions, it will reap greater returns. The Congressional Commission on Government Procurement, also known as the McGuire-Holifield Commission, believed more competition in terms of capability needs during the early stages of the acquisition process would have a greater effect on acquisition outcomes (Commission on Government Procurement, 1972, p. 79). Rather than have industry respond with competing proposals to a predetermined materiel need, the commission advocated for competing materiel solutions in response to a capability need (Commission on Government Procurement, 1972, p. 126–128).
In today’s acquisition practice, this means competition between various materiel solutions prior to Milestone A. Prototyping at this early stage in development takes the form of conceptual prototypes, which are breadboard designs of individual components and subsystems coupled with estimates of system performance provided by traditional modeling, simulation, and analysis techniques (OTA, 1992, June, p. 52–54). Conceptual prototypes provide only crude approximations of system performance and cost, but enough insight to challenge assumptions and open the door for more innovative thought. They are also less costly to pursue than other types of prototypes, and, considering their timing, provide benefits that can be exploited over a much longer term.
Others have picked up on this theme of non-system-specific prototyping as a means to innovate and have taken it in another direction: technology transition. In terms of budget categories, innovation has been found most likely to reside in the applied research and advanced technology development budget categories (Defense Science Board, 1969, p. 19). Technologies that are prototyped at this stage of development are not ready for operational use, but they can be demonstrated and assessed for operational utility (Defense Science Board, 1987, p. 24).
More prototyping at these early stages of development has been heralded as a means for focusing research, bridging the worlds of technology development and systems development, and expanding operational performance at lower cost (Blue Ribbon Defense Panel, 1970, p. 69; Isenson & Sherwin, 1969, p. 103; National Research Council, 2001, p. 59). In the scientific and technology community, prototypes that appear in these budget categories are known as Advanced Technology Demonstrators, and they have become a valuable tool for technology transition. They also can trace their lineage to the prototyping recommendations of the Packard Commission (Defense Science Board, 1987, p. 22).
Competing prototypes in the context of technology transition can take the form of a competition between system upgrades. A competition between system upgrades is typically more cost effective and less risky than competing entirely new systems (Defense Science Board, 1987, p. 22). The incremental improvements upgrades provide go a long way towards facilitating innovation, and therefore, when still cost effective, likely should be pursued (Defense Science Board, 1978, p. 88-89; OTA, 1991, July, p. 71).
Competing system upgrades in a cost effective manner is again a matter of how objectives are framed. When framed in the most general terms, such as in terms of mission effectiveness and expected life-cycle costs, more options for pursuing competition result. One option is to compete a combination of upgrades to existing systems to determine which combination provides the most cost effective capability. The Air Force’s Enhanced Tactical Fighter program, which competed a combination of upgrades to the F-15 and F-16 fighter aircraft and led to the F-15E Strike Eagle aircraft, provides an example of this approach. Another option is to compete an upgrade to an existing system against the capabilities of a new one. This option is a particularly attractive means for providing competitive pressures when resources are too constrained to compete multiple new designs (OMB, 2006, p. 44).
What this means for program planners is that competition may be more a question of program planning than anything else. Given a limited supply of resources, the objective is always to allocate those resources most effectively to obtain the best mix of performance, cost, schedule and risk. Accommodating competition is one way to obtain a better mix. The challenge is to approach competition in such a way that paying for it is worth the costs.
Putting It All Together.
If the purpose of competition is to obtain better performing systems at lower cost—which is usually referred to as obtaining the best value—then allowing the best value to be found is just as important as letting competition in. In military procurements, best value is not so much about just performance or price as it is about both (Packard Commission, 1986, p. 62).
Harnessing the right mix of performance and price has been described as the key to securing a technological advantage over our adversaries (Packard Commission, 1986, p. 56). Surely performance is one aspect of a technological advantage, but so is price. The more systems that can be fielded for a given sum the greater the force multiplier effect. When procuring military systems, the goal is often to find the best mix of the two (OMB, 2006, p. 43). This is no less true when competitively prototyping. Competition merely postures the Government to make a better choice. The challenge is to structure the competition so the best value shines through.
Competitive prototyping presents the Government with a choice to continue funding one or more of multiple designs (Drezner, 1992, p. 15). Through prototype demonstrations, information about each design is gathered and used to determine whether development will proceed, and if so, which contractor is best situated to pursue it. At Milestone B, the information should reflect that the risks and benefits of a particular design warrant greater financial commitments in Engineering and Manufacturing Development (Defense Science Board, 1978, p. 88; Drezner & Huang, 2009, p. 21). If the benefits outweigh the risks, then the prototypes should further demonstrate which design provides the best value (Drezner, 1992, p. 16).
Unlike other procurement approaches, though, competitive prototyping does not rely so much on the Government defining the best value as it does allowing competing contractors to find it on their own. Knowing what the Government considers most important in a final design is important, but giving contractors the freedom to experiment within a range of acceptable criteria facilitates better technological solutions at lower cost. Flexibility allows for bolder technological solutions to be tried and tested without renegotiating the contract, and therefore allows more risks and design flaws to be addressed early (Smith et al., 1981, p. 38). Competition, meanwhile, stokes innovative thought and encourages aggressive design solutions in direct response to identified needs (Defense Science Board, 1968, p. 6; U.S. General Accounting Office, 1974, March, p. 1). The requirement to prototype allows all this to occur without sacrificing credibility (Tyson et al., 1991, p. 31). Through competitive prototyping, better technological solutions should emerge based on performance, not promises, and when they do, the results can be surprising (Defense Science Board, 1968, p. 5).
The outcomes of the Air Force’s Lightweight Fighter competition and the Army’s Advanced Attack Helicopter competition show how this can be true. The superior handling of the YF-16 prototype in the Lightweight Fighter competition was one reason it was preferred (Coram, 2002, p. 305). For the Army, the superior handling and simpler design of the YAH-64 prototype is what caused it to prevail (Smith et al., 1981, p. 166). In both, operational suitability in the form of airframe handling was a primary discriminator. Airframe handling is difficult to evaluate on paper in advance of design, but it is particularly important to determining an aircraft’s operational suitability (Defense Science Board, 1968, p. 8). In the cases of the Lightweight Fighter and the Advanced Attack Helicopter programs, the superior operational suitability of the winning designs seemed to carry the day. For the YAH-64 prototype, these results were surprising. All analysis up to the point of flight test suggested its competitor would win (OTA, 1992, June, p. 52).
Allowing contractors the freedom to explore a mix of technological approaches has a second benefit besides providing for more innovative thought: it allows the Government to know how much capability it can afford to buy (Tyson et al., 1991, p. 33). When approached in this way, competitive prototyping can inform requirements and avert an improvident pursuit of a capability that is just not there. Even when the capability is there, competitive prototyping can show that a cost effective one may be still another generation of prototypes away (Tyson et al., 1991, p. 57). This information can be very valuable.
Indeed, prototyping uncovers all kinds of valuable information (Tyson et al., 1991, p. 21). When judging prototypes as part of a competition, all things should be considered, not just performance or cost (OTA, 1991, July, p. 11). To make prototypes more insightful than paper studies, particular attention should be paid to those things that are difficult to capture in advance and on paper, such as operational suitability (Tyson et al., 1991, p. 13). Also important is past performance. Competitive prototyping can reveal which contractor is more likely to perform better in the next phase (Tyson et al., 1991, p. 21). In terms of past performance, no gauge is likely to be more reliable than the recent experience garnered through prototyping.
All the information prototyping provides can and should be used to choose between competing designs. Competition, meanwhile, should be leveraged to spur innovation and presentation of a best solution. The challenge is to put all these things together so the quest for a better solution will be worth the extra costs.
Relearning how the Defense Department can approach competitive prototyping in ways that make sense is just one of the challenges of meeting WSARA’s acquisition reforms. But no approach to competitive prototyping will be complete until it is based on a clear and consistent lexicon. This is the next challenge for the Department of Defense. The introduction of technology transition in the late 1980s has done much to muddle prototyping’s vocabulary. So much so that at one time a task force was formed to sort it all out (Janifer & Brown, 1991). Its work, though beneficial, has been marginalized by the change that has followed. Much needs to be sorted out; a lexicon for prototypes that parallels the various phases of the acquisition life cycle is needed. Until one emerges, confusion will not only persist, but better outcomes will be frustrated.
Another challenge facing the Department will be to adjust the paradigm by which it approaches prototyping’s results. Sometimes the process of prototyping, whether it is competitive or not, suggests that the benefits of a particular technology are not worth the costs. In such cases, the best decision is probably not to go forward with large commitments of capital. Some may consider such a result to be a prototyping failure. This is the wrong thinking. It is a prototyping success (Drezner, 1992, p. 75).
Prototyping identifies problems prior to a full commitment of capital, before it becomes hard to view a program as anything but “too big to fail” (GAO, 1992, December, p. 55–56). It is better to direct resources to technologies that can get to the war fighter more cheaply and more quickly than it is to continue investing in technologies that have promise but no near-term return. Indeed, this is the primary reason one prototypes.
A final challenge lingers in establishing better prototyping guidelines. These guidelines will need to be more specific than those presented here. They will also have to account for the differences in prototypes among various military systems. Not all systems can be cost effectively prototyped at the full system level. The non-recurring expenses associated with systems like satellites and aircraft carriers are too large to ever be recouped later, and as such, they are best prototyped at something less than full-scale (Drezner & Huang, 2009, p. 6). Aircraft and munitions, on the other hand, are best prototyped at higher levels of integration. To make competitive prototyping cost effective, there is a balance that has to be struck in terms of cost and capabilities that enables better decisions (National Research Council, 2011, p. 36). The balance will vary by system. Better guidelines are needed for finding where the balances lie.
Similarly, there are points where prototyping on a competitive basis, as well as prototyping itself, may no longer make sense. To confirm through prototyping what is already known may be reassuring but is not likely a productive investment of capital. To continue to compete when one solution far exceeds the rest is not likely to result in much gain (National Research Council, 2011, p. 96). After all, pursuing competition for competition’s sake is not the goal (Packard Commission, 1986, p. xxiii). The goal is better value.
When the costs of competition and for prototyping exceed any expected returns, both mandates might need to go away. With WSARA, these are the points where waiver needs to be considered. But even when waiver appears to be appropriate, one must ask whether it makes more sense to approach competitive prototyping in a different way. Pursuing prototyping at the subsystem or subcontractor levels may, on net, provide enough benefits to still make the competitive exercise worthwhile (Drezner, 1992, p. 73).
Better guidelines in all these areas, as well as a better lexicon, would do much to institutionalize competitive prototyping as an acquisition practice in the Department of Defense. In 1986, Congress passed the first statute to encourage the Department to do just this. In the mid-1990s, the statute was extended and later repealed before the process was complete (Reed et al., 1994, p. 40). With WSARA, a new generation of procurement professionals must pick up where the last one left off. Relearning what it means to prototype, and to do so competitively and in a cost effective manner, are two steps in the right direction. Many more steps should follow.
Battershell, A. L. (1995). Technology approach: DoD versus Boeing (a comparative study). Acquisition Review Quarterly, 2(3), 213–230.
Blue Ribbon Defense Panel. (1970). Report to the President and the Secretary of Defense on the Department of Defense. Washington, DC: Government Printing Office.
Commission on Government Procurement. (1972). Report of the Commission on Government Procurement (Vol. 2). Washington, DC: Government Printing Office.
Coram, R. (2002). Boyd: The fighter pilot who changed the art of war. New York, NY: Little, Brown and Company.
Defense Science Board. Task Force on Fighter Aircraft. (1968). Fighter aircraft. Washington, DC: Office of the Director of Defense Research and Engineering.
Defense Science Board. Panel on R & D Management. (1969). Report of the Panel on R & D Management 1969 summer study. Washington, DC: Office of the Under Secretary of Defense for Research and Engineering.
Defense Science Board. Task Force on the Acquisition Cycle. (1978). Report of the Acquisition Cycle Task Force 1977 summer study. Washington, DC: Office of the Under Secretary of Defense for Research and Engineering.
Defense Science Board. Summer Study on Technology Base Management. (1987). Report of the Defense Science Board 1987 Summer Study on Technology Base Management. Washington, DC: Office of the Under Secretary of Defense for Acquisition.
Defense Science Board. Task Force on Aircraft Assessment. (1993). Report of the Defense Science Board Task Force on Aircraft Assessment. Washington, DC: Office of the Under Secretary of Defense for Acquisition.
Drezner, J. A. RAND Corporation, (1992). The nature and role of prototyping in weapon system development (R-4161-ACQ). Retrieved from website: http://www.rand.org/content/dam/rand/pubs/reports/2007/R4161.pdf
Drezner, J. A., & Huang, M. RAND Corporation, (2009). On prototyping: Lessons from RAND research (OP-267). Retrieved from website: http://www.rand.org/pubs/occasional_papers/2010/RAND_OP267.pdf
Gertler, J. (2009, December 22). Air Force F-22 fighter program: Background and issues for Congress (Congressional Report No. RL31673). Washington, DC: Library of Congress Congressional Research Service. Retrieved from Open CRS website: https://opencrs.com/document/RL31673/2009-12-22/
Isenson, R. S., & Sherwin, C. W. Department of Defense, Office of the Director of Defense Research and Engineering. (1969). Project Hindsight (AD495905). Washington, DC: Author.
Jacques, D. R., & Strouble, D. D. Air Force Institute of Technology, Air Force Center for Systems Engineering. (2010). A-10 Thunderbolt II (Warthog) systems engineering case study. Retrieved from Author website: https://www.afit.edu/cse/cases.cfm?case=21&a=detail
Janifer, E. M., & Brown, E. A. Department of the Army, U.S. Army Materiel Command. (1991). Acquisition semantics (LABCOM-SR-91-1). Washington, DC: Author.
Klein, B. H., Glennan, T. K., & Shubert, G. H. RAND Corporation, (1971). The role of prototypes in development (RM-3467-1-PR). Santa Monica, CA: Author.
National Research Council, Committee on Review of the U.S. Department of Defense Air and Space Systems Science and Technology Program. (2001). Review of the U.S. Department of Defense air, space, and supporting information systems science and technology program. Retrieved from National Academy Press website: http://www.nap.edu/catalog/10179.html
National Research Council, Committee on Evaluation of U.S. Air Force Preacquisition Technology Development. (2011). Evaluation of U.S. Air Force preacquisition technology development. Retrieved from National Academy Press website: http://www.nap.edu/catalog.php?record_id=13030
Office of Management and Budget. (2006). Capital programming guide v.2.0 supplement to Office of Management and Budget circular A-11, part 7: planning, budgeting, and acquisition of capital assets. Washington, DC: Author.
Perry, R. RAND Corporation, (1972). A prototype strategy for aircraft development (RM-5597-1-PR). Santa Monica, CA: Author.
President’s Blue Ribbon Commission on Defense Management. (1986). A quest for excellence: Final report to the President. Washington, DC: Government Printing Office.
Reed, D. E. et al. Department of Defense, Office of the Inspector General. (1994). Effectiveness of prototyping acquisition strategies for major defense acquisition programs (Report No. 94-181). Retrieved from Author website: http://www.dodig.mil/Audit/Audit2/94-181.pdf
Smith, G. K., Barbour, A. A., McNaugher, T. L., Rich, M. D., & Stanley, W. L. RAND Corporation, (1981). The use of prototypes in weapon system development (R-2345-AF). Retrieved from Author website: http://www.rand.org/pubs/reports/2006/R2345.pdf
Tyson, K. W., Nelson, J. R., Om, N. I., & Palmer, P. R. Institute for Defense Analyses, (1989). Acquiring major systems: Cost and schedule trends and acquisition initiative effectiveness (P-2201). Alexandria, VA: Author.
Tyson, K. W., Nelson, J. R., Gogerty, D. C., Harmon, B. R., & Salerno, A. W. Institute for Defense Analyses, (1991). Prototyping defense systems (D-1097). Alexandria, VA: Author.
Office of Technology Assessment. (1991, July). Redesigning defense: Planning the transition to the future U.S. defense industrial base (OTA-ISC-500). Washington, DC: Government Printing Office.
Office of Technology Assessment. (1992, June). Building future security: Strategies for restructuring the defense technology industrial base (OTA-ISC-530). Washington, DC: Government Printing Office.
U.S. General Accounting Office. (1974, March). Review of five Department of Defense major weapon systems developed under competitive prototype contracts. (B-167034). Retrieved from Author website: http://www.gao.gov/assets/200/193566.pdf
U.S. General Accounting Office. (1992, December). Weapons acquisition: A rare opportunity for lasting change. (Publication No. GAO/NSIAD-93-15). Retrieved from Author website: http://www.gao.gov/assets/160/152880.pdf
U.S. Government Accountability Office. (1999, July). Best practices: Better management of technology development can improve weapon system outcomes. (Publication No. GAO/NSIAD-99-162). Retrieved from Author website: http://www.gao.gov/assets/160/156673.pdf
U.S. Government Accountability Office. (2001, October). Joint Strike Fighter acquisition: Mature critical technologies needed to reduce risks. (Publication No. GAO-02-39). Retrieved from Author website: http://www.gao.gov/assets/240/232918.pdf
U.S. Government Accountability Office. (2006, September). Best practices: Stronger practices needed to improve DOD technology transition processes. (Publication No. GAO-06-883). Retrieved from Author website: http://www.gao.gov/new.items/d06883.pdf
U.S. Government Accountability Office. (2008, September). Defense acquisitions: Fundamental changes needed to improve weapon program outcomes. (Publication No. GAO-08-1159T). Retrieved from Author website: http://www.gao.gov/assets/130/121377.pdf
U.S. Government Accountability Office. (2011, March). Defense acquisitions: Assessments of selected weapon programs. (Publication No. GAO-11-233SP). Retrieved from Author website: http://www.gao.gov/assets/320/317081.pdf
U.S. Government Accountability Office. (2012, April). Tactical aircraft: Comparison of F-22A and legacy fighter modernization programs. (Publication No. GAO-12-524). Retrieved from Author website: http://www.gao.gov/assets/600/590505.pdf
Weapon Systems Acquisition Reform Act of 2009, Pub. L. No. 111-23, § 202, 123 Stat. 1704 (2009).
Weapon Systems Acquisition Reform Act of 2009, Pub. L. No. 111-23, § 203, 123 Stat. 1704 (2009).
Whittle, R. (2010). The dream machine: The untold history of the notorious V-22 Osprey. New York, NY: Simon & Schuster.
Younossi, O., Stem, D. E., Lorell, M. A., & Lussier, F. M. RAND Corporation, (2005). Lessons learned from the F/A-22 and F/A-18E/F development programs (MG-276). Retrieved from Author website: http://www.rand.org/pubs/monographs/2005/RAND_MG276.pdf
Samuel Mark Borowski is an intellectual property attorney in the Air Force General Counsel’s Office and a former technical lead responsible for the development, fielding, modernization, and sustainment of multiple Air Force systems. He can be reached by e-mail at email@example.com.