Department of Defense Acquisition Program Terminations: Analysis of 11 Program Management Factors



Authors: Lt Col Patrick Clowney, USAF (Ret.), Jason Dever, and Steven Stuban

The research described herein aims to add to the body of knowledge of program management and factors that lead to acquisition program terminations within the U.S. Department of Defense (DoD). Specifically, this research surveyed three groups—DoD acquisition program managers, defense industry program managers, and defense industry consultants—to evaluate and analyze key program factors that influence DoD acquisition program terminations. This research used relative importance weight calculations and a chi-squared distribution analysis to compare the differences between DoD acquisition program managers, defense industry program managers, and defense industry consultants regarding the factors that lead to DoD acquisition program terminations. The results of this research indicate that a statistically significant difference does not exist between the three groups as to the relative importance of 11 program management factors.


The U.S. Department of Defense (DoD) loses billions of dollars annually on canceled or failed acquisition programs (DoD, 2013). In fact, acquisition studies conducted by the Government Accountability Office (GAO), the DoD, the Office of Management and Budget, and many Federally Funded Research and Development Centers illuminate the myriad programs that are terminated without reaching full operational capability (DoD, 2013).
From 1997 to the present, the DoD has spent in excess of $62 billion on programs that were eventually canceled (Table 1 and Figure 1). The DoD has invested a great deal of time, energy, and resources to investigate the root causes of program cancellation and to determine why so many programs fail to make it through the acquisition system. In fact, the Office of Performance Assessments and Root Cause Analyses (PARCA), established by the Weapon Systems Acquisition Reform Act of 2009, continuously evaluates the status of defense programs and issues policies, procedures, and guidance governing the conduct of such work by the Military Departments and Defense Agencies (Weapon Systems, 2009).

TABLE 1. SELECTED DoD PROGRAM TERMINATIONS AND COSTS

FIGURE 1. MAJOR PROGRAMS OFFICIALLY CANCELLED WITHOUT PRODUCING ANY OR FEW OPERATIONAL UNITS AS A PERCENTAGE OF RDT&E (1995–2013)


When programs are terminated, the DoD loses billions of investment dollars (Learnard, 2013). In some cases, however, the DoD garners value despite termination: marginal benefits include economic value, knowledge, skills, lessons learned, and insights. Further, the effects of termination ripple across many areas of the acquisition enterprise.
Scholars, program managers, and systems engineers posit that a host of factors influences whether a program is terminated or allowed to continue. They include, but are not limited to, political pressures, cost overruns, schedule overruns, and performance shortfalls. Figure 2 illustrates the various effects of program termination (GAO, 2014).

FIGURE 2. PROGRAM CANCELLATION EFFECTS

The purpose of this research was to compare the three groups primarily associated with DoD program and project management: DoD program managers, defense industry program managers, and defense industry consultants. These three groups were selected because each works with DoD programs, yet each brings a unique perspective. Exploring these different perspectives is essential for understanding acquisition systems, and viewing systems from various vantage points increases the overall understanding and appreciation of acquisition program dynamics (Cornell, 2009).

Most research into DoD program termination focuses on an analysis of scope, schedule, and budget. This research expands on those three primary factors and evaluates eight others. Although the literature review identified additional factors, the authors chose not to evaluate every factor in the literature; instead, they evaluated the 11 factors most common across the literature on program success, failure, and termination. This research aims to identify the factors that have the greatest influence on program and project cancellation from the expert’s perspective, and to capture any significant differences between DoD program managers, defense industry program managers, and defense industry consultants.

To that end, this research addresses two interrelated research questions:

  • Are there any statistically significant differences between DoD acquisition program managers, defense industry program managers, and defense industry consultants as to the leading factors that result in DoD acquisition program terminations?
  • What are the critical factors and attributes that lead to DoD acquisition program terminations?

If statistically significant differences exist between the three groups as to the relative importance of cancellation factors, the research will identify where those differences lie; if they do not, the research will identify the synergies between DoD program managers and defense industry program managers. In the first case, the differences could motivate future research into why the perspectives diverge. In the second case, the common responses could highlight opportunities for emphasis to reduce the frequency of program terminations.

Program and project failure and success are an enduring subject of investigation, discovery, and discussion in government, business, industry, and the private sector. Indeed, project termination usually comes with tremendous financial consequences and significant loss of time. Within the DoD, a great deal of research has been conducted by Federally Funded Research and Development Centers (RAND, Center for Naval Analyses, MITRE, etc.), think tanks, and academia into the causes of program and project failure (Hofbauer, Sanders, Ellman, & Morrow, 2011). While most of this research focuses on the unique root causes of individual program failures, a comprehensive analysis at the aggregate level—using expert judgment to compare and contrast DoD program managers, DoD industry program managers, and DoD consultants—is missing from the literature. This research is a first step toward filling that gap: a qualitative and quantitative analysis that uses expert judgment, gathered through a survey, to assess the relative importance of recognized factors that lead to program and project termination.

Literature Review

A key aspect of understanding program and project failure is an analysis of the attributes and factors that affect program and project management. The factors that influence program and project management success in multiple industries have been thoroughly investigated in academia, and they serve as an analytical tool that provides a unique look into DoD acquisition program and project failures from a program and project management perspective. Essential to this task are identifying the key factors, understanding the root causes, and ascertaining the major influences of program and project failure. Because DoD program and project terminations cost American taxpayers billions of dollars, an investigation into this subject matter is imperative for the DoD, the defense industry, Congress, and systems engineering researchers in order to glean an enhanced understanding of DoD program and project failure, thereby promoting efficient, effective, and successful program and project management.

An exhaustive literature review identified 11 critical factors associated with program and project management for examination. Program and project management, project failure, project success, and the factors that lead to project failure and project success remain important issues of significant interest to program managers, decision makers, and executives within the DoD.

The literature is replete with scholarly articles and research on this subject. Articles pertaining to factors that affect project management, success, and failure generally fit into several broad categories: the value of project management; project success criteria; project failure criteria; project management rubrics; case studies; and industry-specific research, consulting services, and independent studies (e.g., information technology, construction, and engineering). Further, a significant body of research focuses on the roles of managers in project failure. The following discussion briefly summarizes the most salient research on program and project management, failure, and success; the 11 factors identified in the literature review served as the factors for analysis.

Pinto and Slevin (1987) developed a framework (Figure 3) for understanding the implementation of projects, as well as a diagnostic tool for the project manager known as the Project Implementation Profile. Their research focused on identifying predictive factors of successful program and project management and serves as a seminal work for all discussions on the subject. It identified the following 10 factors (Pinto & Slevin, 1987):

  1. Project mission
  2. Top management support
  3. Project schedule plan
  4. Client consultation
  5. Personnel and recruitment
  6. Technical tasks
  7. Client acceptance
  8. Monitoring and feedback
  9. Communication
  10. Troubleshooting

FIGURE 3. PROJECT IMPLEMENTATION PROFILE


Their framework showed that the factors are dynamic. Pinto and Slevin claim that, when studying program and project management, the factors follow a logical progression. Although they recognized that the factors are interdependent, their study did not explore this finding.

Pinto and Slevin further suggest that their framework is an effective tool for project managers, who can use it to manage and monitor a project’s posture and to determine where the project stands in its life cycle. The tool can also serve as a measure of project success. They developed a Likert scale instrument with which a project manager can measure the importance of each factor at different points in the life cycle to determine which factor is most important.

Additional research conducted by Lawrence and Scanlan (2007) provides tremendous insight into project failure in the defense industries. They were involved in a 10-year research project across the U.S. and European aerospace industries to create methodologies and tools for managers of large aerospace projects (Lawrence & Scanlan, 2007). The study was commissioned to address the large number of project terminations in those industries. Although their focus was on aerospace, the authors maintain that their findings apply universally to large engineering projects in all industries (Lawrence & Scanlan, 2007). Interviews with many program and project managers revealed that the causes of project termination were not solely the project managers’ fault; program and project managers were characterized as highly intelligent and extremely competent. Lawrence and Scanlan concluded that more robust software tools are needed to manage the complexities of today’s multifaceted engineering projects. However, they also identified eight other critical elements that strongly affect project success or failure (Lawrence & Scanlan, 2007):

  1. Poor initial planning
  2. Lack of clear objectives and deliverables
  3. Lack of understanding of dependencies
  4. Inadequate resource allocation
  5. Poor risk analysis
  6. Poor change management
  7. Lack of ‘buy-in’ from stakeholders
  8. Poor understanding of priorities

Their findings are germane to any discussion of defense industry project management. The technology, complexity, large budgets, and multiple stakeholders of aerospace defense industry projects mirror the problems and challenges of DoD aerospace acquisition programs. Thus, Lawrence and Scanlan’s findings serve as a strong foundation for a discussion of project terminations within the DoD.

Research into project management conducted by Mir and Pinnington (2014) illustrates the dynamic relationships and interactions of successful project management factors. They tested the relationship between project management performance and project success and concluded that a positive correlation exists between project management performance and the contributing variables of project success. The project management performance variables (Mir & Pinnington, 2014) included:

  1. Project efficiency
  2. Impact on customer
  3. Impact on project team
  4. Business success
  5. Preparing for future

Project success factors (Mir & Pinnington, 2014) included:

  1. Project manager leadership
  2. Project manager staff
  3. Project manager policy and strategy
  4. Project manager partnerships and resources
  5. Project manager life cycle management processes
  6. Project manager key performance indicators

Their research clearly showed that dynamic relationships exist between the factors. Project management factors must therefore be considered in a context of dynamic relationships: factors are not static, and each factor or variable in a project dynamically influences the others.

Research conducted by Allen, Alleyne, Farmer, McRae, and Turner (2014) on project success highlights some of the factors and issues surrounding program success and failure. Using case study analysis as the rubric to identify project success factors, they studied the U.S. Coast Guard’s 123-Foot Patrol Boat and Procter & Gamble’s New Growth Factory (Allen et al., 2014). The researchers also developed a survey and administered it to project managers involved in the respective projects. Based on the case studies and the associated survey, they concluded that the following factors influence project success (Allen et al., 2014):

  1. Project management plan
  2. Responsibility assignment matrix
  3. Budget monitoring
  4. Schedule monitoring
  5. Insufficient stakeholder engagement
  6. Broad scope and requirements
  7. Product monitoring

They also concluded that these factors are excellent tools for analyzing both large and small projects (Allen et al., 2014).

The Defense Acquisition University Smart Shutdown Guidebook (DAU, 2009) provides tremendous insight into factors that lead to program success or failure and, eventually, termination. The guidebook specifically lists the following factors:

  1. Changes in threat environment
  2. Technology changes
  3. Changes in budget environment
  4. Unsustainable cost growth in development, production, or deployment
  5. Failure to meet key performance parameters
  6. Policy changes that affect system deployment
  7. Selection of alternative approaches to mission requirements
  8. Shifting executive authority from one Service to another Service
  9. Other programmatic factors

These factors, along with other factors identified in the literature review, serve as a good basis for analyzing the most influential factors in program termination.

Although the literature identified other factors that affect program success, failure, and termination, the authors chose to limit the scope of analysis of this research to the factors that were most common in multiple works of the literature review. Table 2 summarizes the findings and conclusions of these and other researchers on the topic of factors influencing the outcomes of acquisition programs.

TABLE 2. 11 LEADING FACTORS THAT INFLUENCE PROJECT FAILURE


Research Aim and Objectives

This article analyzes and evaluates the causes of acquisition program and project failure within the DoD. The objectives of this article are:

  • To study, identify, and evaluate the most critical factors that influence program and project termination within DoD;
  • To evaluate the main factors, based on expert judgment, that lead to program and project failure, and the relative importance of those factors;
  • To identify any differences between DoD acquisition program managers, DoD contractors, and DoD consultants; and
  • To serve as a springboard for future research in DoD program and project management.

The purpose of this research is to expand the current understanding of program and project failures and successes, and to identify the different perspectives between various stakeholders within the acquisition program and project management enterprise at the aggregate level. Although significant research has been conducted on terminated programs within the DoD, the research has focused on individual programs or a group of select programs. The Federally Funded Research and Development Centers, GAO, and Congressional Research Service normally evaluate a specific program or a small group of programs.

However, the authors could find in the literature neither a robust, comprehensive study based on expert judgment (the approach used in this research) nor an analysis of DoD program and project terminations employing the analytical approach used here.

Methodology

The methodology for this study combined a literature review, which identified factors that lead to program and project success or failure, with expert judgment elicited through a survey; Relative Importance Weight (RIW) calculations and a Chi-squared distribution analysis were then used to analyze the identified factors. The RIW methodology consisted of conducting a survey to identify and evaluate the relative importance of the significant factors influencing program termination (see Figure 4 for the methodology flow). Respondents fell into three groups: (a) DoD program and project managers, (b) DoD industry personnel, and (c) DoD consultants. Responses from individuals who did not fall into one of these groups, or who had no experience with program and project termination, were not considered. The 131 participants in the structured survey were identified through professional networks, Project Management Institute events, and National Defense Industrial Association events.

FIGURE 4. METHODOLOGY FLOW DIAGRAM


To gather data for evaluating, analyzing, and comparing program and project failure factors within the DoD program portfolio, a questionnaire was developed for respondents from the three groups. The questionnaire covered the 11 leading factors that influence project failure, extrapolated from the literature review and outlined in Table 2. The literature review indicated that project success and failure factors overlap considerably; whether a factor is framed as driving success or failure depends on the author’s point of view. Essentially, program success and failure factors are two sides of the same coin. In the context of this article, program and project failure is defined by program termination: the factors identified in the literature influence program performance and thus influence program termination.

A total of 131 responses were analyzed: 45 from DoD program managers, 52 from defense industry program managers, and 34 from defense industry consultants. Based on previous research (Doloi, 2008; Flyvbjerg, Holm, & Buhl, 2004), these numbers are acceptable for this type of analysis. Further, because these data are ordinal and thereby nonparametric, many opinions exist on what constitutes an appropriate sample size (Bonett & Wright, 2000; Noether, 1987).

The various approaches to estimating an appropriate sample size rely on assuming some degree of normality. To be confident in the sample size while maintaining the integrity of the nonparametric data, a sample of at least 30 per group was deemed appropriate; n = 30 is recognized in many statistical works as an acceptable sample size (Devore, 2012; Sprent, 1989). Table 3 identifies the profiles of the respondents.

TABLE 3. RESPONDENTS’ PROFILES

Respondents were asked to rate the relative importance of the factors that influence project failure on a five-point Likert scale (1 = very low, 2 = low, 3 = medium, 4 = high, 5 = very high). To differentiate the groups’ expert perceptions of the relative importance of project failure factors, two hypotheses were developed and tested:

  • Ho: There is no agreement among the groups on the relative importance of factors that influence program/project failure.
  • H1: Agreement exists among the groups on the relative importance of factors that influence program/project failure.

Findings and Data Analysis

For the analysis of responses, an RIW analysis was conducted (Doloi, 2013; Frimpong et al., 2003). RIW is a weighted measure for comparing the importance of various attributes according to a group of respondents. Weights must be assigned to the survey responses; when the responses are already numerical and ordered so that the “most important” response carries the highest value (as with a Likert scale), the weights come directly from the survey results. The RIW for responses was calculated using the following equation (Salunkhe & Patil, 2013):

$$\mathrm{RIW}_j = \frac{x_j}{\sum_{j=1}^{N} x_j}, \qquad x_j = \sum_{i=1}^{5} a_i\, n_{ij} \qquad (1)$$

where:

RIWj = the relative importance weight for attribute j
ai = the weight given to response i (a five-point Likert scale is used, so i = 1, 2, 3, 4, 5)
nij = the number of respondents who selected response i for attribute j
xj = the sum of all weighted responses for the jth attribute
N = the total number of factors
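As a concrete illustration of Equation 1, the following minimal Python sketch computes an RIW for each factor from Likert response tallies. The factor names and counts are invented for illustration and are not the study’s survey data:

```python
# Hedged sketch of the RIW calculation in Equation 1.
# The tallies below are invented for illustration, not the study's data.

# counts[factor][i] = number of respondents giving Likert rating i+1 to the factor
counts = {
    "schedule-related": [2, 5, 10, 18, 10],
    "budget-related":   [3, 6, 12, 15, 9],
    "scope-related":    [4, 8, 14, 12, 7],
}
weights = [1, 2, 3, 4, 5]  # a_i: Likert weights for responses 1..5

# x_j = sum over i of a_i * n_ij (total weighted score for factor j)
x = {f: sum(a * n for a, n in zip(weights, tallies)) for f, tallies in counts.items()}

total = sum(x.values())  # denominator of Equation 1: sum of x_j over all factors

riw = {f: xj / total for f, xj in x.items()}  # RIW_j; the weights sum to 1

for f, w in sorted(riw.items(), key=lambda kv: -kv[1]):
    print(f"{f}: RIW = {w:.3f}")
```

Because the RIWs are normalized over all factors, they can be ranked directly within each respondent group, which is how the rankings in Table 4 are produced.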

The RIW equation was used to calculate the RIW for each program and project failure factor, and the resulting weights were ranked for each of the three groups. The results are shown in Table 4.

TABLE 4. SUMMARY OF RIW RESPONSES

To determine whether a significant difference existed between the three groups’ rankings, Kendall’s coefficient of concordance (Kendall’s W) served as the analytical tool. Kendall’s W is a nonparametric statistic recognized as appropriate for assessing the degree of agreement among judges. It ranges from 0 to 1 (Grzegorzewski, 2006): a value of zero indicates no agreement, and a value of one indicates complete agreement (Hollander, Wolfe, & Chicken, 2014):

$$W = \frac{12 \sum_{j=1}^{n} \left(R_j - \bar{R}\right)^{2}}{m^{2}\left(n^{3} - n\right)}, \qquad \bar{R} = \frac{m(n+1)}{2}$$

where:

  • m = total number of judges (respondents)
  • n = total number of objects (factors)
  • Rj = the sum of the ranks assigned to object j across the m judges
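As an illustration of this formula, Kendall’s W can be computed directly from a matrix of ranks. The ranks below are hypothetical stand-ins, not the actual group rankings from Table 4:

```python
# Sketch of Kendall's W for m judges ranking n objects.
# The rankings below are hypothetical; the study's actual ranks are in Table 4.
ranks = [
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],   # judge 1 (e.g., DoD PMs)
    [1, 2, 4, 3, 5, 7, 6, 8, 9, 10, 11],   # judge 2 (e.g., industry PMs)
    [2, 1, 4, 3, 6, 5, 7, 9, 8, 10, 11],   # judge 3 (e.g., consultants)
]
m, n = len(ranks), len(ranks[0])            # judges, objects

R = [sum(judge[j] for judge in ranks) for j in range(n)]  # rank sums R_j
R_bar = m * (n + 1) / 2                     # mean rank sum
S = sum((Rj - R_bar) ** 2 for Rj in R)      # squared deviations from the mean

W = 12 * S / (m**2 * (n**3 - n))            # Kendall's coefficient of concordance
print(f"Kendall's W = {W:.2f}")             # 1.0 would mean complete agreement
```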

Based on the responses in Table 4, Kendall’s W = 0.84, which strongly suggests that agreement exists among the three groups. Despite this strong evidence of agreement, a Chi-squared approximation was also conducted to validate the results. The Chi-squared statistic is shown here, followed by Table 5:

χ² = m(k − 1)W (Devore, 2012)

where m = the number of judges (groups) and k = the number of factors.
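A short sketch of this validation step follows, assuming SciPy is available; W is taken as 0.848 so that m(k − 1)W reproduces the 25.45 reported below:

```python
# Sketch of the Chi-squared approximation for Kendall's W.
# Assumes m = 3 groups, k = 11 factors, W = 0.848 (reported rounded as 0.84).
from scipy.stats import chi2

m, k, W = 3, 11, 0.848
chi_sq = m * (k - 1) * W          # chi-squared statistic: 3 * 10 * 0.848 = 25.44
df = k - 1                        # degrees of freedom = 10

critical = chi2.ppf(1 - 0.01, df) # critical value at alpha = .01 (about 23.21)
p_value = chi2.sf(chi_sq, df)     # upper-tail p-value (about .0045)

print(f"chi2 = {chi_sq:.2f}, critical = {critical:.2f}, p = {p_value:.4f}")
```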

TABLE 5. RESULTS TABLE


Based on the Chi-squared equation, the calculated value of Chi-squared was 25.45. For k = 11 factors, the degrees of freedom are 10; at a significance level of α = .01, the critical value of χ² is:

χ²(.01, 10) = 23.21

Since 25.45 > 23.21, the null hypothesis is rejected. Further, the p-value of .0045 is less than the level of significance α = .01, again rejecting the null hypothesis. The results indicate that a significant level of agreement exists among DoD program managers, DoD industry program managers, and DoD consultants.

Results Discussion

The survey was analyzed from the perspectives of DoD program managers, DoD consultants, and DoD industry program managers. The RIW analysis illuminated the factors that have the greatest influence on program termination from each group’s perspective. Table 4 displays the rankings by group.

Since the data analysis indicates that agreement exists among the groups, the leading factors present clear opportunities for action. The analysis indicates that several factors greatly influence program termination. DoD program managers and defense industry program managers agreed on the top three factors: schedule-related attributes, budget-related attributes, and scope-related attributes. DoD consultants ranked schedule-related and budget-related attributes first and second, respectively, but placed contractor-related attributes in their top three.

The data also indicate that all groups ranked program and project management team-related attributes as the least important factor. This suggests strong agreement among the groups that program and project management teams put forth great effort to ensure program success, and it implies that program managers have the right tools and understanding of systemic acquisition processes to be successful.

Recommendations

Based on the preceding analysis, and given the experts’ agreement that several attributes influence DoD program termination, the authors offer the following recommendations:

  • The DoD should continue investing in understanding the root causes of schedule-related attributes.
  • Realistic, adequate, and appropriate fiduciary requirements must be established early in the programming process to ensure program success.
  • The DoD should continue investing in understanding requirements creep in programs.
  • Since DoD consultants ranked contractor-related attributes extremely high, and DoD program managers and DoD industry program managers rated them relatively high, this area warrants further research and a root cause analysis of contractor-related attributes.
  • The DoD should continue its investment in training and equipping program managers.

Implementation of the recommendations should have a positive influence on DoD acquisition program performance.

Study Limitations

The research presented in this article has several limitations that should be considered when interpreting the findings. First, this study was performed at the aggregate level within the DoD. Survey participants represented all branches of the Services, but the perspectives of the individual Services were not considered separately; only the DoD aggregate was analyzed.

Although the Services have very similar experiences with program and project cancellation, the nuances of any differences in the importance of factors are worth exploring in future research. A second limitation is the mode chosen for factor analysis: the researchers selected the factors presented to survey participants. Although the factors were drawn from an exhaustive literature review, an open-ended survey might have surfaced a new set of factors unique to DoD program and project management. Further, the researchers limited the factors for analysis, thereby excluding some factors from the literature; however, the factors selected were those most common across multiple authors and articles.

A final limitation is the absence of a root cause analysis of the factors identified, surveyed, and analyzed. Such an analysis would provide greater fidelity and granularity, which could lead to plausible solutions and corrective actions addressing the influence of these factors on DoD acquisition program termination. The authors chose to focus first on identifying DoD acquisition program factors and determining whether agreement existed among the three prominent DoD acquisition groups; they recommend that future studies focus on root cause analysis of the factors identified.

Summary and Conclusions

This research identified the RIW of factors that influence DoD program termination. Factors were identified through a literature review of salient research on program success and failure and served as the basis for analysis of DoD acquisition program termination. A survey built from these factors was administered to DoD acquisition program managers, DoD industry program managers, and DoD consultants to determine the RIW of each factor, and the three groups’ responses were compared. The results showed agreement among the three groups on the influence of the factors analyzed. This agreement suggests that there are opportunities and areas for the groups to work together to mitigate the most important factors, thereby decreasing the likelihood of program termination. Based on the analysis of the results, the authors presented several recommendations for the DoD acquisition enterprise.

Areas for Future Research

In a similar vein to the study limitations, the authors recommend several areas for future research. First, this study did not consider the role of Congress in DoD acquisition program cancellation. In the United States, Congress plays a huge role in program termination: it has the power to cut program budgets, terminate programs, conduct hearings on program status, and change requirements. Often, the DoD wants to cut a program, but Congress orders the program to continue.

As mentioned in the study limitations, an open-ended survey could produce an entirely new set of factors or attributes for consideration unique to DoD programs. Once such factors are identified, a host of data analyses could be performed, including, but not limited to, analysis of the dynamic interactions of these new factors, attribute and factor analysis, and RIW. This study identified the most important factors; future research could focus on the why of the most critical factors unique to the DoD.

Another area for future research could focus on the derivatives of failed and canceled programs. Although programs are canceled, a resultant loss is not always incurred; the derivatives, vestiges, and lessons learned from those programs suggest that all is not lost. Putting a value on these aspects could be beneficial in program analysis or termination decisions. For example, the Army Future Combat Systems program was terminated. On the surface, this may appear to be a failure, but many of the technologies and systems developed were used in other Army systems; all was not lost despite the program’s termination.

A comparison of successful and failed DoD programs is another area for future research, examining the root causes underlying the difference between successes and failures. A final area for future research is the role of knowledge management in program and project failure.

Acknowledgment

This article is dedicated to the late Honorable Claude M. Bolton, Jr., who dedicated his life to public service and acquisition excellence. His guidance, assistance, and time in this research endeavor were greatly appreciated and will always be remembered.

References

Allen, M. A., Alleyne, D., Farmer, C., McRae, A., & Turner, C. (2014, October). A framework for project success. Journal of IT and Economic Development, 5(2), 1–17.

Altunok, T., & Cakmak, T. (2010). A technology readiness levels (TRLs) calculator software for systems engineering and technology management tool. Advances in Engineering Software, 41, 769–778.

Belassi, W., & Tukel, O. I. (1996). A new framework for determining critical success/failure factors in projects. International Journal of Project Management, 14(3), 141–151.

Bonett, D. G., & Wright, T. A. (2000, March). Sample size requirements for estimating Pearson, Kendall and Spearman correlations. Psychometrika, 65(1), 23–28.

Chan, D. W. M., & Kumaraswamy, M. M. (1997). A comparative study of causes of time overruns in Hong Kong construction projects. International Journal of Project Management, 15(1), 55–63.

Clarke, A. (1999). A practical use of key success factors to improve the effectiveness of project management. International Journal of Project Management, 17(3), 139–145. Retrieved from http://dx.doi.org/10.1016/S0263-7863(98)00031-3

Cornell University Office for Research on Evaluation. (2009). The eye of the beholder—Multiple perspectives. Retrieved from https://core.human.cornell.edu/research/systems/theory/perspectives.cfm

Defense Acquisition University. (2009). Defense Acquisition University smart shutdown guide. Fort Belvoir, VA: DAU Visual Arts and Press.

Department of Defense. (2013). Performance of the defense acquisition system: 2013 annual report. Washington, DC: Office of the Under Secretary of Defense (Acquisition, Technology, & Logistics).

de Wit, A. (1988). Measurement of project success. International Journal of Project Management, 6(3), 164–170. doi:10.1016/0263-7863(88)90043-9

Devore, J. L. (2012). Probability and statistics for engineering and the sciences (8th ed.). Boston, MA: Brooks/Cole.

Doloi, H. (2013). Cost overruns and failure in project management: Understanding the roles of key stakeholders in construction projects. Journal of Construction Engineering and Management, 139(3), 267–279.

Flyvbjerg, B., Holm, M. K. S., & Buhl, S. L. (2004, January). What causes cost overrun in transport infrastructure projects? Transport Reviews, 24(1), 3–18.

Fogarty, S., Zeitoun, A., Fass, S., Ready, J. D., & McAlpine, S. (2010). Project management 2010: A study of project management in the U.S. Federal Government. Newtown Square, PA: Project Management Institute.

Fox, J. R., & Miller, D. B. (2006). Challenges in managing large projects. Fort Belvoir, VA: DAU Visual Arts and Press.

Frimpong, Y., Oluwoye, J., & Crawford, L. (2003). Causes of delay and cost overruns in construction of groundwater projects in developing countries: Ghana as a case study. International Journal of Project Management, 21(5), 321–326.

Grzegorzewski, P. (2006). The coefficient of concordance for vague data. Computational Statistics & Data Analysis, 51(1), 314–322.

Hicks, J. (1992). Heavy construction estimates, with and without computers. Journal of Construction Engineering and Management, 118(3), 545–560.

Hofbauer, J., Sanders, G., Ellman, J., & Morrow, D. (2011). Cost and time overruns for major defense acquisition programs. Washington, DC: Center for Strategic and International Studies.

Hollander, M., Wolfe, D., & Chicken, E. (2014). Nonparametric statistical methods. Hoboken, NJ: John Wiley & Sons.

International Project Leadership Academy. (2016). 101 Common causes. Retrieved from Calleam Consulting at http://calleam.com/WTPF/?page_id=2338

Kappelman, L. A., McKeeman, R., & Zhang, L. (2006). Early warning signs of IT project failure: The dominant dozen. Information Systems Management, 23(4), 31–36. Retrieved from http://search.proquest.com/docview/214123546?accountid=11243

Kerzner, H. (1987). In search of excellence in project management. Journal of Systems Management, 38(2), 30. Retrieved from http://proxygw.wrlc.org/login?url=http://search.proquest.com/docview/199808640?accountid=11243

Lawrence, P., & Scanlan, J. (2007). Planning in the dark: Why major engineering projects fail to achieve key goals. Technology Analysis & Strategic Management, 19(4), 509–525.

Learnard, C. (2013). Government risks US$148 million for every US$1 billion spent on programs due to ineffective program management. Retrieved from http://www.pmi.org/About-Us/Press-Releases/Government-Risks-Spent-on-Programs-Due-to-Ineffective-Program-Management.aspx

Mankins, J. J. (2009). Technology readiness assessments: A retrospective. Acta Astronautica, 65, 9–10.

Mansfield, N., Ugwu, O., & Doran, T. (1994). Causes of delay and cost overruns in Nigerian construction projects. International Journal of Project Management, 12(4), 254–260.

Mir, F. A., & Pinnington, A. H. (2014). Exploring the value of project management: Linking project management performance and project success. International Journal of Project Management, 32(2), 202–217.

Mulcahy, R. (1999). Top reasons projects fail. Newtown Square, PA: Project Management Institute.

Noether, G. (1987). Sample size determination for some common nonparametric tests. Journal of the American Statistical Association, 82(398), 645–647.

Pinto, J. K., & Mantel, S. J., Jr. (1990). The causes of project failure. IEEE Transactions on Engineering Management, 37(4), 269–276. doi:10.1109/17.62322

Pinto, J., & Prescott, J. K. (1988). Variations in critical success factors over the stages in the project life cycle. Journal of Management, 14(1), 5–18.

Pinto, J. K., & Slevin, D. P. (1987). Critical success factors in effective project implementation. IEEE Transactions on Engineering Management, EM-34(1), 22–27. doi:10.1109/TEM.1987.6498856

Project Management Institute. (2013). A guide to the project management body of knowledge (PMBOK guide). Newtown Square, PA: Author.

Reed, J. (2011, July 19). $46 billion worth of cancelled programs. Retrieved from http://defensetech.org/2011/07/19/46-billion-worth-of-cancelled-programs/#ixzz3Z79191Sp

Rodriguez, S. (2014, December 2). Top 10 failed defense programs of the RMA era. Retrieved from http://warontherocks.com/2014/12/top-10-failed-defense-programs-of-the-rma-era

Roeder, T. (2013). Managing project stakeholders: Building a foundation to achieve project goals. Hoboken, NJ: Wiley.

Sage, A. P., & Rouse, W. B. (2014). Handbook of systems engineering and management (2nd ed.). Hoboken, NJ: Wiley.

Salunkhe, A. A., & Patil, R. S. (2013, September-October). Statistical methods for construction delay analysis. Journal of Mechanical and Civil Engineering, 9(2), 58–62.

Shehu, Z., & Akintoye, A. (2010). Major challenges to the successful implementation and practice of programme management in the construction environment: A critical analysis. International Journal of Project Management, 28, 26–39.

Sprent, P., & Smeeton, N. (2000). Applied nonparametric statistical methods. London, UK: Chapman and Hall.

Straub, J. J. (2015). In search of technology readiness level (TRL) 10. Aerospace Science and Technology, 46, 312–320.

U.S. Government Accountability Office. (2014). Canceled DoD programs: DoD needs to better use available guidance and manage reusable assets (Report No. GAO-14-77). Washington, DC: Author.

Weapon Systems Acquisition Reform Act of 2009, 10 U.S.C., Pub. L. 111-23 (2009).



Biographies

Lt Col Patrick Clowney, USAF (Ret.), is currently a PhD candidate in Systems Engineering at The George Washington University, pursuing research focusing on program management and program termination. He holds a BS in Engineering Sciences from the U.S. Air Force Academy, an MA in Organizational Behavior from The George Washington University, an MPA from the University of Oklahoma, an MA in National Security Studies from Naval Command and Staff College, and an MAS in Airpower Studies from the School of Advanced Air and Space Studies.

(E-mail address: patrick.clowney@gmail.com)

Dr. Jason Dever works as a systems engineer supporting the National Reconnaissance Office, responsible for developing an open information technology framework such that software components can be shared across the government. Previously, he supported numerous positions across the systems engineering life cycle, including requirements, design, development, deployment, and operations and maintenance. Dr. Dever received a bachelor’s degree in Electrical Engineering from Virginia Polytechnic Institute and State University, a master’s degree in Engineering Management from The George Washington University, and a PhD in Systems Engineering from The George Washington University. His teaching interests are project management, systems engineering, and quality control.

(E-mail address: jdever@gwmail.gwu)

Dr. Steven M. F. Stuban is the deputy director of the National Geospatial-Intelligence Agency’s Facility Program Office. He is a Professional Engineer and is Defense Acquisition Workforce Improvement Act Level III certified in the Program Management, Program Systems Engineer, and Facilities Engineering career fields. Dr. Stuban holds a bachelor’s degree in Engineering from the U.S. Military Academy, a master’s degree in Engineering Management from the University of Missouri – Rolla, and both a master’s and doctorate in Systems Engineering from The George Washington University. He is an adjunct professor with The George Washington University and serves on a standing Doctoral Committee.

(E-mail address: Steven.M.Stuban@nga.mil)
