Performance Indexing: Assessing the Nonmonetized Returns on Investment in Military Equipment



Authors: Ian D. MacLeod and Capt Robert A. Dinwoodie, USMC

A prime managerial concern is how to decide which investment alternatives provide the greatest return with the least risk of loss. In civilian organizations, numerous methods and formulas assist these decisions. However, in military and other governmental agencies, these methods often fall short because typical governmental investments do not have a monetary return. The processes underpinning governmental resource allocation and acquisition decisions are often cumbersome and time consuming. In this article, the authors present a unique application of composite indexing methods to compare the return on investment in military equipment. They posit that this analytical method can improve government agencies’ investment decisions for capital equipment, especially when more laborious methods cannot be executed in the allotted time frame.


A prime managerial concern is how to select, among a range of investment alternatives, the option that provides the greatest return with the least risk of loss. In civilian organizations, numerous methods and formulas such as Net Present Value, Return on Investment (ROI), and Return on Assets address these issues (Brealey, Myers, & Allen, 2011). However, in military and other governmental agencies, these methods often fall short because government investments do not offer a monetary return. Rather, they provide intangible returns such as national defense, public safety, goodwill, and other public goods that are difficult, but not impossible, to quantify (Oswalt et al., 2011). As Gonzalez, Perera, and Correa (2003) noted, “the economic valuation of nonmarket goods…is aimed at obtaining a monetary assessment of the welfare or utility gain (or loss) experienced by a certain group of people from the improvement of (or damage to) a nonfinancial asset” (p. 65).

Numerous economic models for calculating ROI exist, and most require only a few basic inputs such as costs, benefits, time horizon, and risks (Bailey, 2015). The benefit of calculating the ROI of government investments is identifying cost savings relative to other alternatives (Bailey, Mazzuchi, Sarkani, & Rico, 2014), but scholarly research into assessing the ROI of complete military systems is lacking. In this article, we present a method that efficiently compares equipment options using a composite index that generates a normalized measure of performance return. By objectively assessing equipment’s ROI, leaders can eliminate low-value and inefficient programs, ultimately saving U.S. taxpayer dollars.

Background and Literature Review

Department of Defense (DoD) budget and acquisition decisions are lengthy processes governed by hundreds of federal laws and practices (Chairman of the Joint Chiefs of Staff, 2015), often producing suboptimal and ineffective results (Government Accountability Office, 2008). Such decisions involve professionals from many government entities and disciplines, as well as politicians who all have different perspectives on the best way to invest scarce public funds. As decision analysts at Headquarters Marine Corps, we saw leadership request analytical support to make considerable performance, capacity, and resource trade-offs quickly during all phases of the Planning, Programming, Budgeting, and Execution system cycle. Often, these decisions are made with an incomplete understanding of an investment’s value (return) because it cannot be objectively quantified. To support resource allocation decisions, our mission was to provide accurate and timely analyses with readily available information.

In fiscal year 2014, the Marine Corps evaluated its strategic equipment investment initiatives for the ground combat and tactical vehicle (GCTV) fleet. Between 2025 and 2035, 85 percent of currently fielded platforms within the GCTV portfolio are projected to reach the end of their service life, necessitating a large influx of capital to replace or sustain GCTV capabilities (Dinwoodie, 2012). In addition, all these investments are competing for dwindling funds within the larger Marine Corps budget due to the 2011 Budget Control Act and predicted reductions in defense spending (Krepinevich, 2012; Liebman, 2013). Inevitably, declining budgets force trade-offs among important projects.

When we analyzed GCTV asset options, it was difficult to compare the costs and returns of different types of equipment as complete systems. As Oswalt et al. (2011) asserted, “a practice or methodology does not exist in the DoD to capture and characterize the future and extended value accruing to users beyond the primary recipients of the investment” (p. 126). Boiling complex military systems down to one metric is difficult for three main reasons: (a) vehicle performance measures typically cannot be aggregated into a single overall measure; (b) opinions about military equipment’s utility differ and are often subjective; and (c) accepted quantitative methods for assessing overall value are time- and resource-intensive.

First, performance data on vehicles are typically measured and quantified in different units of measure for specific characteristics such as fuel consumption in miles per gallon, payload in pounds, and speed in miles per hour. Within the Joint Capabilities Integration and Development System (JCIDS), developing achievable requirements, called Key Performance Parameters (KPP)1 and Key System Attributes (KSA),2 requires establishing Measures of Performance (MOP) and Measures of Effectiveness (MOE). MOPs are “system-particular performance parameters such as speed, payload, range…or other distinctly quantifiable performance features” (Defense Acquisition University, n.d., para. 1). MOEs measure operational capabilities in terms of engagement or battle outcomes (Department of the Air Force, 1996). Elaborate operational testing and evaluation events are created to evaluate these measures (Gentner, Best, & Cunningham, n.d.), and extensive modeling and simulation events evaluate system performance in scenarios and vignettes (Gentner et al., n.d.; Lai & Lamoureux, 2012; Lingel et al., 2012). While these methods assess performance and provide inputs to decisions, they are not structured to create a single, objective measure of ROI. Performance metrics such as MOPs, MOEs, KPPs, and KSAs can be compared across systems, but they cannot be aggregated into a single measure of overall performance without normalization.

Second, qualitative value assessments of military equipment are often subject to biases “because personal cognitive processes inform how individuals understand their environment” (Reynolds, 2015). Consequently, military personnel have different qualitative biases toward equipment based on their specific experiences (Simon, 2004). Conversely, assessing a financial investment’s ROI is simpler because the return is normalized in a common measure: dollars (Stickney, Weil, Schipper, & Francis, 2010).

Third, a significant body of research and accepted practices exists to quantitatively assess qualitative value preferences, but these methods are time- and resource-intensive. The Analytic Hierarchy Process (AHP) is a multilevel, decision-making framework that allows “practitioners to assign numerical values to what are abstract concepts and then deduce from these values decisions to apply to a global framework” (Saaty, 1988, p. 110). This framework allows the judgments of qualified individuals to be aggregated into a group judgment. Based on the intensities of those judgments, an output with explicit rules for allocating resources among competing projects is derived (Saaty, 2013).

Similarly, Multiple-Objective Decision Analysis (MODA) and Value-Focused Thinking are interrelated methodologies that can derive value functions that map performance scores to value metrics (Parnell, Bresnick, Tani, & Johnson, 2013). MODA “quantitatively assesses the trade-offs between conflicting objectives by evaluating an alternative’s contribution to the value measure and the importance of each value measure” (Parnell et al., 2013, p. 196). However, both AHP and MODA can be time consuming and difficult to execute since they typically require significant amounts of senior leadership attention (Triantaphyllou & Mann, 1995).

Given the shortcomings of more elaborate methods, we sought to create a single ROI metric that provides a straightforward, quantitative, and objective evaluation of options. Our method uses a composite index to normalize different measures and aggregate them into a single metric, facilitating holistic comparison of multiple system alternatives. In this case, we used established KPPs and KSAs, or Performance Metrics, as a baseline. We then calculated the relative deviation of multiple platforms’ performance from this baseline. This method, called the Distance to Reference (DTR) technique, “measures the relative position of a given indicator vis-à-vis a reference point” (Organisation for Economic Co-Operation and Development, 2008, p. 28). The disparate measures are normalized by dividing the tested performance value’s distance from the reference point by the reference point. Once all metrics are converted, they are aggregated into a single metric that quantifies the total performance of each alternative; the composite index is simply the performance measured from the reference standard. We call this metric a performance index (PI).
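To make the DTR calculation concrete, the sketch below implements the normalization and aggregation steps described above. It is a minimal illustration, not the Excel model we actually built; the function names and the higher_is_better flag are our own illustrative choices.

```python
def index_metric(measured, reference, higher_is_better=True):
    """Distance to Reference: score a measured value against the
    reference (optimal) value on a 100-point-style scale."""
    if higher_is_better:
        # Fraction of the reference value achieved, as a percentage.
        return (measured / reference) * 100
    # Percentage above the reference is deducted from 100.
    return (1 - (measured - reference) / reference) * 100


def performance_index(indexed_scores, weights=None):
    """Aggregate normalized metric scores into a single composite index."""
    if weights is None:  # default to equal weighting
        weights = [1.0 / len(indexed_scores)] * len(indexed_scores)
    return sum(w * s for w, s in zip(weights, indexed_scores))
```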

By creating a single measure of system performance rather than independently evaluating multiple systems’ Performance Metrics, we directly compared different material solutions against each other and our reference standard simultaneously. We believe this analysis can assist other professional decision analysts, both in military and civilian fields, to improve the quality of actionable information provided to leadership. Further, the graphical displays we created easily communicate complex economic trade-offs among capital equipment options. To illustrate our method, we apply it to a case study on Marine Corps vehicles.

Case Study

Background

We began this analysis while conducting financial analytics and modeling for the 2014 Marine Corps GCTV strategy update (a 25-year capital investment plan). The Marine Corps was considering two investment courses of action (COA) to recapitalize its truck fleet with a mix of newly procured vehicles and sustainment programs for older platforms.

We will call the trucks in this fleet ALPHAs, BRAVOs, and CHARLIEs. The baseline COA consisted of a three-platform mix of approximately 20,000 vehicles. One-third would be next-generation BRAVOs; the next third would be an upgrade of newer existing ALPHAs; and the final third would be CHARLIEs undergoing minimal sustainment actions. This mixed fleet was institutionally preferred because it was believed to provide acceptable performance at lower cost due to the smaller quantity of BRAVOs (the most expensive vehicle). The Marine Corps was also considering a second COA that would eventually replace all ALPHAs and CHARLIEs with BRAVOs by 2040. This COA would initially fund ALPHAs and CHARLIEs, but eventually replace them, one for one, with BRAVOs.

By using our PI methodology, we found that the BRAVO significantly outperforms the ALPHA and CHARLIE in an absolute sense and also provides greater performance per dollar (PP$) when its PI is divided by the procurement cost. We also wanted to explore whether other vehicle mixes could provide higher levels of truck fleet performance at lower cost, because funding constraints often prohibit purchasing desired quantities of exquisite systems. We used the PI to generate four additional COAs that showed higher return is achievable for less cost. The Marine Corps subsequently changed its truck procurement strategy partly because of our analysis.

Assumptions

To facilitate our analysis, we made the following assumptions:

  • BRAVO’s performance represented the ideal benchmark because, absent financial constraints, the only vehicle acquired would be the BRAVO.
  • Performance Metric values have linear returns to scale.
  • Performance Metrics are independent, allowing them to be summed linearly.
  • All Performance Metrics carry equal value, or weight, relative to the vehicles’ total performance.

Research Questions

By generating a single ROI for each platform, we could do the following:

  • RQ1: Compare platforms as complete systems, not just individual characteristics between systems (X vs. Y, not just the payload of X vs. the payload of Y).
  • RQ2: Determine the average PP$ spent for each vehicle alternative and each vehicle mix COA.
  • RQ3: Create new COAs that achieve equal or greater truck fleet performance than the baseline COAs at different funding levels, and identify, quantify, and evaluate the risks in all COAs.

Methodology

Data, variables, and modeling. We gathered institutionally approved life-cycle costs and performance data for all trucks. We conducted all our analysis, modeling, and additional COA development in Microsoft Excel.

Defining Performance Metrics. Table 1 shows the six notional primary Performance Metrics for all three trucks (T). AR1 and AR2 are quantitative measures of vehicle armor. Payload (D) is the vehicle’s useful carrying capacity measured in pounds. Mobility (M) is an index value measuring the vehicle’s ability to maneuver in soft soil. Reliability (R) is the mean miles between operational mission failures. Power generation (G) is the number of gallons per hour required to produce 20 kilowatts of electricity. For both M and G, a lower number is better. Because each Performance Metric is expressed in different units of measure, they cannot be combined into an aggregate score without normalization.
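Before normalizing, it helps to record each metric’s unit and preferred direction. The lookup table below is one way to encode these definitions in code; it is illustrative scaffolding only, and treating the two armor measures as higher-is-better is our assumption.

```python
# Performance Metric definitions: unit and whether a higher value is better.
PERFORMANCE_METRICS = {
    "AR1": {"unit": "armor measure", "higher_is_better": True},
    "AR2": {"unit": "armor measure", "higher_is_better": True},
    "D":   {"unit": "payload, lb", "higher_is_better": True},
    "M":   {"unit": "soft-soil mobility index", "higher_is_better": False},
    "R":   {"unit": "mean miles between failures", "higher_is_better": True},
    "G":   {"unit": "gal/hr to produce 20 kW", "higher_is_better": False},
}
```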

Normalizing Performance Metrics. Because BRAVO’s performance was the standard, we set its Performance Metrics as the variable optimal, O_i, from which all deviations are assessed. Other applications of this methodology could choose a different O_i, such as the acquisition threshold or objective requirements for KPPs and KSAs. All three trucks’ Performance Metrics are indexed against the six optimal Performance Metrics as a percentage deviation in actual performance.

The general equation for indexing each Performance Metric (PM) when a higher value is preferred was:

Indexed PM = (PM / O_i) * 100

For example, the optimal payload is 3,500 lb and the ALPHA payload is 1,500 lb. Therefore,

ALPHA Indexed D = (1,500 / 3,500) * 100 ≈ 43

This shows that ALPHA delivers approximately 43 percent of our optimized payload (D). For consistency, we needed to index mobility (M) and power generation (G) differently because a lower score for those metrics indicates better performance. To do this, we calculated the index so that a measured Performance Metric’s percentage difference above the optimal value was an index score below 100.

The general equation for indexing each Performance Metric when a lower value indicates a better score is:

Indexed PM = (1 - (PM - O_i) / O_i) * 100

For example, the optimal (M) is 25 and ALPHA’s M is 30. Therefore,

ALPHA Indexed M = (1 - (30 - 25) / 25) * 100 = 80

This shows that ALPHA delivers 80 percent of our optimal (M). Table 2 shows the indexed performance characteristics, including the total performance of each platform. The total platform performance (TPP) column is the sum product of all indexed Performance Metrics.
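Both worked examples above can be reproduced directly in a few lines (a sketch; the payload and mobility inputs are the only values taken from the text):

```python
# Higher-is-better metric: ALPHA payload (1,500 lb) vs. the 3,500 lb optimal.
alpha_indexed_d = (1500 / 3500) * 100          # ~42.9
# Lower-is-better metric: ALPHA mobility (30) vs. the optimal (25).
alpha_indexed_m = (1 - (30 - 25) / 25) * 100   # 80.0
print(round(alpha_indexed_d, 1), round(alpha_indexed_m, 1))
```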
Scaling and weighting performance variables. As shown in Table 2, TPP is an absolute measurement scale; as such, it is difficult to interpret and gauge the percentage difference between platforms. Scaling TPP to a 100-point scale makes the metric easier to understand. Including the weights (W) of the indexed Performance Metrics allows their relative importance to affect the TPP score according to organizational value. In this instance, we did not have institutional value assessments for the Performance Metrics, so we let all variables carry an equal weight of approximately 16.7 percent (100/6).

The scaled TPP of each vehicle is the sum product of its indexed Performance Metrics. Using the equal Performance Metric weights (16.7 percent), the TPP equation is:

TPP = W_AR1 * Indexed AR1 + W_AR2 * Indexed AR2 + W_D * Indexed D + W_M * Indexed M + W_R * Indexed R + W_G * Indexed G
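In code, the scaled TPP is a straightforward sum-product. The sketch below uses the equal weights described above; the indexed scores for ALPHA are placeholders chosen to land near its published TPP of 37, not the actual Table 2 values.

```python
METRICS = ("AR1", "AR2", "D", "M", "R", "G")
EQUAL_WEIGHT = 1 / len(METRICS)  # ~16.7 percent per metric

def total_platform_performance(indexed, weights=None):
    """Weighted sum of indexed Performance Metric scores; with equal
    weights, the optimal platform (all scores 100) yields TPP = 100."""
    weights = weights or {m: EQUAL_WEIGHT for m in METRICS}
    return sum(weights[m] * indexed[m] for m in METRICS)

# Placeholder indexed scores (illustrative only, not Table 2 values):
alpha = {"AR1": 30, "AR2": 25, "D": 42.9, "M": 80.0, "R": 20, "G": 24}
print(round(total_platform_performance(alpha)))  # 37 with these inputs
```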

Table 3 shows the scaled and weighted TPP scores, while Figures 1 and 2 depict the DTR method and show how the scores are plotted. Figure 2 shows the TPP scores for the three alternatives against the optimal. It is important to note that CHARLIE, even with additional maintenance funding, delivers less payload than required: this Performance Metric returns negative value, which is not a computational error.

Figure 1. DISTANCE TO REFERENCE METHOD

Figure 2. COMPARISON OF TPP SCORES

Determining the truck fleet’s cumulative performance return. After deriving the TPPs for each platform, we calculated each COA’s impact on truck-fleet performance by year and over the entire investment horizon. Mathematically, this is vehicle quantity multiplied by TPP. Total cumulative performance is how much value a COA generates within the Marine Corps truck fleet.

We multiplied the projected yearly fleet mix inventory (I), which is the planned procurement quantity plus existing inventory, by TPP_A,B,C to obtain a yearly COA Performance Point Score (CPPS_Y) for each year between 2015 and 2040. CPPS_Y is the aggregate return of the entire truck fleet in one year. We then summed all the years to obtain a total score for each COA (CPPS_T). CPPS_T represents how much total value each COA could provide across the entire investment horizon (2015–2040). The equations for CPPS_Y and CPPS_T are:

CPPS_Y = ∑ (TPP_A,B,C * I_A,B,C)

CPPS_T = ∑ CPPS_Y, summed over Y = 2015 to 2040
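A sketch of this fleet-level roll-up follows. The TPP values are taken from our findings below; the single-year fleet mix is illustrative, loosely reflecting the baseline COA’s one-third split of roughly 20,000 vehicles.

```python
TPP = {"ALPHA": 37, "BRAVO": 100, "CHARLIE": 14}  # from the Findings section

def cpps_year(inventory):
    """Yearly COA Performance Point Score: sum of quantity x TPP."""
    return sum(TPP[platform] * qty for platform, qty in inventory.items())

def cpps_total(yearly_inventories):
    """Total score across the investment horizon (2015-2040)."""
    return sum(cpps_year(inv) for inv in yearly_inventories.values())

# Illustrative single-year fleet mix (quantities are notional):
print(cpps_year({"ALPHA": 6_700, "BRAVO": 6_700, "CHARLIE": 6_600}))
```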

Determining cost. To develop total cost, we multiplied the yearly inventory (I_A,B,C) for all platforms by the projected maintenance and procurement actions in a given year. We then calculated the yearly costs (C_Y) and total costs (C_T) for each COA. Table 4 lists the variables, factors, and costs associated with the truck fleet. All costs are in thousands of calendar year 2014 dollars. The “New” column shows the cost of procuring a single BRAVO and/or ALPHA. The columns for “SLEP” (Service Life Extension Program) show the cost of conducting a major overhaul of the CHARLIE fleet and the percentage of the inventory overhauled each year. The columns under “IROAN” (Inspect or Repair Only as Needed) show the estimated cost and percentage of each fleet scheduled for IROAN maintenance actions each year.3 C_N is multiplied by the yearly acquisition quantities. For brevity, Table 5 lists only the beginning and final acquisition quantities.


The platform cost equations for BRAVO and ALPHA are:

BRAVO C_Y = ∑ (C_N * B_Qty) + (C_IR * I_IR)
ALPHA C_Y = ∑ (C_N * A_Qty) + (C_IR * I_IR)

The platform cost equation for CHARLIE, which does not have any new procurement, is:

CHARLIE C_Y = ∑ (C_S * I_S) + (C_IR * I_IR)
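The cost logic can be sketched the same way. Inputs follow Table 4’s convention of thousands of CY2014 dollars; apart from BRAVO’s $435,000 APUC cited later, the numeric arguments below are placeholders.

```python
def platform_cost_year(c_new=0.0, qty_new=0, c_slep=0.0, inv_slep=0,
                       c_iroan=0.0, inv_iroan=0):
    """Yearly platform cost: new procurement (BRAVO, ALPHA), SLEP
    overhauls (CHARLIE), and IROAN actions (all platforms), in $K."""
    return c_new * qty_new + c_slep * inv_slep + c_iroan * inv_iroan

# BRAVO example: APUC of $435,000 (435 in $K) times a notional buy of 100,
# plus notional IROAN actions.
print(platform_cost_year(c_new=435.0, qty_new=100, c_iroan=50.0, inv_iroan=20))
```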

The total cost (C_T) for a COA is:

C_T = ∑ (BRAVO C_Y + ALPHA C_Y + CHARLIE C_Y), summed over Y = 2015 to 2040
Table 6 shows the total cost for each COA from 2016 to 2040.

Calculating PP$. The C_T of a COA is the sum of all procurement and depot-level sustainment actions from 2015 to 2040. To create a normalized return per dollar spent for each COA, we divided CPPS_T by C_T. We call this metric performance points per dollar, or PP$. It provides a normalized measure of the performance return for each dollar spent, both on an individual truck and on the entire fleet capability. A higher value is better than a lower one because it indicates greater performance for less money.

The PP$ equations for a COA and for an individual truck are:

COA PP$ = CPPS_T / C_T

Truck PP$ = TPP / unit cost (APUC or SLEP cost)
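A trivial sketch of both calculations, included for completeness:

```python
def coa_pp_per_dollar(cpps_total, cost_total):
    """Performance points per (thousand) dollars for a whole COA."""
    return cpps_total / cost_total

def truck_pp_per_dollar(tpp, unit_cost):
    """Per-vehicle analogue: TPP divided by APUC or SLEP cost ($K)."""
    return tpp / unit_cost
```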

Analysis

Truck Investment COAs

We began the analysis with only two COAs in the truck portfolio. However, we created four additional COAs to evaluate cost and performance changes by varying the fleet mix (see Table 6). Specifically, we constructed a null COA to assess the truck fleet’s value without BRAVOs and three additional COAs to explore the potential trade-offs among the other alternatives. The primary factor driving new COA development was leadership’s desire to understand how many more BRAVOs could be bought if funding planned for ALPHAs and CHARLIEs were redirected to BRAVO procurement instead.

Findings

By using our method, we evaluated our three research questions.

For RQ1, we found that the TPPs are as follows:

  1. TPP_B = 100
  2. TPP_A = 37
  3. TPP_C = 14

ALPHA is only about one-third as capable as BRAVO, and CHARLIE only about one-seventh as capable.

For RQ2, by dividing TPPs by the APUC (Average Procurement Unit Cost)4 and SLEP costs listed in Table 4, we calculated their performance points per (thousand) dollars:

BRAVO PP$ = 1.38
ALPHA PP$ = 1.27
CHARLIE PP$ = 0.56

These metrics imply that every thousand dollars spent on the BRAVO returns approximately 1.38 performance points; on the ALPHA, 1.27; and on the CHARLIE, 0.56. In this example, spending money on ALPHAs or CHARLIEs does not offer the highest return; funding should thus be spent on BRAVOs instead. However, this analysis also showed that even though overall ALPHA performance is less than that of BRAVO, its low APUC_A relative to APUC_B allows for the creation of a COA with a similar level of PP$ at less cost.

We also assessed the sensitivity of PP$ to changes in the APUC. We found that when the APUC for the ALPHA falls below $160,200, the BRAVO no longer offers the highest return per dollar. Figure 3 shows the performance and cost curves for BRAVO and ALPHA. These curves identify the change in PP$ relative to the APUC, as well as the inflection points where changes in the APUC reverse our previously stated best-value assessment. The Y-axis is TPP per $1,000 and the X-axis is a range of APUCs. BRAVO’s performance is superior in absolute terms, but PP$ is sensitive to differences in unit cost. The current unit cost estimates for ALPHAs and BRAVOs are roughly proportional to their relative absolute performance levels, which explains why the PP$ values of the two options are close. However, changes in those unit cost estimates would change the PP$ even if absolute performance does not change.

Figure 3. PERFORMANCE AND COST CURVES FOR BRAVO AND ALPHA

Holding all other variables constant, we can also see that the BRAVO would not offer the highest PP$ if APUC_B exceeds $474,400. A rise of $39,400, or 9 percent, in APUC_B makes the ALPHA the better alternative per dollar. The change in PP$ due to the change in APUC_B is shown below:

BRAVO PP$ = 1.26
ALPHA PP$ = 1.27

Conversely, if APUC_B is held constant at $435,000, the APUC for the ALPHA must drop by approximately 8 percent, to $160,200, for it to become the better option in terms of PP$.
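As a back-of-the-envelope check on these thresholds: for fixed performance, PP$ scales inversely with unit cost, so BRAVO’s break-even APUC is approximately the point at which its PP$ falls to ALPHA’s level. The small gap from the published $474,400 reflects rounding in the published PP$ figures.

```python
bravo_ppd, bravo_apuc = 1.38, 435_000  # published PP$ and APUC for BRAVO
alpha_ppd = 1.27                       # published PP$ for ALPHA

# PP$ ~ k / APUC for fixed performance, so solve alpha_ppd = k / apuc_break.
breakeven_apuc = bravo_apuc * bravo_ppd / alpha_ppd
print(round(breakeven_apuc))  # ~472,700, near the article's $474,400
```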
Finally, for RQ3, we evaluated all COAs’ performance in four ways: (a) performance return each year from 2015 to 2040, CPPS_Y; (b) performance levels of each COA, CPPS_T; (c) cost of each COA, C_T; and (d) average trade-off between COAs, PP$ (see Table 7). This absolute scale shows the magnitudes of differences between the COAs in terms of cost and performance return. For example, we can see that COA6 provides twice the total performance of COA3 for an additional $3.3 billion.

Figure 4 illustrates the information presented in Table 7. The X axis shows the total performance points over the investment period; the Y axis shows the total cost of each COA in billions of constant year 2014 dollars from 2015 to 2040; and the size of each bubble indicates the PP$.

Figure 4. TRUCK FLEET’S COA PERFORMANCE, COST, AND PERFORMANCE PER DOLLAR

Using this figure to evaluate the COAs in our example quickly leads to the elimination of COA1–4, because COA5 and COA6 each offer greater capability, lower cost, or both. This presents leadership with a choice between maximizing overall performance (COA6) and maximizing PP$ (COA5). The efficient leadership decision thus weighs the value of the extra capability against the opportunity cost of attaining it.

Figure 5 shows the change in average PP$ incurred by moving among alternatives. For instance, moving a dollar from COA4 to COA2 buys 0.5 fewer performance points than keeping that dollar in COA4. Conversely, moving a dollar from COA2 to COA4 buys 0.5 more performance points. Overall, COA5 is the best, as moving from COA5 to any alternative reduces the average PP$. In addition, moving from COA6 to any other COA except COA5 reduces the net benefit. Hence, COA5 and 6 should be the focus of leadership’s decision making, and the original COAs (COA1 and 2) should be abandoned.

The timing of major investment decisions is another important factor to consider. Our methodology can be used to evaluate performance over time. Figure 6 plots the CPPS_Y of COA1–6 from 2016 to 2040. This graphical representation shows each COA’s benefit stream per year throughout the investment horizon. Several options provide more performance earlier, and at less cost, than the current baseline plan.

Figure 6. TRUCK FLEET COAS’ PERFORMANCE BY YEAR

Decision point 1 shows where leadership should abandon COA1–4. Decision point 2 shows when a choice between COA5 and COA6 should be made. This graphic highlights when major decisions are required before reductions in capability may appear, and it focuses discussions on risks, trade-offs, and mitigation plans across the time horizon. Leaders can assess differences in each COA’s performance by year, over time, and against total cost. Aggregate cost and fleet performance trade-offs can be evaluated simultaneously.

Additional Applications of the Methodology

Source Selection Decisions

Using the PI method during formal source-selection decisions could allow all potential platforms under consideration to be normalized and evaluated objectively with (a) a common performance scale, and (b) a single metric based on each platform’s ROI relative to performance standards. PP$ shows each system’s normalized performance return per dollar and its stream of benefits over time, allowing for direct comparisons against all competitors. In addition, as we have shown, this methodology defines the APUC range within which a given system is the preferred option in terms of PP$. The PI method can also facilitate objective strategic discussions about how different systems affect the projected fleet’s performance.

This methodology is not limited to the DoD or the military acquisition process. Any entity that makes capital investments that do not produce a monetized return could use it to compare alternatives objectively. For example, municipal governments and public safety agencies procuring emergency equipment could benefit from this methodology, especially when there is no formally defined or rigorous acquisition process and leaders simply want to know whether they are getting “the most bang for the buck.” The basic requirement is to understand the desired goals, objectives, and performance. If an investment has required performance standards, calculating each alternative’s deviation from those standards is straightforward with common software.

Limitations of the Methodology

As presented, the method has several areas for improvement. First, the assumption of linear returns to scale may overstate the scores for each platform. This effect can be mitigated by including weights that reflect institutional value functions; MODA and AHP are effective methods for developing such value functions for each Performance Metric for inclusion in the PI calculation. In addition, the weights of each Performance Metric should accurately reflect their value contribution to the system’s total performance.

Conclusions

We used disparate performance measures to calculate a composite PI (an ROI proxy) to analyze the nonmonetized return of three trucks. We then evaluated two institutionally directed COAs for truck procurement and developed additional COAs with different cost and performance trade-offs. We found that the existing COAs provide less performance at more cost than the alternatives. One COA we created represented the most efficient use of fiscal resources, providing the second-highest level of performance at almost half the cost of the other COAs. This analysis shows that the Marine Corps can return more performance for each dollar spent. We recommended that the Marine Corps reevaluate its truck fleet options and consider alternative COAs. Based in part on this analysis, the Marine Corps shifted procurement plans in the GCTV fleet.


This method also has broad analytical applicability. First, its power lies in its ability to aggregate disparate performance measurements into a common scale. The PI method is objective: it is simply a reflection of institutional requirements and tested system performance. By removing subjective bias, equipment investment decisions can focus on salient issues (e.g., cost, performance) rather than on stakeholders’ differing value perceptions. Second, as an ROI metric, the PI highlights areas of opportunity and loss. Options that spend funding inefficiently can be eliminated early, allowing subsequent analytical efforts to focus on the alternatives returning the highest institutional value. This improves decision quality and speed. Finally, using the PI method and the graphics in this article, complex economic, cost, and performance information can be modeled quickly, supporting changing strategies. Altering variables (e.g., cost, vehicle mix, time) allows leaders to see the impacts their ideas have on fleet cost and performance and to assess the associated risks. All these factors make the PI method an effective way to determine the ROI of nonmonetized investments.



References

Bailey, R. F., Mazzuchi, T. A., Sarkani, S., & Rico, D. F. (2014, October). A comparative analysis of the value of technology readiness assessments. Defense Acquisition Research Journal, 21(4), 826–852. Retrieved from http://dau.dodlive.mil/2014/11/13/a-comparative-analysis-of-the-value-of-technology-readiness-assessments

Bailey, R. U. (2015). A risk analysis tool for evaluating ROI of TRA for major defense acquisition programs (Doctoral dissertation). Retrieved from ProQuest at http://gradworks.umi.com/36/68/3668434.html. (UMI Dissertation Publishing No. 3668434)

Brealey, R. A., Myers, S. C., & Allen, F. (2011). Principles of corporate finance. New York, NY: McGraw-Hill.

Chairman of the Joint Chiefs of Staff. (2015). Joint capabilities integration and development system (JCIDS) (CJCSI 3170.01I). Retrieved from http://acqnotes.com/wp-content/uploads/2014/09/CJCS-Instruction-3170-01I-Joint-Capabilities-Integration-and-Development-System-23-Jan-15.pdf

Defense Acquisition University. (n.d.). Glossary of defense acquisition acronyms and terms [Online glossary]. Retrieved from https://dap.dau.mil/glossary/Pages/2237.aspx

Department of the Air Force. (1996). Developmental test and evaluation (AFI 99-101). Washington, DC: Secretary of the Air Force.

Dinwoodie, R. (2012). Ground combat & tactical vehicle lifecycle crisis [Unpublished internal analysis]. Washington, DC: Headquarters Marine Corps.

Gentner, F. C., Best, P. S., & Cunningham, P. H. (n.d.). Sources of measures of effectiveness (MOEs) for assessing human performance in aeronautical systems. Crew System Ergonomics Information Analysis Center (CSERIAC). Retrieved from http://www.ijoa.org/imta96/paper21.html

Gonzalez, S. G., Perera, A. G., & Correa, F. A. (2003). A new approach to the valuation of production investments with environmental effects. International Journal of Operations and Production Management, 23(1), 62–82.

Government Accountability Office. (2008). Defense acquisitions: Assessments of selected weapon programs (Report No. GAO-08-467SP). Retrieved from http://www.gao.gov/products/GAO-08-467SP

Hagan, G. (2009). Glossary of defense acquisition acronyms and terms (13th ed.). Retrieved from https://dap.dau.mil/glossary/Pages/Default.aspx

Krepinevich, A. F. (2012, November–December). Strategy in a time of austerity: Why the Pentagon should focus on assuring access. Foreign Affairs, 91(6), 58–69.

Lai, G., & Lamoureux, T. (2012). Development of measures of effectiveness and performance from cognitive work analysis products. CAE Professional Services. Retrieved from http://www.dtic.mil/dtic/tr/fulltext/u2/a601721.pdf

Liebman, J. B. (2013). The deterioration in the U.S. fiscal outlook, 2001–2010. In J. R. Brown (Ed.), Tax policy and the economy (Vol. 27, pp. 1–18). Chicago, IL: University of Chicago Press.

Lingel, S., Menthe, L., Alkire, B., Gibson, J., Grossman, S., Guffey, R., …Wu, E. (2012). Methodologies for analyzing remotely piloted aircraft in future roles and missions. Santa Monica, CA: RAND.

Organisation for Economic Co-Operation and Development. (2008). Handbook on constructing composite indicators: Methodology and users guide. Paris, France: Author.

Oswalt, I., Cooley, T., Waite, W., Waite, E., Gordon, S., Severinghaus, R., Feinberg, J., & Lightner, G. (2011, April). Calculating return on investment for U.S. Department of Defense modeling and simulation. Defense Acquisition Research Journal, 18(2), 121–143. Retrieved from http://www.dau.mil/publications/DefenseARJ/Pages/Archives/arj58.aspx

Parnell, G. S., Bresnick, T. A., Tani, S. N., & Johnson, E. R. (2013). Handbook of decision analysis. Hoboken, NJ: Wiley and Sons.

Reynolds, P. (2015). Past failures and future problems: The psychology of irregular war. Small Wars and Insurgencies, 26(3), 446–458. (doi:10.1080/09592318.2013.866426)

Saaty, T. L. (1988). What is the analytic hierarchy process? Mathematical Models for Decision Support, 48, 109–121. (doi:10.1007/978-3-642-83555-1_5)

Saaty, T. L. (2013). The modern science of multicriteria decision making and its practical applications: The AHP/ANP approach. Operations Research, 61(5), 1101–1118.

Simon, C. J. (2004). A case study of jumping from the C-17 and the C-130: A better platform for paratroopers? Retrieved from http://www.dtic.mil/dtic/tr/fulltext/u2/a455930.pdf

Stickney, C. P., Weil, R. L., Schipper, K., & Francis, J. (2010). Financial accounting: An introduction to concepts, methods, and uses (Vol. 13). Mason, OH: South-Western Cengage Learning.

Triantaphyllou, E., & Mann, S. H. (1995). Using the analytic hierarchy process for decision making in engineering applications: Some challenges. International Journal of Industrial Engineering: Applications and Practice, 2(1), 35–44.

Endnotes

1 “Key Performance Parameters (KPP) are those attributes or characteristics of a system that are considered critical or essential to the development of an effective military capability and that make a significant contribution to the characteristics of the future joint force. A KPP normally has a threshold representing the minimum acceptable value achievable at low-to-moderate risk and an objective representing the desired operational goal, but at higher risk in cost, schedule, and performance” (Hagan, 2009, p. B-100).
2 “Key System Attributes (KSA) are the attributes considered most critical or essential for an effective military capability, but not selected as Key Performance Parameters (KPP)” (Hagan, 2009, p. B-101).
3 “A service life extension program or SLEP is a major overhaul of a vehicle that incorporates reengineering, modification and other activities with the goal of extending the useful life of the vehicle. Alternatively, an Inspect or Repair Only as Needed, or IROAN, is a much more limited program that only replaces components as required and does not feature reengineering or modification” (Hagan, 2009).
4 “Average Procurement Unit Cost (APUC) is calculated by dividing total procurement cost by the number of articles to be procured” (Hagan, 2009, p. B-15).

Author Biographies

Mr. Ian D. MacLeod is a senior national security analyst at the Johns Hopkins University Applied Physics Laboratory. He previously worked for the Center for Naval Analyses supporting the U.S. Marine Corps at Marine Aviation Weapons and Tactics Squadron One; I and III Marine Expeditionary Forces; and the deputy commandant for Programs and Resources. He holds an MA in Applied Economics from the Johns Hopkins University and a BA in Economics and International Relations from Syracuse University.

(E-mail: ian.macleod@jhuapl.edu)

Capt Robert A. Dinwoodie, USMC, has 14 years of business and active duty military experience. He is currently a regional manager for Expeditors International, designing corporate global supply chains. He previously served as a strategic investment analyst at Headquarters Marine Corps. Capt Dinwoodie earned an MS in Defense Systems Analysis from the Naval Postgraduate School (NPS), a BA in History from DePauw University, and a Graduate Certificate in Cost Estimating & Analysis from NPS; he also holds a Six Sigma Black Belt.

(E-mail: robert.dinwoodie@outlook.com)
