Initial Capabilities Documents: A 10-Year Retrospective of Tools, Methodologies, and Best Practices



Authors: Maj Bryan D. Main, USAF, Capt Michael P. Kretser, USAF, Joshua M. Shearer, and Lt Col Darin A. Ladd, USAF

The Joint Capabilities Integration and Development System (JCIDS) is 10 years old and ripe for review. A central output of the JCIDS process is the Initial Capabilities Document (ICD), used by the Department of Defense to identify gaps in a functional capability area and to define the new capabilities required. The research team analyzed 10 years of ICDs to identify methods and trends. The team found that several methodologies were favored and that a convergence emerged in format and necessary content. Additionally, potential shortfalls in current best practices of interest to implementers and decision makers are identified. Guidelines and best practices are presented to help authors create more effective, concise, and complete ICDs.


It may come as a surprise to many acquisition practitioners that the historically unstable formal written procedures and processes that embody the Defense Acquisition System and the Joint Capabilities Integration and Development System (JCIDS) are now over 10 years old. During this time, the Department of Defense (DoD) has published significant revisions and updates to the JCIDS-related documents, including Department of Defense Instruction (DoDI) 5000.02, Operation of the Defense Acquisition System, and the Joint Capabilities Integration and Development System Manual (DoD, 2013; Joint Requirements Oversight Council [JROC], 2012). The current system’s longevity may be partially attributable to its utilization of modern management approaches, further enabled by a slow convergence of the Joint Strategic Planning System set in motion by the Goldwater-Nichols Act (Goldwater-Nichols, 1986). With its focus on Joint development and deconfliction of capabilities, JCIDS uses a portfolio management approach and streamlined documentation to elevate user requirements relatively quickly and vet them against current capabilities. Further, its emphasis on knowledge management ensures that all stakeholders can view the process and its outcomes as the key documents percolate through the JCIDS process.

Early analysis of the JCIDS process by the U.S. Government Accountability Office (GAO, 2008) identified variable product quality. Attempts were made at creating user’s guides to improve document quality (JROC, 2012; Joint Chiefs of Staff [JCS], 2009); however, these documents did not fully address the analysis techniques contained therein. As a key component of process quality, the ability to select, use, and report an appropriate analysis technique is an item of interest for authors, stakeholders, and portfolio managers. Therefore, this effort reviewed the content, tools, and methodologies recorded in the past 10 years’ Initial Capabilities Documents (ICDs) created as a part of the JCIDS process.

As one of the first products created in JCIDS, ICDs are important because they validate requirements derived through an analysis of current capabilities and capability gaps. Additionally, they are signed by senior Service members and form the basis for program acquisitions. Further, given their recommended brevity, it is important that ICDs contain the correct level of detail to identify the key assumptions, limitations, and boundary conditions contained or referenced in their analyses. A lack of analytical clarity at this stage may lead to misdirected resources later in the process (GAO, 2008).

Of particular interest were the methodologies that implementers and decision makers chose in developing ICDs. Through this process, it was possible to identify a series of best practices and guidelines to improve ICD quality, and thus aid in the evolution of JCIDS.

Background

The JCIDS process was created in response to a 2002 memorandum from the Secretary of Defense to the Vice Chairman of the Joint Chiefs of Staff to study alternative ways to evaluate requirements (JCIDS, 2014). At the time of this memorandum, the governing document was Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 3170.01B, Requirements Generation System (CJCSI, 2001). The purpose of JCIDS was to streamline and standardize the methodology for identifying and describing capability gaps across the DoD, and to engage the acquisition community early in the process while improving coordination between departments and agencies.

The GAO’s (2008) report indicated that “the JCIDS process has not yet been effective in identifying and prioritizing warfighting needs from a joint, department-wide perspective” (GAO, 2008, para. 1). This report outlined the shortfalls and gaps in the JCIDS process in its 5-year life span, furthering the redesign of the process. Additionally, the report outlined several recommendations for the DoD, including developing a more analytical approach within JCIDS to better prioritize and balance capability needs as well as allocating the appropriate resources for capabilities development planning.

The current documents for both creating and implementing ICDs are the Capabilities-Based Assessment (CBA) User’s Guide and the JCIDS Manual, released in 2009 and 2012, respectively, as part of the effort to address the issues found by the 2008 GAO report. The impact of these documents on improving the JCIDS process has yet to be determined, but is discussed in this article.

Focus and Methodology

The research team used the Knowledge Management/Decision Support (KM/DS) system to examine the JCIDS process. The KM/DS Web site is the repository for the documents created through or as a byproduct of the JCIDS process. Included in this study are ICDs, Joint Capabilities Documents (JCDs), Capability Development Documents, and other supporting documents that are a part of this process. To focus this research, the team specifically studied the core documents—ICDs and JCDs—to better understand what kinds of methodologies are being implemented by the various Services to convey the gap information under study.

Of those entered in the KM/DS system, over 1,000 ICDs and JCDs were in various phases of the JCIDS process covering the period January 1, 2002, to December 31, 2012. The team decided to focus on only those documents that were considered ‘Validated’ and ‘Final,’ with the expectation of little to no revision remaining for these documents in the near future. These criteria reduced the number of the documents under review to 225 ICDs/JCDs. The team of four researchers split the ICDs/JCDs evenly across year and type to ensure similar exposure to the complete population available. At the completion of the review, the researchers met and discussed commonalities and anomalies found in documents of interest, and in the population in general. For purposes of this article, the term ICD will be used to describe both the ICDs and JCDs unless specified otherwise.
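As a rough sketch of this selection step, the logic can be expressed in a few lines of pandas; the file name and column names below are hypothetical, since the KM/DS export format is not described here.

```python
import pandas as pd

# Hypothetical KM/DS export; file and column names are illustrative only.
docs = pd.read_csv("kmds_export.csv", parse_dates=["validation_date"])

# Keep only ICDs/JCDs marked Validated or Final within the study window.
mask = (
    docs["doc_type"].isin(["ICD", "JCD"])
    & docs["status"].isin(["Validated", "Final"])
    & docs["validation_date"].between("2002-01-01", "2012-12-31")
)
sample = docs[mask]  # corresponds to the 225 documents described above
print(len(sample), "documents selected for review")
```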

The team formulated an initial set of generally accepted methodologies as a baseline to identify, categorize, and sort the methodologies currently used within the ICDs. The team did not consider this set exclusively, but allowed the list to expand to detect emergent techniques.

Additionally, an analysis was performed on key metrics and areas of interest to see if there were any correlations or observations that could be made about various components of the ICDs. These attributes were chosen as they were key areas of interest or sections in the Capabilities-Based Assessment (CBA) User’s Guide and the JCIDS Manual. By examining these attributes, the team was able to determine to what extent past ICDs have followed current guidance. Some of the components considered in the analysis can be found in Table 1.
Ultimately, it was the intention of the research team to observe and report on best practices for future ICD writers. As such, we focused on finding those ICDs that best embodied the intentions found in the Capabilities-Based Assessment (CBA) User’s Guide (JCS, 2009) and the JCIDS Manual (JROC, 2012).

Table 1. Attributes for Analysis

Attributes
ACAT Level | DOTMLPF-P Analysis | Measures of Effectiveness | Threshold Values Defined
Lead FCB | Formatting | UJTL Traceability | Objective Values Defined
Supporting FCBs | Analysis Described | Number of Gaps | Number of Pages
Current Milestone | Capabilities Defined | Gap Prioritization | Attributes Listed

Results

The team examined several ICD characteristics that are presented in the JCIDS Manual and were expected to appear in most ICDs (Figure 1). The team found that many of the features prescribed by the JCIDS Manual were not present in the majority of ICDs reviewed. Fewer than half of the ICDs described what analysis was done to identify capability gaps. Over 90 percent of the ICDs reviewed defined a specific capability, yet some ICDs did not have a well-defined end state.

Figure 1. ICD Content Analysis

Note. AoA = Analysis of Alternatives; MOEs = Measures of Effectiveness; UJTL = Universal Joint Task List

Nearly half of the ICDs analyzed defined their Measures of Effectiveness (MOEs), described their analysis, prioritized gaps and capabilities, and defined minimum values for required capability attributes. The presence of these characteristics provides additional information to the reader and improves the fidelity of the ICD; their absence leaves commonly questioned areas open for discussion. The 2012 JCIDS Manual requires threshold values, but the description of the analysis is left to the document creator, and many choose not to describe it. In fact, the manual states a preference to “avoid unnecessary rigor and time-consuming detail.” Nevertheless, applying and documenting some level of rigor seems necessary and useful for recording how gaps were identified and showing how the capability requirements were justified. The prioritization of gaps and capabilities helps decision makers understand which components are critical when resources are insufficient to address the full capability gap, allowing partial capability fulfillment or the closure of a subset of smaller gaps.

The inclusion of an Analysis of Alternatives (AoA) is an interesting additional piece of content, as the AoA is no longer part of the Capabilities-Based Assessment (CBA) User’s Guide and is conducted in subsequent stages of the JCIDS process. Nearly one-third of all ICDs included some form of AoA, whether a brief paragraph or full documentation in attachments or enclosures. Most documents that contained a complete AoA were from the first 5 years, a period in which the content of ICDs was still in flux. Including an AoA presupposes a preferred materiel solution, something not within the scope of documenting a capability gap.

Also, less than 25 percent of the ICDs surveyed contained objective values for the capabilities to be met. While it has become more common for threshold values to be defined for capabilities, objective values appear in fewer than half of those cases. One might expect objective values to be used more frequently to quantify desired capabilities beyond the minimums. Including objective values is expected to aid the process owner in determining whether a recommended solution can meet the objective of closing the specified gap.

Identifying the Functional Capabilities Boards (FCBs) to which ICDs were assigned provided insight as to what types of capabilities have been defined and what priorities have been dictated. FCB and associated Joint Capability Area (JCA) categories include Force Support (formerly Force Support and Building Partnerships); Battlespace Awareness; Force Application; Logistics; Command, Control, Communications, and Computers (C4)/Cyber (formerly Net-Centric, Command and Control, and C4/Cyber); and Protection. Previous FCBs, including Special Operations and Test, are listed in Figure 2 under “Other Legacy FCBs.”

Figure 2. Number of Functional Capabilities in ICDs Analyzed

Note. C4 = Command, Control, Communications, and Computers; FCBs = Functional Capabilities Boards.

Each ICD is assigned a lead and supporting FCB. Figure 2 shows ICDs arranged by lead FCB with Force Application being the most prominent lead FCB. The prominence of Force Application over Force Support led the team to conclude that validated ICDs are more likely to focus on the direct needs of the warfighter and less likely to focus on capabilities of supporting processes. At the same time, a significant number of ICDs listed net-centricity and C4/Cyber as supporting FCBs.

The research team decided early on to capture the length of ICDs because the Capabilities-Based Assessment (CBA) User’s Guide specifically states that ICDs should be no longer than 10 pages, with separate allowance for appendices (JCS, 2009). Figure 3 presents the average ICD page length without appendices; quality and meticulousness were not necessarily correlated with quantity of pages. ICDs were meant to be concise documents that outline the necessary capabilities while still covering the required content.

Figure 3. Average Number of Pages for ICDs in Corresponding Years

Note. Avg = Average; Dev = Deviation; Std = Standard.

The drastic increase in the length of ICDs is potentially a result of a change in the process by which capability gaps were outlined. As with most new processes, uncertainty about the method encouraged growth in the breadth and depth of the information found within ICDs. The steady decrease in page length over the last few years suggests that sponsors have become more comfortable with the process and more efficient at outlining the information needed.

One final consideration was the relation of page length to Acquisition Category (ACAT) level: would larger programs take more pages to explain the research and identify the gaps? Between ACAT Levels I, II, and III, the mean page lengths were 25.53, 23.35, and 21.02, respectively. While the difference between ACAT I and ACAT III is statistically significant using a t-test with an alpha of .05, the difference (on average) is roughly four pages.
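For readers who wish to replicate this type of comparison, a minimal sketch of a two-sample t-test on page lengths follows. The samples are synthetic placeholders built around the reported means, not the study data, and the use of Welch’s variant is an assumption, as the article does not specify which form of the t-test was applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic page-length samples centered on the reported means (25.53 vs. 21.02);
# sample sizes and spread are invented for illustration.
acat_i = rng.normal(loc=25.53, scale=7.0, size=60)
acat_iii = rng.normal(loc=21.02, scale=7.0, size=60)

# Welch's t-test (does not assume equal variances); alpha = .05 as in the article.
t_stat, p_value = stats.ttest_ind(acat_i, acat_iii, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at .05" if p_value < 0.05 else "not significant at .05")
```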

Within the time period analyzed, a total of 2,779 gaps were identified; the average number of gaps identified per ICD is shown in Figure 4. Additionally, Figure 4 illustrates the fluctuation in the number of ICDs validated each year. The GAO (2008) report noted that JCIDS was ineffective in properly prioritizing capabilities and suggested that nearly all ICDs submitted were accepted. Since the inception of the JCIDS process, 2012 was the first year that the average number of gaps exceeded the number of ICDs validated. This suggests that ICDs are identifying more gaps per document, tackling larger and more complex problems than before. It appears that the JCIDS process has matured and become more efficient, possibly as a result of the GAO report.

Figure 4. Average Number of Gaps Identified Compared to Number of Validated Documents

The research team noted that many ICDs identified “too few” gaps (only one or two, or none at all), leading to the conclusion that the methodology employed was suboptimal and that additional gaps probably remained unidentified. Conversely, several documents identified “too many” gaps; it was very difficult to understand and prioritize gaps when too many were identified (several documents contained over 50).

Figure 5 presents the most frequently used methodologies from 2002 to 2012, displaying the percentage of ICDs that employed each methodology. The top five methodologies were chosen for presentation because each was implemented in greater than 10 percent of ICDs, whereas the remaining methodologies were typically used in only one or two ICDs. Each ICD employed several methodologies, so the percentages do not sum to 100 percent. A variety of analytical techniques may be appropriate depending on the type of analysis being conducted. For example, an intelligence-based assessment would likely be appropriate for identifying a strategic capability gap requiring a new weapon system, but not for identifying the need for a new inventory system for the Defense Commissary Agency.

Figure 5. Top Methodologies Used

Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities–Policy

The research team observed at least two interpretations of the Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities-Policy (DOTMLPF-P) analysis within the ICDs. Under the first interpretation, ICDs identified DOTMLPF-P categories of nonmateriel solutions that could satisfy capability gaps; under the second, ICDs considered the DOTMLPF-P implications of their proposed materiel solution. Defense Acquisition University training for DOTMLPF-P distinguishes between these uses and indicates that the ICD should focus on the former approach, as the latter is addressed in later stages of the acquisition process (Defense Acquisition University, 2014).

We also observed a wide range of quality in these analyses. Many ICDs contained rote statements declaring the insufficiency of these nonmateriel approaches to close capability gaps. To paraphrase an example, several ICDs stated that “DOTMLPF solutions were considered…, but adjustments or improvements in these areas will have minimal impact to mission satisfaction.” Though not every capability gap can be met with nonmateriel solutions, such “box check” DOTMLPF-P analyses offer no value to the requirements validation process.

In contrast, several analyses reflected a concerted effort to find nonmateriel solutions to supplement the proposed materiel solution. One example of this level of analysis is the Air Force’s Advanced Pilot Training ICD. In its DOTMLPF-P analysis, the Service employed a three-phase process: first, brainstorming and combining possible solutions; second, conducting quantitative analysis on a subset of the best of the proposed solutions; and third, conducting a qualitative assessment of the final list of proposed solutions. Not all of the nonmateriel solutions were deemed feasible or prudent, but several were included as part of the final recommendations. Further explanations of how the Air Force conducted this analysis are found in the ICD and its attachments on KM/DS.

Recommendations and Guidelines

Through the analysis, the team observed a variety of interpretations of how to write an ICD. In general, analytical rigor could be stronger. In a fiscally constrained environment, the importance of documenting analysis is magnified, and many ICDs fell short of carefully documenting their analysis. Another observation is that most ICDs were submitted by the Services and very few by Joint sponsors. This is not surprising, as the individual Services organize, train, and equip their forces; it is expected that capability gaps will continue to be identified by the Services.

Useful Analytical Techniques

Several ICDs utilized subject matter experts (SMEs) to identify capability gaps and recommend solutions. One way to incorporate SME input in a more rigorous fashion is to employ the Delphi Technique, in which the researcher works with 10–15 experts to identify, further define, and determine the importance of an issue in their area of expertise (Linstone & Turoff, 1975). Using the Delphi method when SMEs are available is one way to add analytical rigor to the ICD process.
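As a sketch of how panel input might be tallied between Delphi rounds, consider the following; the panel size, rating scale, consensus rule, and gap names are all illustrative assumptions, not prescriptions from the literature.

```python
from statistics import median, quantiles

# Hypothetical round of expert importance ratings (1 = low, 9 = high)
# for candidate capability gaps; all values are illustrative only.
ratings = {
    "Gap A: beyond-line-of-sight communications": [8, 8, 9, 8, 7, 8, 8, 7, 8, 8],
    "Gap B: cold-weather sustainment": [4, 5, 3, 6, 5, 4, 5, 9, 6, 2],
}

for gap, scores in ratings.items():
    q1, _, q3 = quantiles(scores, n=4)  # quartiles of the panel's ratings
    iqr = q3 - q1
    # A small interquartile range is one common (if informal) signal of consensus;
    # gaps lacking consensus would be returned to the panel for another round.
    verdict = "consensus" if iqr <= 1 else "re-rate in next round"
    print(f"{gap}: median = {median(scores)}, IQR = {iqr:.2f} -> {verdict}")
```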

Though not possible for all ICDs, several documents included a life-cycle cost summary that was effective in communicating the costs of the capability gap. If the proposed solution is expected to reduce some recurring cost, presenting those numbers can make a convincing case to the reader.
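As a sketch of the arithmetic behind such a summary, the following computes the present value of a recurring-cost reduction over a notional life cycle; every figure, including the discount rate and time horizon, is invented for illustration and does not come from any reviewed ICD.

```python
# Present value of recurring-cost savings over a notional 20-year life cycle.
annual_cost_current = 12.0e6   # $/yr to sustain the gap with legacy means (invented)
annual_cost_proposed = 9.5e6   # $/yr projected for the proposed solution (invented)
discount_rate = 0.03           # illustrative real discount rate
years = 20

pv_savings = sum(
    (annual_cost_current - annual_cost_proposed) / (1 + discount_rate) ** t
    for t in range(1, years + 1)
)
print(f"Discounted life-cycle savings: ${pv_savings / 1e6:.1f}M")
```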

In the Appendix to this article, the authors provide a list of additional analytical techniques along with a short description of each. This resource is intended to assist ICD writers and project managers in selecting a methodology or methodologies appropriate for their document or project. References are provided to direct interested readers to source documents with additional descriptions of each methodology.

Architectural Enhancements

Nearly all existing ICDs present a High-Level Operational Concept Graphic (OV-1) depicting the proposed solution(s). A previous Air Force Institute of Technology researcher identified several additional Department of Defense Architecture Framework (DoDAF) products that could be useful to present within the ICD (Hughes, 2010). The Capability Taxonomy (CV-2), Capability Dependencies (CV-4), Capability to Operational Activities Mapping (CV-6), as well as the Operational Resource Flow Description (OV-2) and Operational Activity Decomposition Tree (OV-5a) are products now required by JCIDS for the ICD.

Hughes also found value in including the Operational Activity Model (OV-5b) and the Operational Activity to Systems Function (SV-5). The OV-5b presents capabilities and operational activities, and the relationships among activities, inputs, and outputs. The SV-5 maps systems back to capabilities or operational activities. Neither is currently recommended in the JCIDS Manual, but both could be presented there as optional architecture products.

Characteristics of Model ICDs

Based upon analysis of the data examined during the study, several guidelines or best practices emerged. The best-written ICDs provided detailed but relevant analysis without being too wordy. Here, we propose the contents of a model ICD.

The most fundamental building block of an ICD is conformance to JCIDS standards of format and content. The JCIDS Manual presents a logical flow of the document from gap identification to final recommendations. The Concept of Operations should illustrate how the described capability will support the Joint Force Commander. The JCAs or Universal Joint Task List pedigree should be clear, but not overly detailed. Documents that rolled up capability gaps to Tier 2 or Level 2 components seemed more readable than those that traced capabilities to lower levels. A document that acknowledges extant systems is more convincing in establishing a capability gap.

The team believes that a concise ICD may be written with 5–12 gaps identified. Page lengths may vary by ACAT level, with more complex proposed solutions demanding more explanation, but the ideal ICD would be 15–25 pages in length. In short, a well-written ICD will follow the prescribed format, clearly define its necessity to the Joint mission, and be presented in a clear and logical manner. Additionally, the ICD should present clear MOEs with minimum and desired values. Good MOEs allow the reader or evaluator to know when the new capability has delivered on its design promises. MOEs are sometimes confused with measures of performance (MOPs). Noel Sproles states, “MOEs are concerned with the emergent properties or outcomes of a solution. They take an external view of a solution and as such are different from MOPs, which are concerned with the internal workings of a solution” (Sproles, 2002).
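One lightweight way for ICD writers to keep MOE thresholds and objectives explicit is sketched below; the measure, units, and values are hypothetical and assume a larger-is-better attribute.

```python
from dataclasses import dataclass

@dataclass
class MeasureOfEffectiveness:
    """An outcome-oriented measure with threshold (minimum) and objective (desired) values."""
    name: str
    units: str
    threshold: float  # minimum acceptable value
    objective: float  # desired value beyond the minimum

    def assess(self, observed: float) -> str:
        # Assumes larger observed values are better for this measure.
        if observed >= self.objective:
            return "meets objective"
        if observed >= self.threshold:
            return "meets threshold"
        return "does not meet threshold"

# Hypothetical MOE, not drawn from any reviewed ICD.
moe = MeasureOfEffectiveness(
    name="Targets correctly identified", units="percent",
    threshold=85.0, objective=95.0,
)
print(moe.assess(90.0))  # -> meets threshold
```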

Table 2 compares ICD content required by the Capabilities-Based Assessment (CBA) User’s Guide, the JCIDS Manual, and recommendations based on our analysis. As part of the analysis, the team identified those ICDs that implemented and followed the best practices identified by the team. These ICDs, shown in Table 3, are identified to give future ICD writers and functional groups examples of what they can strive toward to make clear and concise documents that are both effective and efficient.

Table 2. Comparison of CBA/ICD Content

CBA User’s Guide | JCIDS Manual | Research Team
Purpose | CONOPS/Desired Outcomes | CONOPS
Background/Guidance | Joint Functional Areas | Relationship to Tier 2 JCA/UJTL
Objectives | Description of Required Capability Gaps, Overlaps, Redundancies | Analysis Techniques Used with Description of Scope
Scope | Capability Attributes/Metrics | Prioritized List of 5–12 Capability Gaps
Methodology (Approaches, MOEs, Technological/Policy Opportunities) | Relevant Threats/Operational Environment | Clearly Defined MOEs with Threshold and Objective Values
Organization/Governance | Proposals for Nonmateriel Solutions | DOTMLPF-P Analysis of Nonmateriel Solutions
Projected Schedule | Final Recommendations | Clear Final Recommendations
Recommendations | |

Note. CBA = Capabilities-Based Assessment; CONOPS = Concept of Operations; DOTMLPF-P = Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities-Policy; ICD = Initial Capabilities Document; JCA = Joint Capability Area; JCIDS = Joint Capabilities Integration and Development System; MOEs = Measures of Effectiveness; UJTL = Universal Joint Task List.

Table 3. Sample of Exemplary ICDs

Document Name (Control Number) | Year | Noteworthy Items
Data Masked (05-51947485-00) | 2005 | Layered analytical methods resulted in 100 shortfalls that were further clustered and examined; top 3 presented for further study
Military Operational Medicine (07-65416952-00) | 2007 | Extensive Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities (DOTMLPF) analysis; many prioritized tables
Aviation Ground Support (07-600735309-00) | 2007 | Prioritized tables; quantitative threshold values; good DOTMLPF analysis; multiple methods used to determine/rank nonmateriel solutions
Initial Capabilities Document for Joint Improvised Explosive Device Defeat (07-66686002-00) | 2007 | Performed a well-documented, thoughtful DOTMLPF analysis; references three assessments (Joint Staff [J8], Joint Improvised Explosive Device Defeat Task Force baseline, and follow-on); prioritized tables
Biometrics in Support of Identity Management (09-090146111-00) | 2008 | Detailed analysis including scenario-based planning and risk analysis
Advanced Pilot Training (10-99164267-00) | 2009 | Strong DOTMLPF analysis; clear explanation of analytical approach included in appendices
Vessel-to-Shore Bridging (09-97169105-00) | 2009 | Gaps have numerous subparts; uses a typical but good example of a capability prioritization/mapping matrix (includes Measures of Effectiveness [MOEs] and minimum values)
Cross Domain Enterprise (10-112959174-00) | 2010 | Uses a typical but good example of a capability prioritization/mapping matrix (includes MOEs and minimum values); recommends mix of materiel and nonmateriel solutions
Amphibious Combat Vehicle ICD (11-151956055-00) | 2011 | Requirements traceable to the Joint Operating Concept vice Universal Joint Task Lists; uses a typical but good example of a capability prioritization/mapping matrix (includes MOEs and minimum values); recommends mix of materiel and nonmateriel solutions
Personnel Recovery (12-167465473-00) | 2012 | Succinct document; recommends materiel and nonmateriel solutions
Data Masked (12-159990107-00) | 2012 | Detailed analysis using several techniques; well-defined MOEs including threshold and objective values

Future Research and Conclusions

Future research could focus on the relationship between the ICD and the program it generates. Can the utility or performance of a program be traced to the description of the initial capability gap and requirement definition? Are there characteristics of an ICD that indicate how well a program will adhere to cost, performance, and schedule expectations?

Since 2002, the JCIDS process has been refined and enhanced. There appears to be a convergence in the formatting and content of many ICDs/JCDs since 2008. While the quality of historical ICDs varies, marked improvements to the analysis have been documented since 2008, possibly due to the GAO report from the same year.

Through research of the current methodologies used in ICDs since the inception of the process, the research team has formulated an outline of proposed areas upon which writers and implementers can focus. Future writers may use this outline as well as a series of DoD guidelines to provide the Joint community with superior ICDs that achieve their goals in a more efficient manner with minimal processing time.



Author Biographies

Maj Bryan D. Main, USAF, is currently studying at the Air Force Institute of Technology for a Doctor of Philosophy degree in Logistics. He holds a bachelor’s degree in History from John Brown University, and a master’s degree in Logistics Management from the Air Force Institute of Technology.

(E-mail address: Bryan.Main@us.af.mil)

Capt Michael P. Kretser, USAF, is currently studying at the Air Force Institute of Technology for a Doctor of Philosophy degree in Logistics. He holds a bachelor’s degree in Computer Science Programming from Limestone College, and a master’s degree in Logistics Management from the Air Force Institute of Technology.

(E-mail address: Michael.Kretser@us.af.mil)

Mr. Joshua M. Shearer, USAF, is currently studying at the Air Force Institute of Technology for a Doctor of Philosophy degree in Systems Engineering. He holds bachelor’s and master’s degrees in Materials Science and Engineering from Wright State University and a master’s degree in Business Management from Wright State University.

(E-mail address: Joshua.Shearer@us.af.mil)

Lt Col Darin A. Ladd, USAF, is the director of Communications and assistant professor of Systems Engineering at the Air Force Institute of Technology. As a consultant, he assisted Services and combatant commands with early systems analysis and systems selection projects. He holds a PhD in Information Systems from Washington State University, an MS in Information Resource Management from the Air Force Institute of Technology, and a BS from the U.S. Air Force Academy.

(E-mail address: Darin.Ladd@us.af.mil)


References

Air Force Materiel Command. (2008). Analysis of alternatives (AoA) handbook. Office of Aerospace Studies. Retrieved from https://acc.dau.mil/CommunityBrowser.aspx?id=45041

Barney, J. B. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120.

Blanchard, B. S., & Fabrycky, W. J. (2010). Systems engineering and analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Brosh, I. (1985). Quantitative techniques for managerial decision making. Reston, VA: Reston Publishing.

Chairman of the Joint Chiefs of Staff. (2001). Requirements generation system (CJCSI 3170.01B). Retrieved from https://info.aiaa.org/tac/SMG/SOSTC/Launch%20Management%20Documents/Appendix%20B%20Reference%20Documents/Charman_JCS_Instruction.pdf

Daszykowski, M., Kaczmarek, K., Vander Heyden, Y., & Walczak, B. (2007). Robust statistics in data analysis—A review: Basic concepts. Chemometrics and Intelligent Laboratory Systems, 85(2), 203–219.

Defense Acquisition University. (2014). DOTmLPF-P change recommendation. Retrieved from https://dap.dau.mil/acquipedia/Pages/ArticleDetails.aspx?aid=0f017b62-6273-4d58-b02c-d72c776198e8

Department of Defense. (2013). Operation of the Defense Acquisition System (Interim DoDI 5000.02). Washington, DC: Deputy Secretary of Defense.

Goldwater-Nichols Department of Defense Reorganization Act of 1986, Pub. L. 99-433 (1986).

Goodman, C. M. (1987). The Delphi technique: A critique. Journal of Advanced Nursing, 12(6), 729–734.

Government Accountability Office. (2007). Best practices: An integrated portfolio management approach to weapon system investments could improve DoD’s acquisition outcomes (Report No. GAO-07-388). Washington, DC: Author.

Government Accountability Office. (2008). Defense acquisitions: DOD’s requirements determination process has not been effective in prioritizing joint capabilities (Report No. GAO-08-1060). Retrieved from http://www.gao.gov/assets/290/281695.pdf

Hammond, J. S., Keeney, R. L., & Raiffa, H. (1998). Even swaps: A rational method for making trade-offs. Harvard Business Review, 76, 137–150.

Helms, M. M., & Nixon, J. (2010). Exploring SWOT analysis-where are we now? A review of academic research from the last decade. Journal of Strategy and Management, 3(3), 215–251.

Hiam, A. (1990). The vest-pocket CEO: Decision-making tools for executives. Prentice Hall Press.

Hughes, R. C. (2010). Development of a concept maturity assessment framework. Wright-Patterson AFB, OH: Air Force Institute of Technology.

Joint Capabilities Integration and Development System (JCIDS). (2014). In Defense Acquisition University ACQuipedia online encyclopedia. Retrieved from https://dap.dau.mil/acquipedia/Pages/ArticleDetails.aspx?aid=12227505-ba29-41c0-88f0-682a219d5bbc

Joint Chiefs of Staff. (2009). Capabilities-Based Assessment (CBA) User’s Guide (Ver. 3). Force Structure, Resources, and Assessments Directorate (J8). Retrieved from http://www.dtic.mil/futurejointwarfare/strategic/cba_guidev3.pdf

Joint Requirements Oversight Council. (2012). Manual for the operation of the Joint Capabilities Integration and Development System. Retrieved from https://acc.dau.mil/CommunityBrowser.aspx?id=267116&lang=en-US

Kirkwood, C. W. (2002). Decision tree primer. Tempe, AZ: Department of Supply Chain Management, Arizona State University.

Linstone, H. A., & Turoff, M. (1975). The Delphi method: Techniques and applications. Boston: Addison-Wesley Publishing.

Mackay, H., Carne, C., Beynon-Davies, P., & Tudhope, D. (2000). Reconfiguring the user: Using rapid application development. Social Studies of Science, 30(5), 737–757.

Porter, M. E. (1980). Value chain analysis. London: Oxford Press Ltd.

Porter, M. E. (2008). The five competitive forces that shape strategy. Harvard Business Review, 86(1), 78–93.

Ringland, G., & Schwartz, P. P. (1998). Scenario planning: Managing for the future. Chichester, UK: John Wiley & Sons.

Sage, A. P., & Armstrong, J. J. (2000). An introduction to systems engineering. New York, NY: John Wiley & Sons.

Secretary of the Air Force. (2013). Operational capability requirements development (AFI 10-601). Washington, DC: Author.

Shenhar, A. J., & Dvir, D. (2007). Reinventing project management: The diamond approach to successful growth and innovation. Boston: Harvard Business Review Press.

Sink, D. S. (1983). Using the nominal group technique effectively. National Productivity Review, 2(2), 173–184.

Sproles, N. (2002). Formulating measures of effectiveness. Systems Engineering, 5(4), 253–263.

Turner, J. R., & Cochrane, R. A. (1993). Goals-and-methods matrix: Coping with projects with ill-defined goals and/or methods of achieving them. International Journal of Project Management, 11(2), 93–102.

Williamson, O. E. (1979). Transaction-cost economics: The governance of contractual relations. Journal of Law and Economics, 22(2), 233–261.
