Manage Toward Success—Utilization of Analytics in Acquisition Decision Making



Authors: Sean Tzeng and K. C. Chang

Large information technology (IT) projects such as Defense Business System (DBS) acquisitions have been experiencing an alarming rate of large cost overruns, long schedule delays, and under-delivery of specified capabilities. Strict defense acquisition laws, regulations, policies, and guidance, with an abundance of reviews and oversight, generate a plethora of data and evidence on project progress. However, given the size and complexity of these large IT projects and the sheer amount of project data they produce, it is challenging to interpret these data collectively and make sound decisions based on them. This research article develops an analytic model with Bayesian networks, known as the DBS Acquisition Probability of Success (DAPS) model, to organize the vast body of acquisition data and evidence to support decision making.

Developing an information technology (IT) system to meet organizational needs is not a simple task. The effort is often extensive, takes a long time to realize, and proves more costly and difficult than originally imagined. This is especially true for large IT projects (over $15 million). In a 2012 study of 5,400 IT projects, University of Oxford researchers reported that, on average, large IT projects run 45% over budget and 7% over time, and are delivered with 56% less value (Bloch, Blumberg, & Laartz, 2012). The situation seems to be even worse for Department of Defense Business System (DBS) acquisition programs, where the majority of programs would meet the University of Oxford researchers’ threshold for large IT projects. A Government Accountability Office (GAO, 2012) report indicates that of 10 Enterprise Resource Planning programs the Department of Defense (DoD) identified as critical to business operations transformation, nine were experiencing schedule delays of up to 6 years, and seven were facing estimated cost increases of up to, or even over, $2 billion. This is occurring even though acquisition laws, regulations, policies, guidance, independent assessments, technical reviews, and milestone reviews guide DBS acquisition.

Great amounts of data and large numbers of artifacts are generated during execution of DBS programs. A few examples include the Integrated Master Schedule (IMS), Earned Value Management System (EVMS) metrics, Business Case, and Systems Engineering Plan (SEP), as well as Risk Reports and various independent assessments. These data and artifacts are commonly used by decision makers at technical reviews and milestone reviews as evidence of program progress to support their decisions. However, the development and use of evidence to support decisions has not translated into desirable investment outcomes. This issue is analogous to the experience of other professional disciplines such as the intelligence, criminal justice, engineering, and medical professions. In today’s Information Age, the acquisition and availability of information and evidence are no longer the most challenging issues. Data and evidence are often abundant; what is limited is the analytical tooling for determining what all the evidence means collectively and how it supports the hypothesis being sought. Good decision making requires not only information and evidence, but also inference from, and representation of, that evidence. Currently, DBS acquisition decision makers have limited means to help them holistically and logically process what all the available evidence collectively indicates about a program, and to use that evidence in a structured manner to support decision making.

DBS Acquisition Probability of Success (DAPS) is an evidence-based analytical tool developed to help decision makers collectively draw inferences from the abundance of evidence produced during the course of DBS acquisition. Based on observations and inferences of evidence, the DAPS model is able to assess program performance in specific subject matter knowledge areas and assess the overall likelihood of program success. DAPS is a way ahead for supporting acquisition decision making, and an initial step toward improving human understanding and the ability to innovate and engineer systems through evidential reasoning.

Theoretical Foundations

A brief discussion on the theoretical foundations behind the DAPS research is presented in this section. Topics include evidential reasoning and knowledge-based management.

Evidential Reasoning

According to Schum (2001), evidence is described as “a ground for belief; testimony or fact tending to prove or disprove any conclusion” (p. 12). The evidence within the framework of a DBS acquisition program includes the artifacts, technical plans, facts, data, and expert assessments that tend to support or refute the hypothesis of program success. However, evidence by nature is incomplete, inconclusive, ambiguous, dissonant, unreliable, and often conflicting (Schum, 2001), making the decision process based on the observations and inferences of evidence a challenging endeavor. Evidential reasoning utilizes inference networks to build an argument from the observable evidence items to the hypothesis being sought. In the case of DBS acquisition, the DAPS model argues for the hypothesis of program success or the alternative hypothesis of program failure based on the observations of evidence.

A Bayesian network is the graphical modeling language used in this research to build the inference network for evidential reasoning. Its basis is the Bayesian approach to probability and statistics, which views inference as belief dynamics and uses probability to quantify rational degrees of belief. Bayesian networks are directed acyclic graphs that contain nodes representing hypotheses, arcs representing direct dependency relationships among hypotheses, and conditional probabilities that encode the inferential force of each dependency relationship (Neapolitan, 2003).
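As a minimal illustration of these mechanics, the following Python sketch performs Bayesian updating across a single arc of the kind used in DAPS, from a hypothetical Knowledge Area node to an Evidence node. All probabilities here are invented for illustration; they are not elicited DAPS values.

```python
# Minimal sketch of Bayesian updating across one arc of a DAPS-like network.
# Prior and conditional probabilities are illustrative assumptions only.

prior = {"Good": 0.7, "Marginal": 0.3}  # P(Knowledge Area)

# P(Evidence | Knowledge Area): the conditional probability table on the arc
cpt = {
    "Good":     {"Outstanding": 0.5, "Acceptable": 0.4, "Unacceptable": 0.1},
    "Marginal": {"Outstanding": 0.1, "Acceptable": 0.4, "Unacceptable": 0.5},
}

def posterior(observed: str) -> dict:
    """P(Knowledge Area | Evidence) by Bayes' rule."""
    joint = {ka: prior[ka] * cpt[ka][observed] for ka in prior}
    z = sum(joint.values())  # normalizing constant, P(Evidence = observed)
    return {ka: p / z for ka, p in joint.items()}

print(posterior("Unacceptable"))  # belief in "Good" falls to about 0.32
```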

A Bayesian network is a natural representation of causal-influence relationships (CIRs), the type of direct dependency relationship built into the DAPS model. A CIR is a relationship between one event (the cause) and a second event (the effect), where the second event is understood as a consequence of the first. CIRs are an important concept in Bayesian networks and reflect stronger bonds than dependency relationships that are not causal-based (Pearl, 1988).

Knowledge-based Management

The DAPS model framework is based on the concept of knowledge-based acquisition described by the GAO. In a report on National Aeronautics and Space Administration (NASA) acquisition programs, GAO (2005) recommended, and NASA subsequently concurred, that transitioning to a knowledge-based acquisition framework would improve acquisition program performance. The GAO has made the same recommendation to the DoD in other reports, including GAO (2011).

GAO (2005) describes knowledge-based acquisition as follows:

A knowledge-based approach to product development efforts enables developers to be reasonably certain, at critical junctures or “knowledge points” in the acquisition life cycle, that their products are more likely to meet established cost, schedule, and performance baselines and, therefore provides them with information needed to make sound investment decisions. (p. 9)

The more knowledge is achieved, the less risk or uncertainty the program is likely to encounter during the acquisition process. Sufficient knowledge reduces the risk associated with the acquisition program and gives decision makers and program managers higher degrees of certainty with which to make better decisions. The concept of knowledge-based acquisition is adapted in this research and built into the DAPS model. The Knowledge Points mentioned in the Defense Acquisition Guidebook and the GAO reports are called Knowledge Checkpoints in the DAPS model. DAPS also contains Knowledge Areas, the subject matter areas of DBS acquisition in the model, derived from the Project Management Institute’s (PMI, 2008) Knowledge Areas.

DAPS Bayesian Network Model

DAPS is developed as a Bayesian network model in the Netica software tool (Norsys, 2010). Using a Bayesian network, DAPS constructs a complex inference network that measures the certainties and uncertainties in subject matter Knowledge Areas and assesses the level of success achieved at Knowledge Checkpoints.

Model Topology

The DAPS Bayesian network model contains a three-level structure, representing the three types of nodes in the model. Three types of static arcs represent the interrelationships among the three types of nodes at a point in time, and one type of dynamic arc represents the temporal relationships from one point in time to another. The DAPS model at the first Knowledge Checkpoint, the Materiel Development Decision (MDD), is shown in Figure 1. The topology of the top two levels—Knowledge Checkpoint and Knowledge Areas—is repeated at each of the 15 Knowledge Checkpoints. The bottom level, containing the Evidence Nodes—the observation points of the DAPS model—varies at each Knowledge Checkpoint, depending on the evidence requirements at that point.

Figure 1. DAPS Model at Materiel Development Decision Knowledge Checkpoint


Table 1 outlines these DAPS model elements.

The complete DAPS model contains 15 Knowledge Checkpoints. Each Knowledge Checkpoint has one Knowledge Checkpoint Node, seven Knowledge Area Nodes, and a number of Evidence Nodes. The totals, which can be cross-checked arithmetically in the sketch after this list, are:

  • 15 Knowledge Checkpoint Nodes
  • 105 Knowledge Area Nodes
  • 258 Evidence Nodes
  • 258 KA2E Arcs
  • 195 KA2KA Arcs
  • 60 KA2KC Arcs
  • 98 KA2KAi+1 Arcs
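Most of these totals follow arithmetically from the fixed topology described above, as the following consistency check illustrates; the Evidence Node total is set by the evidence requirements at each Checkpoint rather than derived.

```python
# Consistency check of the fixed-structure DAPS counts listed above.
N_KC = 15      # Knowledge Checkpoints
N_KA = 7       # Knowledge Areas per Checkpoint
N_DIRECT = 4   # measurable (direct) Knowledge Areas: time, quality, cost, scope

assert N_KC * N_KA == 105       # Knowledge Area Nodes
assert N_DIRECT * N_KC == 60    # KA2KC arcs: one per direct KA per Checkpoint
assert N_KA * (N_KC - 1) == 98  # KA2KAi+1 arcs: 7 KAs over 14 transitions
assert 13 * N_KC == 195         # KA2KA arcs: 13 arcs (the Figure 2 structure) per Checkpoint
# The 258 Evidence Nodes, each with one KA2E arc, vary by Checkpoint.
```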

Table 1. Defense Business System Acquisition Probability of Success (DAPS) Elements


Knowledge Checkpoint Node. The Knowledge Checkpoint is the top-level node, which accumulates all information about the DBS acquisition program at that decision point to assess the likelihood of program success. It provides a cumulative measurement of the success achieved by the program up to the current Knowledge Checkpoint, and is the metric that can help decision makers decide whether the program has demonstrated enough certainty and maturity to move on to the next phase.

Knowledge Checkpoints are modeled as leaf nodes. They have no child nodes and have four parent nodes: the time, quality, cost, and scope Knowledge Area Nodes, which are the four measurable (direct) Knowledge Areas in the DAPS model. These CIRs on the Knowledge Checkpoint Node represent the four direct measures of success. Success is defined in DAPS as meeting program time, cost, and quality goals for a clearly defined program scope. The Knowledge Area Nodes are further discussed in the next section. Table 2 lists the 15 technical reviews and milestone reviews modeled in DAPS as Knowledge Checkpoints (Defense Acquisition University, 2013).

Knowledge Checkpoint Nodes contain two states describing the state of the program: “Success” and “Failure.” The probability of these states reflects the knowledge (certainty) and risk (uncertainty) assessment of the program at the Knowledge Checkpoint.

Knowledge Area Node. Knowledge Area Nodes are the second-level nodes, which measure the certainty and maturity attained in a particular subject matter area of DBS acquisition at the Knowledge Checkpoint. Knowledge Areas in DAPS are derived from the nine Project Management Body of Knowledge (PMBOK) Knowledge Areas (Project Management Institute, 2008), integrated with the systems engineering elements of defense acquisition. These Knowledge Areas are divided into measurable (direct) and enabling (indirect) Knowledge Areas. Measurable Knowledge Areas comprise the scope, cost, time, and quality subject matter areas, which directly affect the measures of program success in DAPS. Enabling Knowledge Areas comprise the general management, systems engineering, and procurement subject areas, which do not directly affect the measure of program success, but are important enabling factors that drive success.

Table 2. Technical Reviews and Milestone Reviews Modeled as Knowledge Checkpoints



The Knowledge Areas represent an important aspect of the DAPS model. They model the static and dynamic complex interrelationships and effects within DBS acquisition and combine the observations of the various evidence items under each subject matter Knowledge Area. The arcs among the Knowledge Area Nodes at a static point—the KA2KA arcs—model the CIR of how knowledge in one Knowledge Area affects knowledge in another. The KA2KA relationships in DAPS are shown in Figure 2, which is extracted from the model structure presented in Figure 1. The arcs in the KA2KA structure were selected based on the expert knowledge elicitation conducted as part of this research.

Figure 2. Knowledge Area to Knowledge Area (KA2KA) Graph Structure


The dynamic arcs from a Knowledge Area Node at the prior Knowledge Checkpoint to the same Knowledge Area Node at the next Knowledge Checkpoint—the KA2KAi+1 arcs—model the CIRs of DBS acquisition through time. The KA2KAi+1 arc represents the knowledge in a Knowledge Area at a prior Checkpoint influencing the knowledge of the same Knowledge Area at the next Checkpoint. DAPS uses Knowledge Area Nodes to model the dynamic effects in the progression of knowledge during an acquisition project. Thus, each Knowledge Area Node gains information from the observations at the current Knowledge Checkpoint, as well as the information cumulated from prior Knowledge Checkpoints.
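As a sketch of how a KA2KAi+1 arc behaves, the transition table below carries a Knowledge Area belief forward one Checkpoint before any new evidence is entered. The transition probabilities are illustrative assumptions, not the expert-elicited DAPS values.

```python
# Sketch of a KA2KAi+1 temporal arc. P(KA at next Checkpoint | KA at prior
# Checkpoint): knowledge tends to persist, but can erode or improve.
# These numbers are assumptions for illustration.

transition = {
    "Good":     {"Good": 0.85, "Marginal": 0.15},
    "Marginal": {"Good": 0.30, "Marginal": 0.70},
}

def carry_forward(belief: dict) -> dict:
    """Predictive belief at Checkpoint i+1, before observing its evidence."""
    return {
        nxt: sum(belief[prv] * transition[prv][nxt] for prv in belief)
        for nxt in ("Good", "Marginal")
    }

print(carry_forward({"Good": 0.6, "Marginal": 0.4}))  # about {'Good': 0.63, 'Marginal': 0.37}
```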

Figure 3 provides an example graph of the KA2KAi+1 arcs, shown as green arrows, from the MDD Knowledge Checkpoint to the next Knowledge Checkpoint, the Initial Technical Review.

The arcs from Knowledge Area Nodes to Evidence Nodes—the KA2E arcs—model the CIR of how knowledge affects the outcome observed with the evidence. Figure 4 provides an outline of the seven Knowledge Areas and select samples of the evidence grouped under each Knowledge Area.

Figure 3. Knowledge Area @ KC1 to Knowledge Area @ KC2 (KA2KAi+1) Arc Example

Figure 4. Sample of Evidence Taxonomy by Knowledge Area


Knowledge Area Nodes contain two states describing the state of the knowledge level achieved in the subject matter Knowledge Area: “Good” and “Marginal.” The probabilities of these states reflect the knowledge (certainty) and risk (uncertainty) in the subject matter Knowledge Area.

Evidence Node. The third- and bottom-level nodes are the Evidence Nodes in the DAPS model. Observations of Evidence Nodes are entered at this level to drive inference for assessing a program’s probability of success. The only CIRs for this level are the arcs from Knowledge Area Nodes to Evidence Nodes—the KA2E arcs described previously.

Evidence Nodes contain three states describing the state of the evidence: “Outstanding,” “Acceptable,” or “Unacceptable.” These states reflect the risk assessment of the program in the specific Knowledge Area: Outstanding requires no worse than a “Low-Risk” assessment, Acceptable requires no worse than a “Moderate-Risk” assessment, and Unacceptable reflects a “High-Risk” assessment or worse. Since these are the Evidence Node observations, one of the states is chosen to describe the real-world observation of the evidence. This provides information to the parent Knowledge Area Nodes, updating the belief in the Knowledge Area.
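This convention reduces to a simple mapping from an evidence item’s risk assessment to the state entered into the model; a minimal sketch:

```python
# Sketch of the Evidence Node observation convention described above.

def evidence_state(risk_level: str) -> str:
    """Map a real-world risk assessment to an Evidence Node state."""
    if risk_level == "Low":
        return "Outstanding"   # no worse than a Low-Risk assessment
    if risk_level == "Moderate":
        return "Acceptable"    # no worse than a Moderate-Risk assessment
    return "Unacceptable"      # a High-Risk assessment or worse

print(evidence_state("High"))  # Unacceptable
```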

Model Summary

To summarize the model, Figure 1 shows the inference network at one static point. At this point, Evidence Nodes are observed in accordance with the three node states (Outstanding, Acceptable, or Unacceptable) to provide information on the assessment of the certainty/maturity in the seven Knowledge Area Nodes through the KA2E arcs. The assessments are evaluated according to the two Knowledge Area Node states: Good and Marginal. The Knowledge Area Nodes then propagate the information according to the KA2KA arcs to combine the belief, based on the evidence observed under the Knowledge Area, as well as the belief in other Knowledge Areas where a CIR relationship exists. Finally, the Direct Knowledge Area Nodes provide information to the Knowledge Checkpoint Node to assess the belief in the Knowledge Checkpoint Node states—Success and Failure—through the KA2KC arcs, which completes the information flow within a static point at a Knowledge Checkpoint.

The information at the static point within a Knowledge Checkpoint is then passed on to the next Knowledge Checkpoint through the seven Knowledge Area Nodes via the KA2KAi+1 arcs, where Evidence Node observations will again be made. The information flow process is then repeated 14 times, until belief is propagated to the last Knowledge Checkpoint Node—the Full Operating Capability (FOC) Knowledge Checkpoint Node.

DAPS Decision Process and Case Analysis

DAPS is an analytic model that assesses program performance in subject matter Knowledge Areas and measures the overall likelihood of success. It builds on the observations of evidence already being made through acquisition reviews and oversight. DAPS has significant potential to aid decision makers in holistically and logically processing the mountain of evidence to support their acquisition decision making at Knowledge Checkpoints. This section first discusses how DAPS could be used in the acquisition process and then demonstrates its use through a case analysis and an associated what-if analysis.

DAPS Support of Acquisition Process

The highest level of DAPS model output is the set of probability of success measurements at the Knowledge Checkpoint Nodes, based on the program knowledge (certainty) level attained. This top-level output is the cumulative metric that supports decision making at Knowledge Checkpoints, aided by the measurements at the second-level Knowledge Area Nodes.

Three alternative views are available to the decision maker to observe this top-level output of DAPS.

First is simply the probability of success at the Knowledge Checkpoint, P(KC = Success), as output by the DAPS model.

The second alternative view is the translation of the probability of success at Knowledge Checkpoint Nodes into a “Success Factor”—the likelihood ratio of Success over Failure, as shown in Equation (1). This view is intended to help decision makers better comprehend the chance of success in terms of a ratio, illustrating the odds that the program is more likely to succeed than fail.

Success Factor = P(KC = Success) / P(KC = Failure) = P(KC = Success) / (1 - P(KC = Success))     (1)

The success factor is presented in a format similar to the safety factor commonly used in engineering applications as a simple metric of a system’s adequacy, and to the widely used EVMS metrics, the Cost Performance Index and Schedule Performance Index. A success factor above 1 indicates that the program is more likely to succeed than fail, while a success factor below 1 indicates that the program is less likely to succeed than fail.
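Equation (1) reduces to a one-line computation. A minimal sketch, applied to the two probability measurements reported in the case analysis later in this article:

```python
# Success factor per Equation (1): the likelihood ratio of Success over Failure.

def success_factor(p_success: float) -> float:
    """SF = P(KC = Success) / P(KC = Failure), with P(Failure) = 1 - P(Success)."""
    return p_success / (1.0 - p_success)

print(round(success_factor(0.558), 2))  # 1.26, the case-analysis value at Milestone A
print(round(success_factor(0.716), 2))  # 2.52, the what-if value at Milestone A
```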

The third alternative view is by the use of adjectival ratings (DoD, 2011) to describe the Knowledge Checkpoint assessment level. Table 3 provides the range of success factors used for the case analysis, their respective P(KC = Success) ranges, their associated adjectival ratings and risk levels, as well as the prescriptive recommended decisions for the respective range and rating. The ranges and ratings recommended in Table 3 reflect a risk attitude based on heuristics drawn from safety factor applications. Each organization or decision maker would be able to change the ranges and associated ratings based on their own risk attitude.
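Such a guide reduces to a threshold lookup. The thresholds below are illustrative assumptions rather than the published Table 3 values, chosen only to be consistent with the case analysis that follows, where a success factor of 1.26 rates Marginal and 2.52 rates Acceptable.

```python
# Hypothetical Table 3-style decision guide: success factor -> adjectival
# rating and recommended decision. Threshold values are assumptions, meant
# to be tuned to an organization's own risk attitude.

def rate(success_factor: float) -> tuple[str, str]:
    if success_factor < 1.0:
        return "Unacceptable", "Delay, Corrective Action, or Shutdown"
    if success_factor < 2.0:
        return "Marginal", "Delay or Corrective Action"
    if success_factor < 4.0:
        return "Acceptable", "Proceed with Caution"
    return "Good", "Proceed"

print(rate(1.26))  # ('Marginal', 'Delay or Corrective Action')
print(rate(2.52))  # ('Acceptable', 'Proceed with Caution')
```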

Table 3. Knowledge Checkpoint Assessment and Decision Guide

In addition, the decision maker may observe the predicted probability of success measurements, or success factors, at future Knowledge Checkpoints, especially the FOC Knowledge Checkpoint—the final milestone. A success factor greater than 1 at FOC, indicating that success is more likely than failure as the ultimate program outcome, would help support the decision to proceed. A success factor less than 1, indicating that failure is more likely than success as the ultimate program outcome, would help support a decision of “Delay,” “Corrective Action,” or “Shutdown.” Depending on the observations of evidence, the predicted probability of success at future Knowledge Checkpoints may indicate a different trend for success than the assessment at the current Knowledge Checkpoint, providing additional insight into the program.

Case Analysis

A total of 14 case analyses were conducted as part of the DAPS research. Two were conducted with a prototype Bayesian network model based on the Naval Probability of Program Success v2 framework (Department of the Navy, 2012) for direct analysis and comparison. The remaining 12 were conducted on the final DAPS model; one of them is presented in the discussion that follows.

The intent of this case analysis is to test the sensitivity of the model to extreme but realistic conditions and to analyze the effect of conflicting evidence on program success. The case presents a hypothetical program in which program management, budgeting, and funding support are strong, the cost estimate is outstanding, and contracting/procurement actions are proceeding with adequate performance. However, staffing is determined to be inadequate, the program has not developed an SEP or any architecture, and quality risk is high due to the lack of technology maturity. The case is applied at Milestone A, with the DAPS model used to support the Milestone Decision Authority’s (MDA) milestone decision. The specific Evidence Node observations in DAPS appear in Table 4.

Table 4. Specific Evidence Node Observations in DAPS


The model’s Evidence Node observation inputs, as well as the Knowledge Area Node and Knowledge Checkpoint Node results, are shown in Figure 5. The probability of success at this Knowledge Checkpoint, as indicated by the Milestone A Knowledge Checkpoint Node, is 55.8%. The model produces this result even though only four observations are unfavorable, compared with 12 favorable. The program’s time, cost, procurement, and general management knowledge are likely to be good, while its scope, systems engineering, and quality knowledge are likely to be marginal.

Figure 5. Case Analysis Output at Milestone A


The probability of success measurement at Milestone A is derived from the scope, quality, time, and cost Knowledge Area measurements. Although the evidence at this Knowledge Checkpoint strongly supports that the program has attained Good knowledge in the time Knowledge Area (79.6%) and the cost Knowledge Area (99.9%), the evidence does not support the same argument for the quality and scope Knowledge Areas, measured at only 41.4% and 37% Good, respectively. Based on the expert knowledge elicitation conducted in the research, the DAPS model weights the influences of the quality and scope Knowledge Areas twice as strongly as those of the time and cost Knowledge Areas, producing the 55.8% Success measurement for the Milestone A Knowledge Checkpoint.
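These reported measurements are consistent with a simple weighted combination of the four direct Knowledge Areas. The sketch below is an illustrative reconstruction of that weighting, not the actual conditional probability table elicited for the model.

```python
# Illustrative reconstruction of the Milestone A Success measurement as a
# weighted average of P(Good) for the four direct Knowledge Areas, with
# quality and scope weighted twice as heavily as time and cost.
# This is an assumption for illustration, not the elicited DAPS CPT.

p_good = {"scope": 0.370, "quality": 0.414, "time": 0.796, "cost": 0.999}
weight = {"scope": 2, "quality": 2, "time": 1, "cost": 1}

p_success = sum(weight[ka] * p_good[ka] for ka in p_good) / sum(weight.values())
print(f"P(Milestone A = Success) ~ {p_success:.1%}")  # about 56%, near the reported 55.8%
```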

Figure 5 outlines the probability of success for the case analysis at each of the 15 Knowledge Checkpoints and their respective success factors, based on the observation inputs at Milestone A.

Based on the success factor of 1.26 at Milestone A, the Knowledge Level of the acquisition program is rated as Marginal, with a recommended action of Delay or Corrective Action. However, the fact that the success factors at the Knowledge Checkpoints past Milestone A are all above 1 bodes well for this program, indicating a solid foundation for possible future success.

Within the DAPS model, this can be attributed to the high general management Knowledge Area and cost Knowledge Area results. The general management Knowledge Area acts as the root node in each Knowledge Checkpoint instance computation, and has a strong influence on the other six Knowledge Areas. The cost Knowledge Area is the only leaf node within the Knowledge Area network structure and is a strong indicator of the adequacy of the other Knowledge Areas.

With the Marginal rating and the recommended decision of Delay or Corrective Action, the available evidence is not sufficient to firmly defend either a favorable decision to proceed or an unfavorable decision to shut down the program. However, the predicted future success factors indicate that the available observations of evidence support the likelihood of eventual success. Based on this DAPS assessment, the MDA would be advised to delay the Milestone A decision until the SEP and architecture artifacts are adequately developed. At that point, the program could be reassessed based on the developed artifacts and on the program’s approach to addressing the staffing shortage and technology maturity issues.

What-if Analysis

Prior to the actual Milestone A Review, the program manager might ask, “What if the Milestone A Review were delayed beyond the threshold date for a short period in order to develop the SEP and the architecture to an adequate level? What would that do to my probability of success measurement at Milestone A and beyond?” Figure 6 provides the Milestone A output from DAPS if the SEP and scope risk-level observations become Acceptable, while the Integrated Master Schedule (IMS) Progress becomes Unacceptable due to the missed milestone. This what-if scenario assumes all other observations of evidence for the case remain the same.

Figure 6. “What-if” Analysis at Milestone A


Note. ADM = Acquisition Decision Memorandum; GM = General Management; IDM = Investment Decision Memorandum; IGCE = Independent Government Cost Estimate; IMS = Integrated Master Schedule; KC = Knowledge Checkpoint; MSA = Milestone A; RiskRep = Risk Report; SE = Systems Engineering; SEP = Systems Engineering Plan; Strat = Strategic.

As shown in Figure 6, if the program manager worked to complete the missing artifacts and delayed the Milestone A Review beyond the acceptable range, the probability of success at Milestone A would improve from 55.8% to 71.6%, which raises the success factor from 1.26 to 2.52, doubling it. This change would move the Knowledge Level rating from Marginal to Acceptable and the Recommended Decision from Delay or Corrective Action to “Proceed with Caution.” The significant improvement, achieved even though two observations of evidence changed to favorable while one changed to unfavorable, can be attributed to (1) the higher weight of the scope Knowledge Area on Knowledge Checkpoint Success as compared to the time Knowledge Area, and (2) the overarching effects of the systems engineering Knowledge Area on the other Knowledge Areas.

Thus, if the program manager delayed the Milestone A Review until the SEP and the architecture were completed, the program manager would have provided the MDA better evidence to support a favorable decision to proceed, as compared to the original scenario. Even though falling behind schedule is undesirable, the what-if scenario with the Acceptable rating provided the MDA just enough proof of program maturity and knowledge certainty to be allowed to Proceed with Caution.

Conclusions

The DAPS model demonstrated the potential of an evidence-based Bayesian network model to support acquisition decision making. DAPS quantitatively assesses a program’s likelihood of success by building an inference network consisting of observable quality evidence, intermediate subject matter Knowledge Areas, defense acquisition Knowledge Checkpoints, and the respective CIRs among them. DAPS embodies the principles of knowledge-based acquisition in its ability to analyze DBS programs’ knowledge and certainty levels through the Knowledge Checkpoint and Knowledge Area measurements. Through these quantitative measures, DAPS can aid the acquisition decisions at Knowledge Checkpoints: whether to allow the program to proceed, delay it, order corrective actions, or shut it down.

The DAPS model represents an initial step toward modeling and analyzing the complex decision process for DBS acquisition and system development projects in general. Future research could expand the Bayesian network presented within the DAPS model, further build out the underlying complex interrelationships and environmental effects, and further develop the prescriptive capabilities to recommend decisions and actions. Potentially significant capabilities and enhancements could be achieved when the model is coupled with ever-advancing data science and computing power. By using analytics to represent the available information and evidence and to draw better inferences, decision makers will be able to arrive at better informed decisions, leading to more successful programs and desirable investment outcomes.



References

Bloch, M., Blumberg, S., & Laartz, J. (2012, October). Delivering large-scale IT projects on time, on budget, on value. McKinsey & Company Insights and Publications. Retrieved from http://www.mckinsey.com/insights/business_technology/delivering_large-scale_it_projects_on_time_on_budget_and_on_value

Defense Acquisition University. (2013). Defense acquisition guidebook. Retrieved from https://dag.dau.mil/Pages/Default.aspx

Department of Defense. (2011). Department of Defense source selection procedures. Washington, DC: Office of Defense Procurement and Acquisition Policy.

Department of the Navy. (2012). Naval PoPS guidebook—A program health assessment methodology for Navy and Marine Corps Acquisition Programs Version 2.2. Washington, DC: Author.

Government Accountability Office. (2005). NASA: Implementing a knowledge-based acquisition framework could lead to better investment decisions and project outcomes (Report No. GAO-06-218). Washington, DC: Author.

Government Accountability Office. (2011). Defense acquisitions: Assessments of selected weapon programs (Report No. GAO-11-233SP). Washington, DC: Author.

Government Accountability Office. (2012). DoD financial management (Report No. GAO-12-565R). Washington, DC: Author.

Neapolitan, R. E. (2003). Learning Bayesian networks. Upper Saddle River, NJ: Prentice Hall.

Norsys Software Corp. (2010). Netica 4.16 for MS Windows [Computer software]. Vancouver, Canada: Norsys Software Corp.

Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Francisco, CA: Morgan Kaufmann.

Project Management Institute. (2008). A guide to the project management body of knowledge (PMBOK guide) (4th ed.). Newtown Square, PA: Author.

Schum, D. A. (2001). The evidential foundations of probabilistic reasoning. Evanston, IL: Northwestern University Press.

Author Biographies

Dr. Sean Tzeng is currently one of the lead enterprise architects at the Office of the Department of the Navy Chief Information Officer (DON CIO). He has previously supported Naval Sea Systems Command in several positions, performing systems engineering, architecture, and acquisition program management functions. Dr. Tzeng holds MS and PhD degrees in Aerospace Engineering and Systems Engineering/Operations Research from George Mason University.

(E-mail address: sean.tzeng@navy.mil)

Dr. K. C. Chang is currently a professor of systems engineering and operations research, and the director of the Sensor Fusion Lab, Systems Engineering and Operations Research Department, George Mason University. He holds MS and PhD degrees in Electrical Engineering from the University of Connecticut. Dr. Chang is an Institute of Electrical and Electronics Engineers Fellow, a distinction he earned for his contributions to sensor data fusion and Bayesian inference.

(E-mail address: kchang@gmu.edu)
