The Challenges in Meeting OSD’s Obligation and Expenditure Rate Goals: A Closer Look at Potential Causal Factors, Their Groupings, and How They Modulate



Authors: Col Robert L. Tremaine, USAF (Ret.), and Donna J. Kinnear-Seligman

Managing an acquisition program in the DoD is a complicated process. The turbulence created by funding instability can make it even more difficult. Nonetheless, to help program offices maintain their overall funding execution pace, the Office of the Secretary of Defense (OSD) instituted Obligation and Expenditure rate goals over two decades ago. For numerous reasons, acquisition program managers have found it difficult to meet established Obligation and Expenditure rate goals. For purposes of this article, and based on Defense Acquisition University and OSD subject matter expertise, the authors looked more closely at the potential causal factors that could be interfering with the achievement of these goals.


Several months ago, Dr. Nancy Spruill, director of Acquisition Resources and Analysis, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, solicited support from the Defense Acquisition University (DAU) to help uncover the causal factors that could be interfering with the attainment of OSD’s Obligation and Expenditure rate goals. To learn more about the intervening obstacles, DAU, with assistance from OSD, developed a comprehensive survey that queried experienced and high-level DoD personnel involved in a weapon program’s decision chain. The data might also indicate the prevalence of any significant variances among the factors that could be undermining program execution itself. Results of the study (Higbee, Tremaine, Seligman, & Arwood, 2013) were presented to Assistant Secretary of Defense for Acquisition Katrina McFarland and other senior OSD personnel.

Research Methodology

Two hundred twenty-nine DoD personnel responded to this survey. The respondents comprised program office personnel (program managers, deputy program managers, budget and financial managers, and contracting officers); program executive officers and their chief financial officers; and a variety of senior staff at OSD, including Headquarters Financial Management senior staff and Senior Acquisition Executive (SAE) staff (Table 1). Because several functional areas had lower response rates, the more detailed analysis of the causal factors was restricted to the aggregate sample, given the confidence levels required to draw any inferences or conclusions.

Table 1. Individual Respondent Groups

Survey Respondent Details (Respondent Distributiona)
ACAT Levels: I = 91, II = 28, III = 23
Respondent Groups: Program Officeb = 142, PEOc = 63, Senior Staffd = 24
Totals: Responses = 229, Queried = 698, Response Rate = 33%

aIncludes sampling from all DoD Components and several Defense Agencies.

bProgram managers, deputy program managers, business-financial management (BFM) managers, deputy BFM managers, and contracting officers.

cProgram executive officers (PEO), deputy PEOs, and their chief financial officers.

dHeadquarters, Financial Management and Senior Acquisition Executive staff.

Respondents ranked the impact of 64 factors under nine categories (Figure 1). The researchers then assessed the rankings using a top box (TB) three methodology (i.e., the percentage of 5, 6, and 7 responses on a Likert-like scale of 1–7). Since the frequency of occurrence for some factors could also be contributing to the interference, the researchers included an additional selection (daily, weekly, monthly, etc.) to isolate potential flash points for any factor.

Figure 1. Factor Categories
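The top box (TB) three computation described above can be sketched in a few lines of Python; the ratings below are hypothetical, invented for illustration only, and are not survey data.

```python
# Top-box-three (TB3) score: the share of responses rated 5, 6, or 7
# on the 1-7 Likert-like impact scale used in the survey.

def top_box_3(responses):
    """Return the fraction of responses falling in the top three boxes (5-7)."""
    if not responses:
        raise ValueError("no responses to score")
    return sum(1 for r in responses if r >= 5) / len(responses)

# Hypothetical ratings for one factor from 20 respondents
ratings = [7, 6, 5, 5, 7, 2, 3, 6, 1, 7, 4, 5, 6, 2, 7, 5, 3, 6, 4, 7]
print(f"TB3 = {top_box_3(ratings):.0%}")  # prints "TB3 = 65%"
```

A factor's TB score is then simply the percentage of respondents who rated its adverse impact in the top three boxes, which is the figure reported in the TB column of Table 2.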

Discussion

Factor Distribution

Figure 2 shows the distribution of all 64 factors assessed. Three factors reported an impact rating of two standard deviations above the mean (denoted by +2σ); six factors reported an impact rating of one standard deviation above the mean (denoted by +1σ); and 22 factors rose above an average impact rating (denoted by x). The remaining 33 factors fell below x.

Figure 2. Respondent Histogram

Nineteen of the 22 factors measured for frequency of occurrence had an impact rating above 39 percent. Sometimes, just one occurrence of a factor appeared to have a significant impact.
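The standard-deviation banding behind Figure 2 can be illustrated with a simple threshold check. The thresholds mirror those reported in Table 2 (x = 39 percent, +1σ = 53 percent, +2σ = 67 percent, implying σ ≈ 14 percent), and the four factor scores come from Table 2; the code itself is only a sketch.

```python
def band(tb, mu, sigma):
    """Classify a factor's top-box (TB) score relative to the sample mean."""
    if tb >= mu + 2 * sigma:
        return "+2 sigma"
    if tb >= mu + sigma:
        return "+1 sigma"
    if tb > mu:
        return "above mean"
    return "below mean"

# Thresholds implied by Table 2: mean = 39%, +1 sigma = 53%, +2 sigma = 67%
mu, sigma = 0.39, 0.14
for factor, tb in [("F1", 0.69), ("F4", 0.64), ("F10", 0.52), ("F40", 0.33)]:
    print(f"{factor}: TB = {tb:.0%} -> {band(tb, mu, sigma)}")
```

Run against the Table 2 scores, this reproduces the groupings discussed below: F1 lands above +2σ, F4 above +1σ, F10 above the mean, and F40 below it.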

Causal Factors Rank Ordered

Table 2 lists the relative ranking of all 64 factors by TB score in descending order. This ranking provides a comprehensive view of all factors, although the remaining discussion in this article addresses only the factors above x. One factor in particular, “Unrealistic, overly optimistic spend plans” (F10), is important to note since a spend plan serves as a written forecast of a program’s funding needs and initially establishes Obligation and Expenditure projections. However, spend plans are also subject to so many real-world eventualities that updating them can become a full-time job.

Table 2. Impact Factor Ratings in Aggregate Descending Order

 

Factors Rated by Adverse Impact TB Mean σ
F1 Late release of full obligation budget authority due to Continuing Resolution Authority (CRA) 69% 5.29 2.41
F2 Contract negotiation delays 67% 5.06 2.59
F3 Contract award delays 67% 5.00 2.56   (+2σ = 67%)
F4 Shortage of contracting officers 64% 4.79 2.58
F5 Congressional mark/rescission 61% 4.87 2.65
F6 Contractor proposal prep delays 60% 4.87 2.59
F7 OSD-directed Resource Management Decision (RMD) 58% 4.50 2.63
F8 Request for Proposal (RFP) prep delays 57% 4.63 2.46
F9 Source selection delays 55% 4.44 2.53   (+1σ = 53%)
F10 Unrealistic, overly optimistic spend plans 52% 4.30 2.44
F11 Changes in user requirements 51% 4.16 2.43
F12 Changes to program acquisition strategy 51% 4.41 2.52
F13 Changes in other stakeholder requirements 50% 4.32 2.34
F14 Preparing Defense Acquisition Executive (DAE)-level review and decision 50% 4.15 2.18
F15 Lack of decision authority at expected levels 50% 4.22 2.52
F16 Implementation of new OSD/Service policy 49% 4.20 2.59
F17 Component-directed Program Objective Memorandum (POM) adjustment 49% 4.26 2.51
F18 Awaiting reprogramming action 49% 4.23 2.44
F19 Changes in user priorities 47% 4.00 2.38
F20 Realistic spend plans, but risks materialized 45% 4.00 2.21
F21 Program delays from additional development, testing, or other prerequisite events 44% 4.09 2.35
F22 Defense Contract Audit Agency (DCAA) administrative actions 44% 3.92 2.61
F23 Unplanned Congressional adds to Program Baseline (PB) request 43% 3.90 2.41
F24 Use of undefinitized contract action delays 42% 3.73 2.56
F25 Expenditure contingent on hardware delivery 41% 3.92 2.41
F26 Loss of funding through reprogramming action to higher priority requirements to program executive officer (PEO) portfolio 41% 3.89 2.46
F27 Lack of experience levels in key acquisition functional areas 40% 3.90 2.30
F28 Awaiting DAE-level review and decision 40% 3.50 2.42
F29 Shortage of cost estimators 40% 3.67 2.37
F30 Shortage of business/finance personnel 39% 3.66 2.32
F31 Programmatic conflicts between government and prime contractor 39% 3.66 2.32   (x = 39%)
F32 Preparing Service Acquisition Executive/Component Acquisition Executive (SAE/CAE)-level review and decision 38% 3.74 2.02
F33 Delays in contractor payment due to late invoices 37% 3.67 2.35
F34 Unobligated proper year funding not adequately factored 36% 3.57 2.23
F35 Component Comptroller Withhold 35% 3.58 2.34
F36 Defense Contract Management Agency (DCMA) administrative actions 35% 3.42 2.36
F37 Redirection of contractor efforts 35% 3.47 2.23
F38 OSD Comptroller Withhold 34% 3.43 2.37
F39 Shortage of technical/engineering/test personnel 34% 3.51 2.17
F40 Shortage of auditors 33% 3.17 2.43
F41 Slower burn rate than expected due to unfavorable Schedule Performance Index 33% 3.25 2.14
F42 Awaiting SAE/CAE-level review and decision 32% 3.33 2.30
F43 SAE/CAE/Component-directed reprogramming 32% 3.27 2.30
F44 Rescission 32% 3.16 2.46
F45 Changes in system specs 31% 3.30 2.03
F46 Tenure of program manager (PM) and others in key positions 31% 3.11 2.18
F47 Holding award/incentive fees in commitments for future obligation 29% 3.23 2.35
F48 Inadequate training 29% 3.29 2.13
F49 Shortage of managers 28% 3.10 2.17
F50 Insufficiently planned Overseas Contingency Operations (OCO) funding 27% 3.07 2.27
F51 Shortage of staff 26% 2.99 2.12
F52 Contractor rework 26% 3.00 2.14
F53 Deferred payments for scheduling earning fees, progress payments/performance-based payments 25% 3.08 2.20   (-1σ = 25%)
F54 Effect of contract type on outlay rates 24% 2.99 2.17
F55 Material/Systems Command Comptroller Withhold 24% 2.71 2.17
F56 Awaiting PEO-level review and decision 24% 2.80 2.01
F57 Termination liability 22% 2.72 2.17
F58 Insufficient workplace tools/apps 22% 2.82 2.01
F59 PEO-directed programming 21% 2.83 2.10
F60 Slower burn rate than expected due to favorable Cost Performance Index 21% 2.77 1.95
F61 PEO Withhold 20% 2.39 1.99
F62 Preparing PEO-level review and decision 20% 2.66 1.53
F63 Production line issues 19% 2.82 2.08
F64 Labor disputes 10% 1.89 1.64

Factors and Respondent Groups

Figure 3 accounts for the 31 factors above the mean, broken out by the respondent groups depicted in Table 1. These 31 factors were the only ones evaluated further in this study unless a factor shifted above x after a more detailed correlation delineation (e.g., Acquisition Category [ACAT] level, military component, position, etc.). Unexpectedly, the individual factors showed widespread perception disparities among the respondent groups for the factors that fell below +2σ. After analyzing the individual factors across all the respondent groups, seven of the 31 factors had an unusually large σ. Because of these conspicuous gaps, the authors turned to the qualitative data and watched for any strong correlations (e.g., positive correlation coefficients [r] > 0.7) to better understand the reasons for the differences, as well as the influence of any intervening and/or moderating factor couplings. The remaining discussion addresses the 31 impact factors in descending order from highest to lowest.

Figure 3. Impact Ratings Above x in Aggregate Descending Order with Respondent Group Low and High Ratings

Factors Ranked Two Standard Deviations Above the Mean (+ 2σ)

In Figure 3, Late release of full obligation/budget authority due to Continuing Resolution Authority (CRA) (F1), Contract negotiation delays (F2), and Contract award delays (F3) all rose above +2σ, where 67 percent or more of the respondents claimed they had the highest adverse impact of all factors measured. The occurrence of CRA had the most significant negative impact on Obligation and Expenditure rates. It also had one of the smallest variances (σ) among the respondent groups. Even with the expectation that a CRA might prevail, and the planning that followed for such a likely event, many PMs pointed to an overly conservative and slow internal vetting posture that created additional obstacles to meeting OSD goals. In their responses to qualitative questions, several PMs recommended using some sort of “CRA variable” to temporarily offset the consequences of a CRA if the required funds were not released as originally projected. Next in rank order were contract negotiation and contract award delays. The respondents emphasized that DoD could fix these problems more readily since, unlike CRA, these factors were under internal control. When asked what could be done to reduce the adverse effects of all three factors, the respondents recommended the “inclusion of more risk mitigation into contract award planning, more realistic timelines, more realistic plans, greater funding stability, reduction in bureaucratic obstacles, more synchronized internal processes, and better aligned accounting systems.”

Factors Ranked One Standard Deviation Above the Mean (+1σ)

This next line of demarcation (Figure 3, factors F4–F9) included many contracting-related factors (i.e., Shortage of contracting officers [F4], Contractor proposal prep delays [F6], Request for Proposal [RFP] prep delays [F8], and Source selection delays [F9]). Nearly all of these factors showed a more alarming σ between the individual respondent groups, as high as 18 percent in one case (Contractor proposal prep delays [F6]). For this particular factor, procurement contracting officers (PCO) reported the highest impact, while PMs ranked it the lowest. Senior staff cited Shortage of contracting officers (F4) as creating the highest impact, while PCOs reported it had the lowest impact. With a σ of 7 percent, it was the lowest among all six factors in this grouping.

Given that six of the top nine factors were contract-specific factors that ranked above +1σ (Figure 3), it came as little surprise to see so many reinforcing comments surface.

  • “Lack of experienced and qualified contract specialists . . ..”
  • “Alarmingly low personnel qualified . . . many unsure/lack guidance and experience . . . .”
  • “Significantly stressed with overtime to complete all contracting actions prior to close of fiscal year.”
  • “Inadequate training . . . inordinate number of interns with very low experience in all career fields.”
  • “Lack of sufficient legal personnel trained in Acquisition.”
  • “Loss in brain trust and skill to develop complete, clear SOWs [Statements of Work] using proactive contract language.”
  • “SOW writing and the teaching of SOW writing classes is greatly left to contractors or support contractors, resulting in unclear language.”

The highest frequencies of occurrence were also associated with contracting-related factors (Figure 3). By far, the aggregate respondents rated Shortage of contracting officers (F4) as the single highest among all 22 factors measured for frequency. Because contracting timelines are generally lengthy, any disruption appears to have an unmistakable impact on contract award, and Shortage of contracting officers (F4) was seen as having the most significant impact. Several respondents said “multiple contracting actions were having compounding consequences.”

The two remaining factors above +1σ, Congressional mark (F5) and OSD-directed RMD adjustment (F7), had very low frequencies of occurrence but still reported a very high impact, similar to CRA. When combined with F4, all three appear to be a strong antecedent force (or moderating factor) on the already time-consuming chain of contracting actions.

Factors Ranked Above x

This final grouping (Figure 3, factors F10–F31) accounted for the remaining 22 impact factors. Perception polarities persisted, especially between two respondent groups: senior staff outside the program office and PMs inside program offices. For PMs, in every case except one (Component-directed POM adjustment [F17]), the impact factors ranked well below x. In sharp contrast, senior staff, in every case except that same one, rated the majority of the top 31 factors as having the largest impact, or close to it, among all respondent groups.

Even though the remaining impact factors above x are still significant, the researchers shifted the focus to the presence of any strong correlations since factor couplings could be having a moderating effect and require a closer look.

Factors That Correlate

Table 3 summarizes the strongest and weakest factor correlations for all respondents queried. Several strong correlations surfaced for factors above x. Changes in user requirements (F11) and Changes in user priorities (F19) were very strongly correlated. In three instances, two factors above x were very strongly correlated with three factors that fell below x: Lack of experience levels in key acquisition functional areas (F27) with Inadequate training (F48); Lack of experience levels in key acquisition functional areas (F27) with Tenure of PM and others in key positions (F46); and DCMA administrative actions (F36) with DCAA administrative actions (F22). Three contract-related factors (F4, F8, and F9) showed weaker correlations than expected. A weak correlation does not make a factor any less important, but any course of action intended to mitigate an impact factor that is strongly correlated with another should weigh that coupling more heavily. For example, the turnover of PMs could be part of the experience quotient.

Table 3. Factor Correlation Coupling
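The screen for strongly coupled factors (r > 0.7) can be sketched as a pairwise Pearson correlation over per-respondent ratings; the three rating vectors below are hypothetical, invented only to show the mechanics.

```python
from itertools import combinations
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two rating vectors."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-respondent impact ratings for three factors
ratings = {
    "F11 (user reqs)":       [7, 6, 5, 7, 4, 6, 5, 7],
    "F19 (user priorities)": [7, 6, 5, 6, 4, 6, 5, 7],
    "F40 (auditors)":        [2, 5, 1, 6, 3, 2, 6, 1],
}
for (fa, xs), (fb, ys) in combinations(ratings.items(), 2):
    r = pearson_r(xs, ys)
    flag = "  <- strongly coupled" if r > 0.7 else ""
    print(f"{fa} vs {fb}: r = {r:+.2f}{flag}")
```

With these invented vectors, only the F11/F19 pair clears the 0.7 threshold, mirroring the strongly correlated requirements/priorities coupling reported in Table 3.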

Factor Plotting

The researchers generated a scatter plot (Figure 4) that shows how the 31 factors varied between impact and frequency of occurrence. In some cases, high-impact factors occurred with low frequency. In other cases, the frequency of occurrence compounded the impacts.

Figure 4. Scatter Plot of Impact Factors with Frequency

The research data were rebased to a Likert-like scale for plotting the frequency and adverse impact response averages. The researchers included factors F29–F31 in Figure 4 because they fell only slightly below x.
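The article does not describe how the averages were rebased, so the min-max mapping below is only one plausible way to put raw frequency averages on the same 1–7 scale as the impact responses; the raw values are hypothetical.

```python
def rebase(values, lo=1.0, hi=7.0):
    """Linearly map raw averages onto a [lo, hi] Likert-like range."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo] * len(values)   # degenerate case: all values identical
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]

# Hypothetical raw frequency-of-occurrence averages for five factors
raw = [0.2, 1.4, 3.1, 0.9, 2.5]
print([round(v, 2) for v in rebase(raw)])  # -> [1.0, 3.48, 7.0, 2.45, 5.76]
```

After rebasing, both the frequency and impact axes share the same 1–7 range, which is what allows the two measures to be plotted against each other in Figure 4.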

For the relationships that were co-linear (e.g., the most strongly correlated, depicted in Table 3), the researchers explored whether they also behaved as strong predictors across the sample population. After investigating t-ratios (used with ACAT-level factors) and beta-weights (used for the sample population), the researchers determined the relationships were not significantly co-linear enough to substantiate causation. Consequently, there was no merit in running any further regression that analyzed the factors as predictors. However, the researchers conducted another set of tests by modulating certain respondent demographics while holding the others constant.

Factor Plotting—Modulating ACAT Levels

Figure 5 shows how the factor rankings changed after isolating ACAT levels.

Figure 5. Factor Ratings ≥ x ACAT Level

ACAT I. Funding and requirements factors (F18, F19, F23, and F26) previously ranked above x dropped below x while Contractor proposal prep delays (F6) rose markedly to become the highest impact factor. Component-directed POM adjustment (F17) made a noticeable shift to the top nine factors (or one standard deviation above the mean).

ACAT II. Fifteen of the factors previously ranked above x dropped below x (leaving only F1, F2, F3, and F17). Four of the factors that fell below x included contracting-related factors (F4, F6, F8, and F9).

ACAT III. Six of the factors (F16, F18, F19, F21, F23, and F24) previously ranked above x dropped below x. Shortages of personnel (F29, F30, F39, and F51) and Redirection of contractor efforts (F37) became more dominating issues for the respondents. Changes in user priorities (F19), Changes in other stakeholder requirements (F13), and Loss of funding through reprogramming action to higher priority requirements to PEO portfolio (F26) all moved significantly above x.

What does this mean? The more detailed differentiation seen in the scatter plots gives additional insight into the factors that would benefit from a more focused investigation of each ACAT. In some cases, reducing frequency of occurrence or perhaps instituting more early warning metrics could have a marked effect in reducing any adverse impacts.

Factor Plotting—Modulating Service Components and DoD

Figure 6 shows how the factor rankings changed after isolating Service Components.

Figure 6. Factor Ratings ≥ x By Component

U.S. Army. No factors fell below x. The Army was also the only component where factors moved above x: Shortage of auditors (F40) and Insufficiently planned Overseas Contingency Operations (OCO) funding (F50). Based on historical information, OCO funding will most likely continue to present challenges since contingency funding needs are less predictable on a wartime footing.

U.S. Air Force. Shortage of contracting officers (F4) and Use of undefinitized contract action delays (F24) both dropped below x. Even though Shortage of contracting officers dropped, there were no companion drops among the other contracting-related factors.

U.S. Navy. Six factors dropped below x. Implementation of new OSD/Service Policy (F16), Awaiting reprogramming action (F18), Changes in user priorities (F19), Unplanned Congressional adds to PB request (F23), Use of undefinitized contract action delays (F24), and Loss of funding through reprogramming action to higher priority requirements to PEO portfolio (F26) became less of an impact. For Navy respondents, there was no notable movement in the top six contracting-related factor collective.

DoD. Three factors fell below x (i.e., Implementation of new OSD/Service policy [F16], Component-directed POM adjustment [F17], and Use of undefinitized contract action delays [F24]), while three factors rose above x: OSD Comptroller Withhold (F38), Shortage of business/finance personnel (F30), and Shortage of technical/engineering/test personnel (F39).

What does this mean? The Army was the only one of the four groupings that was significantly affected by Use of undefinitized contract action delays (F24); and DoD was the only one of the four groupings that was significantly affected by OSD Comptroller Withhold (F38), Shortage of business/finance personnel (F30), and Shortage of technical/engineering/test personnel (F39).

Factor Plotting—Modulating Respondent Groups

Figure 7 shows how the factor rankings changed after isolating the respondent groups.
Program Office. Six factors dropped below x: Awaiting reprogramming action (F18), Changes in user priorities (F19), Program delays from additional development, testing, or other prerequisite events (F21), Unplanned Congressional adds to PB request (F23), Use of undefinitized contract action delays (F24), and Loss of funding through reprogramming action to higher priority requirements to PEO portfolio (F26). No factors rose above x.
PEO. Use of undefinitized contract action delays (F24) fell below x, while four factors rose above x: Shortage of cost estimators (F29), Shortage of business/finance personnel (F30), Component Comptroller Withhold (F35), and Insufficiently planned OCO funding (F50).
Senior OSD Staff. Awaiting reprogramming action (F18) fell below x while 13 factors rose above x.

Figure 7. Factor Ratings ≥ x By Respondent Group

For PEO and senior OSD staff, personnel shortages (F29, F30, F20, and F40) became more dominant, while Awaiting reprogramming action (F18) became less dominant for program office and senior OSD staff personnel. Of the three groupings in this particular case, nowhere were there more factor increases than for senior OSD staff personnel. The rise in Unobligated prior year funding not adequately factored (F34), SAE/CAE/Component-directed reprogramming (F43), and PEO-directed programming (F59) seemed intuitive since senior staff may see firsthand how long it takes program managers to react to changes in their plans. However, it was very interesting to note the disparities between how senior OSD staff personnel and program office personnel responded to survey queries regarding the major impediments to meeting OSD’s Obligation and Expenditure rate goals, especially for shortage-of-personnel and contract-specific factors (i.e., Changes in system specs [F45] and Redirection of contractor efforts [F37]). What does this mean? This wide perception disparity deserves deeper investigation since it could be creating false perceptions that lead to misrepresented positions and even unsubstantiated decisions.

Factor Plotting—Modulating Program Phase and Cost Projections

Figure 8 shows how the factor rankings changed after modulating by program phase for programs whose negotiated contract costs were significantly lower than projections.

Figure 8. Factor Ratings ≥ x By Program Phase When Negotiated Contract Costs Were Lower Than Projected

Development Phase (Technology Development [TD] and Engineering, Manufacturing, and Development [EMD]). Four factors dropped below x, including Changes in other stakeholder requirements (F13), Awaiting reprogramming action (F18), Unplanned Congressional adds to PB request (F23), and Loss of funding through reprogramming action to higher priority requirements to PEO portfolio (F26). Four factors rose above x, including Shortage of business/finance personnel (F30), Programmatic conflicts between government and prime contractor (F31), Shortage of technical/engineering/test personnel (F39), and Holding award/incentive fees in commitment for future obligation (F47). In two cases, Programmatic conflicts between government and prime contractor (F31) and Implementation of new OSD policy (F16) made a noticeable shift to the top nine factors (or one standard deviation above the mean).

Procurement Phase (Low Rate Initial Production [LRIP] and Full Rate Production [FRP]). Eight of the factors that previously ranked above x dropped below x. The majority of the movement was seen in factors involving program delays, and funding and requirements changes. The factors involving program delays included Program delays from additional development, testing, or other prerequisite events (F21), and Use of undefinitized contract action delays (F24). The factors involving funding delays included Unplanned Congressional adds to PB requests (F23), and Awaiting reprogramming action (F18). The factors involving requirements changes included Changes in user requirements (F11), Changes in other stakeholder requirements (F13), Changes in user priorities (F19), and Loss of funding through reprogramming action to higher priority requirements to PEO portfolio (F26). Both Unobligated prior year funding not adequately factored (F34) and Shortage of technical, engineering, and test personnel (F39) rose above x.

In both phases, Changes in other stakeholder requirements (F13), Awaiting reprogramming action (F18), Unplanned Congressional adds to PB requests (F23), and Loss of funding through reprogramming action to higher priority requirements to PEO portfolio (F26) fell below x. In the context of modulating by program phase, the researchers found that any factor movement was negligible when costs met or exceeded projections.

What does this mean? Changes in user requirements (F11) could potentially be more stable during the production phase and no longer become a factor. However, the emergence of Programmatic conflicts between government and prime contractor (F31) during the development phase could perhaps be the sign of competing motivations between DoD and industry as well as more prominent technical and schedule risks. All three could result in programmatic delays.

Respondent Comments Regarding the Factors

The respondents were also asked several open-ended questions about whether they found the use of metrics helpful in better meeting OSD goals as well as any process improvements they would recommend. They stated the metrics making a difference for them included “real-time monitoring, frequent reviews, tight coupling to contractor actions and milestones, and realistic spend plans.” When asked about any necessary improvements to current processes, the respondents recommended including a CRA duration variable that readjusted expectations, establishing more realistic program goals, ensuring more funding stability, reducing bureaucratic obstacles and streamlining more outdated processes, forging greater cooperation between government and industry, and synchronizing disparate accounting systems used in Obligation and Expenditure reporting.

The respondents provided a number of additional qualitative comments that reinforced the quantitative data, especially for the factors at or above x that were causing obligation rate interference.

Figure 9. Sampling of Respondent Comments

Recommendations

What next? Based on the research findings presented in this article, a number of impact factors above x, if sufficiently addressed, could help lower the barriers to the attainment of OSD’s Obligation and Expenditure rate goals. Hence, the researchers offer the following recommendations:

  • Institute an Obligation and Expenditure baseline adjustment for programs affected by any funding delay or limitation (especially CRA), then measure a program’s progress to that revised adjustment.
  • More thoroughly review the entire contracting action value chain. Look closely at efficiency opportunities along the review and decision cycle continuum, especially from the time an RFP is developed to the time a contract is let. Set reasonable time thresholds with triggers that afford more proactive measures by PMs—and confirm productivity.
  • Establish a recurring communication forum among key stakeholders, especially PMs and OSD, to dialogue more frequently and eliminate perception gaps that could be creating counterproductive actions and misconceptions.
  • Track requirement changes throughout a program’s life and look more strategically at their effects on program execution and the accompanying Acquisition Program Baselines. Regardless of ACAT level, an obvious ripple effect is associated with any substantive change in program content across a program’s life, and it should be codified more comprehensively. However, there are also issues specific to different ACAT levels that must be noted.
  • Review the program review cycle and streamline wherever possible. Checks and balances within the DoD’s acquisition community have always been a vital component of program execution, but every review should have a distinct purpose, exit criteria, and an associated suspense date that is just as material and credible.
  • Build and maintain realistic spend plans, measure against them, account for contingencies, and make adjustments as frequently as real-world events require. Since spend plans are subject to so many real-world programmatic eventualities, updating them is vital. Collaborate with senior leadership early enough about required adjustments to avoid more draconian measures later.
  • Validate the key personnel shortage areas and recognize the time it takes to rebuild those experience levels.
  • Nurture experience in key functional areas with strong catalysts such as disciplined on-the-job training, programs, mentoring, and guidance. With the recent surge of contracting specialist interns, their progress as a group should be measured more carefully.
  • Evaluate the real effects of reprogramming action or realignment of future budget decisions before any corrective action is taken.
  • Conduct a wholesale review of the program execution metrics currently in place and determine their usefulness and effectiveness. What are they actually measuring? How are these data (metrics) used and are they worth collecting? Consolidate whenever practical and eliminate the data (metrics) that have outlived their usefulness.
  • Encourage innovation and avoid the “bookkeeping process” that, as a recent RAND Corporation study found, could be limiting improvements championed by PMs (Blickstein & Nemfakos, 2009).

Summary

On Feb. 5, 2013, the authors presented the study results discussed in this article to Assistant Secretary of Defense for Acquisition Katrina McFarland and other key OSD senior staff. With the metrics she plans to institute with Better Buying Power (BBP) 2.0, DoD will have another means to address many of the impact factors discussed herein and a host of other variables that could be encumbering program execution expectations.

On Sept. 10, 2012, Under Secretary of Defense for Acquisition, Technology and Logistics Frank Kendall, and Under Secretary of Defense (Comptroller) Robert F. Hale, jointly signed a memorandum that listed six tenets that could help combat some of the same factors discussed in this article regarding the disposition of DoD’s unobligated funds (DoD, 2012). Over time, realization of these tenets might also reduce perception disparity gaps among the key personnel that have a hand in ensuring our warfighters continue to get the weapon systems they need—and on time—to best support our national military strategy.



Author Biographies

Col Robert L. Tremaine, USAF (Ret.), is an Associate Dean at the Defense Acquisition University West Region with over 26 years of experience in various system acquisitions. He holds a BS from the U.S. Air Force Academy and an MS from the Air Force Institute of Technology. He is level III certified in both Program Management and Engineering. He is also a graduate of the Canadian Forces Command and Staff College, and the U.S. Army War College.

(E-mail address: robert.tremaine@dau.mil)

Ms. Donna J. Kinnear-Seligman is a program analysis manager and management information systems specialist at the Defense Acquisition University West Region. She has over 20 years of experience with developing and managing complex business knowledge applications, performing comprehensive system analyses, and conducting extensive research. She holds a BS in Information Decision Systems from San Diego State University and is completing coursework toward an MS in Program Management.

(E-mail address: donna.seligman@dau.mil)


 References

Blickstein, I., & Nemfakos, C. (2009). Improving acquisition outcomes: Organizational and management issues (Document No. OP-262-OSD). Retrieved from the RAND Corporation Web site at http://m.rand.org/content/dam/rand/pubs/occasional_papers/2010/RAND_OP262.pdf

Department of Defense. (2012). Department of Defense management of unobligated funds; obligation rate tenets [Memorandum]. Retrieved from http://www.acq.osd.mil/docs/OSD%20Memo_DoD%20Mgt%20of%20Unobligated%20Funds_Obligation%20Rate%20Tenets_10Sep12.pdf

Higbee, J., Tremaine, R., Seligman, D., & Arwood, S. (2013). Obligations & expenditures: An investigation into the factors that affect OSD goals. Defense Acquisition University presentation to Assistant Secretary of Defense (Acquisition), Washington, DC, March 13, 2013. Retrieved from https://dap.dau.mil/aphome/Documents/OSD%20Obs%20%20Exps%20Study_2013.pdf
