OSD’s Obligation and Expenditure Rate Goals: An Examination of the Factors Contributing to the Interference


Authors: Robert L. Tremaine and Donna J. Seligman

Several months ago, Dr. Nancy Spruill, director of Acquisition Resources and Analysis, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD (AT&L)), solicited support from the Defense Acquisition University (DAU) to help uncover the causal factors that could be interfering with attainment of the Obligation and Expenditure rate goals of the Office of the Secretary of Defense (OSD).

Decades earlier, OSD instituted these goals as a benchmark to help weapon systems program offices maintain the required execution pace of appropriated funding. However, due to a number of internal and external factors, Department of Defense (DoD) acquisition programs have sometimes found it difficult to meet these goals.

To learn more about the intervening obstacles, DAU, with assistance from OSD, developed a comprehensive survey that queried experienced and high-level DoD personnel involved in a weapon program’s decision chain. What we learned from the subsequent analysis confirmed several previous suspicions. The data also indicated the prevalence of underlying perception variances among many of the factors that could be undermining program execution itself.

The study results were presented to Assistant Secretary of Defense for Acquisition Katrina McFarland and other senior OSD personnel. The study also reinforced the value of the memorandum on the disposition of DoD’s unobligated funds, which was signed jointly by Under Secretary of Defense (AT&L) Frank Kendall and Under Secretary of Defense (Comptroller) Robert F. Hale.

Recommendations Up Front

Based on the research findings of this study, there are a number of impact factors above the mean that, if addressed sufficiently, could help lower the barriers to attainment of OSD’s Obligation and Expenditure rate goals. Specifically:

  • Institute an Obligation and Expenditure baseline adjustment for programs affected by any funding delay or limitation (especially Continuing Resolution Authority [CRA]), then measure a program’s progress to that revised adjustment.
  • More thoroughly review the entire contracting action value chain. Look closely at efficiency opportunities along the review and decision cycle continuum, especially from the time a request for proposal (RFP) is developed to the time a contract is let. Set reasonable time thresholds with triggers that afford more proactive measures by program managers (PMs) and confirm productivity.
  • Establish a recurring communication forum among key stakeholders, especially PMs and OSD, to dialogue more frequently and eliminate perception gaps that could be creating counterproductive actions and misconceptions.
  • Track requirement changes throughout a program’s life and look more strategically at the effects on program execution and accompanying Acquisition Program Baselines (APBs). Regardless of Acquisition Category (ACAT) level, there is an obvious ripple effect associated with any substantive change in program content across a program’s life that should be codified more comprehensively. However, there also are issues associated with different ACAT levels.
  • Review the program review cycle and streamline wherever possible. Checks and balances within the DoD’s acquisition community are a vital constituent component of program execution—but every review should have a distinctive purpose, exit criteria, and associated suspense date that are just as material and credible.
  • Build and maintain realistic spend plans, measure against them, account for contingencies, and adjust them as often as real-world conditions require. Collaborate with senior leadership early enough about required adjustments to avoid more draconian measures later.
  • Validate the key personnel shortage areas and recognize the time it takes to rebuild those experience levels.
  • Nurture experience in key functional areas with strong catalysts such as disciplined on-the-job-training programs, mentoring, and guidance. With the recent surge of contracting specialist interns, their progress as a group should be measured more carefully.
  • Evaluate the real effects of reprogramming action or realignment of future budget decisions before any corrective action is taken.
  • Conduct a wholesale review of the program execution metrics currently in place and determine their usefulness and effectiveness. What are they actually measuring? Consolidate whenever practical and eliminate those that have outlived their usefulness.


Research Methodology

Two hundred twenty-nine DoD personnel responded to this survey. The respondents comprised program office personnel (program managers [PMs], deputy PMs, budget and financial managers, and contracting officers); program executive officers (PEOs) and their chief financial officers (CFOs); and a variety of senior staff at OSD, including Headquarters Financial Management (FM) senior staff and Senior Acquisition Executive (SAE) staff (Table 1). Because several functional areas saw lower response rates, a more detailed analysis of the causal factors was restricted to the aggregate sample, given the confidence levels required to draw any inferences or conclusions.

Table 1. Individual Respondent Groups


Respondents ranked the impact of 64 factors under nine categories (Figure 1). The researchers then assessed the rankings using a top box three methodology (i.e., averaging the percentages of 5, 6, and 7 on a Likert scale from 1 to 7). Since the frequency of occurrence for some factors also could be contributing to the interference, the researchers included an additional selection (e.g., daily, weekly, monthly, etc.) to isolate any potential ignition areas.
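The top-box-three scoring described above can be sketched in a few lines of code. This is an illustrative reconstruction of the method, not the study’s actual analysis code, and the sample ratings below are hypothetical:

```python
from collections import Counter

def top_box_three(ratings):
    """Top-box-three score: the share of responses rating a factor
    5, 6, or 7 on a 1-7 Likert scale, expressed as a percentage."""
    counts = Counter(ratings)
    top = counts[5] + counts[6] + counts[7]
    return 100.0 * top / len(ratings)

# Hypothetical Likert ratings for one factor from ten respondents
sample = [7, 6, 5, 4, 6, 2, 7, 5, 3, 6]
print(round(top_box_three(sample), 1))  # 70.0
```

Averaging these per-factor percentages across respondents yields the impact ratings that were then ranked in Figure 2.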

Figure 1. Factor Categories



The Causal Factors Contributing to Low Obligation and Expenditure Rates

Figure 2 shows the distribution of all 64 factors assessed. Three factors reported an impact rating two standard deviations (also called sigma [σ]) above the mean (denoted by +2σ); six factors reported an impact rating one standard deviation above the mean (denoted by +1σ); and 22 factors fell above the average (also called x-bar [x̄]) impact rating. The remaining 33 factors fell below x̄.
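The sigma banding used in Figure 2 amounts to comparing each factor’s impact rating against the mean plus whole multiples of the standard deviation. The sketch below illustrates the idea with hypothetical scores, not the study’s data:

```python
import statistics

def sigma_band(score, mean, sd):
    """Classify an impact rating relative to the mean in whole
    standard-deviation bands, mirroring the groupings in Figure 2."""
    if score >= mean + 2 * sd:
        return "+2sigma"
    if score >= mean + sd:
        return "+1sigma"
    if score >= mean:
        return "above mean"
    return "below mean"

# Hypothetical top-box-three impact ratings (percent) for eight factors
scores = [82, 75, 60, 55, 48, 40, 35, 30]
m, s = statistics.mean(scores), statistics.pstdev(scores)
bands = [sigma_band(x, m, s) for x in scores]
```

Applied to all 64 factors, this kind of banding produces the +2σ, +1σ, above-x̄, and below-x̄ groupings discussed in the sections that follow.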

Figure 2. Factor Ranking Distribution


Nineteen of the 22 factors measured for frequency of occurrence resulted in an impact rating above 39 percent. Sometimes, just one occurrence appeared to have a significant impact.

Table 2 accounts for the 31 factors above the mean. They were the only ones further evaluated in this study, unless a factor shifted above x̄ after further specific delineation (e.g., by ACAT level, military department, or agency). The individual factors showed widespread perception disparities (see the Low vs. High columns in Table 2) among the respondent groups for the factors that fell below +2σ. After analyzing the specific individual factors among all the respondents, seven of the 31 factors had an unusually large σ. As a result of these conspicuous gaps, we turned to the qualitative data. We also watched for any strong correlations (e.g., positive quantitative correlation coefficients [r] > 0.7, or qualitative comments) to better understand the reasons for the differences, as well as the influence of any intervening and/or moderating factor couplings. The remaining discussion addresses the 31 impact factors in descending order, from highest to lowest.

Table 2. Impact Factor Ratings in Aggregate Descending Order With Respondent Group Low and High Ratings


The Factors that Ranked Two Standard Deviations above the Mean (i.e., +2σ)

This first grouping (Table 2, factors F1–F3) indicated that delayed release of full obligation/budget authority due to Continuing Resolution Authority (CRA) (F1), contract negotiation delays (F2), and contract award delays (F3) all rose above +2σ. The occurrence of CRA had the most significant negative impact on Obligation and Expenditure rates. It also had one of the smallest variances (σ) among the respondent groups. Even with the expectation that CRA might prevail, and the subsequent planning that followed for such a likely event, many PMs pointed to an overly conservative and slow internal vetting process that created additional obstacles to meeting OSD goals.

Several PMs recommended using some sort of “CRA variable” to temporarily offset the consequences of CRA if the required funds were not released as originally projected. Next in rank order were contract negotiations and contract award delays. The respondents emphasized that DoD could fix the problems more readily since, unlike CRA, they were under internal control. When asked what could be done to reduce the adverse effects of all three factors, the respondents recommended the “inclusion of more risk mitigation into contract award planning, more realistic timelines, more realistic plans, greater funding stability, reduction in bureaucratic obstacles, more synchronized internal processes, and better aligned accounting systems.”

The Factors that Ranked One Standard Deviation above the Mean (i.e., +1σ)

The second line of demarcation (Table 2, factors F4–F9) contained a majority of contracting-related factors: shortage of contracting officers (F4), contractor proposal prep delays (F6), RFP prep delays (F8), and source selection delays (F9) predominated. Nearly all the factors showed the emergence of a more alarming σ between the individual respondent groups, as high as 18 percent in one case (proposal prep delays [F6]). For this particular factor, procuring contracting officers (PCOs) reported the highest impact while PMs ranked it lowest. Senior staff cited shortage of contracting officers (COs) (F4) as having the highest impact, while PCOs reported it had the lowest. With a 7 percent σ, it was the lowest among all six factors in this grouping.

Given that six of the top nine factors were contract-specific factors that ranked above +1σ, it came as little surprise to see so many reinforcing comments surface.

  • “Lack of experienced and qualified contract specialists … .”
  • “Alarmingly low personnel qualified … many unsure/lack guidance and experience … .”
  • “Significantly stressed with overtime to complete all contracting actions prior to close of fiscal year.”
  • “Inadequate training … inordinate number of interns with very low experience in all career fields.”
  • “Lack of sufficient legal personnel trained in Acquisition.”
  • “Loss in brain trust and skill to develop complete, clear SOWs [statements of work] using proactive contract language.”
  • “SOW writing and the teaching of SOW-writing classes is greatly left to contractors or support contractors, resulting in unclear language.”

The highest frequency of occurrence also was associated with contracting-related factors (Figure 3). By far, shortage of contracting officers (F4) was reported as the single highest frequency among all 22 factors measured for frequency. Because the contracting activity timeline generally has lengthy durations, any disruption appears to have an unmistakable impact on contract award; F4 was seen as having the most significant impact. As an aggregate group, the respondents said multiple contracting actions were having compounding consequences.

Figure 3. Scatter Plot of Impact Factors with Frequency


The two remaining factors above +1σ, Congressional marks (F5) and OSD-directed Resource Management Decision (RMD) adjustment (F7), had a very low frequency of occurrence but still reported a very high impact, similar to CRA. When combined, all three appear to be a strong antecedent force (or moderating factor) on the already time-consuming chain of contracting actions.

The Factors that Ranked Above the Mean (i.e., above x̄)

This final grouping (Table 2, factors F10–F31) accounted for the remaining 22 impact factors. Perception polarities persisted, especially between two respondent groups: senior staff outside the program office and PMs inside program offices. In every case except one (Component-directed Program Objective Memorandum [POM] adjustment [F17]), the PMs ranked these impact factors well below x̄. In sharp contrast, senior staff in every case except that same one stated the majority of the top 31 factors had the largest impact.

Even though the remaining impact factors above x̄ still are significant, the researchers shifted the focus to the presence of any strong correlations, since factor couplings could be having a moderating effect and require a closer look.

The Factors that Correlate

Table 3 summarizes the strongest and weakest factor correlations for all respondents queried. Several strong correlations surfaced for factors above x̄. User requirements (F11) and user priorities (F19) were correlated very strongly. In three specific instances, two factors above x̄ were correlated very strongly with three factors that fell below x̄: key acquisition experience (F27) and inadequate training (F48); key acquisition experience (F27) and tenure of PMs and other key positions (F46); and Defense Contract Management Agency (DCMA) administration actions (F36) and Defense Contract Audit Agency (DCAA) administration (F22). Three contract-related factors (F4, F8, and F9) showed weaker correlations than expected. To learn more, we performed a regression test and found that shortage of contracting officers (F4) fell below x̄ for Air Force respondents only. Specific Acquisition Categories (ACATs) also behaved as a moderating variable: RFP prep delays (F8) fell below x̄ for ACAT IIs only, and source selection (F9) fell below x̄ for ACAT Is and ACAT IIs only. A factor having a weak correlation doesn’t mean it had any less importance, but any impact factor strongly correlated with another should be weighed more heavily in any recommended mitigating action. For example, the turnover of PMs could be part of the experience quotient.
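The r > 0.7 threshold above refers to the Pearson correlation coefficient computed over paired per-respondent ratings for two factors. The sketch below shows the calculation on hypothetical data; the factor labels and ratings are illustrative only, not values from the study:

```python
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two factors'
    per-respondent impact ratings."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired ratings, e.g., user requirements (F11)
# vs. user priorities (F19), from eight respondents
f11 = [7, 6, 5, 6, 4, 7, 5, 6]
f19 = [6, 6, 4, 7, 4, 7, 5, 5]
r = pearson_r(f11, f19)
strongly_coupled = r > 0.7  # the study's threshold for a strong coupling
```

A coefficient above 0.7 flags a factor pair, such as F11 and F19, as a coupling whose mitigation actions should be weighed together rather than in isolation.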

Table 3. Factor Correlation Couplings


Factor Plotting by Impact and Frequency

The researchers generated a scatter plot diagram (Figure 3) that highlighted how the 31 factors varied between impact and frequency of occurrence. In some cases, high-impact factors had low frequencies of occurrence. In other cases, the frequency could be compounding the impacts.

Respondent Comments Regarding the Factors

The respondents also were asked several open-ended questions about the metrics they found helpful in meeting OSD goals, as well as any process improvements they would recommend. They said the metrics making a difference for them included “real-time monitoring, frequent reviews, tight coupling to contractor actions and milestones, and realistic spend plans with inch stones.” As for necessary improvements to current processes, the respondents recommended including a CRA duration variable that readjusts expectations, establishing more realistic program goals, ensuring more funding stability, reducing bureaucratic obstacles and streamlining outdated processes, increasing cooperation between government and industry, and synchronizing the disparate accounting systems used in obligation/expenditure reporting.

The respondents provided a number of qualitative comments that reinforced the quantitative data, especially for the factors at or above x̄ that were causing obligation rate interference:

Personnel, Tools and Training

  • “Takes too long to get Acquisition Strategies and Acquisition Plans written and approved.”
  • “Personnel do not have experience with the subject matter.”

Contracting Activities

  • “Inadequate proposals, protracted negotiations, lengthy audits, and lengthy pre-award processes.”

Requirements Stability

  • “Had to defer/reprioritize requirements execution into FY13 and carry forward FY12 funding into FY13 to cover cutbacks/shortfall.”
  • “Changes in requirements precipitated by other stakeholders’ actions.”
  • “Ill-defined requirements.”
  • “User leadership routinely changes requirement and priorities.”

Business Ops

  • “MIPR billing process can delay expenditures from 90 to 120 days.”
  • “Delays in negotiating best deal for government and sometimes delays in getting acceptable proposals.”

Senior Level/Executive Reviews

  • “Extensive reviews, too long to get decision briefs through oversight layers—not always value added.”
  • “Multiple instances where milestone documentation took upward of 9 months to a year to get approved.”

Funding Realities

  • “The problem isn’t unrealistic or overly optimistic spend plans as much as it’s not knowing when funds will be appropriated and how much will be apportioned by the executing organization.”


On Feb. 5, 2013, we shared the results of this study with Assistant Secretary McFarland and other key OSD senior staff. With the metrics that Mrs. McFarland plans to institute with Better Buying Power (BBP) 2.0, DoD will have another means to address many of the impact factors associated with this study and a host of other variables encumbering program execution expectations.

On Sept. 10, 2012, Under Secretary (AT&L) Kendall and Under Secretary (Comptroller) Hale jointly signed a memorandum that listed six tenets that could help combat some of the same factors discussed in this study regarding the disposition of DoD’s unobligated funds. Over time, realization of these tenets might also reduce perception disparity gaps among the key personnel who have a hand in ensuring our warfighters continue to get the weapon systems they need—and on time—to best support our national military strategy.

Authors’ Note: The authors extend their deepest gratitude to three key people involved in this study. John Higbee provided exceptional support as a thinking partner. Lt. Col. Rob Pittman served as our OSD point man and provided extraordinary support throughout this study, from survey inception to final presentation. Shandy Arwood also played a vital role; her survey development and analysis skills played a large part in the success of this study.


Tremaine is an associate dean for outreach and mission assistance at the Defense Acquisition University’s West Region with more than 30 years of acquisition experience in program management and systems engineering. Seligman is a program management analyst at DAU West Region with more than 20 years of experience in developing business applications, performing system analyses, and conducting research.

The authors can be contacted at robert.tremaine@dau.mil and donna.seligman@dau.mil.


