
Reimagining Workload Task Analysis: Applications to Training System Design



Authors: Dennis Duke, Dana E. Sims, and James Pharmer

Today’s warfighter performs more complex, cognitively demanding tasks than ever before. Despite the need for more extensive training to perform these tasks, acquisition professionals are often tasked to reduce training budgets and identify optimal tradeoffs. Tools are available that can help them make these decisions by providing empirical evidence of how design decisions will affect performance and mission requirements. This article offers insights into the utility of implementing a Workload Task Analysis (WLTA) early in weapon systems acquisition to focus training system decisions, and describes where WLTA occurs within the top-down functional analysis process. It concludes with several examples of how WLTA results can be used to guide training development.


A perfect storm has arisen in military training system acquisition. First, a great deal of attention is being given to cost overruns in major weapon systems acquisition by the Department of Defense (DoD), and government officials are continually searching for ways to reduce budgets. Program managers are routinely asked to consider cutting training time to reduce costs. At the same time, battlefield technology is becoming more sophisticated, providing much greater capability in considerably less time. In many instances, however, this technology places greater demands on warfighters by requiring a shift in how work is accomplished: tasks are more sophisticated and time-sensitive than in the past. This shift in the type of work and the speed of performance required on the battlefield demands more training in knowledge-based, decision-making tasks than ever before. Such training is more complex because it requires individuals to understand, integrate, and act swiftly on the information generated by weapon systems technology; thus, additional time to train warfighters for battle is needed, although budgets seldom allow for it.

One solution to this perfect storm is to employ systems engineering much earlier in the acquisition process, as advocated by Michael J. Sullivan, Director, Acquisition and Sourcing Management, in testimony before the Panel on Defense Acquisition Reform, Committee on Armed Services, House of Representatives (Government Accountability Office [GAO], 2009). Numerous dimensions are inherent to systems engineering in the acquisition process, one of which involves a systematic evaluation of the type of work done by a human operator on a new weapon system, with the intent to determine how best to design training solutions to support that work. This process is called a Workload Task Analysis (WLTA) and is incorporated as an element of an overarching process called Top-Down Function Analysis (TDFA). This methodology aligns with the revised DoD Instruction 5000.02 (DoD, 2008, Encl. 8), which stipulates that “… where practicable and cost effective, system designs shall minimize or eliminate system characteristics that require excessive cognitive, physical, or sensory skills; entail extensive training or workload-intensive tasks; result in mission-critical errors; or produce safety or health hazards.” This article suggests that if this process is effective for weapon system design, it can produce similar results for a training system.

Although WLTA is often done iteratively throughout the weapon systems acquisition process, this article is limited to how an early analysis can initially identify areas where warfighters experience the highest levels of workload that may negatively affect performance. The findings from conducting WLTA up front have the potential not only to increase operator efficiency and effectiveness by influencing weapon systems design early in the acquisition cycle, but also to reduce weapon systems life-cycle costs. The design of the training system can be included as part of this cost reduction; accordingly, this article provides a concise definition of WLTA, examples of its use, and the basic steps required to perform a credible WLTA.

Fundamentals of Workload Task Analysis Methodology

Modern technology has vastly changed the way we do business and has improved our productivity by providing many more on-the-job capabilities. However, the technology is only productive when effectively employed by the human operator. Effective operation relies upon the cognitive capacity of individuals combined with their ability to operate the new technology to its fullest potential. That ability comes from a well-designed training program that provides the operator with the pertinent information needed to perform tasks effectively on the job. Today, the amount and speed of information received during war, combined with the complexity of the technology that military personnel employ to gather and interpret this information, further compound the workload burden on the individual.

Historically, WLTA has been used to predict potential performance bottlenecks and to pinpoint where human factors engineers should focus their efforts in making informed decisions on new system designs. Analyses of operator workload, task time demands, and interface design issues have affected the design of numerous platforms, from helicopters and airplanes to ships and individual weapon systems (Aldrich, Szabo, & Bierbaum, 1989, pp. 65–80; Laughery & Corker, 1997). Trends are now re-emerging in military settings to apply existing systems engineering processes and procedures much earlier in the acquisition program because of their impact on reducing overall life-cycle costs for major weapon systems (GAO, 2009).

Examining workload can assist analysts in determining the degree to which operators can successfully perform their job to meet mission requirements (Lysaght et al., 1989). To design a good system, the designer must comprehend the concept of workload and understand what “optimal” workload means to performance (Mitchell, 2000). Although a number of definitions of workload have been provided over the years, the overarching theory acknowledges that it is a multidimensional construct that considers the amount of effort (e.g., sensory, cognitive, or psychomotor) required or invested by the individual in order to perform on the job (Aldrich et al., 1989; Nachreiner, 1995; Wickens, 1984). This performance is affected in part by (a) the demands of the environment where the task is performed (e.g., heat, danger), (b) the complexity of the task or the system in which the task is performed, and (c) the capability of the operator to satisfy those demands (Parasuraman & Hancock, 2001; Parasuraman & Rovira, 2005; Wickens, 2002).

In a WLTA effort, insight into how individuals gather and process information about task performance, and into the variables that affect cognitive decision making (e.g., environmental factors, task complexity), can be collected and analyzed through task network simulation models. These models provide useful human system performance information to human factors psychologists, systems engineers, and instructional system developers, allowing for a myriad of design trade decisions. The power of these models lies in their capability to predict where and when, within a task flow, operators may not be able to perform specific tasks in a timely, effective, and efficient manner. This assists analysts in identifying targets of opportunity for developing an optimal performance solution. A recent GAO report (GAO, 2010) illustrates a situation where WLTA may affect the design of weapon systems: in a surveillance aircraft, operators who are responsible for processing, exploiting, and disseminating Intelligence, Surveillance, and Reconnaissance (ISR) data can only use collected intelligence data if the data are visible to them.
Making ISR data discoverable in this way can be accomplished through meta-data tagging…For example, a camera may create meta-data for a photograph, such as date, time, and lens settings. The photographer may add further meta-data, such as the names of the subjects. The process by which information is meta-data tagged depends on the technical capabilities of the systems collecting the information. Most ISR systems do not automatically meta-data tag the ISR data when they are transferred from the sensor to the ground station for processing and exploitation because most of these systems were developed prior to DoD’s emphasis on enforcing meta-data standards. Since the sensors on these legacy systems are not able to meta-data tag automatically, it is up to each of the military services to prioritize the cataloging of the ISR data manually after collection. (p. 5)

The solution, influenced by WLTA analysis findings, may involve designing software capabilities that automate these meta-data tags and require the human operator to confirm the accuracy of the tagging. The training implications resulting from this design change involve the identification of specific knowledge and skills needed to operate this newly designed hardware and a determination of a training strategy for presenting this information to the operator. However, to fully understand where WLTA comes into play in the acquisition process, it is important to describe the overarching analysis process involved. This is known as the TDFA.
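As a rough illustration of such a design change, the sketch below shows how an auto-tagging step might be structured so that software proposes meta-data tags and the operator only confirms the uncertain ones. The names, the classifier interface, and the confidence threshold are assumptions made for this example, not features of any fielded system.

```python
from dataclasses import dataclass, field

@dataclass
class SensorProduct:
    """A collected ISR item awaiting meta-data tagging (illustrative only)."""
    sensor_id: str
    collected_at: str                    # e.g., "2010-04-07T13:02Z"
    tags: dict = field(default_factory=dict)

def auto_tag(product: SensorProduct, classifier) -> list[str]:
    """Apply automatic meta-data tags; return the tags needing operator review.

    `classifier` is a stand-in for whatever pattern-recognition capability a
    modified workstation might provide; it is assumed to yield
    (tag_name, value, confidence) tuples. Tags below the confidence threshold
    are queued for the human operator to confirm, shifting the operator's
    task from manual tagging to verification.
    """
    needs_review = []
    for tag_name, value, confidence in classifier(product):
        product.tags[tag_name] = value
        if confidence < 0.90:            # assumed review threshold
            needs_review.append(tag_name)
    return needs_review
```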

Understanding the TDFA Process

The initial assessment of any workload prediction methodology requires the conduct of a comprehensive mission/task/workload analysis (Aldrich et al., 1989). The TDFA methodology is a systems engineering approach that identifies mission requirements and provides a comprehensive capability for ensuring that the human performance requirements are incorporated into the systems engineering process (Dugger, Parker, Winters, & Lackie, 1999). The intent of the TDFA is to influence and refine system design throughout the acquisition process. The full TDFA methodology used in several naval aviation acquisitions involves nine phases or steps as shown in Figure 1 (Duke, Guptill, Hemenway, & Doddridge, 2006). However, only the analytical activity undertaken in the Mission Analysis (Phase 1.0), Function Analysis (Phase 3.0), and Task Design and Analysis (Phase 5.0) will be discussed in this article.

Figure 1. The Top-Down Function Analysis (TDFA) Process

In this TDFA model, the WLTA is included as the major component of Phase 8.0, Performance, Workload, and Training Estimation. However, the WLTA cannot be undertaken until critical hierarchical information about the weapon system’s missions, functions, and tasks is available. In Phase 1.0 (Mission Analysis), the external objectives, or the “what” of system performance, are identified. This equates to the systems engineering “requirements analysis” described in American National Standards Institute/Electronic Industries Alliance 632, Processes for Engineering a System (ANSI/EIA, 2003). System functions, which are initially analyzed in Phase 3.0 (Function Analysis), describe “how” the system will achieve performance requirements. These system functions are then further decomposed into human and system tasks, which describe the qualitative and quantitative workload of individual, team, and crew operators and maintainers. This decomposition occurs in Phase 5.0 (Task Design and Analysis). Optimal design solutions based upon recommendations from the task decomposition are integrated during Phase 6.0 (Interface Concepts and Designs) and Phase 7.0 (Crew/Team Concepts and Designs) to ensure system-level optimization and compatibility. The results of the TDFA are then verified to confirm that human systems integration is being adequately addressed in meeting weapon systems mission goals. Ultimately, the process provides a hierarchy for logically linking human performance (tasks) with the combatant commander’s warfighting needs (missions), as shown in Figure 2 (Duke et al., 2006).

Figure 2. Missions-Functions-Tasks Hierarchy
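The hierarchy in Figure 2 is, in effect, a tree in which every operator task traces upward to a mission requirement. The minimal sketch below (task, function, and mission names are purely illustrative) shows one way an analyst might record that linkage.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str          # a human or system task, e.g., "Monitor radar display"
    performer: str     # "human" or "system"

@dataclass
class Function:
    name: str                                       # "how" a requirement is achieved
    tasks: list[Task] = field(default_factory=list)

@dataclass
class Mission:
    name: str                                       # "what" the system must accomplish
    functions: list[Function] = field(default_factory=list)

# Illustrative linkage: a task traces to a function, which traces to a mission.
asuw = Mission("Anti-Surface Warfare")
search = Function("Conduct radar search")
search.tasks.append(Task("Monitor radar display", performer="human"))
asuw.functions.append(search)
```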

The WLTA, which occurs in Phase 8.0 (Performance, Workload, and Training Estimation), uses information obtained in the previously mentioned TDFA phases. To gain an appreciation of the WLTA, a brief description of the mission, function, and task analysis phases of the TDFA model is provided.

Mission Analysis (Phase 1.0)

The Mission Analysis Phase of the TDFA serves to determine and document the overall purpose, objectives, and mission requirements of a weapon system. Initially, a weapon system’s primary and secondary missions (e.g., Anti-Submarine Warfare, Anti-Surface Warfare) are determined and correlated with system mission tasks. The Universal Naval Task List or other Service task lists provide the basis for identifying the system mission tasks. Initial metrics are also established to measure results in the execution of missions. This involves creating Measures of Effectiveness (MOEs), which are used to determine the system’s ability to support the achievement of an operational mission, and Measures of Performance (MOPs), the technical performance standards a system must achieve to satisfy the MOEs (Chairman of the Joint Chiefs of Staff, 2003). MOPs also serve as high-level standards by which many system operators will be evaluated.

The system constraints and boundaries, which may affect training program design, are also identified in the Mission Analysis Phase. For example, in acquiring a training system for a new surveillance aircraft, one must determine whether the scope of the platform (system) includes a ground station. If it does, the training analysis must consider the infrastructure and logistics associated with the ground station as well as with the platform itself. Information about constraints and boundaries is usually obtained by analyzing the system’s high-level mission objectives, which are found in acquisition-related publications such as the weapon system’s Initial Capabilities Document, Capability Development Document, Performance Based Specification, and Office of the Chief of Naval Operations Instruction 1000.16K (2007). Once the overall purpose, objectives, and mission requirements are established, the next step is to determine what the system must do to satisfy them. This analysis is called a Function Analysis.

Function Analysis (Phase 3.0)

The goal of the Function Analysis Phase is to define performance at the level of detail where it is possible to design all subsystems or components needed to satisfy performance requirements. For example, if a mission requirement exists for a weapon system to perform surveillance, then the weapon system (e.g., an aircraft) must be designed with a means to undertake the surveillance function (e.g., a radar system). Functions provide the means to align mission requirements with specific system hardware and software. The Department of Defense Architecture Framework (DoDAF) data model views, which are developed by mission system engineers, provide detailed hardware and software information about the functions performed by equipment embedded in the technical systems (e.g., the radar system) comprising a weapon system (DoD, 2009). They also supply important information about the technical communication between the equipment (e.g., the technical cues and responses of hardware and software tasks). The integrated product team (IPT) training team must train the human operator to interface effectively with the equipment and must ensure the operator understands how technical activities accomplish mission requirements.

Task Design and Analysis (Phase 5.0)

In the Task Design and Analysis Phase, analysts develop initial tasks that describe how humans will perform assigned system functions. During this phase, hardware and software tasks are linked with human performance. Information in this step usually comes from a stratified sample of subject matter experts (SMEs) who either have performed the tasks or are very familiar with how the tasks should be performed. One way to collect task information is to use critical-event scenarios in which SMEs, using a flowchart, identify, describe, and document the individual tasks and subtasks they perform at their workstation during a mission. Each scenario should depict a unique mission area, which allows analysts to collect information about the different types of workstation tasks performed during different missions. The authors recommend that all technical publications and applicable workstation reference documents be made available to SMEs during this exercise. In cases where the SMEs provide information based on their experience with a legacy system, the legacy tasks are compared to the high-level notional missions, functions, and tasks obtained from the DoDAF model views and documented in the TDFA. As new subsystems of the weapon system are developed, changes are assessed and the functional architecture is modified.

Undertaking a Workload Task Analysis

As stated previously, a WLTA effort provides insight into an operator’s perceived level of effort to complete a task and the variables that affect decision making. The WLTA is conducted in Phase 8.0 of the TDFA process. The information from the Mission, Function, and Task Design and Analysis Phases enables the WLTA analyst to understand the workload associated with a given task and mission. The workload activity is divided into time demands and information processing demands (i.e., visual, auditory, cognitive, and psychomotor [VACP]) so it can be examined from various perspectives. These perspectives help human factors psychologists, systems engineers, and instructional system developers analyze high-workload-demand tasks. The discussion that follows briefly describes the two types of workload data collected (time estimation and information processing estimation) and provides examples of potential training-related performance solutions to workload. These training solutions are not intended to be all-inclusive, but rather a starting point for any IPT training team to consider for its own platform.

Time Estimation

In this portion of the WLTA, the analyst is interested in the time spent on particular components of the overall mission. The results from this analysis can be an indication of the complexity of the task or performance inefficiencies (e.g., poor system design, lack of training, etc.). Depending on the analyst’s interpretation of the cause of the time spent on certain components of an overall mission, targeted training-related solutions may be identified.

Several options are available for gathering time-estimate data. Ideally, time data should come from direct observation of actual operators performing the tasks during a live mission or on a simulator during a training exercise. In many cases, however, this is not possible, especially if the weapon system has not been built or the nature of the tasks does not allow for direct observation during a mission. As an alternative to observation, domain SMEs provide estimates of the amount of time spent performing each task, as well as whether each task is continuous (no observable start or end point) or discrete (actions with definite, observable start and end points). Before providing these ratings, the SMEs are given examples of discrete (e.g., manipulating a knob on a computer console) and continuous (e.g., monitoring targets on a radar screen) tasks. The tasks are graphically illustrated on flowcharts that clearly depict what an operator actually does in response to a specific cue. The flowcharts make it easier to estimate the time it takes to accomplish a specific task, and they provide analysts with an average time for each discrete task and a range of times for the continuous tasks.
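As a simple illustration of how such SME estimates might be rolled up, the sketch below (with invented task names and times) reports an average for discrete tasks and a min-max range for continuous ones, mirroring the outputs described above.

```python
from statistics import mean

# SME time estimates in seconds; task names and values are invented.
estimates = {
    "Actuate mode switch":   {"type": "discrete",   "times_s": [2.0, 1.5, 2.5]},
    "Monitor radar display": {"type": "continuous", "times_s": [45.0, 90.0, 60.0]},
}

for task, info in estimates.items():
    times = info["times_s"]
    if info["type"] == "discrete":
        # Discrete tasks have observable start/end points: report an average.
        print(f"{task}: average {mean(times):.1f} s")
    else:
        # Continuous tasks have no clear endpoints: report a range instead.
        print(f"{task}: range {min(times):.1f}-{max(times):.1f} s")
```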

Information Processing Estimation

The next step of the WLTA is to determine and categorize the amount of workload required to perform the task during a typical mission. This step is likely the most challenging, as it requires the analyst to implement a modeling approach that accounts for tasks occurring simultaneously, the types of tasks being done, and other factors that may shape operator performance. Research suggests that humans are able to process information across multiple VACP channels, as illustrated in Christopher D. Wickens’ Multiple Resource Theory model (Wickens, 2002; Wickens, Sandry, & Vidulich, 1983). The summative workload demands of multiple simultaneous tasks on one channel can therefore indicate the likelihood that an individual would be able to perform two or more tasks at the same time. For example, if a task calls for an operator to simultaneously aim a weapon at a target (rated 4 on the visual channel) and make a fine discrimination of symbols on a screen (rated 5 on the visual channel), the combined demand on the operator’s visual channel would exceed the highest rating possible on the visual scale (see Appendix). This high workload rating on the visual channel would be cause for concern in a design that required an operator to perform these two tasks simultaneously.
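The weapon-aiming example translates directly into a check an analyst might run: sum each channel’s demands over the simultaneous tasks and flag any channel whose total exceeds the 7.0 maximum of the scales in the Appendix. The function below is a minimal sketch of this additivity assumption; only the visual ratings come from the example above.

```python
SCALE_MAX = 7.0   # highest value on each VACP scale (see Appendix)

# Visual-channel demands of the two simultaneous tasks from the example;
# ratings on the other channels are omitted for brevity.
tasks = {
    "Aim weapon at target":                  {"visual": 4.0},
    "Fine discrimination of screen symbols": {"visual": 5.0},
}

def overloaded_channels(tasks, scale_max=SCALE_MAX):
    """Sum per-channel demands across simultaneous tasks and return any
    channel whose total exceeds the scale maximum."""
    totals = {}
    for ratings in tasks.values():
        for channel, demand in ratings.items():
            totals[channel] = totals.get(channel, 0.0) + demand
    return {ch: total for ch, total in totals.items() if total > scale_max}

print(overloaded_channels(tasks))   # {'visual': 9.0} -- flags the conflict
```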

From a training analyst perspective, it is necessary to assess the workload demands on an operator at different intervals throughout the mission. To do so, the VACP components of the tasks must be estimated (McCracken & Aldrich, 1984; Szabo & Bierbaum, 1986) and populated into a discrete-event simulation/modeling software tool. A number of commercial and government discrete-event simulation/modeling tools (e.g., MicroSaint, Improved Performance Research Integration Tool [IMPRINT]) provide the capability to account for operator ability to multitask across noncompeting processing channels, consistent with Wickens’ Multiple Resource Theory (Wickens, 2002; U.S. Army Research Laboratory, 2005).

Figure 3 provides an example of such a simulation, showing how operators reacted to a surveillance mission situation. It uses the dynamic modeling technique of Coloured Petri Nets (Kristensen, Christensen, & Jensen, 1998) to illustrate the predicted VACP demands on an electronic intelligence (ELINT) operator in a surveillance aircraft while performing assigned tasks during a specified period of an operational mission. Careful evaluation of these workload predictions gave the analyst insight into candidate tasks where workload demands might be improved.

For example, in Figure 3, the data indicate that about 7 minutes into the mission, the operator had to undertake several tasks while responding to target cues at the workstation. During a 30-second interval, the operator had to employ several skills simultaneously, which caused a temporary visual and cognitive overload. The operator used high visual skills (blue line at 8.5, exceeding the visual workload scale) and thus may not have been able to “see” all the target data available on the screen during that interval. The operator also used high cognitive skills to interpret what was being seen and heard (green line at 8.3, exceeding the cognitive workload scale) and thus may not have been able to comprehend the information presented. Both occurred while the operator was interpreting sound patterns (red line at 7.0 on the auditory scale) and manually adjusting a thumbwheel (light blue line at 5.8 on the psychomotor scale). This “task-stacking” situation caused the operator to exceed visual and cognitive capacity for approximately 30 seconds, meaning critical information may have been missed, which could impact overall mission performance. With this information, the engineers, human factors psychologists, and instructional system designers can begin to develop alternatives for task redesign, human engineering improvements, and/or training solutions. A few examples of this process follow.

Figure 3. Results of a VACP Analysis Done to Illustrate Operator Performance During a Mission
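Tools such as IMPRINT do far more than this, but the basic bookkeeping behind a plot like Figure 3 can be sketched in a few lines: walk the mission timeline in 30-second intervals, sum the VACP demands of whatever tasks are active, and flag intervals where a channel exceeds its scale maximum. The task schedule below is invented, chosen only so that it reproduces the overload levels described above; it is a minimal sketch, not a substitute for a validated modeling tool.

```python
SCALE_MAX = 7.0
CHANNELS = ("visual", "auditory", "cognitive", "psychomotor")

# (start_s, end_s, {channel: demand}) -- an invented task schedule chosen to
# reproduce the overload episode described in the text at about 7 minutes.
schedule = [
    (0,   600, {"visual": 5.4, "cognitive": 1.2}),                   # track contacts
    (420, 450, {"visual": 3.1, "cognitive": 7.1, "auditory": 7.0}),  # respond to target cues
    (420, 450, {"psychomotor": 5.8}),                                # adjust thumbwheel
]

def interval_load(schedule, t0, t1):
    """Summed channel demand for all tasks active anywhere in [t0, t1)."""
    totals = dict.fromkeys(CHANNELS, 0.0)
    for start, end, demands in schedule:
        if start < t1 and end > t0:          # task overlaps the interval
            for channel, demand in demands.items():
                totals[channel] += demand
    return totals

for t in range(0, 600, 30):                  # 30-second intervals
    load = interval_load(schedule, t, t + 30)
    over = [ch for ch in CHANNELS if load[ch] > SCALE_MAX]
    if over:
        # Prints the 7:00 interval: visual 8.5 and cognitive 8.3 exceed 7.0.
        print(f"{t // 60}:{t % 60:02d} overload on {over}: {load}")
```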

Workload Solution Identification

The risk of developing weapon systems without serious consideration of how the operator will actually use them is that system utilization may fall well below its potential. Additionally, a great deal of time and resources may be required to develop training systems for the operator. Utilizing WLTA data up front may prevent this situation. Workload solutions can take many forms and should be based on the cost, schedule, and performance considerations of the weapon systems program. The final solution(s) chosen should be guided, at least in part, by the results of the WLTA, which identifies where workload issues may arise for operators within a mission scenario and narrows the focus to particular tasks and combinations of tasks. For purposes of this article, only training solutions are discussed.
WLTA provides the training developer with data about operator tasks and about how user interfaces are structured to enable performers to use the weapon system effectively. Specifically, WLTA uncovers the cues that initiate task behaviors, the time required to perform the tasks, and the various demands the tasks place on the individual. Engineers can use this same information in the design of the weapon system. For example, in the ELINT operator example, a software modification could allow the workstation to “automatically identify” targets, relieving the operator of the requirement to identify targets visually. This information also helps the training analyst establish a training strategy to support successful accomplishment of the task. Two training-focused performance solutions that can be identified and implemented based on WLTA data are what to train and how to train specific tasks. A third training solution that can be derived from WLTA data is error reduction. Each potential solution is discussed in turn.

What to Train

The benefit of the information provided by the WLTA is that it tells training analysts where operators may spend most of their time and which tasks consume most of an operator’s limited resources during a mission. The analysts can then dig deeper to determine whether these are areas of training importance and focus training accordingly. Figure 4 provides a decision-making matrix that instructional systems developers could use to focus training. In this contrived example, reasonable tradeoffs of what to train can be made by focusing on tasks that require High Information Processing (I and III), because operators are likely to require the most support in performing those tasks. Tasks requiring High Time Spent but Low Information Processing (II) can then be assessed to determine whether they could be automated or distributed to other team members to allow operators more time to perform tasks in quadrants I and III. Finally, tasks within quadrant IV may be identified as unnecessary to train, thereby assisting the IPT in allocating resources for the best return on investment. It should be noted that this decision-making matrix is intentionally simplistic for the purposes of this article. Factors such as task criticality, the number of information processing channels required, and others important to the weapon system should also be considered in determining what to train. In the ELINT operator example, an early decision to fund a software modification to “automatically identify” targets can provide life-cycle cost reductions from a training perspective. Since targets will be “automatically identified” by the workstations, training objectives relating to interpreting that information will be incorporated into the curriculum. Without the need to teach manual target recognition, the course can be shortened, reducing overall life-cycle training costs.

Figure 4. Decision-Making Matrix to Guide Training

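One way an IPT training team might operationalize the Figure 4 matrix is to score each task on the two axes and bin it into a quadrant. In the sketch below, the scores, cutoffs, and the assignment of quadrants I and III to the time axis are assumptions made for illustration, since the figure itself defines the actual layout.

```python
def quadrant(time_spent: float, info_processing: float,
             time_cut: float = 0.5, info_cut: float = 0.5) -> str:
    """Bin a task into a Figure 4 quadrant from normalized (0-1) scores.

    Assumed layout: I = high info/high time, III = high info/low time,
    II = low info/high time, IV = low info/low time.
    """
    if info_processing >= info_cut:
        return "I" if time_spent >= time_cut else "III"
    return "II" if time_spent >= time_cut else "IV"

# II: heavy time demand but routine processing -- a candidate for
# automation or redistribution rather than intensive training.
print(quadrant(time_spent=0.8, info_processing=0.2))
```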

How to Train

Once decisions are made regarding what to train, instructional systems specialists can also use WLTA results to determine how best to train the skills identified as training tasks. Tasks that fall within quadrants II and IV of Figure 4 require low information processing, suggesting that they are relatively automatic or simple. Such skills may be taught most effectively partly through training methods such as computer-based online courses and partly through task training devices or training simulators, wherein trainees receive a demonstration of how and when to perform the tasks along with opportunities to practice the skills. Conversely, demanding tasks (i.e., high information processing tasks in quadrants I and III) often involve more cue complexity and mental effort (Wickens & Carswell, 2006). With this information in hand, the instructional developer can allocate more time to training complex tasks and ensure that prerequisite knowledge and skills are acquired early in training. Furthermore, training strategies can be chosen to ensure more trainee-instructor interaction and to give operators repeated opportunities to practice the tasks at varying and increasing levels of complexity, building the decision-making and information processing skills that are less outwardly tangible and more difficult to train.

Reducing Errors

Interviews with SMEs during the WLTA often reveal common mistakes made by operators, as well as their consequences. Senior operators will comment that mistakes in performance are usually traceable to inattention, overattention, or fixation (Greenwell, Strunk, & Knight, 2004; Wickens & Carswell, 2006). As Carl (2009, p. 120) noted, “…there is a tendency for performers to devote too much time to some cues, devote too little time to other cues, or poorly manage their time in attending to all the cues that impact task execution.” With this information, instructional designers can focus initial training on teaching new operators to select and concentrate on important task cues while disregarding irrelevant noise. Utilizing WLTA data in this way will not only emphasize the important components of training, but will also reduce downstream performance problems that would increase life-cycle costs for repair or replacement of the system. In the ELINT operator example, the costs of a software modification to automate target recognition are incurred only once during acquisition (unless system upgrades incur additional costs). Training costs associated with teaching target recognition recur with each new set of trained operators, thus affecting overall life-cycle training costs.

Conclusions

Although acquisition professionals are continually asked to identify tradeoffs to reduce weapon system budgets, tools are available to help them make decisions regarding life-cycle costs. Weapon systems being acquired today are extremely sophisticated, and the operator and maintainer tasks associated with them are becoming increasingly complex, requiring time and expensive simulators to satisfy training requirements. Training time and training media are quite costly from a life-cycle perspective.

WLTA can be an extremely valuable tool for reducing life-cycle costs and for ensuring the system can be used effectively by informing the design of training that supports the operator. Admittedly, WLTA is no simple task; it requires significant time and support from the acquisition team and must be adapted to the needs of the individual program. However, the results of WLTAs become increasingly valuable as the team must make trade-off decisions and negotiate with program managers to retain funding, negotiations best supported by systematically derived evidence of how performance and mission requirements will be affected by design decisions.



Author Biographies

Dr. Dennis Duke is a Naval Air Systems Command Research and Engineering Fellow and a senior instructional systems specialist at the Naval Air Warfare Center Training Systems Division in Orlando, Florida. He has over 33 years of experience and has undertaken front-end analyses for several organizations including the Navy, Marine Corps, Army, NASA, Federal Aviation Administration, Department of Energy, and Department of Labor. Dr. Duke holds an EdD in Education Administration from the University of Central Florida and an MBA in Acquisition Management from Florida Institute of Technology.

(E-mail address: dennis.duke@navy.mil)

Dr. Dana E. Sims is a research psychologist at the Naval Air Warfare Center Training Systems Division, Orlando, Florida, where she supports the development of training solutions for military aviation applications. Dr. Sims has contributed extensively in the area of training design and evaluation as well as in team processes, with more than 20 scholarly and applied publications and an additional 22 professional presentations given nationally and internationally. Dr. Sims received her doctorate in Industrial Organizational Psychology from the University of Central Florida and is a member of the Society for Industrial Organizational Psychology.

(E-mail address: dana.e.sims@gmail.com)

Dr. James Pharmer is a senior research psychologist at the Naval Air Warfare Center Training Systems Division in Orlando, Florida, where he is lead for the Human Systems Integration Science and Technology Lab. His work focuses primarily on the integration of Human Systems Integration tools, techniques, and methodologies into the systems engineering process for surface and aviation programs. Dr. Pharmer holds a PhD in Applied Experimental Human Factors Psychology from the University of Central Florida and an MS in Engineering Psychology from the Florida Institute of Technology.

(E-mail address: james.pharmer@navy.mil)


References

Aldrich, T. B., Szabo, S. M., & Bierbaum, C. R. (1989). The development and application of models to predict operator workload during system design. In G. R. McMillan, D. Beevis, E. Salas, M. H. Strub, R. Sutton, & L. van Breda (Eds.), Applications of Human Performance Models to System Design (Defense Research Series Vol. 2). New York: Plenum.

American National Standards Institute/Electronic Industries Alliance. (2003). Processes for engineering a system (ANSI/EIA 632). New York: Author.

Carl, D. R. (2009). Cue awareness and situational awareness in task analysis. Performance Improvement Quarterly, 22, 115–132.

Chairman of the Joint Chiefs of Staff. (2003). Joint capabilities integration and development system (JCIDS). Chairman of the Joint Chiefs of Staff Instruction 3170.01C. Retrieved from http://www.dtic.mil/cjcs_directives/cdata/unlimit/3170_01.pdf

Department of Defense. (2008). Operation of the defense acquisition system (Department of Defense Instruction 5000.02). Retrieved from http://www.dtic.mil/whs/directives/corres/pdf/500002p.pd

Department of Defense. (2009). Department of Defense Architecture Framework (DoDAF) Version 2.0 (Vol.1). Retrieved from http://cio-nii.defense.gov/docs/DoDAF%20V2%20-%20Volume%201.pdf

Dugger, M., Parker, C., Winters, J., & Lackie, J. (1999, September 24). Systems engineering task analysis: Operational sequence diagrams (OSDs) (Office of Naval Research Report No. SC-21, Version 2.0). Retrieved from http://mentalmodels.mitre.org/cog_eng/reference_documents/systems%20engineering%20task%20analysis%20operation%20sequence%20diagrams.pdf

Duke, D. S., Guptill, R. S., Hemenway, M., & Doddridge, W. (2006). Improving human performance by employing a top down function analysis (TDFA) methodology in navy aircraft design. In J. A. Pershing (Ed.), Handbook of Human Performance Technology: Principles, Practices, and Potential (3rd ed., pp. 1067–1084). San Francisco, CA: Pfeiffer.

Government Accountability Office. (2009). Defense acquisitions: Measuring the value of DoD’s weapon programs requires starting with realistic baselines. Testimony before the Panel on Defense Acquisition Reform, Committee on Armed Services, House of Representatives. Statement of Michael J. Sullivan, Director, Acquisition and Sourcing Management (Report No. GAO-09-543T). Washington, DC: Government Printing Office.

Government Accountability Office. (2010). Intelligence, surveillance, and reconnaissance: Overarching guidance is needed to advance information sharing. Testimony before the Subcommittees on Air and Land Forces and Seapower and Expeditionary Forces, Committee on Armed Services, House of Representatives. Statement of Davi M. D’Agostino, Director, Defense Capabilities and Management (Report No. GAO-10-500T). Washington, DC: Government Printing Office.

Greenwell, W. S., Strunk, E. A., & Knight, J. C. (2004). Failure analysis and the safety-case lifecycle. In C. W. Johnson & P. Palanque (Eds.), Human Error, Safety, and Systems Development (pp. 163–176). Toulouse, France: Kluwer Academic.

Kristensen, L. M., Christensen, S., & Jensen, K. (1998). The practitioner’s guide to coloured Petri nets. International Journal on Software Tools for Technology Transfer, 2(2), 98–132.

Laughery, K. R., & Corker, K. M. (1997). Computer modeling of human/system performance. In G. Salvendy (Ed.), Handbook of Human Factors. New York: Wiley and Sons.

Lysaght, R. J., Hill, S. G., Dick, A. O., Plamondon, B. D., Linton, P. M., Wierwille, W. W., Zaklad, A. L., Bittner, A. C., & Wherry, R. J. (1989). Operator workload: Comprehensive review and evaluation of operator workload methodologies (Technical Report No. 851). Fort Bliss, TX: U.S. Army Research Institute for the Behavioral and Social Sciences.

McCracken, J. H., & Aldrich, T. B. (1984). Analysis of selected LHX mission functions: Implications for operator workload and system automation goals (Technical Note ASI497-024-84). Fort Rucker, AL: U.S. Army Research Institute, Aviation Research and Development Activity.

Mitchell, D. K. (2000). Mental workload and ARL workload modeling tools (Report No. ARL-TN-161). Aberdeen Proving Ground, MD: Army Research Laboratory.

Nachreiner, F. (1995). Standards for ergonomics principles relating to the design of work systems and to mental workload. Applied Ergonomics, 26(4), 259–263.

Office of the Chief of Naval Operations. (2007). Navy total force manpower policies and procedures. Office of the Chief of Naval Operations Instruction 1000.16K. Retrieved from http://doni.daps.dla.mil/Directives/01000%20Military%20Personnel%20Support/01-01%20General%20Military%20Personnel%20Records/1000.16K.pdf

Parasuraman, R., & Hancock, P. A. (2001). Adaptive control of mental workload. In P. A. Hancock & P. A. Desmond (Eds.), Stress, Workload, and Fatigue (pp. 305–333). Mahwah, NJ: Lawrence Erlbaum.

Parasuraman, R., & Rovira, E. (2005). Workload modeling and workload management: Recent theoretical developments (Technical Report No. ARL-CR-0562). Aberdeen Proving Ground, MD: U.S. Army Research Laboratory.

Szabo, S. M., & Bierbaum, C. R. (1986). A comprehensive task analysis of the AH-64 mission and crew workload estimates and preliminary decision rules for developing an AH-64 workload prediction model (Technical Report ASI678-204-86[B], Vols. I–IV). Fort Rucker, AL: Anacapa Sciences.

U.S. Army Research Laboratory. (2005). Improved Performance Research Integration Tool (IMPRINT) analysis guide version 7.0. Aberdeen Proving Ground, MD: U.S. Army Research Laboratory Human Research and Engineering Directorate/Micro Analysis and Design, Inc., Boulder, CO.

Wickens, C. D. (1984). Processing resources in attention. In R. Parasuraman & R. Davies (Eds.), Varieties of Attention (pp. 63–101). Orlando, FL: Academic Press.

Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159–177.

Wickens, C. D., & Carswell, C. M. (2006). Information processing. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (pp. 111–149). Hoboken, NJ: John Wiley.

Wickens, C. D., Sandry, D., & Vidulich, M. (1983). Compatibility and resource competition between modalities of input, output, and central processing. Human Factors, 25, 227–248.


Appendix

Visual Workload Scale

Scale Value  Visual Scale Descriptor
0.0          No Visual Activity
1.0          Visually Register/Detect (detect occurrence of image)
3.7          Visually Discriminate (detect visual differences)
4.0          Visually Inspect/Check (discrete inspection/static condition)
5.0          Visually Locate/Align (selective orientation)
5.4          Visually Track/Follow (maintain orientation)
5.9          Visually Read (symbol)
7.0          Visually Scan/Search/Monitor (continuous/serial inspection, multiple conditions)

Auditory Workload Scale

Scale Value  Auditory Scale Descriptor
0.0          No Auditory Activity
1.0          Detect/Register Sound (detect occurrence of sound)
2.0          Orient to Sound (general orientation/attention)
4.2          Orient to Sound (selective orientation/attention)
4.3          Verify Auditory Feedback (detect occurrence of anticipated sound)
4.9          Interpret Semantic Content (speech)
6.6          Discriminate Sound Characteristics (detect auditory differences)
7.0          Interpret Sound Patterns (pulse rates, etc.)

Cognitive Workload Scale

Scale Value  Cognitive Scale Descriptor
0.0          No Cognitive Activity
1.0          Automatic (simple association)
1.2          Alternative Selection
3.7          Sign/Signal Recognition
4.6          Evaluation/Judgment (consider single aspect)
5.3          Encoding/Decoding, Recall
6.8          Evaluation/Judgment (consider several aspects)
7.0          Estimation, Calculation, Conversion

Psychomotor Workload Scale

Scale Value  Psychomotor Scale Descriptor
0.0          No Psychomotor Activity
1.0          Speech
2.2          Discrete Actuation (button, toggle, trigger)
2.6          Continuous Adjustive (flight control, sensor control)
4.6          Manipulative
5.8          Discrete Adjustive (rotary, vertical thumbwheel, lever position)
6.5          Symbolic Production (writing)
7.0          Serial Discrete Manipulation (keyboard entries)
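Analysts who want to use these scale values with the sketches earlier in the article can transcribe them directly into lookup tables; the excerpt below encodes two of the four scales.

```python
# VACP scale values transcribed from the Appendix (two of four scales shown).
VISUAL_SCALE = {
    0.0: "No Visual Activity",
    1.0: "Visually Register/Detect",
    3.7: "Visually Discriminate",
    4.0: "Visually Inspect/Check",
    5.0: "Visually Locate/Align",
    5.4: "Visually Track/Follow",
    5.9: "Visually Read (symbol)",
    7.0: "Visually Scan/Search/Monitor",
}

COGNITIVE_SCALE = {
    0.0: "No Cognitive Activity",
    1.0: "Automatic (simple association)",
    1.2: "Alternative Selection",
    3.7: "Sign/Signal Recognition",
    4.6: "Evaluation/Judgment (single aspect)",
    5.3: "Encoding/Decoding, Recall",
    6.8: "Evaluation/Judgment (several aspects)",
    7.0: "Estimation, Calculation, Conversion",
}
```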
