IMPRINT
Developer(s): Alion Science and Technology, Army Research Laboratory, U.S. Army CCDC Data and Analysis Center
Stable release: 4.6.60.0
Written in: C# (.NET Framework)
Operating system: Microsoft Windows
Type: Discrete-event simulation
Website: www.microsaintsharp.com/home/tools

The Improved Performance Research Integration Tool (IMPRINT) is a discrete-event simulation and human performance modeling software tool developed by the Army Research Laboratory and Micro Analysis and Design (acquired by Alion Science and Technology). It is developed using the .NET Framework. IMPRINT allows users to create discrete-event simulations as visual task networks with logic defined using the C# programming language. IMPRINT is primarily used by the United States Department of Defense to simulate the cognitive workload of its personnel when interacting with new and existing technology to determine manpower requirements and evaluate human performance.[1]

IMPRINT allows users to develop and run stochastic models of operator and team performance. It includes three modules: Operations, Maintenance, and Forces.

In the Operations module, users develop networks of discrete events (tasks) that are performed to achieve mission outcomes. Each task is associated with operator workload, which the user assigns with guidance from IMPRINT. Once a model has been developed, it can be run to predict the probability of mission success (e.g., accomplishment of certain objectives or completion of tasks within a given time frame), time to complete the mission, workload experienced by the operators, and the sequence and timeline of tasks throughout the mission.

Using the Maintenance module, users can predict maintenance manpower requirements, manning requirements, and operational readiness, among other important maintenance drivers. Maintenance models consist of scenarios, segments, systems, subsystems, components, and repair tasks. The underlying built-in stochastic maintenance model simulates the flow of systems into segments of a scenario and the performance of maintenance actions to estimate maintenance man-hours for defined systems.

The Forces module allows users to predict comprehensive, multilevel manpower requirements for large organizations composed of a diverse set of positions and roles. Each force unit consists of a set of activities (planned and unplanned) and jobs. This information, when modeled, helps predict the manpower needed to perform the routine and unplanned work done by a force unit.

IMPRINT helps users to assess the integration of personnel and system performance throughout the system lifecycle, from concept and design to field testing and system upgrades. In addition, IMPRINT can help predict the effects of training or personnel factors (e.g., as defined by Military Occupational Specialty) on human performance and mission success. IMPRINT also has built-in functions to predict the effects of stressors (e.g., heat, cold, vibration, fatigue, use of protective clothing) on operator performance (task completion time, task accuracy).

The IMPRINT Operations module uses a task network, a series of functions which decompose into tasks, to create human performance models.[2] Functions and tasks in IMPRINT models usually represent atomic units of larger human or system behaviors. One of IMPRINT's main features is its ability to model human workload. Users can specify visual, auditory, cognitive, and psychomotor workload levels for individual tasks which can measure overall workload for humans in the system and influence task performance.[3][4]

History

The IMPRINT tool grew out of common U.S. Air Force, Navy, and Army manpower, personnel, and training (MPT) concerns identified in the mid-1970s: how to estimate MPT constraints and requirements early in system acquisition, and how to enter those considerations into the design and decision-making process. The U.S. Navy first developed the HARDMAN (HARDware vs. MANpower) Comparability Methodology (HCM). The Army then tailored the manual HCM, which became known as HARDMAN I, for application to a broad range of weapon systems and later developed an automated version, HARDMAN II. In HARDMAN I and II, however, there was no direct link between MPT and performance. To remedy this shortcoming, the U.S. Army began the development of a set of software analysis modules in the mid-1980s.[5] This set of modules was called HARDMAN III, and although the name was the same, it used a fundamentally different approach for addressing MPT concerns than previous methods: it provided an explicit link between MPT variables and soldier-system performance.[6]

HARDMAN II.2 tool: HARDMAN II was formerly called MIST (Man Integrated Systems Technology). HARDMAN II.2 was first released by the Army Research Institute (ARI) in 1985. It required a VAX-11 computer to host the suite of analytical processes. An upgraded version was released in 1990.

HARDMAN III tools: HARDMAN III was a major development effort of the Army Research Institute's (ARI) System Research Laboratory (which has since become part of the ARL HRED). The contract that supported the work was let in a three-phase development process.[7] Each phase resulted in multiple awards to contractors, based on a competitive evaluation of the work each contractor produced in the previous phase. The first phase, Concept Development, began in September 1986 and was completed in April 1987. Phase 2, Requirements Specification, began in June 1987 and ended in January 1988. Phase 3 began in April 1988 and ended in August 1990.

HARDMAN III was Government-owned and consisted of a set of automated aids to assist analysts in conducting MANPRINT analyses. As PC DOS-based software, the HARDMAN III aids provided a means for estimating manpower, personnel, and training (MPT) constraints and requirements for new weapon systems very early in the acquisition process. The DOS environment imposed several limitations on the HARDMAN III tool set. The most significant problem was the 640K RAM limitation: the original HARDMAN III tools had to be designed so that pieces of the analyses could fit within these RAM blocks. However, the power of a MANPRINT analysis lies in the integration of quantitative variables across the domains of the study. To support a tradeoff of, say, manpower against personnel, the two must be considered in an integrated fashion. Unfortunately, the DOS environment forced the flow of data across the analytical domains to be more stilted and deliberate than was ideal.

Furthermore, the DOS environment imposed limitations on the scope of analysis that could be conducted. Since the HARDMAN III analysis is task-based and includes simulation models of system missions, the amount of data managed at once had to fit within the RAM constraints. This led to a restriction of 400 operations tasks and 500 maintenance tasks.

The nine modules in HARDMAN III were:

  1. MANpower-based System EVALuation aid (MAN-SEVAL): MAN-SEVAL was used to assess human workload.
    1. Workload Analysis Aid (WAA): integrated two key technologies: Micro SAINT simulation and a modified McCracken-Aldrich workload assessment methodology. The modified McCracken-Aldrich methodology was used to assess four workload components (visual, auditory, cognitive, and psychomotor) for each operator. Each task was assigned a scaled value for the four workload components. When the simulation was run, operator workload was tracked over time and could be displayed graphically.
    2. Maintenance Manpower Analysis Aid (MAMA): used to predict maintenance requirements and system availability.
  2. PERsonnel-based System EVALuation aid (PER-SEVAL): PER-SEVAL was used to assess crew performance in terms of time and accuracy. PER-SEVAL had three major components that were used to predict crew performance: (1) performance-shaping functions that predicted task times and accuracies based on personnel characteristics (e.g., Armed Forces Qualification Test, or AFQT, scores) and estimated sustainment training frequencies; (2) stressor degradation algorithms that diminished task performance to reflect the presence of heat, cold, noise, lack of sleep, and mission-oriented protective posture (MOPP) gear; and (3) simulation models that aggregated estimates of individual task performance and produced system performance estimates.
  3. System Performance and RAM Criteria Estimation Aid (SPARC): Helped Army combat developers identify comprehensive and unambiguous system performance requirements needed to accomplish various missions.
  4. MANpower CAPabilities analysis aid (MANCAP): The objective of MANCAP was to help users estimate maintenance man-hour requirements at the system unit level. MANCAP let the analyst perform trade-off analyses between (1) the amounts of time systems are available for combat, given specified numbers and types of maintainers, (2) how often systems fail because of component reliability, and (3) how quickly systems can be repaired when one or more components have failed. MANCAP was originally inspired by the Air Force's Logistics Composite Model (LCOM). The results of MANCAP were used as the basis for estimating Army-wide manpower requirements in FORCE.
  5. Human Operator Simulator (HOS): HOS was a tool that was used to develop improved estimates for task time and accuracy. HOS had built-in models of particular subtasks (called micromodels), such as "hand movement," which help analysts to better estimate how long it would take an operator to do a certain task.
  6. Manpower CONstraints aid (M-CON): Identified the maximum crew size for operators and maintainers and the maximum Direct Productive Annual Maintenance Manhours (DPAMMH).
  7. Personnel CONstraints aid (P-CON): Estimated the significant personnel characteristics that describe and limit the capabilities of the probable soldier population from which the new system's operators and maintainers will come.
  8. Training CONstraints aid (T-CON): T-CON was designed to be used by the Government to identify the types of training programs likely to be available to support new systems, and what the training program for a new system was likely to look like. It also estimated the maximum time needed to train the new system's operators and maintainers, given available training resources.
  9. Force Analysis Aid (FORCE): Provided an Army-wide assessment of manpower and constraints by estimating the numbers of people required and the impacts by types of people (e.g., ASVAB score and MOS).

IMPRINT was originally named the Integrated MANPRINT Tools and was first released in 1995. It was a Windows application that merged the functionality of the nine HARDMAN III tools into one application. In 1997 IMPRINT was renamed the Improved Performance Research Integration Tool; the name changed but the acronym remained the same. Between 1995 and 2006 several enhancements were made to IMPRINT and new releases (versions 2 through 6) were made available. IMPRINT Pro was introduced in 2007. It featured a new interface design and complete integration with the Micro Saint Sharp simulation engine, added enhanced analytical capabilities, and moved IMPRINT from an Army tool to a tri-service tool. IMPRINT has continued to evolve, with new enhancements continually added and new releases made freely available to the user community. IMPRINT has over 800 users supporting Army, Navy, Air Force, Marine Corps, NASA, DHS, DoT, Joint, and other organizations across the country.

Discrete event simulation in IMPRINT

Simulations, or Missions as IMPRINT refers to them, contain a task network called a Network Diagram. The network diagram contains a series of tasks connected by paths which determine control flow. System objects called entities flow through the network to create a simulation. IMPRINT also includes lower-level features such as global variables and subroutines called macros.[8]

Tasks

The task node is the primary element driving the simulation's outcome. Task nodes simulate system behavior through programmer-specified effects, task durations, failure rates, and pathing. Task effects are programmer-specified C# expressions that manipulate variables and data structures when a task is invoked. Task duration can be specified as a fixed value, through a probability distribution, or with a C# expression; task success can be specified in a similar way. Task success influences the effects of the task node and the pathing of the entity. Failure consequences include task repetition, task change, and mission failure, among other options. Control flow and pathing can also be specified by the programmer. IMPRINT provides a series of other nodes with special functionality:

Nodes include:

  • Start Node: Emits the first entity in the model, signifying the start of a simulation execution.[8]
  • End Node: Receives an entity which signifies the end of the simulation.[8]
  • Goal Node: Emits an entity when a specified goal is achieved, activating a secondary task network.[8]
  • Workload Monitor: A visual node, not connected to the task network, which displays the workload value and number of active tasks associated with a specific Warfighter.[8]
  • Function Node: creates a subnetwork diagram which allows users to modularize complex networks into specific tasks.[8]
  • Scheduled Function Node: a Function node which allows the user to specify clock times for the start and end of the execution of the subnetwork tasks.[8]
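The task mechanics described above (stochastic duration, probabilistic success, effects that mutate global state) can be sketched in Python. This is an illustrative sketch only, not IMPRINT's actual C# expression syntax or API; the class, field, and task names are invented for the example.

```python
import random

class TaskNode:
    """Illustrative sketch of an IMPRINT-style task node (not IMPRINT's real API)."""

    def __init__(self, name, mean_duration, success_prob, effect=None):
        self.name = name
        self.mean_duration = mean_duration  # mean of the duration distribution
        self.success_prob = success_prob    # probability the task succeeds
        self.effect = effect                # callback run when the task fires

    def execute(self, state):
        # Duration drawn from a probability distribution, mirroring
        # IMPRINT's option to specify stochastic task times.
        duration = random.expovariate(1.0 / self.mean_duration)
        if self.effect:
            self.effect(state)              # task effect mutates shared state
        # Task success is itself stochastic; failure could trigger
        # repetition, a task change, or mission failure.
        succeeded = random.random() < self.success_prob
        return duration, succeeded

# Hypothetical task: scanning a sector increments a detection counter.
state = {"detections": 0}
scan = TaskNode("scan sector", mean_duration=5.0, success_prob=0.9,
                effect=lambda s: s.update(detections=s["detections"] + 1))
duration, ok = scan.execute(state)
```

Running `execute` once samples a duration, fires the effect (here, bumping the counter), and rolls for success, much as a task node does each time an entity enters it.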

Entities

Entities are dynamic objects which arrive into the system and move through the task network. Entities flow from one task to the next based on the task's path logic. When an entity enters a task, the task's effects are triggered. When the task concludes, the entity moves to the next task. One entity is generated by default at the beginning of the simulation; more can be generated at any point in the simulation based on programmer-specified logic. When all entities reach the end node or are destroyed, the simulation concludes.[8]

Events

Events are occurrences that happen at an instant of simulated time within IMPRINT and change the global state of the system. An event can be the arrival or departure of an entity, the completion of a task, or some other occurrence. Events are stored in a master event log which captures every scheduled event and the simulated time at which it occurs. Due to the stochastic nature of discrete-event simulation, an event will often trigger the generation of a random variate to determine the next time that same event will occur. Thus, as events occur in the simulation, the event log is altered.[8]
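This event bookkeeping follows the classic discrete-event pattern of a future-event list processed in time order. A minimal generic sketch in Python (not IMPRINT's internal implementation; the event names are invented):

```python
import heapq
import random

class EventQueue:
    """Minimal future-event list: events are popped in simulated-time order."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-time events pop in FIFO order

    def schedule(self, time, name):
        heapq.heappush(self._heap, (time, self._seq, name))
        self._seq += 1

    def run(self):
        log = []  # the "master event log": (time, event) pairs as processed
        while self._heap:
            time, _, name = heapq.heappop(self._heap)
            log.append((time, name))
            # An event may generate a random variate to schedule a future
            # event, mirroring the stochastic regeneration described above.
            if name == "task_start":
                self.schedule(time + random.expovariate(1.0), "task_end")
        return log

q = EventQueue()
q.schedule(0.0, "entity_arrival")
q.schedule(0.5, "task_start")
log = q.run()
```

However events are scheduled, the heap guarantees they are processed in nondecreasing simulated time, which is the invariant the master event log relies on.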

Control flow

Once a task concludes, the invoking entity moves to another node which is directly connected to the current node in the task network. Nodes can connect to any number of other tasks, so IMPRINT provides a number of pathing options to determine the task to which the entity moves.[8]

  • Probabilistic pathing allows the programmer to specify a percentage chance for an entity to move to each adjacent node by inputting exact probabilities, summing to one hundred, for each node.[8]
  • Tactical pathing allows the programmer to use C# predicates to determine the pathing of an entity to each adjacent node. If more than one expression evaluates to true, the entity will follow the first path with a true expression.[8]
  • Multiple pathing behaves exactly like tactical pathing, but will path entities to any adjacent node with an expression evaluating to true.[8]
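The three pathing rules can be sketched as selection functions over a task's outgoing branches. The following Python is illustrative only (IMPRINT expresses these as percentages and C# predicates in its interface); the branch names and state are invented:

```python
import random

def probabilistic_path(branches, rng=random):
    """branches: list of (next_task, percent); percents must sum to 100."""
    assert abs(sum(p for _, p in branches) - 100.0) < 1e-9
    roll, cum = rng.uniform(0.0, 100.0), 0.0
    for task, percent in branches:
        cum += percent
        if roll <= cum:
            return task
    return branches[-1][0]  # guard against floating-point edge cases

def tactical_path(branches, state):
    """branches: list of (next_task, predicate); first true predicate wins."""
    for task, pred in branches:
        if pred(state):
            return task
    return None

def multiple_path(branches, state):
    """Like tactical pathing, but routes to every branch whose predicate is true."""
    return [task for task, pred in branches if pred(state)]

# Hypothetical branches leaving a task, conditioned on a fuel level.
state = {"fuel": 30}
branches = [("refuel",   lambda s: s["fuel"] < 50),
            ("continue", lambda s: s["fuel"] >= 50),
            ("log",      lambda s: True)]
chosen = tactical_path(branches, state)   # first true predicate: "refuel"
fanout = multiple_path(branches, state)   # every true predicate: ["refuel", "log"]

pct_branches = [("route_a", 70.0), ("route_b", 30.0)]
pick = probabilistic_path(pct_branches)   # one of the two routes
```

The contrast between `tactical_path` and `multiple_path` shows the one-winner versus fan-out behavior described in the list above.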

Variables and macros

IMPRINT has a number of global variables used by the system throughout a simulation. IMPRINT provides the public global variable Clock which tracks the simulation's current time. IMPRINT also has private variables such as operator workload values. IMPRINT allows the modeler to create custom global variables which can be accessed and modified in any task node. Variables can be of any type native to C#, but the software provides a list of suggested variable types including C# primitive data types and basic data structures. IMPRINT also provides the programmer with the functionality to create globally accessible subroutines called macros. Macros work as C# functions and can specify parameters, manipulate data, and return data.[8]

Human performance modeling

IMPRINT's workload management abilities allow users to model realistic operator actions under different work-overload conditions.[4] IMPRINT allows users to specify Warfighters, which represent human operators in the modeled system. Each task in IMPRINT is associated with at least one Warfighter, and Warfighters can be assigned to any number of tasks, including tasks which execute concurrently.[4] IMPRINT tasks can be assigned VACP workload values.[3] The VACP method allows modelers to identify the visual, auditory, cognitive, and psychomotor workload of each IMPRINT task. In an IMPRINT task, each resource can be given a workload value between 0 and 7, with 0 being the lowest and 7 the highest possible workload for that resource. The VACP scale for each resource provides verbal anchors for certain scale values: for instance, a visual workload of 0.0 corresponds to "no visual activity", while a visual workload of 7.0 corresponds to continuous visual scanning, searching, and monitoring.[9] When a Warfighter is executing a task, their workload is increased by the VACP value assigned to that task. An IMPRINT plugin module was proposed in 2013 to improve the cognitive workload estimation within IMPRINT and make the overall calculation less linear.[10] IMPRINT's custom reporting feature allows modelers to view the workload over time of the Warfighters in their models, and workload monitor nodes allow modelers to view the workload of a specific Warfighter as the simulation executes.[8]
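The VACP bookkeeping amounts to summing each channel's demand over whatever tasks a Warfighter is executing at a given instant. A hedged Python sketch of that calculation (the tasks, times, and demand values are invented for illustration; IMPRINT's own reports are generated by the tool itself):

```python
# The four VACP channels; each active task carries a demand of 0-7 per channel.
CHANNELS = ("visual", "auditory", "cognitive", "psychomotor")

# Hypothetical task timeline for one Warfighter:
# name: (start_time, end_time, {channel: demand})
tasks = {
    "monitor display": (0, 10, {"visual": 5.0, "cognitive": 3.7}),
    "radio call":      (4,  8, {"auditory": 4.2, "cognitive": 1.2}),
}

def workload_at(t, tasks):
    """Per-channel workload at simulated time t: sum demands of active tasks."""
    total = {c: 0.0 for c in CHANNELS}
    for start, end, vacp in tasks.values():
        if start <= t < end:  # task is executing at time t
            for channel, demand in vacp.items():
                total[channel] += demand
    return total

wl = workload_at(5, tasks)  # both tasks active: cognitive = 3.7 + 1.2
```

Sampling `workload_at` across the mission timeline yields the kind of workload-over-time profile that IMPRINT's reports and workload monitor nodes display; spikes where concurrent tasks load the same channel indicate potential overload.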

Research

IMPRINT has been used by scientists at the Army Research Lab to study Unmanned Aerial Systems,[11] workload of warfighter crews,[12][13] and human-robot interaction.[14] The United States Air Force and Air Force Institute of Technology have used IMPRINT to study automated systems,[15][16] human systems integration,[17] and adaptive automation,[18] among other things. The Air Force Institute of Technology in particular is using IMPRINT to research the prediction of operator performance, mental workload, situational awareness, trust, and fatigue in complex systems.[19]

References

  1. Rusnock, Christina F; Geiger, Christopher D (2013). Using Discrete-Event Simulation for Cognitive Workload Modeling and System Evaluation. IIE Annual Conference. Proceedings. Norcross. pp. 2485–2494. ProQuest 1471959351.
  2. Laughery, Ron (1999). "Using discrete-event simulation to model human performance in complex systems". Proceedings of the 31st Conference on Winter Simulation: Simulation, a Bridge to the Future (WSC '99). Vol. 1. pp. 815–820. doi:10.1145/324138.324506. ISBN 978-0-7803-5780-8. S2CID 18163468.
  3. Mitchell, Diane K. (September 2003). Advanced Improved Performance Research Integration Tool (IMPRINT) Vetronics Technology Test Bed Model Development. Army Research Laboratory. DTIC ADA417350.
  4. IMPRINT Pro User Guide, Vol. 1. http://www.arl.army.mil/www/pages/446/IMPRINTPro_vol1.pdf
  5. Kaplan, J. D. (1991). Synthesizing the effects of manpower, personnel, training and human engineering. In E. Boyle, J. Ianni, J. Easterly, S. Harper, & M. Korna (Eds.), Human centered technology for maintainability: Workshop proceedings (AL-TP-1991-0010) (pp. 273–283). Wright-Patterson AFB, OH: Armstrong Laboratory.
  6. Allender, L., Lockett, J., Headley, D., Promisel, D., Kelley, T., Salvi, L., Richer, C., Mitchell, D., Feng, T. "HARDMAN III and IMPRINT Verification, Validation, and Accreditation Report." Prepared for the US Army Research Laboratory, Human Research & Engineering Directorate, December 1994.
  7. Adkins, R., and Dahl (Archer), S.G., “Final Report for HARDMAN III, Version 4.0.” Report E-482U, prepared for US Army Research Laboratory, July 1993
  8. IMPRINT Pro User Guide, Vol. 2. http://www.arl.army.mil/www/pages/446/IMPRINTPro_vol2.pdf
  9. Mitchell, D. K. (2000). Mental Workload and ARL Workload Modeling Tools (ARL-TN-161). Aberdeen Proving Ground.
  10. Cassenti, Daniel N.; Kelley, Troy D.; Carlson, Richard Alan (2013). Differences in performance with changing mental workload as the basis for an IMPRINT plug-in proposal. 22nd Annual Conference on Behavior Representation in Modeling and Simulation, BRiMS 2013 - Co-located with the International Conference on Cognitive Modeling. pp. 24–31. ISBN 978-162748470-1.
  11. Hunn, Bruce P.; Heuckeroth, Otto H. (February 2006). A Shadow Unmanned Aerial Vehicle (UAV) Improved Performance Research Integration Tool (IMPRINT) Model Supporting Future Combat Systems. Army Research Laboratory. DTIC ADA443567.
  12. Salvi, Lucia (2001). Development of Improved Performance Research Integration Tool (IMPRINT) Performance Degradation Factors for the Air Warrior Program. Army Research Laboratory. DTIC ADA387840.
  13. Mitchell, Diane K. (September 2009). Workload Analysis of the Crew of the Abrams V2 SEP: Phase I Baseline IMPRINT Model. Army Research Laboratory. DTIC ADA508882.
  14. Pomranky, R. a. (2006). Human Robotics Interaction Army Technology Objective Raven Small Unmanned Aerial Vehicle Task Analysis and Modeling. ARL-TR-3717.
  15. Colombi, John M.; Miller, Michael E.; Schneider, Michael; McGrogan, Major Jason; Long, Colonel David S.; Plaga, John (December 2012). "Predictive mental workload modeling for semiautonomous system design: Implications for systems of systems". Systems Engineering. 15 (4): 448–460. doi:10.1002/sys.21210. S2CID 14094560.
  16. Storey, Alice A.; Ramírez, José Miguel; Quiroz, Daniel; Burley, David V.; Addison, David J.; Walter, Richard; Anderson, Atholl J.; Hunt, Terry L.; Athens, J. Stephen; Huynen, Leon; Matisoo-Smith, Elizabeth A. (19 June 2007). "Radiocarbon and DNA evidence for a pre-Columbian introduction of Polynesian chickens to Chile". Proceedings of the National Academy of Sciences. 104 (25): 10335–10339. Bibcode:2007PNAS..10410335S. doi:10.1073/pnas.0703993104. PMC 1965514. PMID 17556540.
  17. Miller, Michael; Colombi, John; Tvaryanas, Anthony (2013). "Human systems integration". Handbook of Industrial and Systems Engineering, Second Edition. Industrial Innovation. Vol. 20131247. pp. 197–216. doi:10.1201/b15964-15. ISBN 978-1-4665-1504-8.
  18. Boeke, Danielle K; Miller, Michael E; Rusnock, Christina F; Borghetti, Brett J (2015). Exploring Individualized Objective Workload Prediction with Feedback for Adaptive Automation. IIE Annual Conference. Proceedings. Norcross. pp. 1437–1446. ProQuest 1791990382.
  19. Rusnock, Christina F.; Boubin, Jayson G.; Giametta, Joseph J.; Goodman, Tyler J.; Hillesheim, Anthony J.; Kim, Sungbin; Meyer, David R.; Watson, Michael E. (2016). "The Role of Simulation in Designing Human-Automation Systems". Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience. Lecture Notes in Computer Science. Vol. 9744. pp. 361–370. doi:10.1007/978-3-319-39952-2_35. ISBN 978-3-319-39951-5.