Information

Easy-to-use simulations of human behavior


I'm interested in simulations of human behavior.

Nicky Case has done some really good ones:

https://ncase.me/trust/ - A simulation of repeated prisoner's dilemma under different scenarios, showing how trust can be built and destroyed between people.

https://ncase.me/crowds/ - A simulation of memes spreading in a society, and how the relations between people affect whether a meme will spread or not.

I like the fact that they are easy for people to interact with, and you can see how things we take for granted in society turn out to be emergent phenomena arising from simple individual behavior.

My question: Do you know of more sources of simulations like these that you can share? I'm especially interested in simulations of people's emotional connections with each other, and group dynamics.


Nicky Case is a tough act to follow, but there are a lot of simulations available for a variety of different theories in social psychology and behavioural economics. As such, I won't try to list them all, but provide a "list of lists" that should help you to find more.

  • Wikipedia maintains a list of games in game theory. This does not link to any simulations, but it's easy enough to search Google for them. For example, there are interactive online simulations of the hawk-dove game (single-population and two-population versions), and an interactive version of the Blotto game can be found here. Wikipedia also has a "list of lists". There are also several interactive simulations that are capable of modelling multiple game theory games, including the prisoner's dilemma: GTE, Oyun, Axelrod.
  • NetLogo, from Northwestern University, lists a number of psychology and social science simulations in its models library (sorry no jump-to link, you'll have to scroll down to see the categories), including Rebellion "… an adaptation of Joshua Epstein's model of civil violence (2002)", the Piaget-Vygotsky Game "… to shed light on the ongoing debate between two theories of learning, constructivism and social constructivism", and Ethnocentrism based on a model by "Robert Axelrod and Ross A. Hammond".
  • AnyLogic Cloud offers a variety of simulations for its platform, including agent-based models such as reinforcement learning, an epidemic model, and Schelling segregation, and market models such as the restaurant business model. Several other simulation platforms for social science and human behaviour are listed in Wikipedia's comparison of agent-based modelling software.
  • A list of interactive online economics simulation games.
  • An old simulation on the evolution of social norms.

The above lists are all free; the catalogue would be much larger if paid services, such as classroom education software, were included.
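If you want to poke at the mechanics yourself rather than use a prebuilt tool, the core of Nicky Case's trust simulation is just the repeated prisoner's dilemma. Here is a minimal Python sketch; the payoff values and the two strategies are illustrative choices, not taken from any of the tools above:

    # Minimal repeated prisoner's dilemma: tit-for-tat vs. always-defect.
    # Illustrative payoffs: mutual cooperation 3/3, mutual defection 1/1,
    # lone defector 5, exploited cooperator 0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's previous move.
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        seen_by_a, seen_by_b = [], []   # each player's record of the opponent's moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))   # (9, 14): defection pays in a short, unforgiving match

Tools such as the Axelrod library mentioned above scale this same idea up to tournaments with many strategies, noise, and evolutionary dynamics.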


Dr. Strayer in Action

Observing Driver Distraction


Professors Joe Kearney and Jodie Plumert at the University of Iowa College of Liberal Arts & Sciences explain their research on pedestrian safety using 3-D immersive virtual technology.

Bicycling injuries represent a significant public health problem in the United States. Five- to 15-year-old children represent a particularly vulnerable segment of the population, having the highest rate of injury per million cycling trips. Motor vehicles are involved in approximately one-third of all bicycle-related brain injuries and in 90% of all fatalities resulting from bicycle crashes. Many of these collisions between bicycles and motor vehicles occur at intersections. A critical first step in developing programs to prevent these car-bicycle collisions is understanding more about why such collisions occur. Our work uses virtual environment technology to examine the factors that may put children at risk for car-bicycle collisions when crossing intersections.


Human Behavior

Academic and commercial researchers alike are aiming towards a deeper understanding of how humans act, make decisions, plan, and memorize.

In this guide we will introduce you to human behavior fundamentals and how you can tap into previously unknown secrets of the human brain and mind.

The Complete Pocket Guide

This 52-page guide will introduce you to:

  • The basics… and beyond
  • Best practices in human behavior
  • The theories behind it
  • How to go beyond surveys and focus groups
  • … and much more



Modeling Human and Organizational Behavior: Application to Military Simulations (1998)

The purpose of this chapter is to provide general methodological guidelines for the development, instantiation, and validation of models of human behavior. We begin with a section describing the need for the tailoring of models that incorporate these representations in accordance with specific user needs. The core of the chapter is a proposed methodological framework for the development of human behavior representations.

THE NEED FOR SITUATION-SPECIFIC MODELING

At present, we are a long way from having either a general-purpose cognitive model or a general-purpose organizational unit model that can be incorporated directly into any simulation and prove useful. However, the field has developed to the point that simulations incorporating known models and results of cognition, coordination, and behavior will greatly improve present efforts by the military, if, and only if, the models are developed and precisely tailored to the demands of a given task and situation, for example, the tasks of a tank driver or a fixed-wing pilot. It is also important to note that clear measures of performance of military tasks are needed. Currently, many measures are poorly defined or lacking altogether.

Given the present state of the field at the individual level, it is probably most useful to view a human operator as the controller of a large number of programmable components, such as sensory, perceptual, motor, memory, and decision processes. The key idea is that these components are highly adaptable and may be tuned to interact properly in order to handle the demands of each specific task in a particular environment and situation. Thus, the system may be seen as a framework or architecture within which numerous choices and adaptations must be made when a given application is required. A number of such architectures have been developed and provide examples of how one might proceed, although the field is still in its infancy, and it is too early to recommend a commitment to any one architectural framework (see Chapter 3).

Given the present state of the field at the unit level, it is probably most useful to view a human as a node in a set of overlaid networks that connect humans to each other in various ways, connect humans to tasks and resources, and so forth. One key idea is that these networks (1) contain information, (2) are adaptable, and (3) can be changed by orders, technology, or actions taken by individuals. Which linkages in the network are operable and which nodes (humans, technology, tasks) are involved will need to be specified in accordance with the specific military application. Some unit-level models can be thought of as architectures in which the user, at least in principle, can describe an application by specifying the nodes and linkages. Examples include the virtual design team (Levitt et al., 1994) and ORGAHEAD (Carley and Svoboda, 1996; Carley, forthcoming).
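As a loose illustration of this "node in overlaid networks" view, here is a small Python sketch; the people, tasks, and resources are hypothetical and are not drawn from the cited models:

    # Three overlaid networks over the same hypothetical crew.
    # Layer 1: who communicates with whom; layer 2: who is assigned which tasks;
    # layer 3: who holds which resources. Orders, technology, or individual
    # actions could rewire any layer at run time.
    communicates_with = {"commander": {"gunner", "driver"}, "gunner": {"commander"}, "driver": {"commander"}}
    assigned_tasks = {"commander": {"navigate"}, "gunner": {"engage_target"}, "driver": {"move_vehicle"}}
    holds_resources = {"commander": {"radio"}, "gunner": {"main_gun"}, "driver": {"vehicle"}}

    def can_perform(person, task, required_resource):
        # A task is feasible here if the person is assigned it and the needed
        # resource is held by the person or by someone they communicate with.
        if task not in assigned_tasks.get(person, set()):
            return False
        reachable = {person} | communicates_with.get(person, set())
        return any(required_resource in holds_resources.get(p, set()) for p in reachable)

    print(can_perform("gunner", "engage_target", "main_gun"))   # True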

The panel cannot overemphasize how critical it is to develop situation-specific models within whatever general architecture is adopted. The situations and tasks faced by humans in military domains are highly complex and very specific. Any effective model of human cognition and behavior must be tailored to the demands of the particular case. In effect, the tailoring of the model substitutes for the history of training and knowledge by the individual (or unit), a history that incorporates both personal training and military doctrine.

At the unit level, several computational frameworks for representing teams or groups are emerging. These frameworks at worst supply a few primitives for constructing or breaking apart groups and aggregating behavior and at best facilitate the representation of formal structure, such as the hierarchy, the resource allocation structure, the communication structure, and unit-level procedures inherited by all team members. These frameworks provide only a general language for constructing models of how human groups perform tasks and what coordination and communication are necessary for pursuing those tasks. Representing actual units requires filling in these frameworks with details for a specific team, group, or unit and for a particular task.

A METHODOLOGY FOR DEVELOPING HUMAN BEHAVIOR REPRESENTATIONS

The panel suggests that the Defense Modeling and Simulation Office (DMSO) encourage developers to employ a systematic methodology in developing human behavior representations. This methodology should include the following steps:

Developers should employ interdisciplinary teams.

They should review alternatives and adopt a general architecture that is most likely to be useful for the dominant demands of the specific situation of interest.

They should review available unit-level frameworks and support the development of a comprehensive framework for representing the command, control, and communications (C3) structure. (The cognitive framework adopted should dictate the way C3 procedures are represented.)

They should review available documentation and seek to understand the domain and its doctrine, procedures, and constraints in depth. They should prepare formal task analyses that describe the activities and tasks, as well as the information requirements and human skill requirements, that must be represented in the model. They should prepare unit-level task analyses that describe resource allocation, communication protocols, skills, and so forth for each subunit.

They should use behavioral research results from the literature, procedural model analysis, ad hoc experimentation, social network analysis, unit-level task analysis, field research, and, as a last resort, expert judgment to prepare estimates of the parameters and variables to be included in the model that are unconstrained by the domain or procedural requirements.

They should systematically test, verify, and validate the behavior and performance of the model at each stage of development. We also encourage government military representatives to work with researchers to define the incremental increase in model performance as a function of the effort required to produce that performance.

The sections that follow elaborate on the four most important of these methodological recommendations.

Employ Interdisciplinary Teams

For models of the individual combatant, development teams should include cognitive psychologists and computer scientists who are knowledgeable in the contemporary literature and modeling techniques. They should also include specialists in the military doctrine and procedures of the domain to be modeled. For team-, battalion-, and force-level models, as well as for models of command and control, teams composed of sociologists, organizational scientists, social psychologists, computer scientists, and military scientists are needed to ensure that the resultant models will make effective use of the relevant knowledge and many (partial) solutions that have emerged in cognitive psychology, artificial intelligence, and human factors for analyzing and representing individual human behavior in a computational format. Similarly, employing sociology, organizational science, and distributed artificial intelligence will ensure that the relevant knowledge and solutions for analyzing and representing unit-level behavior will be employed.

Understand the Domain in Depth, and Document the Required Activities and Tasks

The first and most critical information required to construct a model of human behavior for military simulations is information about the task to be performed by the simulated and real humans as regards the procedures, strategies, decision rules, and command and control structure involved. For example, under what conditions does a combat air patrol pilot engage an approaching enemy? What tactics are followed? How is a tank platoon deployed into defensive positions? As in the Soar-intelligent forces (IFOR) work (see Chapter 2), military experts have to supply information about the desired skilled behavior the model is to produce. The form in which this information is collected should be guided by the computational structure that will encode the tasks.

The first source of such information is military doctrine, the "fundamental principles by which military forces guide their actions in support of national objectives" (U.S. Department of the Army, 1993b). Behavioral representations need to take account of doctrine (U.S. doctrine for own forces, non-U.S. doctrine for opposing forces). On the one hand, doctrinal consistency is important. On the other hand, real forces deviate from doctrine, whether because of a lack of training or knowledge of the doctrine or for good reason, say, to confound an enemy's expectations. Moreover, since doctrine is defined at a relatively high level, there is much room for behavior to vary even while remaining consistent with doctrine. The degree of doctrinal conformity that is appropriate and the way it is captured in a given model will depend on the goals of the simulation.

Conformity to doctrine is a good place to start in developing a human behavior representation because doctrine is written down and agreed upon by organizational management. However, reliance on doctrine is not enough. First, it does not provide the task-level detail required to create a human behavior representation. Second, just as there are both official organization charts and informal units, there are both doctrine and the ways jobs really get done. There is no substitute for detailed observation and task analysis of real forces conducting real exercises.

The Army has a large-scale project to develop computer-generated representations of tactical combat behavior, such as moving, shooting, and communicating. These representations are called combat instruction sets. According to the developers (IBM/Army Integrated Development Team, 1993), each combat instruction set should be:

Described in terms of a detailed syntax and structure layout.

Explicit in its reflection of U.S. and opposing force tactical doctrines.

Explicit in the way the combat instruction set will interface with the semiautomated forces simulation software.

Traceable back to doctrine.

Information used to develop the Army combat instruction sets comes from written doctrine and from subject matter experts at the various U.S. Army Training and Doctrine Command schools who develop the performance conditions and standards for mission training plans. The effort includes battalion, company, platoon, squad, and platform/system-level behavior. At the higher levels, the mission, enemy, troops, terrain, and time available (METT-T) evaluation process is used to guide the decision making process. The combat instruction sets, like the doctrine itself, should provide another useful input to the task definition process.

At the individual level, although the required information is not in the domain of psychology or of artificial intelligence, the process for obtaining and representing the information is. This process, called task analysis and knowledge engineering, is difficult and labor-intensive, but it is well developed and can be performed routinely by well-trained personnel.

Similarly, at the unit level, although the required information is not in the domain of sociology or organizational science, the process for obtaining and representing the information is. This process includes unit-level task analysis, social network analysis, process analysis, and content analysis. The procedures involved are difficult and labor-intensive, often requiring field research or survey efforts, but they can be performed routinely by well-trained researchers.

At the individual level, task analysis has traditionally been applied to identify and elaborate the tasks that must be performed by users when they interact with systems. Kirwan and Ainsworth (1992:1) define task analysis as:

… a methodology which is supported by a number of specific techniques to help the analyst collect information, organize it, and then use it to make judgments or design decisions. The application of task analysis methods provides the user with a blueprint of human involvement in a system, building a detailed picture of that system from the human perspective. Such structured information can then be used to ensure that there is compatibility between system goals and human capabilities and organization so that the system goals will be achieved.

This definition of task analysis is conditioned by the purpose of designing systems. In this case, the human factors specialist is addressing the question of how best to design the system to support the tasks of the human operator. Both Kirwan and Ainsworth (1992) and Beevis et al. (1994) describe in detail a host of methods for performing task analysis as part of the system design process that can be equally well applied to the development of human behavior representations for military simulations.

If the human's cognitive behavior is being described, cognitive task analysis approaches that rely heavily on sophisticated methods of knowledge acquisition are employed. Many of these approaches are discussed by Essens et al. (1995). Specifically, Essens et al. report on 32 elicitation techniques, most of which rely either on interviewing experts and asking them to make judgments and categorize material, or on reviewing and analyzing documents.

Descriptions of the physical and cognitive tasks to be performed by humans in a simulation are important for guiding the realism of behavior representations. However, developing these descriptions is time-consuming and for the most part must be done manually by highly trained individuals. Although some parts of the task analysis process can be accomplished with computer programs, it appears unlikely that the knowledge acquisition stage will be automated in the near future. Consequently, sponsors will have to establish timing and funding priorities for analyzing the various aspects of human behavior that could add value to military engagement simulations.

At the unit or organizational level, task analysis involves specifying the task and the command and control structure in terms of assets, resources, knowledge, access, timing, and so forth. The basic idea is that the task and the command and control structure affect unit-level performance (see Chapter 10). Task analysis at the unit level does not involve looking at the motor actions an individual must perform or the cognitive processing in which an individual must engage. Rather, it involves laying out the set of tasks the unit as a whole must perform to achieve some goal, the order in which those tasks must be accomplished, what resources are needed, and which individuals or subunits have those resources.

A great deal of research in sociology, organizational theory, and management science has been and is being done on how to do task analysis at the unit level. For tasks, the focus has been on developing and extending project analysis techniques, such as program evaluation and review technique (PERT) charts and dependency graphs. For the command and control structure, early work focused on general features such as centralization, hierarchy, and span of control. Recently, however, network techniques have been used to measure and distinguish the formal reporting structure from the communication structure. These various approaches have led to a series of survey instruments and analysis tools. There are a variety of unresolved issues, including how to measure differences in the structures and how to represent change.
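A dependency-graph style of unit-level task analysis can be sketched very simply; the tasks, ordering constraints, and resources below are invented for illustration and do not come from the report:

    # Hypothetical unit-level task breakdown: each task lists the tasks that must
    # finish first and the resource it needs. A topological pass yields one
    # feasible ordering, the same information a PERT chart conveys graphically.
    tasks = {
        "plan_route":    {"needs": [],                "resource": "map"},
        "move_to_area":  {"needs": ["plan_route"],    "resource": "vehicles"},
        "set_perimeter": {"needs": ["move_to_area"],  "resource": "infantry"},
        "report_status": {"needs": ["set_perimeter"], "resource": "radio"},
    }

    def schedule(tasks):
        done, order = set(), []
        while len(done) < len(tasks):
            ready = [name for name, spec in tasks.items()
                     if name not in done and all(d in done for d in spec["needs"])]
            if not ready:
                raise ValueError("circular dependency in task graph")
            for name in sorted(ready):
                order.append(name)
                done.add(name)
        return order

    print(schedule(tasks))   # ['plan_route', 'move_to_area', 'set_perimeter', 'report_status']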

Instantiate the Model

A model of human behavior must be made complete and accurate with specific data. Ideally, the model with its parameters specified will already be incorporated into an architectural framework, along with the more general properties of human information processing mechanisms. Parameters for selected sensory and motor processes can and should be obtained from the literature. However, many human behavior representations are likely to include high-level decision making, planning, and information-seeking components. For these components, work is still being done to define suitable underlying structures, and general models at this level will require further research. In many cases, however, the cognitive activities of interest should conform to doctrine or are highly proceduralized. In these cases, detailed task analyses provide data that will permit at least a first-order approximation of the behavior of interest.

Sometimes small-scale analytical studies or field observations can provide detailed data suitable for filling in certain aspects of a model, such as the time to carry out a sequence of actions that includes positioning, aiming, and firing a rifle or targeting and launching a missile. Some of these aspects could readily be measured, whereas others could be approximated without the need for new data collection by using approaches based on prediction methods employed for time and motion studies in the domain of industrial engineering (Antis et al., 1973; Konz, 1995), Fitts' law (Fitts and Posner, 1967), or GOMS [1] (John and Kieras, 1996; Card et al., 1983). These results could then be combined with estimates of perceptual and decision making times to yield reasonable estimates of human reaction times for incorporation into military simulations.
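As a concrete example of this kind of estimate, the sketch below combines Fitts' law (in its common Shannon form, MT = a + b * log2(D/W + 1)) with placeholder perception and decision times; the coefficients and times are illustrative assumptions, not values from the cited sources, and would have to be fit to the task and population being modeled:

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        # Fitts' law, Shannon formulation: MT = a + b * log2(D/W + 1).
        # a (seconds) and b (seconds/bit) are illustrative coefficients.
        index_of_difficulty = math.log2(distance / width + 1)   # in bits
        return a + b * index_of_difficulty

    # Crude reaction-time estimate for a single aim-and-fire step.
    perceive = 0.10                                             # s, assumed
    decide = 0.30                                               # s, assumed
    move = fitts_movement_time(distance=0.40, width=0.05)       # 40 cm reach to a 5 cm target
    print(round(perceive + decide + move, 2))                   # rough total, roughly one second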

Inevitably, there will be some data and parameter requirements for which neither the literature nor modeling and analysis will be sufficient and for which it would be too expensive to conduct even an ad hoc study. In those cases, the developer should rely on expert judgment. However, in conducting this study, the panel found that expert judgment is often viewed as the primary source of the necessary data; we emphasize that it should be the alternative of last resort because of the biases and lack of clarity or precision associated with such judgments.

Much of the modeling of human cognition that will be necessary for use in human behavior representations, particularly those aspects of cognition involving higher-level planning, information seeking, and decision making, has not yet been done and will require new research and development. At the same time, these new efforts can build productively on many recent developments in the psychological and sociological sciences, some of which are discussed in the next chapter.

Verify, Validate, and Accredit the Model

Before a model can be used with confidence, it must be verified, validated, and accredited. Verification refers here to the process of checking for errors in the programming, validation to determining how well the model represents reality, and accreditation to official certification that a model or simulation is acceptable for specific purposes. According to Bennett (1995), because models and simulations are based on only partial representations of the real world and are modified as data describing real events become available, it is necessary to conduct verification and validation on an ongoing basis. As a result, it is not possible to ensure that a model is ever completely and finally validated.

[1] GOMS (goals, operators, methods, and selection rules) is a relatively simple methodology for making quantitative estimates of the performance times for carrying out well-structured procedural tasks.

Verification may be accomplished by several methods. One is to develop tracings of intermediate results of the program and check them for errors using either hand calculations or manual examination of the computations and results. Verification may also be accomplished through modular programming, structured walkthroughs, and correctness proofs (Kleijnen and van Groenendaal, 1992).

Validation is a more complex matter. Indeed, depending on the characteristics of the model, its size, and its intended use, adequate demonstration of validity may not be possible. According to DMSO, validation is defined as "the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended users of the model" (U.S. Department of Defense, 1996). The degree of precision needed for a model is guided by the types and levels of variables it represents and its intended use. For example, some large models have too many parameters for the entire model to be tested; in these cases, an intelligent testing strategy is needed. Sensitivity analysis may be used to provide guidance on how much validity is needed, as well as to examine the contributions of particular models and their associated costs. Carley (1996b) describes several types of models, including emulation and intellective models. Emulation models are built to provide specific advice, so they need to include valid representations of everything that is critical to the situation at hand. Such models are characterized by a large number of parameters, several modules, and detailed user interfaces. Intellective models are built to show proof of concept or to illustrate the impact of a basic explanatory mechanism. Simpler and smaller than emulation models, they lack detail and should not be used to make specific predictions.

Validation can be accomplished by several methods, including grounding, calibration, and statistical comparisons. Grounding involves establishing the face validity or reasonableness of the model by showing that simplifications do not detract from credibility. Grounding can be enhanced by demonstrating that other researchers have made similar assumptions in their models or by applying some form of ethnographic analysis. Grounding is appropriate for all models, and it is often the only level of validation needed for intellective models.

Calibration and statistical comparisons both involve the requirement for real-world data. Real-life input data (based on historical records) are fed into the simulation model, the model is run, and the results are compared with the real-world output. Calibration is used to tune a model to fit detailed real data. This is often an interactive process in which the model is altered so that its predictions come to fit the real data. Calibration of a model occurs at two levels: at one level, the model's predictions are compared with real data; at another, the processes and parameters within the model are compared with data about the processes and parameters that produce the behavior of concern. All of these procedures are relevant to the validation of emulation models.

Statistical or graphical comparisons between a model's results and those in the real world may be used to examine the model's predictive power. A key requirement for this analysis is the availability of real data obtained under comparable conditions. If a model is to be used to make absolute predictions, it is important that not only the means of the model and the means of the real world data be identical, but also that the means be correlated. However, if the model is to be used to make relative predictions, the requirements are less stringent: the means of the model and the real world do not have to be equal, but they should be positively correlated (Kleijnen and van Groenendaal, 1992).
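A minimal sketch of such a statistical comparison is shown below; the model outputs and "real-world" observations are made-up numbers, used only to show the relative-prediction check (positive correlation) and the absolute-prediction check (small mean bias):

    from statistics import mean, stdev

    def pearson_r(xs, ys):
        # Sample Pearson correlation between model outputs and field observations.
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
        return cov / (stdev(xs) * stdev(ys))

    # Hypothetical results across five comparable scenarios.
    model_output = [12.0, 15.5, 9.8, 20.1, 17.3]    # simulation predictions
    field_data = [14.2, 18.0, 11.5, 23.8, 19.9]     # matched real-world measurements

    r = pearson_r(model_output, field_data)
    bias = mean(model_output) - mean(field_data)
    print(f"correlation r = {r:.2f}, mean bias = {bias:.2f}")
    # For relative predictions, a strongly positive r may be enough; for absolute
    # predictions, the bias should also be close to zero.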

Since a model's validity is determined by its assumptions, it is important to provide these assumptions in the model's documentation. Unfortunately, in many cases assumptions are not made explicit. According to Fossett et al. (1991), a model's documentation should provide an analyst not involved in the model's development with sufficient information to assess, with some level of confidence, whether the model is appropriate for the intended use specified by its developers.

It is important to point out that validation is a labor-intensive process that often requires a team of researchers and several years to accomplish. It is recommended that model developers be aided in this work by trained investigators not involved in developing the models. In the military context, the most highly validated models are physiological models and a few specific weapons models. Few individual combatant or unit-level models in the military context have been validated using statistical comparisons for prediction; in fact, many have only been grounded. Validation, clearly a critical issue, is necessary if simulations are to be used as the basis for training or policy making.

Large models cannot be validated by simply examining exhaustively the predictions of the model under all parameter settings and contrasting that behavior with experimental data. Basic research is therefore needed on how to design intelligent artificial agents for validating such models. Many of the more complex models can be validated only by examining the trends they predict. Additional research is needed on statistical techniques for locating patterns and examining trends. There is also a need for standardized validation techniques that go beyond those currently used. The development of such techniques may in part involve developing sample databases against which to validate models at each level. Sensitivity analysis may be used to distinguish between parameters of a model that influence results and those that are indirectly or loosely coupled to outcomes. Finally, it may be useful to set up a review board for ensuring that standardized validation procedures are applied to new models and that new versions of old models are docked against old versions (to ensure that the new versions still generate the same correct behavior as the old ones).


3. Cognitive Modeling in Cyber-Security

There are several ways in which cognitive and behavioral modeling paradigms may be useful in the context of cyber-security. Here we focus on embedded computational process cognitive models and model-tracing techniques. Embedded cognitive models are independent simulations of human cognition and behavior that can interact directly with the task-environment (Salvucci, 2006; Gluck, 2010). In the context of cyber-security, these are cognitive models of network users, defenders, and attackers that can interact with the same software that humans interact with. This may be useful for adding simulated participants in training scenarios, for generating offline predictions in applied tests of network security, or for basic research simulations, especially in the contexts of human-factors and cyber epidemiology.

Cognitive modeling is similar to behavioral modeling, and is often employed for similar purposes. For example, a behavioral model of desktop user behavior may be a Markov state-transition probability matrix, stating that if the user is in the state where they are typing an email, they may transition to a state where they are looking up something on Google with a probability x and to a state where they are installing software with a probability y. A cognitive model may represent the same state-transitions as state-actions (a.k.a. productions) and assign utilities to each state-action pair. State transitions may then be calculated directly from state-action utilities, with the major difference being that state-action utilities (as well as the states and the actions available in agent memory) will change based on agent experiences.
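The contrast can be made concrete with a short Python sketch; the states, transition probabilities, rewards, and learning rate are all illustrative placeholders rather than parameters from any published model:

    import random

    # Behavioral model: a fixed Markov transition matrix over desktop-user states.
    transition = {
        "email":   {"email": 0.5, "browse": 0.3, "install": 0.2},
        "browse":  {"email": 0.4, "browse": 0.5, "install": 0.1},
        "install": {"email": 0.6, "browse": 0.4, "install": 0.0},
    }

    def next_state(state):
        # Sample the next state from the fixed transition probabilities.
        r, cumulative = random.random(), 0.0
        for target, p in transition[state].items():
            cumulative += p
            if r < cumulative:
                return target
        return state

    # Cognitive-model flavor: the same transitions expressed as state-action
    # utilities that drift with experience, so the effective "matrix" changes
    # as the simulated agent learns.
    utility = {("email", "browse"): 1.0, ("email", "install"): 0.5, ("email", "email"): 1.2}

    def update_utility(state, action, reward, alpha=0.1):
        # Move the state-action utility a fraction alpha toward the obtained reward.
        utility[(state, action)] += alpha * (reward - utility[(state, action)])

    print(next_state("email"))                       # e.g. "browse"
    update_utility("email", "install", reward=-1.0)  # a bad install lowers that action's utility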

Simulations of network users, defenders, and attackers require models that include cognitive processes and generic knowledge, as well as domain-specific facts and procedures. There is a variety of cognitive architecture software that attempts to provide modelers with fundamental sets of generic cognitive processes and basic knowledge (e.g., ACT-R, Soar, Sigma, PyIBL, Clarion; Anderson and Lebiere, 1998; Sun, 2006; Anderson, 2007; Laird, 2012; Morrison and Gonzalez, 2016; Rosenbloom et al., 2016). Cognitive architectures often overlap in cognitive theory and capabilities. However, different architectures often have different assumptions and implementations of generic cognitive processes, different modeling languages and requirements, and a different focus in terms of level of analysis and cognitive time-scale. For this reason, some architectures may be preferable to others depending on the purpose of the modeling effort. For example, the Soar and ACT-R architectures both include reward-based learning mechanisms and can update the aforementioned state-action utilities based on agent experiences. However, Soar may be the more appropriate framework for modeling multi-step planning (Laird, 2012), whereas ACT-R may be the better choice when precise fact-retrieval times are of importance (Anderson, 2007).

Regardless of the initial cognitive architecture choice, the modeling system can be tuned based on the specific task and population being modeled. There is no limit to such tuning, enabling modelers to add and remove whole modules in their architecture of choice. However, most of the time such tuning takes the form of parameter value adjustments and model development. Model development is often a form of knowledge engineering—specification of potential goals, inputs, facts, and procedures assumed to be in the mind of the human being modeled.

There are many models simulating parts of network user behavior. For example, in independent efforts, Fu and Pirolli (2007) and Peck and John (1992) developed models that make fair predictions of network user behavior in a web browser based on current goals. There are models simulating how goals are retrieved (e.g., Altmann and Trafton, 2002) and how they are juggled (e.g., Salvucci, 2005). There are user modeling efforts that have focused on social network use (e.g., Hannon et al., 2012), chat behavior (e.g., Ball et al., 2010), team performance (Ball et al., 2010), and email activity (Dredze and Wallach, 2008). Finally, robust models of human cognition, especially in the realm of reward-based motivation (e.g., Nason and Laird, 2005; Fu and Anderson, 2006), can aid in explaining and predicting human behavior in the cyber domain (e.g., Maqbool et al., 2017). There are also many efforts to integrate individual models into a comprehensive model that can encompass multi-agent behavior and network-level dynamics (Romero and Lebiere, 2014). Such models can become an essential component of cyber simulations, useful for generating realistic traffic and security holes. Model-based agents can act as simulated humans, switching between applications, clicking links, and downloading and installing software.

Attacker and defender models require more domain-specific knowledge. Unfortunately, subject-matter experts in this field are rarely available to the academic groups that do the bulk of cognitive model development. Some core components of human-software interaction may be modeled without any deeper attacker/defender subject-matter expertise. For example, Instance-Based Learning theory (Gonzalez et al., 2003), integrated with the memory dynamics of ACT-R (Anderson, 2007), has been employed in efforts to explain the situational awareness of cyber analysts (Arora and Dutt, 2013; Dutt et al., 2013; Gonzalez et al., 2014) and to predict the role of intrusion-detection systems in cyber-attack detection (Dutt et al., 2016). These modeling efforts involved abstracted scenarios, but they still exemplify useful research for understanding and predicting expert behavior. Moreover, in the case where cognitive models are to be exported as part of decision-aid software for real-world cyber-security experts, abstract states and procedures may always be remapped to more specific domain correlates.
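A highly simplified, self-contained sketch of the instance-based idea is shown below; the decay and temperature parameters, the alert-handling options, and the payoffs are illustrative assumptions and are not taken from the cited studies or from PyIBL:

    import math

    # Each stored instance records an option, the outcome observed, and when it occurred.
    instances = [("investigate", 1.0, 1), ("ignore", 0.0, 2), ("investigate", -0.2, 3)]

    def activation(t, now, d=0.5):
        # ACT-R-style base-level activation for a single-use instance:
        # more recent experiences are more active.
        return math.log((now - t) ** -d)

    def blended_value(option, now, temperature=0.25):
        # Blend the outcomes of matching instances, weighted by retrieval strength.
        relevant = [(out, t) for (opt, out, t) in instances if opt == option]
        if not relevant:
            return 0.0                                  # unexplored option: neutral default
        weights = [math.exp(activation(t, now) / temperature) for (_, t) in relevant]
        total = sum(weights)
        return sum((w / total) * out for w, (out, _) in zip(weights, relevant))

    # A simulated analyst chooses the option with the higher blended value at time 4.
    print(max(["investigate", "ignore"], key=lambda o: blended_value(o, now=4)))
    # Prints "ignore": the recent poor outcome of investigating outweighs the older success.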

Regardless of whether the attempt is to model users, defenders, or attackers, tailoring the model to reflect what may be known about the individuals being modeled may be necessary to achieve better precision and use in the simulation. Model tailoring may be done during and prior to model initialization, as well as live, while the model is running, based on incoming data points. Much of model tailoring takes the form of adjusting model parameters (e.g., learning rate, exploratory tendencies), but some of it takes the form of adjusting model experiences on the fly to match human subject experiences. This latter form of tailoring is known as model-tracing.

The focus of model-tracing is on tuning a cognitive model to the real in-task experiences of a specific individual. This technique is employed for maintaining an individual's cognitive state throughout that individual's time within the task-environment. For example, Anderson et al. (1995) employed model-tracing in automated 'cognitive tutors' to predict why students made certain errors on algebra problems, so as to better tailor instruction to each individual student. In the context of cyber-security, model-tracing of network user and defender cognition can aid in predicting potential biases, errors, and negligence, and model-tracing of attacker cognition can aid in predicting probable attack paths.
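The sketch below illustrates the general shape of model-tracing: each action actually observed from one user is folded into the model's state, and the updated model is then asked for a prediction. The "model" here is only a transition counter standing in for a full cognitive architecture, and the action names are hypothetical:

    from collections import defaultdict

    class TracedUserModel:
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))  # observed transitions
            self.state = None                                    # last observed action

        def observe(self, action):
            # Keep the model's internal state in step with the real user's behavior.
            if self.state is not None:
                self.counts[self.state][action] += 1
            self.state = action

        def predict_next(self):
            # Predict the action this particular user most often takes from here.
            options = self.counts[self.state]
            return max(options, key=options.get) if options else None

    model = TracedUserModel()
    for action in ["open_email", "click_link", "open_email", "click_link", "open_email"]:
        model.observe(action)
    print(model.predict_next())   # "click_link": what this user usually does after opening email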

The following sections discuss model embedding in network simulations, model initialization and dynamic tailoring, the use of modeling in defender-attacker dynamics, and the use of modeling in automation.


Social/Personality Psychology

The Social/Personality Psychology program at Yale University has trained research scholars for more than sixty years. Under the influence of Carl Hovland in the 1940s and 1950s, the Yale program was concerned primarily with persuasion and attitude change. This group of psychologists, some of whom continue to be active in the Department even today, set the course for the Yale program through their investigation of problems such as the links between frustration and aggression, public opinion formation, and the cognitive basis of social behavior. During these years and the decades that followed, the program remained committed to training students interested in both laboratory-based methods and field research. The Social/Personality program has focused on advancing basic knowledge about intrapersonal and interpersonal processes, while at the same time encouraging applications of these theoretically driven investigations.

Since its inception, the character of the Social/Personality program has been unique in combining four training goals. First, we believe that training students in scientific fundamentals is the most effective way to influence progress in the field of psychology. Second, in addition to a strong emphasis on traditional laboratory experiments as the primary tool of the Social/Personality psychologist, the training focus has also encompassed diverse methodologies such as field experimentation, survey techniques, computer simulation, and case studies (where the “case” might be an individual, group, or organization). Third, the program attempts to foster an awareness among students of the use of applied contexts to test theoretically based ideas. Finally, the faculty in Social/Personality Psychology is committed to an integration of personality processes and interpersonal influences in the study of human behavior. We believe that meaningful analyses of human behavior can best be accomplished when researchers investigate interactions between intrapersonal processes (e.g., emotion, social cognition, motivation, attitudes, and belief systems) and social behavior (e.g., persuasion, communication, decision making, stereotyping, political behavior, health behavior, and intergroup cooperation or conflict).

We believe that young investigators are best trained by a program emphasizing carefully supervised independent research with one or more members of the faculty. Although students receive classroom training in the essentials of general psychology theory, research methods, history, and the current literature, they are encouraged from their first days at Yale to develop a program of collaborative research with members of the faculty. There are only a few course requirements, and students are expected to construct a program consistent with their own research interests that includes elective courses in other areas of psychology and in other social science fields. The Social/Personality area meets as a group every Monday for research presentations and discussion. Individuals interested in specific areas of specialization such as political psychology, health psychology, emotion, or social cognition can attend additional weekly meetings of like-minded faculty and students.


Introduction

The Influence of Surrounding Environments on Behavior: Research Limitations

Our surrounding physical environment can influence behavior (Waterlander et al., 2015) as it "affords" (per Gibson, 1979) the activities of the broader social, political, and cultural world. By understanding how our surrounding environment affects occupants, researchers can identify evidence-based design approaches such as developing standardized evaluation toolkits (Joseph et al., 2014; Rollings and Wells, 2018), identifying design moderators (Rollings and Evans, 2019), and ultimately informing policy, including guidelines governing how facilities are built, renovated, and maintained (Sachs, 2018). By understanding how environments affect behaviors on a microbehavioral (i.e., unconscious) level, researchers can identify appropriate interventions (e.g., providing more sidewalks to encourage physical activity) and thereby inform the development of more effective informational and environmental interventions to improve desirable behavior (Marcum et al., 2018).

However, experimentally examining the influence of our surrounding environment on behavior is challenging. Real-life environmental manipulations may be costly and even politically challenging to implement (Schwebel et al., 2008). On the other hand, behaviors induced in conventional lab-based environments may not be generalizable to real-life environments (Ledoux et al., 2013). The influence of the surrounding environment on behaviors might be better understood (Ledoux et al., 2013) if researchers could immerse participants in complex physical and social environments that are ecologically valid while being highly controlled (Veling et al., 2016). Because of this, simulations are sometimes used to explore the relationship between environment and behavior (Marans and Stokols, 2013). Potential simulations include mockups, sketches, photographs, models, and immersive virtual environments (IVEs). While CAVE automatic virtual environments (CAVEs; Cruz-Neira et al., 1993) and head-mounted displays (HMDs) have both been used to simulate such environments, the recent increase in the availability of consumer HMDs means that many more researchers can now use IVEs to answer questions about the effects of surrounding environments on behaviors. In this review, we synthesize peer-reviewed research that used IVEs presented in HMDs for research on behavior influenced by our surrounding environment, with the aim of showcasing the solutions found by previous researchers. As virtual reality (VR) and IVEs will be mentioned frequently in this review, it is important to distinguish "VR" as the technology used to create "IVEs."

Immersive Virtual Environment Tools for Human Behavior Research: Making the Case

Past research suggests that VR is a useful research tool to simulate real-life environmental features, as it allows researchers to immerse participants in hypothetical contexts and study their responses to controlled environmental manipulations otherwise difficult to examine in real-life environments (Parsons et al., 2007; Schwebel et al., 2008; Poelman et al., 2017; Ahn, 2018). Considerable work has demonstrated VR's ability to elicit behavioral responses to virtual environments, even when the participant is well aware that the environment is not "real", as in demonstrations of the classic "pit demo" (Meehan et al., 2003).

In 2002, Blascovich and colleagues foresaw the advantages of VR as a tool for research in the social sciences. Although Blascovich's original article discussed the use of VR as a tool for social psychology specifically, the advantages he describes for balancing experimental control and mundane realism and improving replicability and representative sampling have made it a tool of interest for researchers in several social science fields. VR has a high degree of realism: users tend to react to scenarios as if they were occurring in the real world. VR allows for a high degree of experimental control. Environments, events, and even virtual people can be programmed to appear to every user in the same way. Thus, VR has already been used extensively for diagnosis (Parsons et al., 2007), clinical education (Lok et al., 2006; Atesok et al., 2016), and clinical and experimental interventions (Difede and Hoffman, 2002; Wiederhold and Wiederhold, 2010; Wiederhold, 2017).

VR provides critical benefits over other methods available for behavior research (Schwebel et al., 2008). These advantages are particularly applicable when considering the influence of environments on behavior. VR has the potential to examine how people behave in real-life situations, without exposing participants to the risk and inconsistency of real-world environments (Blascovich et al., 2002). Participants can safely experience immersion in the virtual environment when the real environment is hazardous (Viswanathan and Choudhury, 2011), permitting researchers to ethically examine potentially dangerous behaviors (Schwebel et al., 2012). Additionally, it is relatively easy to manipulate environmental factors such as noise and crowding in virtual environments (Neo et al., 2019).

The Design of VR Environments for Behavior Studies: Research Gap

A prototype "is an artifact that approximates a feature (or multiple features) of a product, service, or system" (Otto and Wood, 2001; Camburn et al., 2017, p. 1) and "a virtual prototype is one which is developed (and tested) on a computational platform" (Camburn et al., 2017, p. 17). VR, especially its prototyping functions (i.e., the test-refinement-completion of designs using digital mockups; Ulrich and Eppinger, 2012), has been increasingly applied to behavior research. In this review, we examine VR's potential to address environmental effects on behavior. In these cases, IVEs should be designed such that interactions between the individual and the virtual environment are as analogous as possible to interactions that would take place if the individual were in the actual environment, with the ultimate goal of developing a more robust way of examining the impact of the surrounding environment on behavior.

VR is generally considered to be a high-presence medium. Presence refers to the sense of "being there" in the VR environment (Heeter, 1992; Slater et al., 2009). While presence and immersion are terms sometimes used interchangeably, researchers have distinguished between the subjective psychological sense of presence and immersion, which can be considered a quality of the technology (Slater, 2018). A virtual reality setup that provides highly detailed visual content, spatialized sound, and haptic feedback (e.g., through vibrating controllers) would be considered more immersive than a scene rendered on a desktop monitor. Greater immersion is generally considered to increase presence (Cummings and Bailenson, 2016). Because consumer HMDs have come down in cost while retaining a high sense of presence, it is plausible for many more researchers to use VR for prototyping applications; thus, we focus our recommendations on this larger pool of potential researchers.

While considerable valuable work has used CAVE or desktop-based virtual environments to examine behavior, we have limited our analysis in this review to studies that use HMDs to study behavior as it relates to the environment. The relatively lower cost and portability of new consumer HMDs mean that researchers who have not previously engaged with virtual reality now have the opportunity to use these systems for their research. This review aims to provide a summary of design considerations drawn from existing research in virtual reality that might prove useful to potential researchers who are not experienced in this area.

The qualities of HMDs provide special opportunities and constraints. HMDs combine portability with the ability to block out the surrounding environment, making them good for “in-the-wild” studies (Oh et al., 2016). The greater presence HMDs can provide is particularly important to these behavioral studies but comes with tradeoffs. Users do not see their real bodies, so researchers must decide whether or not to include avatars. HMDs allow users to experience spaces that may be larger than the physical space that they are actually in, meaning that users’ abilities to navigate must be programmed and controlled. Such environments allow for the ready tracking of behavioral data (Yaremych and Persky, 2019) and interaction with objects, but all of these interactions must be designed. In this review, we highlight the solutions and tradeoffs that previous researchers have made in this context.

Best Practices for Successful IVE-Based Experimental Studies

Heydarian and Becerik-Gerber (2017) describe "four phases of IVE-based experimental studies" and discuss best practices for consideration in different phases of experimental studies (Figure 1; see Heydarian and Becerik-Gerber, 2017, for an in-depth discussion).

FIGURE 1. Four phases of IVE-based experimental studies.

In this review, we focus on the "development of experimental procedure" phase, described by Heydarian and Becerik-Gerber as Phase 2. This includes the design and setup of the IVEs, especially considerations involving the level of detail required (i.e., factor(s) recognizable by participants; Heydarian and Becerik-Gerber, 2017). This may differ between studies and can include visual appearance, behavioral realism, and virtual human behavior. To meet the study objectives, a sense of presence is key, allowing study participants to feel "there" and thus behave as if they were in the actual environment.

However, information on the design process in Phase 2 can be hard to find. Researchers typically describe the "final environment" they have designed in publications, but justifications for the many design decisions they have made in the development of the virtual environment are less common, probably at least in part due to publication length limitations. However, this information is extremely valuable. The following review expands on the work by Heydarian and Becerik-Gerber (2017) by reviewing and synthesizing strategies from 18 studies using IVEs for behavior research. In addition, we have created a wiki (https://osf.io/gyadu/) to collect citations for other papers that use IVEs for this purpose so that this database can be updated. We hope this synthesis and this wiki will be an additional resource for researchers new to this space to build on the knowledge of previous researchers and make informed choices when they are designing such IVEs.


7 Ways to Measure Human Behavior [CHART]

Human behavior can be measured in a multitude of ways, which is why we have made – and now updated – the chart below to guide you in choosing your ideal measurement.

Human behavior is a complex interplay of a variety of different processes, ranging from completely unconscious modulations of emotional reactions to decision-making based on conscious thoughts and cognition. In fact, each of our emotional and cognitive responses is driven by factors such as arousal, workload, and environmental conditions that impact our physiological state in that very moment.

“I want to measure human behavior. Which biosensor should I use?”

Below is your kick-start to the what, how, and why of biometric measurements in one handy format.

To help you peek beneath the surface of human behavior and its underlying processes, we have recently set you up with the ins and outs of Eye Tracking, EEG, GSR, and Facial Expression Analysis.

Admittedly, if you are new to the field it can be quite overwhelming to gain a solid overview of the different biometric sensors and the available metrics (let alone the interpretation of physiological data), given the fact that each modality will provide insight into a specific key aspect of human behavior.

Figure: emotional valence and intensity visualized in a graph.

Which sensor is the most suitable to address your research question? Is one sensor alone able to deliver all the insights you seek or should you rather opt for multiple sensors to get to the bottom of things? With the help of the chart above, you should be a step closer to answering these questions.

I hope this gives you some deeper insights into the world of human behavior measurements. If you have any questions you are always more than welcome to contact us.


How AI Will Cause Desolation

Not to scare you, but if AI surpasses the human race in general intelligence and becomes "super-intelligent", then it could grow beyond our capacity to manage it and get out of hand. Not being able to control what we've created: what a dramatic way to become obsolete that would be! At this pace, the fate of humanity might rest upon machines someday.

AI will also make a huge difference to the world economy and geopolitics. McKinsey estimates that AI may deliver additional economic output of around US $13 trillion by 2030, increasing global GDP by about 1.2% annually.


Table of contents

Cognitive neuroscience

Cognitive neuroscience is the overlapping science of the 'dry' and the 'wet' parts of the brain, where dry represents the cognitive part (mind, emotions, and senses) and wet represents the brain itself. This combination of scientific disciplines tries to explain the connection between neural activities in the brain and mental processes, in order to answer the question of how neural circuits in the brain affect cognition.

This blog series addresses the interplay between the brain, behavior, and emotions, in the field of cognitive neuroscience:

Autism research: Observing infants to detect autism

Making the same movement multiple times, known as repetitive movements, is an important step in a newborn child's development and in learning how to use their limbs. Repeating these movements is typical of motor development, but an increased frequency of repetitive movements can be an early indicator of neurodevelopmental disorders.

Purpura et al. conducted a retrospective analysis of video clips taken from home videos recorded by parents to verify whether a higher frequency of repetitive movements could differentiate infants with autism spectrum disorder (ASD) from infants with Developmental Delay (DD) and Typical Development (TD), analyzing the age range between six and 12 months.

Read more about their study here.

Find out how The Observer XT is used in a wide range of studies and how it can elevate your research!

  • Free white papers and case studies
  • Customer success stories
  • Recent blog posts

Adolescent research

How are adolescents’ emotions socialized by mothers and close friends? What can parents do to prevent escalating conflicts? What is the role of early childhood stress and inhibitory control in adolescent substance use? These and more studies are excellent examples of adolescent research:

On-site observational studies

In some cases observations for your study are best performed on-site. For example, you might want to observe people in a natural setting: at home, in a shop, in the classroom, or in the office.

Another case where on-site research would be beneficial is when your participants are experiencing health issues, preventing them from travelling to your lab. Conducting your research on location enables you to study people that are otherwise difficult to reach.

In this blog post, we highlighted two cases of on-site observational studies with older age groups, conducted at home or at a healthcare facility.

Doctor-patient interactions and the use of humor

Science has proven that laughter is healthy. However, how often is humor actually used during doctor-patient interactions? To characterize the logistics of humor in medical encounters, such as how often it occurs, who introduces it, and what it is about, researcher Phillips and her team analyzed audio- and video-recorded clinical encounters from outpatient primary and specialty care visits.

More healthcare research examples

Healthcare research - sometimes also called "medical research" or "clinical research" - refers to research that is done to learn more about healthcare outcomes. There are plenty of healthcare research examples on our Behavioral Research Blog:



Introduction

The Influence of Surrounding Environments on Behavior: Research Limitations

Our surrounding physical environment can influence behavior (Waterlander et al., 2015) as it “affords” (per Gibson, 1979) the activities of the broader social, political, and cultural world. By understanding how our surrounding environment affects occupants, researchers can identify evidence-based design approaches such as developing standardized evaluation toolkits (Joseph et al., 2014; Rollings and Wells, 2018), identifying design moderators (Rollings and Evans, 2019), and ultimately informing policy, including guidelines governing how facilities are built, renovated, and maintained (Sachs, 2018). By understanding how environments affect behaviors on a microbehavioral (i.e., unconscious) level, researchers can identify appropriate interventions (e.g., providing more sidewalks to encourage physical activity) and thereby inform the development of more effective informational and environmental interventions to improve desirable behavior (Marcum et al., 2018).

However, experimentally examining the influence of our surrounding environment on behavior is challenging. Real-life environmental manipulations may be costly and even politically challenging to implement (Schwebel et al., 2008). On the other hand, behaviors induced in conventional lab-based environments may not be generalizable to real-life environments (Ledoux et al., 2013). The influence of the surrounding environment on behaviors might be better understood (Ledoux et al., 2013) if researchers could immerse participants in complex physical and social environments that are ecologically valid while being highly controlled (Veling et al., 2016). Because of this, simulations are sometimes used to explore the relationship between environment and behavior (Marans and Stokols, 2013). Potential simulations can include mockups, sketches, photographs, models, and immersive virtual environments (IVEs). While Cave automatic virtual environments (CAVEs; Cruz-Neira et al., 1993) and head-mounted displays (HMDs) have both been used to simulate such environments, the recent increase in the availability of consumer HMDs means that many more researchers can now use IVEs to answer questions about the effects of surrounding environments on behaviors. In this review, we synthesize peer-reviewed research that used IVEs presented in HMDs to study behavior influenced by the surrounding environment, with the aim of showcasing the solutions found by previous researchers. As virtual reality (VR) and IVEs will be frequently mentioned in this review, it is important to distinguish “VR” as the technology used to create “IVEs.”

Immersive Virtual Environment Tools for Human Behavior Research: Making the Case

Past research suggests that VR is a useful research tool to simulate real-life environmental features, as it allows researchers to immerse participants in hypothetical contexts and study their responses to controlled environmental manipulations otherwise difficult to examine in real-life environments (Parsons et al., 2007; Schwebel et al., 2008; Poelman et al., 2017; Ahn, 2018). Considerable work has demonstrated VR’s ability to elicit behavioral responses to virtual environments, even when the participant is well aware that the environment is not “real”, as in demonstrations of the classic “pit demo” (Meehan et al., 2003).

In 2002, Blascovich and colleagues foresaw the advantages of VR as a tool for research in the social sciences. Although Blascovich’s original article discussed the use of VR as a tool for social psychology specifically, the advantages he describes for balancing experimental control and mundane realism and for improving replicability and representative sampling have made it a tool of interest for researchers in several social science fields. VR has a high degree of realism: users tend to react to scenarios as if they were occurring in the real world. VR also allows for a high degree of experimental control: environments, events, and even virtual people can be programmed to appear to every user in the same way. Thus, VR has already been used extensively for diagnosis (Parsons et al., 2007), clinical education (Lok et al., 2006; Atesok et al., 2016), and clinical and experimental interventions (Difede and Hoffman, 2002; Wiederhold and Wiederhold, 2010; Wiederhold, 2017).

VR provides critical benefits over other methods available for behavior research (Schwebel et al., 2008). These advantages are particularly applicable when considering the influence of environments on behavior. VR has the potential to examine how people behave in real-life situations, without exposing participants to the risk and inconsistency of real-world environments (Blascovich et al., 2002). Participants can safely experience immersion in the virtual environment when the real environment is hazardous (Viswanathan and Choudhury, 2011), permitting researchers to ethically examine potentially dangerous behaviors (Schwebel et al., 2012). Additionally, it is relatively easy to manipulate environmental factors such as noise and crowding in virtual environments (Neo et al., 2019).

The Design of VR Environments for Behavior Studies: Research Gap

A prototype “is an artifact that approximates a feature (or multiple features) of a product, service, or system” (Otto and Wood, 2001; Camburn et al., 2017, p. 1), and “a virtual prototype is one which is developed (and tested) on a computational platform” (Camburn et al., 2017, p. 17). VR, especially its prototyping functions (i.e., the test-refinement-completion of designs using digital mockups; Ulrich and Eppinger, 2012), has been increasingly applied to behavior research. In this review, we examine VR’s potential to address environmental effects on behavior. In these cases, IVEs should be designed such that interactions between the individual and the virtual environment are as analogous as possible to interactions that would take place if the individual were in the actual environment, with the ultimate goal of developing a more robust way of examining the impact of the surrounding environment on behavior.

VR is generally considered to be a high-presence medium. Presence refers to the sense of “being there” in the VR environment (Heeter, 1992; Slater et al., 2009). While presence and immersion are terms sometimes used interchangeably, researchers have distinguished between the subjective psychological sense of presence and immersion, which can be considered a quality of the technology (Slater, 2018). A virtual reality setup that provides highly detailed visual content, spatialized sound, and haptic feedback (e.g., through vibrating controllers) would be considered more immersive than a scene rendered on a desktop monitor. Greater immersion is generally considered to increase presence (Cummings and Bailenson, 2016). Because consumer HMDs have reduced in cost while retaining a high sense of presence, it is plausible for many more researchers to use VR for prototyping applications; thus, we focus our recommendations on this larger pool of potential researchers.

While considerable valuable work has used CAVE or desktop-based virtual environments to examine behavior, we have limited our analysis in this review to studies that use HMDs to study behavior as it relates to the environment. The relatively lower cost and greater portability of new consumer HMDs mean that researchers who have not previously engaged with virtual reality now have the opportunity to use these systems for their research. This review aims to provide a summary of design considerations, drawn from existing research in virtual reality, that might prove useful to potential researchers who are not experienced in this area.

The qualities of HMDs provide special opportunities and constraints. HMDs combine portability with the ability to block out the surrounding environment, making them good for “in-the-wild” studies (Oh et al., 2016). The greater presence HMDs can provide is particularly important to these behavioral studies but comes with tradeoffs. Users do not see their real bodies, so researchers must decide whether or not to include avatars. HMDs allow users to experience spaces that may be larger than the physical space that they are actually in, meaning that users’ abilities to navigate must be programmed and controlled. Such environments allow for the ready tracking of behavioral data (Yaremych and Persky, 2019) and interaction with objects, but all of these interactions must be designed. In this review, we highlight the solutions and tradeoffs that previous researchers have made in this context.

Best Practices for Successful IVE-Based Experimental Studies

Heydarian and Becerik-Gerber (2017) describe “four phases of IVE-based experimental studies” and discuss best practices for consideration in different phases of experimental studies (Figure 1; see Heydarian and Becerik-Gerber, 2017, for an in-depth discussion).

FIGURE 1. Four phases of IVE-based experimental studies.

In this review, we focus on the “development of experimental procedure” phase, described by Heydarian and Becerik-Gerber as Phase 2. This includes the design and setup of the IVEs, especially considerations involving the level of detail required (i.e., the factor(s) recognizable by participants; Heydarian and Becerik-Gerber, 2017). This may differ between studies and can include visual appearance, behavioral realism, and virtual human behavior. To meet the study objectives, a sense of presence is key, allowing study participants to feel “there” and thus behave as if they were in the actual environment.

However, information on the design process in Phase 2 can be hard to find. Researchers typically describe the “final environment” they have designed in publications, but justifications for the many design decisions made in developing the virtual environment are less common, probably at least in part due to publication length limits. Yet this information is extremely valuable. The following review expands on the work of Heydarian and Becerik-Gerber (2017) by reviewing and synthesizing strategies from 18 studies using IVEs for behavior research. In addition, we have created a wiki (https://osf.io/gyadu/) to collect citations for other papers that use IVEs for this purpose, so that this database can be updated. We hope this synthesis and the wiki will be an additional resource that helps researchers new to this space build on the knowledge of previous researchers and make informed choices when designing such IVEs.


Social/Personality Psychology

The Social/Personality Psychology program at Yale University has trained research scholars for more than sixty years. Under the influence of Carl Hovland in the 1940s and 1950s, the Yale program was concerned primarily with persuasion and attitude change. This group of psychologists, some of whom continue to be active in the Department even today, set the course for the Yale program through their investigation of problems such as the links between frustration and aggression, public opinion formation, and the cognitive basis of social behavior. During these years and the decades that followed, the program remained committed to training students interested in both laboratory-based methods and field research. The Social/Personality program has focused on advancing basic knowledge about intrapersonal and interpersonal processes while encouraging applications of these theoretically driven investigations.

Since its inception, the character of the Social/Personality program has been unique in combining four training goals. First, we believe that training students in scientific fundamentals is the most effective way to influence progress in the field of psychology. Second, in addition to a strong emphasis on traditional laboratory experiments as the primary tool of the Social/Personality psychologist, the training focus has also encompassed diverse methodologies such as field experimentation, survey techniques, computer simulation, and case studies (where the “case” might be an individual, group, or organization). Third, the program attempts to foster an awareness among students of the use of applied contexts to test theoretically based ideas. Finally, the faculty in Social/Personality Psychology is committed to an integration of personality processes and interpersonal influences in the study of human behavior. We believe that meaningful analyses of human behavior can best be accomplished when researchers investigate interactions between intrapersonal processes (e.g., emotion, social cognition, motivation, attitudes, and belief systems) and social behavior (e.g., persuasion, communication, decision making, stereotyping, political behavior, health behavior, and intergroup cooperation or conflict).

We believe that young investigators are best trained by a program emphasizing carefully supervised independent research with one or more members of the faculty. Although students receive classroom training in the essentials of general psychology theory, research methods, history, and the current literature, they are encouraged from their first days at Yale to develop a program of collaborative research with members of the faculty. There are only a few course requirements, and students are expected to construct a program consistent with their own research interests that includes elective courses in other areas of psychology and in other social science fields. The Social/Personality area meets as a group every Monday for research presentations and discussion. Individuals interested in specific areas of specialization such as political psychology, health psychology, emotion, or social cognition can attend additional weekly meetings of like-minded faculty and students.


Modeling Human and Organizational Behavior: Application to Military Simulations (1998)

The purpose of this chapter is to provide general methodological guidelines for the development, instantiation, and validation of models of human behavior. We begin with a section describing the need for the tailoring of models that incorporate these representations in accordance with specific user needs. The core of the chapter is a proposed methodological framework for the development of human behavior representations.

THE NEED FOR SITUATION-SPECIFIC MODELING

At present, we are a long way from having either a general-purpose cognitive model or a general-purpose organizational unit model that can be incorporated directly into any simulation and prove useful. However, the field has developed to the point that simulations incorporating known models and results of cognition, coordination, and behavior will greatly improve present efforts by the military, if, and only if, the models are developed and precisely tailored to the demands of a given task and situation, for example, the tasks of a tank driver or a fixed-wing pilot. It is also important to note that clear measures of performance of military tasks are needed. Currently, many measures are poorly defined or lacking altogether.

Given the present state of the field at the individual level, it is probably most useful to view a human operator as the controller of a large number of programmable components, such as sensory, perceptual, motor, memory, and decision processes. The key idea is that these components are highly adaptable and may be tuned to interact properly in order to handle the demands of each specific task in a particular environment and situation. Thus, the system may be seen as a framework or architecture within which numerous choices and adaptations must be made when a given application is required. A number of such architectures have been developed and provide examples of how one might proceed, although the field is still in its infancy, and it is too early to recommend a commitment to any one architectural framework (see Chapter 3).

Given the present state of the field at the unit level, it is probably most useful to view a human as a node in a set of overlaid networks that connect humans to each other in various ways, connect humans to tasks and resources, and so forth. One key idea is that these networks (1) contain information, (2) are adaptable, and (3) can be changed by orders, technology, or actions taken by individuals. Which linkages in the network are operable and which nodes (humans, technology, tasks) are involved will need to be specified in accordance with the specific military application. Some unit-level models can be thought of as architectures in which the user, at least in principle, can describe an application by specifying the nodes and linkages. Examples include the virtual design team (Levitt et al., 1994) and ORGAHEAD (Carley and Svoboda, 1996; Carley, forthcoming).
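As a toy illustration of this "overlaid networks" view, here is a minimal Python sketch; every name, relation, and resource in it is an invented placeholder rather than anything drawn from the frameworks cited above.

```python
# A toy unit represented as overlaid networks; all names and links are invented.
reports_to   = {"gunner": "commander", "driver": "commander"}   # formal hierarchy
communicates = {("commander", "gunner"), ("commander", "driver")}
assigned_to  = {"acquire_target": "gunner", "move_vehicle": "driver"}
has_resource = {"gunner": {"sight"}, "driver": {"map"}}

def can_perform(person, task, needed):
    """A person can perform a task if it is assigned to them and the needed
    resources are held by them or by someone they communicate with."""
    if assigned_to.get(task) != person:
        return False
    neighbours = {b for (a, b) in communicates if a == person} | \
                 {a for (a, b) in communicates if b == person}
    reachable = {person} | neighbours
    available = set().union(*(has_resource.get(p, set()) for p in reachable))
    return needed <= available

print(can_perform("gunner", "acquire_target", {"sight"}))          # True
# An order, a technology change, or an individual's action edits the networks:
communicates.add(("gunner", "driver"))
print(can_perform("gunner", "acquire_target", {"sight", "map"}))   # now True
```

The point of the sketch is only that "changing the unit" amounts to editing these relations, which is the sense in which such networks are adaptable.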

The panel cannot overemphasize how critical it is to develop situation-specific models within whatever general architecture is adopted. The situations and tasks faced by humans in military domains are highly complex and very specific. Any effective model of human cognition and behavior must be tailored to the demands of the particular case. In effect, the tailoring of the model substitutes for the history of training and knowledge by the individual (or unit), a history that incorporates both personal training and military doctrine.

At the unit level, several computational frameworks for representing teams or groups are emerging. These frameworks at worst supply a few primitives for constructing or breaking apart groups and aggregating behavior and at best facilitate the representation of formal structure, such as the hierarchy, the resource allocation structure, the communication structure, and unit-level procedures inherited by all team members. These frameworks provide only a general language for constructing models of how human groups perform tasks and what coordination and communication are necessary for pursuing those tasks. Representing actual units requires filling in these frameworks with details for a specific team, group, or unit and for a particular task.

A METHODOLOGY FOR DEVELOPING HUMAN BEHAVIOR REPRESENTATIONS

The panel suggests that the Defense Modeling and Simulation Office (DMSO) encourage developers to employ a systematic methodology in developing human behavior representations. This methodology should include the following steps:

Developers should employ interdisciplinary teams.

They should review alternatives and adopt a general architecture that is most likely to be useful for the dominant demands of the specific situation of interest.

They should review available unit-level frameworks and support the development of a comprehensive framework for representing the command, control, and communications (C3) structure. (The cognitive framework adopted should dictate the way C3 procedures are represented.)

They should review available documentation and seek to understand the domain and its doctrine, procedures, and constraints in depth. They should prepare formal task analyses that describe the activities and tasks, as well as the information requirements and human skill requirements, that must be represented in the model. They should prepare unit-level task analyses that describe resource allocation, communication protocols, skills, and so forth for each subunit.

They should use behavioral research results from the literature, procedural model analysis, ad hoc experimentation, social network analysis, unit-level task analysis, field research, and, as a last resort, expert judgment to prepare estimates of the parameters and variables to be included in the model that are unconstrained by the domain or procedural requirements.

They should systematically test, verify, and validate the behavior and performance of the model at each stage of development. We also encourage government military representatives to work with researchers to define the incremental increase in model performance as a function of the effort required to produce that performance.

The sections that follow elaborate on the four most important of these methodological recommendations.

Employ Interdisciplinary Teams

For models of the individual combatant, development teams should include cognitive psychologists and computer scientists who are knowledgeable in the contemporary literature and modeling techniques. They should also include specialists in the military doctrine and procedures of the domain to be modeled. For team-, battalion-, and force-level models, as well as for models of command and control, teams composed of sociologists, organizational scientists, social psychologists, computer scientists, and military scientists are needed to ensure that the resultant models will make effective use of the relevant knowledge and many (partial) solutions that have emerged in cognitive psychology, artificial intelligence, and human factors for analyzing and representing individual human behavior in a computational format. Similarly, employing sociology, organizational science, and distributed artificial intelligence will ensure that the relevant knowledge and solutions for analyzing and representing unit-level behavior will be employed.

Understand the Domain in Depth, and Document the Required Activities and Tasks

The first and most critical information required to construct a model of human behavior for military simulations is information about the task to be performed by the simulated and real humans as regards the procedures, strategies, decision rules, and command and control structure involved. For example, under what conditions does a combat air patrol pilot engage an approaching enemy? What tactics are followed? How is a tank platoon deployed into defensive positions? As in the Soar-intelligent forces (IFOR) work (see Chapter 2), military experts have to supply information about the desired skilled behavior the model is to produce. The form in which this information is collected should be guided by the computational structure that will encode the tasks.

The first source of such information is military doctrine, the "fundamental principles by which military forces guide their actions in support of national objectives" (U.S. Department of the Army, 1993b). Behavioral representations need to take account of doctrine (U.S. doctrine for own forces, non-U.S. doctrine for opposing forces). On the one hand, doctrinal consistency is important. On the other hand, real forces deviate from doctrine, whether because of a lack of training or knowledge of the doctrine or for good reason, say, to confound an enemy's expectations. Moreover, since doctrine is defined at a relatively high level, there is much room for behavior to vary even while remaining consistent with doctrine. The degree of doctrinal conformity that is appropriate and the way it is captured in a given model will depend on the goals of the simulation.

Conformity to doctrine is a good place to start in developing a human behavior representation because doctrine is written down and agreed upon by organizational management. However, reliance on doctrine is not enough. First, it does not provide the task-level detail required to create a human behavior representation. Second, just as there are both official organization charts and informal units, there are both doctrine and the ways jobs really get done. There is no substitute for detailed observation and task analysis of real forces conducting real exercises.

The Army has a large-scale project to develop computer-generated representations of tactical combat behavior, such as moving, shooting, and communicating. These representations are called combat instruction sets. According to the developers (IBM/Army Integrated Development Team, 1993), each combat instruction set should be:

Described in terms of a detailed syntax and structure layout.

Explicit in its reflection of U.S. and opposing force tactical doctrines.

Explicit in the way the combat instruction set will interface with the semiautomated forces simulation software.

Traceable back to doctrine.

Information used to develop the Army combat instruction sets comes from written doctrine and from subject matter experts at the various U.S. Army Training and Doctrine Command schools who develop the performance conditions and standards for mission training plans. The effort includes battalion, company, platoon, squad, and platform/system-level behavior. At the higher levels, the mission, enemy, troops, terrain, and time available (METT-T) evaluation process is used to guide the decision making process. The combat instruction sets, like the doctrine itself, should provide another useful input to the task definition process.

At the individual level, although the required information is not in the domain of psychology or of artificial intelligence, the process for obtaining and representing the information is. This process, called task analysis and knowledge engineering, is difficult and labor-intensive, but it is well developed and can be performed routinely by well-trained personnel.

Similarly, at the unit level, although the required information is not in the domain of sociology or organizational science, the process for obtaining and representing the information is. This process includes unit-level task analysis, social network analysis, process analysis, and content analysis. The procedures involved are difficult and labor-intensive, often requiring field research or survey efforts, but they can be performed routinely by well-trained researchers.

At the individual level, task analysis has traditionally been applied to identify and elaborate the tasks that must be performed by users when they interact with systems. Kirwan and Ainsworth (1992:1) define task analysis as:

… a methodology which is supported by a number of specific techniques to help the analyst collect information, organize it, and then use it to make judgments or design decisions. The application of task analysis methods provides the user with a blueprint of human involvement in a system, building a detailed picture of that system from the human perspective. Such structured information can then be used to ensure that there is compatibility between system goals and human capabilities and organization so that the system goals will be achieved.

This definition of task analysis is conditioned by the purpose of designing systems. In this case, the human factors specialist is addressing the question of how best to design the system to support the tasks of the human operator. Both Kirwan and Ainsworth (1992) and Beevis et al. (1994) describe in detail a host of methods for performing task analysis as part of the system design process that can be equally well applied to the development of human behavior representations for military simulations.

If the human's cognitive behavior is being described, cognitive task analysis approaches that rely heavily on sophisticated methods of knowledge acquisition are employed. Many of these approaches are discussed by Essens et al. (1995). Specifically, Essens et al. report on 32 elicitation techniques, most of which rely either on interviewing experts and asking them to make judgments and categorize material, or on reviewing and analyzing documents.

Descriptions of the physical and cognitive tasks to be performed by humans in a simulation are important for guiding the realism of behavior representations. However, developing these descriptions is time-consuming and for the most part must be done manually by highly trained individuals. Although some parts of the task analysis process can be accomplished with computer programs, it appears unlikely that the knowledge acquisition stage will be automated in the near future. Consequently, sponsors will have to establish timing and funding priorities for analyzing the various aspects of human behavior that could add value to military engagement simulations.

At the unit or organizational level, task analysis involves specifying the task and the command and control structure in terms of assets, resources, knowledge, access, timing, and so forth. The basic idea is that the task and the command and control structure affect unit-level performance (see Chapter 10). Task analysis at the unit level does not involve looking at the motor actions an individual must perform or the cognitive processing in which an individual must engage. Rather, it involves laying out the set of tasks the unit as a whole must perform to achieve some goal, the order in which those tasks must be accomplished, what resources are needed, and which individuals or subunits have those resources.

A great deal of research in sociology, organizational theory, and management science has been and is being done on how to do task analysis at the unit level. For tasks, the focus has been on developing and extending project analysis techniques, such as program evaluation and review technique (PERT) charts and dependency graphs. For the command and control structure, early work focused on general features such as centralization, hierarchy, and span of control. Recently, however, network techniques have been used to measure and distinguish the formal reporting structure from the communication structure. These various approaches have led to a series of survey instruments and analysis tools. There are a variety of unresolved issues, including how to measure differences in the structures and how to represent change.
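To make the dependency-graph idea concrete, here is a minimal sketch of a PERT-style earliest-finish computation over a toy unit-level task set; the task names, durations, and dependencies are made up purely for illustration.

```python
from functools import lru_cache

# A tiny PERT-style task graph: each task has a duration (hours) and a list of
# prerequisite tasks. All values are invented for this example.
tasks = {
    "plan_route":      (2, []),
    "issue_orders":    (1, ["plan_route"]),
    "move_to_line":    (4, ["issue_orders"]),
    "set_comms":       (2, ["issue_orders"]),
    "occupy_position": (3, ["move_to_line", "set_comms"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest finish = latest earliest-finish among prerequisites + own duration."""
    duration, deps = tasks[name]
    start = max((earliest_finish(d) for d in deps), default=0)
    return start + duration

# Length of the critical path through the unit-level task set.
print(max(earliest_finish(t) for t in tasks))  # 10 hours in this toy example
```

A real unit-level task analysis would attach resources, skills, and communication requirements to each node, but the basic structure (tasks, dependencies, ordering) is the same.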

Instantiate the Model

A model of human behavior must be made complete and accurate with specific data. Ideally, the model with its parameters specified will already be incorporated into an architectural framework, along with the more general properties of human information processing mechanisms. Parameters for selected sensory and motor processes can and should be obtained from the literature. However, many human behavior representations are likely to include high-level decision making, planning, and information-seeking components. For these components, work is still being done to define suitable underlying structures, and general models at this level will require further research. In many cases, however, the cognitive activities of interest should conform to doctrine or are highly proceduralized. In these cases, detailed task analyses provide data that will permit at least a first-order approximation of the behavior of interest.

Sometimes small-scale analytical studies or field observations can provide detailed data suitable for filling in certain aspects of a model, such as the time to carry out a sequence of actions that includes positioning, aiming, and firing a rifle or targeting and launching a missile. Some of these aspects could readily be measured, whereas others could be approximated without the need for new data collection by using approaches based on prediction methods employed for time and motion studies in the domain of industrial engineering (Antis et al., 1973; Konz, 1995), Fitts' law (Fitts and Posner, 1967), or GOMS, that is, goals, operators, methods, and selection rules, a relatively simple methodology for making quantitative estimates of the performance times for carrying out well-structured procedural tasks (John and Kieras, 1996; Card et al., 1983). These results could then be combined with estimates of perceptual and decision making times to yield reasonable estimates of human reaction times for incorporation into military simulations.
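As a rough illustration of how such first-order estimates might be assembled, the sketch below adds a Fitts' law movement-time term to fixed perceptual and decision-time allowances. The coefficients and the 200 ms stage times are placeholder assumptions, not values from the cited literature.

```python
import math

def fitts_movement_time(distance_mm, width_mm, a=0.1, b=0.15):
    """Fitts' law: MT = a + b * log2(2D / W), in seconds.
    The intercept a and slope b are illustrative placeholders; real values
    would be taken from the literature or from calibration data."""
    return a + b * math.log2(2.0 * distance_mm / width_mm)

def estimated_reaction_time(distance_mm, width_mm,
                            perceive_s=0.2, decide_s=0.2):
    """First-order estimate: perceive + decide + move, in seconds."""
    return perceive_s + decide_s + fitts_movement_time(distance_mm, width_mm)

# Example: reach and press a 20 mm control located 300 mm away.
print(round(estimated_reaction_time(300, 20), 3))  # roughly 1.24 s with these values
```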

Inevitably, there will be some data and parameter requirements for which neither the literature nor modeling and analysis will be sufficient and for which it would be too expensive to conduct even an ad hoc study. In those cases, the developer should rely on expert judgment. However, in conducting this study, the panel found that expert judgment is often viewed as the primary source of the necessary data; we emphasize that it should be the alternative of last resort because of the biases and lack of clarity or precision associated with such judgments.

Much of the modeling of human cognition that will be necessary for use in human behavior representations, particularly those aspects of cognition involving higher-level planning, information seeking, and decision making, has not yet been done and will require new research and development. At the same time, these new efforts can build productively on many recent developments in the psychological and sociological sciences, some of which are discussed in the next chapter.

Verify, Validate, and Accredit the Model

Before a model can be used with confidence, it must be verified, validated, and accredited. Verification refers here to the process of checking for errors in the programming, validation to determining how well the model represents reality, and accreditation to official certification that a model or simulation is acceptable for specific purposes. According to Bennett (1995), because models and simulations are based on only partial representations of the real world and are modified as data describing real events become available, it is necessary to conduct verification and validation on an ongoing basis. As a result, it is not possible to ensure that a model is ever verified and validated once and for all.

Verification may be accomplished by several methods. One is to develop tracings of intermediate results of the program and check them for errors using either hand calculations or manual examination of the computations and results. Verification may also be accomplished through modular programming, structured walkthroughs, and correctness proofs (Kleijnen and van Groenendaal, 1992).

Validation is a more complex matter. Indeed, depending on the characteristics of the model, its size, and its intended use, adequate demonstration of validity may not be possible. According to DMSO, validation is defined as "the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended users of the model" (U.S. Department of Defense, 1996). The degree of precision needed for a model is guided by the types and levels of variables it represents and its intended use. For example, some large models have too many parameters for the entire model to be tested; in these cases, an intelligent testing strategy is needed. Sensitivity analysis may be used to provide guidance on how much validity is needed, as well as to examine the contributions of particular models and their associated costs. Carley (1996b) describes several types of models, including emulation and intellective models. Emulation models are built to provide specific advice, so they need to include valid representations of everything that is critical to the situation at hand. Such models are characterized by a large number of parameters, several modules, and detailed user interfaces. Intellective models are built to show proof of concept or to illustrate the impact of a basic explanatory mechanism. Simpler and smaller than emulation models, they lack detail and should not be used to make specific predictions.

Validation can be accomplished by several methods, including grounding, calibration, and statistical comparisons. Grounding involves establishing the face validity or reasonableness of the model by showing that simplifications do not detract from credibility. Grounding can be enhanced by demonstrating that other researchers have made similar assumptions in their models or by applying some form of ethnographic analysis. Grounding is appropriate for all models, and it is often the only level of validation needed for intellective models.

Calibration and statistical comparisons both involve the requirement for real-world data. Real-life input data (based on historical records) are fed into the simulation model, the model is run, and the results are compared with the real-world output. Calibration is used to tune a model to fit detailed real data. This is often an interactive process in which the model is altered so that its predictions come to fit the real data. Calibration of a model occurs at two levels: at one level, the model's predictions are compared with real data; at another, the processes and parameters within the model are compared with data about the processes and parameters that produce the behavior of concern. All of these procedures are relevant to the validation of emulation models.

Statistical or graphical comparisons between a model's results and those in the real world may be used to examine the model's predictive power. A key requirement for this analysis is the availability of real data obtained under comparable conditions. If a model is to be used to make absolute predictions, it is important that not only the means of the model and the means of the real world data be identical, but also that the means be correlated. However, if the model is to be used to make relative predictions, the requirements are less stringent: the means of the model and the real world do not have to be equal, but they should be positively correlated (Kleijnen and van Groenendaal, 1992).
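A minimal sketch of that distinction, using made-up paired observations: absolute prediction asks whether the model and real-world means agree, whereas relative prediction only asks whether the two series move together.

```python
from statistics import mean, correlation  # statistics.correlation needs Python 3.10+

model_output = [12.0, 15.5, 18.2, 22.7, 25.1]  # hypothetical model predictions
real_world   = [10.1, 13.9, 16.8, 20.3, 23.5]  # hypothetical observations, comparable conditions

mean_gap = abs(mean(model_output) - mean(real_world))   # relevant to absolute predictions
r = correlation(model_output, real_world)                # relevant to relative predictions

print(f"mean gap = {mean_gap:.2f}, Pearson r = {r:.2f}")
# A large mean gap with a high positive r suggests the model can support relative
# comparisons but should not be trusted for absolute forecasts.
```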

Since a model's validity is determined by its assumptions, it is important to provide these assumptions in the model's documentation. Unfortunately, in many cases assumptions are not made explicit. According to Fossett et al. (1991), a model's documentation should provide an analyst not involved in the model's development with sufficient information to assess, with some level of confidence, whether the model is appropriate for the intended use specified by its developers.

It is important to point out that validation is a labor-intensive process that often requires a team of researchers and several years to accomplish. It is recommended that model developers be aided in this work by trained investigators not involved in developing the models. In the military context, the most highly validated models are physiological models and a few specific weapons models. Few individual combatant or unit-level models in the military context have been validated using statistical comparisons for prediction; in fact, many have only been grounded. Validation, clearly a critical issue, is necessary if simulations are to be used as the basis for training or policy making.

Large models cannot be validated by simply examining exhaustively the predictions of the model under all parameter settings and contrasting that behavior with experimental data. Basic research is therefore needed on how to design intelligent artificial agents for validating such models. Many of the more complex models can be validated only by examining the trends they predict. Additional research is needed on statistical techniques for locating patterns and examining trends. There is also a need for standardized validation techniques that go beyond those currently used. The development of such techniques may in part involve developing sample databases against which to validate models at each level. Sensitivity analysis may be used to distinguish between parameters of a model that influence results and those that are indirectly or loosely coupled to outcomes. Finally, it may be useful to set up a review board for ensuring that standardized validation procedures are applied to new models and that new versions of old models are docked against old versions (to ensure that the new versions still generate the same correct behavior as the old ones).
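One simple starting point is a one-at-a-time sensitivity screen, sketched below with a stand-in model function; the parameter names, baseline values, and the +10% perturbation are illustrative assumptions only.

```python
def run_model(params):
    """Stand-in for an expensive simulation run; returns one outcome measure."""
    return 2.0 * params["skill"] - 0.5 * params["fatigue"] + 0.1 * params["comm_delay"]

baseline = {"skill": 1.0, "fatigue": 0.4, "comm_delay": 3.0}
base_out = run_model(baseline)

sensitivities = {}
for name, value in baseline.items():
    perturbed = dict(baseline)
    perturbed[name] = value * 1.10                      # nudge one parameter by +10%
    sensitivities[name] = run_model(perturbed) - base_out

# Parameters with near-zero effect are loosely coupled to the outcome and need
# less validation effort than the influential ones.
print(sorted(sensitivities.items(), key=lambda kv: abs(kv[1]), reverse=True))
```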


3. Cognitive Modeling in Cyber-Security

There are several ways in which cognitive and behavioral modeling paradigms may be useful in the context of cyber-security. Here we focus on embedded computational process cognitive models and model-tracing techniques. Embedded cognitive models are independent simulations of human cognition and behavior that can interact directly with the task-environment (Salvucci, 2006; Gluck, 2010). In the context of cyber-security, these are cognitive models of network users, defenders, and attackers that can interact with the same software that humans interact with. This may be useful for adding simulated participants in training scenarios, for generating offline predictions in applied tests of network security, or for basic research simulations, especially in the contexts of human-factors and cyber epidemiology.

Cognitive modeling is similar to behavioral modeling, and is often employed for similar purposes. For example, a behavioral model of desktop user behavior may be a Markov state-transition probability matrix, stating that if the user is in the state where they are typing an email, they may transition to a state where they are looking up something on Google with a probability x and a state where they are installing software with a probability y. A cognitive model may represent the same state-transitions as state-actions (a.k.a. productions), and assign utilities to each state-action pair. State transitions may be directly calculated based on state-action utilities, with the major difference being that state-action utilities (as well as the states and the actions available in agent memory) will change based on agent experiences.
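To make the contrast concrete, here is a minimal sketch in which the same desktop-user behavior is expressed both ways; every state, probability, utility, and reward value is an invented placeholder rather than anything taken from a published model.

```python
import math
import random

# Behavioral (Markov) model: a fixed state-transition probability matrix.
transitions = {
    "email":   {"email": 0.6, "search": 0.3, "install": 0.1},
    "search":  {"email": 0.4, "search": 0.5, "install": 0.1},
    "install": {"email": 0.7, "search": 0.3, "install": 0.0},
}

def markov_step(state):
    """Sample the next state from the fixed transition probabilities."""
    options = list(transitions[state])
    weights = [transitions[state][s] for s in options]
    return random.choices(options, weights=weights, k=1)[0]

# Cognitive-style agent: utilities over (state, action) pairs, updated by experience.
utilities = {("email", "email"): 0.5, ("email", "search"): 0.2, ("email", "install"): 0.1}

def choose_action(state, temperature=0.25):
    """Softmax choice over the utilities of the actions available in this state."""
    actions = [a for (s, a) in utilities if s == state]
    weights = [math.exp(utilities[(state, a)] / temperature) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

def update_utility(state, action, reward, rate=0.1):
    """Move the chosen state-action utility toward the experienced reward."""
    u = utilities[(state, action)]
    utilities[(state, action)] = u + rate * (reward - u)

state = "email"
print(markov_step(state))              # behavioral model: probabilities never change
action = choose_action(state)          # cognitive-style agent: picks by learned utility
update_utility(state, action, 1.0)     # ...and its utilities drift with experience
```

The transition matrix stays fixed, whereas the agent's utilities (and therefore its effective transition behavior) change with every experienced reward, which is the difference described above.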

Simulations of network users, defenders, and attackers require models that include cognitive processes and generic knowledge, as well as domain-specific facts and procedures. There is a variety of cognitive architecture software that attempts to provide modelers with fundamental sets of generic cognitive processes and basic knowledge (e.g., ACT-R, Soar, Sigma, PyIBL, Clarion; Anderson and Lebiere, 1998; Sun, 2006; Anderson, 2007; Laird, 2012; Morrison and Gonzalez, 2016; Rosenbloom et al., 2016). Cognitive architectures often overlap in cognitive theory and capabilities. However, different architectures often have different assumptions and implementations of generic cognitive processes, different modeling languages and requirements, and a different focus in terms of level of analysis and cognitive time scale. For this reason, some architectures may be preferable to others depending on the purpose of the modeling effort. For example, the Soar and ACT-R architectures both include reward-based learning mechanisms and can update the aforementioned state-action utilities based on agent experiences. However, Soar may be the more appropriate framework for modeling multi-step planning (Laird, 2012), whereas ACT-R may be the better choice when precise fact-retrieval times are of importance (Anderson, 2007).

Regardless of the initial cognitive architecture choice, the modeling system can be tuned based on the specific task and population being modeled. There is no limit to such tuning, enabling modelers to add and remove whole modules in their architecture of choice. However, most of the time such tuning takes the form of parameter value adjustments and model development. Model development is often a form of knowledge engineering—specification of potential goals, inputs, facts, and procedures assumed to be in the mind of the human being modeled.

There are many models simulating parts of network user behavior. For example, in independent efforts, Fu and Pirolli (2007) and Peck and John (1992) developed models that make fair predictions as to network user behavior in a web browser based on current goals. There are models simulating how goals are retrieved (e.g., Altmann and Trafton, 2002) and how they are juggled (e.g., Salvucci, 2005). There are user modeling efforts that have focused on social network use (e.g., Hannon et al., 2012), chat behavior (e.g., Ball et al., 2010), team performance (Ball et al., 2010), and email activity (Dredze and Wallach, 2008). Finally, robust models of human cognition, especially in the realm of reward-based motivation (e.g., Nason and Laird, 2005; Fu and Anderson, 2006), can aid in explaining and predicting human behavior in the cyber domain (e.g., Maqbool et al., 2017). There are also many efforts for integrating individual models into a comprehensive model that can encompass multi-agent behavior at network-level dynamics (Romero and Lebiere, 2014). Such models can become an essential component of simulations in cyber, useful for generating realistic traffic and security holes. Model-based agents can act as simulated humans, switching between applications, clicking links, and downloading and installing software.

Attacker and defender models require more domain-specific knowledge. Unfortunately, subject-matter experts in this field are rarely available to the academic groups that do the bulk of cognitive model development. Some core components of human-software interaction may nevertheless be modeled without deep attacker/defender subject-matter expertise. For example, Instance-Based Learning theory (Gonzalez et al., 2003), integrated with the memory dynamics of ACT-R (Anderson, 2007), has been employed in efforts to explain the situational awareness of cyber analysts (Arora and Dutt, 2013; Dutt et al., 2013; Gonzalez et al., 2014) and to predict the role of intrusion-detection systems in cyber-attack detection (Dutt et al., 2016). These modeling efforts involved abstracted scenarios, but they still exemplify useful research for understanding and predicting expert behavior. Moreover, where cognitive models are to be exported as part of decision-aid software for real-world cyber-security experts, abstract states and procedures can always be remapped to more specific domain correlates.

Regardless of whether the attempt is to model users, defenders, or attackers, tailoring the model to reflect what may be known about the individuals being modeled may be necessary to achieve better precision and use in the simulation. Model tailoring may be done during and prior to model initialization, as well as live, while the model is running, based on incoming data points. Much of model tailoring takes the form of adjusting model parameters (e.g., learning rate, exploratory tendencies), but some of it takes the form of adjusting model experiences on the fly to match human subject experiences. This latter form of tailoring is known as model-tracing.

The focus of model-tracing is on tuning a cognitive model to the real in-task experiences of a specific individual. This technique is employed for maintaining an individual's cognitive state throughout that individual's time within the task-environment. For example, Anderson et al. (1995) employed model-tracing in automated 'cognitive tutors' to predict why students made certain errors on algebra problems, so as to better tailor instruction to each individual student. In the context of cyber-security, model-tracing of network user and defender cognition can aid in predicting potential biases, errors, and negligence; model-tracing of attacker cognition can aid in predicting probable attack paths.
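A minimal sketch of the model-tracing idea, reusing the simple state-action utilities from the earlier sketch; the states, actions, and analyst-assigned outcome scores are invented for illustration.

```python
# Utilities for one simulated individual; values are invented placeholders.
utilities = {("inbox", "open_attachment"): 0.2,
             ("inbox", "report_phishing"): 0.2,
             ("inbox", "delete"):          0.2}

def update_utility(state, action, reward, rate=0.2):
    u = utilities[(state, action)]
    utilities[(state, action)] = u + rate * (reward - u)

# A logged trace of what one real user actually did, with each step scored by the
# analyst (e.g., opening a suspicious attachment counted as a costly outcome).
observed_trace = [("inbox", "open_attachment", -1.0),
                  ("inbox", "delete",           0.5),
                  ("inbox", "open_attachment", -1.0)]

# Model-tracing: step the model through the user's logged experience so that its
# internal state reflects that specific individual rather than a generic user.
for state, action, outcome in observed_trace:
    update_utility(state, action, outcome)

print(max(utilities, key=utilities.get))  # the traced individual's preferred action
```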

The following sections discuss model embedding in network simulations, model initialization and dynamic tailoring, the use of modeling in defender-attacker dynamics, and the use of modeling in automation.




Human Behavior

Academic and commercial researchers alike are aiming towards a deeper understanding of how humans act, make decisions, plan, and memorize.

In this guide we will introduce you to human behavior fundamentals and how you can tap into previously unknown secrets of the human brain and mind.

The Complete Pocket Guide

This 52-page guide will introduce you to:

  • The basics… and beyond
  • Best practices in human behavior
  • The theories behind
  • How to go beyond surveys and focus groups
  • … and much more
