MENTAT


The Intelligence Advanced Research Projects Activity (IARPA) funds high-risk, high-reward research projects aimed at improving the capabilities of the US intelligence community. IARPA’s Sirius Program, under the direction of Dr. Rita Bush, funds research to develop and study serious games that mitigate cognitive bias and improve decision making. The program emphasizes examining how manipulating different game variables affects performance.

Drs. Alice Leung and Talib Hussain of Raytheon BBN Technologies were awarded one of the contracts, and the UCF RETRO Lab was one of many subcontractors working under them on this project, MENTAT, which stands for Mentally ENgaging Training for Analytic Thinking.



About MENTAT

Several teams were part of this year-long effort, including game development (BreakAway Ltd.) and measure creation and validation (Draper Laboratory). While Drs. Jan Cannon-Bowers and Clint Bowers from RETRO served on the pedagogy and study design teams for MENTAT, our major role was data collection and analysis.

This study focused on how different aspects of a game, such as feedback delivery, orientation priming, and the presence of in-game scaffolding, could be manipulated to change players’ understanding of confirmation bias and to improve their decision-making and bias-mitigation abilities. This required the Draper team to develop complex confirmation bias instruments. We also examined the role of engagement in performance through both subjective and physiological measurement. RETRO further examined the psychometric properties of our in-progress Flow scale, while the Draper team validated an engagement algorithm combining heart rate, pulse plethysmography, pupil diameter, and galvanic skin response.
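The validated engagement algorithm itself is Draper’s and is not described here, but as a rough illustration of how several physiological channels might be combined into a single engagement index, the following is a minimal sketch: it z-scores each signal against a resting baseline and averages the results. The function name, channel names, baseline window, and equal weighting are all assumptions made for illustration, not the actual algorithm.

```python
# Illustrative sketch only (not Draper's algorithm): combine physiological
# channels into one composite index by z-scoring each channel against a
# per-participant resting baseline and averaging across channels.
# Channel names, the baseline window, and equal weighting are assumptions.

import numpy as np


def engagement_index(signals: dict, baseline_seconds: float, sample_rate: float) -> np.ndarray:
    """Return a per-sample composite engagement index.

    signals          -- dict mapping channel name (e.g. 'heart_rate', 'ppg',
                        'pupil_diameter', 'gsr') to a 1-D array of samples
    baseline_seconds -- length of the resting baseline at the start of the recording
    sample_rate      -- samples per second (all channels assumed resampled to this rate)
    """
    n_baseline = int(baseline_seconds * sample_rate)
    zscored = []
    for name, samples in signals.items():
        samples = np.asarray(samples, dtype=float)
        baseline = samples[:n_baseline]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma == 0:  # guard against a flat baseline
            sigma = 1.0
        zscored.append((samples - mu) / sigma)
    # Equal-weight average across channels; a validated algorithm would
    # likely weight channels against behavioral engagement measures.
    return np.mean(np.vstack(zscored), axis=0)


if __name__ == "__main__":
    # Synthetic demo: four channels sampled at 100 Hz for 5 minutes.
    rng = np.random.default_rng(0)
    n = 100 * 300
    demo = {
        "heart_rate": 70 + rng.normal(0, 2, n),
        "ppg": rng.normal(0, 1, n),
        "pupil_diameter": 3.5 + rng.normal(0, 0.2, n),
        "gsr": 5 + rng.normal(0, 0.5, n),
    }
    index = engagement_index(demo, baseline_seconds=60, sample_rate=100)
    print(index.shape, float(index.mean()))
```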



MENTAT Data Collection

The data collection effort was massive. The study involved a 1-hour online intake survey, a 2-hour in-person session, and a 30-minute follow-up survey administered 8 weeks later. The in-person study protocol used a complex game delivery system that manipulated in-game variables and provided detailed game performance feedback for each participant. Measures during this session included traditional surveys for engagement and confirmation bias as well as a physiological measurement suite built around a binocular Arrington eye tracker and a BIOPAC MP150 data acquisition and analysis system.


We opened a brand-new lab in the UCF Psychology building specifically for this data collection effort and set up four complete research stations, each with two computers, an eye-tracking system, and a BIOPAC. Twenty-eight research assistants were recruited, and data were collected in six 2-hour blocks, five days a week, for 10 weeks. During that time, we recruited around 580 participants from the UCF Psychology participant pool, 357 of whom completed both the online intake and the in-person experimental session.



General Findings

The goal was to reduce players’ tendency to blindly favor any given hypothesis, including the tendencies to seek only evidence that supports that hypothesis and to give supporting evidence more weight in the decision-making process. The game also sought to enhance declarative knowledge, promote the development of skills similar to those of an intelligence community analyst, and engender the meta-ability to avoid confirmation bias. We were also interested in examining how the game made individuals more aware of their own tendency to experience confirmation bias (known as the Bias Blind Spot), including both overall knowledge of confirmation bias and the specific behaviors associated with it. Finally, we examined how different pedagogical manipulations (the game’s feedback mechanisms, goal-orientation priming through instructions, and the presence of in-game scaffolding) affected players’ confirmation bias and their bias blind spot for confirmation bias, as well as how engagement and individual differences, such as video game self-efficacy, played a role in performance and learning.

Detailed findings will be posted after the official publication of results.



MENTAT Research Team @ RETRO

There were several members of the MENTAT team under BBN; however, we’d like to call attention to the hard-working research assistants at RETRO by listing their names here.

RETRO MENTAT Research Team

Lab Directors: Drs. Clint Bowers & Jan Cannon-Bowers
Study Manager and Supervisor: Katelyn Procci
Lead Lab Supervisor: Shan Lakhmani
Lab Supervisors: Asli Soyler Akbas, Alen Chao, Brian Eddy, Jen Loglia, Skilan Ortiz, & Jenny Vogel
Graduate Research Assistants: Michael Schwartz, Dustin Sarver, Rodrigo Velezmoro, Patrick Saikas, & Ryan Yordon
Research Assistants: Brian Wardynski & Stephanie Formanek
Undergraduate Research Assistants: Yeonsil Song, Katherine Huayhua, Latasha White, Jonathan McIntosh, Kristina Thani, Kat Hancock, David Do, Christine Kreutzer, Brian Gatlin, Shannon Marlow, Lauren Byrne, Courtney Ford, Lindsey Reid, & Sean Villegas