IDVS (Immersive Design and Validation Space) is the core of the XR laboratory set up at the Sasakawa International Center for Space Architecture (SICSA), University of Houston. The intent of this laboratory is to establish a state-of-the-art XR facility for developing and testing XR-based methodologies and to provide students, faculty, and researchers with a platform for applying immersive technologies to design for space.
Consumer-grade VR hardware is becoming progressively smaller and cheaper. In 2021, the AR/VR hardware market reached the US $30 billion mark. It has also become more advanced: technologies such as haptic hardware, eye tracking, and micro-LED screens are already implemented in the latest generation of devices, although only a few years ago they were considered to be in early development stages. To take full advantage of the latest immersive technologies for Human-In-The-Loop (HITL) testing, an advanced XR test room was designed. The XR environment, named the “XR cage,” is 3.5 m long, 3.5 m deep, and 2.5 m high, and is built with extruded aluminum profiles. The main structure supports the lidar sensors used for object tracking inside the XR cage, while cameras and lighting fixtures are mounted on top of the testing-environment perimeter. The room allows multi-user interaction with different headsets and digital overlays using a green screen. Inside the XR cage, every object is tracked in two ways: with lidars and with VR sensors.
This double tracking system is designed to be future-proof. Current technology development is leaning toward a tagless tracking environment based on data interpolated from lidars and cameras. It is anticipated that, at a certain point of development, active tracking can be eliminated while keeping the testing infrastructure intact. The main structure of the XR cage can also be expanded thanks to wheels and magnets, which allow future extension of the testing space with minimal effort.
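As an illustration of the redundancy the double tracking provides, the two position estimates can be blended in software. The function below is a minimal sketch; the weighting scheme and the sensor interface are illustrative assumptions, not the lab's actual pipeline:

```python
def fuse_positions(lidar_xyz, vr_xyz, lidar_weight=0.5):
    """Blend a lidar position estimate with a VR-tracker estimate.

    A simple confidence-weighted average; a production system would
    typically use a Kalman filter instead, but the redundancy principle
    is the same: either source alone can still localize the object if
    the other is removed (e.g., once tagless tracking matures).
    """
    w = lidar_weight
    return tuple(w * l + (1.0 - w) * v for l, v in zip(lidar_xyz, vr_xyz))

# Two slightly disagreeing estimates of the same tracked object, in meters:
print(fuse_positions((2.0, 4.0, 0.0), (4.0, 2.0, 1.0)))  # (3.0, 3.0, 0.5)
```

Shifting `lidar_weight` toward 1.0 expresses growing trust in the lidar pipeline, which matches the anticipated migration away from active VR tags.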
IDVS is both a physical and a digital space. While the XR cage provides the physical interaction infrastructure, the XR Framework describes the methodology for using that infrastructure for space hardware design and testing.
The first step in building the cage was to assemble a frame structure that provides mounting support for input and output devices. The frame satisfies several key requirements for using VR and MR in design-environment testing. It can be disassembled if needed, and expanded or shrunk to match the virtual twin of the tested object. The frame is equipped with wheels that allow its elements to be moved to any desired location when testing procedures require it. The frame consists of four corner structures that are independent from each other, can stand alone, and support all required technology assets. In some cases, the corner structures can be separated and operate as independent structures at different locations.
As the second step, a green screen was fitted onto the cage to provide the capability of overlaying digital and physical objects from any external camera point of view.
The project aimed to fulfill the following research objectives:
• Identification of the most efficient XR methods for testing and validating high-level design concepts
• Identification of possible design interventions and development of evaluation criteria (or Figures Of Merit)
• Refinement of system and design requirements and identification of the most important design considerations
• Creation of a framework to grow new XR capabilities for the space sector
• Definition of a standard evaluation system to validate and implement design assumptions using XR testing
• Definition of a multi-mission procedure-testing framework based on highly immersive XR technologies
XR is a fundamental tool for the space sector, but like many new technologies, it requires time for its full range of possibilities to be understood. Even though NASA's first experiments with XR started in the late 1980s, its use today is largely limited to empirical testing, with no standardized protocols for obtaining measurable data that can impact a design process. As stated in Section IV, many private space companies and space agencies are researching effective use of this technology for human-factors evaluation in space applications. The main outcome of the XR framework presented here is a set of scientifically grounded protocols for XR applications, described with qualitative and quantitative evaluations that can impact the design process. The XR framework was designed to test space hardware performance along with human-factors accommodation in design. Therefore, the evaluation phase was broken down into two major topics: human performance evaluation and hardware performance evaluation. The collected data points are sequentially interpolated to obtain a final, inclusive evaluation of the process. Additionally, the framework is flexible enough to allow reading the data independently to obtain more precise outcomes from the process.
A. Hardware performance evaluation
Hardware performance with human-factors evaluation is defined as the measurement of the hardware's capability to perform the tasks it was designed for when operated by a user. This is a fundamental evaluation metric because it is directly connected to the design features of the hardware. This evaluation phase allows identifying the specific aspects of a hardware design that are responsible for performance deficiencies. In the XR framework, hardware performance is evaluated with the mSUS (modified System Usability Scale), an adaptation of the widely used SUS. The mSUS is a qualitative evaluation method based on a user-administered survey. The survey is organized into 10 questions answered on a 5-level scale from Strongly Disagree to Strongly Agree. The 10 questions are administered after the test scenario is performed by the user and are designed to cover the system's specific usability aspects.
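For reference, the scoring rule of the standard SUS that the mSUS adapts can be sketched as follows. The specific modifications introduced by the mSUS are not detailed here, so the sketch assumes the unmodified SUS arithmetic: odd-numbered items are positively worded, even-numbered items negatively worded, and the summed contributions are scaled to a 0–100 range:

```python
def sus_score(responses):
    """Compute a standard SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded:
    contribution = response - 1. Even-numbered items are negatively
    worded: contribution = 5 - response. The summed contributions
    (0-40) are scaled by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Strongly Agree on every positive item, Strongly Disagree on every
# negative item, gives the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```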
B. Human performance evaluation
Human performance evaluation aims to assess the physiological and psychological states of the user while using the hardware to perform a specific task. It is important for two reasons: it identifies specific design features that pose a challenge or risk to accomplishing an objective, and it acts as a control for the hardware performance evaluation, since a deep discrepancy between the results can usually be traced back to a faulty data collection phase or to an evaluation impairment on the user's side. Human performance data are collected in two ways: through the TLX (Task Load Index) and through biofeedback readings. The TLX is a NASA-designed survey administered to users for self-evaluation of their own performance while using a target hardware to perform a specific task. The Task Load Index was created at NASA Ames Research Center in the 1980s by Sandra Hart. By incorporating a multi-dimensional rating procedure, NASA TLX derives an overall workload score based on a weighted average of ratings on six subscales:
• Mental Demand
• Physical Demand
• Temporal Demand
• Performance
• Effort
• Frustration
The NASA TLX has been used successfully in a wide range of applications since its introduction, and it is still used today in Human-Machine Interaction testing. The TLX survey assesses workload on six 7-point scales; increments of high, medium, and low estimates for each point result in 21 gradations on each scale. The survey is composed of six questions, one per subscale.
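The weighted-average computation can be sketched as follows, assuming the standard TLX procedure in which each subscale's weight is the number of times it was selected across the 15 pairwise comparisons, and each rating is mapped to a 0–100 range:

```python
SUBSCALES = ["Mental Demand", "Physical Demand", "Temporal Demand",
             "Performance", "Effort", "Frustration"]

def tlx_workload(ratings, weights):
    """Compute the overall NASA TLX workload score (0-100).

    ratings: subscale -> rating on a 0-100 range (21 gradations).
    weights: subscale -> number of times the subscale was chosen in the
             15 pairwise comparisons; the weights must sum to 15.
    Overall workload is the weighted average: sum(rating * weight) / 15.
    """
    if sum(weights[s] for s in SUBSCALES) != 15:
        raise ValueError("pairwise-comparison weights must sum to 15")
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

# Illustrative values for a single test run (not real study data):
ratings = {"Mental Demand": 70, "Physical Demand": 30, "Temporal Demand": 55,
           "Performance": 40, "Effort": 60, "Frustration": 25}
weights = {"Mental Demand": 4, "Physical Demand": 1, "Temporal Demand": 3,
           "Performance": 2, "Effort": 4, "Frustration": 1}
print(tlx_workload(ratings, weights))
```

Because the weights come from the user's own pairwise comparisons, the same set of ratings can yield different overall workload scores for users who prioritize the subscales differently.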