Mobile brain imaging with functional near-infrared spectroscopy (fNIRS) measures mental load in VR and, combined with machine learning, captures learning outcomes in order to optimize learning processes.
Autonomous AI systems, such as robots, smart home devices, or self-driving cars, offer support in a variety of situations. Through machine learning, these systems can increasingly operate independently. It is crucial that they adapt quickly, respond promptly, and function flawlessly. To achieve this, Reinforcement Learning (RL) is often used, where correct behaviour is rewarded and errors are penalized. However, conventional RL algorithms are often costly, time-consuming, and data-intensive, requiring extensive feedback for accurate assessment and learning. Moreover, the training of AI systems is usually carried out in isolation, without active human involvement. When human feedback is obtained, it often occurs in a cumbersome manner through speech or gesture interaction, making the training feel unnatural and often requiring frequent interruptions.
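The reward-and-penalty principle described above can be sketched with a minimal tabular Q-learning example. The toy environment (a four-state corridor with a goal at the right end) and all reward values are illustrative assumptions, not part of the project itself:

```python
import random

# Minimal tabular Q-learning on a toy 1-D corridor: the agent starts at
# position 0 and must reach position 3. Each step is penalized (-1) and
# reaching the goal is rewarded (+10), mirroring the reward/penalty idea.
N_STATES, GOAL = 4, 3
ACTIONS = [-1, +1]                  # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 10.0 if nxt == GOAL else -1.0   # reward success, penalize detours
    return nxt, reward, nxt == GOAL

random.seed(0)
for _ in range(200):                # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # TD update
        s = nxt

# The greedy policy learned from rewards alone: one action per non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

Even this tiny example needs hundreds of trial-and-error episodes to converge, which illustrates why feedback-hungry RL becomes expensive when each feedback signal must come from a human.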
Traditional industries in Germany are undergoing revolutionary changes due to automation and digital innovations. The increasing complexity from the continuous integration of artificial intelligence is placing higher demands on employee competencies. These developments necessitate continual knowledge acquisition to maintain employability. Efficient further training and adaptation to new technologies are therefore crucial for businesses. There is a clear need to make workplace training, retraining, and introductions to new technologies more efficient. Lifelong learning is key in this context and must take into account the individual abilities and needs of learners.
Within the project, we successfully conducted two empirical neuroscience studies to test the induction of different difficulty levels, the associated mental load, and learning success in a VR environment. The first foundational study utilized the visual-spatial n-back paradigm to assess working memory load at two levels, capturing brain activity with a mobile fNIRS system.
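To illustrate how an n-back paradigm induces working memory load, the sketch below generates a 2-back stimulus sequence over grid positions, where a fraction of trials repeat the position shown n trials earlier. Grid size, trial count, and target rate are illustrative assumptions, not the study's actual parameters:

```python
import random

def make_nback_sequence(n_trials=30, n_back=2, grid=9, target_rate=0.3, seed=42):
    """Generate stimulus positions and the indices of n-back target trials."""
    rng = random.Random(seed)
    seq, targets = [], []
    for i in range(n_trials):
        if i >= n_back and rng.random() < target_rate:
            pos = seq[i - n_back]            # target: repeat position from n trials back
            targets.append(i)
        else:
            # non-target: avoid an accidental n-back match
            choices = [p for p in range(grid)
                       if i < n_back or p != seq[i - n_back]]
            pos = rng.choice(choices)
        seq.append(pos)
    return seq, targets

seq, targets = make_nback_sequence()
```

Raising n (e.g. from 1-back to 2-back) is the standard way to scale working memory load between conditions, since participants must hold and update more items simultaneously.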
In a subsequent longitudinal study, a realistic VR learning environment was used to train the assembly of an electrical cabinet, where task difficulty was scaled by the number of components and time limits. Participants completed three sessions over several days, allowing us to analyze training effects and the effectiveness of difficulty modulation via mobile fNIRS.
Finally, a neuroadaptive system was developed that analyzes fNIRS data in real-time to decode working memory load and adjust difficulty levels. This system combines machine learning with a synchronized simulation environment comprising a VR learning environment, fNIRS software, and a Python-based interface that integrates data streams, analyzes fNIRS signals in real-time, and adjusts learning content accordingly.
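The closed loop described above can be sketched as follows: a sliding window over an incoming fNIRS-derived load signal is decoded, and the task difficulty is stepped up or down accordingly. The window length, thresholds, difficulty range, and the simple threshold decoder are illustrative assumptions; the actual system uses a trained machine-learning decoder on real fNIRS streams:

```python
from collections import deque
from statistics import mean

WINDOW, HIGH, LOW = 10, 0.7, 0.3   # samples per window, load thresholds (assumed)

def decode_load(window):
    """Map a window of load samples to 'high', 'low', or 'ok' mental load."""
    m = mean(window)
    return "high" if m > HIGH else "low" if m < LOW else "ok"

def adapt(difficulty, state):
    """Lower difficulty under overload, raise it when load headroom remains."""
    if state == "high":
        return max(1, difficulty - 1)
    if state == "low":
        return min(5, difficulty + 1)
    return difficulty

def run(samples, difficulty=3):
    """Feed samples through the sliding window and return final difficulty."""
    buf = deque(maxlen=WINDOW)
    for s in samples:
        buf.append(s)
        if len(buf) == WINDOW:                       # window full: decode and adapt
            difficulty = adapt(difficulty, decode_load(buf))
    return difficulty
```

For example, a sustained stretch of high-load samples drives the difficulty down toward its floor, while sustained low load drives it up, keeping the learner near an appropriate level of challenge.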
The results from these experimental and technological developments highlight the potential of VR and neurotechnology in creating an adaptive learning environment that responds to users' cognitive states and optimizes learning. Future studies will further evaluate the effectiveness and user acceptance of these approaches.