One major challenge in exposure therapy is tailoring recreated environments to individual patients, a process that is often time-intensive. Although virtual reality (VR)-based approaches have been explored in recent years, patient-specific customization remains difficult owing to the effort required to design bespoke environments.
In a new study published in ACM Transactions on Computing for Healthcare, researchers from the University of Tsukuba developed a system that automatically generates auditory VR experiences of traumatic sounds from natural language input by leveraging large language models and acoustic datasets. Users simply enter a theme as text, and the system produces suitable sound materials and scenarios.
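To make the described pipeline concrete, the sketch below shows one way such a text-to-auditory-scene system might be wired together: a language model turns a free-text theme into a timed scenario of sound events, which are then matched against a sound library to build a playback schedule. All names here (generate_scenario, SOUND_LIBRARY, the file paths) are illustrative assumptions, not the authors' implementation or datasets, and the LLM call is replaced by a canned stand-in.

```python
# Illustrative sketch only; function names, prompt logic, and the sound
# catalogue are assumptions, not the system described in the study.
from dataclasses import dataclass

@dataclass
class SoundEvent:
    label: str        # e.g. "car horn"
    start_s: float    # when the sound begins in the scene
    duration_s: float

def generate_scenario(theme: str) -> list[SoundEvent]:
    """Stand-in for a large-language-model call that turns a free-text
    theme into a timed list of sound events (the scenario)."""
    # A real system would prompt an LLM here; a canned example is returned instead.
    if "traffic" in theme.lower():
        return [SoundEvent("engine idle", 0.0, 10.0),
                SoundEvent("car horn", 4.0, 1.5),
                SoundEvent("tyre screech", 6.0, 2.0)]
    return [SoundEvent("ambient room tone", 0.0, 10.0)]

# Toy stand-in for an acoustic dataset indexed by keyword.
SOUND_LIBRARY = {
    "engine idle": "sounds/engine_idle.wav",
    "car horn": "sounds/car_horn.wav",
    "tyre screech": "sounds/tyre_screech.wav",
    "ambient room tone": "sounds/room_tone.wav",
}

def build_auditory_scene(theme: str) -> list[tuple[str, float, float]]:
    """Map each scenario event to a sound file and return a playback schedule."""
    schedule = []
    for event in generate_scenario(theme):
        path = SOUND_LIBRARY.get(event.label)
        if path is not None:
            schedule.append((path, event.start_s, event.duration_s))
    return schedule

if __name__ == "__main__":
    for path, start, dur in build_auditory_scene("busy traffic near an intersection"):
        print(f"play {path} at t={start:.1f}s for {dur:.1f}s")
```

In this toy version the scenario and sound matching are hard-coded; in the system reported by the researchers, the scenario generation and selection of sound materials are driven by large language models and acoustic datasets rather than fixed rules.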