DeepSpace: Mood-Based Image Texture Generation for Virtual Reality from Music
Author(s)
Sra, Misha; Vijayaraghavan, Prashanth; Rudovic, Ognjen; Maes, Pattie; Roy, Deb
Terms of use
Publisher Policy: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
© 2017 IEEE. Affective virtual spaces are of interest for many VR applications in areas of wellbeing, art, education, and entertainment. Creating content for virtual environments is a laborious task involving multiple skills such as 3D modeling, texturing, animation, lighting, and programming. One way to facilitate content creation is to automate sub-processes like the assignment of textures and materials within virtual environments. To this end, we introduce the DeepSpace approach, which automatically creates and applies image textures to objects in procedurally created 3D scenes. The main novelty of DeepSpace is that it uses music to automatically create kaleidoscopic textures for virtual environments designed to elicit emotional responses in users. Specifically, DeepSpace exploits the modeling power of deep neural networks, which have shown strong performance in image generation tasks, to achieve mood-based image generation. Our study results indicate that the virtual environments created by DeepSpace elicit positive emotions and achieve high presence scores.
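For illustration only, the sketch below mirrors the pipeline the abstract describes (music to mood, mood to generated image, image to kaleidoscopic texture) in plain Python. It is not the authors' code: the mood set, the color palette, and the functions classify_mood, generate_base_image, and kaleidoscope are all hypothetical stand-ins for the deep-network components the paper uses.

# Hedged sketch of a music-to-texture pipeline; all names and values are assumptions.
import numpy as np
from PIL import Image

MOODS = ["happy", "sad", "calm", "tense"]        # hypothetical mood categories
PALETTE = {                                      # hypothetical mood-to-RGB tints
    "happy": (255, 200, 60),
    "sad": (60, 90, 160),
    "calm": (90, 180, 150),
    "tense": (180, 50, 50),
}

def classify_mood(audio_features: np.ndarray) -> str:
    """Stand-in for the mood classifier: pool audio features and pick the top mood."""
    scores = audio_features.reshape(len(MOODS), -1).mean(axis=1)
    return MOODS[int(np.argmax(scores))]

def generate_base_image(mood: str, size: int = 256, seed: int = 0) -> np.ndarray:
    """Stand-in for the generative network: a mood-tinted random field."""
    rng = np.random.default_rng(seed)
    noise = rng.random((size, size, 1))
    tint = np.array(PALETTE[mood], dtype=np.float32) / 255.0
    return (noise * tint * 255).astype(np.uint8)

def kaleidoscope(img: np.ndarray) -> np.ndarray:
    """Mirror one quadrant into a 4-fold symmetric, tileable texture."""
    top = np.concatenate([img, img[:, ::-1]], axis=1)
    return np.concatenate([top, top[::-1]], axis=0)

if __name__ == "__main__":
    fake_features = np.random.default_rng(1).random(len(MOODS) * 16)  # placeholder audio features
    mood = classify_mood(fake_features)
    texture = kaleidoscope(generate_base_image(mood))
    Image.fromarray(texture).save(f"texture_{mood}.png")  # image ready to apply as a material in a 3D scene

In the actual system the two stand-in functions would be replaced by the learned models; the mirroring step simply shows one simple way a kaleidoscopic, tileable texture can be produced from a single generated image.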
Date issued
2017-07
Department
Massachusetts Institute of Technology. Media Laboratory
Publisher
IEEE
Citation
Sra, Misha, Vijayaraghavan, Prashanth, Rudovic, Ognjen, Maes, Pattie and Roy, Deb. 2017. "DeepSpace: Mood-Based Image Texture Generation for Virtual Reality from Music."
Version: Final published version