Text2Texture: Generating 3D-Printed Models with Textures based on Text and Image Prompts
Author(s)
Yin, Joshua; Faruqi, Faraz; Nisser, Martin
Terms of use
Publisher Policy: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
To support users’ understanding of physical properties in 2D images, we propose Text2Texture, a web tool that converts 2D color images into textured 3D objects ready for 3D printing. This is achieved by extracting depth information using a monocular estimator, extracting local texture information using a fine-tuned Stable Diffusion model, and superimposing these macro- and micro-scale geometries to produce a composite 3D model with color, depth, and texture. Images can be uploaded directly or generated via text prompt, and we print a variety of objects generated using each approach to suggest applications in physicalizing virtual worlds, adding haptic cues to photographs, and conveying information about scale in images.
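The pipeline the abstract describes ends by superimposing macro-scale depth and micro-scale texture into one composite geometry. The following is a minimal Python sketch of that superimposition step only, assuming the depth and texture maps already exist as 2D arrays; the function names, millimeter scale factors, and bare-bones ASCII STL writer are illustrative assumptions, not the authors' implementation, and the output is an open heightfield surface rather than a watertight, print-ready solid.

import numpy as np

def composite_heightfield(depth, texture, depth_scale_mm=10.0, texture_scale_mm=0.5):
    """Superimpose normalized macro depth and micro texture into one heightfield (mm)."""
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)        # macro relief in [0, 1]
    t = (texture - texture.min()) / (np.ptp(texture) + 1e-8)  # fine texture in [0, 1]
    return depth_scale_mm * d + texture_scale_mm * t

def heightfield_to_stl(height, path, pixel_mm=0.4):
    """Triangulate the heightfield and write it as ASCII STL (surface only)."""
    rows, cols = height.shape
    with open(path, "w") as f:
        f.write("solid text2texture\n")
        for i in range(rows - 1):
            for j in range(cols - 1):
                p = lambda r, c: (c * pixel_mm, r * pixel_mm, float(height[r, c]))
                # Two triangles per heightfield cell; slicers recompute real normals.
                for tri in ((p(i, j), p(i + 1, j), p(i, j + 1)),
                            (p(i + 1, j), p(i + 1, j + 1), p(i, j + 1))):
                    f.write("facet normal 0 0 1\n  outer loop\n")
                    for x, y, z in tri:
                        f.write(f"    vertex {x} {y} {z}\n")
                    f.write("  endloop\nendfacet\n")
        f.write("endsolid text2texture\n")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth = rng.random((64, 64))    # stand-in for a monocular depth estimate
    texture = rng.random((64, 64))  # stand-in for a diffusion-derived texture map
    heightfield_to_stl(composite_heightfield(depth, texture), "composite.stl")

In the tool described above, the stand-in arrays would instead come from the monocular depth estimator and the fine-tuned diffusion model, with color information carried separately to the printer.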
Description
UIST Adjunct ’25, Busan, Republic of Korea
Date issued
2025-09-27
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Publisher
ACM | The 38th Annual ACM Symposium on User Interface Software and Technology
Citation
Joshua Yin, Faraz Faruqi, and Martin Nisser. 2025. Text2Texture: Generating 3D-Printed Models with Textures based on Text and Image Prompts. In Adjunct Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology (UIST Adjunct '25). Association for Computing Machinery, New York, NY, USA, Article 169, 1–3.
Version: Final published version
ISBN
979-8-4007-2036-9