Abstract
To support users’ understanding of physical properties in 2D images, we propose Text2Texture, a web tool that converts 2D color images into textured 3D objects ready for 3D printing. This is achieved by extracting depth information using a monocular estimator, extracting local texture information using a fine-tuned Stable Diffusion model, and superimposing these macro- and micro-scale geometries to produce a composite 3D model with color, depth, and texture. Images can be uploaded directly or generated via text prompt, and we print a variety of objects generated using each approach to suggest applications in physicalizing virtual worlds, adding haptic cues to photographs, and conveying information about scale in images.
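The abstract describes superimposing macro-scale depth with micro-scale texture to form a composite heightfield. Below is a minimal, illustrative sketch of that superimposition step, assuming the depth map and texture map are already available as 2D arrays; the function name, normalization scheme, and `texture_scale` parameter are assumptions, not the paper's implementation.

```python
import numpy as np

def compose_heightmap(depth, texture, texture_scale=0.05):
    """Superimpose a micro-scale texture displacement onto a
    macro-scale depth map (hypothetical composition step)."""
    # Normalize each map independently to [0, 1]; guard against
    # flat inputs with zero range.
    d = (depth - depth.min()) / ((depth.max() - depth.min()) or 1.0)
    t = (texture - texture.min()) / ((texture.max() - texture.min()) or 1.0)
    # Macro geometry dominates; texture adds small surface relief.
    return d + texture_scale * t

# Toy example: a linear ramp (depth) plus a sinusoidal micro-texture.
depth = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 64))
texture = np.sin(np.linspace(0.0, 8 * np.pi, 64))[None, :] * np.ones((64, 64))
height = compose_heightmap(depth, texture)
```

The resulting `height` array could then be converted to a displaced mesh for printing; that meshing step is outside the scope of this sketch.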
Description
UIST Adjunct ’25, Busan, Republic of Korea
Publisher
ACM | The 38th Annual ACM Symposium on User Interface Software and Technology
Citation
Joshua Yin, Faraz Faruqi, and Martin Nisser. 2025. Text2Texture: Generating 3D-Printed Models with Textures based on Text and Image Prompts. In Adjunct Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology (UIST Adjunct '25). Association for Computing Machinery, New York, NY, USA, Article 169, 1–3.
Version: Final published version