
dc.contributor.advisor: Mueller, Stefanie
dc.contributor.author: Wang, Sean
dc.date.accessioned: 2025-04-14T14:06:38Z
dc.date.available: 2025-04-14T14:06:38Z
dc.date.issued: 2025-02
dc.date.submitted: 2025-04-03T14:06:28.112Z
dc.identifier.uri: https://hdl.handle.net/1721.1/159117
dc.description.abstract: Recent advances in 3D content creation with generative AI have made it easier to generate 3D models from text and image inputs. However, translating these digital designs into usable objects in the physical world remains an open challenge. Because these 3D models are generated to be aesthetically similar to their inputs, the resulting models tend to have the visual features the user desires but often lack the functionality required for their use cases. This thesis proposes a novel approach to generative AI in 3D modeling, shifting the focus from replicating specific objects to generating affordances. We trained models that allow users to create point clouds satisfying affordances: physical properties that describe how an object should behave in the real world. By ensuring that the generated objects have the expected affordances, we explore how existing tools can be augmented to generate 3D objects whose functionality is consistent with their appearance.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Toward Affordance-Based Generation for 3D Generative AI
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science


