Show simple item record

dc.contributor.author: Li, Yuxiao
dc.contributor.author: Michaud, Eric J.
dc.contributor.author: Baek, David D.
dc.contributor.author: Engels, Joshua
dc.contributor.author: Sun, Xiaoqing
dc.contributor.author: Tegmark, Max
dc.date.accessioned: 2025-05-07T20:05:53Z
dc.date.available: 2025-05-07T20:05:53Z
dc.date.issued: 2025-03-27
dc.identifier.uri: https://hdl.handle.net/1721.1/159239
dc.description.abstract: Sparse autoencoders have recently produced dictionaries of high-dimensional vectors corresponding to the universe of concepts represented by large language models. We find that this concept universe has interesting structure at three levels: (1) The “atomic” small-scale structure contains “crystals” whose faces are parallelograms or trapezoids, generalizing well-known examples such as (man:woman::king:queen). We find that the quality of such parallelograms and associated function vectors improves greatly when projecting out global distractor directions such as word length, which is efficiently performed with linear discriminant analysis. (2) The “brain” intermediate-scale structure has significant spatial modularity; for example, math and code features form a “lobe” akin to functional lobes seen in neural fMRI images. We quantify the spatial locality of these lobes with multiple metrics and find that clusters of co-occurring features, at coarse enough scale, also cluster together spatially far more than one would expect if feature geometry were random. (3) The “galaxy”-scale large-scale structure of the feature point cloud is not isotropic, but instead has a power law of eigenvalues with steepest slope in middle layers. We also quantify how the clustering entropy depends on the layer. (en_US)
dc.publisher: Multidisciplinary Digital Publishing Institute (en_US)
dc.relation.isversionof: http://dx.doi.org/10.3390/e27040344 (en_US)
dc.rights: Creative Commons Attribution (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: Multidisciplinary Digital Publishing Institute (en_US)
dc.title: The Geometry of Concepts: Sparse Autoencoder Feature Structure (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Li, Y.; Michaud, E.J.; Baek, D.D.; Engels, J.; Sun, X.; Tegmark, M. The Geometry of Concepts: Sparse Autoencoder Feature Structure. Entropy 2025, 27, 344. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Physics (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.relation.journal: Entropy (en_US)
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/JournalArticle (en_US)
eprint.status: http://purl.org/eprint/status/PeerReviewed (en_US)
dc.date.updated: 2025-04-25T13:46:39Z
dspace.date.submission: 2025-04-25T13:46:39Z
mit.journal.volume: 27 (en_US)
mit.journal.issue: 4 (en_US)
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)
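The abstract's point (1) describes projecting out global distractor directions (such as word length) with linear discriminant analysis before measuring parallelogram quality. A minimal sketch of that projection step, using random stand-in vectors and hypothetical distractor labels (this is an illustration of the general technique, not the paper's code):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Stand-ins for SAE feature vectors: 200 features in a 16-dim embedding
# (hypothetical data; real features come from a trained sparse autoencoder).
X = rng.normal(size=(200, 16))

# Hypothetical distractor labels, e.g. binned word length per feature.
y = rng.integers(0, 3, size=200)

# LDA finds the directions that best separate the distractor classes;
# these are the "global distractor directions" to remove.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
D = lda.scalings_[:, :2]  # distractor directions, shape (16, 2)

# Orthonormalize the distractor directions and project them out.
Q, _ = np.linalg.qr(D)
X_clean = X - (X @ Q) @ Q.T

# X_clean now has (numerically) zero component along the distractor axes,
# so analogy parallelograms can be evaluated without that confound.
```

Quality of a candidate parallelogram (a:b::c:d) could then be checked on `X_clean`, e.g. via the residual norm of `a - b - (c - d)`.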

