Show simple item record

dc.contributor.advisor      Polyanskiy, Yury
dc.contributor.author       Jabbour, Mark
dc.date.accessioned         2025-04-14T14:06:45Z
dc.date.available           2025-04-14T14:06:45Z
dc.date.issued              2025-02
dc.date.submitted           2025-04-03T14:06:17.895Z
dc.identifier.uri           https://hdl.handle.net/1721.1/159119
dc.description.abstract     We study the ability of transformers to perform in-context learning (ICL) in the setting of empirical Bayes for the Poisson model. On the theoretical side, we demonstrate the expressivity of transformers by constructing an approximation to the Robbins estimator, the first empirical Bayes estimator for the Poisson model. On the empirical side, we show that transformers pre-trained on synthetic data generalize to unseen priors and sequence lengths, outperforming existing methods such as the Robbins, NPMLE, and monotone ERM estimators in both efficiency and accuracy. By studying the internal representations at intermediate layers of these transformers, we find that the representations converge quickly and smoothly across layers. We also show that it is unlikely the transformers are implementing the Robbins or NPMLE estimators in context.
dc.publisher                Massachusetts Institute of Technology
dc.rights                   In Copyright - Educational Use Permitted
dc.rights                   Copyright retained by author(s)
dc.rights.uri               https://rightsstatements.org/page/InC-EDU/1.0/
dc.title                    Transformers as Empirical Bayes Estimators: The Poisson Model
dc.type                     Thesis
dc.description.degree       M.Eng.
dc.contributor.department   Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree           Master
thesis.degree.name          Master of Engineering in Electrical Engineering and Computer Science
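
For reference, the Robbins estimator named in the abstract has a simple closed form: for counts X ~ Poisson(theta) with theta drawn from an unknown prior, the posterior mean E[theta | X = x] = (x + 1) f(x + 1) / f(x) is estimated by plugging in empirical frequencies. The sketch below is an illustrative implementation, not code from the thesis; the gamma prior, sample size, and variable names are assumptions chosen for the example.

```python
import numpy as np

def robbins_estimator(samples: np.ndarray) -> np.ndarray:
    """Classical Robbins estimator for the Poisson empirical Bayes problem.

    Replaces the marginal pmf f in (x + 1) * f(x + 1) / f(x) with the
    empirical frequencies N(x) of the observed counts.
    """
    samples = np.asarray(samples, dtype=int)
    counts = np.bincount(samples, minlength=samples.max() + 2)
    n_x = counts[samples]             # N(x): occurrences of each observed value
    n_x_plus_1 = counts[samples + 1]  # N(x + 1)
    return (samples + 1) * n_x_plus_1 / n_x

# Illustrative usage: one Poisson observation per latent theta_i.
rng = np.random.default_rng(0)
thetas = rng.gamma(shape=2.0, scale=1.5, size=10_000)  # assumed prior for the demo
xs = rng.poisson(thetas)
theta_hat = robbins_estimator(xs)
```

Per the abstract, the pre-trained transformers are compared against this estimator (along with NPMLE and monotone ERM baselines) on synthetically generated Poisson data of this kind.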

