Transformers as Empirical Bayes Estimators: The Poisson Model
Author(s)
Jabbour, Mark
Advisor
Polyanskiy, Yury
Abstract
We study the ability of transformers to perform In-Context Learning (ICL) in the setting of empirical Bayes for the Poisson model. On the theoretical side, we demonstrate the expressibility of transformers by formulating a way to approximate the Robbins estimator, the first empirical Bayes estimator for the Poisson model. On the empirical side, we show that transformers pre-trained on synthetic data can generalize to unseen priors and sequence lengths, outperforming existing methods such as the Robbins estimator, the NPMLE, and the ERM-monotone estimator in both efficiency and accuracy. By studying the intermediate-layer representations of these transformers, we find that the representations converge quickly and smoothly across layers. We also present evidence that these transformers are unlikely to be implementing the Robbins or NPMLE estimators in context.
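For context, the Robbins (1956) estimator mentioned above estimates the posterior mean under an unknown prior as \(\hat{\theta}(x) = (x+1)\, N(x+1)/N(x)\), where \(N(k)\) counts the observations equal to \(k\). Below is a minimal sketch of this classical estimator in Python; the Gamma prior in the usage example and all variable names are illustrative assumptions, not taken from the thesis.

import numpy as np

def robbins_estimate(samples: np.ndarray, x: int) -> float:
    """Robbins estimate of E[theta | X = x]: (x + 1) * N(x + 1) / N(x)."""
    n_x = np.sum(samples == x)
    n_x_plus_1 = np.sum(samples == x + 1)
    if n_x == 0:
        return 0.0  # undefined when no sample equals x; conventions vary
    return (x + 1) * n_x_plus_1 / n_x

# Usage: thetas drawn from a (to the estimator, unknown) Gamma prior,
# observations drawn Poisson(theta). The estimate approximates E[theta | X = 3].
rng = np.random.default_rng(0)
theta = rng.gamma(shape=2.0, scale=1.5, size=10_000)
X = rng.poisson(theta)
print(robbins_estimate(X, 3))

Note that the estimator uses only the empirical counts of the observed values, never the prior itself, which is what makes it an empirical Bayes method.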
Date issued
2025-02
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology