| dc.contributor.advisor | Polyanskiy, Yury | |
| dc.contributor.author | Jabbour, Mark | |
| dc.date.accessioned | 2025-04-14T14:06:45Z | |
| dc.date.available | 2025-04-14T14:06:45Z | |
| dc.date.issued | 2025-02 | |
| dc.date.submitted | 2025-04-03T14:06:17.895Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/159119 | |
| dc.description.abstract | We study the ability of transformers to perform in-context learning (ICL) in the setting of empirical Bayes for the Poisson model. On the theoretical side, we demonstrate the expressivity of transformers by constructing an approximation of the Robbins estimator, the first empirical Bayes estimator for the Poisson model. On the empirical side, we show that transformers pre-trained on synthetic data can generalize to unseen priors and sequence lengths, outperforming existing methods such as Robbins, the NPMLE, and the monotone ERM estimator in both efficiency and accuracy. By studying the internal representations of the intermediate layers of these transformers, we find that the representation converges quickly and smoothly across layers. We also demonstrate that it is unlikely the transformers are implementing the Robbins or NPMLE estimators in context. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | Transformers as Empirical Bayes Estimators: The Poisson Model | |
| dc.type | Thesis | |
| dc.description.degree | M.Eng. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| mit.thesis.degree | Master | |
| thesis.degree.name | Master of Engineering in Electrical Engineering and Computer Science | |
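
For readers unfamiliar with the baseline named in the abstract, here is a minimal sketch of the classical Robbins (1956) estimator for the Poisson model; this is the textbook form, not necessarily the implementation studied in the thesis, and the function name is illustrative.

```python
from collections import Counter

def robbins_estimate(samples, x):
    """Classical Robbins empirical Bayes estimate for the Poisson model.

    Given i.i.d. counts X_i ~ Poisson(theta_i) with theta_i drawn from an
    unknown prior, the posterior mean satisfies
        E[theta | X = x] = (x + 1) * f(x + 1) / f(x),
    where f is the marginal pmf. Robbins plugs in empirical frequencies.
    """
    counts = Counter(samples)
    if counts[x] == 0:
        raise ValueError(f"no observations equal to {x}; estimate undefined")
    return (x + 1) * counts[x + 1] / counts[x]

# Example: estimate E[theta | X = 2] from a sample of counts.
sample = [0, 1, 2, 2, 3, 1, 0, 2, 4, 3]
print(robbins_estimate(sample, 2))  # (2+1) * #{X=3}/#{X=2} = 3 * 2/3 = 2.0
```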