Now showing items 1-20 of 26

    • Active boundary annotation using random MAP perturbations 

      Maji, Subhransu; Hazan, Tamir; Jaakkola, Tommi S. (PMLR, 2014-04)
      We address the problem of efficiently annotating labels of objects when they are structured. Often the distribution over labels can be described using a joint potential function over the labels for which sampling is provably ...
    • Approximate inference in additive factorial HMMs with application to energy disaggregation 

      Kolter, Jeremy Z.; Jaakkola, Tommi S. (Proceedings of Machine Learning Research, 2012-04)
      This paper considers additive factorial hidden Markov models, an extension to HMMs where the state factors into multiple independent chains, and the output is an additive function of all the hidden states. Although such ...
    • Collaborative future event recommendation 

      Minkov, Einat; Charrow, Ben; Ledlie, Jonathan; Teller, Seth; Jaakkola, Tommi S. (Association for Computing Machinery, 2010-10)
      We demonstrate a method for collaborative ranking of future events. Previous work on recommender systems typically relies on feedback on a particular item, such as a movie, and generalizes this to other items or other ...
    • Controlling privacy in recommender systems 

      Xin, Yu; Jaakkola, Tommi S. (Neural Information Processing Systems, 2014)
      Recommender systems involve an inherent trade-off between accuracy of recommendations and the extent to which users are willing to release information about their preferences. In this paper, we explore a two-tiered notion ...
    • Convolutional Embedding of Attributed Molecular Graphs for Physical Property Prediction 

      Coley, Connor Wilson; Barzilay, Regina; Green Jr, William H.; Jaakkola, Tommi S.; Jensen, Klavs F. (American Chemical Society (ACS), 2017-07)
      The task of learning an expressive molecular representation is central to developing quantitative structure–activity and property relationships. Traditional approaches rely on group additivity rules, empirical measurements ...
    • Dual decomposition for parsing with non-projective head automata 

      Koo, Terry; Rush, Alexander Matthew; Collins, Michael; Jaakkola, Tommi S.; Sontag, David Alexander (Association for Computational Linguistics, 2010-10)
      This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. ...
    • From random walks to distances on unweighted graphs 

      Hashimoto, Tatsunori Benjamin; Jaakkola, Tommi S.; Sun, Yi (Neural Information Processing Systems Foundation, Inc., 2015-12)
      Large unweighted directed graphs are commonly used to capture relations between entities. A fundamental problem in the analysis of such networks is to properly define the similarity or dissimilarity between any two vertices. ...
    • Greed Is Good If Randomized: New Inference for Dependency Parsing 

      Zhang, Yuan; Lei, Tao; Barzilay, Regina; Jaakkola, Tommi S. (2014-10)
      Dependency parsing with high-order features results in a provably hard decoding problem. A lot of work has gone into developing powerful optimization methods for solving these combinatorial problems. In contrast, we explore, ...
    • Inverse Covariance Estimation for High-Dimensional Data in Linear Time and Space: Spectral Methods for Riccati and Sparse Models 

      Honorio, Jean; Jaakkola, Tommi S. (Association for Uncertainty in Artificial Intelligence (AUAI), 2013-07)
      We propose maximum likelihood estimation for learning Gaussian graphical models with a Gaussian (ℓ₂²) prior on the parameters. This is in contrast to the commonly used Laplace (ℓ₁) prior for encouraging ...
    • Learning Bayesian network structure using LP relaxations 

      Jaakkola, Tommi S.; Sontag, David Alexander; Globerson, Amir; Meila, Marina (Society for Artificial Intelligence and Statistics, 2010-05)
      We propose to solve the combinatorial problem of finding the highest scoring Bayesian network structure from data. This structure learning problem can be viewed as an inference problem where the variables specify ...
    • Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions 

      Hazan, Tamir; Maji, Subhransu; Keshet, Joseph; Jaakkola, Tommi S. (Neural Information Processing Systems, 2013)
      In this work we develop efficient methods for learning random MAP predictors for structured label problems. In particular, we construct posterior distributions over perturbations that can be adjusted via stochastic gradient ...
    • Learning efficiently with approximate inference via dual losses 

      Meshi, Ofer; Sontag, David Alexander; Jaakkola, Tommi S.; Globerson, Amir (International Machine Learning Society, 2010-01)
      Many structured prediction tasks involve complex models where inference is computationally intractable, but where it can be well approximated using a linear programming relaxation. Previous approaches for learning for ...
    • Lineage-based identification of cellular states and expression programs 

      Hashimoto, Tatsunori Benjamin; Jaakkola, Tommi S.; Sherwood, Richard; Mazzoni, Esteban O.; Wichterle, Hynek; et al. (Oxford University Press, 2012-01)
      We present a method, LineageProgram, that uses the developmental lineage relationship of observed gene expression measurements to improve the learning of developmentally relevant cellular states and expression programs. ...
    • Low-Rank Tensors for Scoring Dependency Structures 

      Lei, Tao; Zhang, Yuan; Barzilay, Regina; Jaakkola, Tommi S. (Association for Computational Linguistics, 2014-06)
      Accurate scoring of syntactic structures such as head-modifier arcs in dependency parsing typically requires rich, high-dimensional feature representations. A small subset of such features is often selected manually. This ...
    • Metric recovery from directed unweighted graphs 

      Hashimoto, Tatsunori Benjamin; Sun, Yi; Jaakkola, Tommi S. (PMLR, 2015-05)
      We analyze directed, unweighted graphs obtained from points xᵢ ∈ ℝᵈ by connecting vertex i to j iff |xᵢ − xⱼ| < ε(xᵢ). Examples of such graphs include k-nearest neighbor ...
    • More data means less inference: A pseudo-max approach to structured learning 

      Sontag, David; Meshi, Ofer; Jaakkola, Tommi S.; Globerson, Amir (Neural Information Processing Systems Foundation, 2010-12)
      The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference in this setting are intractable. Here we show that it is ...
    • On dual decomposition and linear programming relaxations for natural language processing 

      Rush, Alexander Matthew; Sontag, David Alexander; Collins, Michael; Jaakkola, Tommi S. (Association for Computational Linguistics, 2010-10)
      This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for sub-problems, together ...
    • On measure concentration of random maximum a-posteriori perturbations 

      Orabona, Francesco; Hazan, Tamir; Sarwate, Anand D.; Jaakkola, Tommi S. (Association for Computing Machinery (ACM), 2014)
      The maximum a-posteriori (MAP) perturbation framework has emerged as a useful approach for inference and learning in high dimensional complex models. By maximizing a randomly perturbed potential function, MAP perturbations ...
    • On sampling from the Gibbs distribution with random maximum a-posteriori perturbations 

      Hazan, Tamir; Maji, Subhransu; Jaakkola, Tommi S. (Neural Information Processing Systems, 2013)
      In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions. Specifically, we provide means for drawing either approximate or unbiased samples from Gibbs distributions by introducing ...
    • Principal differences analysis: Interpretable characterization of differences between distributions 

      Mueller, Jonas Weylin; Jaakkola, Tommi S. (Neural Information Processing Systems Foundation, Inc., 2015-12)
      We introduce principal differences analysis (PDA) for analyzing differences between high-dimensional distributions. The method operates by finding the projection that maximizes the Wasserstein divergence between the resulting ...