DSpace@MIT

Defending non-Bayesian learning against adversarial attacks

Author(s)
Su, Lili; Vaidya, Nitin H
Download: 446_2018_336_ReferencePDF.pdf (232.4 KB)
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state out of m alternatives. We focus on the impact of adversarial agents on the performance of consensus-based non-Bayesian learning, in which non-faulty agents combine local learning updates with consensus primitives. In particular, we consider the scenario where an unknown subset of agents suffer Byzantine faults; agents suffering Byzantine faults may behave arbitrarily. We propose two learning rules. In both rules, each non-faulty agent keeps a local variable that is a stochastic vector over the m possible states. Entries of this stochastic vector can be viewed as the scores that agent assigns to the corresponding states. We say a non-faulty agent learns the underlying truth if, asymptotically, it assigns score one to the true state and zero to every wrong state. In our first update rule, each agent updates its local score vector as (up to normalization) the product of (1) the likelihood of its cumulative private signals and (2) the weighted geometric average of the score vectors of its incoming neighbors and itself. Under reasonable assumptions on the underlying network structure and on global identifiability of the network, we show that all non-faulty agents asymptotically learn the true state almost surely. We also propose a modified variant of our first learning rule whose complexity per iteration per agent is $O(m^2 n \log n)$, where n is the number of agents in the network. In addition, we show that this modified learning rule works under a less restrictive network identifiability condition.
Date issued
2018-06-20
URI
https://hdl.handle.net/1721.1/131300
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Publisher
Springer Berlin Heidelberg

Collections
  • MIT Open Access Articles

Content created by the MIT Libraries, CC BY-NC unless otherwise noted. Notify us about copyright concerns.