Center for Brains, Minds & Machines
https://hdl.handle.net/1721.1/88529
2020-04-01T09:41:46Z
https://hdl.handle.net/1721.1/124356
Can we Contain Covid-19 without Locking-down the Economy?
Shalev-Shwartz, Shai; Shashua, Amnon
We present an analysis of a risk-based selective quarantine model in which the population is divided into low-risk and high-risk groups. The high-risk group is quarantined until the low-risk group achieves herd immunity. We tackle the question of whether this model is safe, in the sense that the health system can contain the number of low-risk people who require severe ICU care (such as life-support systems).
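The herd-immunity mechanism the abstract relies on can be illustrated with the textbook homogeneous-mixing threshold 1 - 1/R0. This is a generic sketch and an assumption on my part (function name and the SIR-style mixing model included), not the paper's actual risk-stratified analysis:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a homogeneously mixing population that must be immune
    for the effective reproduction number to drop below 1: 1 - 1/R0.
    (Illustrative textbook formula, not the paper's model.)"""
    if r0 <= 1.0:
        return 0.0  # an outbreak with R0 <= 1 fades without any immunity
    return 1.0 - 1.0 / r0

# With R0 = 2.5, roughly 60% of the low-risk group would need immunity.
print(herd_immunity_threshold(2.5))  # -> 0.6
```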
2020-03-26T00:00:00Z
https://hdl.handle.net/1721.1/124343
Stable Foundations for Learning: a foundational framework for learning theory in both the classical and modern regime.
Poggio, Tomaso
We consider here the class of supervised learning algorithms known as Empirical Risk Minimization (ERM). The classical theory by Vapnik and others characterizes universal consistency of ERM in the classical regime, in which the architecture of the learning network is fixed and n, the number of training examples, goes to infinity. According to the classical theory, the minimizer of the empirical risk is consistent if the hypothesis space has finite complexity. We do not have a similar general theory for the modern regime of interpolating regressors and over-parameterized deep networks, in which d > n and d/n remains constant as n goes to infinity.
In this note I propose the outline of such a theory, based on the specific notion of CVloo stability of the learning algorithm with respect to perturbations of the training set. The theory shows that for interpolating regressors and separating classifiers (either kernel machines or deep ReLU networks):
1. minimizing CVloo stability minimizes the expected error
2. the most stable solutions are minimum norm solutions
The hope is that this approach may lead to a unified theory encompassing both the modern regime and the classical one.
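Point 2 above, that the most stable solutions are minimum-norm solutions, can be sketched numerically for a linear interpolating regressor: in an over-parameterized system (d > n), the pseudoinverse selects, among all solutions that fit the data exactly, the one of minimum norm. This is a minimal illustration under my own assumptions (a random Gaussian design), not the note's argument:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                      # over-parameterized: d > n, as in the modern regime
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Minimum-norm interpolating solution, x = A^+ b.
x_min = np.linalg.pinv(A) @ b

# Any other interpolant differs from x_min by a null-space component of A ...
v = rng.standard_normal(d)
v_null = v - np.linalg.pinv(A) @ (A @ v)   # project v onto the null space of A
x_other = x_min + v_null

# ... both fit the data exactly, but the pseudoinverse solution has smaller norm.
assert np.allclose(A @ x_min, b) and np.allclose(A @ x_other, b)
assert np.linalg.norm(x_min) < np.linalg.norm(x_other)
```

Because x_min lies in the row space of A and v_null in its null space, the two are orthogonal, so any other interpolant is strictly longer.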
2020-03-25T00:00:00Z
https://hdl.handle.net/1721.1/123331
Universal Metaphysics
Liao, Qianli
The development of natural science, especially physics, allows us to understand the material world to a large extent. However, the world also contains a large number of concepts that are non-material and abstract, which are often poorly described by our language, let alone well understood. In order to provide a comprehensive and coherent account of the structure of the world, we argue that it is important to create an explicit system and language to describe the composition and working of the world, especially its non-material components. This is reminiscent of the goal of the millennia-old subject of metaphysics. Yet instead of focusing on isolated topics like most existing metaphysical studies, we argue it is beneficial to develop a roadmap for metaphysics (or mind): a unified and coherent theory of what exists in the world, how to describe it, how its parts interact, and how they are organized. Such development might lead to new insights into research in the science and engineering of intelligence, and perhaps also into how we view the world.
2019-12-31T00:00:00Z
https://hdl.handle.net/1721.1/123108
Double descent in the condition number
Poggio, Tomaso; Kur, Gil; Banburski, Andrzej
In solving a system of n linear equations in d variables, Ax = b, the condition number of the n × d matrix A measures how much errors in the data b affect the solution x. Bounds of this type are important in many inverse problems. An example is machine learning, where the key task is to estimate an underlying function from a set of measurements at random points in a high-dimensional space, and where low sensitivity to error in the data is a requirement for good predictive performance. Here we report the simple observation that when the columns of A are random vectors, the condition number of A is highest, that is, worst, when d = n, which is when the inverse of A can exist. An overdetermined system (n > d), and especially an underdetermined system (n < d), for which the pseudoinverse must be used instead of the inverse, typically has a significantly better, that is, lower, condition number. Thus the condition number of A, plotted as a function of d, shows a double descent behavior with a peak at d = n.
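The reported peak at d = n is easy to reproduce. The sketch below is my own minimal setup, assuming Gaussian random matrices and averaging the condition number over a few draws; the square case is by far the worst conditioned:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 20

def avg_cond(d: int) -> float:
    """Average condition number of an (n, d) Gaussian random matrix."""
    return float(np.mean([np.linalg.cond(rng.standard_normal((n, d)))
                          for _ in range(trials)]))

conds = {d: avg_cond(d) for d in (10, 50, 90)}

# Overdetermined (d < n) and underdetermined (d > n) cases are well conditioned;
# the condition number peaks sharply at the square case d = n.
assert conds[50] > conds[10] and conds[50] > conds[90]
```

Sweeping d across the full range 1..2n and plotting conds against d reproduces the double descent curve described in the abstract.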
2019-12-04T00:00:00Z