<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>Computer Science and Artificial Intelligence Lab (CSAIL)</title>
<link>https://hdl.handle.net/1721.1/5458</link>
<description/>
<pubDate>Fri, 17 Apr 2026 20:05:36 GMT</pubDate>
<dc:date>2026-04-17T20:05:36Z</dc:date>
<item>
<title>On the Complexity of Neural Computation in Superposition</title>
<link>https://hdl.handle.net/1721.1/157073</link>
<description>On the Complexity of Neural Computation in Superposition
Adler, Micah; Shavit, Nir
Recent advances in the understanding of neural networks suggest that superposition, the ability of a single neuron to represent multiple features simultaneously, is a key mechanism underlying the computational efficiency of large-scale networks. This paper explores the theoretical foundations of computing in superposition, focusing on explicit, provably correct algorithms and their efficiency.&#13;
&#13;
We present the first lower bounds showing that for a broad class of problems, including permutations and pairwise logical operations, a neural network computing in superposition requires at least Ω(m′ log m′) parameters and Ω(√(m′ log m′)) neurons, where m′ is the number of output features being computed. This implies that any “lottery ticket” sparse sub-network must have at least Ω(m′ log m′) parameters no matter what the initial dense network size. Conversely, we show a nearly tight upper bound: logical operations like pairwise AND can be computed using O(√(m′) log m′) neurons and O(m′ log^2 m′) parameters. There is thus an exponential gap between computing in superposition, the subject of this work, and representing features in superposition, which can require as little as O(log m′) neurons based on the Johnson-Lindenstrauss Lemma.&#13;
&#13;
Our hope is that our results open a path for using complexity theoretic techniques in neural network interpretability research.
</description>
<pubDate>Mon, 30 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157073</guid>
<dc:date>2024-09-30T00:00:00Z</dc:date>
</item>
<item>
<title>Belief Programming Implementation</title>
<link>https://hdl.handle.net/1721.1/153053</link>
<description>Belief Programming Implementation
Atkinson, Eric
</description>
<pubDate>Mon, 27 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/153053</guid>
<dc:date>2023-11-27T00:00:00Z</dc:date>
</item>
<item>
<title>Speranza: Usable, privacy-friendly software signing</title>
<link>https://hdl.handle.net/1721.1/152179</link>
<description>Speranza: Usable, privacy-friendly software signing
Merrill, Kelsey; Newman, Zachary; Torres-Arias, Santiago; Sollins, Karen
Software repositories, used for wide-scale open software distribution, are a significant vector for security attacks. Software signing provides authenticity, mitigating many such attacks. Developer-managed signing keys pose usability challenges, but certificate-based systems introduce privacy problems. This work, Speranza, uses certificates to verify software authenticity but still provides anonymity to signers using zero-knowledge identity co-commitments.&#13;
In Speranza, a signer uses an automated certificate authority (CA) to create a private identity-bound signature and proof of authorization. Verifiers check that a signer was authorized to publish a package without learning the signer’s identity. The package repository privately records each package’s authorized signers, but publishes only commitments to identities in a public map. Then, when issuing certificates, the CA issues the certificate to a distinct commitment to the same identity. The signer then creates a zero-knowledge proof that these are identity co-commitments.&#13;
We implemented a proof-of-concept for Speranza. We find that costs to maintainers (signing) and end users (verifying) are small (sub-millisecond), even for a repository with millions of packages. Techniques inspired by recent key transparency systems reduce the bandwidth for serving authorization policies to 2 KiB. Server costs in this system are negligible. Our evaluation finds that Speranza is practical on the scale of the largest software repositories.&#13;
We also emphasize practicality and deployability in this project. By building on existing technology and employing relatively simple and well-established cryptographic techniques, Speranza can be deployed for wide-scale use with only a few hundred lines of code and minimal changes to existing infrastructure. Speranza is a practical way to bring privacy and authenticity together for more trustworthy open-source software.
This is an extended version of the shorter paper by the same name, published in ACM CCS 2023.
</description>
<pubDate>Tue, 19 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/152179</guid>
<dc:date>2023-09-19T00:00:00Z</dc:date>
</item>
<item>
<title>How Can Large Language Models Help Humans in Design And Manufacturing?</title>
<link>https://hdl.handle.net/1721.1/151174</link>
<description>How Can Large Language Models Help Humans in Design And Manufacturing?
Makatura, Liane; Foshey, Michael; Wang, Bohan; Hähnlein, Felix; Ma, Pingchuan; Deng, Bolei; Tjandrasuwita, Megan; Spielberg, Andrew; Owens, Crystal Elaine; Chen, Peter Yichen; Zhao, Allan; Zhu, Amy; Norton, Wil J; Gu, Edward; Jacob, Joshua; Li, Yifei; Schulz, Adriana; Matusik, Wojciech
</description>
<pubDate>Thu, 27 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151174</guid>
<dc:date>2023-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Counterfactual Explanations and Predictive Models to Enhance Clinical Decision-Making in Schizophrenia using Digital Phenotyping</title>
<link>https://hdl.handle.net/1721.1/150908</link>
<description>Counterfactual Explanations and Predictive Models to Enhance Clinical Decision-Making in Schizophrenia using Digital Phenotyping
Canas, Juan Sebastian; Gomez, Francisco; Costilla Reyes, Omar
Clinical practice in psychiatry is burdened with the increased demand for healthcare services and the scarce resources available. New paradigms of health data powered with machine learning techniques could open the possibility to improve clinical workflow in critical stages of clinical assessment and treatment in psychiatry. &#13;
In this work, we propose a machine learning system capable of predicting, detecting, and explaining individual changes in symptoms of patients with Schizophrenia by using behavioral digital phenotyping data. We forecast symptoms of patients with an error rate below 10%. &#13;
The system detects decreases in symptoms using changepoint algorithms and uses counterfactual explanations as a recourse in a simulated continuous monitoring scenario in healthcare.  Overall, this study offers valuable insights into the performance and potential of counterfactual explanations, predictive models, and change-point detection within a simulated clinical workflow. These findings lay the foundation for further research to explore additional facets of the workflow, aiming to enhance its effectiveness and applicability in real-world healthcare settings. By leveraging these components, the goal is to develop an actionable, interpretable, and trustworthy integrative decision support system that combines real-time clinical assessments with sensor-based inputs.
</description>
<pubDate>Thu, 15 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150908</guid>
<dc:date>2023-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>The AEGIS Processor Architecture for Tamper-Evident and Tamper-Resistant Processing</title>
<link>https://hdl.handle.net/1721.1/149977.4</link>
<description>The AEGIS Processor Architecture for Tamper-Evident and Tamper-Resistant Processing
Suh, G. Edward; Clarke, Dwaine; Gassend, Blaise; van Dijk, Marten; Devadas, Srinivas
We describe the architecture for a single-chip AEGIS processor which can be used to build computing systems secure against both physical and software attacks. Our architecture assumes that all components external to the processor, such as memory, are untrusted. We show two different implementations. In the first case, the core functionality of the operating system is trusted and implemented in a security kernel. We also describe a variant implementation assuming an untrusted operating system. AEGIS provides users with tamper-evident, authenticated environments in which any physical or software tampering by an adversary is guaranteed to be detected, and private and authenticated tamper-resistant environments where additionally the adversary is unable to obtain any information about software or data by tampering with, or otherwise observing, system operation. AEGIS enables many applications, such as commercial grid computing, secure mobile agents, software licensing, and digital rights management. We also present a new encryption/decryption method that successfully hides a significant portion of encryption/decryption latency, in comparison to a conventional direct encryption scheme. Efficient memory encryption and integrity verification enable the implementation of a secure computing system with the only trusted component being a single-chip AEGIS CPU. Preliminary simulation results indicate that the overhead of security mechanisms in AEGIS is reasonable.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149977.4</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The AEGIS Processor Architecture for Tamper-Evident and Tamper-Resistant Processing</title>
<link>https://hdl.handle.net/1721.1/149977.3</link>
<description>The AEGIS Processor Architecture for Tamper-Evident and Tamper-Resistant Processing
Suh, G. Edward; Clarke, Dwaine; Gassend, Blaise; van Dijk, Marten; Devadas, Srinivas
We describe the architecture of the AEGIS processor which can be used to build computing systems secure against both physical and software attacks. AEGIS assumes that the operating system and all components external to it, such as memory, are untrusted. AEGIS provides tamper-evident, authenticated environments in which any physical or software tampering by the adversary is guaranteed to be detected, and private and authenticated, tamper-resistant environments where additionally the adversary is unable to obtain any information about software or data by tampering with, or otherwise observing, system operation. AEGIS enables many applications, such as commercial grid computing, software licensing, and digital rights management. We present a new encryption/decryption method that successfully hides a significant portion of encryption/decryption latency, in comparison to a conventional direct encryption scheme. Efficient memory encryption and integrity verification enable the implementation of a secure computing system with the only trusted component being a single-chip AEGIS CPU. Detailed simulation results indicate that the performance overhead of security mechanisms in AEGIS is reasonable.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149977.3</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>AEGIS: Architecture for Tamper-Evident and Tamper-Resistant Processing</title>
<link>https://hdl.handle.net/1721.1/149977.2</link>
<description>AEGIS: Architecture for Tamper-Evident and Tamper-Resistant Processing
Suh, G. Edward; Clarke, Dwaine; Gassend, Blaise; van Dijk, Marten; Devadas, Srinivas
We describe the architecture for a single-chip AEGIS processor which can be used to build computing systems secure against both physical and software attacks. Our architecture assumes that all components external to the processor, such as memory, are untrusted. We show two different implementations. In the first case, the core functionality of the operating system is trusted and implemented in a security kernel. We also describe a variant implementation assuming an untrusted operating system. AEGIS provides users with tamper-evident, authenticated environments in which any physical or software tampering by an adversary is guaranteed to be detected, and private and authenticated tamper-resistant environments where additionally the adversary is unable to obtain any information about software or data by tampering with, or otherwise observing, system operation. AEGIS enables many applications, such as commercial grid computing, secure mobile agents, software licensing, and digital rights management. Preliminary simulation results indicate that the overhead of security mechanisms in AEGIS is reasonable.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149977.2</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid I/O Automata*</title>
<link>https://hdl.handle.net/1721.1/149930.3</link>
<description>Hybrid I/O Automata*
Lynch, Nancy A.; Segala, Roberto; Vaandrager, Frits
Hybrid systems are systems that exhibit a combination of discrete and continuous behavior. Typical hybrid systems include computer components, which operate in discrete program steps, and real-world components, whose behavior over time intervals evolves according to physical constraints. Important examples of hybrid systems include automated transportation systems, robotics systems, process control systems, systems of embedded devices, and mobile computing systems. Such systems can be very complex, and very difficult to describe and analyze. This paper presents the Hybrid Input/Output Automaton (HIOA) modeling framework, a basic mathematical framework to support description and analysis of hybrid systems. An important feature of this model is its support for decomposing hybrid system descriptions. In particular, the framework includes a notion of external behavior for a hybrid I/O automaton, which captures its discrete and continuous interactions with its environment. The framework also defines what it means for one HIOA to implement another, based on an inclusion relationship between their external behavior sets, and defines a notion of simulation, which provides a sufficient condition for demonstrating implementation relationships. The framework also includes a composition operation for HIOAs, which respects external behavior, and a notion of receptiveness, which implies that an HIOA does not block the passage of time. The framework is intended to support analysis methods from both computer science and control theory. This work is a simplification of an earlier version of the HIOA model [49, 50]. The main simplification in the new model is a clearer separation between the mechanisms used to model discrete and continuous interaction between components. In particular, the new model removes the dual use of external variables for discrete and continuous interaction.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149930.3</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid I/O Automata*</title>
<link>https://hdl.handle.net/1721.1/149930.2</link>
<description>Hybrid I/O Automata*
Lynch, Nancy A.; Segala, Roberto; Vaandrager, Frits
Hybrid systems are systems that exhibit a combination of discrete and continuous behavior. Typical hybrid systems include computer components, which operate in discrete program steps, and real-world components, whose behavior over time intervals evolves according to physical constraints. Important examples of hybrid systems include automated transportation systems, robotics systems, process control systems, systems of embedded devices, and mobile computing systems. Such systems can be very complex, and very difficult to describe and analyze. This paper presents the Hybrid Input/Output Automaton (HIOA) modeling framework, a basic mathematical framework to support description and analysis of hybrid systems. An important feature of this model is its support for decomposing hybrid system descriptions. In particular, the framework includes a notion of external behavior for a hybrid I/O automaton, which captures its discrete and continuous interactions with its environment. The framework also defines what it means for one HIOA to implement another, based on an inclusion relationship between their external behavior sets, and defines a notion of simulation, which provides a sufficient condition for demonstrating implementation relationships. The framework also includes a composition operation for HIOAs, which respects external behavior, and a notion of receptiveness, which implies that an HIOA does not block the passage of time. The framework is intended to support analysis methods from both computer science and control theory. This work is a simplification of an earlier version of the HIOA model [49, 50]. The main simplification in the new model is a clearer separation between the mechanisms used to model discrete and continuous interaction between components. In particular, the new model removes the dual use of external variables for discrete and continuous interaction.
</description>
<pubDate>Fri, 01 Feb 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149930.2</guid>
<dc:date>2002-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced Certificate Revocation</title>
<link>https://hdl.handle.net/1721.1/149254.2</link>
<description>Enhanced Certificate Revocation
Micali, Silvio
We apply off-line/on-line signatures to provide an alternative solution to the problem of certificate revocation. The new systems dispense with traditional CRLs (Certificate Revocation Lists) and yield public-key infrastructures that are substantially cheaper to run than traditional ones.
</description>
<pubDate>Fri, 01 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149254.2</guid>
<dc:date>1996-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>What are principal typings and what are they good for?</title>
<link>https://hdl.handle.net/1721.1/149247.2</link>
<description>What are principal typings and what are they good for?
Jim, Trevor
We demonstrate the pragmatic value of the principal typing property, a property more general than ML's principal type property, by studying a type system with principal typings. The type system is based on rank 2 intersection types and is closely related to ML. Its principal typing property provides elegant support for separate compilation, including "smartest recompilation" and incremental type inference, and for accurate type error messages. Moreover, it motivates a novel rule for typing recursive definitions that can type many examples of polymorphic recursion.
</description>
<pubDate>Wed, 01 Nov 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149247.2</guid>
<dc:date>1995-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rank 2 Type Systems and Recursive Definitions</title>
<link>https://hdl.handle.net/1721.1/149246.2</link>
<description>Rank 2 Type Systems and Recursive Definitions
Jim, Trevor
We demonstrate an equivalence between the rank 2 fragments of the polymorphic lambda calculus (System F) and the intersection type discipline: exactly the same terms are typable in each system.  An immediate consequence is that typability in the rank 2 intersection system is DEXPTIME-complete. We introduce a rank 2 system combining intersections and polymorphism and prove that it types exactly the same terms as the other rank 2 systems. The combined system suggests a new rule for typing recursive definitions. The result is a rank 2 type system with decidable type inference that can type some interesting examples of polymorphic recursion. Finally, we discuss some applications of the type system in data representation optimizations such as unboxing and overloading.
</description>
<pubDate>Wed, 01 Nov 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149246.2</guid>
<dc:date>1995-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Autoimmune Mechanism for AIDS' T4 Lymphopenia</title>
<link>https://hdl.handle.net/1721.1/149173.2</link>
<description>An Autoimmune Mechanism for AIDS' T4 Lymphopenia
Micali, Silvio
We put forward a new model for the T4 lymphopenia occurring in AIDS by suggesting a mechanism whose net effect is blocking the generation of T4 cells during HIV infection. Supporting evidence for this mechanism is derived from the experiments in the recent literature.
</description>
<pubDate>Wed, 01 May 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149173.2</guid>
<dc:date>1991-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Randomness-efficient Sampling of Arbitrary Functions</title>
<link>https://hdl.handle.net/1721.1/149164.2</link>
<description>Randomness-efficient Sampling of Arbitrary Functions
Bellare, Mihir; Rompel, John
</description>
<pubDate>Sun, 01 Jul 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149164.2</guid>
<dc:date>1990-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Modular Drinking Philosophers Algorithm</title>
<link>https://hdl.handle.net/1721.1/149155.2</link>
<description>A Modular Drinking Philosophers Algorithm
Welch, Jennifer Lundelius; Lynch, Nancy A.
A variant of the drinking philosophers algorithm of Chandy and Misra is described and proved correct in a modular way, using the I/O automaton model of Lynch and Tuttle. The algorithm of Chandy and Misra is based on a particular dining philosophers algorithm, and relies on certain properties of its implementation. The drinking philosophers algorithm presented in this paper is able to use an arbitrary dining philosophers algorithm as a true subroutine; nothing about the implementation needs to be known, only that it solves the dining philosophers problem. An important advantage of this modularity is that by substituting a more time-efficient dining philosophers algorithm, a drinking philosophers algorithm with O(1) worst-case waiting time is obtained, whereas the drinking philosophers algorithm of Chandy and Misra has O(n) worst-case waiting time (for n philosophers). Formal definitions are given to distinguish the drinking and dining philosophers problems and to specify precisely varying degrees of concurrency.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149155.2</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bandwidth Management in Wireless Sensor Networks</title>
<link>https://hdl.handle.net/1721.1/149994</link>
<description>Bandwidth Management in Wireless Sensor Networks
Hull, Bret; Jamieson, Kyle; Balakrishnan, Hari
Wireless sensor networks are often used in monitoring and control applications, where software running on general-purpose computers “pulls” information from remote sensors and “pushes” actuations into the network. The sensors themselves form a multihop wireless network communicating with one or more sensor access points (SAPs) that interface between application software and the sensor network. This paper addresses the problem of managing wireless network bandwidth and improving network capacity in a sensor network deployed as a shared infrastructure, concurrently used by different applications. Our bandwidth management architecture incorporates three ideas: first, we develop a simple rule system that allows applications and the network administrator to specify how traffic generated by sensors should be treated by the sensor network. Each rule is a function that maps a sensor data type and generated value to a transmission rate and a traffic class. Second, we show how using multiple SAPs and a SAP selection method that considers packet loss probabilities, path load, and path lengths improves the capacity of the network and the performance of individual sensor streams. Third, we show that hop-by-hop flow control, rather than end-to-end congestion control, is a better way to cope with the nature of sensor network traffic and avoids unnecessary packet losses that waste valuable wireless network bandwidth. Our experimental results from a 40-node indoor wireless sensor testbed show that these three techniques are simple to implement and allow scarce network bandwidth to be used efficiently.
</description>
<pubDate>Tue, 01 Apr 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149994</guid>
<dc:date>2003-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer-Enforced Immutability for the Java Language</title>
<link>https://hdl.handle.net/1721.1/149993</link>
<description>Computer-Enforced Immutability for the Java Language
Birka, Adrian
This thesis presents the design, implementation, and evaluation of an extension to the Java language, ConstJava, that is capable of expressing immutability constraints and verifying them at compile time. The specific constraint expressed in ConstJava is that the transitive state of the object to which a given reference refers cannot be modified using that reference. In addition to the ability to specify and enforce this basic constraint, ConstJava includes several other features, such as mutable fields, immutable classes, templates, and the const cast operator, that make ConstJava a more useful language. The thesis evaluates the utility of ConstJava via experiments involving writing ConstJava code and converting Java code to ConstJava code. The evaluation of ConstJava shows that the language provides tangible benefits in early detection and correction of bugs that would otherwise be difficult to catch. There are also costs associated with the use of ConstJava. These are minimized by ConstJava’s backward compatibility with Java, and by the high degree of inter-operability of the two languages, which allows for a less painful transition from Java to ConstJava. This technical report is a revision of the author’s Master’s thesis, which was advised by Prof. Michael D. Ernst.
</description>
<pubDate>Sun, 01 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149993</guid>
<dc:date>2003-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compositionality for Probabilistic Automata</title>
<link>https://hdl.handle.net/1721.1/149992</link>
<description>Compositionality for Probabilistic Automata
Lynch, Nancy A.; Segala, Roberto; Vaandrager, Frits
We establish that on the domain of probabilistic automata, the trace distribution preorder coincides with the simulation preorder.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149992</guid>
</item>
<item>
<title>Subexponential Parameterized Algorithms on Graphs of Bounded Genus and H-minor-free Graphs</title>
<link>https://hdl.handle.net/1721.1/149991</link>
<description>Subexponential Parameterized Algorithms on Graphs of Bounded Genus and H-minor-free Graphs
Demaine, Erik D.; Fomin, Fedor V.; Hajiaghayi, Mohammad Taghi; Thilikos, Dimitrios M.
We introduce a new framework for designing fixed-parameter algorithms with subexponential running time---2^O(sqrt k) n^O(1).  Our results apply to a broad family of graph problems, called bidimensional problems, which includes many domination and covering problems such as vertex cover, feedback vertex set, minimum maximal matching, dominating set, edge dominating set, clique-transversal set, and many others restricted to bounded genus graphs. Furthermore, it is fairly straightforward to prove that a problem is bidimensional.  In particular, our framework includes as special cases all previously known problems to have such subexponential algorithms.  Previously, these algorithms applied to planar graphs, single-crossing-minor-free graphs, and/or map graphs; we extend these results to apply to bounded-genus graphs as well.  In a parallel development of combinatorial results, we establish an upper bound on the treewidth (or branchwidth) of a bounded-genus graph that excludes some planar graph H as a minor.  This bound depends linearly on the size |V(H)| of the excluded graph H and the genus g(G) of the graph G, and applies and extends the graph-minors work of Robertson and Seymour.   Building on these results, we develop subexponential fixed-parameter algorithms for dominating set, vertex cover, and set cover in any class of graphs excluding a fixed graph H as a minor.  In particular, this general category of graphs includes planar graphs, bounded-genus graphs, single-crossing-minor-free graphs, and any class of graphs that is closed under taking minors. Specifically, the running time is 2^O(sqrt k) n^h, where h is a constant depending only on H, which is polynomial for k = O(log^2 n).  We introduce a general approach for developing algorithms on H-minor-free graphs, based on structural results about H-minor-free graphs at the heart of Robertson and Seymour's graph-minors work.  
We believe this approach opens the way to further development on problems in H-minor-free graphs.
</description>
<pubDate>Sun, 01 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149991</guid>
<dc:date>2003-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fixed Parameter Algorithms for Minor-Closed Graphs (of Locally Bounded Treewidth)</title>
<link>https://hdl.handle.net/1721.1/149990</link>
<description>Fixed Parameter Algorithms for Minor-Closed Graphs (of Locally Bounded Treewidth)
Demaine, Erik D.; Hajiaghayi, Mohammad Taghi
Frick and Grohe showed that for each property phi that is definable in first-order logic, and for each class of minor-closed graphs of locally bounded treewidth, there is an O(n^(1+epsilon))-time algorithm deciding whether a given graph has property phi. In this paper, we extend this result for fixed-parameter algorithms and show that any minor-closed [contraction-closed] bidimensional parameter which can be computed in polynomial time on graphs of bounded treewidth is also fixed-parameter tractable on general minor-closed graphs [minor-closed class of graphs of locally bounded treewidth].  These parameters include many domination and covering parameters such as vertex cover, feedback vertex set, dominating set, and clique-transversal set.  Our algorithm is very simple and its running time is explicit (in contrast to the work of Frick and Grohe).  Along the way, we obtain interesting combinatorial bounds between the aforementioned parameters and the treewidth of the graphs.
</description>
<pubDate>Sun, 01 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149990</guid>
<dc:date>2003-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equivalence of Local Treewidth and Linear Local Treewidth and its Algorithmic Applications</title>
<link>https://hdl.handle.net/1721.1/149989</link>
<description>Equivalence of Local Treewidth and Linear Local Treewidth and its Algorithmic Applications
Demaine, Erik D.; Hajiaghayi, Mohammad Taghi
We solve an open problem posed by Eppstein in 1995 and re-enforced by Grohe concerning locally bounded treewidth in minor-closed families of graphs. A graph has bounded local treewidth if the subgraph induced by vertices within distance r of any vertex has treewidth bounded by a function of r (not n). Eppstein characterized minor-closed families of graphs with bounded local treewidth as precisely minor-closed families that minor-exclude an apex graph, where an apex graph has one vertex whose removal leaves a planar graph. In particular, Eppstein showed that all apex-minor-free graphs have bounded local treewidth, but his bound is doubly exponential in r, leaving open whether a tighter bound could be obtained.  We improve this doubly exponential bound to a linear bound, which is optimal. In particular, any minor-closed graph family with bounded local treewidth has linear local treewidth. Our bound generalizes previously known linear bounds for special classes of graphs proved by several authors.  As a consequence of our result, we obtain substantially faster polynomial-time approximation schemes for a broad class of problems in apex-minor-free graphs, improving the running time from 2^(2^(2^O(1/epsilon))) n^O(1) to 2^O(1/epsilon) n^O(1).
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149989</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Snapshots in a Distributed Persistent Object Storage System</title>
<link>https://hdl.handle.net/1721.1/149988</link>
<description>Snapshots in a Distributed Persistent Object Storage System
Moh, Chuang-Hue
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149988</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incremental Multiset Hash Functions and their Application to Memory Integrity Checking</title>
<link>https://hdl.handle.net/1721.1/149987</link>
<description>Incremental Multiset Hash Functions and their Application to Memory Integrity Checking
Clarke, Dwaine; Devadas, Srinivas; van Dijk, Marten; Gassend, Blaise; Suh, G. Edward
We introduce a new cryptographic tool: multiset hash functions. Unlike standard hash functions, which take strings as input, multiset hash functions operate on multisets (or sets). They map multisets of arbitrary finite size to strings (hashes) of fixed length. They are incremental in that, when new members are added to the multiset, the hash can be updated in time proportional to the change. The functions may be multiset-collision resistant in that it is difficult to find two multisets which produce the same hash, or just set-collision resistant in that it is difficult to find a set and a multiset which produce the same hash. In particular, we introduce four multiset hash functions, each with its own advantages. MSet-XOR-Hash uses the XOR operation and is very efficient; however, it uses a secret key and is only set-collision resistant. MSet-Add-Hash uses addition modulo a large integer and, thus, is slightly less efficient than MSet-XOR-Hash; MSet-Add-Hash also uses a secret key, but it is multiset-collision resistant. MSet-Mu-Hash uses finite field arithmetic and is not as efficient as the other two hash functions; however, MSet-Mu-Hash is multiset-collision resistant and, unlike the other two hash functions, does not require a secret key. MSet-VAdd-Hash is more efficient than MSet-Mu-Hash; it is also multiset-collision resistant and does not use a secret key, but the hashes it produces are significantly longer than the hashes of the other functions. The proven security of MSet-XOR-Hash and MSet-Add-Hash is quantitative: we reduce the hardness of finding collisions to the hardness of breaking the underlying pseudorandom functions. The proven security of MSet-Mu-Hash is in the random oracle model and is based on the hardness of the discrete logarithm problem. The proven security of MSet-VAdd-Hash is also in the random oracle model and is based on the hardness of the worst-case shortest vector problem.
We demonstrate how set-collision resistant multiset hash functions make an existing offline memory integrity checker secure against active adversaries. We improve on this checker so that it can use smaller time stamps without increasing the frequency of checks. The improved checker uses multiset-collision resistant hash functions.
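The incremental-update idea behind an addition-based multiset hash can be sketched in a few lines. This is an illustrative simplification only: HMAC-SHA256 as the keyed PRF and a 2^256 modulus are assumptions of the sketch, not the paper's exact parameters.

```python
import hashlib, hmac

MOD = 2 ** 256  # illustrative modulus, not the paper's exact choice

def _prf(key: bytes, element: bytes) -> int:
    # Keyed pseudorandom function mapping an element to a large integer
    # (HMAC-SHA256 is an assumption made for this sketch).
    return int.from_bytes(hmac.new(key, element, hashlib.sha256).digest(), "big")

def mset_add_hash(key: bytes, multiset) -> int:
    # Hash of a multiset: sum of per-element PRF values modulo a large
    # integer, so the result is independent of element order.
    return sum(_prf(key, e) for e in multiset) % MOD

def add_element(key: bytes, h: int, element: bytes) -> int:
    # Incremental update: adding one element costs O(1); no rehash of the set.
    return (h + _prf(key, element)) % MOD

key = b"secret"
assert mset_add_hash(key, [b"a", b"b"]) == add_element(key, mset_add_hash(key, [b"a"]), b"b")
assert mset_add_hash(key, [b"a", b"b"]) == mset_add_hash(key, [b"b", b"a"])
```

Note that multiplicities matter here: [b"a", b"a"] hashes differently from [b"a"], which is what distinguishes multiset-collision resistance from mere set-collision resistance.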
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149987</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Application-level Network Services with Regions</title>
<link>https://hdl.handle.net/1721.1/149986</link>
<description>Improving Application-level Network Services with Regions
Li, Ji
The underlying premise of the Region Project is that the concept of a region should be a new architectural capability in networking. A region is an entity that encapsulates and implements scoping, grouping, subdividing, and crossing boundaries of sets of entities. It is a powerful tool for managing the increasingly complex demands on the Internet and its successors, and thus should be made an explicit, first-class component of the network architecture. Autonomous Systems and peer-to-peer networks can be viewed as two simple forms of existing regions. In this work, we explore the utility of informing members in one region of the membership of those same entities in different regions. Specifically, we improve peer-to-peer networks with information derived from Autonomous Systems. This thesis makes three notable contributions. First, we provide a general peer-to-peer simulation framework for different optimization schemes. Second, we achieve performance improvements in the lookup, caching, and replication of peer-to-peer systems. Finally, we enhance our overall understanding of regions, and of their utility for improving system performance, through simulation.
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149986</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sift: A MAC Protocol for Event-Driven Wireless Sensor Networks</title>
<link>https://hdl.handle.net/1721.1/149985</link>
<description>Sift: A MAC Protocol for Event-Driven Wireless Sensor Networks
Jamieson, Kyle; Balakrishnan, Hari; Tay, Y.C.
Nodes in sensor networks often encounter spatially-correlated contention, where multiple nodes in the same neighborhood all sense an event they need to transmit information about. Furthermore, in many sensor network applications, it is sufficient if a subset of the nodes that observe the same event report it. We show that traditional carrier-sense multiple access (CSMA) protocols like 802.11 do not handle the first constraint adequately, and do not take advantage of the second property, leading to degraded latency and throughput as the network scales in size.   We present Sift, a medium access protocol for wireless sensor networks designed with the above observations in mind. Sift is a randomized CSMA protocol, but unlike previous protocols, does not use a time-varying contention window from which a node randomly picks a transmission slot. Rather, to reduce the latency for the delivery of event reports, Sift uses a fixed-size contention window and a carefully-chosen, non-uniform probability distribution of transmitting in each slot within the window. We show using simulations that Sift can offer up to a 7-fold latency reduction compared to 802.11 as the size of the sensor network scales up to 500 nodes. We then analytically prove bounds on the best latency achievable by a decentralized CSMA-based MAC protocol for sensor networks where one report of each event is enough, and show that Sift comes close to meeting this bound.
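The core of Sift's design, a fixed contention window with a non-uniform slot distribution weighted toward later slots, can be sketched as follows; the window size and shape parameter here are illustrative values, not those of the paper's evaluation.

```python
import random

CW = 32       # fixed contention window size; the window does not vary over time
ALPHA = 0.82  # shape parameter of the slot distribution (illustrative value)

def sift_slot(rng=random) -> int:
    # Pick a transmission slot from a fixed window with geometrically
    # increasing probability toward later slots, so that even with many
    # contenders some node is likely to be alone in an early slot.
    weights = [ALPHA ** (CW - r) for r in range(1, CW + 1)]
    x = rng.random() * sum(weights)
    acc = 0.0
    for r, w in enumerate(weights, start=1):
        acc += w
        if x <= acc:
            return r
    return CW
```

Because most of the probability mass sits in the last few slots, the earliest slots are sparsely chosen, and whichever node draws the earliest slot wins the medium without a time-varying backoff window.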
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149985</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anchor-free Distributed Localization in Sensor Networks</title>
<link>https://hdl.handle.net/1721.1/149984</link>
<description>Anchor-free Distributed Localization in Sensor Networks
Priyantha, Nissanka B.; Balakrishnan, Hari; Demaine, Erik; Teller, Seth
Many sensor network applications require that each node's sensor stream be annotated with its physical location in some common coordinate system. Manual measurement and configuration methods for obtaining location don't scale and are error-prone, and equipping sensors with GPS is often expensive and does not work in indoor and urban deployments. Sensor networks can therefore benefit from a self-configuring method where nodes cooperate with each other, estimate local distances to their neighbors, and converge to a consistent coordinate assignment. This paper describes a fully decentralized algorithm called AFL (Anchor-Free Localization) where nodes start from a random initial coordinate assignment and converge to a consistent solution using only local node interactions. The key idea in AFL is fold-freedom, where nodes first configure into a topology that resembles a scaled and unfolded version of the true configuration, and then run a force-based relaxation procedure. We show using extensive simulations under a variety of network sizes, node densities, and distance estimation errors that our algorithm is superior to previously proposed methods that incrementally compute the coordinates of nodes in the network, in terms of its ability to compute correct coordinates under a wider variety of conditions and its robustness to measurement errors.
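The force-based relaxation stage can be sketched as a simple spring model over the measured distances; this is a sketch only, omitting AFL's fold-freedom initialization stage, with illustrative step-size and iteration parameters.

```python
import math, random

def relax(coords, edges, iters=5000, step=0.05):
    # Force-based relaxation: each measured edge (i, j, d) attracts or repels
    # its endpoints until estimated distances match the measurements.
    # (Sketch only: the fold-freedom initialization stage is omitted.)
    for _ in range(iters):
        for i, j, d in edges:
            (xi, yi), (xj, yj) = coords[i], coords[j]
            dx, dy = xj - xi, yj - yi
            cur = math.hypot(dx, dy) or 1e-9
            f = step * (cur - d) / cur  # positive when too far apart: attract
            coords[i] = (xi + f * dx, yi + f * dy)
            coords[j] = (xj - f * dx, yj - f * dy)
    return coords
```

Starting from coordinates that are roughly unfolded (the role of AFL's first stage), the residual error between estimated and measured distances shrinks toward zero, yielding a consistent assignment up to translation, rotation, and reflection.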
</description>
<pubDate>Tue, 01 Apr 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149984</guid>
<dc:date>2003-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>System Dependence Graph Construction for Aspect-Oriented Programs</title>
<link>https://hdl.handle.net/1721.1/149983</link>
<description>System Dependence Graph Construction for Aspect-Oriented Programs
Zhao, Jianjun; Rinard, Martin
We extend previous dependence-based representations called system dependence graphs (SDGs) to represent aspect-oriented programs and present an SDG construction algorithm. This algorithm first constructs a module dependence graph (MDG) for each piece of advice, introduction, and method in aspects and classes. It then uses existing techniques to connect the MDGs at call sites to form a partial SDG. Finally, it weaves the MDG for each piece of advice into the partial SDG for those methods whose behavior may be affected by the advice. The result is the complete SDG. Our SDGs capture the additional structure present in many aspect-oriented features such as join points, advice, introduction, aspects, and aspect inheritance, and various types of interactions between aspects and classes. They also correctly reflect the semantics of aspect-oriented concepts such as advice precedence, introduction scope, and aspect weaving. SDGs therefore provide a solid foundation for the further analysis of aspect-oriented programs.
</description>
<pubDate>Sat, 01 Mar 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149983</guid>
<dc:date>2003-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>RAMBO II: Rapidly Reconfigurable Atomic Memory for Dynamic Networks</title>
<link>https://hdl.handle.net/1721.1/149982</link>
<description>RAMBO II: Rapidly Reconfigurable Atomic Memory for Dynamic Networks
Gilbert, Seth; Lynch, Nancy A.; Shvartsman, Alexander A.
Future civilian rescue and military operations will depend on a complex system of communicating devices that can operate in highly dynamic environments. In order to present a consistent view of a complex world, these devices will need to maintain data objects with atomic (linearizable) read/write semantics.
</description>
<pubDate>Sat, 01 Mar 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149982</guid>
<dc:date>2003-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference of Generic Types in Java</title>
<link>https://hdl.handle.net/1721.1/149981</link>
<description>Inference of Generic Types in Java
Donovan, Alan; Ernst, Michael D.
Future versions of Java will include support for parametric polymorphism, or generic classes.  This will bring many benefits to Java programmers, not least because current Java practice makes heavy use of pseudo-generic classes.  Such classes (for example, those in package java.util) have logically generic specifications and documentation, but the type system cannot prove their patterns of use to be safe.  This work aims to solve the problem of automatic translation of Java source code into Generic Java (GJ) source code.  We present two algorithms that together can be used to automatically translate a Java source program into a semantically equivalent GJ program with generic types.  The first algorithm infers a candidate generalisation for any class, based on the methods of that class in isolation.  The second algorithm analyses the whole program; it determines a precise parametric type for every value in the program.  Optionally, it also refines the generalisations produced by the first analysis as required by the patterns of use of those classes in client code.
</description>
<pubDate>Sat, 01 Mar 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149981</guid>
<dc:date>2003-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Data Structures on Untrusted Peer-to-Peer Storage with Per-participant Logs</title>
<link>https://hdl.handle.net/1721.1/149980</link>
<description>Building Data Structures on Untrusted Peer-to-Peer Storage with Per-participant Logs
Chen, Benjie; Gil, Thomer M.; Muthitacharoen, Athicha; Morris, Robert T.
L* is a technique for building multi-user distributed data structures out of untrusted peer-to-peer distributed hash tables (DHTs). L* uses multiple logs, one log per participant, to store changes to the data structure. Each participant finds data by consulting all logs, but performs modifications by appending only to its own log. This decentralized structure allows L* to maintain meta-data consistency without locking and to isolate users' changes from each other, an appropriate arrangement for unreliable users. Applications use L* to maintain consistent data structures. L* interleaves multiple logs deterministically so that decentralized clients can agree on the order of completed operations, even if those operations were issued concurrently. When the data structure is quiescent, L* guarantees that clients agree on the state of the data structure. L* optionally provides mutual exclusion for applications that need to ensure atomicity for multi-step operations. The Ivy file system, built on top of L*, demonstrates that L*'s consistency guarantees are useful and can be implemented efficiently.
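The deterministic-interleaving idea can be sketched as a merge of per-participant append-only logs under a fixed total order; the tie-breaking rule used here (sequence number, then participant id) is illustrative, not L*'s exact ordering.

```python
import heapq

def interleave(logs):
    # Deterministically merge per-participant append-only logs into one total
    # order (sequence number first, participant id as tie-breaker), so every
    # client that holds the same logs derives the same operation sequence.
    # The exact ordering rule is illustrative; L*'s differs in its details.
    runs = [[(seq, pid, op) for seq, op in enumerate(log)]
            for pid, log in sorted(logs.items())]
    return [op for _, _, op in heapq.merge(*runs)]
```

Because the order depends only on the logs' contents, two clients that fetch the same logs from the DHT agree on the state of the data structure without any locking.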
</description>
<pubDate>Sat, 01 Mar 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149980</guid>
<dc:date>2003-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Consistency Proofs on a Committed Database</title>
<link>https://hdl.handle.net/1721.1/149979</link>
<description>Efficient Consistency Proofs on a Committed Database
Ostrovsky, Rafail; Rackoff, Charles; Smith, Adam
A consistent query protocol allows a database owner to publish a very short string c which commits her to a particular database D with a special consistency property (i.e., given c, every allowable query has a unique and well-defined answer with respect to D). Moreover, when a user makes a query, any server hosting the database can answer the query and provide a very short proof P that the answer is well-defined, unique, and consistent with c (and hence with D). One potential application of consistent query protocols is guaranteeing the consistency of many replicated copies of D---the owner can publish c, and users can verify the consistency of a query to some copy of D by making sure P is consistent with c. This strong guarantee holds even for owners who try to cheat while creating c. The task of consistent query protocols was originally proposed for membership queries by Micali and Rabin, and subsequently and independently by Kilian. In this setting a server can prove to a client whether or not a given key is present in a database, based only on a short public commitment c. We strengthen their results in several ways. For membership queries, we improve the communication complexity; more importantly, we provide protocols for more general types of queries and more general relational databases. For example, we consider databases in which entries have several keys and where we allow range queries (e.g., we allow a client to ask for all entries within a certain age range and a certain salary range). Towards this goal, we introduce query algorithms with certain inherent robustness properties---called data-robust algorithms---and show how this robustness can be achieved. In particular, we illustrate our general technique by constructing an efficient data-robust algorithm for proving consistency of orthogonal range queries (a particular case of a ``join'' query).
The server's proof convinces the client not only that all the matching entries provided are in D, but also that no others are present. Our guarantees hold even if the answer is the empty set. In the case of one-dimensional range queries we also show a new data-hiding technique---called explicit hashing---which allows us to execute a consistent query protocol and at the same time efficiently protect the privacy of all other information in the database. In particular, we avoid the NP reductions required in a generic zero-knowledge proof.
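The membership-query setting of Micali-Rabin and Kilian can be illustrated with a standard Merkle-tree commitment. This is a sketch only, assuming a power-of-two number of entries; it does not capture the paper's range-query or privacy extensions.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build(leaves):
    # Merkle tree over the database entries; the root is the short commitment c.
    # (Assumes a power-of-two number of leaves for brevity.)
    level = [h(l) for l in leaves]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, idx):
    # Short proof: the sibling hashes along the path from leaf idx to the root.
    path = []
    for level in tree[:-1]:
        path.append((idx & 1, level[idx ^ 1]))
        idx //= 2
    return path

def verify(root, leaf, path):
    # A client checks the server's answer against the commitment alone.
    acc = h(leaf)
    for leaf_is_right, sib in path:
        acc = h(sib + acc) if leaf_is_right else h(acc + sib)
    return acc == root
```

The proof is logarithmic in the database size, and a server cannot produce accepting proofs for two conflicting answers without breaking the hash function, which is the consistency property the protocol generalizes to richer queries.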
</description>
<pubDate>Sat, 01 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149979</guid>
<dc:date>2003-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>REX: Secure, modular remote execution through file descriptor passing</title>
<link>https://hdl.handle.net/1721.1/149978</link>
<description>REX: Secure, modular remote execution through file descriptor passing
Kaminsky, Michael; Peterson, Eric; Fu, Kevin; Mazières, David; Kaashoek, M. Frans
The ubiquitous SSH package has demonstrated the importance of secure remote login and execution. This paper presents a new system, REX, designed to provide remote login and execution in the context of the SFS secure distributed file system. REX departs from traditional remote login design and is built around two main mechanisms---file descriptor passing and a user agent process. File descriptor passing allows REX to be split into several smaller pieces; privileged code can run as its own process to provide enhanced security guarantees. REX also emulates secure file descriptor passing over network connections, allowing users to build extensions to REX outside of the core REX software. REX uses and extends SFS's agent mechanism to provide a transparent distributed computing environment to users. The agent stores private keys, server nicknames, and other per-user configuration state; REX makes the SFS agent available to programs that it executes on remote machines. We have an implementation of REX and demonstrate that its flexibility does not come at the cost of performance. Initial REX connections are comparable to those of SSH in speed, while subsequent connections are much faster because REX exploits the SFS agent to cache connection state, avoiding costly public-key operations.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149978</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The AEGIS Processor Architecture for Tamper-Evident and Private Tamper-Resistant Processing</title>
<link>https://hdl.handle.net/1721.1/149977</link>
<description>The AEGIS Processor Architecture for Tamper-Evident and Private Tamper-Resistant Processing
Suh, G. Edward; Clarke, Dwaine; Gassend, Blaise; van Dijk, Marten; Devadas, Srinivas
We describe the architecture of the AEGIS processor which can be used to build computing systems secure against both physical and software attacks. AEGIS assumes that the operating system and all components external to it, such as memory, are untrusted. AEGIS provides tamper-evident, authenticated environments in which any physical or software tampering by the adversary is guaranteed to be detected, and private and authenticated, tamper-resistant environments where additionally the adversary is unable to obtain any information about software or data by tampering with, or otherwise observing, system operation. AEGIS enables many applications, such as commercial grid computing, software licensing, and digital rights management. We present a new encryption/decryption method that successfully hides a significant portion of encryption/decryption latency, in comparison to a conventional direct encryption scheme. Efficient memory encryption and integrity verification enable the implementation of a secure computing system with the only trusted component being a single-chip AEGIS CPU. Detailed simulation results indicate that the performance overhead of security mechanisms in AEGIS is reasonable.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149977</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Web Browsing for Mobile Clients using HTTP Compression</title>
<link>https://hdl.handle.net/1721.1/149976</link>
<description>Efficient Web Browsing for Mobile Clients using HTTP Compression
Krashinsky, Ronny
Efficient web browsing on mobile computers presents a unique challenge.  These machines are different from other classes of client computers since they have relatively low-bandwidth connections and they are battery-powered and therefore limited by their energy consumption.  However, they tend to interact with the same servers for the delivery of web content.  This project investigates optimizing the final critical link between a mobile client and a stationary base station by compressing HTTP request and response messages.  Using a split proxy design, compression of individual request messages reduces bandwidth by 26% to 34% across a variety of benchmark traces, and applying compression to response messages yields savings of 59% to 82% of the compressible data.  Higher compression rates are achieved by using streaming compression algorithms to compress the streams of request and response messages.  In this case, the bandwidth for requests sees an order of magnitude improvement, and the response stream obtains additional savings of 7% to 25% on top of the savings achieved with per-response compression.
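The advantage of streaming compression over per-message compression can be sketched with a single shared compressor whose history spans the whole request stream; the request contents below are illustrative, not taken from the benchmark traces.

```python
import zlib

# Twenty requests with heavily repeated headers, as in typical browsing.
requests = [b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n\r\n"] * 20

# Per-message compression: each request is compressed independently.
per_msg = sum(len(zlib.compress(r)) for r in requests)

# Streaming compression: one compressor is shared across the whole stream,
# so each request compresses against the history of all earlier requests;
# Z_SYNC_FLUSH makes each message individually transmittable.
comp = zlib.compressobj()
stream = sum(len(comp.compress(r)) + len(comp.flush(zlib.Z_SYNC_FLUSH))
             for r in requests)

assert stream < per_msg  # shared history is where the large request-stream savings come from
```

After the first request, each later one is mostly back-references into the shared dictionary, which is why the request stream sees an order-of-magnitude improvement over compressing each message in isolation.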
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149976</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physical Random Functions</title>
<link>https://hdl.handle.net/1721.1/149975</link>
<description>Physical Random Functions
Gassend, Blaise
In general, secure protocols assume that participants are able to maintain secret key information. In practice, this assumption is often incorrect as an increasing number of devices are vulnerable to physical attacks.  Typical examples of vulnerable devices are smartcards and Automated Teller Machines.   To address this issue, Physical Random Functions are introduced. These are Random Functions that are physically tied to a particular device. To show that Physical Random Functions solve the initial problem, it must be shown that they can be made, and that it is possible to use them to provide secret keys for higher level protocols. Experiments with Field Programmable Gate Arrays are used to evaluate the feasibility of Physical Random Functions in silicon.
</description>
<pubDate>Sat, 01 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149975</guid>
<dc:date>2003-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Theory of Structural Subtyping</title>
<link>https://hdl.handle.net/1721.1/149974</link>
<description>On the Theory of Structural Subtyping
Kuncak, Viktor; Rinard, Martin
We show that the first-order theory of structural subtyping of non-recursive types is decidable. Let Sigma be a language consisting of function symbols (representing type constructors) and C a decidable structure in the relational language L containing a binary relation &lt;. C represents primitive types; &lt; represents a subtype ordering. We introduce the notion of the Sigma-term-power of C, which generalizes the structure arising in structural subtyping. The domain of the Sigma-term-power of C is the set of Sigma-terms over the set of elements of C. We show that the decidability of the first-order theory of C implies the decidability of the first-order theory of the Sigma-term-power of C. This result implies the decidability of the first-order theory of structural subtyping of non-recursive types. Our decision procedure is based on quantifier elimination and makes use of quantifier elimination for term algebras and the Feferman-Vaught construction for products of decidable structures. We also explore connections between the theory of structural subtyping of recursive types and the monadic second-order theory of tree-like structures. In particular, we give an embedding of the monadic second-order theory of the infinite binary tree into the first-order theory of structural subtyping of recursive types.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149974</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boosting Fault-Tolerance in Asynchronous Message Passing Systems is Impossible</title>
<link>https://hdl.handle.net/1721.1/149973</link>
<description>Boosting Fault-Tolerance in Asynchronous Message Passing Systems is Impossible
Attie, Paul C.; Lynch, Nancy A.; Rajsbaum, Sergio
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149973</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Economic Mechanisms for Efficient Wireless Coexistence</title>
<link>https://hdl.handle.net/1721.1/149972</link>
<description>Economic Mechanisms for Efficient Wireless Coexistence
Aftab, Omar
</description>
<pubDate>Thu, 01 Aug 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149972</guid>
<dc:date>2002-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Detection and Repair of Errors in Data Structures</title>
<link>https://hdl.handle.net/1721.1/149971</link>
<description>Automatic Detection and Repair of Errors in Data Structures
Demsky, Brian; Rinard, Martin
We present a system that accepts a specification of key data structure constraints, then dynamically detects and repairs violations of these constraints. Our experience using our system indicates that the specifications are relatively easy to develop once one understands the data structures. Furthermore, for our set of benchmark applications, our system can effectively repair errors to deliver consistent data structures that allow the program to continue to operate successfully within its designed operating envelope.
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149971</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Type System and Analysis for the Automatic Extraction and Enforcement of Design Information</title>
<link>https://hdl.handle.net/1721.1/149970</link>
<description>A Type System and Analysis for the Automatic Extraction and Enforcement of Design Information
Lam, Patrick; Rinard, Martin
We present a new type system and associated type checker, analysis, and model extraction algorithms for automatically extracting models that capture aspects of the design of the program. Our type system enables the developer to place a _token_ on each object; this token serves as the object's representative during the analysis and model extraction. The polymorphism in our type system enables the use of general-purpose classes whose instances may serve different purposes in the computation; programmers may also hide the details of internal data structures by placing the same token on all of the objects in these data structures.  Our combined type system and analysis provide the model extraction algorithms with sound heap aliasing information. Our algorithms can therefore extract both structural models that characterize object referencing relationships and behavioral models that capture indirect interactions mediated by objects in the heap. Previous approaches, in contrast, in the absence of aliasing information, have focused on control-flow interactions that take place at procedure call boundaries. We have implemented our type checker, analysis, and model extraction algorithms and used them to produce design models. Our experience indicates that it is straightforward to produce the token annotations and that the extracted models provide useful insight into the structure and behavior of the program.
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149970</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Dynamic Primary View Group Communication Service</title>
<link>https://hdl.handle.net/1721.1/149969</link>
<description>A Dynamic Primary View Group Communication Service
De Prisco, Roberto; Fekete, Alan; Lynch, Nancy A.; Shvartsman, Alexander A.
</description>
<pubDate>Fri, 01 Nov 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149969</guid>
<dc:date>2002-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardware Mechanisms for Memory Integrity Checking</title>
<link>https://hdl.handle.net/1721.1/149968</link>
<description>Hardware Mechanisms for Memory Integrity Checking
Suh, G. Edward; Clarke, Dwaine; Gassend, Blaise; van Dijk, Marten; Devadas, Srinivas
Memory integrity verification is a useful primitive when implementing secure processors that are resistant to attacks on hardware components. This paper proposes new hardware schemes to verify the integrity of untrusted external memory using a very small amount of trusted on-chip storage. Our schemes maintain incremental multiset hashes of all memory reads and writes at run-time, and can verify a sequence of memory operations at a later time. We study the advantages and disadvantages of the two new schemes and two existing integrity checking schemes, MACs and hash trees, when implemented in hardware in a microprocessor. Simulations show that the new schemes outperform existing schemes of equivalent functionality when integrity verification is infrequent.
</description>
<pubDate>Fri, 01 Nov 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149968</guid>
<dc:date>2002-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Offline Integrity Checking of Untrusted Storage</title>
<link>https://hdl.handle.net/1721.1/149967</link>
<description>Offline Integrity Checking of Untrusted Storage
Clarke, Dwaine; Gassend, Blaise; Suh, G. Edward; van Dijk, Marten; Devadas, Srinivas
</description>
<pubDate>Fri, 01 Nov 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149967</guid>
<dc:date>2002-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Access-Controlled Resource Discovery for Pervasive Networks</title>
<link>https://hdl.handle.net/1721.1/149966</link>
<description>Access-Controlled Resource Discovery for Pervasive Networks
Raman, Sanjay; Clarke, Dwaine; Burnside, Matt; Devadas, Srinivas; Rivest, Ronald L.
Networks of the future will be characterized by a variety of computational devices that display a level of dynamism not seen in traditional wired networks. Because of the dynamic nature of these networks, resource discovery is one of the fundamental problems that must be faced. While resource discovery systems are not a novel concept, securing them in an efficient and scalable way is challenging. This paper describes the design and implementation of an architecture for access-controlled resource discovery. The system achieves this goal by integrating access control with the Intentional Naming System (INS), a resource discovery and service location system. The integration is scalable, efficient, and fits well within a proxy-based security framework designed for dynamic networks. We provide performance experiments that show how our solution outperforms existing schemes. The result is a system that provides secure, access-controlled resource discovery that can scale to large numbers of resources and users.
</description>
<pubDate>Sun, 01 Sep 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149966</guid>
<dc:date>2002-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Type System for Safe Region-Based Memory Management in Real-Time Java</title>
<link>https://hdl.handle.net/1721.1/149965</link>
<description>A Type System for Safe Region-Based Memory Management in Real-Time Java
Salcianu, Alexandru; Boyapati, Chandrasekhar; Beebee, William S., Jr.; Rinard, Martin
The Real-Time Specification for Java (RTSJ) allows a program to create real-time threads with hard real time constraints. Real-time threads use immortal memory and region-based memory management to avoid unbounded pauses caused by interference from the garbage collector. The RTSJ uses runtime checks to ensure that deleting a region does not create dangling references and that real-time threads do not access references to objects allocated in the garbage-collected heap. This paper presents a static type system that guarantees that these runtime checks will never fail for well-typed programs. Our type system therefore 1) provides an important safety guarantee for real-time programs and 2) makes it possible to eliminate the runtime checks and their associated overhead. Our system also makes several contributions over previous work on region types. For object-oriented programs, it combines region types and ownership types in a unified type system framework. For multithreaded programs, it allows long-lived threads to share objects without using the heap and without having memory leaks. For real-time programs, it ensures that real-time threads do not interfere with the garbage collector. We have implemented several programs in our system. Our experience indicates that our type system is sufficiently expressive and requires little programming overhead. We also ran these programs on our RTSJ platform. Our experiments show that eliminating the RTSJ runtime checks using a static type system can significantly decrease the execution time of a real-time program.
</description>
<pubDate>Fri, 01 Nov 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149965</guid>
<dc:date>2002-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Formal Venture into Reliable Multicast Territory</title>
<link>https://hdl.handle.net/1721.1/149964</link>
<description>A Formal Venture into Reliable Multicast Territory
Livadas, Carolos; Lynch, Nancy A.
</description>
<pubDate>Fri, 01 Nov 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149964</guid>
<dc:date>2002-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Case for Exploiting Packet Loss Locality in Multicast Loss Recovery</title>
<link>https://hdl.handle.net/1721.1/149963</link>
<description>The Case for Exploiting Packet Loss Locality in Multicast Loss Recovery
Livadas, Carolos; Keidar, Idit
</description>
<pubDate>Tue, 01 Oct 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149963</guid>
<dc:date>2002-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Blueware: Bluetooth Simulator for ns</title>
<link>https://hdl.handle.net/1721.1/149962</link>
<description>Blueware: Bluetooth Simulator for ns
Tan, Godfrey
</description>
<pubDate>Tue, 01 Oct 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149962</guid>
<dc:date>2002-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tetris is Hard, Even to Approximate</title>
<link>https://hdl.handle.net/1721.1/149961</link>
<description>Tetris is Hard, Even to Approximate
Demaine, Erik D.; Hohenberger, Susan; Liben-Nowell, David
</description>
<pubDate>Tue, 01 Oct 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149961</guid>
<dc:date>2002-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Facility Location Problem with Concave Cost Functions</title>
<link>https://hdl.handle.net/1721.1/149960</link>
<description>The Facility Location Problem with Concave Cost Functions
Hajiaghayi, Mohammad Taghi; Mahdian, Mohammad; Mirrokni, Vahab S.
</description>
<pubDate>Sun, 01 Sep 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149960</guid>
<dc:date>2002-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalar Operand Networks: On-chip Interconnect for ILP in Partitioned Architectures</title>
<link>https://hdl.handle.net/1721.1/149959</link>
<description>Scalar Operand Networks: On-chip Interconnect for ILP in Partitioned Architectures
Taylor, Michael Bedford; Lee, Walter; Amarasinghe, Saman; Agarwal, Anant
</description>
<pubDate>Mon, 01 Jul 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149959</guid>
<dc:date>2002-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ownership Types and Safe Lazy Upgrades in Object-Oriented Databases</title>
<link>https://hdl.handle.net/1721.1/149958</link>
<description>Ownership Types and Safe Lazy Upgrades in Object-Oriented Databases
Boyapati, Chandrasekhar; Liskov, Barbara H.; Shrira, Liuba
</description>
<pubDate>Mon, 01 Jul 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149958</guid>
<dc:date>2002-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Caches and Merkle Trees for Efficient Memory Authentication</title>
<link>https://hdl.handle.net/1721.1/149957</link>
<description>Caches and Merkle Trees for Efficient Memory Authentication
Gassend, Blaise; Suh, G. Edward; Clarke, Dwaine; van Dijk, Marten; Devadas, Srinivas
</description>
<pubDate>Mon, 01 Jul 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149957</guid>
<dc:date>2002-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Delay-Based Circuit Authentication With Application to Key Cards</title>
<link>https://hdl.handle.net/1721.1/149956</link>
<description>Delay-Based Circuit Authentication With Application to Key Cards
Gassend, Blaise; Clarke, Dwaine; van Dijk, Marten; Devadas, Srinivas
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149956</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Safe Runtime Downcasts With Ownership Types</title>
<link>https://hdl.handle.net/1721.1/149955</link>
<description>Safe Runtime Downcasts With Ownership Types
Boyapati, Chandrasekhar; Lee, Robert; Rinard, Martin
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149955</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Generation and Checking of Program Specifications</title>
<link>https://hdl.handle.net/1721.1/149954</link>
<description>Automatic Generation and Checking of Program Specifications
Nimmer, Jeremy
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149954</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Safe Lazy Software Upgrades in Object-Oriented Databases</title>
<link>https://hdl.handle.net/1721.1/149953</link>
<description>Safe Lazy Software Upgrades in Object-Oriented Databases
Liskov, Barbara H.; Moh, Chuang-Hue; Richman, Steven; Shrira, Liuba; Cheung, Yin; Boyapati, Chandrasekhar
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149953</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Abstraction with Byzantine Fault-Tolerance</title>
<link>https://hdl.handle.net/1721.1/149952</link>
<description>Combining Abstraction with Byzantine Fault-Tolerance
Rodrigues, Rodrigo
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149952</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Scalable Byzantine Fault Tolerant Secure Domain Name Service</title>
<link>https://hdl.handle.net/1721.1/149951</link>
<description>A Scalable Byzantine Fault Tolerant Secure Domain Name Service
Ahmed, Sarah
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149951</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Test Suites via Generated Specifications</title>
<link>https://hdl.handle.net/1721.1/149950</link>
<description>Improving Test Suites via Generated Specifications
Harder, Michael
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149950</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Trusted Third-Party Computation Service</title>
<link>https://hdl.handle.net/1721.1/149949</link>
<description>A Trusted Third-Party Computation Service
Ajmani, Sameer; Morris, Robert T.; Liskov, Barbara H.
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149949</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Trusted Execution Platform for Multiparty Computation</title>
<link>https://hdl.handle.net/1721.1/149948</link>
<description>A Trusted Execution Platform for Multiparty Computation
Ajmani, Sameer
</description>
<pubDate>Fri, 01 Sep 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149948</guid>
<dc:date>2000-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlled Physical Unknown Functions: Applications to Secure Smartcards and Certified Execution</title>
<link>https://hdl.handle.net/1721.1/149947</link>
<description>Controlled Physical Unknown Functions: Applications to Secure Smartcards and Certified Execution
Gassend, Blaise; Clarke, Dwaine; van Dijk, Marten; Devadas, Srinivas
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149947</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Double-Pulsed Set-Conditional-Reset Flip-Flop</title>
<link>https://hdl.handle.net/1721.1/149946</link>
<description>A Double-Pulsed Set-Conditional-Reset Flip-Flop
Ma, Albert; Asanović, Krste
</description>
<pubDate>Wed, 01 May 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149946</guid>
<dc:date>2002-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The IOA Simulator</title>
<link>https://hdl.handle.net/1721.1/149945</link>
<description>The IOA Simulator
Kaynar, Dilsun Kırlı; Chefter, Anna; Dean, Laura; Garland, Stephen J.; Lynch, Nancy A.; Ne Win, Toh; Ramírez-Robredo, Antonio
</description>
<pubDate>Mon, 01 Jul 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149945</guid>
<dc:date>2002-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards An Extensible Virtual Machine</title>
<link>https://hdl.handle.net/1721.1/149944</link>
<description>Towards An Extensible Virtual Machine
Boyapati, Chandrasekhar
</description>
<pubDate>Mon, 01 Apr 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149944</guid>
<dc:date>2002-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying Distributed Algorithms via Dynamic Analysis and Theorem Proving</title>
<link>https://hdl.handle.net/1721.1/149943</link>
<description>Verifying Distributed Algorithms via Dynamic Analysis and Theorem Proving
Ne Win, Toh; Ernst, Michael D.
</description>
<pubDate>Wed, 01 May 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149943</guid>
<dc:date>2002-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Early-Delivery Dynamic Atomic Broadcast</title>
<link>https://hdl.handle.net/1721.1/149942</link>
<description>Early-Delivery Dynamic Atomic Broadcast
Bar-Joseph, Ziv; Keidar, Idit; Lynch, Nancy A.
</description>
<pubDate>Mon, 01 Apr 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149942</guid>
<dc:date>2002-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Type System for Preventing Data Races and Deadlocks in Java Programs</title>
<link>https://hdl.handle.net/1721.1/149941</link>
<description>A Type System for Preventing Data Races and Deadlocks in Java Programs
Boyapati, Chandrasekhar; Lee, Robert; Rinard, Martin
</description>
<pubDate>Fri, 01 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149941</guid>
<dc:date>2002-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exponential Speedup of Fixed Parameter Algorithms on K_{3,3}-minor-free or K_5-minor-free Graphs</title>
<link>https://hdl.handle.net/1721.1/149940</link>
<description>Exponential Speedup of Fixed Parameter Algorithms on K_{3,3}-minor-free or K_5-minor-free Graphs
Demaine, Erik D.; Hajiaghayi, Mohammad Taghi; Thilikos, Dimitrios M.
</description>
<pubDate>Fri, 01 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149940</guid>
<dc:date>2002-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>L+: Scalable Landmark Routing and Address Lookup for Multi-hop Wireless Networks</title>
<link>https://hdl.handle.net/1721.1/149939</link>
<description>L+: Scalable Landmark Routing and Address Lookup for Multi-hop Wireless Networks
Chen, Benjie; Morris, Robert T.
</description>
<pubDate>Fri, 01 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149939</guid>
<dc:date>2002-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of Loss Rate on Ad Hoc Wireless Routing</title>
<link>https://hdl.handle.net/1721.1/149938</link>
<description>Effects of Loss Rate on Ad Hoc Wireless Routing
De Couto, Douglas S.J.; Aguayo, Daniel; Chambers, Benjamin A.; Morris, Robert T.
This paper uses measurements from two deployed wireless ad hoc networks to illustrate the effects of link loss rates on routing protocol performance. Measurements of these networks show that the radio links between the majority of nodes have substantial loss rates. These loss rates are high enough to prevent existing ad hoc routing protocols from using the links. Link-level retransmission can mask high loss rates, at the cost of substantial decreases in throughput. Simulations, driven by the observed loss rates, show that the shortest paths chosen by existing routing protocols tend to find routes with much less capacity than is available along the best route. Based on these observations, we present a routing metric intended to allow routing protocols to find good routes in wireless ad hoc networks. The metric is the expected total number of transmissions required to deliver a packet along a route. This metric favors routes with high throughput and low total impact on spectrum. It is expected to perform better than existing techniques that eliminate links based on loss rate thresholds.
</description>
<pubDate>Fri, 01 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149938</guid>
<dc:date>2002-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Write Barrier Removal by Static Analysis</title>
<link>https://hdl.handle.net/1721.1/149937</link>
<description>Write Barrier Removal by Static Analysis
Zee, Karen; Rinard, Martin
</description>
<pubDate>Fri, 01 Feb 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149937</guid>
<dc:date>2002-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Silicon Physical Unknown Functions and Secure Smartcards</title>
<link>https://hdl.handle.net/1721.1/149936</link>
<description>Silicon Physical Unknown Functions and Secure Smartcards
Gassend, Blaise; Clarke, Dwaine; van Dijk, Marten; Devadas, Srinivas
</description>
<pubDate>Wed, 01 May 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149936</guid>
<dc:date>2002-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fine-Grain Dynamic Leakage Reduction</title>
<link>https://hdl.handle.net/1721.1/149935</link>
<description>Fine-Grain Dynamic Leakage Reduction
Heo, Seongmoo; Barr, Kenneth; Hampton, Mark; Asanović, Krste
</description>
<pubDate>Tue, 01 Jan 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149935</guid>
<dc:date>2002-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leakage-Biased Domino Circuits for Dynamic Fine-Grain Leakage Reduction</title>
<link>https://hdl.handle.net/1721.1/149934</link>
<description>Leakage-Biased Domino Circuits for Dynamic Fine-Grain Leakage Reduction
Heo, Seongmoo; Asanović, Krste
</description>
<pubDate>Tue, 01 Jan 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149934</guid>
<dc:date>2002-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Information-Theoretic Approach to Interest Making</title>
<link>https://hdl.handle.net/1721.1/149933</link>
<description>An Information-Theoretic Approach to Interest Making
Koh, Waikit
The Internet has brought a new meaning to the term communities. Geography is no longer a barrier to international communications. However, the paradigm of meeting new interesting people remains entrenched in traditional means; meeting new interesting people on the Internet still relies on chance and contacts. This thesis explores a new approach towards matching users in online communities in an effective fashion. Instead of using the conventional feature vector scheme to profile users, each user is represented by a personalized concept hierarchy (or an ontology) that is learnt from the user's behavior in the system. Each concept hierarchy is then interpreted within the Information Theory framework as a probabilistic decision tree. The matching algorithm uses the Kullback-Leibler distance as a measure of deviation between two probabilistic decision trees. Thus, in an online community, where a personalized concept hierarchy represents each user, the Kullback-Leibler distance imposes a full-order rank on the level of similarity of all the users with respect to a particular user in question. The validity and utility of the proposed scheme of matching users is then applied in a set of simulations, using the feature-vector-overlap measure as a baseline. The results of the simulations show that the Kullback-Leibler distance, when used in conjunction with the concept hierarchy, is more robust to noise and is able to make a stronger and more distinctive classification of users into similar groups in comparison to the conventional keyword-overlap scheme. A graphical agent system that relies upon the ontology-based interest matching algorithm, called the Collaborative Sanctioning Network, is also described in this thesis.
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149933</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>2RegionRED: a Congestion Control Mechanism for the High Speed Internet</title>
<link>https://hdl.handle.net/1721.1/149932</link>
<description>2RegionRED: a Congestion Control Mechanism for the High Speed Internet
Wang, Karen
This thesis proposes a new Active Queue Management (AQM) scheme called 2RegionRED. It is superior to the classic Random Early Detection (RED) algorithm in that there is an intuitive way to set its parameters and it is self-tuning. Its design is motivated by an original principle: sustain the smallest queue possible while still allowing for maximum link utilization. 2RegionRED uses the number of competing TCPs as its measure of load; however, it does not keep an explicit count. The result is a novel algorithm that adjusts the drop rate according to two regions of operation: that requiring less than, and that requiring greater than, one drop per round-trip time (RTT). This thesis also analyzes methods for measuring the persistent queue and proposes the ABSMIN method. Simulations of 2RegionRED using ABSMIN reveal some difficulties and insights. Basic comparisons to the Adaptive RED and Flow Proportional Queuing (FPQ) adaptive algorithms are also demonstrated through simulation.
</description>
<pubDate>Sat, 01 Dec 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149932</guid>
<dc:date>2001-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring Congestion Sharing and Path Characteristics from Packet Interarrival Times</title>
<link>https://hdl.handle.net/1721.1/149931</link>
<description>Inferring Congestion Sharing and Path Characteristics from Packet Interarrival Times
Katabi, Dina; Blake, Charles
</description>
<pubDate>Sat, 01 Dec 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149931</guid>
<dc:date>2001-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid I/O Automata</title>
<link>https://hdl.handle.net/1721.1/149930</link>
<description>Hybrid I/O Automata
Lynch, Nancy A.; Segala, Roberto; Vaandrager, Frits
Hybrid systems are systems that exhibit a combination of discrete and continuous behavior. Typical hybrid systems include computer components, which operate in discrete program steps, and real-world components, whose behavior over time intervals evolves according to physical constraints. Important examples of hybrid systems include automated transportation systems, robotics systems, process control systems, systems of embedded devices, and mobile computing systems. Such systems can be very complex, and very difficult to describe and analyze. This paper presents the Hybrid Input/Output Automaton (HIOA) modeling framework, a basic mathematical framework to support description and analysis of hybrid systems. An important feature of this model is its support for decomposing hybrid system descriptions. In particular, the framework includes a notion of external behavior for a hybrid I/O automaton, which captures its discrete and continuous interactions with its environment. The framework also defines what it means for one HIOA to implement another, based on an inclusion relationship between their external behavior sets, and defines a notion of simulation, which provides a sufficient condition for demonstrating implementation relationships. The framework also includes a composition operation for HIOAs, which respects external behavior, and a notion of receptiveness, which implies that an HIOA does not block the passage of time. The framework is intended to support analysis methods from both computer science and control theory. This work is a simplification of an earlier version of the HIOA model [49, 50]. The main simplification in the new model is a clearer separation between the mechanisms used to model discrete and continuous interaction between components. In particular, the new model removes the dual use of external variables for discrete and continuous interaction.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149930</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forming Scatternets from Bluetooth Personal Area Networks</title>
<link>https://hdl.handle.net/1721.1/149929</link>
<description>Forming Scatternets from Bluetooth Personal Area Networks
Tan, Godfrey; Mui, Allen; Guttag, John V.; Balakrishnan, Hari
There is increasing interest in wireless ad hoc networks built from portable devices equipped with short-range wireless network interfaces. This paper addresses issues related to internetworking such networks to form larger "scatternets." Within the constraints imposed by the emerging standard Bluetooth link layer and MAC protocol, we describe an efficient online topology formation algorithm, called TSF (Tree Scatternet Formation), to build scatternets. TSF connects nodes in a tree structure that simplifies packet routing and scheduling. The design allows nodes to arrive and leave arbitrarily, incrementally building the topology and healing partitions when they occur. We present simulation results that show that TSF has low tree formation latency and also generates an efficient topology for forwarding packets.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149929</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable, Controlled Imagery Capture in Urban Environments</title>
<link>https://hdl.handle.net/1721.1/149928</link>
<description>Scalable, Controlled Imagery Capture in Urban Environments
Teller, Seth
We describe the design considerations underlying a system for scalable, automated capture of precisely controlled imagery in urban scenes. The system operates for architectural scenes in which, from every camera position, some two vanishing points are visible. It has been used to capture thousands of controlled images in outdoor environments spanning hundreds of meters. The proposed system architecture forms the foundation for a future, fully robotic outdoor mapping capability for urban areas, analogous to existing, satellite-based robotic mapping systems which acquire images and models of natural terrain. Four key ideas distinguish our approach from other methods. First, our sensor acquires georeferencing metadata with every image, enabling related images to be efficiently identified and registered. Second, the sensor acquires omni-directional images; we show strong experimental evidence that such images are fundamentally more powerful observations than conventional (narrow-FOV) images. Third, the system uses a probabilistic, projective error formulation to account for uncertainty. By treating measurement error in an appropriate depth-free framework, and by deferring decisions about camera calibration and scene structure until many noisy observations can be fused, the system achieves superior robustness and accuracy. Fourth, the system's computational requirements scale linearly in the input size, the area of the acquisition region, and the size of the output model. This is in contrast to most previous methods, which either assume constant-size inputs or exhibit quadratic running time (or worse) asymptotically. These attributes enable the system to operate in a regime of scale and physical extent which is unachievable by any other method, whether manual or automated. Consequently, it can acquire the most complex calibrated terrestrial image sets in existence, while operating faster than any existing manual or algorithmic method.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149928</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Location Proxies and Intermediate Node Forwarding for Practical Geographic Forwarding</title>
<link>https://hdl.handle.net/1721.1/149927</link>
<description>Location Proxies and Intermediate Node Forwarding for Practical Geographic Forwarding
De Couto, Douglas S.J.; Morris, Robert T.
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149927</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Generation and Checking of Program Specifications</title>
<link>https://hdl.handle.net/1721.1/149926</link>
<description>Automatic Generation and Checking of Program Specifications
Nimmer, Jeremy W.; Ernst, Michael D.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149926</guid>
</item>
<item>
<title>Roles Are Really Great!</title>
<link>https://hdl.handle.net/1721.1/149925</link>
<description>Roles Are Really Great!
Kuncak, Viktor; Lam, Patrick; Rinard, Martin
</description>
<pubDate>Wed, 01 Aug 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149925</guid>
<dc:date>2001-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Cost of Fault-Tolerant Consensus When There Are No Faults - A Tutorial</title>
<link>https://hdl.handle.net/1721.1/149924</link>
<description>On the Cost of Fault-Tolerant Consensus When There Are No Faults - A Tutorial
Keidar, Idit; Rajsbaum, Sergio
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149924</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using precise feedback for controlling congestion in the Internet</title>
<link>https://hdl.handle.net/1721.1/149923</link>
<description>Using precise feedback for controlling congestion in the Internet
Katabi, Dina; Handley, Mark
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149923</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chord: A scalable peer-to-peer lookup service for Internet applications</title>
<link>https://hdl.handle.net/1721.1/149922</link>
<description>Chord: A scalable peer-to-peer lookup service for Internet applications
Stoica, Ion; Morris, Robert T.; Karger, David R.; Kaashoek, M. Frans; Balakrishnan, Hari
</description>
<pubDate>Thu, 01 Mar 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149922</guid>
<dc:date>2001-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Client Authentication on the Web</title>
<link>https://hdl.handle.net/1721.1/149921</link>
<description>Client Authentication on the Web
Fu, Kevin; Sit, Emil; Smith, Kendra; Feamster, Nick
</description>
<pubDate>Thu, 01 Mar 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149921</guid>
<dc:date>2001-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Byzantine Fault Tolerance</title>
<link>https://hdl.handle.net/1721.1/149920</link>
<description>Practical Byzantine Fault Tolerance
Castro, Miguel
Our growing reliance on online services accessible on the Internet demands highly-available systems that provide correct service without interruptions. Byzantine faults such as software bugs, operator mistakes, and malicious attacks are the major cause of service interruptions. This thesis describes a new replication algorithm, BFT, that can be used to build highly-available systems that tolerate Byzantine faults. It shows, for the first time, how to build Byzantine-fault-tolerant systems that can be used in practice to implement real services because they do not rely on unrealistic assumptions and they perform well. BFT works in asynchronous environments like the Internet, it incorporates mechanisms to defend against Byzantine-faulty clients, and it recovers replicas proactively. The recovery mechanism allows the algorithm to tolerate any number of faults over the lifetime of the system provided fewer than 1/3 of the replicas become faulty within a small window of vulnerability. The window may increase under a denial-of-service attack, but the algorithm can detect and respond to such attacks, and it can also detect when the state of a replica is corrupted by an attacker. BFT has been implemented as a generic program library with a simple interface. The BFT library provides a complete solution to the problem of building real services that tolerate Byzantine faults. We used the library to implement the first Byzantine-fault-tolerant NFS file system, BFS. The BFT library and BFS perform well because the library incorporates several important optimizations. The most important optimization is the use of symmetric cryptography to authenticate messages. Public-key cryptography, which was the major bottleneck in previous systems, is used only to exchange the symmetric keys. The performance results show that BFS performs 2% faster to 24% slower than production implementations of the NFS protocol that are not replicated. Therefore, we believe that the BFT library can be used to build practical systems that tolerate Byzantine faults.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149920</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Object Models, Heaps and Interpretations</title>
<link>https://hdl.handle.net/1721.1/149919</link>
<description>Object Models, Heaps and Interpretations
Rinard, Martin; Kuncak, Viktor
This paper explores the use of object models for specifying verifiable heap invariants. We define a simple language based on sets and relations and illustrate its use through examples. We give a formal semantics of the language by translation into predicate calculus and interpretation of predicates in terms of objects and references in the program heap.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149919</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perspectives on the Use of the Internet in Sri Lanka</title>
<link>https://hdl.handle.net/1721.1/149918</link>
<description>Perspectives on the Use of the Internet in Sri Lanka
Shrestha, Govinda; Amarasinghe, Saman
The survey examines the use of computers and the Internet in Sri Lanka from the perspective of the Internet Service Provider (ISP) members. It attempts to describe the general nature of IT use in terms of the availability, access, familiarity and general conditions associated with using computers and the Internet in the country.  The survey was conducted in July 1999. Questionnaires were e-mailed to 9448 ISP members in Sri Lanka, using e-mail addresses available to us at that time. Altogether, 560 members completed and returned questionnaires via e-mail to MIT's Laboratory for Computer Science.  Descriptive analysis of both quantitative and qualitative data was then conducted.    Major quantitative findings include:  *Over 60% of the respondents had been members of their respective ISPs for two years or fewer, and over half had first used a computer sometime during the 1990-99 period. *Sixty-two percent of the respondents had sent 10 or more e-mails per week over the past six (or fewer) months, and 52% had received 15 or more e-mails per week during the same period. *Nearly half of the respondents used a computer at home, and 48% indicated 33.6K as the baud rate used to connect to their ISPs. *Seventy-eight percent of the respondents spent 1-9 hours per week sending and receiving e-mails, and a large majority (68%) spent 1-9 hours surfing the Web. *A majority of the respondents were positive about conditions in the workplace, such as the number and quality of opportunities for training and skill development, the quality of telecommunications facilities, and the quality and reliability of Internet connections. *An overwhelming majority of the respondents indicated that ISP subscriber fees, computer hardware and software costs, and telecommunications charges were generally high. 
*Most respondents were generally positive about 1) the quality of access to the Internet, 2) the quality of access to e-mails, Web pages and other Internet-based features, and 3) various benefits of Internet access. *Seventy-one percent of the respondents were male; nearly half were younger than 35, and a large majority were educated (with at least a high school diploma).  Private company employees and people in business comprised over half of the respondents.  Major qualitative findings include: * It is crucially important to have faster access to information, increased communication at low cost, online education and training, and increased efficiency in business, professional and organizational activities. * Matters of considerable concern include the low bandwidth, the high telecommunications charges, the low quality of Internet services, and the lack of organized information and databases. * Greatly needed are a raising of awareness, a change in the current regulatory environment, an open government, and a set of local information resources to support commerce.
</description>
<pubDate>Wed, 01 Nov 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149918</guid>
<dc:date>2000-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Recovery of Camera Positions in Urban Scenes</title>
<link>https://hdl.handle.net/1721.1/149917</link>
<description>Automatic Recovery of Camera Positions in Urban Scenes
Antone, Matthew E.; Teller, Seth
Accurate camera calibration is crucial to the reconstruction of three-dimensional geometry and the recovery of photometric scene properties. Calibration involves the determination of intrinsic parameters (e.g. focal length, principal point, and radial lens distortion) and extrinsic parameters (orientation and position).  In urban scenes and other environments containing sufficient geometric structure, it is possible to decouple extrinsic calibration into rotational and translational components that can be treated separately, simplifying the registration problem. Here we present such a decoupled formulation and describe methods for automatically recovering the positions of a large set of cameras given intrinsic calibration, relative rotations, and approximate positions.  Our algorithm first estimates the directions of translation (up to an unknown scale factor) between adjacent camera pairs using point features but without requiring explicit correspondence between them. This technique combines the robustness and simplicity of a Hough transform with the accuracy of Monte Carlo expectation maximization. We then find a set of distances between the pairs that produces globally-consistent camera positions. Novel uncertainty formulations and match plausibility criteria improve reliability and accuracy.  We assess our system's performance using both synthetic data and a large set of real panoramic imagery. The system produces camera positions accurate to within 5 centimeters in image networks extending over hundreds of meters.
</description>
<pubDate>Fri, 01 Dec 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149917</guid>
<dc:date>2000-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fine-Grained Failover Using Connection Migration</title>
<link>https://hdl.handle.net/1721.1/149916</link>
<description>Fine-Grained Failover Using Connection Migration
Snoeren, Alex C.; Andersen, David G.; Balakrishnan, Hari
This paper presents a set of techniques for providing fine-grained failover of long-running connections across a distributed collection of replica servers, and is especially useful for fault-tolerant and load-balanced delivery of streaming media and telephony sessions. Our system achieves connection-level failover across both local- and wide-area server replication, without requiring a front-end transport- or application-layer switch. Our approach is enabled by the recently-developed end-to-end ``connection migration'' mechanism for transport protocols such as TCP, combined with a soft-state session synchronization protocol between replica servers.   The end result is a robust, fast, and fine-grained server failover mechanism that is transparent to both the client and server applications. We describe the details of our design and Linux implementation, as well as experiments with our implementation that show that this approach to failover is an attractive way to engineer robust systems for distributing long-running streams; connections suffer relatively low performance degradation even when server redirection occurs every few seconds, and overhead is negligible when compared to standard techniques. In particular, we observe the performance impact of migrating TCP connections depends on the length of time between migration and the most recent loss-recovery event.
</description>
<pubDate>Wed, 01 Nov 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149916</guid>
<dc:date>2000-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming Language Techniques for Modular Router Configurations</title>
<link>https://hdl.handle.net/1721.1/149915</link>
<description>Programming Language Techniques for Modular Router Configurations
Kohler, Eddie; Chen, Benjie; Kaashoek, M. Frans; Morris, Robert T.; Poletto, Massimiliano
This paper applies programming language techniques to a high-level system description, both to optimize the system and to prove useful properties about it. The system in question is Click, a modular software router framework. Click routers are built from components called elements. Elements are written in C++, but the user creates a configuration using a simple, declarative data flow language. This language is amenable to data flow analysis and other conventional programming language techniques. Applied to a router configuration, these techniques have high-level results---for example, optimizing the router or verifying its high-level properties. This paper describes several programming language techniques that have been useful in practice, including optimization tools that remove virtual function calls from router definitions and remove redundant parts of adjacent routers. We also present performance results for an extensively optimized standards-compliant IP router. On conventional PC hardware, this router can forward up to 456,000 64-byte packets per second.
</description>
<pubDate>Tue, 01 Aug 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149915</guid>
<dc:date>2000-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Boosting Algorithm for Combining Preferences</title>
<link>https://hdl.handle.net/1721.1/149914</link>
<description>An Efficient Boosting Algorithm for Combining Preferences
Iyer, Raj Dharmarajan, Jr.
The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms.
</description>
<pubDate>Sun, 01 Aug 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149914</guid>
<dc:date>1999-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>TrendFinder: Automated Detection of Alarmable Trends</title>
<link>https://hdl.handle.net/1721.1/149913</link>
<description>TrendFinder: Automated Detection of Alarmable Trends
Tsien, Christine L.
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149913</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>System Support for Bandwidth Management and Content Adaptation in Internet Applications</title>
<link>https://hdl.handle.net/1721.1/149912</link>
<description>System Support for Bandwidth Management and Content Adaptation in Internet Applications
Andersen, David; Bansal, Deepak; Curtis, Dorothy; Seshan, Srinivasan; Balakrishnan, Hari
</description>
<pubDate>Mon, 01 May 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149912</guid>
<dc:date>2000-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Short-Term Fairness in Wireless Media Access Protocols</title>
<link>https://hdl.handle.net/1721.1/149911</link>
<description>An Analysis of Short-Term Fairness in Wireless Media Access Protocols
Koksal, Can Emre; Kassab, Hisham; Balakrishnan, Hari
We investigate the problem of unfairness over short time scales in decentralized wireless media access (MAC) protocols.  Motivated by experimental results over a CSMA/CA-based WaveLAN wireless LAN that show starvation and degraded TCP performance, we see
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149911</guid>
</item>
<item>
<title>TCP-friendly Congestion Control for Real-time Streaming Applications</title>
<link>https://hdl.handle.net/1721.1/149910</link>
<description>TCP-friendly Congestion Control for Real-time Streaming Applications
Bansal, Deepak; Balakrishnan, Hari
This paper introduces and analyzes a class of nonlinear congestion control algorithms called binomial algorithms, motivated in part by the needs of streaming audio and video applications for which a drastic reduction in transmission rate upon congestion i
</description>
<pubDate>Mon, 01 May 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149910</guid>
<dc:date>2000-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Building Blocks for Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/149909</link>
<description>On Building Blocks for Distributed Systems
De Prisco, Robert
</description>
<pubDate>Wed, 01 Dec 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149909</guid>
<dc:date>1999-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Static Single Information Form</title>
<link>https://hdl.handle.net/1721.1/149908</link>
<description>The Static Single Information Form
Ananian, C. Scott
The Static Single Information (SSI) form is a compiler intermediate representation that allows efficient sparse implementations of predicated analysis and backward dataflow algorithms.  It possesses several attractive graph-theoretic properties which aid
</description>
<pubDate>Wed, 01 Sep 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149908</guid>
<dc:date>1999-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maps:  A Compiler-Managed Memory System for Software-Exposed Architectures</title>
<link>https://hdl.handle.net/1721.1/149907</link>
<description>Maps:  A Compiler-Managed Memory System for Software-Exposed Architectures
Barua, Rajeev
Microprocessors must exploit both instruction-level parallelism (ILP) and memory parallelism for high performance.  Sophisticated techniques for ILP have boosted the ability of modern-day microprocessors to exploit ILP when available. Unfortunately, impro
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149907</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>QoS Preserving Totally Ordered Multicast</title>
<link>https://hdl.handle.net/1721.1/149906</link>
<description>QoS Preserving Totally Ordered Multicast
Bar-Joseph, Ziv; Keidar, Idit; Anker, Tal; Lynch, Nancy A.
This paper studies the Quality of Service (QoS) guarantees of totally ordered multicast algorithms. The paper shows that totally ordered multicast can coexist with guaranteed predictable delays in certain network models. The paper considers two reservatio
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149906</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compositional Pointer and Escape Analysis for Multithreaded Java Programs</title>
<link>https://hdl.handle.net/1721.1/149905</link>
<description>Compositional Pointer and Escape Analysis for Multithreaded Java Programs
Rinard, Martin; Whaley, John
</description>
<pubDate>Mon, 01 Nov 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149905</guid>
<dc:date>1999-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Client-Server Approach to Virtually Synchronous  Group Multicast: Specifications, Algorithms, and Proofs</title>
<link>https://hdl.handle.net/1721.1/149904</link>
<description>A Client-Server Approach to Virtually Synchronous  Group Multicast: Specifications, Algorithms, and Proofs
Keidar, Idit; Khazan, Roger
This paper presents a formal design for a novel group multicast service that provides virtually synchronous semantics in asynchronous fault-prone environments.  The design employs a client-server architecture in which group membership is maintained not by
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149904</guid>
</item>
<item>
<title>Information Technology Use in Developing Countries</title>
<link>https://hdl.handle.net/1721.1/149903</link>
<description>Information Technology Use in Developing Countries
Shrestha, Govinda
</description>
<pubDate>Sat, 01 Jul 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149903</guid>
<dc:date>2000-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Group Communication Specifications: A Comprehensive Study</title>
<link>https://hdl.handle.net/1721.1/149902</link>
<description>Group Communication Specifications: A Comprehensive Study
Vitenberg, Roman; Keidar, Idit; Chockler, Gregory V.; Dolev, Danny
View-oriented group communication is an important and widely used building block for many distributed applications. Much current research has been dedicated to specifying the semantics and services of view-oriented Group Communication Systems (GCSs). Howe
</description>
<pubDate>Wed, 01 Sep 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149902</guid>
<dc:date>1999-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>I/O Automaton Models and Proofs for Shared-Key Communication Systems</title>
<link>https://hdl.handle.net/1721.1/149901</link>
<description>I/O Automaton Models and Proofs for Shared-Key Communication Systems
Lynch, Nancy A.
The combination of two security protocols, a simple shared-key communication protocol and the Diffie-Hellman key distribution protocol, is modeled formally and proved correct. The modeling is based on the I/O automaton model for distributed algorithms, an
</description>
<pubDate>Sun, 01 Aug 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149901</guid>
<dc:date>1999-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Selection and Loop Analysis</title>
<link>https://hdl.handle.net/1721.1/149900</link>
<description>Natural Selection and Loop Analysis
Mohtashemi, Mojdeh
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149900</guid>
</item>
<item>
<title>Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions</title>
<link>https://hdl.handle.net/1721.1/149899</link>
<description>Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions
Adya, Atul
Current commercial databases allow application programmers to trade off consistency for performance.  However, existing definitions of weak consistency levels are either imprecise or they disallow efficient implementation techniques such as optimism.  Rul
</description>
<pubDate>Mon, 01 Mar 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149899</guid>
<dc:date>1999-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Portable High-Performance Programs</title>
<link>https://hdl.handle.net/1721.1/149898</link>
<description>Portable High-Performance Programs
Frigo, Matteo
This dissertation discusses how to write computer programs that attain both high performance and portability, despite the fact that current computer systems have different degrees of parallelism, deep memory hierarchies, and diverse processor architecture
</description>
<pubDate>Tue, 01 Jun 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149898</guid>
<dc:date>1999-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Immediate-Mode Ray-Casting</title>
<link>https://hdl.handle.net/1721.1/149897</link>
<description>Immediate-Mode Ray-Casting
Alex, John; Teller, Seth
We propose a simple modification to the classical polygon rasterization pipeline that enables exact, efficient raycasting of bounded implicit surfaces without the use of a global spatial data structure bounding hierarchy.  Our algorithm requires two descr
</description>
<pubDate>Tue, 01 Jun 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149897</guid>
<dc:date>1999-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mostly-Static Decentralized Information Flow Control</title>
<link>https://hdl.handle.net/1721.1/149896</link>
<description>Mostly-Static Decentralized Information Flow Control
Myers, Andrew C.
The growing use of mobile code in downloaded programs such as applets and servlets has increased interest in robust mechanisms for ensuring privacy and secrecy. Common security mechanisms such as sandboxing and access control are either too restrictive or
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149896</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance Nonmonotonicities:  A Case Study of the UltraSPARC Processor</title>
<link>https://hdl.handle.net/1721.1/149895</link>
<description>Performance Nonmonotonicities:  A Case Study of the UltraSPARC Processor
Kushman, Nathaniel A.
Modern microprocessor architectures are very complex designs. Consequently, they exhibit many idiosyncrasies. In fact, situations exist in which the addition or removal of a single instruction changes the performance of a program by a factor of 3 to 4. I
</description>
<pubDate>Mon, 01 Jun 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149895</guid>
<dc:date>1998-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regions:  A Scalable Infrastructure for Scoped Service Location in Ubiquitous Computing</title>
<link>https://hdl.handle.net/1721.1/149894</link>
<description>Regions:  A Scalable Infrastructure for Scoped Service Location in Ubiquitous Computing
Benedicto, Kathryn Flores
Until recently, most efforts in service location have focused on finding local services.  However, service location is also useful in large-scale networked environments containing numerous, possibly non-local services.  Regions address this need for scala
</description>
<pubDate>Sat, 01 May 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149894</guid>
<dc:date>1999-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creating and Rendering Image-Based Visual Hulls</title>
<link>https://hdl.handle.net/1721.1/149893</link>
<description>Creating and Rendering Image-Based Visual Hulls
Buehler, Chris; Matusik, Wojciech; McMillan, Leonard
In this paper, we present efficient algorithms for creating and rendering image-based visual hulls. These algorithms are motivated by our desire to render real-time views of dynamic, real-world scenes. We first describe the visual hull, an abstract geomet
</description>
<pubDate>Sat, 01 May 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149893</guid>
<dc:date>1999-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Providing QoS Guarantees in Input Buffered Crossbar Switches with Speedup</title>
<link>https://hdl.handle.net/1721.1/149892</link>
<description>Providing QoS Guarantees in Input Buffered Crossbar Switches with Speedup
Charny, Anna
</description>
<pubDate>Sat, 01 Aug 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149892</guid>
<dc:date>1998-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamically Reparameterized Light Fields</title>
<link>https://hdl.handle.net/1721.1/149891</link>
<description>Dynamically Reparameterized Light Fields
Isaksen, Aaron; McMillan, Leonard; Gortler, Steven J.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149891</guid>
</item>
<item>
<title>Teaching Policy to Computer Science Students</title>
<link>https://hdl.handle.net/1721.1/149890</link>
<description>Teaching Policy to Computer Science Students
Blumenthal, Marjory S.
Computing motivates more and more attention by policy-makers at all levels of government, and policy interests of all kinds can touch on computer science, both inspiring new research directions and constraining technology development.  Understanding public p
</description>
<pubDate>Tue, 01 Dec 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149890</guid>
<dc:date>1998-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Credible Compilers</title>
<link>https://hdl.handle.net/1721.1/149889</link>
<description>Credible Compilers
Rinard, Martin C.
This paper presents a new concept in compiler correctness: instead of proving that the compiler performs all of its transformations correctly, the compiler generates a proof that the transformed program correctly implements the input program. A simple pro
</description>
<pubDate>Mon, 01 Mar 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149889</guid>
<dc:date>1999-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Architecture for Intentional Name Resolution and Application-level Routing</title>
<link>https://hdl.handle.net/1721.1/149888</link>
<description>An Architecture for Intentional Name Resolution and Application-level Routing
Adjie-Winoto, William; Schwartz, Elliot; Balakrishnan, Hari
</description>
<pubDate>Mon, 01 Feb 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149888</guid>
<dc:date>1999-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Integrated Congestion Management Architecture for Internet Hosts</title>
<link>https://hdl.handle.net/1721.1/149887</link>
<description>An Integrated Congestion Management Architecture for Internet Hosts
Balakrishnan, Hari; Rahul, Hariharan S.; Seshan, Srinivasan
This paper presents a novel framework for managing network congestion from an end-to-end perspective.  Our work is motivated by several trends in traffic patterns that threaten the long-term stability of the Internet. These trends include the use of multi
</description>
<pubDate>Mon, 01 Feb 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149887</guid>
<dc:date>1999-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Place and Route Approaches for FPGAs</title>
<link>https://hdl.handle.net/1721.1/149886</link>
<description>Fast Place and Route Approaches for FPGAs
Tessier, Russell G.
</description>
<pubDate>Mon, 01 Feb 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149886</guid>
<dc:date>1999-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Case for SRPT Scheduling in Web Servers</title>
<link>https://hdl.handle.net/1721.1/149885</link>
<description>The Case for SRPT Scheduling in Web Servers
Harchol-Balter, Mor; Crovella, Mark E.; Park, SungSim
</description>
<pubDate>Thu, 01 Oct 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149885</guid>
<dc:date>1998-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polygonal Approximation of Voronoi Diagrams of Set of Triangles in Three Dimensions</title>
<link>https://hdl.handle.net/1721.1/149884</link>
<description>Polygonal Approximation of Voronoi Diagrams of Set of Triangles in Three Dimensions
Teichmann, Marek; Teller, Seth
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149884</guid>
</item>
<item>
<title>A Model for Window Based Flow Control Packet-Switched Networks</title>
<link>https://hdl.handle.net/1721.1/149883</link>
<description>A Model for Window Based Flow Control Packet-Switched Networks
Yang, Xiaowei
Recently, networks have increased rapidly both in scale and speed. Problems related to their control and management are of increasing interest. However, there is no satisfactory tool to study the behavior of such networks. The traditional event driven simul
</description>
<pubDate>Sun, 01 Mar 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149883</guid>
<dc:date>1998-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Providing QoS Guarantees in Input Buffered Crossbar Switches with Speedup</title>
<link>https://hdl.handle.net/1721.1/149882</link>
<description>Providing QoS Guarantees in Input Buffered Crossbar Switches with Speedup
Charny, Anna
</description>
<pubDate>Tue, 01 Sep 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149882</guid>
<dc:date>1998-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Thread Communication and Synchronization Mechanisms for a Scalable Single Chip Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/149881</link>
<description>Fast Thread Communication and Synchronization Mechanisms for a Scalable Single Chip Multiprocessor
Keckler, Stephen William
</description>
<pubDate>Mon, 01 Jun 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149881</guid>
<dc:date>1998-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Inter-Cluster Communications Systems for Clustered Microprocessors</title>
<link>https://hdl.handle.net/1721.1/149880</link>
<description>Scalable Inter-Cluster Communications Systems for Clustered Microprocessors
Jiang, Xiaohu; Yeung, Donald
As workstation clusters move away from uniprocessors in favor of multiprocessors to support the increasing computational needs of distributed applications, greater demands are placed on the communication interfaces that couple individual workstations.  th
</description>
<pubDate>Mon, 01 Jun 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149880</guid>
<dc:date>1998-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Data-Race Detection in Multithreaded Programs</title>
<link>https://hdl.handle.net/1721.1/149879</link>
<description>Algorithms for Data-Race Detection in Multithreaded Programs
Cheng, Guang-Ien
Two parallel accesses to the same location, at least one of which is a write, form a race. Debugging such races is complicated by atomic critical sections. In programs without critical sections, a race is usually a bug causing nondeterminism. In programs
</description>
<pubDate>Wed, 01 Jul 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149879</guid>
<dc:date>1998-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Choosing a Task Assignment Policy for a Distributed Server System</title>
<link>https://hdl.handle.net/1721.1/149878</link>
<description>On Choosing a Task Assignment Policy for a Distributed Server System
Harchol-Balter, Mor; Crovella, Mark E.; Murta, Cristina D.
We consider a distributed server system model and ask which policy should be used for assigning tasks to hosts.  In our model each host processes tasks in First-Come-First-Serve order and the task's service demand is known in advance.  We consider four ta
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149878</guid>
</item>
<item>
<title>Exploring Optimal Cost-Performance Designs for RAW processors</title>
<link>https://hdl.handle.net/1721.1/149877</link>
<description>Exploring Optimal Cost-Performance Designs for RAW processors
Moritz, Csaba Andras; Yeung, Donald; Agarwal, Anant
The semiconductor industry roadmap projects that advances in VLSI technology will permit more than one billion transistors on a chip by the year 2010.  The MIT Raw microprocessor is a proposed architecture that strives to exploit these chip-level resource
</description>
<pubDate>Mon, 01 Jun 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149877</guid>
<dc:date>1998-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A  Model for Interactive Computation: Applications to Speech Research</title>
<link>https://hdl.handle.net/1721.1/149876</link>
<description>A  Model for Interactive Computation: Applications to Speech Research
McCandless, Michael Kyle
The speech research community has developed numerous toolkits to support ongoing research, e.g. Sapphire, Spire, ISP, ESPS/Waves+, HTK, CSLU Toolkit, LNKNet.  While these toolkits contain extensive and useful functionality, they typically offer limited en
</description>
<pubDate>Mon, 01 Jun 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149876</guid>
<dc:date>1998-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Debugging Multithreaded Programs that Incorporate User-Level Locking</title>
<link>https://hdl.handle.net/1721.1/149875</link>
<description>Debugging Multithreaded Programs that Incorporate User-Level Locking
Stark, Andrew F.
A multithreaded program with a bug may behave nondeterministically, and this nondeterminism typically makes the bug hard to localize.  This thesis presents a debugging tool, the Nondeterminator-2, which automatically finds certain nondeterminacy bugs in pr
</description>
<pubDate>Fri, 01 May 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149875</guid>
<dc:date>1998-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cilk: Efficient Multithreaded Computing</title>
<link>https://hdl.handle.net/1721.1/149874</link>
<description>Cilk: Efficient Multithreaded Computing
Randall, Keith H.
</description>
<pubDate>Fri, 01 May 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149874</guid>
<dc:date>1998-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounded-Error Interactive Ray Tracing</title>
<link>https://hdl.handle.net/1721.1/149873</link>
<description>Bounded-Error Interactive Ray Tracing
Bala, Kavita; Dorsey, Julie; Teller, Seth
</description>
<pubDate>Sun, 01 Mar 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149873</guid>
<dc:date>1998-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Virtual Network Interface in the Fugu Scalable Workstation</title>
<link>https://hdl.handle.net/1721.1/149872</link>
<description>An Efficient Virtual Network Interface in the Fugu Scalable Workstation
Mackenzie, Kenneth Martin
A scalable workstation is one vision of a mainstream parallel computer: a machine that combines scalable, fine-grain communication facilities for parallel applications with virtual memory and pre-emptive multiprogramming to support general-purpose workloa
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149872</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Shared-Memory and Message-Passing Communication in the Alewife Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/149871</link>
<description>Integrated Shared-Memory and Message-Passing Communication in the Alewife Multiprocessor
Kubiatowicz, John David
To date, MIMD multiprocessors have been divided into two classes based on hardware communication models: those supporting shared memory and those supporting message passing. Breaking with tradition, this thesis argues that multiprocessors should integrate
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149871</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multigrain Shared Memory</title>
<link>https://hdl.handle.net/1721.1/149870</link>
<description>Multigrain Shared Memory
Yeung, Donald
Designers of parallel computers have to decide how to apportion a machine's resources between processing, memory, and communication.  How these resources are apportioned determines the grain and balance of the resulting machine.  Often, these design decisio
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149870</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Interactive Approach to the Identification and Extraction of Visual Events</title>
<link>https://hdl.handle.net/1721.1/149869</link>
<description>An Interactive Approach to the Identification and Extraction of Visual Events
Stasior, William F.
</description>
<pubDate>Sun, 01 Feb 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149869</guid>
<dc:date>1998-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Interactive Approach to the Identification and Extraction of Visual Events</title>
<link>https://hdl.handle.net/1721.1/149868</link>
<description>An Interactive Approach to the Identification and Extraction of Visual Events
Stasior, William F.
This report describes an interactive approach to the computerized processing and interpretation of visual information.  The objective is to facilitate the development of interactive applications that analyze and interpret video input.  The approach is to
</description>
<pubDate>Sun, 01 Feb 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149868</guid>
<dc:date>1998-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Frustum Casting for Progressive, Interactive Rendering</title>
<link>https://hdl.handle.net/1721.1/149867</link>
<description>Frustum Casting for Progressive, Interactive Rendering
Teller, Seth; Alex, John
Efficient visible surface determination algorithms have long been a fundamental goal of computer graphics.  We discuss the well-known ray casting problem: given a geometric scene description, a synthetic camera, and a viewport which discretizes the camer
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149867</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning and control in stochastic domains with imperfect information</title>
<link>https://hdl.handle.net/1721.1/149866</link>
<description>Planning and control in stochastic domains with imperfect information
Hauskrecht, Milos
</description>
<pubDate>Thu, 01 Aug 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149866</guid>
<dc:date>1996-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Speech Perception Using Real-Time Phoneme Detection: The BeBe System</title>
<link>https://hdl.handle.net/1721.1/149865</link>
<description>Speech Perception Using Real-Time Phoneme Detection: The BeBe System
Sweeney, Latanya; Thompson, Patrick
We define a new approach to speech recognition based on auditory perception and modeled after the human brain's tendency to automatically categorize speech sounds [House 1962; Liberman 1957]. As background, today's speech recognition systems are knowle
</description>
<pubDate>Wed, 01 Apr 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149865</guid>
<dc:date>1998-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boolean Compilation of Relational Specifications</title>
<link>https://hdl.handle.net/1721.1/149864</link>
<description>Boolean Compilation of Relational Specifications
Jackson, Daniel
A new method for analyzing relational specifications is described. A property to be checked is cast as a relational formula, which, if the property holds, has no finite models. The relational formula is translated into a boolean formula that has a model f
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149864</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Reed Solomon Codes beyond the Error-Correction Diameter</title>
<link>https://hdl.handle.net/1721.1/149863</link>
<description>Decoding Reed Solomon Codes beyond the Error-Correction Diameter
Sudan, Madhu
</description>
<pubDate>Wed, 01 Jan 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149863</guid>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithmic issues in coding theory</title>
<link>https://hdl.handle.net/1721.1/149862</link>
<description>Algorithmic issues in coding theory
Sudan, Madhu
</description>
<pubDate>Wed, 01 Oct 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149862</guid>
<dc:date>1997-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formal Verification of Safety-Critical Hybrid Systems</title>
<link>https://hdl.handle.net/1721.1/149861</link>
<description>Formal Verification of Safety-Critical Hybrid Systems
Livadas, Carolos
This thesis investigates how the formal modeling and verification techniques of computer science can be used for the analysis of hybrid systems [1,2,3,4]---systems involving both discrete and continuous behavior. The motivation behind such research lies i
</description>
<pubDate>Mon, 01 Sep 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149861</guid>
<dc:date>1997-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Extraction of Textured Vertical Facades from Pose Imagery</title>
<link>https://hdl.handle.net/1721.1/149859</link>
<description>Automatic Extraction of Textured Vertical Facades from Pose Imagery
Coorg, Satyan; Teller, Seth
Extracting 3-dimensional structure from real-world imagery and rendering it from unrestricted viewpoints is an important problem in computer vision, and increasingly, computer graphics. Despite many years of research, a system that automatically recovers
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149859</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Fastest Fourier Transform in the West</title>
<link>https://hdl.handle.net/1721.1/149858</link>
<description>The Fastest Fourier Transform in the West
Frigo, Matteo; Johnson, Steven G.
</description>
<pubDate>Mon, 01 Sep 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149858</guid>
<dc:date>1997-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Garbage Collection in a Large, Distributed Object Store</title>
<link>https://hdl.handle.net/1721.1/149857</link>
<description>Garbage Collection in a Large, Distributed Object Store
Maheshwari, Umesh
</description>
<pubDate>Mon, 01 Sep 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149857</guid>
<dc:date>1997-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>View-based abstraction: Enhancing Maintainability and Modularity in the presence of Implementation Dependencies</title>
<link>https://hdl.handle.net/1721.1/149856</link>
<description>View-based abstraction: Enhancing Maintainability and Modularity in the presence of Implementation Dependencies
Rodriguez, Luis H., Jr.
This dissertation presents a new, backwards compatible, language independent, and incremental programming methodology called view-based abstraction. Unlike the well-known black-box abstraction approach, view-based abstraction enables programmers to maintain program modularity even in the presence of implementation couplings, i.e., dependencies among the code modules that rely on otherwise "hidden" implementation details not specified in the module interfaces. This dissertation also presents a transformation-based implementation of view-based abstraction, called ViewForm. ViewForm acts as a source-to-source preprocessor that automatically performs an implementation coupling expressed by the programmer. When the original code is later updated, ViewForm automatically attempts to reapply the implementation coupling to the updated code. ViewForm will modify the updated source code only if the coupling is still valid. In this way, by performing some extra work up front, the programmer performing an implementation coupling saves future programmers from having to pay for the consequences of broken modularity. To aid in writing this up-front ViewForm code, this dissertation presents a structured approach for using view-based abstraction and writing ViewForm transformation constructs.  To demonstrate view-based abstraction, ViewForm is used to produce automated, performance-based implementation couplings in three example programs: an amorphous computing simulator, a conditional-probability pedigree computation, and ViewForm itself. Unlike other approaches that also use interprocedural program analyses, the results indicate that view-based abstraction is practical and scales gracefully - the extra automation increased compilation time by a typical 34%, and by 40% in the worst case, despite a less than fully optimized ViewForm implementation. Each optimization required the programmer to write only 65 to 137 lines of ViewForm code for programs of size 167 lines to 7,616 lines.
This work is amortized as time saved by programmers modifying the original program in the future. In all three examples, ViewForm maintained modularity by regenerating correct code when the original modules were modified - even when those modifications were to the optimization-dependent sections of the original code.
</description>
<pubDate>Mon, 01 Sep 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149856</guid>
<dc:date>1997-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Managing Scheduled Routing with a High-level Communication Language</title>
<link>https://hdl.handle.net/1721.1/149855</link>
<description>Managing Scheduled Routing with a High-level Communication Language
Metcalf, Christopher D.
</description>
<pubDate>Fri, 01 Aug 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149855</guid>
<dc:date>1997-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimized Hardware Architecture and Communication Protocol for  Scheduled Communication</title>
<link>https://hdl.handle.net/1721.1/149854</link>
<description>An Optimized Hardware Architecture and Communication Protocol for  Scheduled Communication
Shoemaker, David
</description>
<pubDate>Fri, 01 Aug 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149854</guid>
<dc:date>1997-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building an Active Node on the Internet</title>
<link>https://hdl.handle.net/1721.1/149853</link>
<description>Building an Active Node on the Internet
Murphy, David M.
</description>
<pubDate>Thu, 01 May 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149853</guid>
<dc:date>1997-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Message-Driven Dynamics</title>
<link>https://hdl.handle.net/1721.1/149852</link>
<description>Message-Driven Dynamics
Lethin, Richard Anton
</description>
<pubDate>Tue, 01 Jul 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149852</guid>
<dc:date>1997-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>HULA: An Efficient Protocol for Reliable Delivery of Messages</title>
<link>https://hdl.handle.net/1721.1/149851</link>
<description>HULA: An Efficient Protocol for Reliable Delivery of Messages
Maheshwari, Umesh
We present a new protocol for reliable delivery of messages over a network that might lose, duplicate, reorder, or arbitrarily delay packets. It is the first protocol that guarantees exactly-once and ordered delivery on a connection while avoidin
</description>
<pubDate>Tue, 01 Jul 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149851</guid>
<dc:date>1997-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Study of Minimum Cut Algorithms</title>
<link>https://hdl.handle.net/1721.1/149850</link>
<description>Experimental Study of Minimum Cut Algorithms
Levine, Matthew S.
Recently, several new algorithms have been developed for the minimum cut problem that substantially improve worst-case time bounds for the problem. These algorithms are very different from the earlier ones and from each other.  We conduct an experimental
</description>
<pubDate>Thu, 01 May 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149850</guid>
<dc:date>1997-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model-Based Expert System for interpretation of hemodynamic data from ICU patients</title>
<link>https://hdl.handle.net/1721.1/149849</link>
<description>A Model-Based Expert System for interpretation of hemodynamic data from ICU patients
Zhao, Ruilin
</description>
<pubDate>Thu, 01 May 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149849</guid>
<dc:date>1997-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revisiting the Paxos Algorithm</title>
<link>https://hdl.handle.net/1721.1/149848</link>
<description>Revisiting the Paxos Algorithm
De Prisco, Roberto
The Paxos algorithm is an efficient and highly fault-tolerant algorithm, devised by Lamport, for reaching consensus in a distributed system.  Although it appears to be practical, it seems not to be widely known or understood.  This thesis contains a new p
</description>
<pubDate>Sun, 01 Jun 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149848</guid>
<dc:date>1997-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relieving Hot Spots on the World Wide Web</title>
<link>https://hdl.handle.net/1721.1/149847</link>
<description>Relieving Hot Spots on the World Wide Web
Panigrahy, Rina
We describe a family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Hot spots are web sites that are swamped by a large number of requests for their pages.  Our protocols are
</description>
<pubDate>Sun, 01 Jun 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149847</guid>
<dc:date>1997-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Random Server Model for Private Information Retrieval (or Information Theoretic PIR Avoiding Database Replication)</title>
<link>https://hdl.handle.net/1721.1/149846</link>
<description>A Random Server Model for Private Information Retrieval (or Information Theoretic PIR Avoiding Database Replication)
Gertner, Yael; Goldwasser, Shafi; Malkin, Tal
Private information retrieval (PIR) schemes provide a user with information from a database while keeping his query secret from the database manager.  We propose a new model for PIR, utilizing auxiliary random servers providing privacy services for databas
</description>
<pubDate>Tue, 01 Apr 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149846</guid>
<dc:date>1997-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient distributed 1 out of n oblivious transfer</title>
<link>https://hdl.handle.net/1721.1/149845</link>
<description>Efficient distributed 1 out of n oblivious transfer
Gertner, Yael; Malkin, Tal
</description>
<pubDate>Tue, 01 Apr 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149845</guid>
<dc:date>1997-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fine-Grained Control of Java Applets Using a Simple Constraint Language</title>
<link>https://hdl.handle.net/1721.1/149844</link>
<description>Fine-Grained Control of Java Applets Using a Simple Constraint Language
Mehta, Nimisha V.
The use of the internet has increased extensively with a growing number of inexperienced users surfing the Web.  Lurking in Web pages, Java applets are automatically executed on users' machines.  As a result, popular Web browsers are understandably con
</description>
<pubDate>Sun, 01 Jun 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149844</guid>
<dc:date>1997-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering a Global Resolution Service</title>
<link>https://hdl.handle.net/1721.1/149843</link>
<description>Engineering a Global Resolution Service
Slottow, Edward C.
As the World Wide Web continues to balloon in size, the issue of a robust information infrastructure has become increasingly important.  Currently, Web links are based on fragile names that have limited life due to semantic content.  Uniform Resource Na
</description>
<pubDate>Sun, 01 Jun 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149843</guid>
<dc:date>1997-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modularity in the Presence of Subclassing</title>
<link>https://hdl.handle.net/1721.1/149842</link>
<description>Modularity in the Presence of Subclassing
Stata, Raymie
Classes are harder to subclass than they need be.  This report addresses this problem, showing how to design classes that are more modular and easier to subclass without sacrificing the extensibility that makes subclassing useful.  In the context of singl
</description>
<pubDate>Tue, 01 Apr 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149842</guid>
<dc:date>1997-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Demand-Based Coscheduling of Parallel Jobs on Multiprogrammed Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149841</link>
<description>Demand-Based Coscheduling of Parallel Jobs on Multiprogrammed Multiprocessors
Sobalvarro, Patrick Gregory
</description>
<pubDate>Tue, 01 Apr 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149841</guid>
<dc:date>1997-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Baring it all to Software: The Raw Machine</title>
<link>https://hdl.handle.net/1721.1/149840</link>
<description>Baring it all to Software: The Raw Machine
Waingold, Elliot; Taylor, Michael; Sarkar, Vivek; Lee, Walter; Lee, Victor; Kim, Jang; Frank, Matthew; Finch, Peter; Devabhaktuni, Srikrishna; Barua, Rajeev; Babb, Jonathan; Amarasinghe, Saman; Agarwal, Anant
Rapid advances in technology force a quest for computer architectures that exploit new opportunities and shed existing mechanisms that do not scale.  Current architectures, such as hardware scheduled superscalars, are already hitting performance and comple
</description>
<pubDate>Sat, 01 Mar 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149840</guid>
<dc:date>1997-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimism vs. Locking: A Study of Concurrency Control for Client-Server Object-Oriented Databases</title>
<link>https://hdl.handle.net/1721.1/149839</link>
<description>Optimism vs. Locking: A Study of Concurrency Control for Client-Server Object-Oriented Databases
Gruber, Robert Edward
Many client-server object-oriented database systems (OODBs) run applications at clients and perform all accesses on cached copies of database objects. Moving both data and computation to the clients can improve response time, throughput, and scalability.
</description>
<pubDate>Wed, 01 Jan 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149839</guid>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Replication Control in Distributed B-Trees</title>
<link>https://hdl.handle.net/1721.1/149838</link>
<description>Replication Control in Distributed B-Trees
Cosway, Paul R.
B-trees are a commonly used data structure to associate symbols with related information, as in a symbol table or file index.  The performance of B-tree algorithms is well understood for sequential processing and even concurrent processing on small-scale
</description>
<pubDate>Sat, 01 Feb 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149838</guid>
<dc:date>1997-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Algorithms with Applications to Robot Navigation and Protein Folding</title>
<link>https://hdl.handle.net/1721.1/149836</link>
<description>Learning Algorithms with Applications to Robot Navigation and Protein Folding
Singh, Mona
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149836</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Public-Key Cryptosystems from Lattice Reduction Problems</title>
<link>https://hdl.handle.net/1721.1/149835</link>
<description>Public-Key Cryptosystems from Lattice Reduction Problems
Goldreich, Oded; Goldwasser, Shafi; Halevi, Shai
We present a new proposal for a trapdoor one-way function, from which we derive public-key encryption and digital signatures. The security of the new construction is based on the conjectured computational difficulty of lattice-reduction proble
</description>
<pubDate>Fri, 01 Nov 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149835</guid>
<dc:date>1996-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Code Importing Techniques for Fast, Safe Client/Server Access</title>
<link>https://hdl.handle.net/1721.1/149834</link>
<description>Code Importing Techniques for Fast, Safe Client/Server Access
Bank, Joseph A.
</description>
<pubDate>Sun, 01 Sep 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149834</guid>
<dc:date>1996-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Cilk System for Parallel Multithreaded Computing</title>
<link>https://hdl.handle.net/1721.1/149833</link>
<description>The Cilk System for Parallel Multithreaded Computing
Joerg, Christopher Frank
</description>
<pubDate>Mon, 01 Jan 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149833</guid>
<dc:date>1996-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Consulting a Set of Experts and Searching</title>
<link>https://hdl.handle.net/1721.1/149832</link>
<description>On Consulting a Set of Experts and Searching
Galperin, Igal
Two chapters of this thesis analyze expert consulting problems via game theoretic models; the first points out a close connection between the problem of consulting a set of experts and the problem of searching. The last chapter presents a solution to th
</description>
<pubDate>Sun, 01 Sep 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149832</guid>
<dc:date>1996-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partitioned Garbage Collection of a Large Object Store</title>
<link>https://hdl.handle.net/1721.1/149831</link>
<description>Partitioned Garbage Collection of a Large Object Store
Maheshwari, Umesh; Liskov, Barbara H.
This paper describes a new garbage collection scheme for large persistent object stores that makes efficient use of the disk and main memory. The heap is divided into partitions that are collected independently using information about inter-partit
</description>
<pubDate>Sat, 01 Feb 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149831</guid>
<dc:date>1997-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shared Memory Versus Message Passing for Iterative Solution of Sparse, Irregular Problems</title>
<link>https://hdl.handle.net/1721.1/149830</link>
<description>Shared Memory Versus Message Passing for Iterative Solution of Sparse, Irregular Problems
Chong, Frederic T.; Agarwal, Anant
The benefits of hardware support for shared memory versus those for message passing are difficult to evaluate without an in-depth study of real applications on a common platform.  We evaluate the communication mechanisms of the MIT Alewife machine, a multipr
</description>
<pubDate>Tue, 01 Oct 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149830</guid>
<dc:date>1996-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computer Science Technical Report (CS-TR) Project: Considerations from the Library Perspective</title>
<link>https://hdl.handle.net/1721.1/149829</link>
<description>The Computer Science Technical Report (CS-TR) Project: Considerations from the Library Perspective
Anderson, Greg; Lasher, Rebecca; Reich, Vicky
In 1992 the Advanced Research Projects Agency (ARPA) funded a three year grant to investigate the questions related to large-scale, distributed, digital libraries. The award focused research on Computer Science Technical Reports (CS-TR) and was granted to
</description>
<pubDate>Sat, 01 Jun 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149829</guid>
<dc:date>1996-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Garbage Collection for Large Object-Oriented Databases</title>
<link>https://hdl.handle.net/1721.1/149828</link>
<description>Efficient Garbage Collection for Large Object-Oriented Databases
Ng, Tony C.
This thesis presents the design of an efficient garbage collection scheme for large, persistent object-oriented databases in a client-server environment. The scheme uses a partitioned approach. A database is divided into disjoint partitions and each parti
</description>
<pubDate>Wed, 01 May 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149828</guid>
<dc:date>1996-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Security Model for the Information Mesh</title>
<link>https://hdl.handle.net/1721.1/149827</link>
<description>A Security Model for the Information Mesh
Condell, Matthew N.
Many distributed systems that are currently being designed are object based.  These systems require a model for authentication and access control which conforms to the object model.  They need a model that allows objects to control their own security.  In
</description>
<pubDate>Sat, 01 Jun 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149827</guid>
<dc:date>1996-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Encapsulated Key Escrow</title>
<link>https://hdl.handle.net/1721.1/149826</link>
<description>Encapsulated Key Escrow
Bellare, Mihir; Goldwasser, Shafi
</description>
<pubDate>Mon, 01 Apr 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149826</guid>
<dc:date>1996-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phonological Parsing for Bi-directional Letter-to-Sound/Sound-to-Letter Generation</title>
<link>https://hdl.handle.net/1721.1/149825</link>
<description>Phonological Parsing for Bi-directional Letter-to-Sound/Sound-to-Letter Generation
Meng, Helen Mei-Ling
This thesis proposes a unified framework for integrating a variety of linguistic knowledge sources for representing speech, in order to facilitate their concurrent utilization in spoken language systems.  The feasibility of the proposed methodology is de
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149825</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Structure of the Scaffolding Core of Bacteriophage T4 and Its Role in Head Length</title>
<link>https://hdl.handle.net/1721.1/149824</link>
<description>On the Structure of the Scaffolding Core of Bacteriophage T4 and Its Role in Head Length
Berger, Bonnie A.; Hoest, Gunnar W.; Paulson, James R.; Shor, Peter W.
</description>
<pubDate>Mon, 01 Jan 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149824</guid>
<dc:date>1996-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Correctness of Vehicle Control Systems: A Case Study</title>
<link>https://hdl.handle.net/1721.1/149823</link>
<description>Correctness of Vehicle Control Systems: A Case Study
Weinberg, Henri B.
</description>
<pubDate>Thu, 01 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149823</guid>
<dc:date>1996-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time-lock Puzzles and Timed-release Crypto</title>
<link>https://hdl.handle.net/1721.1/149822</link>
<description>Time-lock Puzzles and Timed-release Crypto
Rivest, Ronald L.; Shamir, Adi; Wagner, David A.
Our motivation is the notion of ``timed-release crypto,'' where the goal is to encrypt a message so that it can not be decrypted by anyone, not even the sender, until a pre-determined amount of time has passed.  The goal is to ``send information into the
</description>
<pubDate>Thu, 01 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149822</guid>
<dc:date>1996-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Translucent Cryptography: An Alternative to Key Escrow, and its Implementation via Fractional Oblivious Transfer</title>
<link>https://hdl.handle.net/1721.1/149821</link>
<description>Translucent Cryptography: An Alternative to Key Escrow, and its Implementation via Fractional Oblivious Transfer
Bellare, Mihir; Rivest, Ronald L.
</description>
<pubDate>Thu, 01 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149821</guid>
<dc:date>1996-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptively Secure Multi-party Computation</title>
<link>https://hdl.handle.net/1721.1/149820</link>
<description>Adaptively Secure Multi-party Computation
Canetti, Ran; Feige, Uri; Goldreich, Oded; Naor, Moni
A fundamental problem in designing secure multi-party protocols is how to deal with adaptive adversaries (i.e., adversaries that may choose the corrupted parties during the course of the computation), in a setting where the channels are insecure and secur
</description>
<pubDate>Thu, 01 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149820</guid>
<dc:date>1996-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory of Clock Synchronization</title>
<link>https://hdl.handle.net/1721.1/149819</link>
<description>A Theory of Clock Synchronization
Patt, Boaz
We consider the problem of clock synchronization in a system with uncertain message delays and clocks with bounded drift. To analyze this classical problem, we introduce the concept of synchronization graphs, and show that the tightest achievable synchron
</description>
<pubDate>Sat, 01 Oct 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149819</guid>
<dc:date>1994-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Admission Control and Routing: Theory and Practice</title>
<link>https://hdl.handle.net/1721.1/149818</link>
<description>Admission Control and Routing: Theory and Practice
Gawlick, Rainer
Emerging high speed Broadband Integrated Services Digital Networks (B-ISDN) will carry traffic for services such as video-on-demand and video teleconferencing, which require resource reservation along the path on which the traffic is sent. As a result, su
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149818</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying and Merging Related Bibliographic Records</title>
<link>https://hdl.handle.net/1721.1/149817</link>
<description>Identifying and Merging Related Bibliographic Records
Hylton, Jeremy A.
Bibliographic records freely available on the Internet can be used to construct a high-quality,  digital finding aid that provides the ability to discover paper and electronic documents.  The key challenge to providing such a service is integrating mixed-
</description>
<pubDate>Thu, 01 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149817</guid>
<dc:date>1996-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Executing Multithreaded Programs Efficiently</title>
<link>https://hdl.handle.net/1721.1/149816</link>
<description>Executing Multithreaded Programs Efficiently
Blumofe, Robert D.
This thesis presents the theory, design, and implementation of Cilk (pronounced "silk") and Cilk-NOW.   Cilk is a C-based language and portable runtime system for programming and executing multithreaded parallel programs.  Cilk-NOW is an implementation of
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149816</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Verification of Randomized Distributed Real-Time Systems</title>
<link>https://hdl.handle.net/1721.1/149815</link>
<description>Modeling and Verification of Randomized Distributed Real-Time Systems
Segala, Roberto
Randomization is an excellent tool for the design of distributed algorithms, sometimes yielding efficient solutions to problems that are inherently complex, or even unsolvable, in the setting of deterministic algorithms.  However, this tool has a price: e
</description>
<pubDate>Sat, 01 Jun 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149815</guid>
<dc:date>1996-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Performance All-Software Distributed Shared Memory</title>
<link>https://hdl.handle.net/1721.1/149814</link>
<description>High-Performance All-Software Distributed Shared Memory
Johnson, Kirk L.
The C Region Library (CRL) is a new all-software distributed shared memory (DSM) system.  CRL requires no special compiler, hardware, or operating system support beyond the ability to send and receive messages between processing nodes.  It provides a simp
</description>
<pubDate>Thu, 01 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149814</guid>
<dc:date>1996-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aurora at MIT</title>
<link>https://hdl.handle.net/1721.1/149813</link>
<description>Aurora at MIT
Clark, David D.; Houh, Henry; Tennenhouse, David L.
</description>
<pubDate>Fri, 01 Dec 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149813</guid>
<dc:date>1995-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralized Channel Management in Scalable Multihop Spread-Spectrum Packet Radio Networks</title>
<link>https://hdl.handle.net/1721.1/149812</link>
<description>Decentralized Channel Management in Scalable Multihop Spread-Spectrum Packet Radio Networks
Shepard, Timothy Jason
This thesis addresses the problems of managing the transmissions of stations in a spread-spectrum packet radio network so that the system can remain effective when scaled to millions of nodes concentrated in a metropolitan area.  The principal difficulty
</description>
<pubDate>Sat, 01 Jul 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149812</guid>
<dc:date>1995-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theta Reference Manual</title>
<link>https://hdl.handle.net/1721.1/149811</link>
<description>Theta Reference Manual
Liskov, Barbara; Curtis, Dorothy; Day, Mark; Ghemawat, Sanjay; Gruber, Robert; Johnson, Paul; Myers, Andrew C.
</description>
<pubDate>Wed, 01 Feb 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149811</guid>
<dc:date>1995-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lottery and Stride Scheduling: Flexible Proportional-share Resource Management</title>
<link>https://hdl.handle.net/1721.1/149810</link>
<description>Lottery and Stride Scheduling: Flexible Proportional-share Resource Management
Waldspurger, Carl A.
This thesis presents flexible abstractions for specifying resource management policies, together with efficient mechanisms for implementing those abstractions.  Several novel scheduling techniques are introduced, including both randomized and deterministi
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149810</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Modified Object Buffer: A Storage Management Technique for Object-Oriented Databases</title>
<link>https://hdl.handle.net/1721.1/149809</link>
<description>The Modified Object Buffer: A Storage Management Technique for Object-Oriented Databases
Ghemawat, Sanjay
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149809</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Computation Migration in Distributed Shared Memory Systems</title>
<link>https://hdl.handle.net/1721.1/149808</link>
<description>Dynamic Computation Migration in Distributed Shared Memory Systems
Hsieh, Wilson Cheng-Yi
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149808</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reactive Synchronization Algorithms for Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149806</link>
<description>Reactive Synchronization Algorithms for Multiprocessors
Lim, Beng-Hong
Efficient synchronization algorithms are hard to design because their performance depends on run-time factors that are hard to predict. In particular, the designer has a choice of protocols to implement the synchronization operation, and a choice of wait
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149806</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Restricted Branching Programs and Hardware Verification</title>
<link>https://hdl.handle.net/1721.1/149805</link>
<description>Restricted Branching Programs and Hardware Verification
Ponzio, Stephen J.
Recent developments in the field of digital design and hardware verification have found great use for restricted forms of branching programs.  In particular, oblivious read-once branching programs (also called "OBDD's") are central to a very common techni
</description>
<pubDate>Tue, 01 Aug 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149805</guid>
<dc:date>1995-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computationally Efficient Error-Correcting Codes and Holographic Proofs</title>
<link>https://hdl.handle.net/1721.1/149804</link>
<description>Computationally Efficient Error-Correcting Codes and Holographic Proofs
Spielman, Daniel Alan
We present computationally efficient error-correcting codes and holographic proofs.  Our error-correcting codes are asymptotically good and can be encoded and decoded in linear time.  Our construction of holographic proofs provides, for every proof of any theo
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149804</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Link Architecture for a Global Information Infrastructure</title>
<link>https://hdl.handle.net/1721.1/149803</link>
<description>Link Architecture for a Global Information Infrastructure
Van Dyke, Jeffrey R.
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149803</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Increasing Cross-Domain Call Batching Using Promises and Batched Control Structures</title>
<link>https://hdl.handle.net/1721.1/149802</link>
<description>Increasing Cross-Domain Call Batching Using Promises and Batched Control Structures
Zondervan, Quinton Y.
In a client-server system, it may be possible for the client to corrupt server data through unsafe access methods  or programming error.  A common method for protecting the server data is to separate the client and server into distinct protection domains,
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149802</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Noise Tolerant Algorithms for Learning and Searching</title>
<link>https://hdl.handle.net/1721.1/149801</link>
<description>Noise Tolerant Algorithms for Learning and Searching
Aslam, Javed Alexander
We consider the problem of developing robust algorithms which cope with noisy data. In the Probably Approximately Correct model of machine learning, we develop a general technique which allows nearly all PAC learning algorithms to be converted into highly
</description>
<pubDate>Wed, 01 Feb 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149801</guid>
<dc:date>1995-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Performance Modeling of Scientific Computations</title>
<link>https://hdl.handle.net/1721.1/149800</link>
<description>Quantitative Performance Modeling of Scientific Computations
Toledo, Sivan Abraham
The first part of the thesis demonstrates that the performance of programs can be predicted accurately, automatically, and rapidly using a method called benchmapping.  The key aspects of benchmapping are: automatic creation of detailed performance models, pr
</description>
<pubDate>Mon, 01 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149800</guid>
<dc:date>1995-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reducing Synchronization Overhead in Parallel Simulation</title>
<link>https://hdl.handle.net/1721.1/149799</link>
<description>Reducing Synchronization Overhead in Parallel Simulation
Legedza, Ulana
Synchronization is often the dominant cost in conservative parallel simulation, particularly in simulations of parallel computers, in which low-latency simulated communication requires frequent synchronization.  This thesis presents local barriers and pre
</description>
<pubDate>Mon, 01 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149799</guid>
<dc:date>1995-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Connecting Homes to the Internet: An Engineering Cost Model of Cable vs. ISDN</title>
<link>https://hdl.handle.net/1721.1/149798</link>
<description>Connecting Homes to the Internet: An Engineering Cost Model of Cable vs. ISDN
Gillett, Sharon Eisner
Using the World Wide Web at 28.8 Kbps (or less) can be a frustrating experience: a multimedia page that takes a fraction of a second to download at Ethernet speeds takes many seconds at modem rates. Two enhancements to existing infrastructure have the pot
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149798</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Interchange Standard and System for Browsing Digital Documents</title>
<link>https://hdl.handle.net/1721.1/149797</link>
<description>An Interchange Standard and System for Browsing Digital Documents
Kass, Andrew Jonathan
With the advent of fast global digital communication networks, information will increasingly be delivered in electronic form.  In addition, as libraries become increasingly computerized, not just card catalogs but entire books will be stored on-line.
</description>
<pubDate>Mon, 01 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149797</guid>
<dc:date>1995-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Client Cache Management in a Distributed Object Database</title>
<link>https://hdl.handle.net/1721.1/149796</link>
<description>Client Cache Management in a Distributed Object Database
Day, Mark Stuart
A distributed object database stores objects persistently at servers.  Applications run on client machines, fetching objects into a client-side cache of objects.  If fetching and cache management are done in terms of objects, rather than fixed-size units such as
</description>
<pubDate>Mon, 01 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149796</guid>
<dc:date>1995-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Randomness Versus Non-Determinism in Distributed Computing</title>
<link>https://hdl.handle.net/1721.1/149795</link>
<description>Randomness Versus Non-Determinism in Distributed Computing
Saias, Alain Isaac
This thesis is devoted to the analysis and illustration of the effects of the interplay between randomness and non-determinism in randomized computing.  Using ideas from game theory, we provide a general model for randomized computing which formalizes th
</description>
<pubDate>Sat, 01 Oct 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149795</guid>
<dc:date>1994-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quickstep: A System for Performance Monitoring and Debugging Parallel Applications on the Alewife Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/149794</link>
<description>Quickstep: A System for Performance Monitoring and Debugging Parallel Applications on the Alewife Multiprocessor
Mitra, Sramana
</description>
<pubDate>Sun, 01 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149794</guid>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Distributed Programming System for Media Applications</title>
<link>https://hdl.handle.net/1721.1/149793</link>
<description>A Distributed Programming System for Media Applications
Phillips, Brent M.
</description>
<pubDate>Wed, 01 Feb 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149793</guid>
<dc:date>1995-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional Encapsulation and Type Reconstruction in a Strongly-typed, Polymorphic Language</title>
<link>https://hdl.handle.net/1721.1/149792</link>
<description>Functional Encapsulation and Type Reconstruction in a Strongly-typed, Polymorphic Language
Gupta, Shail Aditya
</description>
<pubDate>Wed, 01 Feb 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149792</guid>
<dc:date>1995-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Statistical Approach to Language Modelling for the ATIS Problem</title>
<link>https://hdl.handle.net/1721.1/149791</link>
<description>A Statistical Approach to Language Modelling for the ATIS Problem
Koppelman, Joshua D.
</description>
<pubDate>Wed, 01 Feb 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149791</guid>
<dc:date>1995-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synchronized MIMD Computing</title>
<link>https://hdl.handle.net/1721.1/149790</link>
<description>Synchronized MIMD Computing
Kuszmaul, Bradley C.
Fast global synchronization provides simple, efficient solutions to many of the system problems of parallel computing.  It achieves this by providing composition of both performance and correctness.  If you understand the performance and meaning of parall
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149790</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms and Interfaces for Software-Extended Coherent Shared Memory</title>
<link>https://hdl.handle.net/1721.1/149789</link>
<description>Mechanisms and Interfaces for Software-Extended Coherent Shared Memory
Chaiken, David L.
Software-extended systems use a combination of hardware and software to implement shared memory on large-scale multiprocessors.  Hardware mechanisms accelerate common-case accesses, while software handles exceptional events.  This dissertation proposes, d
</description>
<pubDate>Sun, 01 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149789</guid>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Small-Depth Counting Networks and Related Topics</title>
<link>https://hdl.handle.net/1721.1/149788</link>
<description>Small-Depth Counting Networks and Related Topics
Klugerman, Michael Richard
</description>
<pubDate>Thu, 01 Sep 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149788</guid>
<dc:date>1994-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Trajectory Models for Phonetic Recognition</title>
<link>https://hdl.handle.net/1721.1/149787</link>
<description>Statistical Trajectory Models for Phonetic Recognition
Goldenthal, William David
The main goal of this work is to develop an alternative methodology for acoustic-phonetic modelling of speech sounds.  The approach utilizes a segment-based framework to capture the dynamical behavior and statistical dependencies of the acoustic attribute
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149787</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On-Line Algorithms for Robot Navigation and Server Problems</title>
<link>https://hdl.handle.net/1721.1/149786</link>
<description>On-Line Algorithms for Robot Navigation and Server Problems
Kleinberg, Jon M.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149786</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Interactive Programming System for Media Computation</title>
<link>https://hdl.handle.net/1721.1/149785</link>
<description>An Interactive Programming System for Media Computation
Wetherall, David J.
As digital video is manipulated by increasingly powerful computers, many new applications are becoming viable.  This report investigates the programming language aspects of controlling such video applications.  It presents the design, implementation, and
</description>
<pubDate>Thu, 01 Sep 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149785</guid>
<dc:date>1994-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Structure of Near-minimum Edge Cuts</title>
<link>https://hdl.handle.net/1721.1/149784</link>
<description>The Structure of Near-minimum Edge Cuts
Benczúr, András A.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149784</guid>
</item>
<item>
<title>Serializing Parallel Programs by Removing Redundant Computation</title>
<link>https://hdl.handle.net/1721.1/149783</link>
<description>Serializing Parallel Programs by Removing Redundant Computation
Ernst, Michael D.
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149783</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Programming System for the Dynamic Manipulation of Temporally Sensitive Data</title>
<link>https://hdl.handle.net/1721.1/149782</link>
<description>A Programming System for the Dynamic Manipulation of Temporally Sensitive Data
Lindblad, Christopher J.
In computer-participative multimedia applications, the computer not only manipulates media, but also digests it and performs independent actions based on media content.  In this report I discuss an approach to the design of environments to support the dev
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149782</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Failsafe Key Escrow</title>
<link>https://hdl.handle.net/1721.1/149781</link>
<description>Failsafe Key Escrow
Kilian, Joseph; Leighton, Frank Thomson
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149781</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Scheme Shell</title>
<link>https://hdl.handle.net/1721.1/149780</link>
<description>A Scheme Shell
Shivers, Olin
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149780</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time Optimal Self-Stabilizing Spanning Tree Algorithms</title>
<link>https://hdl.handle.net/1721.1/149779</link>
<description>Time Optimal Self-Stabilizing Spanning Tree Algorithms
Aggarwal, Sudhanshu Madan
</description>
<pubDate>Sat, 01 Jan 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149779</guid>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Integrated Approach to Dynamic Decision Making under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/149778</link>
<description>An Integrated Approach to Dynamic Decision Making under Uncertainty
Leong, Tze-Yun
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149778</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Global Partitioning of Parallel loops and Data Arrays for Caches and Distributed Memory in Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149777</link>
<description>Global Partitioning of Parallel loops and Data Arrays for Caches and Distributed Memory in Multiprocessors
Barua, Rajeev K.
</description>
<pubDate>Sat, 01 Jan 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149777</guid>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Specifications to Check Source Code</title>
<link>https://hdl.handle.net/1721.1/149776</link>
<description>Using Specifications to Check Source Code
Evans, David
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149776</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Acquisition of Language Models for Speech Recognition</title>
<link>https://hdl.handle.net/1721.1/149775</link>
<description>Automatic Acquisition of Language Models for Speech Recognition
McCandless, Michael Kyle
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149775</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transaction Management for Mobile Objects Using Optimistic Concurrency Control</title>
<link>https://hdl.handle.net/1721.1/149774</link>
<description>Transaction Management for Mobile Objects Using Optimistic Concurrency Control
Adya, Atul
</description>
<pubDate>Fri, 01 Jul 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149774</guid>
<dc:date>1994-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Safe, Efficient Object Database Interface Using Batched Futures</title>
<link>https://hdl.handle.net/1721.1/149773</link>
<description>A Safe, Efficient Object Database Interface Using Batched Futures
Bogle, Phillip Lee
For many systems such as operating systems and databases it is important to run client code in a separate protection domain so that it cannot interfere with the correct operation of the system.  Clients communicate with the server by making cross domain c
</description>
<pubDate>Fri, 01 Jul 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149773</guid>
<dc:date>1994-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time Surveying: Clock Synchronization over Packet Networks</title>
<link>https://hdl.handle.net/1721.1/149772</link>
<description>Time Surveying: Clock Synchronization over Packet Networks
Troxel, Gregory D.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149772</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of a Preemptive Network Architecture</title>
<link>https://hdl.handle.net/1721.1/149771</link>
<description>Investigation of a Preemptive Network Architecture
Lefelhocz, Christopher James
Two network architectures, cell and packet, form the basis of most high bandwidth network research.  If analyzed from the perspective of building a switch,  both  architectures have unique advantages.  The preemptive architecture described herein proposes
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149771</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formal Specification Techniques for Promoting Software Modularity, Enhancing Documentation, and Testing Specifications</title>
<link>https://hdl.handle.net/1721.1/149770</link>
<description>Formal Specification Techniques for Promoting Software Modularity, Enhancing Documentation, and Testing Specifications
Tan, Yang Meng
This thesis presents three ideas.  First, it presents a novel use of formal specification to promote a programming style based on specified interfaces and data abstraction in a programming language that lacks such support.  Second, it illustrates the uses of claims about specifications.
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149770</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observing "True" Concurrency</title>
<link>https://hdl.handle.net/1721.1/149769</link>
<description>Observing "True" Concurrency
Jategaonkar, Lalita A.
In concurrent process theory, processes are often modeled by state machines and Petri Nets.  Algebraic process theories based on state machines, exemplified by Milner's CCS and Hoare's CSP, have been more fully developed than Net-based theories, but are inadequate for modeling "true" concurrency concepts such as non-atomic actions, action refinement, locality of actions, and multithreadedness.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149769</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Implementation of High-Level Languages on User-Level Communications Architectures</title>
<link>https://hdl.handle.net/1721.1/149768</link>
<description>Efficient Implementation of High-Level Languages on User-Level Communications Architectures
Hsieh, Wilson C.; Johnson, Kirk L.; Kaashoek, M. Frans; Wallach, Deborah A.; Weihl, William E.
User-level communication architectures --- parallel architectures that give user code direct but protected access to the network --- provide communication performance that is an order of magnitude higher than previous-generation message-passing architectures. Unfortunately, in order to take advantage of this level of performance, programmers must concern themselves with low-level issues that are often hardware dependent (e.g., what primitives to use for large and small data transfers, and whether to use interrupts or polling).
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149768</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cellular Automata Methods in Mathematical Physics</title>
<link>https://hdl.handle.net/1721.1/149767</link>
<description>Cellular Automata Methods in Mathematical Physics
Smith, Mark Andrew
Cellular automata (CA) are fully discrete, spatially-distributed dynamical systems which can serve as an alternative framework for mathematical descriptions of physical systems.  Furthermore, they constitute intrinsically parallel models of computation which can be efficiently realized with special-purpose cellular automata machines.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149767</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Timing Analysis and Optimization System for Level-clocked Circuitry</title>
<link>https://hdl.handle.net/1721.1/149766</link>
<description>A Timing Analysis and Optimization System for Level-clocked Circuitry
Papaefthymiou, Marios Christos
This thesis investigates timing analysis and optimization issues in synchronous circuitry.  The major thrust of our work is a collection of provably correct and efficient algorithms that perform a variety of architectural-level operations on level-clocked
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149766</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guardian Angel: Patient-Centered Health Information Systems</title>
<link>https://hdl.handle.net/1721.1/149765</link>
<description>Guardian Angel: Patient-Centered Health Information Systems
Szolovits, Peter; Doyle, Jon; Long, William J.; Kohane, Isaac; Pauker, Stephen G.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149765</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributing Information for Collaborative Filtering on Usenet Net News</title>
<link>https://hdl.handle.net/1721.1/149764</link>
<description>Distributing Information for Collaborative Filtering on Usenet Net News
Maltz, David A.
As part of the "Information Revolution," the amount of raw information available to computer users has increased as never before.  Unfortunately, there has been a corresponding jump in the amount of unrelated information users must search through in order
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149764</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Vidboard: A Video Capture and Processing Peripheral for the ViewStation System</title>
<link>https://hdl.handle.net/1721.1/149763</link>
<description>The Vidboard: A Video Capture and Processing Peripheral for the ViewStation System
Adam, Joel F.; Tennenhouse, David L.
With the growth of multimedia applications, video is increasingly being handled within the computing environment.  Since video presents serious technological challenges to the current generation of personal computers and networks, other systems based on t
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149763</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Algorithm for Rate Allocation in a Packet-Switching Network With Feedback</title>
<link>https://hdl.handle.net/1721.1/149762</link>
<description>An Algorithm for Rate Allocation in a Packet-Switching Network With Feedback
Charny, Anna
As the speed and complexity of computer networks evolve, sharing network resources becomes increasingly important.  Thus, the issue of how to allocate the available bandwidth among the multitude of users needs to be addressed.  Such allocation needs to be
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149762</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Function-Based Indexing for Object-Oriented Databases</title>
<link>https://hdl.handle.net/1721.1/149761</link>
<description>Function-Based Indexing for Object-Oriented Databases
Hwang, Deborah Jing-Hwa
Object-oriented databases should support queries over user-defined sets based on properties computed using user-defined functions.  This dissertation presents a new function-based indexing scheme to make these queries run faster.  These indexes are diffic
</description>
<pubDate>Tue, 01 Feb 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149761</guid>
<dc:date>1994-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Object Operations in a Persistent Programming System</title>
<link>https://hdl.handle.net/1721.1/149760</link>
<description>Fast Object Operations in a Persistent Programming System
Myers, Andrew C.
</description>
<pubDate>Sat, 01 Jan 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149760</guid>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Specifications to Improve Program Performance</title>
<link>https://hdl.handle.net/1721.1/149759</link>
<description>Exploiting Specifications to Improve Program Performance
Vandevoorde, Mark T.
Although programmers benefit from interface specifications when reasoning about programs, existing compilers do not.  In this thesis, I discuss how to incorporate specifications into a programming language to improve performance.  I use specifications in
</description>
<pubDate>Tue, 01 Feb 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149759</guid>
<dc:date>1994-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Run-time Thread Management for Large-Scale Distributed-Memory Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149758</link>
<description>Run-time Thread Management for Large-Scale Distributed-Memory Multiprocessors
Nussbaum, Daniel
Effective thread management is crucial to achieving good performance on large-scale distributed-memory multiprocessors that support dynamic threads.  For a given parallel computation with some associated task constraints imposed by the task graph, a thread-management algorithm produces a running schedule as output, subject to the precedence constraints imposed by the task graph and the constraints imposed by the interprocessor communications network.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149758</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compiler Analysis to Implement Point-to-Point Synchronization in Parallel Programs</title>
<link>https://hdl.handle.net/1721.1/149757</link>
<description>Compiler Analysis to Implement Point-to-Point Synchronization in Parallel Programs
Nguyen, John
The shared-memory data-parallel model presents an attractive interface for programming multiprocessors by allowing for easy management of parallel tasks while hiding details of the underlying machine architecture.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149757</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Closing the Window of Vulnerability in Multiphase Memory Transactions: The Alewife Transaction Store</title>
<link>https://hdl.handle.net/1721.1/149756</link>
<description>Closing the Window of Vulnerability in Multiphase Memory Transactions: The Alewife Transaction Store
Kubiatowicz, John David
Multiprocessor architects have begun to explore several mechanisms such as prefetching, context-switching and software-assisted dynamic cache-coherence, which transform single-phase memory transactions in conventional memory systems into multi-phase operations.
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149756</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Language Identification Using a Segment-Based Approach</title>
<link>https://hdl.handle.net/1721.1/149755</link>
<description>Automatic Language Identification Using a Segment-Based Approach
Hazen, Timothy J.
Automatic Language Identification (ALI) is the problem of automatically identifying the language of an utterance through the use of a computer.  In 1977, House and Neuburg proposed an approach to ALI which focused on the phonotactic constraints of different languages.
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149755</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Correctness of Communications Protocols: A Case Study</title>
<link>https://hdl.handle.net/1721.1/149754</link>
<description>Correctness of Communications Protocols: A Case Study
Søgaard-Andersen, Jørgen; Lynch, Nancy A.; Lampson, Butler W.
During the past few years, the technology for formal specification and verification of communication protocols has matured to the point where we believe that it now provides practical assistance for protocol design and validation.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149754</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Correctness Proof for a Network Synchronizer</title>
<link>https://hdl.handle.net/1721.1/149753</link>
<description>Correctness Proof for a Network Synchronizer
Devarajan, Harish; Fekete, Alan; Lynch, Nancy A.; Shrira, Liuba
In this paper we offer a formal, rigorous proof of the correctness of Awerbuch's algorithm for network synchronization [1]. We specify both the algorithm and the correctness condition using the I/O automaton model.
</description>
<pubDate>Wed, 01 Dec 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149753</guid>
<dc:date>1993-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Liveness in Timed and Untimed Systems</title>
<link>https://hdl.handle.net/1721.1/149752</link>
<description>Liveness in Timed and Untimed Systems
Gawlick, Rainer; Segala, Roberto; Søgaard-Andersen, Jørgen; Lynch, Nancy A.
When proving the correctness of algorithms in distributed systems, one generally considers safety conditions and liveness conditions. The Input/Output (I/O) automaton model and its timed version have been used successfully, but have focused on safety conditions and on a restricted form of liveness called fairness.
</description>
<pubDate>Wed, 01 Dec 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149752</guid>
<dc:date>1993-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Wires: Overcoming Pin Limitations in FPGA-based Logic Emulation</title>
<link>https://hdl.handle.net/1721.1/149751</link>
<description>Virtual Wires: Overcoming Pin Limitations in FPGA-based Logic Emulation
Babb, Jonathan William
Existing FPGA-based logic emulators are limited by inter-chip communication bandwidth, resulting in low gate utilization (10 to 20 percent of usable gates).  This resource imbalance increases the number of chips needed to emulate a particular logic design and thereby decreases emulation speed, since signals must cross more chip boundaries.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149751</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reordering with Hindsight</title>
<link>https://hdl.handle.net/1721.1/149750</link>
<description>Reordering with Hindsight
Spiers, Bradford T.
This report presents the reordering technique for parallel debugging. This technique is useful for debugging ordering errors, caused when actions a programmer meant to occur in a specific order occur in a different, unintended order.
</description>
<pubDate>Fri, 01 Oct 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149750</guid>
<dc:date>1993-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-stabilization By Local Checking and Correction</title>
<link>https://hdl.handle.net/1721.1/149749</link>
<description>Self-stabilization By Local Checking and Correction
Varghese, George
A self-stabilizing protocol begins to behave correctly in bounded time, no matter what state it starts in.  Self-stabilization abstracts the ability to tolerate arbitrary faults that stop.  This thesis describes a simple paradigm called local checking and correction for the design of stabilizing network protocols.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149749</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cache Performance of Garbage-collected Programming Languages</title>
<link>https://hdl.handle.net/1721.1/149748</link>
<description>Cache Performance of Garbage-collected Programming Languages
Reinhold, Mark B.
As processor speeds continue to improve relative to main-memory access times, cache performance is becoming an increasingly important component of program performance.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149748</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structured Video: A Data Type with Content-based Access</title>
<link>https://hdl.handle.net/1721.1/149747</link>
<description>Structured Video: A Data Type with Content-based Access
Duda, Andrzej; Weiss, Ron
We describe structured video, a general video data model allowing free form annotation, composition, and content-based access to video segments. The structured video abstraction provides an efficient means of organizing and manipulating video data by assigning logical representations to the underlying video streams and their contents.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149747</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fair Cryptosystems</title>
<link>https://hdl.handle.net/1721.1/149746</link>
<description>Fair Cryptosystems
Micali, Silvio
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149746</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Content Routing System for Distributed Information Systems</title>
<link>https://hdl.handle.net/1721.1/149745</link>
<description>A Content Routing System for Distributed Information Systems
Sheldon, Mark A.; Duda, Andrzej; Weiss, Ron; O'Toole, James; Gifford, David K.
We describe the first system that provides query based associative access to the contents of distributed information servers. Queries describe desired object attributes, and are automatically forwarded to servers that contain relevant information.
</description>
<pubDate>Tue, 01 Jun 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149745</guid>
<dc:date>1993-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>I-95: The Information Market</title>
<link>https://hdl.handle.net/1721.1/149744</link>
<description>I-95: The Information Market
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149744</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Garbage Collection in a Client-server, Transaction, Persistent Object System</title>
<link>https://hdl.handle.net/1721.1/149743</link>
<description>Distributed Garbage Collection in a Client-server, Transaction, Persistent Object System
Maheshwari, Umesh
We present a design for distributed garbage collection in an object-oriented database system called Thor. Garbage collection in Thor is different from that in conventional distributed systems because Thor has a client-server architecture, in which clients fetch copies of objects from multiple servers and run transactions.
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149743</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Orthogonal Persistence: A Simple Optimization Based on Replicating Collection</title>
<link>https://hdl.handle.net/1721.1/149742</link>
<description>Implementing Orthogonal Persistence: A Simple Optimization Based on Replicating Collection
Nettles, Scott; O'Toole, James
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149742</guid>
</item>
<item>
<title>Concurrent Garbage Collection of Persistent Heaps</title>
<link>https://hdl.handle.net/1721.1/149741</link>
<description>Concurrent Garbage Collection of Persistent Heaps
O'Toole, James; Nettles, Scott; Gifford, David K.
We describe the first concurrent compacting garbage collector for a persistent heap.  Client threads read and write the heap in primary memory, and can independently commit or abort their write operations.
</description>
<pubDate>Tue, 01 Jun 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149741</guid>
<dc:date>1993-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-time Replication GC: An Implementation Report</title>
<link>https://hdl.handle.net/1721.1/149740</link>
<description>Real-time Replication GC: An Implementation Report
O'Toole, James; Nettles, Scott
</description>
<pubDate>Thu, 01 Apr 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149740</guid>
<dc:date>1993-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Logical Disk: A Simple New Approach to Improving File System Performance</title>
<link>https://hdl.handle.net/1721.1/149739</link>
<description>Logical Disk: A Simple New Approach to Improving File System Performance
de Jonge, Wiebren; Kaashoek, M. Frans; Hsieh, Wilson C.
Making a file system efficient usually requires extensive modifications.  For example, making a file system log-structured requires the introduction of new data structures that are tightly coupled with the general file system code.
</description>
<pubDate>Thu, 01 Apr 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149739</guid>
<dc:date>1993-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Evaluation of Multiprocessor Support for Fine-grain Synchronization in Preconditioned Conjugate Gradient</title>
<link>https://hdl.handle.net/1721.1/149738</link>
<description>An Evaluation of Multiprocessor Support for Fine-grain Synchronization in Preconditioned Conjugate Gradient
Yeung, Donald
This thesis explores the use of fine-grain synchronization in the preconditioned conjugate gradient (PCG) method using the modified incomplete Cholesky factorization of the coefficient matrix as a preconditioner.
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149738</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Category of Functors from State Shapes to Bottomless CPOs is Adequate for Block Structure</title>
<link>https://hdl.handle.net/1721.1/149737</link>
<description>The Category of Functors from State Shapes to Bottomless CPOs is Adequate for Block Structure
Lent, Arthur Franklin
We present a programming language EoA, which embodies what Reynolds has described as the ``essence of ALGOL.''  In particular, EoA allows higher-order procedures and the declaration of block structured local variables.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149737</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Constructive Approach to Artificial Intelligence Reexamined</title>
<link>https://hdl.handle.net/1721.1/149736</link>
<description>A Constructive Approach to Artificial Intelligence Reexamined
Ramstad, Robert Matthew
Made-Up Minds:  A Constructivist Approach to Artificial Intelligence, a Ph.D. thesis by Gary Drescher (MIT, Computer Science, September 1989) and a book published by MIT Press (1991) describe a learning system which controls a simulated robot and gathers information about causes and effects for various actions within the software simulated world the robot inhabits.
</description>
<pubDate>Wed, 01 Jul 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149736</guid>
<dc:date>1992-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Family Values: A Behavioral Notion of Subtyping</title>
<link>https://hdl.handle.net/1721.1/149735</link>
<description>Family Values: A Behavioral Notion of Subtyping
Liskov, Barbara; Wing, Jeannette M.
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149735</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A History of CLU</title>
<link>https://hdl.handle.net/1721.1/149734</link>
<description>A History of CLU
Liskov, Barbara H.
The idea of a data abstraction has had a significant impact on the development of programming languages and on programming methodology.  CLU was the first implemented programming language to provide direct linguistic support for data abstraction.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149734</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Process Algebraic View of I/O Automata</title>
<link>https://hdl.handle.net/1721.1/149733</link>
<description>A Process Algebraic View of I/O Automata
Segala, Roberto
The Input/Output Automata formalism of Lynch and Tuttle is a widely used framework for the specification and verification of concurrent algorithms. Unfortunately, it has never been provided with an algebraic characterization, a formalization which has been fundamental for the success of theories like CSP, CCS and ACP.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149733</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent Timestamping Made Simple</title>
<link>https://hdl.handle.net/1721.1/149732</link>
<description>Concurrent Timestamping Made Simple
Gawlick, Rainer
Concurrent Timestamp Systems  (CTSS) allow processes to temporally order concurrent events in an asynchronous shared memory system. Bounded memory constructions of a CTSS are extremely powerful tools for concurrency control, and are the basis for solutions to many coordination problems including mutual exclusion, randomized consensus, and multiwriter multireader atomic registers.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149732</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compiler-directed Storage Reclamation Using Object Lifetime Analysis</title>
<link>https://hdl.handle.net/1721.1/149731</link>
<description>Compiler-directed Storage Reclamation Using Object Lifetime Analysis
Hicks, James Edward, Jr.
Many heap-oriented languages such as Lisp and Id depend on run-time garbage collection to reclaim storage.  Garbage collection can be a significant run-time expense, especially for functional languages that tend to allocate structures often.
</description>
<pubDate>Sun, 01 Nov 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149731</guid>
<dc:date>1992-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Managing Storage for Multithreaded Computations</title>
<link>https://hdl.handle.net/1721.1/149730</link>
<description>Managing Storage for Multithreaded Computations
Blumofe, Robert D.
Multithreading has become a dominant paradigm in general purpose MIMD parallel computation.  To execute a multithreaded computation on a parallel computer, a scheduler must order and allocate threads to run on the individual processors.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149730</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance Assertion Checking</title>
<link>https://hdl.handle.net/1721.1/149729</link>
<description>Performance Assertion Checking
Perl, Sharon Esther
Performance assertion checking  is an approach to describing and monitoring the performance of complex software systems.  The idea is simple:  system implementors write assertions that capture their expectations for performance, the system is instrumented to collect performance data, and then the assertions are checked automatically against the data to detect violations signifying potential performance bugs.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149729</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proceedings of the 1992 MIT Student Workshop on VLSI and Parallel Systems</title>
<link>https://hdl.handle.net/1721.1/149728</link>
<description>Proceedings of the 1992 MIT Student Workshop on VLSI and Parallel Systems
Leiserson, C.E.
Proceedings of the 1992 MIT Student Workshop on VLSI and Parallel Systems. The papers in this volume were submitted to the 1992 MIT Student Workshop on VLSI and Parallel Systems. The workshop was organized by the VLSI and Parallel Systems Group at MIT to promote an interchange of ideas among the various research activities at MIT in VLSI and parallel systems.  It was held on July 21, 1992 at the MIT Endicott House in Dedham, Massachusetts. Of the 54 papers in these proceedings, 16 were chosen for presentation at the workshop. These papers are marked with an asterisk.
</description>
<pubDate>Wed, 01 Jul 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149728</guid>
<dc:date>1992-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Video Coding and the Application Level Framing Protocol Architecture</title>
<link>https://hdl.handle.net/1721.1/149727</link>
<description>Video Coding and the Application Level Framing Protocol Architecture
Heybey, Andrew T.
As networks and computers become faster, real time video transmission is expected to become common.  Variable bit rate video coders will be used in order to take advantage of the statistical multiplexing gain and bandwidth efficiency of packet switched networks.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149727</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>PIPES: Linguistic Support for Ordered Asynchronous Invocations</title>
<link>https://hdl.handle.net/1721.1/149726</link>
<description>PIPES: Linguistic Support for Ordered Asynchronous Invocations
Colbrook, Adrian; Brewer, Eric A.; Hsieh, Wilson C.; Wang, Paul; Weihl, William E.
We describe pipes, a new linguistic mechanism for sequences of ordered asynchronous procedure calls in multiprocessor systems.  Pipes allow a sequence of remote invocations to be performed in order, but asynchronously with respect to the calling thread.
</description>
<pubDate>Wed, 01 Apr 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149726</guid>
<dc:date>1992-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Organization of Systems with Bussed Interconnections</title>
<link>https://hdl.handle.net/1721.1/149725</link>
<description>Organization of Systems with Bussed Interconnections
Kipnis, Shlomo
This thesis explores using busses in communication architectures and control structures.  First, we investigate the organization of permutation architectures with bussed interconnections.  We explore how to efficiently permute data among VLSI chips in accordance with a predetermined set of permutations.
</description>
<pubDate>Sun, 01 Mar 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149725</guid>
<dc:date>1992-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Exploring an Unknown Graph</title>
<link>https://hdl.handle.net/1721.1/149724</link>
<description>Algorithms for Exploring an Unknown Graph
Betke, Margrit
We consider the problem of exploring an unknown strongly connected directed graph.  We use the exploration model introduced by Deng and Papadimitriou [DP90].  An explorer follows the edges of an unknown graph until she has seen all the edges and vertices of the graph.
</description>
<pubDate>Sun, 01 Mar 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149724</guid>
<dc:date>1992-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Sample Complexity of PAC-learning using Random and Chosen Examples</title>
<link>https://hdl.handle.net/1721.1/149723</link>
<description>On the Sample Complexity of PAC-learning using Random and Chosen Examples
Eisenberg, Bronwyn Bonnie
Two protocols used for learning under the PAC-learning model introduced by Valiant are learning from random examples and learning from membership queries.  Membership queries have been used to efficiently and exactly learn a concept class C that is too difficult to PAC-learn using random examples.
</description>
<pubDate>Sun, 01 Mar 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149723</guid>
<dc:date>1992-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Speaker Variability and Imposing Speaker Constraints in Phonetic Classification</title>
<link>https://hdl.handle.net/1721.1/149722</link>
<description>Modeling Speaker Variability and Imposing Speaker Constraints in Phonetic Classification
Niyogi, Partha
This thesis deals with intra-speaker correlation analyses of speech sounds, and the possible utilization of this correlation in speech recognition.  Current approaches to phonetic classification, regardless of whether they use context-dependent or -independent models, achieve classification based on locally optimum criteria.
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149722</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Distributed Data-balanced Dictionary Based on the B-link tree</title>
<link>https://hdl.handle.net/1721.1/149721</link>
<description>A Distributed Data-balanced Dictionary Based on the B-link tree
Johnson, Theodore; Colbrook, Adrian
Many concurrent dictionary data structures have been proposed, but usually in the context of shared memory multiprocessors.  In this paper, we present an algorithm for a concurrent distributed B-tree that can be implemented on message passing paralle
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149721</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design and Implementation of a Parallel Persistent Object System</title>
<link>https://hdl.handle.net/1721.1/149720</link>
<description>The Design and Implementation of a Parallel Persistent Object System
Heytens, Michael L.
This report describes Anga, an experimental persistent object system that we have developed that utilizes parallelism in a fundamental way to enhance performance.  Parallelism is incorporated into the design of the system at all levels.  We begin wit
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149720</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>File Systems with Multiple File Implementations</title>
<link>https://hdl.handle.net/1721.1/149719</link>
<description>File Systems with Multiple File Implementations
Stata, Raymie
This thesis proposes ideas for designing file system software for the large, high-performance file server hardware we feel will be common in the middle to late nineties.  In particular, the thesis examines the value and pragmatics of file systems wit
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149719</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preventing Recursion Deadlock in Concurrent Object-oriented Systems</title>
<link>https://hdl.handle.net/1721.1/149718</link>
<description>Preventing Recursion Deadlock in Concurrent Object-oriented Systems
Brewer, Eric A.; Waldspurger, Carl A.
This paper presents solutions to the problem of deadlock due to recursion in concurrent object-oriented programming languages.  Two language-independent, system-level mechanisms for solving this problem are proposed:  a novel technique using multi-ported objects, and a named-threads scheme that borrows from previous work in distributed computing.  We compare the solutions and present an analysis of their relative merits.
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149718</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Full Abstraction and the Context Lemma 1</title>
<link>https://hdl.handle.net/1721.1/149717</link>
<description>Full Abstraction and the Context Lemma 1
Jim, Trevor; Meyer, Albert R.
It is impossible to add a combinator to PCF to achieve full abstraction for models such as Berry's stable domains in a way analogous to the addition of the "parallel-or" combinator that achieves full abstraction for the familiar cpo model.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149717</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Reader-writer Locks for Parallel Systems</title>
<link>https://hdl.handle.net/1721.1/149716</link>
<description>Scalable Reader-writer Locks for Parallel Systems
Hsieh, William C.; Weihl, William E.
Current algorithms for reader-writer synchronization exhibit poor scalability because they do not allow readers to acquire locks independently.  We describe two new algorithms for reader-writer synchronization that allow parallelism among readers during lock acquisition.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149716</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>PRELUDE: A System for Portable Parallel Software</title>
<link>https://hdl.handle.net/1721.1/149715</link>
<description>PRELUDE: A System for Portable Parallel Software
Weihl, William Edward; Brewer, Eric A.; Colbrook, Adrian; Dellarocas, Chrysanthos N.; Hsieh, Wilson; Joseph, Anthony; Waldspurger, Carl; Wang, Paul
This paper describes PRELUDE, a programming language and accompanying system support for writing portable MIMD parallel programs.  PRELUDE supports a methodology for designing and organizing parallel programs that makes them easier to tune for particular architectures and to port to new  architectures.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149715</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Real-time Cost of Timing Uncertainty: Consensus and Failure Detection</title>
<link>https://hdl.handle.net/1721.1/149714</link>
<description>The Real-time Cost of Timing Uncertainty: Consensus and Failure Detection
Ponzio, Stephen J.
In real distributed systems, processes may have only inexact information about the amount of real time needed for primitive operations such as process steps.  This thesis studies the effect of this timing uncertainty on the real-time behavior of distributed systems.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149714</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Search Trees on Message-passing Architectures</title>
<link>https://hdl.handle.net/1721.1/149713</link>
<description>Algorithms for Search Trees on Message-passing Architectures
Colbrook, Adrian; Brewer, Eric A.; Dellarocas, Chrysanthos N.; Weihl, William E.
In this paper we describe a new algorithm for maintaining a balanced search tree on a message-passing MIMD architecture; the algorithm is particularly well suited for implementation on a small number of processors.
</description>
<pubDate>Sun, 01 Sep 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149713</guid>
<dc:date>1991-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proteus: A High-performance Parallel-architecture Simulator</title>
<link>https://hdl.handle.net/1721.1/149712</link>
<description>Proteus: A High-performance Parallel-architecture Simulator
Brewer, Eric A.; Dellarocas, Chrysanthos N.; Colbrook, Adrian; Weihl, William E.
PROTEUS is a high-performance simulator for MIMD multiprocessors.  It is fast, accurate, and flexible:  it is one to two orders of magnitude faster than comparable simulators, it can reproduce results from real multiprocessors, and it is easily configured to simulate a wide range of architectures.
</description>
<pubDate>Sun, 01 Sep 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149712</guid>
<dc:date>1991-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of Distinctive Features for Automatic Speech Recognition</title>
<link>https://hdl.handle.net/1721.1/149711</link>
<description>The Use of Distinctive Features for Automatic Speech Recognition
Meng, Helen Mei-Ling
One of the most critical and yet unsolved problems in phonetic recognition is the transformation of the continuous speech signal to a discrete representation for accessing words in the lexicon. In order to find an efficient description of speech for recognition tasks, our research investigates the use of distinctive features.
</description>
<pubDate>Sun, 01 Sep 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149711</guid>
<dc:date>1991-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Secure Computation (Preliminary Report)</title>
<link>https://hdl.handle.net/1721.1/149710</link>
<description>Secure Computation (Preliminary Report)
Micali, Silvio; Rogaway, Phillip
We define what it means for a network of communicating players to securely compute a function of privately held inputs. Intuitively, we wish to correctly compute its value in a manner which protects the privacy of each player's contribution, even though a powerful adversary may endeavor to disrupt this enterprise.
</description>
<pubDate>Thu, 01 Aug 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149710</guid>
<dc:date>1991-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Information-theoretical Approach to Studying Phoneme Collocational Constraints</title>
<link>https://hdl.handle.net/1721.1/149709</link>
<description>An Information-theoretical Approach to Studying Phoneme Collocational Constraints
Kassel, Robert Howard
This thesis describes a lexical study of phoneme collocational constraints using a metric motivated by information theory.  Phonologists have described the permissible combinations of phonemes in the form of phonotactic rules. They have shown that these rules often can be expressed in terms of phoneme equivalence classes.
</description>
<pubDate>Mon, 01 Jul 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149709</guid>
<dc:date>1991-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Approximate Graph Coloring</title>
<link>https://hdl.handle.net/1721.1/149708</link>
<description>Algorithms for Approximate Graph Coloring
Blum, Avrim
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149708</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A High-performance Retargetable Simulator for Parallel Architectures</title>
<link>https://hdl.handle.net/1721.1/149707</link>
<description>A High-performance Retargetable Simulator for Parallel Architectures
Dellarocas, Chrysanthos N.
The complexity of the interaction between software and hardware in MIMD machines makes experimental evaluation of parallel programs an important complement to theoretical analysis. Traditional techniques used to monitor the direct execution of programs are intrusive and may lead to inaccurate results when applied to parallel programs.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149707</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge Representation for Supporting Decision Model Formulation in Medicine</title>
<link>https://hdl.handle.net/1721.1/149706</link>
<description>Knowledge Representation for Supporting Decision Model Formulation in Medicine
Leong, Tze-Yun
Clinical decision making involves a large, complex, and ever-changing body of knowledge.  Characterizing such knowledge illuminates the representational and computational requirements for automated clinical decision analysis.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149706</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Round Complexity of Secure Protocols</title>
<link>https://hdl.handle.net/1721.1/149705</link>
<description>The Round Complexity of Secure Protocols
Rogaway, Phillip
Assume we have a network of three or more players, each player in possession of some private input. The players want to compute some function of these private inputs, but in a way which protects the privacy of each participant's contribution.
</description>
<pubDate>Mon, 01 Apr 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149705</guid>
<dc:date>1991-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance Tradeoffs in Multithreaded Processors</title>
<link>https://hdl.handle.net/1721.1/149704</link>
<description>Performance Tradeoffs in Multithreaded Processors
Agarwal, Anant
High network latencies in large-scale multiprocessors can cause a significant drop in processor utilization.  By maintaining multiple process contexts in hardware and switching among them in a few cycles, multithreaded processors can overlap computation with memory accesses and reduce processor idle time.
</description>
<pubDate>Mon, 01 Apr 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149704</guid>
<dc:date>1991-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Randomness and Robustness in Hypercube Computation</title>
<link>https://hdl.handle.net/1721.1/149703</link>
<description>Randomness and Robustness in Hypercube Computation
Newman, Mark Joseph
In this thesis we explore means by which hypercubes can compute despite faulty processors and links.  We also study techniques which enable hypercubes to simulate dynamically changing networks and data structures.
</description>
<pubDate>Mon, 01 Apr 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149703</guid>
<dc:date>1991-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Waiting Algorithms for Synchronization in Large-scale Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149702</link>
<description>Waiting Algorithms for Synchronization in Large-scale Multiprocessors
Lim, Beng-Hong; Agarwal, Anant
Through analysis and experiments, this paper investigates two-phase waiting algorithms to minimize the cost of waiting for synchronization in large-scale multiprocessors. In a two-phase algorithm, a thread first waits by polling a synchronization variable.
</description>
<pubDate>Fri, 01 Feb 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149702</guid>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Evaluation of Concurrent Priority Queue Algorithms</title>
<link>https://hdl.handle.net/1721.1/149701</link>
<description>An Evaluation of Concurrent Priority Queue Algorithms
Huang, Qin
The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs.
</description>
<pubDate>Fri, 01 Feb 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149701</guid>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Spectral Norm of Finite Functions</title>
<link>https://hdl.handle.net/1721.1/149700</link>
<description>The Spectral Norm of Finite Functions
Bellare, Mihir
In many recent results in learning and computational complexity theory which rely on Fourier analysis, the spectral norm plays a key role.  An understanding of this quantity would appear to be useful in both gauging and exploiting these results, and in understanding the underlying techniques.
</description>
<pubDate>Fri, 01 Feb 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149700</guid>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>TCP Packet Trace Analysis</title>
<link>https://hdl.handle.net/1721.1/149699</link>
<description>TCP Packet Trace Analysis
Shepard, Timothy Jason
Examination of a trace of packets collected from the network is often the only method available for diagnosing protocol performance problems in computer networks.  This thesis explores the use of packet traces to diagnose performance problems of the transport protocol TCP.
</description>
<pubDate>Fri, 01 Feb 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149699</guid>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cache Coherence Protocols for Large-Scale Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149698</link>
<description>Cache Coherence Protocols for Large-Scale Multiprocessors
Chaiken, David Lars
Caches have the potential to provide multiprocessors with an automatic mechanism for reducing both network traffic and average memory access latency.  However, cache-based systems must address the problem of cache coherence.
</description>
<pubDate>Sat, 01 Sep 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149698</guid>
<dc:date>1990-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Incremental Type Inference System for the Programming Language ID</title>
<link>https://hdl.handle.net/1721.1/149697</link>
<description>An Incremental Type Inference System for the Programming Language ID
Gupta, Shail Aditya
Modern computing environments strive to be robust and reliable, and at the same time, aim at providing enough flexibility to an interactive user to edit, debug, and test programs easily and efficiently.
</description>
<pubDate>Thu, 01 Nov 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149697</guid>
<dc:date>1990-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specification and Verification of Real-time Constraints in Coarse-grain Dataflow</title>
<link>https://hdl.handle.net/1721.1/149696</link>
<description>Specification and Verification of Real-time Constraints in Coarse-grain Dataflow
Henry, Dana S.
We present a method for verifying real-time constraints in a distributed, coarse-grain dataflow environment, starting with a program which has already been allocated onto a machine.  The user specifies the timing of each module together with real-time constraints, and we verify the constraints.
</description>
<pubDate>Wed, 01 May 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149696</guid>
<dc:date>1991-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Retiming Synchronous Circuitry and Mixed-integer Optimization</title>
<link>https://hdl.handle.net/1721.1/149695</link>
<description>On Retiming Synchronous Circuitry and Mixed-integer Optimization
Papaefthymiou, Marios Christos
In this paper we investigate properties of retiming, a circuit transformation which preserves the behavior of the circuit as a whole.  We present an algorithm which transforms a given combinational circuit into a functionally equivalent pipelined circuit with minimum latency and clock-period no greater than a given upper bound c.
</description>
<pubDate>Sat, 01 Sep 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149695</guid>
<dc:date>1990-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lazy Replication: Exploiting the Semantics of Distributed Services</title>
<link>https://hdl.handle.net/1721.1/149694</link>
<description>Lazy Replication: Exploiting the Semantics of Distributed Services
Ladin, Rivka; Liskov, Barbara; Shrira, Liuba; Ghemawat, Sanjay
To provide high availability for services such as mail or bulletin boards, data must be replicated.  One way to guarantee consistency of replicated data is to force service operations to occur in the same order at all sites, but this approach is expensive.
</description>
<pubDate>Sun, 01 Jul 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149694</guid>
<dc:date>1990-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Implementation of a Packet Switched Routing Chip</title>
<link>https://hdl.handle.net/1721.1/149693</link>
<description>Design and Implementation of a Packet Switched Routing Chip
Joerg, Christopher Frank
Monsoon is a parallel processing dataflow computer that will require a high bandwidth interconnection network.  A packet switched routing chip (PaRC) is described that will be used as the basis of this network.  PaRC is a 4 by 4 routing switch which has been designed and fabricated as a CMOS gate array.
</description>
<pubDate>Sat, 01 Dec 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149693</guid>
<dc:date>1990-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Complexity of Computing Algebraic Functions</title>
<link>https://hdl.handle.net/1721.1/149692</link>
<description>On the Complexity of Computing Algebraic Functions
Mansour, Yishay
This research addresses the problem of proving lower bounds on the complexity of algebraic computations involving the floor operation.  The model of computation considered is a computation tree with the set of basic operations {+, -, *, /, ⌊·⌋}. The constants available to the computation are 0 and 1, and every other constant needs to be generated explicitly.
</description>
<pubDate>Sat, 01 Sep 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149692</guid>
<dc:date>1990-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of the Held-Karp Heuristic for the Traveling Salesman Problem</title>
<link>https://hdl.handle.net/1721.1/149691</link>
<description>Analysis of the Held-Karp Heuristic for the Traveling Salesman Problem
Williamson, D.P.
The Held-Karp heuristic for the Traveling Salesman Problem (TSP) has in practice provided near-optimal lower bounds on the cost of solutions to the TSP.  We analyze the structure of Held-Karp solutions in order to shed light on their quality.
</description>
<pubDate>Fri, 01 Jun 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149691</guid>
<dc:date>1990-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient At-most-once Messages Based on Synchronized Clocks</title>
<link>https://hdl.handle.net/1721.1/149690</link>
<description>Efficient At-most-once Messages Based on Synchronized Clocks
Liskov, Barbara; Shrira, Liuba; Wroclawski, John
This paper describes a new message passing protocol that provides guaranteed detection of duplicate messages even when the receiver has no state stored for the sender.
</description>
<pubDate>Sun, 01 Apr 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149690</guid>
<dc:date>1990-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disconnected Actions: An Asynchronous Extension to a Nested Atomic Action System</title>
<link>https://hdl.handle.net/1721.1/149689</link>
<description>Disconnected Actions: An Asynchronous Extension to a Nested Atomic Action System
Ben-Zvi, Boaz
Nested transactions, a generalization of atomic transactions, provide a uniform mechanism for coping with failures and obtaining concurrency within an action.
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149689</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Replication for Highly Available Services</title>
<link>https://hdl.handle.net/1721.1/149688</link>
<description>Automatic Replication for Highly Available Services
Ghemawat, Sanjay
Replicating various components of a system is a common technique for providing highly available services in the presence of failures.  A replication scheme is a mechanism for organizing these replicas so that as a group they provide a service that has the same semantics as the original unreplicated service. Viewstamped replication is a new replication scheme for providing high availability.
</description>
<pubDate>Thu, 01 Mar 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149688</guid>
<dc:date>1990-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rate-based Congestion Control in Networks with Smart Links</title>
<link>https://hdl.handle.net/1721.1/149687</link>
<description>Rate-based Congestion Control in Networks with Smart Links
Heybey, Andrew Tyrrell
I use a network simulator to explore rate-based congestion control in networks with "smart" links that can feed back information to tell senders to adjust their transmission rates. This method differs in a very important way from congestion control in
</description>
<pubDate>Mon, 01 Jan 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149687</guid>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>ML with Extended Pattern Matching and Subtypes</title>
<link>https://hdl.handle.net/1721.1/149686</link>
<description>ML with Extended Pattern Matching and Subtypes
Jategaonkar, Lalita A.
We extend a fragment of the programming language ML by incorporating a more general form of record pattern matching and providing for user-declared subtypes. Together, these two enhancements may be used to support a restricted form of object-oriented programming.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149686</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Reasoning in the Domain of Genetic Counseling</title>
<link>https://hdl.handle.net/1721.1/149685</link>
<description>Probabilistic Reasoning in the Domain of Genetic Counseling
Harris, Nomi L.
This paper describes a program, GENINFER, which uses belief networks to calculate risks of inheriting genetic disorders.  GENINFER is based on Judea Pearl's [17] algorithm for fusion and propagation in probabilistic belief networks.  These networks allow the effects of various pieces of information to be propagated and fused in such a way that, when equilibrium is reached, each proposition can be assigned a degree of belief consistent with the axioms of probability theory.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149685</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Should a Function Continue?</title>
<link>https://hdl.handle.net/1721.1/149684</link>
<description>Should a Function Continue?
Riecke, Jon Gary
We show that two λ-calculus terms can be observationally congruent (i.e., agree in all contexts) but their continuation-passing transforms may not be.  We also show that two terms may be congruent in all untyped contexts but fail to be congruent in a language with call/cc operators, and that two terms may have the same meaning in a direct semantics but different meanings in a continuation semantics.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149684</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Typechecking is Undecidable when 'Type' is a Type</title>
<link>https://hdl.handle.net/1721.1/149683</link>
<description>Typechecking is Undecidable when 'Type' is a Type
Reinhold, Mark B.
A function has a dependent type when the type of its result depends upon the value of its argument. The type of all types is the type of every type, including itself. In a typed λ-calculus, these two features synergize in a conceptually clean and uniform way to yield enormous expressive power at very little apparent cost. By reconstructing and analyzing a paradox due to Girard, we argue that there is no effective typechecking algorithm for such a language.
</description>
<pubDate>Fri, 01 Dec 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149683</guid>
<dc:date>1989-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Cycles and Scaling in Parallel Algorithms</title>
<link>https://hdl.handle.net/1721.1/149682</link>
<description>Using Cycles and Scaling in Parallel Algorithms
Stein, Clifford
We introduce the technique of decomposing an undirected graph by finding a maximal set of edge-disjoint cycles.  We give a parallel algorithm to find this decomposition in O(log n) time on (m + n)/log n processors.  We then use this decomposition to give the first efficient parallel algorithm for finding an approximation to a minimum cycle cover.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149682</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>ParaTran: A Transparent, Transaction Based Runtime Mechanism for Parallel Execution of Scheme</title>
<link>https://hdl.handle.net/1721.1/149681</link>
<description>ParaTran: A Transparent, Transaction Based Runtime Mechanism for Parallel Execution of Scheme
Katz, Morry
The number of applications requiring high speed symbolic computation and the performance requirements of these projects are both rapidly increasing.  However, the computer science community's ability to produce high performance uniprocessor hardware is being outstripped by these needs. Therefore, we propose a unique multiprocessing solution to the high speed, symbolic computation problem. Our approach is to develop a transparent runtime mechanism for executing standard, sequential Lisp code on a multiprocessor computer.
</description>
<pubDate>Sat, 01 Jul 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149681</guid>
<dc:date>1989-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimistic Concurrency Control for Nested Distributed Transactions</title>
<link>https://hdl.handle.net/1721.1/149680</link>
<description>Optimistic Concurrency Control for Nested Distributed Transactions
Gruber, Robert Edward
Optimistic concurrency control techniques allow atomic transactions (or actions for short) to execute without synchronization, relying on commit-time validation to ensure serializability.  Previous work in this area has focused on single-level actions. This thesis extends previous work on optimistic concurrency control to distributed systems with nested actions.
</description>
<pubDate>Thu, 01 Jun 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149680</guid>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Study of Backoff Barrier Synchronization</title>
<link>https://hdl.handle.net/1721.1/149679</link>
<description>A Study of Backoff Barrier Synchronization
Cherian, Mathews Malieakkal
Shared-memory multiprocessors commonly use shared variables for synchronization.  Simulations of real parallel applications show that large-scale cache-coherent multiprocessors suffer significant amounts of invalidation traffic due to synchronization. Large multiprocessors that do not cache synchronization variables are often more severely impacted.
</description>
<pubDate>Thu, 01 Jun 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149679</guid>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Network Layer Protocols with Byzantine Robustness</title>
<link>https://hdl.handle.net/1721.1/149678</link>
<description>Network Layer Protocols with Byzantine Robustness
Perlman, Radia
The Network Layer of a network architecture is a distributed protocol that facilitates packet delivery across multiple hops.  One of its chief functions is the calculation of routes throughout the network.  Traditional Network Layer protocols have addressed robustness in the face of simple failures, i.e., nodes or links becoming inoperative. This thesis examines Network Layer protocol designs that are robust in the presence of Byzantine failures, i.e., nodes that through malice or malfunction exhibit arbitrary behavior such as corrupting, forging, or delaying routing protocol messages.
</description>
<pubDate>Sat, 01 Oct 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149678</guid>
<dc:date>1988-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Code-mapping Policies for the Tagged-token Dataflow Architecture</title>
<link>https://hdl.handle.net/1721.1/149677</link>
<description>Code-mapping Policies for the Tagged-token Dataflow Architecture
Maa, Gino K.
Multiprocessing seems to be the only viable way to gain significant speedup beyond that afforded by performance advances in semiconductor devices and hardware construction, which are beginning to face the limitations of physics.  Although it is relatively easy to improve the "raw" computational performance of a system simply by adding more processors to it, the far more difficult task is to ensure that the additional resources actually reduce a program's computing time.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149677</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Fault-tolerant Network Kernel for Linda</title>
<link>https://hdl.handle.net/1721.1/149676</link>
<description>A Fault-tolerant Network Kernel for Linda
Xu, Andrew S.
The parallel programming system Linda consists of a number of processes and a shared memory called the tuple space.  In a distributed implementation of Linda, processes and the tuple space reside on different computing nodes connected by a communications network subject to a variety of node and network failures.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149676</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Viewstamped Replication for Highly Available Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/149675</link>
<description>Viewstamped Replication for Highly Available Distributed Systems
Oki, Brian Masao
This dissertation presents viewstamped replication, a new algorithm for the implementation of highly available computer services that continue to be usable in spite of node crashes and network partitions.  Our goal is to design an efficient mechanism that makes it easy for programmers to implement these services without complicating the programming model.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149675</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>FX-87 Performance Measurements: Dataflow Implementation</title>
<link>https://hdl.handle.net/1721.1/149674</link>
<description>FX-87 Performance Measurements: Dataflow Implementation
Hammel, R. Todd; Gifford, David K.
We analyze how much the FX-87 static effect system can improve the execution times of five benchmark programs on a parallel graph interpreter.  Three of our benchmark programs do not use side-effects (factorial, fibonacci, and polynomial division) an
</description>
<pubDate>Thu, 01 Sep 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149674</guid>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Composing Data &amp; Process Descriptions in the Design of Software Systems</title>
<link>https://hdl.handle.net/1721.1/149673</link>
<description>Composing Data &amp; Process Descriptions in the Design of Software Systems
Jackson, Daniel
Two paradigms are dominant in software development, the data paradigm and the process paradigm.  Our contention is that relying exclusively on either is counter-productive.  In the data paradigm, a system is specified as operations acting on states. The process paradigm focuses on sequences of events.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149673</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Qualitative Analysis of Ordinary Differential Equations Using Piecewise Linear Approximations</title>
<link>https://hdl.handle.net/1721.1/149672</link>
<description>Automatic Qualitative Analysis of Ordinary Differential Equations Using Piecewise Linear Approximations
Sacks, Elisha Peretz
This thesis explores automating the qualitative analysis of physical systems.  Scientists and engineers model many physical systems with ordinary differential equations.  They deduce the behavior of the system by analyzing the equations.  Most realistic models are nonlinear, hence difficult or impossible to solve explicitly.
</description>
<pubDate>Tue, 01 Mar 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149672</guid>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A High-level Signal Processing Programming Language</title>
<link>https://hdl.handle.net/1721.1/149671</link>
<description>A High-level Signal Processing Programming Language
Hicks, James Edward, Jr.
The motivations for an abstract, diagrammatic signal processing language are presented along with a study of the semantics that such a language should have.  D-PICT, the proposed Digital Signal Processing Pictorial Language, is thoroughly described. D-PICT has a diagrammatic representation with a corresponding textual representation.
</description>
<pubDate>Tue, 01 Mar 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149671</guid>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diversity-based Inference of Finite Automata</title>
<link>https://hdl.handle.net/1721.1/149670</link>
<description>Diversity-based Inference of Finite Automata
Schapire, Robert Elias
We present a new procedure for inferring the structure of a finite-state automaton (FSA) from its input/output behavior, using access to the automaton to perform experiments.  Our procedure uses a new representation for FSA's, based on the notion of equivalence between tests.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149670</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constructing a Highly-available Location Service for a Distributed Environment</title>
<link>https://hdl.handle.net/1721.1/149669</link>
<description>Constructing a Highly-available Location Service for a Distributed Environment
Hwang, Deborah Jing-Hwa
One possible advantage a distributed system has over a centralized system is the ability to move objects from one node to another.  For example, we may want to move an object if the node where it resides is overloaded. This thesis proposes to use a location service to aid in finding objects that move.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149669</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Technique for Constructing Highly-Available Services</title>
<link>https://hdl.handle.net/1721.1/149668</link>
<description>A Technique for Constructing Highly-Available Services
Ladin, Rivka; Liskov, Barbara; Shrira, Liuba
This paper describes a general method for constructing a highly available service for use in a distributed system.  It gives a specific implementation of the method and proves the implementation correct.  The service consists of replicas that reside at several different locations in a network. It presents its clients with a consistent view of its state, but the view may contain old information.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149668</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CALVIN: A Rule Based Expert System for Improving Arrhythmia Detector Performance During Noisy ECGs</title>
<link>https://hdl.handle.net/1721.1/149667</link>
<description>CALVIN: A Rule Based Expert System for Improving Arrhythmia Detector Performance During Noisy ECGs
Muldrow, Warren K.
Human experts far outperform automated arrhythmia detectors in analyzing ECG data corrupted by noise and artifact.  Humans make use of considerable a priori knowledge about cardiac electrophysiology and knowledge acquired from the specific ECG under analysis. R-R interval, coupling intervals of ectopic beats, and commonly occurring beat patterns observed during noise-free ECG segments form a knowledge base which is used in accurately detecting and classifying true QRS complexes in the presence of severe noise.
</description>
<pubDate>Tue, 01 Sep 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149667</guid>
<dc:date>1987-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Framework for Incorporating Abstraction Mechanisms into the Logic Programming Paradigm</title>
<link>https://hdl.handle.net/1721.1/149666</link>
<description>A Framework for Incorporating Abstraction Mechanisms into the Logic Programming Paradigm
Zachary, Joseph Lawrence
To help make logic programming more suitable for writing large systems, we develop linguistic mechanisms that permit the organization of logic programs around abstractions.  In particular, we present the design of Danali, an equational logic programming language that supports predicate and data abstraction.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149666</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rule Based Analysis of Computer Security</title>
<link>https://hdl.handle.net/1721.1/149665</link>
<description>Rule Based Analysis of Computer Security
Baldwin, Robert W.
Computers are rarely as secure as they could be.  Users are lax or inconsistent in the way they configure a computer's protection system, and these user mistakes often lead to serious security holes.  For example, a privileged user might accidentally make his login initialization file publicly writable and that mistake could allow ordinary users to acquire super-user privileges.
</description>
<pubDate>Tue, 01 Mar 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149665</guid>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Argus Reference Manual</title>
<link>https://hdl.handle.net/1721.1/149664</link>
<description>Argus Reference Manual
Liskov, Barbara; Day, M.; Herlihy, M.; Johnson, P.; Leavens, G.
Argus is an experimental language/system designed to support the construction and execution of distributed programs.  Argus is intended to support only a subset of the applications that could benefit from being implemented by a distributed program. Two properties distinguish these applications: they make use of on-line data that must remain consistent in spite of concurrency and hardware failures, and they provide services under real-time constraints that are not severe.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149664</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Walter User's Manual (Version 1.0)</title>
<link>https://hdl.handle.net/1721.1/149663</link>
<description>Walter User's Manual (Version 1.0)
Gifford, David K.; Cote, Robert G.; Segal, David A.
Walter is a UNIX program that provides access to databases located at MIT via the DARPA Internet.  The databases provided by Walter include the full-text of the New York Times for the past 90 days.  A sophisticated full-text query language is provided, and Walter uses a query routing algorithm to direct requests to the proper database server at MIT.
</description>
<pubDate>Tue, 01 Sep 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149663</guid>
<dc:date>1987-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Clipping Service User's Manual (Version 1.2)</title>
<link>https://hdl.handle.net/1721.1/149662</link>
<description>Clipping Service User's Manual (Version 1.2)
Gifford, David K.; Cote, Robert G.; Segal, David A.
The Clipping Service is a program that will send selected stories from the New York Times and other information sources to you via electronic mail.  In order to use the Clipping Service, you first describe your interests to the Clipping Service in a simple full-text query language, and then mail this interest profile to the DARPA Internet mail address clip@db.lcs.mit.edu.
</description>
<pubDate>Tue, 01 Sep 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149662</guid>
<dc:date>1987-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boston Community Information System - 1986 Experimental Test Results</title>
<link>https://hdl.handle.net/1721.1/149661</link>
<description>Boston Community Information System - 1986 Experimental Test Results
Gifford, David K.; Heitmann, Dawn; Segal, David A.; Cote, Robert G.; Tanacea, Kendra; Burmaster, David E.
This report describes the first year of an experimental test of the Boston Community Information System (Boston CommInS).  The experiment implements new ideas of data communication and database design in the transmission and reception of data.  The system offers the Associated Press and New York Times to participants and is provided in exchange for their monthly feedback.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149661</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>KOLA: Knowledge Organization Language</title>
<link>https://hdl.handle.net/1721.1/149660</link>
<description>KOLA: Knowledge Organization Language
Jang, Yeona
The focus of this research is on a representation of knowledge that builds the structure of a domain into the computational model for efficient retrieval and reasoning.  With this desideratum in mind, a concept-based knowledge representation system, KOLA (Knowledge Organization LAnguage), is described.
</description>
<pubDate>Sat, 01 Oct 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149660</guid>
<dc:date>1988-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication Patterns in a Symbolic Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/149659</link>
<description>Communication Patterns in a Symbolic Multiprocessor
Nuth, Peter Robert
An important design decision for large scale multiprocessors is the balance of processor power to communication network bandwidth.  In order to evaluate different design alternatives, it is necessary to be able to predict the load imposed on the network by a programming model. This thesis quantifies that communication load for a model of parallel symbolic computing using the Multilisp language.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149659</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Programming on Graphs with Bounded Treewidth</title>
<link>https://hdl.handle.net/1721.1/149658</link>
<description>Dynamic Programming on Graphs with Bounded Treewidth
Bodlaender, Hans L.
In this paper we study the complexity of graph decision problems, restricted to the class of graphs with treewidth at most k (or, equivalently, the class of partial k-trees), for fixed k.  We introduce two classes of graph decision problems, LCC and ECC, and subclasses C-LCC and C-ECC. We show that each problem in LCC (or C-LCC) is solvable in polynomial (O(n^c)) time when restricted to graphs with fixed upper bounds on the treewidth and degree, and that each problem in ECC (or C-ECC) is solvable in polynomial (O(n^c)) time when restricted to graphs with a fixed upper bound on the treewidth (with given corresponding tree-decomposition).
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149658</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of Self-timed VLSI Circuits from Graph-theoretic Specifications</title>
<link>https://hdl.handle.net/1721.1/149657</link>
<description>Synthesis of Self-timed VLSI Circuits from Graph-theoretic Specifications
Chu, Tam-Anh
This thesis presents an approach for direct and efficient synthesis of self-timed (asynchronous) control circuits from formal specifications called Signal Transition Graphs (STGs).  Control circuits synthesized from this graph model are speed-independent and capable of performing concurrent operation.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149657</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>MAM: A Semi-automatic Debugging Tool for Distributed Programs</title>
<link>https://hdl.handle.net/1721.1/149656</link>
<description>MAM: A Semi-automatic Debugging Tool for Distributed Programs
Kolodney, Lawrence Kenneth
Traditional debuggers, designed to examine single process serial programs, do not provide sufficient functionality for efficient debugging of distributed programs.  There are a number of fundamental differences in the way in which a programmer understands the execution of a distributed program, and a debugger must present data to its user in light of that fact.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149656</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Methods for Calculating Maximum Entropy Distributions</title>
<link>https://hdl.handle.net/1721.1/149655</link>
<description>Efficient Methods for Calculating Maximum Entropy Distributions
Goldman, Sally A.
We present a new algorithm for computing the maximum entropy probability distribution satisfying a set of constraints.  Unlike previous approaches, our method is integrated with the planning of data collection and tabulation.  We show how adding constraints and performing the associated additional tabulations can substantially speed up computation by replacing the usual iterative techniques with a straightforward computation.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149655</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Replication in Nested Transaction Systems</title>
<link>https://hdl.handle.net/1721.1/149654</link>
<description>Data Replication in Nested Transaction Systems
Goldman, Kenneth J.
Gifford's basic Quorum Consensus algorithm for data replication is generalized to accommodate nested transactions and transaction failures (aborts).  A formal description of the generalized algorithm is presented using the new Lynch-Merritt input-output automaton model for nested transaction systems.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149654</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Temporal Reasoning in Medical Expert Systems</title>
<link>https://hdl.handle.net/1721.1/149653</link>
<description>Temporal Reasoning in Medical Expert Systems
Kohane, Isaac S.
Diseases develop and change over time.  Much of the distinction between pathophysiological complexes rests on the temporal relations of their component events.  Therefore, knowledge bases that fail to capture the temporal component of the course of disease omit useful diagnostic knowledge.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149653</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Expert System for Diagnosing Gait for Cerebral Palsy Patients</title>
<link>https://hdl.handle.net/1721.1/149652</link>
<description>An Expert System for Diagnosing Gait for Cerebral Palsy Patients
Hirsch, David Edward
Many first generation expert systems in medicine assumed that a single fault was the cause of the patient's problems.  However, this is not always so and in the domain of gait analysis this is usually not the case.  This work looks at an expert system for performing gait analysis on cerebral palsy patients. The system is able to handle cases where there are many interacting faults causing the patient's gait deviations.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149652</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Correctness Proofs for Distributed Algorithms</title>
<link>https://hdl.handle.net/1721.1/149651</link>
<description>Hierarchical Correctness Proofs for Distributed Algorithms
Lynch, Nancy A.; Tuttle, Mark S.
This thesis introduces a new model for distributed computation in asynchronous networks, the input-output automaton.  This simple, powerful model captures in a novel way the game-theoretical interaction between a system and its environment, and allows fundamental properties of distributed computation such as fair computation to be naturally expressed.
</description>
<pubDate>Wed, 01 Apr 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149651</guid>
<dc:date>1987-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simulation Environment for Schema</title>
<link>https://hdl.handle.net/1721.1/149650</link>
<description>A Simulation Environment for Schema
St.Pierre, Margaret Ann
In present day circuit design, many independent simulation tools are available for analyzing circuits at various levels of detail.  This thesis presents a framework to tie these tools into the Simulation Environment in Schema, an integrated CAD system. The framework easily incorporates additional simulators, serves as a foundation upon which to build new analysis tools, and provides the ability for mixed-mode simulation.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149650</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Flow Computer Architecture Final Report</title>
<link>https://hdl.handle.net/1721.1/149649</link>
<description>Data Flow Computer Architecture Final Report
Dennis, Jack B.
This report covers the work done by the Computation Structures Group of the MIT Laboratory for Computer Science on developing models, languages, and architectures for data flow computation from 1966 to the end of 1985. The work was supported by research grants and contracts from NSF, the University of California, DOE, NASA, and DARPA having periods of support as follows: Advanced Research Projects Agency (DARPA).
</description>
<pubDate>Thu, 01 Oct 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149649</guid>
<dc:date>1987-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Pipe and Procedures for Efficient Distributed Communication</title>
<link>https://hdl.handle.net/1721.1/149648</link>
<description>Remote Pipe and Procedures for Efficient Distributed Communication
Gifford, D.
A new communication model for distributed systems is described that combines the advantages of remote procedure call with  the efficient transfer of bulk data. Three ideas form the basis of this model. First, remote procedures are first-class values which can be freely exchanged among nodes, thus enabling a greater variety of protocols to be directly implemented in a remote procedure call framework.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149648</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Congestion Control in Routing Networks</title>
<link>https://hdl.handle.net/1721.1/149647</link>
<description>Congestion Control in Routing Networks
Chien, Andrew Andai
Multistage routing networks present an attractive cost-effective method of interconnection for medium to large scale multiprocessors.  Recent results concerning performance degradation in the presence of "hot spots" have raised serious questions about the robustness of previous performance estimates for these routing networks. Research to date has focused on a limited class of hot spots: those in which all the hot spot traffic is destined for the same memory address.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149647</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Logic Simulation of a Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/149646</link>
<description>Logic Simulation of a Multiprocessor
Bradley, Elizabeth
The performance of circuit simulators running on SISD computers is fundamentally limited by the von Neumann bottleneck.  Multiprocessors do not share this limitation.  The task of solving the equations for the many parallel signal paths found in most circuits lends itself readily to concurrent computation. For both of these reasons, parallel processing is a highly promising approach to circuit simulation. This thesis explores several facets of this problem.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149646</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Notion of Security for Probabilistic Public-key Cryptosystems</title>
<link>https://hdl.handle.net/1721.1/149645</link>
<description>The Notion of Security for Probabilistic Public-key Cryptosystems
Sloan, Robert Hal
The purpose of a cryptosystem is to allow people to communicate securely over an open channel.  Before one can discuss whether a cryptosystem meets this goal, however, one must first rigorously define what is meant by security.  Three very different formal definitions of security for public-key cryptosystems have been proposed: two by Goldwasser and Micali and one by Yao. In this thesis, it is shown that the three definitions are essentially equivalent.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149645</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>MACE: A Multiprocessing Approach to Circuit Extraction</title>
<link>https://hdl.handle.net/1721.1/149644</link>
<description>MACE: A Multiprocessing Approach to Circuit Extraction
Levitin, Samuel M.; Terman, Christopher J.; Slater, Kenneth H.
The ever-increasing complexity of VLSI chips threatens to choke out all available computer power unless methods are devised to keep the CAD tasks conveniently sized.  A review of the current methods of multiprocessing approaches in the domain of layout verification precedes the discussion of current work.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149644</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Long Atomic Computations</title>
<link>https://hdl.handle.net/1721.1/149643</link>
<description>Long Atomic Computations
Ng, Pui
Distributed computing systems are becoming commonplace and offer interesting opportunities for new applications.  In a practical system, the problems of synchronizing concurrent computations and recovering from failures must be dealt with effectively.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149643</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Replication and Reconfiguration in a Distributed Mail Repository</title>
<link>https://hdl.handle.net/1721.1/149642</link>
<description>Replication and Reconfiguration in a Distributed Mail Repository
Day, Mark S.
Conventional approaches to programming produce centralized programs that run on a single computer.  However, an unconventional approach can take advantage of low-cost communication and small, inexpensive computers.  A distributed program provides service through programs executing at several nodes of a distributed system.
</description>
<pubDate>Wed, 01 Apr 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149642</guid>
<dc:date>1987-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Graph Algorithms for Sequential and Parallel Computers</title>
<link>https://hdl.handle.net/1721.1/149641</link>
<description>Efficient Graph Algorithms for Sequential and Parallel Computers
Goldberg, Andrew V.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149641</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boston Community Information System User's Manual</title>
<link>https://hdl.handle.net/1721.1/149640</link>
<description>Boston Community Information System User's Manual
Segal, David A.; Gifford, David K.; Lucassen, John M.; Henderson, James B.; Berlin, Stephen T.; Burmaster, David E.
The Boston Community Information System turns your computer into a personal information assistant that monitors the news as it happens.  This experiment, CommInS, tests a new way of distributing world news and features from the New York Times and the Associated Press wire service directly to personal computers via radio waves.
</description>
<pubDate>Thu, 01 Oct 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149640</guid>
<dc:date>1987-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compaction with Automatic Job Introduction</title>
<link>https://hdl.handle.net/1721.1/149639</link>
<description>Compaction with Automatic Job Introduction
Maley, F. Miller
This thesis presents an algorithm for one-dimensional compaction of VLSI layouts.  It differs from older methods in treating wires not as objects to be moved, but as constraints on the positions of other circuit components.  These constraints are determined for each wiring layer using the theory of planar routing.
</description>
<pubDate>Sat, 01 Nov 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149639</guid>
<dc:date>1986-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Compiler for the MIT Tagged-token Dataflow Architecture</title>
<link>https://hdl.handle.net/1721.1/149638</link>
<description>A Compiler for the MIT Tagged-token Dataflow Architecture
Traub, Kenneth R.
Compilation of the programming language Id Nouveau into machine code for the MIT tagged-token dataflow architecture is thoroughly described.  Id Nouveau is a higher-order functional language augmented with a novel data structure facility known as I-Structures. The tagged-token dataflow  architecture is a dataflow computer of the dynamic variety.
</description>
<pubDate>Fri, 01 Aug 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149638</guid>
<dc:date>1986-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming Simultaneous Action Using Common Knowledge</title>
<link>https://hdl.handle.net/1721.1/149637</link>
<description>Programming Simultaneous Action Using Common Knowledge
Moses, Yoram; Tuttle, Mark R.
This work applies the theory of knowledge in distributed systems to the design of efficient fault-tolerant protocols.  We define a large class of problems requiring coordinated, simultaneous action in synchronous systems, and give a method of transforming specifications of such problems into protocols that are optimal in all runs: for every possible input to the system and faulty processor behavior, these protocols are guaranteed to perform the simultaneous actions as soon as any other protocol could possibly perform them.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149637</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The X Window System</title>
<link>https://hdl.handle.net/1721.1/149636</link>
<description>The X Window System
Scheifler, Robert W.; Gettys, Jim
An overview of the X Window System is presented, focusing on the system substrate and the low-level facilities provided to build applications and to manage the desktop.  The system provides high-performance, high-level, device-independent graphics.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149636</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Introduction to the Theory of Nested Transactions</title>
<link>https://hdl.handle.net/1721.1/149635</link>
<description>Introduction to the Theory of Nested Transactions
Lynch, Nancy A.; Merritt, Michael
A new formal model is presented for studying concurrency and resiliency properties for nested transactions.  The model is used to state and prove correctness of a well-known locking algorithm.
</description>
<pubDate>Tue, 01 Jul 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149635</guid>
<dc:date>1986-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Clock Distribution Systems of the Multiprocessor Emulation Facility</title>
<link>https://hdl.handle.net/1721.1/149634</link>
<description>The Clock Distribution Systems of the Multiprocessor Emulation Facility
Younis, Saed G.
Consisting of 32 high-speed processors, the Multiprocessor Emulation Facility communicates data between its processors through the use of synchronous, high-bandwidth packet switches residing on the ports of every processor.  Because of the synchronous nature of these packet switches, there was a need to design a clock distribution system that can distribute a clock signal to the 32 ports with as little clock skew as possible.
</description>
<pubDate>Sun, 01 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149634</guid>
<dc:date>1986-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>ID World: An Environment for the Development of Dataflow Programs Written in ID</title>
<link>https://hdl.handle.net/1721.1/149633</link>
<description>ID World: An Environment for the Development of Dataflow Programs Written in ID
Morais, Dinarte R.
ID WORLD integrates a compiler, an interpreter, a debugger, and an editor mode to create an environment for the development of dataflow programs written in ID.  It replaces the Tagged-Token Dataflow Architecture (TTDA) Emulator as the foundation for the Multiprocessor Emulation Facility at the Laboratory for Computer Science, M.I.T.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149633</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Correctness Conditions for Highly Available Replicated Databases</title>
<link>https://hdl.handle.net/1721.1/149632</link>
<description>Correctness Conditions for Highly Available Replicated Databases
Lynch, Nancy A.; Blaustein, Barbara; Siegel, Michael
Correctness conditions are given which describe some of the properties exhibited by highly available distributed database systems such as the SHARD (System for Highly Available Replicated Data) system currently being developed at Computer Corporation of America. This system allows a database application to continue operation in the face of communication failures, including network partitions.
</description>
<pubDate>Sun, 01 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149632</guid>
<dc:date>1986-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Parallelism in VLSI CAD</title>
<link>https://hdl.handle.net/1721.1/149631</link>
<description>Exploiting Parallelism in VLSI CAD
Marantz, Joshua David
In the domain of computer science, particularly VLSI CAD, an increasing amount of engineering time is spent running compute-bound programs.  Many of these programs have an intrinsic parallelism that is externally accessible.  This thesis describes a novel software system that uses a small number of independent computers connected by a network to exploit the parallelism inherent in existing software, and thereby, reduce its running time.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149631</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating Applicative Architectures on the Connection Machine</title>
<link>https://hdl.handle.net/1721.1/149630</link>
<description>Simulating Applicative Architectures on the Connection Machine
Kuszmaul, Bradley C.
The Connection Machine (CM) is a highly parallel single instruction multiple data (SIMD) computer, which has been described as "a huge piece of hardware looking for a programming methodology."  Applicative languages, on the other hand, can be described as a programming methodology looking for a parallel computing engine.
</description>
<pubDate>Sun, 01 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149630</guid>
<dc:date>1986-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounded Width Branching Programs</title>
<link>https://hdl.handle.net/1721.1/149629</link>
<description>Bounded Width Branching Programs
Barrington, David A.
We examine the branching program model of computation and in particular the classes of languages which can be recognized when the width of the programs is bounded by a constant.  After slightly revising the framework of definitions to sharpen analogies with other models, we prove that width 5 polynomial size branching programs can recognize exactly the parallel complexity class NC1, refuting a conjecture of Borodin et al. in [BDFP83].
</description>
<pubDate>Sun, 01 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149629</guid>
<dc:date>1986-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligent Physiologic Modeling</title>
<link>https://hdl.handle.net/1721.1/149628</link>
<description>Intelligent Physiologic Modeling
Kunstaetter, Robert
This thesis describes the design and implementation of a knowledge-based physiologic modeling system (KBPMS) and a preliminary evaluation of its use as a learning resource within the context of an experimental medical curriculum--the Harvard New Pathway. KBPMS possesses combined numeric and qualitative simulation capabilities and provides explanations of its knowledge and behavior.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149628</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A CATV-Based High-speed Packet-switching Network Design</title>
<link>https://hdl.handle.net/1721.1/149627</link>
<description>A CATV-Based High-speed Packet-switching Network Design
Feldmeier, David Charles
A high-speed packet-switching data network to the home can be built on an existing, unmodified, residential cable television (CATV) system.  This thesis considers the theoretical and practical technical aspects of providing such a service and suggests a possible system design.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149627</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Primitives for Real-time Animation in Three Dimensions</title>
<link>https://hdl.handle.net/1721.1/149626</link>
<description>Primitives for Real-time Animation in Three Dimensions
Chaing, Carol J.
We present a general purpose imaging model which can efficiently produce computer-generated animated scenes.  Displaying sophisticated graphics scenes is a computationally complex operation.  Thus, an efficient imaging model is necessary for producing real-time motion.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149626</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computation Management in a Single Address Space System</title>
<link>https://hdl.handle.net/1721.1/149625</link>
<description>Computation Management in a Single Address Space System
Gibson, James C.
A multiprogramming operating system needs a mechanism to recover from the termination of one of its computations.  Cleaning up, or unlinking a terminated computation from those remaining requires identifying the end of a computation, freeing resources that the computation was using, and shutting down its interfaces with other computations.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149625</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Logical Structure for Functional Languages</title>
<link>https://hdl.handle.net/1721.1/149624</link>
<description>Logical Structure for Functional Languages
Beckerle, Michael J.
Functional programming is frequently advocated as an appropriate programming discipline for parallel processing because of the difficulty of extracting parallelism from programs written in conventional sequential programming languages.  Unfortunately, the use of functional operations often implies excessive copying or unnecessary sequentiality in the access and construction of data structures.
</description>
<pubDate>Sat, 01 Feb 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149624</guid>
<dc:date>1986-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Structure Management in a Data Flow Computer System</title>
<link>https://hdl.handle.net/1721.1/149623</link>
<description>Data Structure Management in a Data Flow Computer System
Guharoy, Bhaskar
VIM is an experimental computer system being developed at MIT to support functional programming.  The execution mechanism of the computer is based on data flow.  This thesis presents mechanisms for managing data structures in this system, and develops a methodology for designing computers based on successive refinement of formal models of the computer.
</description>
<pubDate>Wed, 01 May 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149623</guid>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Evaluation</title>
<link>https://hdl.handle.net/1721.1/149622</link>
<description>Remote Evaluation
Stamos, James William
A new technique for computer-to-computer communication is presented that can increase the generality and performance of distributed systems.  This technique, called Remote Evaluation, lets one computer send another computer a request in the form of a program. A computer that receives such a request executes the program in the request and returns the results to the sending computer.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149622</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Backup and Recovery in a Computer Architecture for Functional</title>
<link>https://hdl.handle.net/1721.1/149621</link>
<description>Data Backup and Recovery in a Computer Architecture for Functional
Jagannathan, Suresh
The Vim computer system, an experimental project under development in the MIT/LCS Computation Structures Group, is intended to examine the efficient implementation of functional languages using the principles of data flow computation.  In this thesis, we examine how to incorporate backup and recovery mechanisms into this system to guarantee that no online information is lost because of hardware malfunction.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149621</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boston Community Information System User Manual (Version 6.0)</title>
<link>https://hdl.handle.net/1721.1/149620</link>
<description>Boston Community Information System User Manual (Version 6.0)
Lucassen, John M.; Gifford, David K.; Berlin, Stephen T.; Burmaster, David E.
The Boston Community Information System turns your computer into a personal information assistant that monitors the news as it happens.  This experiment, CommInS, tests a new way of distributing world news and features from the New York Times and the Associated Press (AP) wire service directly to personal computers via radio waves.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149620</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of Graph Layout and Channel Routing for VLSI</title>
<link>https://hdl.handle.net/1721.1/149619</link>
<description>The Complexity of Graph Layout and Channel Routing for VLSI
Bhatt, Sandeep N.
This thesis is motivated by the need for a clearer understanding of various issues in VLSI layout.  Within a formal setting, we identify critical properties of circuits that determine the quality of their layouts.
</description>
<pubDate>Wed, 01 Feb 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149619</guid>
<dc:date>1984-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Implementation of a Distributed Program for Collaborative Editing</title>
<link>https://hdl.handle.net/1721.1/149618</link>
<description>Design and Implementation of a Distributed Program for Collaborative Editing
Seliger, R.
This thesis presents the design and implementation of a distributed program for the support of multi-author collaboration on shared documents.  The Collaborative Editing System, CES, provides an environment in which authors working on a document can cooperate and coordinate their individual contributions to a single document.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149618</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Playing Well in a Sum of Games</title>
<link>https://hdl.handle.net/1721.1/149617</link>
<description>On Playing Well in a Sum of Games
Yedwab, Laura
Many games are naturally described as a sum of games, e.g., nim and the endgame of Go.  Let G_1, ..., G_n represent n games.  Then a move in the sum G_1 + ... + G_n consists of picking a component game G_i and making a move in G_i.  This thesis analyzes play in a sum of games from three different perspectives: computational complexity, approximate solutions, and optimal search algorithms.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149617</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Relative-motion Microworld</title>
<link>https://hdl.handle.net/1721.1/149616</link>
<description>A Relative-motion Microworld
Morecroft, Linda E.
A relative-motion microworld has been designed to aid high-school students in understanding the concepts of relative motion and frames of reference.  Relative motion and frames of reference are usually introduced in a high-school physics or mathematics course. Most students, and many teachers too, have difficulty understanding the concepts and applying them to solve problems. The traditional approach to relative motion uses vector algebra. However, vector terminology is complex and does not make it easy to build a mental picture of what is happening. Students do not understand what it means to be in a different frame of reference or how moving objects appear within that reference frame. Most people have a much more intuitive approach to motion problems.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149616</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equational Theories and Database Constraints</title>
<link>https://hdl.handle.net/1721.1/149615</link>
<description>Equational Theories and Database Constraints
Cosmadakis, Stavros Stylianos
The implication problem for database constraints is central in the fields of automated schema design and query optimization and has been traditionally approached with resolution-based techniques.  We present a novel approach to database constraints, using equations instead of Horn clauses.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149615</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Generalized Approach to Equational Unification</title>
<link>https://hdl.handle.net/1721.1/149614</link>
<description>A Generalized Approach to Equational Unification
Yelick, Katherine Anne
Given a set of equational axioms and two terms containing function symbols and variables, the equational unification problem is to find a uniform replacement of terms for the variables that makes the terms provably equal from the axioms.  In the variable-only case, the two terms contain only variables and function symbols from the axioms. In the general case, the terms may contain symbols not appearing in the axioms, there may be more than one instance of a set of axioms, and there may be more than one set of axioms.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149614</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Program for Generating and Analyzing Term Rewriting Systems</title>
<link>https://hdl.handle.net/1721.1/149613</link>
<description>A Program for Generating and Analyzing Term Rewriting Systems
Forgaard, Randy
This thesis presents new results in the use of term rewriting systems for automatic theorem proving.  The design and implementation of REVE 2, a computer program that incorporates these results, is described.  In addition, an introduction to the basic theory, procedures, and algorithms of term rewriting is provided, in a manner suitable for non-specialists.
</description>
<pubDate>Sat, 01 Sep 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149613</guid>
<dc:date>1984-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundations of a Theory of Specification for Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/149612</link>
<description>Foundations of a Theory of Specification for Distributed Systems
Stark, Eugene W.
This thesis investigates a particular approach, called state-transition specification, to the problem of describing the behavior of modules in a distributed or concurrent computer system.  A state-transition specification consists of: (1) a state machine, which incorporates the safety or invariance properties of the module, and (2) validity conditions on the computations of the machine, which capture the desired liveness or eventuality properties.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149612</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Routing Networks for Packet Communication Systems</title>
<link>https://hdl.handle.net/1721.1/149611</link>
<description>Routing Networks for Packet Communication Systems
Boughton, George Andrew
This thesis examines the design of geographically centralized high performance packet switched networks called routing networks.  Each of these networks is intended to be used to interconnect the modules of a highly parallel computer system.  The design of such networks is considered in present (1984) technology where only a small number of network nodes can be placed on a single chip and in  VLSI technology where a large number of nodes can be placed on a chip.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149611</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reasoning about Preference Models</title>
<link>https://hdl.handle.net/1721.1/149610</link>
<description>Reasoning about Preference Models
Wellman, Michael Paul
Programs that make decisions need mechanisms for representing and reasoning about the desirability of the possible consequences of their choices.  This work is an exploration of preference models  based on utility theory.  The framework presented is distinguished by a qualitative view of preferences and a knowledge-based approach to the application of utility theory. The design for a comprehensive preference modeler is implemented in part by the Utility Reasoning Package (URP), a collection of  facilities for constructing and analyzing preference models.
</description>
<pubDate>Wed, 01 May 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149610</guid>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generic Software for Emulating Multiprocessor Architectures</title>
<link>https://hdl.handle.net/1721.1/149609</link>
<description>Generic Software for Emulating Multiprocessor Architectures
Soley, Richard Mark
The expense of designing, prototyping, and testing a new computer architecture (particularly non-traditional supercomputer architectures, such as the dataflow machine) is enormous.  The relative inflexibility of hardware to experimental changes increases the need to fully test a new architectural idea.
</description>
<pubDate>Wed, 01 May 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149609</guid>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Problem Solving System for Molecular Genetics</title>
<link>https://hdl.handle.net/1721.1/149608</link>
<description>Towards a Problem Solving System for Molecular Genetics
Koton, Phyllis A.
This paper describes a program called GENEX that reasons about the behavior of bacterial operons.  It is the first step towards a generalized system that will reason about genetic control mechanisms.  The system is easily extensible and able to produce detailed explanations without relying on canned text. Problems in molecular genetics are complicated by uncertainty introduced when reasoning about conformations.
</description>
<pubDate>Wed, 01 May 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149608</guid>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Implications of Complexity Theory on Pseudo-random Bit Generation</title>
<link>https://hdl.handle.net/1721.1/149607</link>
<description>Some Implications of Complexity Theory on Pseudo-random Bit Generation
Trilling, Stephen
A recent area of interest in theoretical computer science has been in the construction of so-called pseudo-random bit generators.  These generators "stretch" a short sequence of truly random bits into a longer sequence of "pseudo-random" bits.  These bits are sufficiently indistinguishable from truly random bits to be useful in deterministic simulation of probabilistic computation.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149607</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synchronizing Clocks in a Distributed System</title>
<link>https://hdl.handle.net/1721.1/149606</link>
<description>Synchronizing Clocks in a Distributed System
Lundelius, Jennifer
Keeping the local times of processes in a distributed system synchronized in the presence of arbitrary faults is important in many applications and is an interesting theoretical problem in its own right.  In order to be practical, any algorithm to synchronize clocks must be able to deal with process failures and repairs, clock drift, and varying message delivery times, but these conditions complicate the design and analysis of algorithms.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149606</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Approach to Functional Office Automation</title>
<link>https://hdl.handle.net/1721.1/149605</link>
<description>An Approach to Functional Office Automation
Zarmer, Craig L.
Current efforts in office automation emphasize developing tools for supporting common, low-level tasks such as word processing and electronic mail.  While they have a wide market, they are not very sophisticated.  At the other end of the spectrum are office-specific systems, designed with complete knowledge of the office's operations. Unfortunately, such systems have a market size of one, and so are not very practical.
</description>
<pubDate>Sun, 01 Apr 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149605</guid>
<dc:date>1984-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Simulation of Digital LSI Circuits</title>
<link>https://hdl.handle.net/1721.1/149604</link>
<description>Parallel Simulation of Digital LSI Circuits
Arnold, Jeffrey M.
Integrated circuit technology has been advancing at a phenomenal rate over the last several years, and promises to continue to do so.  If circuit design is to keep pace with fabrication technology, radically new approaches to computer-aided design will be necessary. One appealing approach is general purpose parallel processing. This thesis explores the issues involved in developing a framework for circuit simulation which exploits the locality exhibited by circuit operation to achieve a high degree of parallelism.
</description>
<pubDate>Fri, 01 Feb 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149604</guid>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resource Management for the Tagged Token Dataflow Architecture</title>
<link>https://hdl.handle.net/1721.1/149603</link>
<description>Resource Management for the Tagged Token Dataflow Architecture
Culler, David E.
The Tagged Token Dataflow Architecture is a multiprocessor based on the U-interpreter model of dataflow computation.  It captures the essential execution mechanism of the U-interpreter precisely; operations are enabled for execution by the availability of operand data. However, computational resources in the model and the machine are viewed quite differently. This thesis addresses four major resource management issues essential to bridge the gap between the U-interpreter and the Tagged Token Dataflow Architecture.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149603</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Name Management</title>
<link>https://hdl.handle.net/1721.1/149602</link>
<description>Distributed Name Management
Sollins, Karen Rosin
The problem being addressed in this research is the design of a naming facility achieving the following goals.  First, two functions on names must be supported: accessing a named object, and acting as a place holder for the named object.  Second, it must be possible to share those names. Third, communication of the names as well as communication by use of the names must be possible. Finally, feasibility of implementation is a goal.
</description>
<pubDate>Fri, 01 Feb 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149602</guid>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Mathematical Reasoning</title>
<link>https://hdl.handle.net/1721.1/149601</link>
<description>Qualitative Mathematical Reasoning
Sacks, Elisha
Qualitative analysis is the study of abstract causal reasoning.  It explores the mechanisms whereby humans analyze complex systems abstractly, while ignoring unimportant and unknown low-level details.  Previous research has focused on qualitative simulation techniques, analogous to numerical simulation, that use local information about a system to predict its short-term behavior. This thesis presents a new, calculus-based type of qualitative analysis, called qualitative mathematical reasoning. It derives functional descriptions of systems and uses them to predict global behavior.
</description>
<pubDate>Thu, 01 Nov 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149601</guid>
<dc:date>1984-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Debugging Distributed Computations in a Nested Atomic Action System</title>
<link>https://hdl.handle.net/1721.1/149600</link>
<description>Debugging Distributed Computations in a Nested Atomic Action System
Chiu, Sheng Yang
Concurrent and distributed programs are hard to debug.  In this thesis, we argue that structuring activities as nested atomic actions can make debugging such programs much like debugging traditional sequential programs.  To support the argument, we present a method for debugging computations in the Argus language and system. Our method is applicable to other action systems since it depends only on the atomicity properties of actions.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149600</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Orphan Detection in the Argus System</title>
<link>https://hdl.handle.net/1721.1/149599</link>
<description>Orphan Detection in the Argus System
Walker, Edward Franklin
In a distributed system, an activity running at one node can request another node to perform some service.  This request results in an activity being created at the latter node to perform the requested service.  The former node may then crash, destroying the activity that requested the service, but leaving behind the activity performing the service. Such surviving activities are known as orphans [Nelson81]. Orphans are undesirable since they waste resources and can view inconsistent data.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149599</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Untyped Lambda Calculus to Compute with Atoms</title>
<link>https://hdl.handle.net/1721.1/149598</link>
<description>Using Untyped Lambda Calculus to Compute with Atoms
Weiss, Paul G.
Axioms and verification rules are given for a typeless λ-calculus with a conditional test for equality between atoms.  A semantic completeness theorem is proved and a deterministic evaluator is proposed.
</description>
<pubDate>Wed, 01 Feb 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149598</guid>
<dc:date>1984-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partial Evaluation as a Means of Language Extensibility</title>
<link>https://hdl.handle.net/1721.1/149597</link>
<description>Partial Evaluation as a Means of Language Extensibility
Schooler, Richard
An optimization technique known as partial evaluation is explored.  A partial evaluator optimizes code by making use of static information about program values.  Our partial evaluator is designed to optimize mainly applicative code.  Unchecked assertions are used to identify applicative constructs in the input code and guide the partial evaluator. Side-effects in the input code are retained but are not optimized.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149597</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Implementation of Applicative Languages</title>
<link>https://hdl.handle.net/1721.1/149596</link>
<description>Efficient Implementation of Applicative Languages
Ackerman, William B.
The analysis of parallelism in an applicative program is much easier than in a program written in a conventional statement-oriented style.  This makes it possible for an optimizing compiler to prepare such a program for extremely efficient execution on a suitable enormously parallel computer. This thesis explores the transformations that must be made to achieve very high performance for numerical programs when executed on a computer that uses data flow principles in its operation.
</description>
<pubDate>Sun, 01 Apr 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149596</guid>
<dc:date>1984-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Routing the Power and Ground Wires on a VLSI Chip</title>
<link>https://hdl.handle.net/1721.1/149595</link>
<description>Routing the Power and Ground Wires on a VLSI Chip
Moulton, Andrew Strout
This thesis presents four new algorithms to route noncrossing power and ground trees in one metal layer of a VLSI chip.  The implementation of the best algorithm forms MIT's Placement-Interconnect (PI) Project's power-ground routing phase.  The input to this power-ground algorithm is a set of rectangular modules on a rectangular chip.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149595</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Type Checking in Vimval</title>
<link>https://hdl.handle.net/1721.1/149594</link>
<description>Type Checking in Vimval
Kuszmaul, Bradley C.
A type system is developed for the revised version of the Val programming language (VimVal) which has the following features: (1) Type inference: allows programs to be written with incomplete type specifications.  The type checker infers the types of expressions from their context. (2) Polymorphism: allows modules to be written which operate on more than one type, performing analogous operations on different types of data. (3) Higher-order functions: functions are first-class data in VimVal. (4) Recursive types: a type may refer to itself.
</description>
<pubDate>Fri, 01 Jun 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149594</guid>
<dc:date>1984-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordinating Pebble Motion on Graphs, The Diameter of Permutation Groups, and Applications</title>
<link>https://hdl.handle.net/1721.1/149593</link>
<description>Coordinating Pebble Motion on Graphs, The Diameter of Permutation Groups, and Applications
Kornhauser, Daniel Martin
The problem of memory management in totally distributed computing systems leads to the following movers' problem on graphs:  Let G be a graph with n vertices and k &lt; n pebbles numbered 1, ..., k on distinct vertices.  A move consists of transferring a pebble to an adjacent unoccupied vertex. The problem is to decide whether one arrangement of the pebbles is reachable from another, and to find the shortest sequence of moves achieving the rearrangement when it is possible.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149593</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Replication Methods for Abstract Data Types</title>
<link>https://hdl.handle.net/1721.1/149592</link>
<description>Replication Methods for Abstract Data Types
Herlihy, Maurice Peter
Replication can enhance the availability of data in a distributed system.  This thesis introduces a new method for managing replicated data.  We propose new techniques to address four problems associated with replication: (i) the representation and manipulation of replicated data, (iii) on-the-fly reconfiguration, and (iv) enhancing availability in the presence of partitions.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149592</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Constraint Representation and Explanation Facility for Renal Physiology</title>
<link>https://hdl.handle.net/1721.1/149591</link>
<description>A Constraint Representation and Explanation Facility for Renal Physiology
Asbell, Irwin
Current research in Artificial Intelligence has yielded computer programs which have the potential to augment the physician's ability to diagnose illness.  The medical diagnosis programs of the first generation contain medical facts representing associations between diseases and findings. A most important step is the development of computer programs that have models of physiological processes and can derive physiological justifications of observed signs and symptoms.
</description>
<pubDate>Fri, 01 Jun 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149591</guid>
<dc:date>1984-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Abstract Architecture for Parallel Graph Reduction</title>
<link>https://hdl.handle.net/1721.1/149590</link>
<description>An Abstract Architecture for Parallel Graph Reduction
Traub, Kenneth R.
An implementation technique for functional languages that has received recent attention is graph reduction, which offers opportunity for the exploitation of parallelism by multiple processors.  While several proposals for parallel graph reduction machines have been made, differing terminology and approaches make these proposals difficult to compare.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149590</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extending Binary Byzantine Agreement to Multivalued Byzantine Agreement</title>
<link>https://hdl.handle.net/1721.1/149589</link>
<description>Extending Binary Byzantine Agreement to Multivalued Byzantine Agreement
Turpin, Russell; Coan, Brian A.
A binary Byzantine agreement algorithm can be extended to produce a multivalued Byzantine agreement algorithm.  The resulting multivalued algorithm is cheaper than previously published algorithms when the cost of transmitting values from the multival
</description>
<pubDate>Sun, 01 Apr 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149589</guid>
<dc:date>1984-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specification and Implementation of Atomic Data Types</title>
<link>https://hdl.handle.net/1721.1/149588</link>
<description>Specification and Implementation of Atomic Data Types
Weihl, William Edward
Maintaining the consistency of long-lived, on-line data is a difficult task, particularly in a distributed system.  This dissertation focuses on atomicity as a fundamental organizational concept for such systems.  It explores an approach in which
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149588</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design and Implementation of an Online Directory Assistance System</title>
<link>https://hdl.handle.net/1721.1/149587</link>
<description>The Design and Implementation of an Online Directory Assistance System
Koile, Kimberle
This thesis describes the design and implementation of an online directory assistance system called DIRSYS that was modeled after the white pages of a paper telephone book and a full-screen display editor such as Emacs.  As the user begins typing a n
</description>
<pubDate>Thu, 01 Dec 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149587</guid>
<dc:date>1983-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cohesion in Computer Text Generation: Lexical Substitution</title>
<link>https://hdl.handle.net/1721.1/149586</link>
<description>Cohesion in Computer Text Generation: Lexical Substitution
Granville, Robert Alan
This report describes Paul, a computer text generation system designed to create cohesive text.  The device used to achieve this cohesion is lexical substitution.  Through the use of syntactic and semantic information, the system is able to determine which type of lexical substitution will provide the necessary information to generate an understandable reference, while not providing so much information that the reference is confusing or unnatural.
</description>
<pubDate>Sun, 01 May 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149586</guid>
<dc:date>1983-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Formal Model of Non-Determinate Dataflow Computation</title>
<link>https://hdl.handle.net/1721.1/149585</link>
<description>A Formal Model of Non-Determinate Dataflow Computation
Brock, Jarvis Dean
Almost ten years ago, Gilles Kahn used the fixed point theory of Dana Scott to define a formal and elegant model of computation for determinate dataflow graphs, networks of determinate processes communicating asynchronously through unbounded channels.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149585</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliable Object Storage to Support Atomic Actions</title>
<link>https://hdl.handle.net/1721.1/149584</link>
<description>Reliable Object Storage to Support Atomic Actions
Oki, Brian Masao
To preserve the consistency of on-line, long-lived, distributed data in the presence of concurrency and in the event of hardware failures, it is necessary to ensure atomicity and data resiliency in applications.  The programming language Argus is designed to support such applications. This thesis investigates the mechanism needed to support the notion of data resiliency present in Argus.
</description>
<pubDate>Sun, 01 May 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149584</guid>
<dc:date>1983-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preliminary Report on the Larch Shared Language*</title>
<link>https://hdl.handle.net/1721.1/149583</link>
<description>Preliminary Report on the Larch Shared Language*
Guttag, John V.; Horning, J.J.
Each member of the Larch family of formal specification languages has a component derived from a programming language and another component common to all programming languages.  We call the former interface languages, and the latter the Larch Shared Language.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149583</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>PADL - A Packet Architecture Description Language: A Preliminary Reference Manual</title>
<link>https://hdl.handle.net/1721.1/149582</link>
<description>PADL - A Packet Architecture Description Language: A Preliminary Reference Manual
Leung, Clement Kin Cho; William Y-P.
PADL is a hardware description language for specifying the behavior and structure of packet communication systems.  In such systems, hardware units called modules communicate by sending and receiving packets.  The behavior of such a system can be specified by providing the algorithm it executes and the data structures it manipulates. On the other hand, the structure of a system is specified by giving the components of the system and their interconnection.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149582</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Framework for Solving VLSI Graph Layout Problems</title>
<link>https://hdl.handle.net/1721.1/149581</link>
<description>A Framework for Solving VLSI Graph Layout Problems
Bhatt, Sandeep N.; Leighton, Frank Thomson
This paper introduces a new divide-and-conquer framework for VLSI graph layout.  Universally close upper and lower bounds are obtained for important cost functions such as layout area and propagation delay.  The framework is also effectively used to design regular and configurable layouts, to assemble large networks of processors using restructurable chips, and to configure networks around faulty processors.  It is also shown how good graph partitioning heuristics may be used to develop a provably good layout strategy.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149581</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulation Tools for Digital LSI Design</title>
<link>https://hdl.handle.net/1721.1/149580</link>
<description>Simulation Tools for Digital LSI Design
Terman, Christopher J.
This thesis proposes a timing simulator (RSIM) based on a uniquely simple transistor model.  RSIM allows a designer to determine both the functional and approximate timing characteristics of a MOS network with more accuracy than gate-level simulation, and using larger circuits than are accommodated by circuit analysis programs. In RSIM, transistors are modeled as resistors; the logic states of a transistor's terminal nodes determine its effective resistance.
</description>
<pubDate>Thu, 01 Sep 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149580</guid>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Module Replacement in a Distributed Programming System</title>
<link>https://hdl.handle.net/1721.1/149579</link>
<description>Dynamic Module Replacement in a Distributed Programming System
Bloom, Toby
The replacement of parts of software systems is an important aspect of programming methodology.  Most of the research in this area has centered around support for modular construction and the clear separation of interface from implementation.  The emphasis has been on producing easily modified static program structures.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149579</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Multiprocessor Emulation Facility</title>
<link>https://hdl.handle.net/1721.1/149578</link>
<description>A Multiprocessor Emulation Facility
Arvind; Dertouzos, Michael L.; Iannucci, Robert A.
Interest in multiprocessor computer architectures has increased dramatically in the last ten years.  However, it has become clear that, in order to effectively use multiprocessors in a general way, some fundamental changes in the model of computation are necessary. Moreover, experimentation in the field is hindered by low-performance simulation tools and high-cost hardware modeling schemes.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149578</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creating a Computer-based Learning Environment for Physically Handicapped Children</title>
<link>https://hdl.handle.net/1721.1/149577</link>
<description>Creating a Computer-based Learning Environment for Physically Handicapped Children
Valente, Jose Armando
The objective of this research is to develop a computer-based learning environment for children physically handicapped by cerebral palsy and to study several issues related to the use of this environment for diagnostic, educational, and remedial purposes. The study is motivated by the desire to better understand the intellectual and motoric deficiencies of cerebral palsied children and to use this information in the development of teaching methods to accommodate each child's particular needs.
</description>
<pubDate>Thu, 01 Sep 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149577</guid>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Argument for Soft Layering of Protocols</title>
<link>https://hdl.handle.net/1721.1/149576</link>
<description>An Argument for Soft Layering of Protocols
Cooper, Geoffrey Howard
This thesis is about the efficiency of protocol layering.  It examines the technique of protocol layering in an abstract way and finds two major sources of inefficiency in protocol implementations which are caused by the imposition on them of a layered structure. The conventional approaches to making layered protocol implementations run efficiently --- that is, to avoiding the sources of inefficiency discussed herein --- are all independent of the protocol specification, and thus all decrease the value of the protocol specification as a guide for implementing protocols.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149576</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Two-tiered Approach to Specifying Programs</title>
<link>https://hdl.handle.net/1721.1/149575</link>
<description>A Two-tiered Approach to Specifying Programs
Wing, Jeannette Marie
Current research in specifications is beginning to emphasize the practical use of formal specifications in program design.  This thesis presents a specification approach, a specification language that supports that approach, and some ways to evaluate specifications written in that language.
</description>
<pubDate>Sun, 01 May 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149575</guid>
<dc:date>1983-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Video Games and Computer Aided Instruction</title>
<link>https://hdl.handle.net/1721.1/149574</link>
<description>Video Games and Computer Aided Instruction
Krugler, Ken
This document will briefly outline the evolution of video games, discuss current video game theory, and describe a program to teach typing on the IBM Personal Computer.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149574</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental Design Problems of Distributed Systems for the Hard-real-time Environment</title>
<link>https://hdl.handle.net/1721.1/149573</link>
<description>Fundamental Design Problems of Distributed Systems for the Hard-real-time Environment
Mok, Aloysius Ka-Lau
Software designed to function in a hard-real-time environment where strict timing constraints must be met often entails implicit assumptions about a programming language and the underlying system which supports it.  Programs which are logically correct, i.e., implement the intended algorithms, may not function correctly if their assumed timing characteristics are not met.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149573</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The MDL Programming Environment</title>
<link>https://hdl.handle.net/1721.1/149572</link>
<description>The MDL Programming Environment
Lebling, P. David
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149572</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The MDL Programming Language</title>
<link>https://hdl.handle.net/1721.1/149571</link>
<description>The MDL Programming Language
Galley, S.W.; Pfister, Greg
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149571</guid>
</item>
<item>
<title>The MDL Programming Language Primer</title>
<link>https://hdl.handle.net/1721.1/149570</link>
<description>The MDL Programming Language Primer
Dornbrook, Michael; Blank, Marc
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149570</guid>
</item>
<item>
<title>The Impact of Layer Assignment Methods on Layout Algorithms for Integrated Circuits</title>
<link>https://hdl.handle.net/1721.1/149569</link>
<description>The Impact of Layer Assignment Methods on Layout Algorithms for Integrated Circuits
Pinter, Ron Yair
Programs for integrated circuit layout at the module assembly level are typically decomposed into two phases - placement and routing.  In this thesis we investigate a third phase which is often implicitly assumed - layer assignment.  This thesis studies how layer assignment methodologies interact with placement and routing.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149569</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Office Analysis and Diagnosis Methodology</title>
<link>https://hdl.handle.net/1721.1/149568</link>
<description>An Office Analysis and Diagnosis Methodology
Sutherland, Juliet
With the advent of computer technology designed for use in the office, office analysis, or the process of understanding office work for the purposes of introducing technology, has become increasingly important.  The Office Analysis and Diagnosis Methodology (OADM) is a tool to help the analyst gather the data required to decide how, and whether, to introduce office automation technology into a particular office.  OADM is best suited for studying semi-structured offices, rather than pure processing operations or special projects. OADM is used to perform a detailed study of a single office and is not designed for use in determining the general automation needs of a large organization.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149568</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Office Analysis: Methodology and Case Studies</title>
<link>https://hdl.handle.net/1721.1/149567</link>
<description>Office Analysis: Methodology and Case Studies
Sirbu, Marvin A., Jr.; Schoichet, Sandor R.; Kunin, Jay S.; Hammer, Michael M.; Sutherland, Juliet B.; Zarmer, Craig L.
The Office Analysis Methodology (OAM) is a structured methodology for understanding the current operations of an office.  OAM provides guidance in interviewing techniques and approaches to establishing a positive atmosphere for possible office automation efforts. It is designed to be easy to learn, so that people with experience in office work but little experience in analysis can easily perform a study.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149567</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Computing Galois Groups and Its Application To Solvability by Radicals</title>
<link>https://hdl.handle.net/1721.1/149566</link>
<description>On Computing Galois Groups and Its Application To Solvability by Radicals
Landau, Susan Eva
This thesis presents a polynomial time algorithm for the basic question of Galois theory, checking the solvability by radicals of a monic irreducible polynomial over the integers.  It also presents polynomial time algorithms for factoring polynomials over algebraic number fields, for computing blocks of imprimitivity of roots of a polynomial under the transitive action of the Galois group on the roots of the polynomial, and for computing intersections of algebraic number fields.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149566</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Bisecting Random Graphs</title>
<link>https://hdl.handle.net/1721.1/149565</link>
<description>On Bisecting Random Graphs
Bui, Thang Nguyen
A bisection of a graph with an even number of vertices is a partition of the vertex set into two disjoint sets of equal size.  Given a bisection, the number of edges having one end in each of the two subsets of the bisection is called the size of the bisection. The bisection size of a graph is the minimum size of all possible bisections of the graph.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149565</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Internal Consistency of a Distributed Transaction System with Orphan Detection</title>
<link>https://hdl.handle.net/1721.1/149564</link>
<description>Internal Consistency of a Distributed Transaction System with Orphan Detection
Goree, John A., Jr.
This thesis defines a property called "view-serializability", which formalizes internal consistency for a system of nested atomic transactions.  Internal consistency is a stronger condition than the usual notion of data base consistency, because it takes into account the views of transactions which will never commit. In a distributed system, local aborts of remote subactions and crashes of nodes can generate orphans: active actions which are descendants of actions that have aborted or are guaranteed to abort.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149564</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrency Control for Resilient Nested Transaction</title>
<link>https://hdl.handle.net/1721.1/149563</link>
<description>Concurrency Control for Resilient Nested Transaction
Lynch, Nancy A.
Concurrency control theory is extended to handle nested transactions with failures. The theory is used to present a rigorous correctness proof of a variant of Moss' locking algorithm for implementing nested transactions. The proof has an interesting structure using many levels of abstraction.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149563</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Translating Updates of Relational Database Views</title>
<link>https://hdl.handle.net/1721.1/149562</link>
<description>Translating Updates of Relational Database Views
Cosmadakis, Stavros Stylianos
We study the problem of translating updates of data base views.  We disambiguate a view update by requiring that a specified view complement (i.e., a second view which contains all the data base information omitted from the given view) remain constant during the translation. We study some of the computational problems related to the application of this general methodology in the context of relational databases.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149562</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Comparative Study of Computer-aided Clinical Diagnosis</title>
<link>https://hdl.handle.net/1721.1/149561</link>
<description>A Comparative Study of Computer-aided Clinical Diagnosis
Sherman, Howard Bruce
In recent years many computer systems have been developed to assist in medical decision making.  Two of these systems in particular, INTERNIST and the Present Illness Program (PIP), have been proposed as suitable for performing general medical diagnosis. However, there has been no way of comparing the performance of these two programs since the medical data used by the programs differs extensively.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149561</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impossibility of Distributed Consensus with One Faulty Process</title>
<link>https://hdl.handle.net/1721.1/149560</link>
<description>Impossibility of Distributed Consensus with One Faulty Process
Fischer, Michael J.; Lynch, Nancy A.; Paterson, Michael S.
The consensus problem involves an asynchronous system of processes, some of which may be unreliable.  The problem is for the reliable processes to agree on a binary value.  We show that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the "Byzantine Generals" problem.
</description>
<pubDate>Wed, 01 Sep 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149560</guid>
<dc:date>1982-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multilevel Atomicity: A New Correctness Criterion for Database Concurrency Control</title>
<link>https://hdl.handle.net/1721.1/149559</link>
<description>Multilevel Atomicity: A New Correctness Criterion for Database Concurrency Control
Lynch, Nancy A.
Multilevel atomicity, a new correctness criterion for database concurrency control, is defined.  It weakens the usual notion of serializability by permitting controlled interleaving among transactions.  It appears to be especially suitable for applications in which the set of transactions has a natural hierarchical structure based on the hierarchical structure of an organization.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149559</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Implementation Scheme for Array Operations in Static Data Flow Computers</title>
<link>https://hdl.handle.net/1721.1/149558</link>
<description>An Implementation Scheme for Array Operations in Static Data Flow Computers
Gao, Guang-Rong
The mapping of array operations in VAL programs on a static data flow machine with array memory is studied.  The flow dependency graph is introduced as a model of array operations in VAL programs.  The balancing and optimization of flow dependency graphs are presented. The class of well-behaved VAL programs which can be modeled by flow dependency graphs is specified.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149558</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design of a Multiprocessor Development System</title>
<link>https://hdl.handle.net/1721.1/149557</link>
<description>The Design of a Multiprocessor Development System
Anderson, Thomas Lee
A multiprocessor development system has been designed and a prototype system is being constructed.  The system, known as Concert, is intended to support multiprocessor research efforts at M.I.T.  The motivation for Concert and the project history are summarized briefly. Some intended applications are also identified.
</description>
<pubDate>Wed, 01 Sep 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149557</guid>
<dc:date>1982-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Display Management in an Integrated Office</title>
<link>https://hdl.handle.net/1721.1/149556</link>
<description>Display Management in an Integrated Office
Rosenstein, Larry S.
Advances in technology now make it possible to build office workstations that have a large amount of local computing power and high-resolution output devices.  Such workstations can be used for various office applications, such as document preparation, personal databases, and electronic mail.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149556</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Modeling for Short Channel MOS Circuit Simulation</title>
<link>https://hdl.handle.net/1721.1/149555</link>
<description>Efficient Modeling for Short Channel MOS Circuit Simulation
Johnson, Mark Griffin
Existing circuit models for short-channel MOS transistors represent a compromise between speed and ease of use.  Empirical models are very fast to evaluate, but their parameters must be fitted from experimental measurements.  Theoretical models require longer computation time, but they may be used to predict the performance of new, unmeasured MOS technologies since their parameters are not curve-fitted from experimental data.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149555</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Synthesis of Implementations for Abstract Data Types from Algebraic Specifications</title>
<link>https://hdl.handle.net/1721.1/149554</link>
<description>Automatic Synthesis of Implementations for Abstract Data Types from Algebraic Specifications
Srivas, Mandayam K.
Algebraic specifications have been used extensively to prove properties of abstract data types and to establish the correctness of implementations of data types.  This thesis explores an automatic method of synthesizing implementations for data types from their algebraic specifications.
</description>
<pubDate>Tue, 01 Jun 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149554</guid>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis and Specification of Office Procedures</title>
<link>https://hdl.handle.net/1721.1/149553</link>
<description>Analysis and Specification of Office Procedures
Kunin, Jay S.
Conventional approaches to "office automation" focus on the lowest common denominator of office work: typing, filing, filling in forms, etc.  As a consequence, the process of office systems analysis lacks tools and techniques that address the office in terms of business functions rather than as manipulation of paper artifacts.
</description>
<pubDate>Mon, 01 Feb 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149553</guid>
<dc:date>1982-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layouts for the Shuffle-exchange Graph and Lower Bound Techniques for VLSI</title>
<link>https://hdl.handle.net/1721.1/149552</link>
<description>Layouts for the Shuffle-exchange Graph and Lower Bound Techniques for VLSI
Leighton, Frank Thomson
The thesis is divided into two parts.  In the first part, we describe and analyze several new VLSI layouts for the shuffle-exchange graph.  These include: 1) an asymptotically optimal, Θ(N²/log² N)-area layout for the N-node shuffle-exchange graph, and 2) several practical layouts for small shuffle-exchange graphs.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149552</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Communications via Cable Television Networks: Technical and Policy Considerations</title>
<link>https://hdl.handle.net/1721.1/149551</link>
<description>Data Communications via Cable Television Networks: Technical and Policy Considerations
Estrin, Deborah Lynn
Cable television networks offer peak communication data rates that are orders of magnitude greater than the telephone local loop.  Although one-way television signal distribution continues to be the primary application of cable television systems, the cable television network can be used for two-way data communication.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149551</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Procedural Reflection in Programming Languages Volume I</title>
<link>https://hdl.handle.net/1721.1/149550</link>
<description>Procedural Reflection in Programming Languages Volume I
Smith, Brian Cantwell
We show how a computational system can be constructed to "reason," effectively and consequentially, about its own inferential processes.  The analysis proceeds in two parts.  First, we consider the general question of computational semantics, rejecting traditional approaches, and arguing that the declarative and procedural aspects of computational symbols (what they stand for, and what behaviour  they engender) should be analysed independently, in order that they may be coherently related.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149550</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computer System for Decision Analysis in Hodgkins Disease</title>
<link>https://hdl.handle.net/1721.1/149549</link>
<description>A Computer System for Decision Analysis in Hodgkins Disease
Rutherford, Cynthia J.; Davies, Byron; Barnett, Arnold I.; Desforges, Jane F.
This report draws together the diverse strands involved in developing a unique computer-based system to stage and manage Hodgkins Disease (HD). Those of us who worked on the final version of this project included two hematologists, a computer scientist, and a statistician.
</description>
<pubDate>Mon, 01 Feb 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149549</guid>
<dc:date>1982-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design of a Routing Service for Campus-wide Internet Transport</title>
<link>https://hdl.handle.net/1721.1/149548</link>
<description>The Design of a Routing Service for Campus-wide Internet Transport
Singh, Vineet
A campus-wide network comprises many subnetworks connected by gateways and has a relatively loose administration.  Modularization of the network implementation is important in this environment to make efficient use of ever-improving technologies and protocols.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149548</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of Concurrency Control for Distributed Databases</title>
<link>https://hdl.handle.net/1721.1/149547</link>
<description>The Complexity of Concurrency Control for Distributed Databases
Kanellakis, Paris C.
This study is an analysis of the distributed version of data base concurrency control.  It provides concrete mathematical evidence that the distributed problem is an inherently more complex task than the centralized one.  The notions of transaction, concurrency, history, serializability, scheduler, etc., for centralized databases are now well-understood both from a theoretical and a practical point of view.
</description>
<pubDate>Tue, 01 Dec 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149547</guid>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Derived Pairs, Overlap Closures, and Rewrite Dominoes: New Tools for Analyzing Term Rewriting Systems</title>
<link>https://hdl.handle.net/1721.1/149546</link>
<description>Derived Pairs, Overlap Closures, and Rewrite Dominoes: New Tools for Analyzing Term Rewriting Systems
Guttag, John V.; Kapur, Deepak; Musser, David R.
Starting from the seminal work of Knuth and Bendix, we develop several notions useful in the study of term rewriting systems.  In particular we introduce the notions of "derived pairs" and "overlap closure" and show that they are useful in analyzing sets of rewrite rules for various properties related to termination. We also introduce a new representation, based on rewrite dominoes, for rewrite rules and sequences of rewrites.
</description>
<pubDate>Tue, 01 Dec 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149546</guid>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Representation of Patient Illness for Electrolyte and Acid-base Diagnosis</title>
<link>https://hdl.handle.net/1721.1/149545</link>
<description>Causal Representation of Patient Illness for Electrolyte and Acid-base Diagnosis
Patil, Ramesh S.
Much of the medical knowledge in the first generation AI in Medicine programs is phenomenological; that is, it describes the associations among phenomena without knowledge of the underlying causal mechanisms.  Although these AIM programs provide a good first approximation to the way clinicians reason, they fail to reproduce the reasoning of clinicians, which is based on a deeper understanding of the phenomena.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149545</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Ease of Use Evaluation of an Integrated Editor and Formatter</title>
<link>https://hdl.handle.net/1721.1/149544</link>
<description>An Ease of Use Evaluation of an Integrated Editor and Formatter
Good, Michael
Etude is an integrated text editor and formatter that was designed to be easy to learn and easy to use.  To measure Etude's success in meeting these goals, twenty-one computer-naive temporary office workers were taught to use Etude in a controlled experiment. Ninety percent of the subjects were able to create and edit letters after a training period of less than two hours and twenty minutes, though they were not able to perform these tasks as quickly as they could when using a typewriter.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149544</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Data Flow Architecture with Improved Asymptotic Performance</title>
<link>https://hdl.handle.net/1721.1/149543</link>
<description>A Data Flow Architecture with Improved Asymptotic Performance
Thomas, Robert E.
Large scale integration presents a unique opportunity to design a computer compromising large numbers of small, inexpensive processors.  This paper presents a design for such a machine based on the asynchronous and functional semantics of data flow.
</description>
<pubDate>Wed, 01 Apr 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149543</guid>
<dc:date>1981-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Debugging in a Distributed Computational Environment</title>
<link>https://hdl.handle.net/1721.1/149542</link>
<description>Interactive Debugging in a Distributed Computational Environment
Schiffenbauer, Robert David
</description>
<pubDate>Tue, 01 Sep 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149542</guid>
<dc:date>1981-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Propositional Dynamic Logic of Looping and Converse</title>
<link>https://hdl.handle.net/1721.1/149541</link>
<description>Propositional Dynamic Logic of Looping and Converse
Streett, Robert S.
Dynamic logic [5,6,15,16] applies concepts from modal logic to a relational semantics of programs to yield various systems for reasoning about the before-after behavior of programs. Analogues to the modal logic assertions ◇p (possibly p) and □p (necessarily p) are the dynamic logic constructs &lt;a&gt;p and [a]p. If a is a program and p is an assertion about the state of a computation, then &lt;a&gt;p asserts that after executing a, p can be the case, and [a]p asserts that after executing a, p must be the case.
</description>
<pubDate>Fri, 01 May 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149541</guid>
<dc:date>1981-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Level VAL Constructs in a Static Data Flow Machine</title>
<link>https://hdl.handle.net/1721.1/149540</link>
<description>High Level VAL Constructs in a Static Data Flow Machine
Todd, Kenneth Wayne
The Dennis-Misunas Form 1 Data Flow Machine can best be described as a static and scalar machine.  Despite these two limiting characteristics, it is still possible to translate the whole of the functional programming language VAL into the base language of this machine. Methods for translating the various high-level constructs of VAL are presented which exploit the parallelism inherent in VAL programs mainly by pipelining through a single expression (vertical parallelism) rather than employing many copies of that same expression (horizontal parallelism), although the latter is not ruled out.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149540</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Switch-level Simulation Model for Integrated Logic Circuits</title>
<link>https://hdl.handle.net/1721.1/149539</link>
<description>A Switch-level Simulation Model for Integrated Logic Circuits
Bryant, Randal Everitt
Switch-level simulators model metal oxide semiconductor (MOS) large scale integrated (LSI) circuits as networks of transistor "switches". They can simulate many aspects of MOS circuits which cannot be expressed in the Boolean logic gate model, such as bidirectional pass transistors, dynamic storage, and charge sharing.
</description>
<pubDate>Sun, 01 Mar 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149539</guid>
<dc:date>1981-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Design Methodology for Self-timed Systems</title>
<link>https://hdl.handle.net/1721.1/149538</link>
<description>A Design Methodology for Self-timed Systems
Singh, Narinder Pal
This thesis presents a design methodology for self-timed systems, which is extremely attractive for implementing systems in VLSI.  Self-timed systems are characterized by the absence of a timing reference to which all operations are synchronized.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149538</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Data Flow Architecture to Computer Music Synthesis</title>
<link>https://hdl.handle.net/1721.1/149537</link>
<description>Application of Data Flow Architecture to Computer Music Synthesis
Cesari, Carol Andrea
A computer music synthesis system is the most flexible of synthesis systems.  It offers a composer extensive control over the sound of his piece.  A user of such a system describes his composition in some synthesis language.  The computer uses this description to calculate samples of a voltage waveform that can be fed to D/A converters at a specified sampling rate.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149537</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semiautomatic Translation of Cobol into Hibol</title>
<link>https://hdl.handle.net/1721.1/149536</link>
<description>Semiautomatic Translation of Cobol into Hibol
Faust, Gregory Gerard
A severe software crisis is currently being experienced by the data processing community due to intolerable maintenance costs.  A system is introduced to reduce those costs by translating existing COBOL software into HIBOL, a very high level language that is significantly easier to maintain.
</description>
<pubDate>Wed, 01 Apr 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149536</guid>
<dc:date>1981-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protecting Externally Supplied Software in Small Computers</title>
<link>https://hdl.handle.net/1721.1/149535</link>
<description>Protecting Externally Supplied Software in Small Computers
Kent, Stephen T.
The increasing decentralization of computing resources and the proliferation of personal and small business computers create new problems in computer security.  One such problem is the protection of externally supplied software, i.e., software supplied by other than the users/owners of these small computers. In the case of personal and small business computers, proprietary software serves as the primary example.
</description>
<pubDate>Sun, 01 Mar 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149535</guid>
<dc:date>1981-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Integrated Approach to Formatted Document Production</title>
<link>https://hdl.handle.net/1721.1/149534</link>
<description>An Integrated Approach to Formatted Document Production
Ilson, Richard
Recent advances in printing technology have reduced the cost of typeset quality printers.  Unfortunately, the production of attractively formatted documents requires typographic skill and special training on computer-based text processing systems. The principal characteristics of Etude are that it embodies substantial typographic expertise, and is based on concepts familiar to untrained users. Furthermore, Etude provides a real-time display facility that allows the results of editing and formatting operations to be seen immediately. Thus, Etude supports the entire process of producing decorously formatted documents.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149534</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recovery of the Swallow Repository</title>
<link>https://hdl.handle.net/1721.1/149533</link>
<description>Recovery of the Swallow Repository
Arens, Gail C.
This thesis presents the design of a set of recovery mechanisms for the Swallow repository.  Swallow is a distributed data storage system that supports highly reliable long term storage of arbitrary sized data objects with special mechanisms for implementing multi-site atomic actions. The Swallow repository is a data storage server that keeps permanent data in write-once stable storage such as optical disk.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149533</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Producing Explanations and Justifications of Expert Consulting Programs</title>
<link>https://hdl.handle.net/1721.1/149532</link>
<description>Producing Explanations and Justifications of Expert Consulting Programs
Swartout, William R.
Traditional methods for explaining programs provide explanations by converting to English the code of the program or traces of the execution of that code.  While such methods can provide adequate explanations of what the program does or did, they typically cannot provide justifications of the code without resorting to canned-text explanations. That is, such systems cannot tell why what the system is doing is a reasonable thing to be doing.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149532</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fault Tolerance in Packet Communication Computer Architectures</title>
<link>https://hdl.handle.net/1721.1/149531</link>
<description>Fault Tolerance in Packet Communication Computer Architectures
Leung, Clement Kin Cho
It is attractive to implement a large scale parallel processing system as a self-timed hardware system with decentralized control and to improve maintainability and availability in such a system through fault tolerance.  In this thesis we show how to tolerate hardware failures in a self-timed hardware system with a packet communication architecture, designed to execute parallel programs organized by data flow concepts.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149531</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computers and People: Personal Computation</title>
<link>https://hdl.handle.net/1721.1/149530</link>
<description>Computers and People: Personal Computation
Turkle, Sherry
In the January 1975 issue of Popular Electronics, MITS, short for Micro Instrumentation and Telemetry Systems, a small computer company in Albuquerque, New Mexico, announced the Altair, a computer small enough to sit on a desktop, powerful enough to support high level language programming, and that you could build for only $429.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149530</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Integrated Circuit Layout: An Analytic Approach</title>
<link>https://hdl.handle.net/1721.1/149529</link>
<description>Algorithms for Integrated Circuit Layout: An Analytic Approach
LaPaugh, Andrea Suzanne
In this thesis, the problem of designing the layout of integrated circuits is examined.  The layout of an integrated circuit specifies the position on the chip of the functional components and the wires interconnecting them.  We use a general model
</description>
<pubDate>Sat, 01 Nov 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149529</guid>
<dc:date>1980-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interprocedural Data Flow Analysis in the Presence of Pointers, Procedure Variables, and Label Variables</title>
<link>https://hdl.handle.net/1721.1/149528</link>
<description>Interprocedural Data Flow Analysis in the Presence of Pointers, Procedure Variables, and Label Variables
Weihl, William Edward
The compilation of highly modular programs requires extensive interprocedural analysis in order to produce reasonable object code. Such analysis is greatly complicated when the source language contains such constructs as procedure variables and label variables.
</description>
<pubDate>Wed, 01 Oct 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149528</guid>
<dc:date>1980-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Survey of the Logic of Effective Definitions</title>
<link>https://hdl.handle.net/1721.1/149527</link>
<description>A Survey of the Logic of Effective Definitions
Tiuryn, J.
LED, the Logic of Effective Definitions, is an extension of first order predicate calculus used for making assertions about programs.  Programs are modeled as effective definitional schemes (following Friedman).  Logical properties of LED and its relations to classical logics and other programming logics are surveyed.
</description>
<pubDate>Wed, 01 Oct 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149527</guid>
<dc:date>1980-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Memory Limitations in Natural Language Processing</title>
<link>https://hdl.handle.net/1721.1/149526</link>
<description>On Memory Limitations in Natural Language Processing
Church, Kenneth Ward
This paper proposes a welcome hypothesis: a computationally simple device is sufficient for processing natural language.  Traditionally it has been argued that processing natural language syntax requires very powerful machinery.  Many engineers have come to this rather grim conclusion: almost all working parsers are actually Turing Machines (TM). For example, Woods specifically designed his Augmented Transition Networks (ATNs) to be Turing Equivalent.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149526</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Driven Loops</title>
<link>https://hdl.handle.net/1721.1/149525</link>
<description>Data Driven Loops
Ruth, Gregory R.
The notion of the data driven loop arises in connection with our work on the Very High Level Language HIBOL and the automatic programming system (ProtoSystem I) that supports it.  Although the concept is of general interest outside of VHLLs and automatic programming, we find it profitable to use HIBOL as a vehicle for our discussion and a means of narrowing its scope. Therefore we first present a description of the domain which HIBOL treats.
</description>
<pubDate>Fri, 01 Aug 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149525</guid>
<dc:date>1980-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Management of Object Histories in the Swallow Repository</title>
<link>https://hdl.handle.net/1721.1/149524</link>
<description>Management of Object Histories in the Swallow Repository
Svobodova, Liba
SWALLOW is an experimental distributed data storage system that provides personal computers with a uniform interface to their local data and the data stored in shared remote servers called repositories.  The SWALLOW repositories provide reliable, secure, and efficient long-term storage for both very small and very large objects, and support updating of a group of objects at one or several repositories in a single atomic action.
</description>
<pubDate>Fri, 01 Aug 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149524</guid>
<dc:date>1980-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulations Among Multidimensional Turing Machines</title>
<link>https://hdl.handle.net/1721.1/149523</link>
<description>Simulations Among Multidimensional Turing Machines
Loui, Michael Conrad
This thesis presents three independent papers: nearly optimal on-line simulations among multidimensional Turing machines, a space bound for one-tape multidimensional Turing machines, and new proofs in the pebble game.
</description>
<pubDate>Fri, 01 Aug 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149523</guid>
<dc:date>1980-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation and Analysis of Real-Time Control Structures</title>
<link>https://hdl.handle.net/1721.1/149522</link>
<description>Representation and Analysis of Real-Time Control Structures
Archer, Rowland F., Jr.
A new notation is introduced for representing real-time scheduling at the task and event level.  These schedules are called control structures.  The primary constructs included which direct the flow of control are sequencing, iteration, and preemption. Additional notation allows the representation of interrupt masking, task termination by external events, task restart as well as resumption from the point of preemption, and codestripping. Algorithms are given for finding the presentation structures of a given control structure in the notation.
</description>
<pubDate>Fri, 01 Aug 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149522</guid>
<dc:date>1980-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Safety and Optimization Transformations for Data Flow Programs</title>
<link>https://hdl.handle.net/1721.1/149521</link>
<description>Safety and Optimization Transformations for Data Flow Programs
Montz, Lynn Barbara
The data flow concept of computation seeks to achieve high performance by allowing concurrent execution of instructions based on the availability of data.  This thesis explores the translation of a subset of the high level language VAL to data flow graphs. The major problem in performing this translation for the target machine, the Dennis-Misunas data flow computer, stems from the restriction that graph execution sequences place at most one value on any given arc at any time. The data/acknowledge arc pair transformation is introduced as a means of implementing this required operational behavior. Its effect on data flow graph operation is subsequently explored as it relates to correctness and performance.
</description>
<pubDate>Tue, 01 Jul 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149521</guid>
<dc:date>1980-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artwork Analysis Tool for VLSI Circuits</title>
<link>https://hdl.handle.net/1721.1/149520</link>
<description>Artwork Analysis Tool for VLSI Circuits
Baker, Clark Marshall
Current methods for designing VLSI chips do not ensure that the chips will perform correctly when manufactured.  Because the turnaround time on chip fabrication varies from a few weeks to a few months, a scheme other than "try it and see if it works" is needed. Checking chips by hand simulation and visual inspection of check plots will not catch all of the errors. In addition, the number of transistors per chip is likely to increase from ten thousand to over a million in the next few years. This increase in complexity precludes any manual verification methods; some better method is needed.
</description>
<pubDate>Sun, 01 Jun 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149520</guid>
<dc:date>1980-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of Monotone Boolean Functions and an Algorithm for Finding Shortest Paths on a Graph</title>
<link>https://hdl.handle.net/1721.1/149519</link>
<description>The Complexity of Monotone Boolean Functions and an Algorithm for Finding Shortest Paths on a Graph
Bloniarz, Peter Anthony
The first part of this thesis considers the complexity of Boolean functions.  The main complexity measures used are the number of gates in combinational networks and the size of Boolean formulas.  The case of monotone realizations, using only the operations AND and OR, of monotone  functions is emphasized.
</description>
<pubDate>Sun, 01 Jun 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149519</guid>
<dc:date>1980-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Theory for Abstract Data Types</title>
<link>https://hdl.handle.net/1721.1/149518</link>
<description>Towards a Theory for Abstract Data Types
Kapur, Deepak
A rigorous framework for studying immutable data types having nondeterministic operations and operations exhibiting exceptional behavior is developed.  The framework embodies the view of a data type taken in programming languages, and supports hierarchical and modular structure among data types.
</description>
<pubDate>Sun, 01 Jun 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149518</guid>
<dc:date>1980-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scheduling Task Systems with Resources</title>
<link>https://hdl.handle.net/1721.1/149517</link>
<description>Scheduling Task Systems with Resources
Lloyd, Errol Lynn
Minimum execution time scheduling of task systems with resources has been the subject of several papers over the past few years.  The model used for much of this work assumes that the resources in the system are continuous. That is, there is one unit of each resource, and a task may require any portion of that unit during its execution.
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149517</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Concept of Independence with Applications in Various Fields of Mathematics</title>
<link>https://hdl.handle.net/1721.1/149516</link>
<description>A Concept of Independence with Applications in Various Fields of Mathematics
Levin, Leonid A.
We use Kolmogorov's algorithmic approach to information theory to define a concept of independence of sequences, or equivalently, the boundedness of their mutual information.  This concept is applied to probability theory, intuitionistic logic, and the theory of algorithms.
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149516</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transmitting Abstract Values in Messages</title>
<link>https://hdl.handle.net/1721.1/149515</link>
<description>Transmitting Abstract Values in Messages
Herlihy, Maurice Peter
This thesis develops primitives for a programming language intended for use in a distributed computer system where individual nodes may have different hardware or software configurations.  Our primitives are presented as extensions to the CLU language.
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149515</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Extension of an Augmented Transition Network Grammar for Morse Code Conversations</title>
<link>https://hdl.handle.net/1721.1/149514</link>
<description>Automatic Extension of an Augmented Transition Network Grammar for Morse Code Conversations
Kaiser, Gail E.
This report describes a 'learning program' that acquires much of the knowledge required by a parsing system that processes conversations in a 'natural' language akin to ham-radio jargon.  The learning program derives information from example sentences taken from transcripts of actual conversations, and uses this knowledge to extend the 'core' augmented transition network (ATN) grammar.
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149514</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of the Maximum Network Flow Problem</title>
<link>https://hdl.handle.net/1721.1/149513</link>
<description>The Complexity of the Maximum Network Flow Problem
Baratz, Alan Edward
This thesis deals with the computational complexity of the maximum network flow problem.  We first introduce the basic concepts and fundamental theorems upon which the study of "max-flow" has been built.  We then trace the development of max-flow algorithms from the original "labeling algorithm" of Ford and Fulkerson, through a recent O(VE log²V) algorithm due to Galil and Naamad.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149513</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Verification of Serializers</title>
<link>https://hdl.handle.net/1721.1/149512</link>
<description>Automatic Verification of Serializers
Atkinson, Russ R.
This thesis is concerned with the problem of controlling concurrent access to shared data.  A language construct is proposed to enforce such control; a specification language is defined to describe the formal requirements of such control; and verification techniques are given to prove that instances of the construct satisfy their specifications.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149512</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Abstract Implementation for a Generalized Data Flow Language</title>
<link>https://hdl.handle.net/1721.1/149511</link>
<description>An Abstract Implementation for a Generalized Data Flow Language
Weng, Kung-Song
In this thesis we are concerned with issues arising from the need to achieve concurrency of operation with a computation on a large scale. Several factors contribute toward increasing interest in systems capable of exploiting the concurrency of computation.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149511</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incomprehensible Computer Systems: Knowledge Without Wisdom</title>
<link>https://hdl.handle.net/1721.1/149510</link>
<description>Incomprehensible Computer Systems: Knowledge Without Wisdom
Rosenberg, Ronni Lynne
An analysis of the incomprehensibility of large, complex computer systems is made.  The thesis is that there is a strong relationship between system incomprehensibility and the necessity to trust computer systems.  A cogent definition of incomprehensibility in computer systems is established, with common themes drawn from interdisciplinary literature dealing with computers and society.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149510</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CLU Reference Manual</title>
<link>https://hdl.handle.net/1721.1/149509</link>
<description>CLU Reference Manual
Liskov, Barbara; Atkinson, Russ R.; Bloom, Toby; Moss, J. Eliot B.; Schaffert, Craig; Scheifler, Bob; Snyder, Alan
This document serves both as an introduction to CLU and as a language reference manual.  Sections 1 through 4 present an overview of the language.  These sections highlight the essential features of CLU, and discuss how CLU differs from other, more conventional, languages.
</description>
<pubDate>Mon, 01 Oct 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149509</guid>
<dc:date>1979-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Computational Theory of Indirect Speech Acts</title>
<link>https://hdl.handle.net/1721.1/149508</link>
<description>Toward a Computational Theory of Indirect Speech Acts
Brown, Gretchen P.
The variety of surface forms that may be used to convey a given speech act poses a major problem in modeling task-oriented (and other) dialogues.  Many such forms are so-called indirect speech acts, that is, the surface form does not correspond to the (or one) intended speech act.
</description>
<pubDate>Mon, 01 Oct 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149508</guid>
<dc:date>1979-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abstract Model Specifications for Data Abstractions</title>
<link>https://hdl.handle.net/1721.1/149507</link>
<description>Abstract Model Specifications for Data Abstractions
Bērziņš, Valdis Andris
A data abstraction introduces a data type with a hidden representation.  Specifications of data abstractions are required to allow the data to be described and used without reference to the underlying representation.  There are two main approaches to specifying data abstractions, the abstract model approach and the axiomatic approach.
</description>
<pubDate>Sun, 01 Jul 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149507</guid>
<dc:date>1979-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Denotational Semantics of Determinate and Non-Determinate Data Flow Programs</title>
<link>https://hdl.handle.net/1721.1/149506</link>
<description>Denotational Semantics of Determinate and Non-Determinate Data Flow Programs
Kosinski, Paul Roman
Among its other characteristics, a programming language should be conducive to writing modular programs, be able to express parallelism and non-determinate behavior, and have a cleanly formalizable semantics.  Data flow programming languages have all these characteristics and are especially amenable to mathematization of their semantics in the denotational style of Scott and Strachey.
</description>
<pubDate>Sun, 01 Jul 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149506</guid>
<dc:date>1979-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Copying Complex Structures in a Distributed System</title>
<link>https://hdl.handle.net/1721.1/149505</link>
<description>Copying Complex Structures in a Distributed System
Sollins, Karen Rosin
This thesis presents a model of a distributed system.  The universe of objects in the distributed system is divided into mutually exclusive sets, each set corresponding to a context.  This model allows naming beyond the context boundaries, but limits communication across such boundaries to message passing only.
</description>
<pubDate>Sun, 01 Jul 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149505</guid>
<dc:date>1979-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>VAL - A Value-Oriented Algorithmic Language: Preliminary Reference Manual</title>
<link>https://hdl.handle.net/1721.1/149504</link>
<description>VAL - A Value-Oriented Algorithmic Language: Preliminary Reference Manual
Ackerman, William B.; Dennis, Jack B.
The programming language VAL (Value-Oriented Algorithmic Language) is designed for expressing algorithms for execution on computers capable of highly concurrent operation.  More specifically, the application area to be supported is numerical computation which strains the limits of high performance machines, and primary targets for translation of VAL programs are data driven machines of the form under development by the Computation Structures Group of the MIT Laboratory for Computer Science for high performance numerical computation.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149504</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Storage and Access Costs for Implementations of Variable-Length Lists</title>
<link>https://hdl.handle.net/1721.1/149503</link>
<description>Storage and Access Costs for Implementations of Variable-Length Lists
Brown, Donna Jean
Consider a machine with a cellular memory used to store a list x ∈ Σ^i, where Σ is a finite alphabet and i ∈ N.  We investigate the machine representation of such a list and the implementation of common list operations such as determining the i-th element and adding or deleting an element.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149503</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of the Simple Code for Dataflow Computation</title>
<link>https://hdl.handle.net/1721.1/149502</link>
<description>Analysis of the Simple Code for Dataflow Computation
Myers, John M.
We analyze a problem in hydrodynamics from the standpoint of computation on a data flow computer that is not yet fully specified, with the objectives of helping to further specify the computer and helping to develop VAL as its source language.  Lawrence Livermore Laboratory supplied the algorithm for hydrodynamics, including heat flow, as a 1749-line FORTRAN code called SIMPLE.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149502</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Computer Systems: Structure and Semantics</title>
<link>https://hdl.handle.net/1721.1/149501</link>
<description>Distributed Computer Systems: Structure and Semantics
Svobodova, Liba; Liskov, Barbara; Clark, David D.
This report describes an ongoing project in the area of design of distributed systems.  The goal is to develop an effective programming system that will support well-structured design, implementation, maintenance, and control of distributed processing applications.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149501</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Algorithms in Finite Fields</title>
<link>https://hdl.handle.net/1721.1/149500</link>
<description>Probabilistic Algorithms in Finite Fields
Rabin, Michael O.
We present probabilistic algorithms for the problems of finding an irreducible polynomial of degree n over a finite field, finding the roots of a polynomial, and factoring a polynomial into its irreducible factors over a finite field.  All of these problems are of importance in algebraic coding theory, algebraic symbol manipulation, and number theory.
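As an illustration of the randomized style the report describes, here is a small Python sketch, limited to root finding over a prime field GF(p) in the Rabin/Cantor-Zassenhaus spirit; it is an illustrative reconstruction, not the paper's own algorithm:

```python
import random

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b, p):                       # polynomial product over GF(p)
    r = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % p
    return trim(r)

def psub(a, b, p):                       # polynomial difference over GF(p)
    n = max(len(a), len(b))
    aa = a + [0] * (n - len(a))
    bb = b + [0] * (n - len(b))
    return trim([(x - y) % p for x, y in zip(aa, bb)])

def pdivmod(a, b, p):                    # quotient and remainder over GF(p)
    a = trim(a[:])
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], p - 2, p)           # inverse of b's leading coefficient
    while len(a) >= len(b):
        d = len(a) - len(b)
        c = a[-1] * inv % p
        q[d] = c
        for i, bc in enumerate(b):
            a[i + d] = (a[i + d] - c * bc) % p
        a = trim(a)
    return trim(q), a

def pgcd(a, b, p):
    while b:
        a, b = b, pdivmod(a, b, p)[1]
    inv = pow(a[-1], p - 2, p)
    return [c * inv % p for c in a]      # normalize to a monic polynomial

def ppowmod(base, e, m, p):              # base(x)^e mod m(x) over GF(p)
    r, base = [1], pdivmod(base, m, p)[1]
    while e:
        if e % 2:
            r = pdivmod(pmul(r, base, p), m, p)[1]
        base = pdivmod(pmul(base, base, p), m, p)[1]
        e //= 2
    return r

def roots(f, p):
    """Distinct roots in GF(p) of f (coefficients low-to-high), p an odd prime."""
    # keep only the distinct linear factors: gcd(f, x^p - x)
    g = pgcd(f, psub(ppowmod([0, 1], p, f, p), [0, 1], p), p)
    out = []
    def split(g):
        if len(g) == 2:                  # monic x + c: root is -c
            out.append(-g[0] % p)
        elif len(g) > 2:
            while True:                  # a random shift splits g with prob about 1/2
                r = random.randrange(p)
                h = pgcd(g, psub(ppowmod([r, 1], (p - 1) // 2, g, p), [1], p), p)
                if len(h) > 1 and len(g) > len(h):
                    break
            split(h)
            split(pdivmod(g, h, p)[0])
    split(g)
    return sorted(out)
```

For example, roots([5, 3, 4, 1], 7) factors x^3 + 4x^2 + 3x + 5 over GF(7) and returns [2, 3, 5].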
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149500</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Digitalized Signatures and Public-key Functions as Intractable as Factorization</title>
<link>https://hdl.handle.net/1721.1/149499</link>
<description>Digitalized Signatures and Public-key Functions as Intractable as Factorization
Rabin, Michael O.
We introduce a new class of public-key functions involving a number n = p·q having two large prime factors.  As usual, the key n is public, while p and q are the private key used by the issuer for production of signatures and function inversion.  These functions can be used for all the applications involving public-key functions proposed by Diffie and Hellman [2], including digitalized signatures.
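A toy Python sketch of the scheme's core operation, signing by extracting a square root modulo n = p·q; the small primes are illustrative, and the padding/hashing a practical scheme needs to make the message a quadratic residue is omitted:

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def rabin_sign(m, p, q):
    """A square root of m mod n = p*q, computable only with the private p, q.
    Assumes p, q are primes congruent to 3 (mod 4) and m is a quadratic
    residue mod n (a real scheme pads the message until it is)."""
    rp = pow(m, (p + 1) // 4, p)         # square root of m mod p
    rq = pow(m, (q + 1) // 4, q)         # square root of m mod q
    _, yp, yq = egcd(p, q)               # p*yp + q*yq == 1
    return (rp * q * yq + rq * p * yp) % (p * q)   # combine by CRT

def rabin_verify(sig, m, n):
    """Anyone holding only the public key n checks a signature by squaring."""
    return sig * sig % n == m % n
```

Forging a signature without p and q amounts to extracting square roots mod n, which is as hard as factoring n; that equivalence is the paper's central point.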
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149499</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synchronization Mechanisms for Modular Programming Languages</title>
<link>https://hdl.handle.net/1721.1/149498</link>
<description>Synchronization Mechanisms for Modular Programming Languages
Bloom, Toby
Any programming language that supports concurrency needs a synchronization construct with which to express access control for shared resources.  This thesis examines synchronization constructs from the standpoint of language design for reliable software. The criteria a synchronization mechanism must satisfy to support construction of reliable, easily maintainable concurrent software are defined.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149498</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Serializability of Concurrent Data Base Updates</title>
<link>https://hdl.handle.net/1721.1/149497</link>
<description>Serializability of Concurrent Data Base Updates
Papadimitriou, Christos H.
A sequence of interleaved user transactions in a data base system may not be serializable, i.e., equivalent to some sequential execution of the individual transactions.  Using a simple transaction model we show that recognizing the transaction histories which are serializable is an NP-complete problem.
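For contrast with the NP-completeness result, the narrower notion of conflict serializability can be decided in polynomial time by building a precedence graph over the transactions and testing it for cycles.  A minimal Python sketch (illustrative, not from the report):

```python
def conflict_serializable(history):
    """history: list of (txn, op, item) steps in execution order, op in {'r','w'}.
    Adds an edge Ti -> Tj for every conflicting pair (same item, at least one
    write, Ti's step first); the history is conflict-serializable iff the
    resulting precedence graph is acyclic."""
    edges = {}
    for i, (t1, op1, x1) in enumerate(history):
        for t2, op2, x2 in history[i + 1:]:
            if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
                edges.setdefault(t1, set()).add(t2)
    WHITE, GREY, BLACK = 0, 1, 2         # DFS-based cycle detection
    color = {}
    def dfs(u):
        color[u] = GREY
        for v in edges.get(u, ()):
            c = color.get(v, WHITE)
            if c == GREY or (c == WHITE and not dfs(v)):
                return False             # back edge: cycle found
        color[u] = BLACK
        return True
    txns = {t for t, _, _ in history}
    return all(color.get(t) == BLACK or dfs(t) for t in txns)
```

For example, the interleaving r1(x) w2(x) w1(x) produces edges T1 to T2 and T2 to T1 and is rejected, while r1(x) w1(x) r2(x) w2(x) is accepted.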
</description>
<pubDate>Thu, 01 Mar 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149497</guid>
<dc:date>1979-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Machine Architecture to Support an Object-Oriented Language</title>
<link>https://hdl.handle.net/1721.1/149496</link>
<description>A Machine Architecture to Support an Object-Oriented Language
Snyder, Alan
In object-oriented languages (e.g., LISP, Simula, and CLU), all (or most) data objects used by a program are implicitly allocated from a free-storage area and are accessed via fixed-size references.  The storage for an object is automatically reclaimed (garbage collected) when the object is no longer accessible to the program.
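The reclamation step can be illustrated with a toy mark-sweep collector in Python (an illustrative sketch only; the collectors of CLU and LISP implementations of the period were typically compacting as well):

```python
class Obj:
    """A heap object holding fixed-size references to other heap objects."""
    def __init__(self):
        self.refs = []
        self.marked = False

def collect(heap, roots):
    """Toy mark-sweep: mark everything reachable from the roots, then keep
    only marked objects; anything unreachable is reclaimed."""
    stack = list(roots)
    while stack:                         # mark phase
        o = stack.pop()
        if not o.marked:
            o.marked = True
            stack.extend(o.refs)
    live = [o for o in heap if o.marked] # sweep phase
    for o in live:
        o.marked = False                 # reset mark bits for the next cycle
    return live
```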
</description>
<pubDate>Thu, 01 Mar 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149496</guid>
<dc:date>1979-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Microcomputer Network Simulation System</title>
<link>https://hdl.handle.net/1721.1/149495</link>
<description>A Microcomputer Network Simulation System
Krizan, Brock Collins
The design, development and use of cost-effective computer networks require information about system behavior given a variety of network structures and operational policies.  Because computer networks are complex systems whose behavior is generally not intuitively understood, there is a need for system analysis tools to provide a wide range of performance information.
</description>
<pubDate>Thu, 01 Feb 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149495</guid>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Concurrency Control for a Distributed Information System</title>
<link>https://hdl.handle.net/1721.1/149494</link>
<description>Robust Concurrency Control for a Distributed Information System
Montgomery, Warren A.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149494</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Naming and Synchronization in a Decentralized Computer System</title>
<link>https://hdl.handle.net/1721.1/149493</link>
<description>Naming and Synchronization in a Decentralized Computer System
Reed, David Patrick
In this dissertation a new approach to the synchronization of accesses to shared data objects is developed.  Traditional approaches to the synchronization problems of shared data accessed by concurrently running computations have relied on mutual exclusion--the ability of one computation to stop the execution of other computations that might access or change shared data accessed by that computation.
</description>
<pubDate>Sun, 01 Oct 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149493</guid>
<dc:date>1978-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-time Control Structures for Block Diagram Schemata</title>
<link>https://hdl.handle.net/1721.1/149492</link>
<description>Real-time Control Structures for Block Diagram Schemata
Teixeira, Thomas Joseph
Block diagram schemata model computation systems in the context of an external environment.  The environment imposes various constraints on the real-time performance of any implementation of a block diagram schema.  The model is used to provide precise definitions of real-time performance. The portion of the implementation that affects the real-time performance is called the control structure.
</description>
<pubDate>Tue, 01 Aug 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149492</guid>
<dc:date>1978-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of Synchronization Code for Data Abstractions</title>
<link>https://hdl.handle.net/1721.1/149491</link>
<description>Synthesis of Synchronization Code for Data Abstractions
Laventhal, Mark Steven
Synchronization code is necessary to control shared access of an abstract data object in a parallel-processing environment.  This thesis explores an approach in which a synchronization property can be specified in a high-level nonprocedural language, and an implementation for the specified property can be synthesized algorithmically.
</description>
<pubDate>Sat, 01 Jul 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149491</guid>
<dc:date>1978-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Formalization of the State Machine Specification Technique</title>
<link>https://hdl.handle.net/1721.1/149490</link>
<description>A Formalization of the State Machine Specification Technique
Principato, Robert N., Jr.
This thesis develops the state machine specification technique, a formal specification technique for data abstractions based on Parnas' work on specifying software modules.
</description>
<pubDate>Sat, 01 Jul 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149490</guid>
<dc:date>1978-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Denotational Semantics of CLU</title>
<link>https://hdl.handle.net/1721.1/149489</link>
<description>A Denotational Semantics of CLU
Scheifler, Robert W.
A denotational semantics of CLU, an object-oriented language supporting data abstractions, is presented.  The definition is based on Scott's lattice-theoretic approach to the theory of computation.  Modules, the basic unit of compilation, are represented in terms of a set of recursively defined domains called the abstract syntax.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149489</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Logics of Programs: Axiomatics and Descriptive Power</title>
<link>https://hdl.handle.net/1721.1/149488</link>
<description>Logics of Programs: Axiomatics and Descriptive Power
Harel, David
This thesis is concerned with the development of mathematical tools for reasoning about computer programs.  The approach is to design and investigate the properties of various dynamic logics  with an emphasis on useful expressive power and adequate proof theory.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149488</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Specification of Code Generation Algorithms</title>
<link>https://hdl.handle.net/1721.1/149487</link>
<description>The Specification of Code Generation Algorithms
Terman, Christopher J.
This thesis addresses the problem of automatically constructing the code generation phase of a compiler from a specification of the source language and target machine.  A framework for such a specification is presented in which information about language- and machine-dependent semantics is incorporated as a set of transformations on an internal representation of the source-language program.
</description>
<pubDate>Sat, 01 Apr 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149487</guid>
<dc:date>1978-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Actor Systems for Real-time Computation</title>
<link>https://hdl.handle.net/1721.1/149486</link>
<description>Actor Systems for Real-time Computation
Baker, Henry Givens, Jr.
Actor theory was invented by Hewitt and collaborators as a synthesis of many of the ideas from the high-level languages LISP, GEDANKEN, SMALLTALK, SIMULA-67, and others.  An Actor system consists of a group of active objects, called Actors, which communicate by passing messages to one another.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149486</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Final Report of the Multics Kernel Design Project</title>
<link>https://hdl.handle.net/1721.1/149485</link>
<description>Final Report of the Multics Kernel Design Project
Schroeder, Michael D.; Clark, David D.; Saltzer, Jerome H.; Wells, D.H.
We describe a plan to create an auditable version of Multics.  The engineering experiments of that plan are now complete.  Type extension as a design discipline has been demonstrated feasible, even for the internal workings of an operating system, where many subtle intermodule dependencies were discovered and controlled.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149485</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Time-space Classes and Their Relation to the Theory of Real Addition</title>
<link>https://hdl.handle.net/1721.1/149484</link>
<description>On Time-space Classes and Their Relation to the Theory of Real Addition
Bruss, Anna R.
A new lower bound on the computational complexity of the theory of real addition and several related theories is established: any decision procedure for these theories requires either space n² or nondeterministic time 2^(εn²) for some constant ε &gt; 0 and infinitely many n.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149484</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Actors and Continuous Functionals</title>
<link>https://hdl.handle.net/1721.1/149483</link>
<description>Actors and Continuous Functionals
Hewitt, Carl; Baker, Henry Givens, Jr.
This paper presents precise versions of some "laws" that must be satisfied by computations involving communicating parallel processes.  The laws take the form of stating plausible restrictions on the histories of computations that are physically realizable.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149483</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Attribute Partitioning in a Self-Adaptive Relational Data Base System</title>
<link>https://hdl.handle.net/1721.1/149482</link>
<description>Attribute Partitioning in a Self-Adaptive Relational Data Base System
Niamir, Bahram
One technique that is sometimes employed to enhance the performance of a data base management system is known as attribute partitioning.  This is the process of dividing the attributes of a file into subfiles that are stored separately.  By storing together those attributes that are frequently requested together by transactions, and by separating those that are not, attribute partitioning can reduce the number of pages that must be transferred from secondary  storage to primary memory in order to process a transaction.
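A toy cost model in Python makes the trade-off concrete; the model and its parameters (nrows, cells_per_page) are illustrative assumptions, not the thesis's:

```python
import math

def io_cost(partition, transactions, nrows=10_000, cells_per_page=1_000):
    """Estimated page transfers for a vertical partition.
    partition: list of attribute-name sets (the subfiles);
    transactions: list of (attribute-set, frequency) pairs.
    Assumes a subfile of k attributes occupies ceil(nrows*k/cells_per_page)
    pages and is read in full by every transaction touching any attribute
    stored in it."""
    pages = [math.ceil(nrows * len(sub) / cells_per_page) for sub in partition]
    return sum(freq * sum(pg for sub, pg in zip(partition, pages)
                          if not sub.isdisjoint(attrs))
               for attrs, freq in transactions)
```

With transactions that touch {a, b} often and {c} occasionally, the split [{a, b}, {c, d}] costs far fewer transfers than the unpartitioned file [{a, b, c, d}], which is exactly the effect the thesis's self-adaptive partitioner exploits.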
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149482</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specifications and Verification Techniques for Parallel Programs Based on Message Passing Semantics</title>
<link>https://hdl.handle.net/1721.1/149481</link>
<description>Specifications and Verification Techniques for Parallel Programs Based on Message Passing Semantics
Yonezawa, Akinori
This thesis presents formal specification and verification techniques for both serial and parallel programs written in SIMULA-like object oriented languages.  These techniques are based on the notion of states of individual objects which are defined uniformly in serial and parallel computations. They can specify and verify the behavior of data and procedural objects in multi-process environments, thus overcoming some of the difficulties in dealing with parallelism which characterized previous work on formal specifications for abstract data types.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149481</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abstract Data Types in Stack Based Languages</title>
<link>https://hdl.handle.net/1721.1/149480</link>
<description>Abstract Data Types in Stack Based Languages
Moss, J. Eliot B.
Abstract data types are the basis of an emerging methodology of computer programming.  The only existing languages supporting abstract data types directly, CLU and Simula, both require compacting garbage collection, and thus they are not suitable for many applications. This thesis presents the design of a new language incorporating abstract data types; the language requires only a run-time stack, and not garbage collection.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149480</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formal Specifications for Packet Communication Systems</title>
<link>https://hdl.handle.net/1721.1/149479</link>
<description>Formal Specifications for Packet Communication Systems
Ellis, David J.
One of the most difficult tasks facing computer scientists is that of designing systems and making sure that they perform their intended functions correctly.  As computer systems have grown in size and complexity, the problems of system design and verification have become increasingly acute.
</description>
<pubDate>Tue, 01 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149479</guid>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulation of Packet Communication Architecture Computer Systems</title>
<link>https://hdl.handle.net/1721.1/149478</link>
<description>Simulation of Packet Communication Architecture Computer Systems
Bryant, Randal R.
Simulations of computer systems have traditionally been performed on a single sequential computer, even if the system to be simulated contains a number of components which operate concurrently.
</description>
<pubDate>Tue, 01 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149478</guid>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Structure Memory for Data Flow Computers</title>
<link>https://hdl.handle.net/1721.1/149477</link>
<description>A Structure Memory for Data Flow Computers
Ackerman, William B.
A data flow computer is one which achieves enormous concurrency of instruction execution through a machine architecture that acts directly on a data dependency graph of the program.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149477</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deadlock Detection in Computer Networks</title>
<link>https://hdl.handle.net/1721.1/149476</link>
<description>Deadlock Detection in Computer Networks
Goldman, Barry
The problem of detecting process deadlocks is common to transaction oriented computer systems which allow data sharing. Several good algorithms exist for detecting  process deadlocks in a single location facility. However, the deadlock detection problem becomes more complex in a geographically distributed  computer network due to the fact that all the information needed to detect a deadlock is not necessarily available in a single node, and communications delays may lead to synchronization problems in getting an accurate view of the network state.
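Once the scattered wait-for information has been gathered, the detection step itself is a cycle test on the merged graph; a minimal Python sketch (illustrative, not the report's algorithm), in which no single node's local graph need contain the cycle:

```python
def deadlocked(local_waits):
    """local_waits: {node: [(p, q), ...]} meaning that, as seen at `node`,
    process p waits for a resource held by process q.  A global deadlock
    is a cycle in the union of the per-node wait-for graphs."""
    edges = {}
    for waits in local_waits.values():   # merge the per-node views
        for p, q in waits:
            edges.setdefault(p, set()).add(q)
    seen, on_path = set(), set()
    def cyclic(u):                       # DFS: a back edge means deadlock
        on_path.add(u)
        for v in edges.get(u, ()):
            if v in on_path or (v not in seen and cyclic(v)):
                return True
        on_path.discard(u)
        seen.add(u)
        return False
    return any(cyclic(p) for p in list(edges) if p not in seen)
```

Here {"A": [("P1", "P2")], "B": [("P2", "P1")]} is deadlocked even though each node alone sees an acyclic graph, which is the core difficulty the report addresses.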
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149476</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Facilitating Interprocess Communication in a Heterogeneous Network Environment</title>
<link>https://hdl.handle.net/1721.1/149475</link>
<description>Facilitating Interprocess Communication in a Heterogeneous Network Environment
Levine, Paul H.
Passing information among processors with different internal data-formatting schemes has proven to be a major complication in computer networking efforts.  Data format translation is necessary to support information exchange in a heterogeneous network environment.  Three strategies for performing this translation are considered: translation by the message sender (or receiver), translation by an intermediate translator, and the use of a standard intermediate format.
</description>
<pubDate>Fri, 01 Jul 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149475</guid>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Framework for Processing Dialogue</title>
<link>https://hdl.handle.net/1721.1/149474</link>
<description>A Framework for Processing Dialogue
Brown, Gretchen P.
This report describes a framework for handling mixed-initiative English dialogue in a console session environment, with emphasis on recognition.  Within this framework, both linguistic and non-linguistic activities are modelled by structures called methods, which are a declarative form of procedural knowledge.  Our design focuses on units of linguistic activity larger than the speech act, so that the pragmatic and semantic context of an utterance can be used to guide its interpretation.  Also important is the treatment of indirect speech acts, e.g., the different ways to ask a question, give a command, etc.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149474</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of the Finite Containment Problem for Petri Nets</title>
<link>https://hdl.handle.net/1721.1/149473</link>
<description>The Complexity of the Finite Containment Problem for Petri Nets
Mayr, Ernst Wilhelm
If the reachability set of a Petri net (or, equivalently, vector addition system) is finite it can be effectively constructed.  Furthermore, the finiteness is decidable.  Thus, the containment and equality problem for finite reachability sets become solvable. We investigate the complexity of decision procedures for these problems and show by reducing a bounded version of Hilbert's Tenth Problem to the finite containment problem that these two problems are extremely hard, that, in fact, the complexity of each decision procedure exceeds any primitive recursive function infinitely often.
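The first claim, that a finite reachability set can be effectively constructed, can be sketched in a few lines of Python (an illustrative breadth-first enumeration, with an arbitrary cutoff standing in for the boundedness test):

```python
from collections import deque

def reachability_set(initial, transitions, limit=10_000):
    """Enumerate the reachability set of a Petri net / vector addition system.
    initial: tuple of per-place token counts; transitions: list of
    (consume, produce) tuples of the same length.  Returns the full set if
    it has at most `limit` markings, else raises."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        m = frontier.popleft()
        for take, put in transitions:
            if all(mi >= ti for mi, ti in zip(m, take)):   # transition enabled
                m2 = tuple(mi - ti + pi for mi, ti, pi in zip(m, take, put))
                if m2 not in seen:
                    if len(seen) >= limit:
                        raise RuntimeError("reachability set may be infinite")
                    seen.add(m2)
                    frontier.append(m2)
    return seen
```

Given two such finite sets, containment is then just a subset test; the report's point is that although this is computable, any such procedure is non-primitive-recursive in the worst case.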
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149473</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simple and Flexible System Initialization Mechanism</title>
<link>https://hdl.handle.net/1721.1/149472</link>
<description>A Simple and Flexible System Initialization Mechanism
Luniewski, Allen W.
This thesis presents an approach to system initialization which is simple and easy to understand and, at the same time, is versatile in the face of configuration changes.  This thesis considers initialization of a layered system.  The initialization mechanism is built upon three key concepts: existence of a minimal configuration, a core image of the system and dynamic reconfiguration.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149472</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-discretionary Access Control for Decentralized Computing Systems</title>
<link>https://hdl.handle.net/1721.1/149471</link>
<description>Non-discretionary Access Control for Decentralized Computing Systems
Karger, Paul A.
This thesis examines the issues relating to non-discretionary access controls for decentralized computing systems.  Decentralization changes the basic character of a computing system from a set of processes referencing a data base to a set of processes sending and receiving messages.  Because messages must be acknowledged, operations that were read-only in a centralized system become read-write operations.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149471</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Layered Virtual Memory Manager</title>
<link>https://hdl.handle.net/1721.1/149470</link>
<description>A Layered Virtual Memory Manager
Mason, Andrew Halstead
This thesis presents a specification for the Multics virtual memory manager.  The virtual memory manager is that part of the operating system which coordinates the usage of physical memory and which manages the bindings between logical memory and physical memory. In the case of Multics, physical memory is composed of fixed-length blocks called frames and logical memory consists of segments, representing sets of frames.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149470</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Digitalis Therapy Advisor with Explanations</title>
<link>https://hdl.handle.net/1721.1/149469</link>
<description>A Digitalis Therapy Advisor with Explanations
Swartout, William R.
This thesis describes the English explanation facility of the OWL Digitalis Advisor, a program designed to advise physicians regarding digitalis therapy.  The program is written in OWL, an English-based computer language being developed at MIT.  The system can explain, in English, both the methods it uses and how those methods were applied during a particular session. In addition, the program can explain how it acquires information and tell the user how it deals with that information either in general or during a particular session.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149469</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Robust Environment for Program Development</title>
<link>https://hdl.handle.net/1721.1/149468</link>
<description>A Robust Environment for Program Development
Goldberg, Harold J.
This thesis examines the problems of debugging and preservation of the user programming environment and proposes a scheme by which the program development environment can be protected.  Typically, designers of timeshared or multiprogrammed computer systems only consider inter-user interference as a source of problems and do not worry about what users do in their own environments. Thus, users can, by writing incorrect programs, cause destruction of the programming environment and personal data bases.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149468</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Case Study of Intermodule Dependencies in a Virtual Memory Subsystem</title>
<link>https://hdl.handle.net/1721.1/149467</link>
<description>A Case Study of Intermodule Dependencies in a Virtual Memory Subsystem
Hunt, Douglas H.
A problem currently confronting computer scientists is to develop a method for the production of large software systems that are easy to understand and certify.  The most promising methods involve decomposing a system into small modules in such a way that there are few intermodule dependencies.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149467</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordination of Parallel Processes in the Actor Model of Computation</title>
<link>https://hdl.handle.net/1721.1/149466</link>
<description>Coordination of Parallel Processes in the Actor Model of Computation
Goodman, Nathan
Two algorithms for the mutual exclusion problem are described and proven to operate correctly.  The algorithms are unique in that they use very simple synchronization primitives yet are fair and retain their fairness even if the number of parallel processes in the computer system increases unboundedly over time. One of the algorithms uses simple cells of read/write storage as the primitive; the algorithm is similar to the classic algorithms for this problem proposed by Dijkstra and Knuth, but is generalized to handle an arbitrary number of processes.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149466</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Multi-Process Design of a Paging System</title>
<link>https://hdl.handle.net/1721.1/149465</link>
<description>A Multi-Process Design of a Paging System
Huber, Andrew R.
This thesis presents a design for a paging system that may be used to implement a virtual memory on a large scale, demand paged computer utility.  A model for such a computer system with a multi-level, hierarchical memory system is presented.  The functional requirements of a paging system for such a model are discussed, with emphasis on the parallelism inherent in the algorithms used to implement the memory management functions.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149465</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Logic of Systems</title>
<link>https://hdl.handle.net/1721.1/149464</link>
<description>The Logic of Systems
Furtek, Frederick Curtis
We present a theory about the logical relationships associated with system behavior.  The rules governing the behavior of a system are expressed by a Petri net.  A set of assumptions about the modeling of a system permits us to separate system behavior into two components, which we refer to as information and control.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149464</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnostic Planning and Cancer Management</title>
<link>https://hdl.handle.net/1721.1/149463</link>
<description>Diagnostic Planning and Cancer Management
Safran, Charles; Desforges, Jane F.; Tsichlis, Philip N.
This report describes a computer system for evaluating patients with Hodgkin's disease, developed by the Clinical Decision Making Group (CDMG) at the MIT Laboratory for Computer Science in conjunction with the Blood Research Laboratory of the New England Medical Center Hospitals and the Department of Hematology, Tufts University School of Medicine (T-NEMC). This system uses decision-theoretic techniques to aid in the formulation of a diagnostic plan for the cancer patient.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149463</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semantical Considerations on Floyd-Hoare Logic</title>
<link>https://hdl.handle.net/1721.1/149462</link>
<description>Semantical Considerations on Floyd-Hoare Logic
Pratt, Vaughan R.
This paper deals with logics of programs.  The objective is to formalize a notion of program description and to give both plausible (semantic) and effective (syntactic) criteria for the notion of truth of a description.  A novel feature of this treatment is the development of the mathematics underlying Floyd-Hoare axiom systems independently of such systems. Directions that such research might take are also considered.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149462</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Type Extension to Organize Virtual Memory Mechanisms</title>
<link>https://hdl.handle.net/1721.1/149461</link>
<description>Using Type Extension to Organize Virtual Memory Mechanisms
Janson, Philippe Arnaud
Much effort is currently being devoted to producing systems that are easy to understand, to verify and to develop.  The general methodology for designing such a system consists of decomposing it into a structured set of modules so that the modules can be understood, verified and developed individually, and so that the understanding/verification of the system can be derived from the understanding/verification of its modules. While many of the mechanisms in a computer system have been decomposed successfully into a structured set of modules, no technique has been proposed to organize the virtual memory mechanism of a system in such a way.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149461</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Index Selection in a Self-Adaptive Relational Data Base Management System</title>
<link>https://hdl.handle.net/1721.1/149460</link>
<description>Index Selection in a Self-Adaptive Relational Data Base Management System
Chan, Arvola Y.
The development of large integrated data bases that support a variety of applications in an enterprise promises to be one of the most important data processing activities of the next decade.  The effective utilization of such data bases depends on the ability of data base management systems to cope with the evolution of data base applications.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149460</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Level Expression of Semantic Integrity Specifications in a Relational Data Base System</title>
<link>https://hdl.handle.net/1721.1/149459</link>
<description>High Level Expression of Semantic Integrity Specifications in a Relational Data Base System
McLeod, Dennis J.
The "semantic integrity" of a data base is said to be violated when the data base ceases to represent a legitimate configuration of the application environment it is intended to model.  In the context of the relational data model, it is possible to identify multiple levels of semantic integrity information: (1) the description of the domains of the data base, as abstract sets of atomic data values (domain definition), (2) the specification of the fundamental structure of the data (relation structure specification), (3) the definition of the abstract operations which are meaningful in terms of the application environment (structured operations), and (4) the expression of additional semantic information not contained in the structure of the relations or in the identities of their underlying domains (relation constraints).
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149459</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Processor Multiplexing in a Layered Operating System</title>
<link>https://hdl.handle.net/1721.1/149458</link>
<description>Processor Multiplexing in a Layered Operating System
Reed, David Patrick
This thesis presents a simply structured design for the implementation of processes in a kernel-structured operating system.  The design provides a minimal mechanism for the support of two distinct classes of processes found in the computer system -- those which are part of the kernel operating system itself, and those used to execute user-specified computations.
</description>
<pubDate>Thu, 01 Jul 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149458</guid>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Secure and Flexible Model of Process Initiation for a Computer Utility</title>
<link>https://hdl.handle.net/1721.1/149457</link>
<description>A Secure and Flexible Model of Process Initiation for a Computer Utility
Montgomery, Warren Alan
This thesis demonstrates that the amount of protected, privileged code related to process initiation in a computer utility can be greatly reduced by making process creation unprivileged.  The creation of processes can be controlled by the standard mechanism for controlling entry to a domain, which forces a new process to begin execution at a controlled location.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149457</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Encryption-based Protection Protocols for Interactive User-computer Communication</title>
<link>https://hdl.handle.net/1721.1/149456</link>
<description>Encryption-based Protection Protocols for Interactive User-computer Communication
Kent, Stephen T.
This thesis develops a complete set of protocols, which utilize a block cipher, e.g., the NBS data encryption standard, for protecting interactive user-computer communication over physically unsecured channels.  The use of the block cipher protects against disclosure of message contents to an intruder, and the protocols provide for the detection of message stream modification and denial of message service by an intruder.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149456</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decidability Questions for Petri Nets</title>
<link>https://hdl.handle.net/1721.1/149455</link>
<description>Decidability Questions for Petri Nets
Hack, Michel Henri Théodore
An understanding of the mathematical properties of Petri nets is essential when one wishes to use Petri nets as an abstract model for concurrent systems.  The decidability of various problems which arise in this context is an important aspect of this question. The fact that these problems also arise in the context of other mathematical theories, such as commutative semigroups, closure under linear relations, Matrix Context-Free Grammars, or Weak Counter Automata, provides further motivation.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149455</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Program for the Design of Procurement Systems</title>
<link>https://hdl.handle.net/1721.1/149454</link>
<description>A Program for the Design of Procurement Systems
Bosyj, Michael
Computer technology has had limited success in producing useful business applications.  Management systems seldom meet users' requirements, are often inappropriate to an application, and are frequently abandoned.  But why?  Business lacks expertise in the application of computers.
</description>
<pubDate>Sat, 01 May 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149454</guid>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Petri Net Language</title>
<link>https://hdl.handle.net/1721.1/149453</link>
<description>Petri Net Language
Hack, Michel Henri Théodore
In a labeled Petri net we assign symbols from an alphabet to some or all of the transitions of a Petri net.  To each firing sequence of such a labeled Petri net corresponds a string over the alphabet.  We study the languages obtained in this way from all firing sequences of a Petri net, or from all firing sequences which reach a given final marking.
</description>
<pubDate>Mon, 01 Mar 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149453</guid>
<dc:date>1976-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Data Base Applications of Constraint Expressions</title>
<link>https://hdl.handle.net/1721.1/149452</link>
<description>Some Data Base Applications of Constraint Expressions
Grossman, Richard Weaver
This report presents a novel network-like representation for information, called "constraint expressions" (CE).  CE makes use of some of the knowledge-representation techniques developed by Artificial Intelligence research.
</description>
<pubDate>Sun, 01 Feb 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149452</guid>
<dc:date>1976-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Preliminary Study in Computer-aided Legal Analysis</title>
<link>https://hdl.handle.net/1721.1/149451</link>
<description>A Preliminary Study in Computer-aided Legal Analysis
Meldman, Jeffrey A.
This paper describes the prototype for a computer system that can perform a simple kind of legal analysis.  The system user, who is presumed to be a lawyer, describes to the system a hypothetical set of facts.  The system determines the extent to which these facts fall within certain legal doctrines (by syllogism), or near to these doctrines  (by analogy).
</description>
<pubDate>Sat, 01 Nov 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149451</guid>
<dc:date>1975-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimizing the Naming Facilities Requiring Protection in a Computing Utility</title>
<link>https://hdl.handle.net/1721.1/149450</link>
<description>Minimizing the Naming Facilities Requiring Protection in a Computing Utility
Bratt, Richard Glenn
This thesis examines the various mechanisms for naming the information objects stored in a general-purpose computing utility, and isolates a basic set of naming facilities that must be protected to assure complete control over user interaction and that allow desired interactions among users to occur in a natural way. Minimizing the protected naming facilities, consistent with the functional objective of controlled but natural user interaction, contributes to defining a security kernel for a general-purpose computing utility.
</description>
<pubDate>Mon, 01 Sep 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149450</guid>
<dc:date>1975-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanization of Temporal Knowledge</title>
<link>https://hdl.handle.net/1721.1/149449</link>
<description>Mechanization of Temporal Knowledge
Kahn, Kenneth M.
The design and implementation of a collection of computer programs knowledgeable about time "in general," called the time specialist, is described.  The thesis that this time specialist can be placed in the service of larger, more general problem solvers is demonstrated for two examples: medical diagnosis and the understanding of a time-travel story.
</description>
<pubDate>Mon, 01 Sep 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149449</guid>
<dc:date>1975-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semantics of Communicating Parallel Processes</title>
<link>https://hdl.handle.net/1721.1/149448</link>
<description>Semantics of Communicating Parallel Processes
Greif, Irene Gloria
The thesis of this dissertation is that an understanding of the ordering constraints that are introduced among the events of parallel processes is essential to the understanding of synchronization, and that therefore any language for specifying the synchronization of parallel processes should be based on a theory of such orderings. While it is possible to write specifications for systems of communicating parallel processes by reference to the time ordering of some global clock external to the system, such specifications cannot be as useful as ones which are in terms of orderings derivable within the system.
</description>
<pubDate>Mon, 01 Sep 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149448</guid>
<dc:date>1975-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategy Selection in Medical Diagnosis</title>
<link>https://hdl.handle.net/1721.1/149447</link>
<description>Strategy Selection in Medical Diagnosis
Miller, Peter B.
The recorded, verbal problem-solving behavior of doctors performing the diagnostic task of taking a present illness was analyzed in this research.  The goal of the analysis was to discover what data-acquisition strategies were used by the doctors to accomplish the task. A model called the strategy frame model was created to describe the strategies that were found and to provide a mechanism for the selection of a strategy.
</description>
<pubDate>Mon, 01 Sep 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149447</guid>
<dc:date>1975-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equivalence Problems for Monadic Schemas</title>
<link>https://hdl.handle.net/1721.1/149446</link>
<description>Equivalence Problems for Monadic Schemas
Qualitz, Joseph E.
A class of monadic program schemas is defined.  This class, called iteration schemas, consists of schemas whose programs comprise assignment statements, conditional statements, and iteration statements.  These schemas are shown to correspond to program schemas which are structured, and are shown to be strictly less "powerful" than monadic program schemas.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149446</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Test, Configuration and Repair of Cellular Arrays</title>
<link>https://hdl.handle.net/1721.1/149445</link>
<description>Automatic Test, Configuration and Repair of Cellular Arrays
Manning, Frank B.
A cellular array is an iterative array of identical information processing machines, called cells.  The arrays discussed are rectangular arrays of programmable logic, in which information stored in a working cell tells the cell how to behave.  No signal line connects more than a few cells. A loading mechanism in each cell allows a computer directly connected to one cell to load any good cell that is not walled off by flawed cells.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149445</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Portable Compiler for the Language C</title>
<link>https://hdl.handle.net/1721.1/149444</link>
<description>A Portable Compiler for the Language C
Snyder, Alan
This paper describes the implementation of a compiler for the language C.  The compiler has been designed to be capable of producing assembly-language code for most register-oriented machines with only minor recoding.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149444</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Restructuring for Virtual Memory Systems</title>
<link>https://hdl.handle.net/1721.1/149443</link>
<description>Program Restructuring for Virtual Memory Systems
Johnson, Jerry William
The problem area addressed in this report is program restructuring, a method of reordering the relocatable sectors of a program in its address space to increase the locality of the program's reference behavior, thereby reducing the number of page fetches required for execution in a virtual memory system.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149443</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Formalization and Correctness Proof of the CGOL Language System</title>
<link>https://hdl.handle.net/1721.1/149442</link>
<description>A Formalization and Correctness Proof of the CGOL Language System
VanDeVanter, Michael Lee
In many important ways the design and implementation of programming languages are hindered rather than helped by BNF.  We present an alternative meta-language based on the work of Pratt which retains much of the effective power of BNF but is more convenient for designer, implementer, and user alike. Its amenability to formal treatment is demonstrated by a rigorous correctness proof of a simple implementation.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149442</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computational Complexity of Some Logical Theories</title>
<link>https://hdl.handle.net/1721.1/149441</link>
<description>The Computational Complexity of Some Logical Theories
Rackoff, Charles Weill
Upper and lower bounds on the inherent computational complexity of the decision problem for a number of logical theories are established.  A general form of Ehrenfeucht game technique for deciding theories is developed which involves analyzing the expressive power of formulas with given quantifier depth. The method allows one to decide the truth of sentences by limiting quantifiers to range over finite sets.
</description>
<pubDate>Sat, 01 Feb 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149441</guid>
<dc:date>1975-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Digitalis Therapy Advisor</title>
<link>https://hdl.handle.net/1721.1/149440</link>
<description>A Digitalis Therapy Advisor
Silverman, Howard
The physician administering digitalis makes use of the full richness of the clinical setting to form his/her impressions and decide on a therapeutic program.  The weakness of existing programs which formulate digitalis dosage regimens lies in their inability to use all of the clinical data available, both quantitative and qualitative. This report describes the construction of a computer system which formulates digitalis dosage regimens and which adjusts the regimen by interpreting the patient's response to the original dosage regimen.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149440</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Problems in German to English Machine Translation</title>
<link>https://hdl.handle.net/1721.1/149439</link>
<description>Some Problems in German to English Machine Translation
Brown, Gretchen P.
This paper discusses some problems in the machine translation of natural language, in particular, for translation from German into English.  An implementation of some parts of the translating process has been built.  The system consists of a German interpretive grammar, to take in German text and output a set of semantic representations, and a generator, to produce English sentences from single semantic representations.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149439</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Naming and Protection in Extendable Operating Systems</title>
<link>https://hdl.handle.net/1721.1/149438</link>
<description>Naming and Protection in Extendable Operating Systems
Redell, David D.
The properties of capability-based extendible operating systems are described, and various aspects of such systems are discussed, with emphasis on the conflict between free distribution of access privileges and later revocation of those privileges.
</description>
<pubDate>Fri, 01 Nov 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149438</guid>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nondeterministic Time and Space Complexity Classes</title>
<link>https://hdl.handle.net/1721.1/149437</link>
<description>Nondeterministic Time and Space Complexity Classes
Seiferas, Joel Irvin
The marginal utility of the Turing machine computational resources, running time and storage space, is studied.  A technique is developed which, unlike diagonalization, applies equally well to nondeterministic and deterministic automata.  For f, g time or space bounding functions with f(n+1) small compared to g(n), it is shown that, in terms of word length n, there are languages which are accepted by Turing machines operating within time or space g(n) but which are accepted by no Turing machine operating within time or space f(n). The proof involves use of the recursion theorem together with "padding" or "translational" techniques of formal language theory.
</description>
<pubDate>Sun, 01 Sep 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149437</guid>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional Domains of Applicative Languages</title>
<link>https://hdl.handle.net/1721.1/149436</link>
<description>Functional Domains of Applicative Languages
Ward, Stephen A.
The expressive power of a particular applicative language may be characterized by the set of abstract functions directly representable in that language. The common FUNARG and applicative order problems are scrutinized in this way, and the effects of these weaknesses are related to the inexpressibility of classes of functions.
</description>
<pubDate>Sun, 01 Sep 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149436</guid>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semantics of Data Structures and References</title>
<link>https://hdl.handle.net/1721.1/149435</link>
<description>Semantics of Data Structures and References
Ellis, David J.
Each programming language that handles data structures has its own set of rules for working with them.  Notions such as assignment and the construction of structured values appear in a huge number of different and complicated versions.  This thesis presents a methodology which provides a common basis for describing ways in which programming languages deal with data structures and references to them.
</description>
<pubDate>Thu, 01 Aug 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149435</guid>
<dc:date>1974-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Removing the Dynamic Linker from the Security Kernel of a Computing Utility</title>
<link>https://hdl.handle.net/1721.1/149434</link>
<description>Removing the Dynamic Linker from the Security Kernel of a Computing Utility
Janson, Philippe Arnaud
In order to enforce the security of the information stored in a computing utility, it is necessary to certify that the protection mechanism is correctly implemented, so that there exists no uncontrolled access path to the stored information.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149434</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematical Logic for Computer Scientists</title>
<link>https://hdl.handle.net/1721.1/149433</link>
<description>Mathematical Logic for Computer Scientists
Levin, Michael
This book is an introductory course in mathematical logic covering basic topics in quantification theory and recursive function theory, and is intended for the reader who is interested in artificial intelligence, computer linguistics, and other related areas. The text is theoretical, but organized with implementation in mind.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149433</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Interactive Graphics in Simulating the Hospital Emergency Room</title>
<link>https://hdl.handle.net/1721.1/149432</link>
<description>Using Interactive Graphics in Simulating the Hospital Emergency Room
Weissberg, Richard W.
The hospital emergency room is a complex system having many interrelated factors contributing to its operation.  The emergency room administrator has limited control over certain of these factors, for example the numbers of beds, nurses, doctors, and x-ray units. Other factors, such as patient arrival rates and the demands made upon available resources, are largely uncontrollable.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149432</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computer Utility as a Marketplace for Computer Service</title>
<link>https://hdl.handle.net/1721.1/149431</link>
<description>The Computer Utility as a Marketplace for Computer Service
Frankston, Robert M.
Computers are unique in their ability to be programmed for a wide variety of applications.  This is in contrast with hardware dedicated to specific tasks such as the telephone system.  Because of its flexibility, a computer system can support, concurrently, many diverse services that do not require dedicated hardware.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149431</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Experimental Analysis of Program Reference Patterns in the Multics Virtual Memory</title>
<link>https://hdl.handle.net/1721.1/149430</link>
<description>An Experimental Analysis of Program Reference Patterns in the Multics Virtual Memory
Greenberg, Bernard Stewart
This thesis reports the design, conducting, and results of an experiment intended to measure the paging rate of a virtual memory computer system as a function of paging memory size.  This experiment, conducted on the Multics computer system at MIT, a large interactive computer utility serving an academic community, sought to predict paging rates for paging memory sizes larger than existing memory at the time.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149430</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model-debugging System</title>
<link>https://hdl.handle.net/1721.1/149429</link>
<description>A Model-debugging System
Mark, William S.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149429</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verification of Programs Operating on Structured Data</title>
<link>https://hdl.handle.net/1721.1/149428</link>
<description>Verification of Programs Operating on Structured Data
Laventhal, Mark Steven
The major method for verifying the correctness of a computer program is the inductive assertion approach.  This approach has been limited in the past by the lack of techniques for handling data structures.  In particular, there has been a need for concepts with which to describe structured data during intermediate and final stages of a computation.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149428</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Introduction to Multics</title>
<link>https://hdl.handle.net/1721.1/149427</link>
<description>Introduction to Multics
Saltzer, Jerome H.
The Multics project was begun in 1964 by the Computer Systems Research group of M.I.T. Project MAC.  The goal was to create a prototype of a computer utility.  In 1965, the project became a cooperative venture of M.I.T. Project MAC, the General Electric Company Computer Department (now Honeywell Information Systems Inc.) and Bell Telephone Laboratories. In 1969, at the end of the research phase of the project, Bell Telephone Laboratories ended its active involvement.
</description>
<pubDate>Fri, 01 Feb 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149427</guid>
<dc:date>1974-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Lower Bounds for Selection Problems</title>
<link>https://hdl.handle.net/1721.1/149426</link>
<description>On Lower Bounds for Selection Problems
Yao, Foong Frances
Let V_i(n) be the minimum number of binary comparisons that are required to determine the i-th largest of n elements drawn from a totally ordered set.  In this thesis we use adversary strategies to prove lower bounds on V_i(n).  For i = 3, our lower bounds determine V_3(n) precisely for infinitely many values of n, and determine V_3(n) to within 2 for all n.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149426</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Asynchronous Concurrent Systems by Timed Petri Nets</title>
<link>https://hdl.handle.net/1721.1/149425</link>
<description>Analysis of Asynchronous Concurrent Systems by Timed Petri Nets
Ramchandani, Chander
This thesis is concerned with the modeling and performance analysis of systems which consist of concurrently acting components, an example of which is an asynchronous pipelined processor.  The work is divided into two parts.  In the first part, a suitable model is developed for describing the structure of asynchronous concurrent systems. In conventional automata theory, the finite-state machine model is used to describe the behavior of systems; the problem with this is that a large number of states results when practical systems are modelled.
</description>
<pubDate>Fri, 01 Feb 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149425</guid>
<dc:date>1974-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Abstract Model of a Research Institute: Simple Automatic Programming Approach</title>
<link>https://hdl.handle.net/1721.1/149424</link>
<description>An Abstract Model of a Research Institute: Simple Automatic Programming Approach
Briabrin, Victor
A problem of knowledge representation is considered in terms of designing a model for a simple sociological structure.  A version of the access language is proposed which is based on three kinds of expressions accepted by the system: constructors, specificators, and requests. In addition, some topics concerned with model implementation and extension are discussed.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149424</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Input/Output Architecture for Virtual Memory Computer Systems</title>
<link>https://hdl.handle.net/1721.1/149423</link>
<description>An Input/Output Architecture for Virtual Memory Computer Systems
Clark, David D
In many large systems today, input/output is not performed directly by the user, but is done interpretively by the system for him, which causes additional overhead and also restricts the user to whatever algorithms the system has implemented.  Many causes contribute to this involvement of the system in user input/output, including the need to enforce protection requirements, the inability to provide adequate response to control signals from devices, and the difficulty of running devices in a virtual environment, especially a virtual memory.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149423</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Backup and Recovery of On-line Information in a Computer Utility</title>
<link>https://hdl.handle.net/1721.1/149422</link>
<description>Backup and Recovery of On-line Information in a Computer Utility
Stern, Jerry A.
This thesis describes a design for an automatic backup mechanism to be incorporated in a computer utility for the protection of on-line information against accidental or malicious destruction.  This protection is achieved by preserving on magnetic tape recent copies of all items of information known to the online system. In the event of a system failure, file system damage is automatically assessed and missing information is recovered from backup storage.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149422</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Packet Communication</title>
<link>https://hdl.handle.net/1721.1/149421</link>
<description>Packet Communication
Metcalfe, Robert Melancton
This report develops a theory of packet communication; it analyzes users of computers in digital communication systems and examines structures for organizing computers in highly communicative environments.  Various examples from existing computer networks, including the ARPA Computer Network and the ALOHA System, are used to motivate and substantiate analysis of (1) store-and-forward packet communication, (2) broadcast packet communication, and (3) distributed interprocess communication.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149421</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Reducibility Among Combinatorial Problems</title>
<link>https://hdl.handle.net/1721.1/149420</link>
<description>On Reducibility Among Combinatorial Problems
Herrmann, Paul Peter
A large class of combinatorial problems has been shown by Cook and Karp to be computationally equivalent to within a polynomial.  We exhibit some new problems in this class, and provide simpler proofs for some of the known reductions.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149420</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Productivity in Parallel Computational Schemata</title>
<link>https://hdl.handle.net/1721.1/149419</link>
<description>Productivity in Parallel Computational Schemata
Linderman, John P.
A general model for parallel computation is developed in three parts.  One part, the data flow graph, describes how actors which transform and test values are connected to the locations in a finite memory.  Another part, an interpretation, supplies information about the contents of memory and the detailed nature of the transformations and tests.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149419</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity Classes of Recursive Functions</title>
<link>https://hdl.handle.net/1721.1/149418</link>
<description>Complexity Classes of Recursive Functions
Moll, Robert
An honest function is one whose size honestly reflects its computation time.  In 1969 Meyer and McCreight proved the "honesty theorem," which says that for every t, the t-computable functions are the same as the t'-computable functions for some honest t'.
</description>
<pubDate>Fri, 01 Jun 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149418</guid>
<dc:date>1973-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Storage Hierarchy Systems</title>
<link>https://hdl.handle.net/1721.1/149417</link>
<description>Storage Hierarchy Systems
Madnick, Stuart E.
The relationship between page size, program behavior, and page fetch frequency in storage hierarchy systems is formalized and analyzed.  It is proven that there exist cyclic program reference patterns that can cause page fetch frequency to increase significantly if the page size used is decreased (e.g., reduced by half).
</description>
<pubDate>Sun, 01 Apr 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149417</guid>
<dc:date>1973-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Emptiness and Complementation Problems for Automata on Infinite Trees</title>
<link>https://hdl.handle.net/1721.1/149416</link>
<description>The Emptiness and Complementation Problems for Automata on Infinite Trees
Rackoff, Charles Weill
In [6] Rabin defines Automata on Infinite Trees, and the body of that paper is concerned with proving two theorems about these automata.  The result we consider in the first chapter says that there exists an effective procedure to determine, given an automaton on infinite trees, whether or not it accepts anything at all. We present a new decision procedure which is much simpler than Rabin's since we do not use an induction argument as he does.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149416</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Sorting Networks</title>
<link>https://hdl.handle.net/1721.1/149415</link>
<description>An Analysis of Sorting Networks
Smith, Burton J.
Comparators which sort two numbers can be interconnected to form networks which sort n numbers for any n.  The input and output characteristics of comparator networks are analyzed from several different points of view.
</description>
<pubDate>Sun, 01 Oct 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149415</guid>
<dc:date>1972-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cooperation of Mutually Suspicious Subsystems in a Computer Utility</title>
<link>https://hdl.handle.net/1721.1/149414</link>
<description>Cooperation of Mutually Suspicious Subsystems in a Computer Utility
Schroeder, Michael D.
This thesis describes practical protection mechanisms that allow mutually suspicious subsystems to cooperate in a single computation and still be protected from one another.  The mechanisms are based on the division of a computation into independent domains of access privilege, each of which may encapsulate a protected subsystem. The central component of the mechanisms is a hardware processor that automatically enforces the access constraints associated with a multidomain computation implemented as a single execution point in a segmented virtual memory.
</description>
<pubDate>Fri, 01 Sep 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149414</guid>
<dc:date>1972-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finite Tree Automata and W-Automata</title>
<link>https://hdl.handle.net/1721.1/149413</link>
<description>Finite Tree Automata and W-Automata
Hossley, Robert
Chapter I is a survey of finite automata as acceptors of finite labeled trees.  Chapter II is a survey of finite automata as acceptors of infinite strings on a finite alphabet.  Among the automata models considered in Chapter II are those used by McNaughton, Büchi, and Landweber. In Chapter II we also consider several new automata models based on a notion of a run of a finite automaton on an infinite string suggested by Professor A. R. Meyer in private communication. We show that these new models are all equivalent to various previously formulated models.
</description>
<pubDate>Fri, 01 Sep 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149413</guid>
<dc:date>1972-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Design and Specification of a Common Base Language</title>
<link>https://hdl.handle.net/1721.1/149412</link>
<description>On the Design and Specification of a Common Base Language
Dennis, Jack B.
This is the report on the work of the Computational Structures Group of Project MAC toward the design and specification of a common base language for programs and information structures.  We envision that the meanings of programs expressed in practical source languages will be defined by rules of translation into the base language.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149412</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Further Results on Hierarchies of Canonic Systems</title>
<link>https://hdl.handle.net/1721.1/149411</link>
<description>Further Results on Hierarchies of Canonic Systems
Mandl, Robert
This thesis outlines a new way of presenting the theory of canonic systems, including a distinction (for methodic reasons) between simple canonic systems and general canonic systems, and proves a series of results on hierarchies of canonic systems. After a brief summary of Doyle's results on a partial hierarchy of canonic systems, a new hierarchy is developed (Chapter II) which relates the general canonic systems not only to all four types of formal grammars defined by Chomsky but also to other classes. It is shown (Chapter III) that all attempts to define a mathematical system which exactly corresponds to the recursive sets are necessarily fruitless.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149411</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relativization of the Theory of Computational Complexity</title>
<link>https://hdl.handle.net/1721.1/149410</link>
<description>Relativization of the Theory of Computational Complexity
Lynch, Nancy A.
Blum's machine-independent treatment of the complexity of partial recursive functions is extended to relative algorithms (as represented by Turing machines with oracles).  We prove relativizations of several results of Blum complexity theory, such as the compression theorem. A recursive relatedness theorem is proved, showing that any two relative complexity measures are related by a fixed recursive function. This theorem allows us to obtain proofs of results for all measures from proofs for a particular measure.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149410</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of Finite Functions</title>
<link>https://hdl.handle.net/1721.1/149409</link>
<description>The Complexity of Finite Functions
Vilfan, Bostjan
Lower bounds on the length of formulas for finite functions are obtained from a generalization of a theorem of Specker.  Let f: {0,1,...,d-1}^n → {0,1,...,d-1} be a function which can be represented by a formula of length &lt; c·n. For any m, if n is sufficiently large, there is a restriction f': {0,1,...,d-1}^m → {0,1,...,d-1} of f which is representable by a special class of formulas called homogeneous e-complexes.
</description>
<pubDate>Wed, 01 Mar 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149409</guid>
<dc:date>1972-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous, Synchronous Counters Constructed Only of J-K Flip-flops</title>
<link>https://hdl.handle.net/1721.1/149408</link>
<description>Autonomous, Synchronous Counters Constructed Only of J-K Flip-flops
Manning, Frank
This report describes research into some properties of autonomous, synchronous counters constructed with only the simplest form of J-K flip-flop.  The research revolved around a system with a special-purpose digital machine and a general-purpose computer. The special-purpose machine searched through all possible counters constructed of five or fewer J-K flip-flops for all counters with a period equal to that specified by the input to the system.
</description>
<pubDate>Mon, 01 May 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149408</guid>
<dc:date>1972-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Essays in Algebraic Simplification</title>
<link>https://hdl.handle.net/1721.1/149407</link>
<description>Essays in Algebraic Simplification
Fateman, Richard J.
This thesis consists of essays on several aspects of the problem of algebraic simplification by computer.  Since simplification is at the core of most algebraic manipulations, efficient and effective simplification procedures are essential to building useful computer systems for non-numerical mathematics. Efficiency is attained through carefully designed and engineered algorithms, heuristics, and data types, while effectiveness is assured through theoretical considerations.
</description>
<pubDate>Sat, 01 Apr 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149407</guid>
<dc:date>1972-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Production Schemata by Petri Nets</title>
<link>https://hdl.handle.net/1721.1/149406</link>
<description>Analysis of Production Schemata by Petri Nets
Hack, Michel Henri Théodore
Petri nets provide a powerful graphical tool for representing and analyzing complex concurrent systems.  Properties such as hang-up freeness, determinacy, conflict, concurrency and dependency, can be represented and studied.  The precise relationship between structural and behavioral properties, and between local and global properties is not well-understood for the most general class of Petri Nets.
</description>
<pubDate>Tue, 01 Feb 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149406</guid>
<dc:date>1972-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Induction in Proofs about Programs</title>
<link>https://hdl.handle.net/1721.1/149405</link>
<description>Induction in Proofs about Programs
Greif, Irene Gloria
Four methods for proving equivalence of programs by induction are described and compared.  They are recursion induction, structural induction, mu-rule induction, and truncation induction.  McCarthy's formalism for conditional expressions as function definitions is used and reinterpreted in view of Park's results in lattice theory as related to proofs about programs. The possible application of this work to automatic program verification is commented upon.
</description>
<pubDate>Tue, 01 Feb 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149405</guid>
<dc:date>1972-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of Definite Integrals by Symbolic Manipulation</title>
<link>https://hdl.handle.net/1721.1/149404</link>
<description>Evaluation of Definite Integrals by Symbolic Manipulation
Wang, Paul S.
A heuristic computer program for the evaluation of real definite integrals of elementary functions is described.  This program, called WANDERER (WANg's DEfinite integRal EvaluatoR), evaluates many proper and improper integrals.  The improper integrals may have a finite or infinite range of integration. Evaluation by contour integration and residue theory is among the methods used. A program called DELIMITER (DEfinitive LIMIT EvaluatoR) is used for the limit computations needed in evaluating some definite integrals.
</description>
<pubDate>Wed, 01 Sep 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149404</guid>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cost Analysis of Debugging Systems</title>
<link>https://hdl.handle.net/1721.1/149403</link>
<description>Cost Analysis of Debugging Systems
Lester, Bruce P.
A general method is presented for performing cost analysis of interactive debugging systems.  The method is based on an abstract model of program execution.  This model is derived from the interpreter used in the Vienna method of semantic definition of PL/I. A brief discussion of the overall operation and significance of the Vienna interpreter is included.
</description>
<pubDate>Wed, 01 Sep 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149403</guid>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Primary Access Control in Large-scale Time-shared Decision Systems</title>
<link>https://hdl.handle.net/1721.1/149402</link>
<description>Primary Access Control in Large-scale Time-shared Decision Systems
Owens, Richard C., Jr.
The computer differs from other tools in that it presently does not provide its users with a working environment transparent to their desires; in particular, current computer systems do not support adequate mechanisms for controlled sharing of sensitive information. Four primary dimensions of the access control problem are identified.  They are: 1) the physical level at which to apply control; 2) the fineness of distinction applied to the term "access"; 3) the meaning of the term "user identification"; and 4) the degree of sophistication employed in automatically assigning restrictions to new data files.
</description>
<pubDate>Thu, 01 Jul 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149402</guid>
<dc:date>1971-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounds on Information Retrieval Efficiency in Static File Structures</title>
<link>https://hdl.handle.net/1721.1/149401</link>
<description>Bounds on Information Retrieval Efficiency in Static File Structures
Welch, Terry A.
This research addresses the problem of file organization for efficient information retrieval when each file item may be accessed through any one of a large number of identification keys.  The emphasis is on library problems, namely large, low-update, directory-oriented files, but other types of files are discussed.
</description>
<pubDate>Tue, 01 Jun 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149401</guid>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Reconfiguration in a Modular Computer System</title>
<link>https://hdl.handle.net/1721.1/149400</link>
<description>Dynamic Reconfiguration in a Modular Computer System
Schell, Roger R.
This thesis presents an orderly design approach for dynamically changing the configuration of constituent physical units in a modular computer system.  Dynamic reconfiguration contributes to high system availability by allowing preventative maintenance, development of new operating systems, and changes in system capacity on a non-interference basis.
</description>
<pubDate>Tue, 01 Jun 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149400</guid>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Creation of a Code Generator from a Machine Description</title>
<link>https://hdl.handle.net/1721.1/149399</link>
<description>Automatic Creation of a Code Generator from a Machine Description
Miller, Perry L.
This paper studies some of the problems involved in attaining machine independence for a code generator, similar to the language independence and the token independence attained by automatic parsing and automatic lexical systems.  In particular, the paper examines the logic involved in two areas of code generation: computation and data reference.
</description>
<pubDate>Sat, 01 May 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149399</guid>
<dc:date>1971-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Analysis of Visual Properties of Curved Objects</title>
<link>https://hdl.handle.net/1721.1/149398</link>
<description>Computer Analysis of Visual Properties of Curved Objects
Krakauer, Lawrence J.
A  method is presented for the visual analysis of objects by computer.  It is particularly well suited for opaque objects with smoothly curved surfaces.  The method extracts information about the object's surface properties, including measures of its specularity, texture, and regularity. It also aids in determining the object's shape.
</description>
<pubDate>Sat, 01 May 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149398</guid>
<dc:date>1971-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information Processing and Transmission in Cellular Automata</title>
<link>https://hdl.handle.net/1721.1/149397</link>
<description>Information Processing and Transmission in Cellular Automata
Banks, Edwin R.
A cellular automaton is an iterative array of very simple identical information processing machines called cells.  Each cell can communicate with neighboring cells.  At discrete moments of time the cells can change from one state to another as a function of the states of the cell and its neighbors. Thus on a global basis, the collection of cells is characterized by some type of behavior.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149397</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object From One View</title>
<link>https://hdl.handle.net/1721.1/149396</link>
<description>Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object From One View
Horn, Berthold K. P.
A method will be described for finding the shape of a smooth opaque object from a monocular image, given a knowledge of the surface photometry, the position of the light-source and certain auxiliary information to resolve ambiguities.  This method is complementary to the use of stereoscopy, which relies on matching up sharp detail and will fail on smooth objects. Until now the image processing of single views has been restricted to objects which can meaningfully be considered two-dimensional or bounded by plane surfaces.
</description>
<pubDate>Sun, 01 Nov 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149396</guid>
<dc:date>1970-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Strategies for File Systems</title>
<link>https://hdl.handle.net/1721.1/149395</link>
<description>Design Strategies for File Systems
Madnick, S.E.
This thesis describes a methodology for the analysis and synthesis of modern general purpose file systems.  The two basic concepts developed are (1) establishment of a uniform representation of a file's structure in the form of virtual memory or segmentation and (2) determination of a hierarchy of logical transformations within a file system.
</description>
<pubDate>Thu, 01 Oct 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149395</guid>
<dc:date>1970-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity Measures for Language Recognition by Canonic Systems</title>
<link>https://hdl.handle.net/1721.1/149394</link>
<description>Complexity Measures for Language Recognition by Canonic Systems
Haggerty, Joseph P.
A canonic system C is a specification of a recursively enumerable set, such as a set of strings over a finite alphabet.  From this description C, it is possible to generate a function, called a proof measure function, which is an indication of the complexity of the language defined. For certain simple but important classes of canonic systems, algebraic bounds on these functions can be derived from the structure of the system.
</description>
<pubDate>Thu, 01 Oct 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149394</guid>
<dc:date>1970-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deadlock-free Sharing of Resources in Asynchronous Systems</title>
<link>https://hdl.handle.net/1721.1/149393</link>
<description>Deadlock-free Sharing of Resources in Asynchronous Systems
Hebalkar, Prakash G.
Whenever resources are shared among several activities that hoard resources, the activities can attain a state of deadlock in which, for lack of resources, none of the activities can proceed.  Deadlocks can be prevented by coordination of the sharing. Efficient running of the activities under such coordination requires knowledge of the patterns of use of resources by the activities.
</description>
<pubDate>Tue, 01 Sep 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149393</guid>
<dc:date>1970-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integral Convex Polyhedra and an Approach to Integralization</title>
<link>https://hdl.handle.net/1721.1/149392</link>
<description>Integral Convex Polyhedra and an Approach to Integralization
Edelberg, Murray
Many combinatorial optimization problems may be formulated as integer linear programming problems - that is, problems of the form: given a convex polyhedron P contained in the non-negative orthant of n-dimensional space, find an integer point in P which maximizes (or minimizes) a given linear objective function. Well known linear programming methods would suffice to solve such a problem if:  (i) P is an integral convex polyhedron, or  (ii) P is transformed into the integral convex polyhedron that is the convex hull of the set of integer points in P, a process which is called integralization.
</description>
<pubDate>Sat, 01 Aug 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149392</guid>
<dc:date>1970-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Recognition of Prismatic Solids</title>
<link>https://hdl.handle.net/1721.1/149391</link>
<description>Computer Recognition of Prismatic Solids
Griffith, Arnold Koons
An investigation is made into the problem of constructing a model of the appearance to an optical input device of scenes consisting of plane-faced geometric solids.  The goal is to study algorithms which find the real straight edges in the scenes, taking into account smooth variations in intensity over faces of the solids, blurring of edges and noise.
</description>
<pubDate>Sat, 01 Aug 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149391</guid>
<dc:date>1970-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordination of Asynchronous Events</title>
<link>https://hdl.handle.net/1721.1/149390</link>
<description>Coordination of Asynchronous Events
Patil, Suhas Shrikrishna
The way activity in a system proceeds is that events occur as a result of some conditions and lead to some new conditions which make other events possible.  Often it is necessary to coordinate such events to ensure proper behavior. Coordination nets for representing such coordinations and physically realizable structures for enforcing them are presented. These structures are modular and can be mechanically derived from the coordination nets. The coordination involved in concurrent management of resources is also discussed.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149390</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computer-controlled Graphical Display Processor</title>
<link>https://hdl.handle.net/1721.1/149389</link>
<description>A Computer-controlled Graphical Display Processor
Fiasconaro, James Gerard
A cathode-ray tube (CRT) is frequently employed to display text and drawings generated by a digital computer.  Unfortunately, all of the commercially available CRT display systems are either very expensive or have limited dynamic capability resulting from the use of some form of storage-type CRT. A need exists to develop a low-cost, relatively sophisticated display for computer-generated pictures.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149389</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalized Organization of Large Data Bases: A Set-theoretic Approach to Relations</title>
<link>https://hdl.handle.net/1721.1/149388</link>
<description>Generalized Organization of Large Data Bases: A Set-theoretic Approach to Relations
Fillat, Andrew Irwin; Kraning, Leslie Alan
Problems inherent in representation and manipulation of large data bases are discussed.  Data management is considered as the manipulation of relationships among elements of a data base.  A detailed analogy introduces concepts embodied in a data management system. Set theory is used to describe a model for data-bases, and operations suitable for manipulation of relations are defined.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149388</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Economies of Scale in Computer Use: Initial Tests and Implications for the Computer Utility</title>
<link>https://hdl.handle.net/1721.1/149387</link>
<description>Economies of Scale in Computer Use: Initial Tests and Implications for the Computer Utility
Selwyn, Lee L.
This study is concerned with the existence of economies of scale in the production of data processing and other computing services, and the possible regulatory and public policy implications of such economies.  The rapid development of the technology of computation since the Second World War has raised many questions as to the supervision by public authorities of the use and progress of this technology.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149387</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlled Information Sharing in a Computer Utility</title>
<link>https://hdl.handle.net/1721.1/149386</link>
<description>Controlled Information Sharing in a Computer Utility
Vanderbilt, Dean H.
A computer utility is envisioned as a large, multi-access computer system providing its users with the ability to store information and share its use with other system users.  This thesis considers the nature of information sharing and how a computer utility can provide facilities allowing such sharing to take place in a controlled manner.
</description>
<pubDate>Wed, 01 Oct 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149386</guid>
<dc:date>1969-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition of Topological Invariants by Iterative Arrays</title>
<link>https://hdl.handle.net/1721.1/149385</link>
<description>Recognition of Topological Invariants by Iterative Arrays
Beyer, Wendel Terry
A study is made of the recognition and transformation of figures by iterative arrays of finite state automata. A figure is a finite rectangular two-dimensional array of symbols. The iterative arrays considered are also finite, rectangular and two-dimensional.
</description>
<pubDate>Wed, 01 Oct 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149385</guid>
<dc:date>1969-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Translators for LR(k) Languages</title>
<link>https://hdl.handle.net/1721.1/149384</link>
<description>Practical Translators for LR(k) Languages
Deremer, Franklin Lewis
A context-free syntactical translator (CFST) is a machine which defines a translation from one context-free language to another.  A transduction grammar is a formal system based on a context-free grammar and it specifies a context-free syntactical translation. A simple suffix transduction grammar based on a context-free grammar which is LR(k) specifies a translation which can be defined by a deterministic push-down automaton (DPDA).
</description>
<pubDate>Wed, 01 Oct 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149384</guid>
<dc:date>1969-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Graph Model for Parallel Computations</title>
<link>https://hdl.handle.net/1721.1/149383</link>
<description>A Graph Model for Parallel Computations
Rodriguez, Jorge E.
This report presents a computational model called program graphs which makes possible a precise description of parallel computations of arbitrary complexity on non-structured data.  In the model, the computation steps are represented by the nodes of a directed graph whose links represent elements of storage and transmission of data and/or control information.
</description>
<pubDate>Mon, 01 Sep 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149383</guid>
<dc:date>1969-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Case Study in Interactive Graphics Programming: A Circuit Drawing and Editing Program for Use with A Storage-tube Display Terminal</title>
<link>https://hdl.handle.net/1721.1/149382</link>
<description>Case Study in Interactive Graphics Programming: A Circuit Drawing and Editing Program for Use with A Storage-tube Display Terminal
Brackett, J.; Hammer, M.M.; Thornhill, D.
The concepts involved in building and manipulating a data structure through graphical interaction are presented, using the drawing and editing of electrical circuits as a vehicle. The circuit drawing program was designed to operate on an ARDS storage-tube display terminal attached to the M.I.T. Project MAC IBM 7094 Compatible Time-Sharing System.
</description>
<pubDate>Wed, 01 Oct 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149382</guid>
<dc:date>1969-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>EPS: An Interactive System for Solving Elliptic Boundary-Value Problems with Facilities for Data Manipulation</title>
<link>https://hdl.handle.net/1721.1/149381</link>
<description>EPS: An Interactive System for Solving Elliptic Boundary-Value Problems with Facilities for Data Manipulation
Tillman, Coyt C., Jr.
This appendix for the author's forthcoming thesis, "On-Line Solution of Elliptic Boundary-Value Problems," is a user's guide for EPS. EPS solves two-dimensional boundary-value problems for elliptic systems of second-order partial differential equations. It also has general-purpose capabilities which permit the on-line definition and execution  of arbitrary numerical procedures.
</description>
<pubDate>Sun, 01 Jun 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149381</guid>
<dc:date>1969-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Computer-mediated Animation</title>
<link>https://hdl.handle.net/1721.1/149380</link>
<description>Interactive Computer-mediated Animation
Baecker, Ronald M.
The use of interactive computer graphics in the construction of animated visual displays is investigated. The dissertation presents a process called interactive computer-mediated animation, in which dynamic displays are constructed by utilizing direct console commands, algorithms, free-hand sketches, and real-time actions. The resulting "movie" can then be immediately viewed and altered.
</description>
<pubDate>Sun, 01 Jun 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149380</guid>
<dc:date>1969-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Formal System for Defining the Syntax and Semantics of Computer Languages</title>
<link>https://hdl.handle.net/1721.1/149379</link>
<description>A Formal System for Defining the Syntax and Semantics of Computer Languages
Ledgard, Henry Francis
The thesis of this dissertation is that formal definitions of the syntax and semantics of computer languages are needed.  This dissertation investigates the formalism of canonical systems as a candidate for formally defining computer languages: (1) for defining the syntax of a computer language, and (2) for defining its translation into a target language, and thereby its semantics.
</description>
<pubDate>Tue, 01 Apr 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149379</guid>
<dc:date>1969-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Recognition of Three-Dimensional Objects in a Visual Scene</title>
<link>https://hdl.handle.net/1721.1/149378</link>
<description>Computer Recognition of Three-Dimensional Objects in a Visual Scene
Guzman-Arenas, Adolfo
Methods are presented (1) to partition or decompose a visual scene into the bodies forming it; (2) to position these bodies in three-dimensional space, by combining two scenes that make a stereoscopic pair; (3) to find the regions or zones of a visual scene that belong to its background; and (4) to carry out the isolation of objects in (1) when the input has inaccuracies.
</description>
<pubDate>Sun, 01 Dec 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149378</guid>
<dc:date>1968-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simulator of Multiple Interactive Users to Drive a Time-shared Computer System</title>
<link>https://hdl.handle.net/1721.1/149377</link>
<description>A Simulator of Multiple Interactive Users to Drive a Time-shared Computer System
Greenbaum, Howard Jacques
In the construction and maintenance of a time-shared computer system the need arises for a tool which can provide a controlled, repeatable environment for the purpose of making performance measurements.  This thesis describes the use of a small second computer to simulate the actions of multiple interactive users over individual communication lines. Each simulated user exhibits responses similar to those of a "normal" interactive user.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149377</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lambda Calculus Models of Programming Languages</title>
<link>https://hdl.handle.net/1721.1/149376</link>
<description>Lambda Calculus Models of Programming Languages
Morris, James H.
Two aspects of programming languages, recursive definitions and type declarations, are analyzed in detail.  Church's λ-calculus is used as a model of a programming language for purposes of the analysis.  The main result on recursion is an analogue to Kleene's first recursion theorem: if A = FA for λ-expressions A and F, then A is an extension of YF in the sense that if E[YF], any expression containing YF, has a normal form, then E[YF] = E[A]. Y is Curry's paradoxical combinator. The result is shown to be invariant for many different versions of Y.
</description>
<pubDate>Sun, 01 Dec 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149376</guid>
<dc:date>1968-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Integrated Hardware-software System for Computer Graphics in Time-sharing</title>
<link>https://hdl.handle.net/1721.1/149375</link>
<description>An Integrated Hardware-software System for Computer Graphics in Time-sharing
Thornhill, D.E.; Stotz, R.H.; Ross, D.T.; Ward, J.E.
This report describes the ESL Display Console and its associated user-oriented software systems developed by the M.I.T. Computer-Aided Design Project with Project MAC.  Console facilities include hardware projection of three-dimensional line drawings, automatic light pen tracking, and a flexible set of knob, switch, and push-button inputs. The console is attached to the Project MAC IBM 7094 Compatible Time-Sharing System either directly or through a PDP-7 Computer.
</description>
<pubDate>Sun, 01 Dec 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149375</guid>
<dc:date>1968-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Multi-process Primitives in a Multiplexed Computer System</title>
<link>https://hdl.handle.net/1721.1/149374</link>
<description>Implementing Multi-process Primitives in a Multiplexed Computer System
Rappaport, Robert Lee
In any computer system primitive functions are needed to control the actions of processes in the system.  This thesis discusses a set of six such process control primitives which are sufficient to solve many of the problems involved in parallel processing as well as in the efficient multiplexing of  system resources among the many processes in a system.
</description>
<pubDate>Fri, 01 Nov 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149374</guid>
<dc:date>1968-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Graph Display as an Aid in the Monitoring of a Time-shared Computer System</title>
<link>https://hdl.handle.net/1721.1/149373</link>
<description>The Graph Display as an Aid in the Monitoring of a Time-shared Computer System
Grochow, Jerrold Marvin
The problem of dynamic observation of the state of a time-shared computer system is investigated.  The Graphical Display Monitoring System was developed as a medium for this experimental work.  It is an integrated system for creating graphic displays, dynamically retrieving data from Multics Time-Sharing System supervisor data bases, and viewing this data on-line via the graphic displays.
</description>
<pubDate>Tue, 01 Oct 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149373</guid>
<dc:date>1968-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Absentee Computations in a Multiple-access Computer System</title>
<link>https://hdl.handle.net/1721.1/149372</link>
<description>Absentee Computations in a Multiple-access Computer System
Deitel, H.M.
In multiple-access computer systems, emphasis is placed upon serving several interactive users simultaneously. However, many computations do not require user interaction, and users may therefore want to run these computations 'absentee' (that is, with the user not present). A mechanism is presented which provides for the handling of absentee computations in a multiple-access computer system.
</description>
<pubDate>Thu, 01 Aug 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149372</guid>
<dc:date>1968-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>CARPS, A Program Which Solves Calculus Word Problems</title>
<link>https://hdl.handle.net/1721.1/149371</link>
<description>CARPS, A Program Which Solves Calculus Word Problems
Charniak, Eugene
A program was written to solve calculus word problems.  The program, CARPS (Calculus Rate Problem Solver), is restricted to rate problems.  The overall plan of the program is similar to Bobrow's STUDENT, the primary difference being the introduction of "structures" as the internal model in CARPS. Structures are stored internally as trees. Each structure is designed to hold the information gathered about one object.
</description>
<pubDate>Mon, 01 Jul 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149371</guid>
<dc:date>1968-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resource Allocation in Multiprocess Computer Systems</title>
<link>https://hdl.handle.net/1721.1/149370</link>
<description>Resource Allocation in Multiprocess Computer Systems
Denning, Peter James
The dynamic allocation of limited processor and main memory resources among members of a user community is investigated as a supply-and-demand problem.  The work is divided into four phases.  The first phase is the construction of the working set model for program behavior. This model is based on locality, the concept that, during any interval of execution, a program favors a subset of its information; a computation's working set is a dynamic measure of this set of favored information. A working set storage management policy is one that allocates processors to a computation if and only if there is enough uncommitted space in main memory to contain its working set.
</description>
<pubDate>Wed, 01 May 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149370</guid>
<dc:date>1968-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incremental Simulation on a Time-shared Computer</title>
<link>https://hdl.handle.net/1721.1/149369</link>
<description>Incremental Simulation on a Time-shared Computer
Jones, Malcolm Murray
This thesis describes a system which allows simulation models to be built and tested incrementally.  It is called OPS-4 and is specifically designed to operate in the environment of the Multics system.  It represents a major expansion and improvement of the OPS-3 system implemented in CTSS and also includes many features adapted from other current simulation systems.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149369</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Integration</title>
<link>https://hdl.handle.net/1721.1/149368</link>
<description>Symbolic Integration
Moses, Joel
SIN and SOLDIER are heuristic programs written in LISP which solve symbolic integration problems.  SIN (Symbolic INtegrator) solves indefinite integration problems at a level of difficulty approaching that of the larger integral tables.  SIN contains several more methods than were used in the previous symbolic integration program SAINT, and solves most of the problems attempted by SAINT in less than one second. SOLDIER (SOLution of Ordinary Differential Equations Routine) solves first-order, first-degree ordinary differential equations at the level of a good college sophomore, and at an average of about five seconds per problem attempted.
</description>
<pubDate>Fri, 01 Dec 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149368</guid>
<dc:date>1967-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Canonic Translator</title>
<link>https://hdl.handle.net/1721.1/149367</link>
<description>A Canonic Translator
Alsop, Joseph Wright
An algorithm to recognize and translate sets of character strings specified by a canonic system is presented.  The ability of canonic systems to define the context-sensitive features of strings and to specify their translation allows the algorithm to recognize and translate real computer languages. It is also applicable to other language systems.
</description>
<pubDate>Wed, 01 Nov 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149367</guid>
<dc:date>1967-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Simulation of Dynamic Systems with Lumped Parameters and Time Delays</title>
<link>https://hdl.handle.net/1721.1/149366</link>
<description>On the Simulation of Dynamic Systems with Lumped Parameters and Time Delays
Leal-Cantu, Nestor
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149366</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A System for Computer-aided Diagnosis</title>
<link>https://hdl.handle.net/1721.1/149365</link>
<description>A System for Computer-aided Diagnosis
Gorry, Gregory Anthony
This thesis describes a model diagnostic problem and a computer program designed to deal with this problem.  The model diagnostic problem is an abstract problem.  A major contention of this thesis, however, is that this problem subsumes the principal features of a number of ostensibly different real diagnostic problems, including certain problems of medical diagnosis and the diagnosis of machine failures. A second major contention of this thesis is that strategies for the solution of the model diagnostic problem can be formulated in terms sufficiently explicit to permit their incorporation in a computer program.
</description>
<pubDate>Fri, 01 Sep 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149365</guid>
<dc:date>1967-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Analysis by Digital Computer</title>
<link>https://hdl.handle.net/1721.1/149364</link>
<description>Program Analysis by Digital Computer
Wilde, Daniel Underwood
A comparison of the properties of non-modifying and self-modifying programs leads to the definition of independent and dependent instructions.  Because non-modifying programs contain only independent instructions, such programs can be analyzed by a straightforward, two-step analysis procedure. First, the program control flow is detected; second, that control flow is used to determine the program data flow, or data processing. However, self-modifying programs can also contain dependent instructions, and the program control flows and data flows exhibit cyclic interaction.
</description>
<pubDate>Tue, 01 Aug 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149364</guid>
<dc:date>1967-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Implementation of a Table-Driven Compiler System</title>
<link>https://hdl.handle.net/1721.1/149363</link>
<description>Design and Implementation of a Table-Driven Compiler System
Liu, Chung L.; Chang, Gabriel D.; Marks, Richard E.
Our goal is to provide users of the table-driven compiler system with an environment within which they can freely design and produce their compilers.  The primary design criterion is generality so that the users can define a large class of input languages oriented toward any kind of problem-solving purposes, and can also define a large class of object programs to be executed on different computer systems. Therefore, in our system we do not limit the users to specific ways of doing syntactic analysis, or doing storage allocation, or producing binary programs of a specific format for a particular computer system. What we provide are mechanisms that are general enough for whichever way a user desires to build his compiler.
</description>
<pubDate>Sat, 01 Jul 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149363</guid>
<dc:date>1967-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surfaces for Computer-aided Design of Space Forms</title>
<link>https://hdl.handle.net/1721.1/149362</link>
<description>Surfaces for Computer-aided Design of Space Forms
Coons, Steven A.
The design of airplanes, ships, automobiles, and so-called "sculptured parts" involves the design, delineation, and mathematical description of bounding surfaces.  A method is described which makes possible the description of free-form doubly curved surfaces of a very general kind. An extension of these ideas to hyper-surfaces in higher dimensional spaces is also indicated.
</description>
<pubDate>Thu, 01 Jun 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149362</guid>
<dc:date>1967-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On-line Analysis for Social Scientists</title>
<link>https://hdl.handle.net/1721.1/149361</link>
<description>On-line Analysis for Social Scientists
Miller, James R.
A library of computer routines has been compiled to facilitate the analysis of social science research data.  Many of these routines are designed to test statistical hypotheses.  All routines are operated on-line and permit conversational interaction between the user and a time-shared computer. Input data are typed directly into the computer through a teletype console. Explicit typing directions and error diagnostics, where appropriate, are printed out by each routine to guide the input process. Analyses are executed immediately, and computed results are printed out in typical publication language.
</description>
<pubDate>Mon, 01 May 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149361</guid>
<dc:date>1967-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Syntax-based Analytic Reading of Musical Scores</title>
<link>https://hdl.handle.net/1721.1/149360</link>
<description>Syntax-based Analytic Reading of Musical Scores
Forte, Allen
As part of a larger research project in musical structure, a program has been written which "reads" scores encoded in an input language isomorphic to music notation.  The program is believed to be the first of its kind.  From a small number of parsing rules the program derives complex configurations, each of which is associated with a set of reference points in a numerical representation of a time-continuum.  The logical structure of the program is such that all and only the defined classes of events are represented in the output.
</description>
<pubDate>Sat, 01 Apr 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149360</guid>
<dc:date>1967-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Low-cost Output Terminal for Time-shared Computers</title>
<link>https://hdl.handle.net/1721.1/149359</link>
<description>A Low-cost Output Terminal for Time-shared Computers
Rosenburg, Ronald C.; Kennedy, Daniel W.; Humphrey, Roger A.
This report describes a low-cost remote terminal to provide switch-form output from a time-shared digital computer.  The terminal consists of a modified model 35 KSR teletype and a local memory unit.  The unit is independent of any particular computer, and is easy to test and maintain. The states of the memory control and memory words are observable directly by indicator lights.
</description>
<pubDate>Wed, 01 Mar 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149359</guid>
<dc:date>1967-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Aspects of Pattern Recognition by Computer</title>
<link>https://hdl.handle.net/1721.1/149358</link>
<description>Some Aspects of Pattern Recognition by Computer
Guzman-Arenas, Adolfo
A computer may gather a lot of information from its environment in an optical or graphical manner.  A scene, as seen for instance from a TV camera or a picture, can be transformed into a symbolic description of points and lines or surfaces.  This thesis describes several programs, written in the language CONVERT, for the analysis of such descriptions in order to recognize, differentiate and identify desired objects or classes of objects in the scene. Examples are given in each case.
</description>
<pubDate>Wed, 01 Feb 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149358</guid>
<dc:date>1967-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An On-line System for Algebraic Manipulation</title>
<link>https://hdl.handle.net/1721.1/149357</link>
<description>An On-line System for Algebraic Manipulation
Fenichel, Robert R.
This thesis describes an approach to the problem of programming a computer for algebraic manipulation.  The motivating threads of the work are first picked up in Chapter I.  To test the descriptive intuitions urged normatively in Chapter I, an experimental system was actually implemented. This system is described in Chapter II and in the Appendices.
</description>
<pubDate>Thu, 01 Dec 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149357</guid>
<dc:date>1966-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Design for Asynchronously Reproducible Multiprocessing</title>
<link>https://hdl.handle.net/1721.1/149356</link>
<description>Computer Design for Asynchronously Reproducible Multiprocessing
Van Horn, Earl C.
A concept is presented for designing either a computing system, or a programming language system, so that the following problem is avoided: during a multiprocess computation in which several processes communicate, and in which the relative timing of the processes is arbitrary, the output produced by the computation might not be a function of only the initial computation state, i.e., of only the inputs and initial program of the computation.
</description>
<pubDate>Tue, 01 Nov 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149356</guid>
<dc:date>1966-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>ADEPT: A Heuristic Program for Proving Theorems of Group Theory</title>
<link>https://hdl.handle.net/1721.1/149355</link>
<description>ADEPT: A Heuristic Program for Proving Theorems of Group Theory
Norton, Lewis Mark
A computer program, named ADEPT (A Distinctly  Empirical Prover of Theorems), has been written which proves theorems taken from the abstract theory of groups.  Its organization is basically heuristic, incorporating many of the techniques of the human mathematician in a "natural" way. This program has proved almost 100 theorems, as well as serving as a vehicle for testing and evaluating special-purpose heuristics.
</description>
<pubDate>Sat, 01 Oct 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149355</guid>
<dc:date>1966-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pilot: A Step Towards Man-Computer Symbiosis</title>
<link>https://hdl.handle.net/1721.1/149354</link>
<description>Pilot: A Step Towards Man-Computer Symbiosis
Teitelman, Warren
PILOT is a programming system constructed in LISP.  It is designed to facilitate the development of programs by easing the familiar sequence: write some code, run the program, make some changes, write some more code, run the program again, etc. As a program becomes more complex, making these changes becomes harder and harder because the implications of changes are harder to anticipate.
</description>
<pubDate>Thu, 01 Sep 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149354</guid>
<dc:date>1966-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Models and Data Structures for Digital Logic Simulation</title>
<link>https://hdl.handle.net/1721.1/149353</link>
<description>Models and Data Structures for Digital Logic Simulation
Smith, Donald Leigh
A digital logic simulation system is proposed for design verification.  Logic to be simulated is specified with a high-level register transfer design language, and the simulation system operates on-line on a large time-shared computer.  The problem of selecting adequate circuit and signal models for this purpose is considered. Models are proposed with sufficient timing detail to allow the simulation system to detect timing errors which currently are found by manual checking or prototype testing.
</description>
<pubDate>Mon, 01 Aug 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149353</guid>
<dc:date>1966-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Traffic Control in a Multiplexed Computer System</title>
<link>https://hdl.handle.net/1721.1/149352</link>
<description>Traffic Control in a Multiplexed Computer System
Saltzer, Jerome H.
This thesis describes a scheme for processor multiplexing in a multiple user, multiple processor computer system.  The scheme is based upon a distributed supervisor which may be different for different users.  The processor multiplexing method provides smooth inter-process communication, treatment of input/output  control as a special case of inter-process communication, and provision for a user to specify parallel processing or simultaneous input/output without interrupt logic.
</description>
<pubDate>Fri, 01 Jul 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149352</guid>
<dc:date>1966-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Search Procedures Based on Measures of Relatedness Between Documents</title>
<link>https://hdl.handle.net/1721.1/149351</link>
<description>Search Procedures Based on Measures of Relatedness Between Documents
Ivie, Evan Leon
In this thesis a new type of information retrieval system is suggested which utilizes data of the type generated by the users of the system instead of data generated by indexers.  The theoretical model on which the system is based consists of three basic elements. The first element is a measure of the relatedness between document pairs. It is derived from information theory.
</description>
<pubDate>Wed, 01 Jun 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149351</guid>
<dc:date>1966-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Input/Output in Time-shared, Segmented, Multiprocessor Systems</title>
<link>https://hdl.handle.net/1721.1/149350</link>
<description>Input/Output in Time-shared, Segmented, Multiprocessor Systems
Smith, Arthur Anshel
After introducing and defining the concepts of time-sharing, segmentation, and multiprocessing, two classes of systems incorporating these are introduced.  Both classes use associative memories as 'look-behind' devices to speed the operation of addressing the segment memory, with the distinction between the classes being the location of the associative memory.
</description>
<pubDate>Wed, 01 Jun 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149350</guid>
<dc:date>1966-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>OCAS - On-line Cryptanalytic Aid System</title>
<link>https://hdl.handle.net/1721.1/149349</link>
<description>OCAS - On-line Cryptanalytic Aid System
Edwards, Daniel James
Deficiencies of various programming languages for dealing with quantities frequently encountered in cryptanalysis of simple cipher systems will be discussed.  A programming system is proposed which will permit a cryptanalyst to write and debug programs to aid in the solution of cryptograms or cryptographic systems.  The basic elements of the proposed programming system are discussed in detail.  They include: 1) a programming language to handle both algebraic quantities and character strings, 2) a display generator to permit quick specification of a display frame containing both alphanumeric strings and numerical data for an on-line CRT display device, and 3) an on-line program to control operation of the system and to aid in debugging programs written in the proposed language.
</description>
<pubDate>Sun, 01 May 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149349</guid>
<dc:date>1966-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a Low-cost Character Generator for Remote Computer Displays</title>
<link>https://hdl.handle.net/1721.1/149348</link>
<description>Design of a Low-cost Character Generator for Remote Computer Displays
Cheek, Thomas Burrell
A requirement exists for a low-cost remote display terminal with alphanumeric and line-drawing capabilities for use with time-shared computer systems.  This thesis, conducted as part of the overall remote display design project, was undertaken to investigate novel approaches to character generation, with the goal of drastically reducing present-day costs for such devices.      A survey of existing devices and character generation techniques was carried out, and a design approach was chosen which takes advantage of mass-fabrication techniques.  This includes using a five-by-seven dot matrix raster and a resistor array "read-only" character memory for the 96 printable symbols of the Revised Proposed ASCII Code.  Circuits designed included a dot matrix generator, a resistor array memory with selection logic and sense amplifiers, and a shift register output buffer.  An experimental character generator with an eight-word memory was built, largely using integrated circuits, and was found to work as desired.  It is concluded that the design approach will yield a character generator that is of low enough cost to find wide use in remote computer terminals.
</description>
<pubDate>Tue, 01 Mar 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149348</guid>
<dc:date>1966-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of an Analog Technique to Decrease Pentracking Time in Computer Display</title>
<link>https://hdl.handle.net/1721.1/149347</link>
<description>Investigation of an Analog Technique to Decrease Pentracking Time in Computer Display
Stratton, William David
Many modern digital computer systems contain cathode-ray tube display equipment to facilitate man-machine communications.  Through the use of a display and a light-sensitive pen, graphical material can be directly inserted into the computer by using the pen to control the position of the electron beam at the face of the CRT-a process called pen tracking.  Beam position is continually sampled by the computer, permitting continuous display of the material being sketched.  In present digital pen-tracking techniques, a tracking pattern (usually a cross) with a substantial number of points is generated on the face of the CRT and the binary response of the pen to the individual points of the pattern is employed to calculate pen position.  The large number of pattern points, and the phosphor decay time associated with each, yield a typical tracking cycle of 500 to 1000 microseconds.  Since the cycle must be repeated about 100 times per second, 5 to 10 percent of display time is consumed.      To reduce the time required by the tracking operation, an analog technique employing a four-point tracking pattern is proposed in this study, in which the amplitude response of the pen to corresponding pairs of points is used to determine the position of the pen relative to the center of the pattern.  To study the method, one channel of the proposed two-channel analog tracking system was designed, constructed, and coupled to the horizontal channel of a high-speed computer display console.  To avoid the phosphor-decay limitation, an experimental "Beam" pen capable of detecting the electron beam rather than the phosphor luminescence is employed.  The system included a pattern generator, sample-and-hold gates, difference amplifier, envelope detector and noise filter, and a threshold-logic analog-to-digital converter.  The time required to generate the tracking pattern and develop the binary equivalent of the horizontal distance separating pen and pattern center is only 25 microseconds.  
Tracking is generally satisfactory, but some anomalies were noted, apparently due to the characteristics of the experimental pen being used.      It is concluded that the analog technique is feasible for improving the speed of pen tracking, but it is recommended that further studies be made of the limitations inherent in the method.
</description>
<pubDate>Tue, 01 Mar 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149347</guid>
<dc:date>1966-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>MAP: A System for On-line Mathematical Analysis</title>
<link>https://hdl.handle.net/1721.1/149346</link>
<description>MAP: A System for On-line Mathematical Analysis
Kaplow, Roy; Strong, Stephen; Brackett, John
This manual describes a computer system suitable for use on the time-sharing facility at the M.I.T. Computation Center or at Project MAC.  Designed for direct computer access through a remote console, the system replaces the normal procedures of programming with a question-and-answer interchange between the user (hereinafter called U) and the computer (hereinafter called C).  The system is intended for the solution of mathematical problems.  It should be usable by a person with no knowledge of computers or programming and little knowledge of numerical analysis.  Within its range of capabilities, it should be as efficient as are the normal means of computer access for the more sophisticated user.      The system establishes a "conversation" between U and C with an electric typewriter as the means of communication.  U can give information to C and can ask it certain questions.  C can answer those questions if it is given enough information.  C can also ask questions and can therefore request any missing information.  In addition, C can explain procedures to U in order to help the latter transmit the required information in a proper form.  U, therefore, only needs to know a few basic rules, such as how to phrase his questions and how to name and tabulate his data.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149346</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming Semantics for Multiprogrammed Computations</title>
<link>https://hdl.handle.net/1721.1/149345</link>
<description>Programming Semantics for Multiprogrammed Computations
Dennis, Jack B.; Van Horn, Earl C.
The semantics are defined for a number of meta-instructions which perform operations essential to the writing of programs in multiprogrammed computer systems.  These meta-instructions relate to parallel processing, protection of separate computations, program debugging, and the sharing among users of memory segments and other computing objects, the names of which are hierarchically structured.  The language sophistication contemplated is midway between an assembly language and an advanced algebraic language.
</description>
<pubDate>Wed, 01 Dec 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149345</guid>
<dc:date>1965-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Priority Problem</title>
<link>https://hdl.handle.net/1721.1/149344</link>
<description>The Priority Problem
Greenberger, Martin
Priority decisions arise whenever limited facilities must be apportioned among competitive demands for service.  Broadly viewed, even the familiar first-come-first-served discipline is a priority rule.  It favors the longest-waiting user, and guards against excessive delays.  Other priority rules, such as shortest-job-next, are keyed instead to considerations of operating efficiency.  Urgency of request is still another common consideration.  Since these considerations often conflict, the priority rule serves as mediator.  Use of a common cost measure can help effect this mediation, as results from recent job-shop simulations illustrate.      A priority operation of contemporary interest is scheduling a time-shared computer among its concurrent users.  Service requirements are not known in advance of execution.  To keep response times short for small requests, service intervals are partitioned and segments are served separately in round-robin fashion.  A mathematical analysis pinpoints the tradeoff between overhead and discrimination implicit in this procedure, and allows alternate strategies to be costed.  Extensions of the simple round-robin procedure are suggested, the objectives of time-sharing are reviewed, and implications are drawn for the design of future priority and pricing systems.
</description>
<pubDate>Mon, 01 Nov 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149344</guid>
<dc:date>1965-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Queueing Models for File Memory Operation</title>
<link>https://hdl.handle.net/1721.1/149343</link>
<description>Queueing Models for File Memory Operation
Denning, Peter James
A model for the auxiliary memory function of a segmented, multiprocessor, time-shared computer system is set up.  A drum system in particular is discussed, although no loss of generality is implied by limiting the discussion to drums.  Particular attention is given to the queue of requests waiting for drum use.  It is shown that a shortest access time first queue discipline is the most efficient, with the access time being defined as the time required for the drum to be positioned, and is measured from the finish of service of the last request to the beginning of the data transfer for the present request.  A detailed study of the shortest access time queue is made, giving the minimum access time probability distribution, equations for the number in the queue, and equations for the wait in the queue.  Simulations were used to verify these equations; the results are discussed.  Finally, a general Markov Model for Queues is discussed in an Appendix.
</description>
<pubDate>Fri, 01 Oct 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149343</guid>
<dc:date>1965-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calculaid: An On-line System for Algebraic Computation and Analysis</title>
<link>https://hdl.handle.net/1721.1/149342</link>
<description>Calculaid: An On-line System for Algebraic Computation and Analysis
Wantman, Mayer Elihu
OPS is an on-line system developed by M. Greenberger et al. at Project MAC.  The present work provides a powerful and simple way to perform numerical manipulations and calculations within OPS.  The program package is called CALCULAID.      A method of executing algebraic assignment statements, of which MAD and FORTRAN assignments are a subset, is provided.  When this assignment-statement ability is coupled with other features of the OPS system, such as unconditional transfers, general conditionals, and array and function declarations, most of the ability of a compiler language is provided.  Because the programs written in OPS are executed interpretively, OPS-3 programs can be changed and re-run immediately, without being compiled.      The other elements of CALCULAID are programs for creating multiple linear regression models, rank-ordering and counting data, and finding roots of polynomial equations in one unknown.      The applications of CALCULAID to the analysis of a round-robin scheduling model and to a process-control problem are discussed, and conclusions regarding the suitability of running computational programs in an interpretive mode are drawn.
</description>
<pubDate>Wed, 01 Sep 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149342</guid>
<dc:date>1965-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Heuristic Approach to Alternate Routing in a Job Shop</title>
<link>https://hdl.handle.net/1721.1/149341</link>
<description>A Heuristic Approach to Alternate Routing in a Job Shop
Russo, F.J.
The research reported here investigates the use of heuristics for selecting from several alternate routes resulting from partially ordered tasks in a job shop order file.  The experimental vehicle employed was digital simulation.      The concept of the "alternate string" has been developed to generalize the existence of partially ordered operations.  That term is defined as a concatenation of operations that can be performed in any order, with the additional specification that all within the string can be attempted.  The presence of alternate strings with two or more members gives rise to the alternate routing problem, whose solution is approached by heuristic methods.      Choosing from among several alternate routes constitutes a three-level decision problem.  At the lowest level, routes can be chosen when the order enters the shop.  This is equivalent to fixed routing.  At a higher level, alternates can be selected at the time of transition from one work station to another.  The third decision level occurs at operation time, when one of the alternate operations is placed on a machine.  Heuristics were tested at the latter two levels.      There were two prior assertions that this thesis set out to prove.  The first was that alternate routing at the highest decision level would produce significant reductions in the mean tardiness of orders completed past their designated due dates, the improvement being both relative to fixed routing and to alternate routing heuristics implemented at lower decision levels.  Secondly, the contention was made that the improvement would be of such a magnitude that on-line, real-time systems become economically justifiable as a means of mitigating the attendant control problems caused by non-deterministic paths through the queuing network.      The methodology employed here was to conduct two passes of simulated shop runs.  
The first, with two artificially high levels of alternate incidence, tested the efficiency of five different alternate routing heuristics in reducing mean tardiness.  The second pass consisted of runs with the best heuristic developed during the first experimental phase applied to a realistic length and frequency of alternate strings.      The results of the experiments strongly support the assertions made at the outset of the thesis.  The performance characteristics of the different heuristics are discussed at length.  In addition, some implications are drawn of the computational nature of alternate routing and the difficulties encountered in implementing alternate routing heuristics at operation time.
</description>
<pubDate>Tue, 01 Jun 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149341</guid>
<dc:date>1965-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Time-Shared Computer Systems</title>
<link>https://hdl.handle.net/1721.1/149340</link>
<description>An Analysis of Time-Shared Computer Systems
Scherr, Allan Lee
Some of the aspects of the operation of time-shared, interactive computer systems are analyzed.  The emphasis is on the reaction of a hardware system to the demands that its users make upon it.  The goal is to model time-shared systems and their users in order to be able to predict the performance of the two operating together.  Portions of this problem include the specification and measurement of user characteristics, the development and verification of both simulation and mathematical models for time-shared systems, and the specification and measurement of performance metrics for such systems.  The user measurements and some of the performance measurements were made on Project MAC's "Compatible Time-Sharing System" (CTSS).      First, simulation models are used to study the effects of changing small details in the operation of CTSS-like systems.  Then, a continuous-time Markov process model is derived to predict the performance of a broad class of systems.  Throughout, the CTSS data are used as a basis for comparison with model predictions.  In order to be able to take measurements and to build models, many definitions of commonly used time-shared system terminology are made precise.
</description>
<pubDate>Tue, 01 Jun 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149340</guid>
<dc:date>1965-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time Sharing on a Multiconsole Computer</title>
<link>https://hdl.handle.net/1721.1/149339</link>
<description>Time Sharing on a Multiconsole Computer
Samuel, Arthur L.
After a brief historical review and a description of the three basic types of time-sharing systems, the general-purpose time-sharing system as exemplified by the M.I.T. CTSS system is described in general terms, with particular attention to the way the system looks to the user.
</description>
<pubDate>Mon, 01 Mar 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149339</guid>
<dc:date>1965-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>CTSS Technical Notes</title>
<link>https://hdl.handle.net/1721.1/149338</link>
<description>CTSS Technical Notes
Saltzer, Jerome H.
This report is a technical description of the 7094 Compatible Time-Sharing System in use at Project MAC and the M.I.T. Computation Center.  It is designed to acquaint a system programmer with the techniques of construction which were used in this particular time-sharing system.  Separate chapters discuss the overall supervisor program flow; console message input and output; the scheduling and storage algorithms; and a thumbnail sketch is given of each of the subroutines which make up the supervisor program.      This report was prepared with the aid of the compatible time-sharing system and the TYPSET and RUNOFF commands.
</description>
<pubDate>Mon, 01 Mar 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149338</guid>
<dc:date>1965-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Use of CTSS in a Teaching Environment</title>
<link>https://hdl.handle.net/1721.1/149337</link>
<description>Use of CTSS in a Teaching Environment
Roos, Daniel
Computer time-sharing offers many interesting possibilities for use in teaching computer technology.  It might be expected that with proper hardware and software, students using time-sharing as a teaching machine could acquire proficiency in the fundamentals of programming more easily than using batch-processing.  To test this hypothesis, the M.I.T. Department of Civil Engineering divided a freshman programming class so that half the students used batch-processing methods, and half used the Project MAC time-sharing system to do the same work.  This paper describes the experiment and its tentative results.
</description>
<pubDate>Sun, 01 Nov 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149337</guid>
<dc:date>1964-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A New Methodology for Computer Simulation</title>
<link>https://hdl.handle.net/1721.1/149336</link>
<description>A New Methodology for Computer Simulation
Greenberger, Martin
Computer simulation is a cooperative venture between researcher and information processor, but the processor's role customarily begins too late.  The researcher can benefit substantially by bringing  the computer up into the earlier, creative phases of the simulation process.  An on-line computer system that makes this possible is described.
</description>
<pubDate>Thu, 01 Oct 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149336</guid>
<dc:date>1964-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The MAC System: A Progress Report</title>
<link>https://hdl.handle.net/1721.1/149335</link>
<description>The MAC System: A Progress Report
Fano, Robert M.
The notion of machine-aided cognition implies an intimate collaboration between a human user and a computer in a real-time dialogue on the solution of a problem, in which the two parties contribute their best capabilities.  In order for this intimate collaboration to be possible, a computer system is needed that can serve simultaneously a large number of people, and that is easily accessible to them, both physically and intellectually.  The present MAC System is a first step toward this goal.  The purpose of this paper is to present a brief description of the current system, to report on the experience gained from its operation, and to indicate directions along which future developments are likely to proceed.
</description>
<pubDate>Thu, 01 Oct 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149335</guid>
<dc:date>1964-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Structure in a Multi-access Computer</title>
<link>https://hdl.handle.net/1721.1/149334</link>
<description>Program Structure in a Multi-access Computer
Dennis, Jack B.
A multi-access computer (MAC) system consists of processing units and directly addressable main memory in which procedure information is interpreted as sequences of operations on data, a system of terminal devices through which users may communicate with procedures operating for them, and mass memory where procedures and data may be held when not required for immediate reference.  One fundamental attraction of the MAC concept is the increased productivity of "computer catalyzed research" that results from close man-machine interaction.  Another attraction is the wealth of data and procedures that are accessible to a large user community through the file memory of a MAC system.  In this report thoughts are developed which form an adequate model of program structure.  These concepts have grown out of many discussions with colleagues in Project MAC, and our experience to date in the design and operation of multi-access computer systems.
</description>
<pubDate>Fri, 01 May 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149334</guid>
<dc:date>1964-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The OPS-1 Manual</title>
<link>https://hdl.handle.net/1721.1/149333</link>
<description>The OPS-1 Manual
Greenberger, Martin
The recent attainment and continuing development of personally accessible computer facilities have opened another chapter in the use of machines by man.  A number of current research efforts, including Project MAC at M.I.T., are designing new conceptual systems to adapt the emerging technology to a wide range of human activity.  Activities relating to management are the concern of a trial system at Project MAC called OPS-1.  The OPS-1 system and the experiment that launched it are described in this manual. {AD 604-681}
</description>
<pubDate>Fri, 01 May 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149333</guid>
<dc:date>1964-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>OPL-I An Open Ended Programming System Within CTSS</title>
<link>https://hdl.handle.net/1721.1/149332</link>
<description>OPL-I An Open Ended Programming System Within CTSS
Weizenbaum, Joseph
OPL-1, an incremental programming system presently operating with CTSS, permits the user to augment both his program and his data base during widely separated successive sessions at his terminal.  Facilities are provided which make it possible for the user to operate on his already established data base both by means of built-in operators and in terms of operators (functions) which the user has previously defined in the language of the system.  Underlying the system is a powerful list processing scheme embedded in FORTRAN (SLIP).  The machinery of this fundamental language drives the system and is also largely available to the user.  The data base generated by the user is therefore a set of list structures (trees), and most of the operators available to him are list processing operators.  Data structures with considerable inter-relational complexity may therefore be treated quite directly.
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149332</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stress: A Problem-oriented Language for Structural Engineering</title>
<link>https://hdl.handle.net/1721.1/149331</link>
<description>Stress: A Problem-oriented Language for Structural Engineering
Biggs, John M.; Logcher, Robert D.
STRESS  is a general purpose programming system for the analysis of structures.  Compared to most other structural programs it has three distinguishing characteristics: (1)  The input language is that of the structural engineer which makes possible direct communication between the engineer and the machine; (2)  The system is capable of analyzing a wide variety of structural types and loading conditions thus permitting industrial use on a routine basis; and (3)  The design process is expedited by the fact that modifications of the original structure for alternate designs can be easily executed.  This last capability is most effective when STRESS  is used in the time-sharing mode.  These features combine to provide a system which not only reduces the effort required for structural analysis but, more significantly, enhances the designer's ability to evolve an efficient structure.
</description>
<pubDate>Fri, 01 Jul 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149331</guid>
<dc:date>1966-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>CARPS, A Program Which Solves Calculus Word Problems</title>
<link>https://hdl.handle.net/1721.1/149330</link>
<description>CARPS, A Program Which Solves Calculus Word Problems
Charniak, Eugene
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149330</guid>
</item>
<item>
<title>Verbal and Graphical Language for the AED System: A Progress Report</title>
<link>https://hdl.handle.net/1721.1/149329</link>
<description>Verbal and Graphical Language for the AED System: A Progress Report
Ross, Douglas T.; Feldman, Clarence G.
For Computer-Aided Design use of time-sharing, a single language which can take either verbal or graphical form is required.  This paper describes how a single language processing technique, which is in turn a special application of more general concepts concerning the step-by-step growth and processing of large structures of interrelated elements, can efficiently process both language forms in the same manner.  Illustrations of the concepts involved are also drawn from the methods used in the AED-O Compiler, an efficient ALGOL-60-based compiler used in Computer-Aided Design work, which is available as a public command in the Project MAC CTSS.
</description>
<pubDate>Fri, 01 May 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149329</guid>
<dc:date>1964-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>System Requirements for Multiple-Access, Time-shared Computers</title>
<link>https://hdl.handle.net/1721.1/149328</link>
<description>System Requirements for Multiple-Access, Time-shared Computers
Corbató, Fernando J.
It is now clear that it is possible to create a general-purpose time-shared multiple-access system on most contemporary computers.  However, it is equally clear that none of the existing computers are well designed for multiple-access systems.  At present, good service to a few dozen simultaneous users is considered state-of-the-art.      Discussions include clocks, memory protection and supervisor mode, program relocation, and common subroutines, which expose the reader to the difficulties encountered with contemporary machines when multiple-user, multiple-processor systems are considered.
</description>
<pubDate>Fri, 01 May 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149328</guid>
<dc:date>1964-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>SIR: A Computer Program for Semantic Information Retrieval</title>
<link>https://hdl.handle.net/1721.1/149327</link>
<description>SIR: A Computer Program for Semantic Information Retrieval
Raphael, Bertram
SIR is a computer system, programmed in the LISP language, which accepts information and answers questions expressed in a restricted form of English.  This system demonstrates what can reasonably be called an ability to "understand" semantic information.  SIR's semantic and deductive ability is based on the construction of an internal model, which uses word associations and property lists, for the relational information normally conveyed in conversational statements.      A format-matching procedure extracts semantic content from English sentences.  If an input sentence is declarative, the system adds appropriate information to the model.  If an input sentence is a question, the system searches the model until it either finds the answer or determines why it cannot find the answer.  In all cases SIR reports its conclusions.  The system has some capacity to recognize exceptions to general rules, resolve certain semantic ambiguities, and modify its model structure in order to save computer memory space.      Judging from its conversational ability, SIR is more "intelligent" than any existing question-answering system.  The author describes how this ability was developed and how the basic features of SIR compare with those of other systems.      The working system, SIR, is a first step toward intelligent machine communication.  The author proposes a next step by describing how to construct a more general system which is less complex and yet more powerful than SIR.  This proposed system contains a generalized version of the SIR model, a formal logical system called SIR1, and a computer program for testing the truth of SIR1 statements with respect to the generalized model by using partial proof procedures in the predicate calculus.  The thesis also describes the formal properties of SIR1 and how they relate to the logical structure of SIR.
</description>
<pubDate>Mon, 01 Jun 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149327</guid>
<dc:date>1964-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Input for a Computer Problem Solving System</title>
<link>https://hdl.handle.net/1721.1/149326</link>
<description>Natural Language Input for a Computer Problem Solving System
Bobrow, Daniel G.
The STUDENT problem solving system, programmed in LISP, accepts as input a comfortable but restricted subset of English which can express a wide variety of algebra story problems.  STUDENT finds the solution to a large class of these problems.  STUDENT can utilize a store of global information not specific to any one problem, and may make assumptions about the interpretation of ambiguities in the wording of the problem being solved.  If it uses such information, or makes any assumptions, STUDENT communicates this fact to the user.      The thesis includes a summary of other English language question-answering systems.  All these systems, and STUDENT, are evaluated according to four standard criteria.      The linguistic analysis in STUDENT is a first approximation to the analytic portion of a semantic theory of discourse outlined in the thesis.  STUDENT finds the set of kernel sentences which are the base of the input discourse, and transforms this sequence of kernel sentences into a set of simultaneous equations which form the semantic base of the STUDENT system.  STUDENT then tries to solve this set of equations for the values of requested unknowns.  If it is successful, it gives the answers in English.  If not, STUDENT asks the user for more information, and indicates the nature of the desired information.  The STUDENT system is a first step toward natural language communication with computers.  Further work on the semantic theory proposed should result in much more sophisticated systems.
</description>
<pubDate>Tue, 01 Sep 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149326</guid>
<dc:date>1964-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Execution Model Enforcement Via Program Shepherding</title>
<link>https://hdl.handle.net/1721.1/149325</link>
<description>Execution Model Enforcement Via Program Shepherding
Kiriansky, Vladimir; Bruening, Derek; Amarasinghe, Saman
Nearly all security attacks have one thing in common: they coerce the target program into performing actions that it was never intended to perform.  In short, they violate the program's execution model. The execution model encompasses the Application Binary Interface (ABI), higher-level specifications from the program's source programming language, and components specific to the program --- for example, which values a particular function pointer may take.  If this execution model were enforced, and only program actions that the programmer intended were allowed, a majority of current security holes would be closed.   In this paper, we employ program shepherding [26] to enforce a program's execution model.  Program shepherding monitors control flow in order to enforce a security policy.  We use static and dynamic analyses to automatically build a custom security policy for a target program which specifies the program's execution model.  We have implemented our analyses in the DynamoRIO [5] runtime code modification system.  The resulting system imposes minimal or no performance overhead, operates on unmodified native binaries, and requires no special hardware or operating system support.  Our static analyses require source code access but not recompilation.  The analysis process requires no user interaction, but is able to build a strict enough policy to prevent all deviations from the program's control flow graph and nearly all violations of the calling convention, greatly reducing the possibility of an unintended program action.
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149325</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Packet Classification Using Bit Vector Aggregating and Folding</title>
<link>https://hdl.handle.net/1721.1/149324</link>
<description>Scalable Packet Classification Using Bit Vector Aggregating and Folding
Li, Ji; Liu, Haiyang; Sollins, Karen
Packet classification is a central function for a number of network applications, such as routing and firewalls. Most existing algorithms for packet classification scale poorly in either time or space when the database size grows. The scalable algorithm Aggregated Bit Vector (ABV) is an improvement on the Lucent bit vector scheme (BV), but has some limitations. Our algorithm, Aggregated and Folded Bit Vector (AFBV), seeks to reduce false matches while keeping the benefits of bit vector aggregation and avoiding rule rearrangement. It combines bit vector aggregation and folding to achieve this goal. Experiments showed that our algorithm outperforms both the BV and ABV schemes in synthetically generated databases.
</description>
<pubDate>Tue, 01 Apr 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149324</guid>
<dc:date>2003-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stream Algorithms and Architecture</title>
<link>https://hdl.handle.net/1721.1/149323</link>
<description>Stream Algorithms and Architecture
Hoffmann, Henry; Strumpen, Volker; Agarwal, Anant
Wire-exposed, programmable microarchitectures including Trips [11], Smart Memories [8], and Raw [13] offer an opportunity to schedule instruction execution and data movement explicitly. This paper proposes stream algorithms, which, along with a decoupled systolic architecture, provide an excellent match for the physical and technological constraints of single-chip tiled architectures. Stream algorithms enable programmed systolic computations for different problem sizes, without incurring the cost of memory accesses. To that end, we decouple memory accesses from computation and move the memory accesses off the critical path. By structuring computations in systolic phases, and deferring memory accesses to dedicated memory processors, stream algorithms can solve many regular problems with varying sizes on a constant-sized tiled array. Contrary to conventional wisdom, the compute efficiency of stream algorithms increases as we increase the number of processing elements. In particular, we show that the compute efficiency of stream algorithms can approach 100% asymptotically, that is, for large numbers of processors and appropriate problem sizes.
</description>
<pubDate>Sat, 01 Mar 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149323</guid>
<dc:date>2003-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theoretical and Practical Approach to Instruction Scheduling on Spatial Architectures</title>
<link>https://hdl.handle.net/1721.1/149322</link>
<description>A Theoretical and Practical Approach to Instruction Scheduling on Spatial Architectures
Mirrokni, Vahab S.; Lee, Walter; Karger, David; Amarasinghe, Saman
This paper studies the problem of instruction assignment and scheduling on spatial architectures. Spatial architectures are architectures whose resources are organized in clusters, with non-zero communication delays between the clusters. On these architectures, instruction scheduling includes both space scheduling, where instructions are mapped to clusters, and the traditional time scheduling. This paper considers the problem from both theoretical and practical perspectives. It presents two integer linear program formulations with known performance bounds, along with an 8-approximation algorithm for constant m and constant communication delays. We then introduce three heuristic algorithms based on list scheduling and study a layer partitioning method. Our final algorithm combines layer partitioning with the third heuristic. Two of the better algorithms are evaluated on the Raw machine. Results show that they are competitive with previously published results; for scientific codes, our heuristics perform an average of 25% better.
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149322</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>pStore: A Secure Peer-to-Peer Backup System</title>
<link>https://hdl.handle.net/1721.1/149321</link>
<description>pStore: A Secure Peer-to-Peer Backup System
Batten, Christopher; Barr, Kenneth; Saraf, Arvind; Trepetin, Stanley
In an effort to combine research in peer-to-peer systems with techniques for incremental backup systems, we propose pStore: a secure distributed backup system based on an adaptive peer-to-peer network. pStore exploits unused personal hard drive space attached to the Internet to provide the distributed redundancy needed for reliable and effective data backup. Experiments on a 30-node network show that 95% of the files in a 13 MB dataset can be retrieved even when 7 of the nodes have failed. On top of this reliability, pStore includes support for file encryption, versioning, and secure sharing. Its custom versioning system permits arbitrary version retrieval, similar to CVS. pStore provides this functionality at less than 10% of the network bandwidth and requires 85% less storage capacity than simpler local tape backup schemes for a representative workload.
</description>
<pubDate>Tue, 01 Oct 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149321</guid>
<dc:date>2002-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Offline Authentication of Untrusted Storage</title>
<link>https://hdl.handle.net/1721.1/149320</link>
<description>Offline Authentication of Untrusted Storage
Clarke, Dwaine; Gassend, Blaise; Suh, G. Edward; van Dijk, Marten; Devadas, Srinivas
We extend the offline memory correctness checking scheme presented by Blum et al. [BEG+91], using incremental cryptography, to detect attacks by an active adversary. We also introduce a hybrid offline-online checking scheme designed for untrusted storage in file systems and databases. Previous work [GSC+02] [FKM00] [MVS00] describes systems in which Merkle trees are used to verify the authenticity of data stored on untrusted storage. Merkle trees [Mer79] are used to check, after each operation, whether the storage performed correctly. The offline and hybrid checkers are designed for checking sequences of operations on an untrusted storage and, in the common case, require only a constant overhead on the number of accesses to the storage, as compared to the logarithmic overhead incurred by online Merkle tree schemes.
</description>
<pubDate>Thu, 01 Aug 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149320</guid>
<dc:date>2002-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phased Computation Graphs in the Polyhedral Model</title>
<link>https://hdl.handle.net/1721.1/149319</link>
<description>Phased Computation Graphs in the Polyhedral Model
Thies, William; Lin, Jasper; Amarasinghe, Saman
We present a translation scheme that allows a broad class of dataflow graphs to be considered under the optimization framework of the polyhedral model. The input to our analysis is a Phased Computation Graph, which we define as a generalization of the most widely used dataflow representations, including synchronous dataflow, cyclo-static dataflow, and computation graphs. The output of our analysis is a System of Affine Recurrence Equations (SARE) that exactly captures the data dependencies between the nodes of the original graph. Using the SARE representation, one can apply many techniques from the scientific community that are new to the DSP domain. For example, we propose simple optimizations such as node splitting, decimation propagation, and steady-state invariant code motion that leverage the fine-grained dependence information of the SARE to perform novel transformations on a stream graph. We also propose ways in which the polyhedral model can offer new approaches to classic problems of the DSP community, such as minimizing buffer size and code size, and optimizing the schedule.
</description>
<pubDate>Thu, 01 Aug 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149319</guid>
<dc:date>2002-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Collision Model for Randomized Routing In Fat-Tree Networks</title>
<link>https://hdl.handle.net/1721.1/149318</link>
<description>A Collision Model for Randomized Routing In Fat-Tree Networks
Strumpen, Volker; Krishnamurthy, Arvind
</description>
<pubDate>Mon, 01 Jul 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149318</guid>
<dc:date>2002-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Build Scalable On-Chip ILP Networks for a Decentralized Architecture</title>
<link>https://hdl.handle.net/1721.1/149317</link>
<description>How to Build Scalable On-Chip ILP Networks for a Decentralized Architecture
Taylor, Michael Bedford; Lee, Walter; Frank, Matthew; Amarasinghe, Saman; Agarwal, Anant
The era of a billion transistors on a chip is creating a completely different set of design constraints, forcing radically new microprocessor architecture designs. This paper examines a few of the possible microarchitectures that are capable of obtaining scalable ILP performance. First, we observe that the network that interconnects the processing elements is the critical design point in the microarchitecture. Next, we characterize four fundamental properties that have to be satisfied by the interconnection network. We then provide case studies of two different networks that satisfy these properties. Finally, a detailed evaluation of these networks is presented to highlight the scalability and performance of these microarchitectures. We show that by using compile-time information, we can build simpler networks and use them efficiently.
</description>
<pubDate>Sat, 01 Apr 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149317</guid>
<dc:date>2000-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Stream Compiler for Communication-Exposed Architectures</title>
<link>https://hdl.handle.net/1721.1/149316</link>
<description>A Stream Compiler for Communication-Exposed Architectures
Gordon, Michael; Thies, William; Karczmarek, Michael; Wong, Jeremy; Hoffmann, Henry; Maze, David; Amarasinghe, Saman
With the increasing miniaturization of transistors, wire delays are becoming a dominant factor in microprocessor performance. To address this issue, a number of emerging architectures contain replicated processing units with software-exposed communication between one unit and another (e.g., Raw, iWarp, SmartMemories). However, for their use to be widespread, it will be necessary to develop compiler technology that enables a portable, high-level language to execute efficiently across a range of wire-exposed architectures. In this paper, we describe our compiler for StreamIt: a high-level, architecture-independent language for streaming applications. We focus on our backend for the Raw processor. Though StreamIt exposes the parallelism and communication patterns of stream programs, much analysis is needed to adapt a stream program to a parallel stream processor. We describe fission and fusion transformations that can be used to adjust the granularity of a stream graph, a layout algorithm for mapping a stream graph to a given network topology, and a scheduling algorithm for generating a fine-grained static communication pattern for each computational element. We have implemented a fully functional compiler that parallelizes StreamIt applications for Raw, including several load-balancing optimizations. Using the cycle-accurate Raw simulator, we demonstrate that these optimizations can improve performance by up to 145%. We consider this work to be a first step towards a portable programming model for communication-exposed architectures.
</description>
<pubDate>Fri, 01 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149316</guid>
<dc:date>2002-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on the Stability Requirements of Adaptive Virtual Queue</title>
<link>https://hdl.handle.net/1721.1/149315</link>
<description>A Note on the Stability Requirements of Adaptive Virtual Queue
Katabi, Dina; Blake, Charles
Choosing the correct value for the parameters of an Active Queue Management (AQM) scheme is a well-known hard problem. The Adaptive Virtual Queue (AVQ) attempts to solve this problem by using stability requirements to devise a rule for setting its parameter. This memo shows that the AVQ rule for setting its parameter is impractical for many real-life situations.
</description>
<pubDate>Fri, 01 Feb 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149315</guid>
<dc:date>2002-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Secure Execution Via Program Shepherding</title>
<link>https://hdl.handle.net/1721.1/149314</link>
<description>Secure Execution Via Program Shepherding
Kiriansky, Vladimir; Bruening, Derek; Amarasinghe, Saman
We introduce program shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.
</description>
<pubDate>Fri, 01 Feb 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149314</guid>
<dc:date>2002-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient View-Dependent Sampling of Visual Hulls</title>
<link>https://hdl.handle.net/1721.1/149313</link>
<description>Efficient View-Dependent Sampling of Visual Hulls
Matusik, Wojciech; Buehler, Chris; McMillan, Leonard
In this paper we present an efficient algorithm for sampling visual hulls. Our algorithm computes exact points and normals on the surface of the visual hull instead of a more traditional volumetric representation. The main feature that distinguishes our algorithm from previous ones is that it allows for sampling along arbitrary viewing rays with no loss of efficiency. Using this property, we adaptively sample visual hulls to minimize the number of samples needed to attain a given fidelity. In our experiments, the number of samples can typically be reduced by an order of magnitude, resulting in a corresponding performance increase over previous algorithms.
</description>
<pubDate>Fri, 01 Feb 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149313</guid>
<dc:date>2002-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Visual Hull Computation Algorithm</title>
<link>https://hdl.handle.net/1721.1/149312</link>
<description>An Efficient Visual Hull Computation Algorithm
Matusik, Wojciech; Buehler, Chris; McMillan, Leonard; Gortler, Steven J.
In this paper we describe an efficient algorithm for computing the visual hull of an object. This problem is equivalent to computing the intersection of generalized cones. The naïve visual hull computation algorithm requires intersecting 3D polyhedra. We exploit the special structure of generalized cone polyhedra and show how to reduce this computation to a set of intersections in 2D. Moreover, we describe how the 2D intersections can be carried out efficiently.
</description>
<pubDate>Fri, 01 Feb 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149312</guid>
<dc:date>2002-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>StreamIT: A Compiler for Streaming Applications</title>
<link>https://hdl.handle.net/1721.1/149311</link>
<description>StreamIT: A Compiler for Streaming Applications
Thies, William F.; Karczmarek, Michael; Gordon, Michael; Maze, David; Wong, Jeremy; Hoffmann, Henry; Brown, Matthew; Amarasinghe, Saman
Streaming programs represent an increasingly important and widespread class of applications that holds unprecedented opportunities for high-impact compiler technology. Unlike sequential programs with obscured dependence information and complex communication patterns, a stream program is naturally written as a set of concurrent filters with regular steady-state communication. The StreamIt language aims to provide a natural, high-level syntax that improves programmer productivity in the streaming domain. At the same time, the language imposes a hierarchical structure on the stream graph that enables novel representations and optimizations within the StreamIt compiler. We define the "stream dependence function," a fundamental relationship between the input channels of two filters in a stream graph. We also describe a suite of stream optimizations, a denotational semantics for validating these optimizations, and a novel phased scheduling algorithm for stream graphs. In addition, we have implemented a prototype of the StreamIt optimizing compiler that is showing promising results.
</description>
<pubDate>Fri, 01 Feb 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149311</guid>
<dc:date>2002-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Techniques for Increasing and Detecting Memory Alignment</title>
<link>https://hdl.handle.net/1721.1/149310</link>
<description>Techniques for Increasing and Detecting Memory Alignment
Larsen, Samuel; Witchel, Emmett; Amarasinghe, Saman
Memory alignment is an important property in memory system performance. Extraction of alignment information at compile-time enables the possibility for new classes of program optimization. In this paper, we present methods for increasing and detecting the alignment of memory references in a program. Our transformations and analyses do not require interprocedural analysis and introduce almost no overhead. As a result, they can be incorporated into real compilation systems. On average, our techniques are able to achieve a five-fold increase in the number of dynamically aligned memory references. We are then able to detect 94% of these operations. This success is invaluable in providing performance gains in a range of different areas. When alignment information is incorporated into a vectorizing compiler, we can increase the performance of a G4 AltiVec processor by more than a factor of two. Using the same methods, we are able to reduce energy consumption in a data cache by as much as 35%.
</description>
<pubDate>Thu, 01 Nov 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149310</guid>
<dc:date>2001-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>StreaMIT: A Language for Streaming Applications</title>
<link>https://hdl.handle.net/1721.1/149309</link>
<description>StreaMIT: A Language for Streaming Applications
Thies, William F.; Karczmarek, Michael; Amarasinghe, Saman
We characterize high-performance streaming applications as a new and distinct domain of programs that is becoming increasingly important. The StreaMIT language provides novel high-level representations to improve programmer productivity and program robustness within the streaming domain. At the same time, the StreaMIT compiler aims to improve the performance of streaming applications via stream-specific analyses and optimizations. In this paper, we motivate, describe and justify the language features of StreaMIT, which include: a structured model of streams, a messaging system for control, a re-initialization mechanism, and a natural textual syntax. We also present a means of reasoning about time in terms of "information flow": a concept that we believe is fundamental to the streaming domain. Using this concept, we give a formal semantics for StreaMIT's messaging system, as well as a simple algorithm for detecting deadlock and buffer overflow.
</description>
<pubDate>Wed, 01 Aug 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149309</guid>
<dc:date>2001-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Software Framework for Supporting General Purpose Applications on RAW Computation Fabrics</title>
<link>https://hdl.handle.net/1721.1/149308</link>
<description>A Software Framework for Supporting General Purpose Applications on RAW Computation Fabrics
Frank, Matthew; Lee, Walter; Amarasinghe, Saman
This paper presents SUDS (Software Un-Do Systems), a data speculation system for Raw processors. SUDS manages speculation in software. The key to managing speculation in software is to use the compiler to minimize the number of data items that need to be managed at runtime. Managing speculation in software enables Raw processors to achieve good performance on integer applications without sacrificing chip area for speculation hardware. This additional area can instead be devoted to additional compute resources, improving the performance of dense matrix and media applications.
</description>
<pubDate>Sun, 01 Jul 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149308</guid>
<dc:date>2001-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Persona: A Contextualized and Personalized Web Search</title>
<link>https://hdl.handle.net/1721.1/149307</link>
<description>Persona: A Contextualized and Personalized Web Search
Tanudjaja, Francisco; Mui, Lik
Recent advances in graph-based search techniques derived from Kleinberg's work [1] have been impressive. This paper further improves the graph-based search algorithm in two dimensions. Firstly, variants of Kleinberg's techniques take into account neither the semantics of the query string nor those of the nodes being searched. As a result, polysemy of query words cannot be resolved. This paper presents an interactive query scheme utilizing the simple web ontology provided by the Open Directory Project to resolve meanings of a user query. Secondly, we extend a recently proposed personalized version of the Kleinberg algorithm [3]. Simulation results are presented to illustrate the sensitivity of our technique. We outline the implementation of our algorithm in the Persona personalized web search system.
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149307</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ratings in Distributed Systems: A Bayesian Approach</title>
<link>https://hdl.handle.net/1721.1/149306</link>
<description>Ratings in Distributed Systems: A Bayesian Approach
Mui, Lik; Mohtashemi, Mojdeh; Ang, Cheewee; Szolovits, Peter; Halberstadt, Ari
For distributed systems at large, and e-commerce systems in particular, ratings play an increasingly important role. Ratings confer reputation measures on sources. This paper reports our formalization of the rating process and argues that ratings should be context- and individual-dependent quantities. In contrast to existing rating systems at many e-commerce or developer sites, our approach makes use of personalized and contextualized ratings for assessing source reputation. Our approach is based on a Bayesian probabilistic framework.
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149306</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Round Zero-Knowledge Using a Proof of Knowledge Assumption</title>
<link>https://hdl.handle.net/1721.1/149305</link>
<description>Three Round Zero-Knowledge Using a Proof of Knowledge Assumption
Lepinski, Matthew; Micali, Silvio
We provide a proof of knowledge assumption that allows us to construct a three round zero-knowledge proof system for any language in NP.
</description>
<pubDate>Sun, 01 Apr 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149305</guid>
<dc:date>2001-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mutually Independent Commitment</title>
<link>https://hdl.handle.net/1721.1/149304</link>
<description>Mutually Independent Commitment
Liskov, Moses; Lysyanskaya, Anna; Micali, Silvio; Reyzin, Leonid; Smith, Adam
We describe a new kind of commitment scheme in which two parties commit to values in a commitment stage, at the end of which we are assured that the values they have committed to cannot be correlated to one another. We call this new primitive mutually independent commitments. We present three mutually independent commitment schemes which handle single-bit commitments, and which are computationally hiding and perfectly binding.
</description>
<pubDate>Sun, 01 Apr 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149304</guid>
<dc:date>2001-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forward-Secure Signatures with Optimal Signing and Verifying</title>
<link>https://hdl.handle.net/1721.1/149303</link>
<description>Forward-Secure Signatures with Optimal Signing and Verifying
Itkis, Gene; Reyzin, Leonid
Ordinary digital signatures have an inherent weakness: if the secret key is leaked, then all signatures, even the ones generated before the leak, are no longer trustworthy. Forward-secure digital signatures were recently proposed to address this weakness: they ensure that past signatures remain secure even if the current secret key is leaked. We propose the first forward-secure signature scheme for which both signing and verifying are as efficient as for one of the most efficient ordinary signature schemes (Guillou-Quisquater): each requires just two modular exponentiations with a short exponent. All previously proposed forward-secure signature schemes took significantly longer to sign and verify than ordinary signature schemes. Our scheme requires only fractional increases to the sizes of keys and signatures, and no additional public storage. Like the underlying Guillou-Quisquater scheme, our scheme is provably secure in the random oracle model.
</description>
<pubDate>Sun, 01 Apr 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149303</guid>
<dc:date>2001-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Framework for Schedule and Storage Optimization</title>
<link>https://hdl.handle.net/1721.1/149302</link>
<description>A Unified Framework for Schedule and Storage Optimization
Thies, William F.; Viven, Frederic; Sheldon, Jeffery W.; Amarasinghe, Saman
We present a unified mathematical framework for analyzing the tradeoffs between parallelism and storage allocation within a parallelizing compiler. Using this framework, we show how to find the best storage mapping for a given schedule, the best schedule for a given storage mapping, and the best storage mapping that is valid for all legal schedules. Our technique combines affine scheduling techniques with occupancy vector analysis, and incorporates general affine dependencies across statements and loop nests. We formulate the constraints imposed by the data dependencies and the storage mapping as a set of linear inequalities, and apply numerical programming techniques to efficiently solve for the best occupancy vector. We consider our method to be a first step towards automating a procedure that finds the optimal tradeoff between parallelism and storage space.
</description>
<pubDate>Wed, 01 Nov 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149302</guid>
<dc:date>2000-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Availability Study of Dynamic Voting Algorithms</title>
<link>https://hdl.handle.net/1721.1/149301</link>
<description>Availability Study of Dynamic Voting Algorithms
Ingols, Kyle; Keidar, Idit
Fault tolerant distributed systems often select a primary component to allow a subset of the processes to function when failures occur. The dynamic voting paradigm defines rules for selecting the primary component adaptively: when a partition occurs, if a majority of the previous primary component is connected, a new and possibly smaller primary is chosen. Several studies have shown that dynamic voting leads to more available solutions than other paradigms for maintaining a primary component. However, these studies have assumed that every attempt made by the algorithm to form a new primary component terminates successfully. Unfortunately, in real systems, this is not always the case: a change in connectivity can interrupt the algorithm while it is still attempting to form a new primary component; in such cases, algorithms typically block until processes can resolve the outcome of the interrupted attempt. This paper uses simulations to evaluate the effect of interruptions on the availability of dynamic voting algorithms. We study four dynamic voting algorithms, and identify two important characteristics that impact an algorithm's availability in runs with frequent connectivity changes. First, we show that the number of communication rounds exchanged in an algorithm plays a significant role in the availability achieved, especially in the degradation of availability as connectivity changes become more frequent. Second, we show that the number of processes that need to be present in order to resolve past attempts impacts the availability, especially during long runs with numerous connectivity changes.
</description>
<pubDate>Wed, 01 Nov 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149301</guid>
<dc:date>2000-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A General Framework for Highly Available Services based on Group Communication</title>
<link>https://hdl.handle.net/1721.1/149300</link>
<description>A General Framework for Highly Available Services based on Group Communication
Fekete, Alan; Keidar, Idit
We present a general framework for building highly available services. The framework uses group communication to coordinate a collection of servers. Our framework is configurable, in that one can adjust parameters such as the number of servers and the extent to which they are synchronized. We analyze the scenarios that can lead to the service availability being temporarily compromised, and we discuss the tradeoffs that govern the choice of parameters.
</description>
<pubDate>Wed, 01 Nov 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149300</guid>
<dc:date>2000-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent/Resettable Zero-Knowledge Protocols for NP in the Public Key Model</title>
<link>https://hdl.handle.net/1721.1/149299</link>
<description>Concurrent/Resettable Zero-Knowledge Protocols for NP in the Public Key Model
Micali, Silvio; Reyzin, Leonid
We propose a four-round protocol for concurrent and resettable zero-knowledge arguments for any language in NP, assuming the verifier has a pre-registered public key. We also propose a three-round protocol with an additional timing assumption.
</description>
<pubDate>Tue, 01 Aug 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149299</guid>
<dc:date>2000-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A New Self-Play Experiment in Computer Chess</title>
<link>https://hdl.handle.net/1721.1/149298</link>
<description>A New Self-Play Experiment in Computer Chess
Heinz, Ernst
This paper presents the results of a new self-play experiment in computer chess. It is the first such experiment ever to feature search depths beyond 9 plies and thousands of games for every single match. Overall, we executed 17,150 self-play games (1,050-3,000 per match) in one "calibration" match and seven "depth X+1 vs. X" handicap matches at fixed iteration depths ranging from 5-12 plies. For the experiment to be realistic and independently repeatable, we relied on a state-of-the-art commercial contestant: Fritz 6, one of the strongest modern chess programs available. The main result of our new experiment is that it shows the existence of diminishing returns for additional search in computer chess self-play by Fritz 6 with 95% statistical confidence. The diminishing returns manifest themselves in declining rates of won games and correspondingly increasing rates of drawn games for the deeper searching program versions. The rate of lost games, however, remains quite steady over the whole depth range of 5-12 plies.
</description>
<pubDate>Mon, 01 May 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149298</guid>
<dc:date>2000-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Testing of Multithreaded Programs</title>
<link>https://hdl.handle.net/1721.1/149297</link>
<description>Systematic Testing of Multithreaded Programs
Bruening, Derek; Chapin, John
We present a practical testing algorithm called ExitBlock that systematically and deterministically finds program errors resulting from unintended timing dependencies.  ExitBlock executes a program or a portion of a program on a given input multiple times, enumerating meaningful schedules in order to cover all program behaviors.
</description>
<pubDate>Mon, 01 May 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149297</guid>
<dc:date>2000-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Softspec:  Software-based Speculative Parallelism</title>
<link>https://hdl.handle.net/1721.1/149296</link>
<description>Softspec:  Software-based Speculative Parallelism
Bruening, Derek; Devabhaktuni, Srikrishna; Amarasinghe, Saman
We present Softspec, a technique for parallelizing sequential applications using only simple software mechanisms, requiring no complex program analysis or hardware support.  Softspec parallelizes loops whose memory references are stride-predictable.
</description>
<pubDate>Sat, 01 Apr 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149296</guid>
<dc:date>2000-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Information Theoretic Approach for Shared Bottleneck Inference Based on End-to-end Measurements</title>
<link>https://hdl.handle.net/1721.1/149295</link>
<description>An Information Theoretic Approach for Shared Bottleneck Inference Based on End-to-end Measurements
Katabi, Dina; Bazzi, Issam; Yang, Xiaowei
Recent years have marked a growing interest in studying Internet path characteristics. However, most of the tools currently available to an end system for performing such measurements are slow, inaccurate, and generate an excessive amount of probing traffic. This paper introduces entropy as a novel and efficient metric for discovering Internet path characteristics based on data collected by an end system. In particular, the paper presents an entropy-based technique that enables an end system to cluster flows it receives according to their shared bottleneck. Our mechanism relies solely on information extracted from the packets' inter-arrivals at the receiver. It does not generate any probing traffic and can use data extracted from both TCP and UDP flows. Moreover, it requires only a small number of packets from each flow, which makes it useful for short-lived flows. We report the result of running the algorithm on simulated data and Internet traffic.
</description>
<pubDate>Wed, 01 Mar 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149295</guid>
<dc:date>2000-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proving Correctness of a Distributed Shared Memory Implementation</title>
<link>https://hdl.handle.net/1721.1/149294</link>
<description>Proving Correctness of a Distributed Shared Memory Implementation
Castro, Miguel
DiSOM [3,4,2] is a distributed shared memory system that offers users an atomic collection of memory cells provided they satisfy certain well-formedness conditions. This report proves the correctness of DiSOM. The system partitions memory into a set of objects and implicitly associates a read-write lock with each object. Users synchronize accesses to these objects with read-write locks; the implementation guarantees progress and the usual read-write lock exclusion conditions.
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149294</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bitwidth Analysis with Application to Silicon Compilation</title>
<link>https://hdl.handle.net/1721.1/149293</link>
<description>Bitwidth Analysis with Application to Silicon Compilation
Stephenson, Mark; Babb, Jonathan; Amarasinghe, Saman
This paper introduces Bitwise, a compiler that minimizes the bitwidth - the number of bits used to represent each operand - for both integers and pointers in a program. By propagating static information both forward and backward in the program dataflow graph, Bitwise frees the programmer from declaring bitwidths in cases where the compiler can determine them automatically. We find a rich opportunity for bitwidth reduction in modern multimedia and streaming application workloads. For new architectures that support sub-word quantities, we expect that our bitwidth reductions will save power and increase processor performance.
</description>
<pubDate>Mon, 01 Nov 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149293</guid>
<dc:date>1999-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Superword Level Parallelism with Multimedia Instruction Sets</title>
<link>https://hdl.handle.net/1721.1/149292</link>
<description>Exploiting Superword Level Parallelism with Multimedia Instruction Sets
Larsen, Samuel; Amarasinghe, Saman
Increasing focus on multimedia applications has prompted the addition of multimedia extensions to most existing general-purpose microprocessors. This added functionality comes primarily in the addition of short SIMD instructions. Unfortunately, access to these instructions is limited to in-line assembly and library calls. Some researchers have proposed using vector compilers as a means of exploiting multimedia instructions. Although vectorization technology is well understood, it is inherently complex and fragile. In addition, it is incapable of locating SIMD-style parallelism within a basic block. In this paper we introduce the concept of Superword Level Parallelism (SLP), a novel way of viewing parallelism in multimedia applications. We believe SLP is fundamentally different from the loop-level parallelism exploited by traditional vector processing, and therefore warrants a different method for extracting it. We have developed a simple and robust compiler technique for detecting SLP that targets basic blocks rather than loop nests. As with techniques designed to extract ILP, ours is able to exploit parallelism both across loop iterations and within basic blocks. The result is an algorithm that provides excellent performance in several application domains. Experiments on scientific and multimedia benchmarks have yielded average performance improvements of 84%, and range as high as 253%.
</description>
<pubDate>Mon, 01 Nov 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149292</guid>
<dc:date>1999-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strength Reduction of Integer Division and Modulo Operations</title>
<link>https://hdl.handle.net/1721.1/149291</link>
<description>Strength Reduction of Integer Division and Modulo Operations
Amarasinghe, Saman; Lee, Walter; Greenwald, Ben
Integer division, modulo, and remainder operations are expressive and useful operations. They are logical candidates to express complex data accesses such as the wrap-around behavior in queues using ring buffers, array address calculations in data distribution, and cache-locality compiler optimizations. Experienced application programmers, however, avoid them because they are slow. Furthermore, while advances in both hardware and software have improved the performance of many parts of a program, few are applicable to division and modulo operations. This trend makes these operations increasingly detrimental to program performance.
</description>
<pubDate>Mon, 01 Nov 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149291</guid>
<dc:date>1999-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Correctness Proof for a Practical Byzantine-Fault-Tolerant Replication Algorithm</title>
<link>https://hdl.handle.net/1721.1/149290</link>
<description>A Correctness Proof for a Practical Byzantine-Fault-Tolerant Replication Algorithm
Castro, Miguel
We have developed a practical algorithm for state-machine replication [7,11] that tolerates Byzantine faults. The algorithm is described in [4]. It offers a strong safety property - it implements a linearizable [5] object such that all operations invoked on the object execute atomically despite Byzantine failures and concurrency. Unlike previous algorithms [11, 10, 6], ours works correctly in asynchronous systems like the Internet, and it incorporates important optimizations that enable it to outperform previous systems by more than an order of magnitude [4].
</description>
<pubDate>Tue, 01 Jun 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149290</guid>
<dc:date>1999-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The MASC Composable Computing Infrastructure for Intelligent Environments</title>
<link>https://hdl.handle.net/1721.1/149289</link>
<description>The MASC Composable Computing Infrastructure for Intelligent Environments
Chatterjee, Sandeep; Devadas, Srinivas
We present a system architecture and framework for creating rapidly deployable intelligent environments. The rapid pace of innovation of computer hardware and intelligent systems software leads to uncertainty that deters manufacturers from adopting a single processor, network, or software environment for placement into their products. The MASC Composable Computing infrastructure addresses these issues by providing an upgradable hardware and software infrastructure that supports rapid development and deployment, as well as simple and economical maintenance of intelligent environmental systems.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149289</guid>
</item>
<item>
<title>Authenticated Byzantine Fault Tolerance Without Public-Key Cryptography</title>
<link>https://hdl.handle.net/1721.1/149288</link>
<description>Authenticated Byzantine Fault Tolerance Without Public-Key Cryptography
Castro, Miguel; Liskov, Barbara
We have developed a practical state-machine replication algorithm that tolerates Byzantine faults: it works correctly in asynchronous systems like the Internet and it incorporates several optimizations that improve the response time of previous algorithms by more than an order of magnitude.
</description>
<pubDate>Tue, 01 Jun 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149288</guid>
<dc:date>1999-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Can Statistical Zero Knowledge be made Non-interactive? or On the Relationship of SZK and NISZK</title>
<link>https://hdl.handle.net/1721.1/149287</link>
<description>Can Statistical Zero Knowledge be made Non-interactive? or On the Relationship of SZK and NISZK
Goldreich, Oded; Sahai, Amit; Vadhan, Salil
We extend the study of non-interactive statistical zero-knowledge proofs. Our main focus is to compare the class NISZK of problems possessing such non-interactive proofs to the class SZK of problems possessing interactive statistical zero-knowledge proofs. Along these lines, we first show that if statistical zero knowledge is non-trivial then so is non-interactive statistical zero knowledge, where by non-trivial we mean that the class includes problems which are not solvable in probabilistic polynomial-time. (The hypothesis holds under various assumptions, such as the intractability of the Discrete Logarithm Problem.) Furthermore, we show that if NISZK is closed under complement, then in fact SZK = NISZK, i.e. all statistical zero-knowledge proofs can be made non-interactive. The main tools in our analysis are two promise problems that are natural restrictions of promise problems known to be complete for SZK. We show that these restricted problems are in fact complete for NISZK and use this relationship to derive our results comparing the two classes. The two problems refer to the statistical difference, and difference in entropy, respectively, of a given distribution from the uniform one. We also consider a weak form of NISZK, which requires only that for every inverse polynomial 1/p(n), there exists a simulator which achieves simulator deviation 1/p(n), and show that this weak form of NISZK actually equals NISZK.
</description>
<pubDate>Wed, 01 Sep 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149287</guid>
<dc:date>1999-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Client-Server Oriented Algorithm for Virtually Synchronous Group Membership in WANs</title>
<link>https://hdl.handle.net/1721.1/149286</link>
<description>A Client-Server Oriented Algorithm for Virtually Synchronous Group Membership in WANs
Keidar, Idit; Sussman, Jeremy; Marzullo, Keith; Dolev, Danny
We describe a novel scalable group membership algorithm designed for wide area networks (WANs). Our membership service does not evolve from existing LAN-oriented membership services; it was designed explicitly for WANs.
</description>
<pubDate>Tue, 01 Jun 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149286</guid>
<dc:date>1999-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>NAIVE - Network Aware Internet Video Encoding</title>
<link>https://hdl.handle.net/1721.1/149285</link>
<description>NAIVE - Network Aware Internet Video Encoding
Briceno, Hector; Gortler, Steven J.; McMillan, Leonard
The distribution of digital video content over computer networks has become commonplace. Unfortunately, most digital video encoding standards do not degrade gracefully in the face of packet losses, which often occur in a bursty fashion. We propose a new video encoding system that scales well with respect to the network's performance and degrades gracefully under packet loss.
</description>
<pubDate>Thu, 01 Apr 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149285</guid>
<dc:date>1999-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Load Balancing with Group Communication</title>
<link>https://hdl.handle.net/1721.1/149284</link>
<description>Dynamic Load Balancing with Group Communication
Dolev, Shlomi; Segala, Roberto; Shvartsman, Alexander A.
This work considers the problem of efficiently performing a set of tasks using a network of processors in the setting where the network is subject to dynamic reconfigurations, including partitions and merges. A key challenge for this setting is the implementation of dynamic load balancing that reduces the number of tasks that are performed redundantly because of the reconfigurations. We explore new approaches for load balancing in dynamic networks that can be employed by applications using a group communication service. The group communication services that we consider include a membership service (establishing new groups to reflect dynamic changes) but do not include maintenance of a primary component. For the n-processor, n-task load balancing problem defined in this work, the following specific results are obtained. For the case of fully dynamic changes including fragmentation and merges we show that the termination time of any on-line task assignment algorithm is greater than the termination time of an off-line task assignment algorithm by a factor greater than n/12. We present a load balancing algorithm that guarantees completion of all tasks in all fragments caused by partitions with work O(n + f · n) in the presence of f fragmentation failures. We develop an effective scheduling strategy for minimizing the task execution redundancy and we prove that our strategy provides each of the n processors with a schedule of Θ(n^(1/3)) tasks such that at most one task is performed redundantly by any two processors.
</description>
<pubDate>Fri, 01 Oct 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149284</guid>
<dc:date>1999-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity Results for Single Machine Distance Constrained Scheduling Problems</title>
<link>https://hdl.handle.net/1721.1/149283</link>
<description>Complexity Results for Single Machine Distance Constrained Scheduling Problems
Engels, Daniel W.; Karger, David; Devadas, Srinivas
Scheduling problems that involve timing constraints between tasks occur often in machine shop scheduling (e.g., job shop scheduling problems) and code scheduling during software compilation for pipelined processors (e.g., multiprocessor sequencing and scheduling problems).
</description>
<pubDate>Sun, 01 Nov 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149283</guid>
<dc:date>1998-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting all the Randomness from a Weakly Random Source</title>
<link>https://hdl.handle.net/1721.1/149282</link>
<description>Extracting all the Randomness from a Weakly Random Source
Vadhan, Salil
In this paper, we give two explicit constructions of extractors, both of which work for a source of any min-entropy on strings of length n. The first extracts any constant fraction of the min-entropy using O(log^2 n) additional random bits. The second extracts all the min-entropy using O(log^3 n) additional random bits. Both constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy.
</description>
<pubDate>Sat, 01 Aug 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149282</guid>
<dc:date>1998-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local Rules Modeling of Nucleation-Limited Virus Capsid Assembly</title>
<link>https://hdl.handle.net/1721.1/149281</link>
<description>Local Rules Modeling of Nucleation-Limited Virus Capsid Assembly
Schwartz, Russell; Prevelige, Peter E.; Berger, Bonnie
We describe an application of computer modeling to the study of the kinetics of virus capsid (protein shell) assembly.  We examine two proposed models of the source of nucleation-limited growth, an observed growth pattern in which initiation of new capsids occurs significantly more slowly than subunit addition onto initiated capsids.
</description>
<pubDate>Sat, 01 Aug 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149281</guid>
<dc:date>1998-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maps: a Compiler-Managed Memory System for RAW Machines</title>
<link>https://hdl.handle.net/1721.1/149280</link>
<description>Maps: a Compiler-Managed Memory System for RAW Machines
Barua, Rajeev; Lee, Walter; Amarasinghe, Saman; Agarwal, Anant
Microprocessors of the next decade and beyond will be built using VLSI chips employing billions of transistors. In this generation of microprocessors, achieving a high level of parallelism at a reasonable clock speed will require full distribution of machine resources.
</description>
<pubDate>Wed, 01 Jul 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149280</guid>
<dc:date>1998-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Indolent Closure Creation</title>
<link>https://hdl.handle.net/1721.1/149279</link>
<description>Indolent Closure Creation
Strumpen, Volker
A closure is a representation of a thread in memory, ready to be executed. The goal of this work is to create portable closures that can be transferred across binary incompatible architectures. Consequently, indolent closures are software-implemented, and rely on a copy mechanism which allows for potential data representation conversion on-the-fly.
</description>
<pubDate>Mon, 01 Jun 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149279</guid>
<dc:date>1998-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>More on Proofs of Knowledge</title>
<link>https://hdl.handle.net/1721.1/149278</link>
<description>More on Proofs of Knowledge
Halevi, Shai; Micali, Silvio
The notion of proofs of knowledge is central to cryptographic protocols, and many definitions for it have been proposed. In this work we explore a different facet of this notion, not addressed by prior definitions. Specifically, prior definitions concentrate on capturing the properties of the verifier, and do not pay much attention to the properties of the prover. Our new definition is strictly stronger than previous ones, and captures new and desirable properties. In particular, it guarantees prover feasibility, that is, it guarantees that the time spent by the prover in a proof of knowledge is comparable to the time it spends in an "extraction" of this knowledge. Our definition also enables one to consider meaningfully the case of a single, specific prover.
</description>
<pubDate>Fri, 01 May 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149278</guid>
<dc:date>1998-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computationally Sound Proofs</title>
<link>https://hdl.handle.net/1721.1/149277</link>
<description>Computationally Sound Proofs
Micali, Silvio
This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. Computationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random oracle or a new complexity assumption, they enable us to 1. prove that verifying is easier than deciding for all theorems; 2. provide a quite effective way to prove membership in computationally hard languages (such as Co-NP-complete ones); and 3. show that every computation possesses a short certificate vouching for its correctness. Finally, if a special type of computationally sound proof exists, we show that Blum's notion of program checking can be meaningfully broadened so as to prove that NP-complete languages are checkable.
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149277</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proving Correctness of a Controller Algorithm for the RAID Level 5 System</title>
<link>https://hdl.handle.net/1721.1/149276</link>
<description>Proving Correctness of a Controller Algorithm for the RAID Level 5 System
Vaziri, Mandana; Lynch, Nancy A.; Wing, Jeannette
Most RAID controllers implemented in industry are complicated and difficult to reason about. This complexity has led to software and hardware systems that are difficult to debug and hard to modify. To overcome this problem Courtright and Gibson have developed a rapid prototyping framework for RAID architectures which relies on a generic controller algorithm [1]. The designer of a new architecture needs to specify parts of the generic controller algorithm and must justify the validity of the controller algorithm obtained. However the latter task may be difficult due to the concurrency of operations on the disks. This is the reason why it would be useful to provide designers with an automated verification tool tailored specifically for the RAID prototyping system. As a first step towards building such a tool, our approach consists of studying several controller algorithms manually, to determine the key properties that need to be verified. This paper presents the modeling and verification of a controller algorithm for the RAID Level 5 System [5]. We model the system using I/O automata [6], give an external requirements specification, and prove that the model implements its specification. We use a key invariant to find an error in a controller algorithm for the RAID Level 6 System [5].
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149276</guid>
</item>
<item>
<title>Signing with Partially Adversarial Hashing</title>
<link>https://hdl.handle.net/1721.1/149275</link>
<description>Signing with Partially Adversarial Hashing
Micali, Silvio; Reyzin, Leonid
Digital signatures usually utilize one-way hash functions designed by other parties. It is thus possible that such hash functions are adversarially designed so as to enable forging signatures in otherwise secure schemes. We initiate the study of signing
</description>
<pubDate>Sun, 01 Feb 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149275</guid>
<dc:date>1998-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Inapproximability of the Shortest Vector in a Lattice Within Some Constant Factor</title>
<link>https://hdl.handle.net/1721.1/149274</link>
<description>On the Inapproximability of the Shortest Vector in a Lattice Within Some Constant Factor
Micciancio, Daniele
We show that computing the approximate length of the shortest vector in a lattice within a factor c is NP-hard for randomized reductions for any constant c &lt; √2.
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149274</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Space - Time Scheduling of Instruction-Level Parallelism on a Raw Machine</title>
<link>https://hdl.handle.net/1721.1/149273</link>
<description>Space - Time Scheduling of Instruction-Level Parallelism on a Raw Machine
Lee, Walter; Barua, R.; Srikrishna, D.; Babb, Jonathan; Sarkar, V.; Amarasinghe, Saman; Agarwal, Anant
Advances in VLSI technology will enable chips with over a billion transistors within the next decade. Unfortunately, the centralized-resource architectures of modern microprocessors are ill-suited to exploit such advances. Achieving a high level of parallelism at a reasonable clock speed requires distributing the processor resources - a trend already visible in the dual-register-file architecture of the Alpha 21264.
</description>
<pubDate>Mon, 01 Dec 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149273</guid>
<dc:date>1997-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specifying and Using a Partitionable Group Communication Service*</title>
<link>https://hdl.handle.net/1721.1/149272</link>
<description>Specifying and Using a Partitionable Group Communication Service*
Fekete, Alan; Lynch, Nancy A.; Shvartsman, Alexander A.
A new, simple formal specification is presented for a partitionable view-oriented group communication service. The specification consists of a state machine to express safety requirements and a timed trace property to express performance and fault-tolerance requirements.
</description>
<pubDate>Fri, 01 Aug 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149272</guid>
<dc:date>1997-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Acquisition of a Large Pose-Mosaic Dataset</title>
<link>https://hdl.handle.net/1721.1/149271</link>
<description>Acquisition of a Large Pose-Mosaic Dataset
Coorg, Satyan; Master, Neel; Teller, Seth
We describe the generation of a large pose-mosaic dataset: a collection of several thousand digital images, grouped by spatial position into spherical mosaics, each annotated with estimates of the acquiring camera's 6 DOF pose (3 DOF position and 3 DOF orientation) in an absolute coordinate system. The pose-mosaic dataset was generated by acquiring images, grouped by spatial position into nodes (essentially, spherical mosaics). A prototype mechanical pan-tilt head was manually deployed to acquire the data. Manual surveying provided initial position estimates for each node. A back-projecting scheme provided initial rotational estimates. Relative rotations within each node, along with internal camera parameters, were refined automatically by an optimization-correlation scheme. Relative translations and rotations among nodes were refined according to point correspondences, generated automatically and by a human operator. The resulting pose-imagery is self-consistent under a variety of evaluation metrics. Pose-mosaics are useful "first-class" data objects, for example in automatic reconstruction of textured 3D CAD models which represent urban exteriors.
</description>
<pubDate>Thu, 01 Jan 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149271</guid>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lazy Reference Counting for Transactional Storage Systems</title>
<link>https://hdl.handle.net/1721.1/149270</link>
<description>Lazy Reference Counting for Transactional Storage Systems
Castro, Miguel; Adya, Atul; Liskov, Barbara
HAC is a novel technique for managing the client cache in a distributed, persistent object storage system. In a companion paper, we showed that it outperforms other techniques across a wide range of cache sizes and workloads. This report describes HAC's solution to a specific problem: how to discard indirection table entries in an indirect pointer swizzling scheme.
</description>
<pubDate>Wed, 01 Oct 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149270</guid>
<dc:date>1997-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Sensitivity of Communication Mechanisms to Bandwidth and Latency</title>
<link>https://hdl.handle.net/1721.1/149269</link>
<description>The Sensitivity of Communication Mechanisms to Bandwidth and Latency
Chong, Frederic T.; Barua, Rajeev; Dahlgren, Fredrik; Kubiatowicz, John D.; Agarwal, Anant
The goal of this paper is to gain insight into the relative performance of communication mechanisms as bisection bandwidth and network latency vary. We compare shared memory with and without prefetching, message passing with interrupts and with polling, and bulk transfer via DMA. We present two sets of experiments involving four irregular applications on the MIT Alewife multiprocessor. First, we introduce I/O cross-traffic to vary bisection bandwidth. Second, we change processor clock speeds to vary relative network latency. We establish a framework from which to understand a range of results. On Alewife, shared memory provides good performance, even on producer-consumer applications with little data-reuse. On machines with lower bisection bandwidth and higher network latency, however, message-passing mechanisms become important. In particular, the high communication volume of shared memory threatens to become difficult to support on future machines without expensive, high-dimensional networks. Furthermore, the round-trip nature of shared memory may not be able to tolerate the latencies of future networks.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149269</guid>
</item>
<item>
<title>Matching and Pose Refinement with Camera Pose Estimates</title>
<link>https://hdl.handle.net/1721.1/149268</link>
<description>Matching and Pose Refinement with Camera Pose Estimates
Coorg, Satyan; Teller, Seth
This paper describes novel algorithms that use absolute camera pose information to identify correspondence among point features in hundreds or thousands of images. Our incidence counting algorithm is a geometric approach to matching: it matches features by extruding them into an absolute 3-D coordinate system, then searching 3-D space for regions into which many features project.
</description>
<pubDate>Mon, 01 Jan 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149268</guid>
<dc:date>1996-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light Traps</title>
<link>https://hdl.handle.net/1721.1/149267</link>
<description>Light Traps
Dawson, R.J. Macg.; McDonald, B.E.; Mycielski, J.; Pachter, L.
In the February 1992 issue of the American Mathematical Monthly, J. E. Connett  [1] asked whether it is possible to construct a 'light trap': a reflective-sided container with the property that a beam of light, shone into it from an appropriate direction, would be reflected inside it over and over again and never escape. Connett suggests that such a trap might be of value as a device to store light rays; however, the market for escape-proof golf holes might be even more lucrative!
</description>
<pubDate>Tue, 01 Oct 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149267</guid>
<dc:date>1996-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein Folding in the Generalized Hydrophobic-Polar Model on the Triangular Lattice</title>
<link>https://hdl.handle.net/1721.1/149266</link>
<description>Protein Folding in the Generalized Hydrophobic-Polar Model on the Triangular Lattice
Decatur, Scott E.
We consider the problem of determining the three-dimensional folding of a protein given its one-dimensional amino acid sequence. The model we use is based on the Hydrophobic-Polar (HP) model [2] on cubic lattices in which the goal is to find the fold with the maximum number of contacts between non-covalently linked hydrophobic amino acids.
</description>
<pubDate>Wed, 01 May 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149266</guid>
<dc:date>1996-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Survey of Active Network Research</title>
<link>https://hdl.handle.net/1721.1/149265</link>
<description>A Survey of Active Network Research
Tennenhouse, David L.; Smith, Jonathan M.; Sincoskie, W. David; Wetherall, David J.; Minden, Gary J.
Active networks are a novel approach to network architecture in which the switches of the network perform customized computations on the messages flowing through them. This approach is motivated by both lead user applications, which perform user-driven computation at nodes within the network today, and the emergence of mobile code technologies that make dynamic network service innovation attainable. In this paper, we discuss two approaches to the realization of active networks and provide a snapshot of the current research issues and activities.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149265</guid>
</item>
<item>
<title>UDM: User Direct Messaging for General-Purpose Multiprocessing</title>
<link>https://hdl.handle.net/1721.1/149264</link>
<description>UDM: User Direct Messaging for General-Purpose Multiprocessing
Mackenzie, Kenneth; Kubiatowicz, John; Frank, Matthew; Lee, Walter; Lee, Victor; Agarwal, Anant; Kaashoek, M. Frans
User Direct Messaging (UDM) allows user-level, processor-to-processor messaging to coexist with general multiprogramming and virtual memory. Direct messaging, where processors launch and receive messages in tens of cycles directly via network interface FIFOs as opposed to indirectly via memory, offers high message bandwidth and low delivery latency by avoiding memory delay and buffer management overhead.
</description>
<pubDate>Fri, 01 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149264</guid>
<dc:date>1996-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verification of the Randomized Consensus Algorithm of Aspnes and Herlihy: a Case Study</title>
<link>https://hdl.handle.net/1721.1/149263</link>
<description>Verification of the Randomized Consensus Algorithm of Aspnes and Herlihy: a Case Study
Pogosyants, Anna; Segala, Roberto; Lynch, Nancy A.
The Probabilistic I/O Automaton model of [20] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk.
</description>
<pubDate>Sun, 01 Jun 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149263</guid>
<dc:date>1997-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Oblivious Data Structure and its Applications to Cryptography</title>
<link>https://hdl.handle.net/1721.1/149262</link>
<description>An Oblivious Data Structure and its Applications to Cryptography
Micciancio, Daniele
</description>
<pubDate>Sat, 01 Jun 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149262</guid>
<dc:date>1996-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parameterized Types and Java</title>
<link>https://hdl.handle.net/1721.1/149261</link>
<description>Parameterized Types and Java
Bank, Joseph A.; Liskov, Barbara; Myers, Andrew C.
Java offers the real possibility that most programs can be written in a type-safe language. However, for Java to be broadly useful, it needs additional expressive power. This paper extends Java in one area where more power is needed: support for parametric polymorphism, which allows the definition and implementation of generic abstractions.
</description>
<pubDate>Wed, 01 May 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149261</guid>
<dc:date>1996-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conservative Radiance Interpolants for Ray Tracing</title>
<link>https://hdl.handle.net/1721.1/149260</link>
<description>Conservative Radiance Interpolants for Ray Tracing
Teller, Seth; Bala, Kavita; Dorsey, Julie
Classical ray-tracing algorithms compute radiance returning to the eye along one or more sample rays through each pixel of an image. The output of a ray-tracing algorithm, although potentially photorealistic, is a two-dimensional quantity, an image array of radiance values, and is not directly useful from any viewpoint other than the one for which it was computed.
</description>
<pubDate>Mon, 01 Apr 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149260</guid>
<dc:date>1996-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cilk: An Efficient Multithreaded Runtime System</title>
<link>https://hdl.handle.net/1721.1/149259</link>
<description>Cilk: An Efficient Multithreaded Runtime System
Blumofe, Robert D.; Joerg, Christopher F.; Kuszmaul, Bradley C.; Leiserson, Charles E.; Randall, Keith H.; Zhou, Yuli
Cilk (pronounced "silk") is a C-based runtime system for multithreaded parallel programming. In this paper, we document the efficiency of the Cilk work-stealing scheduler, both empirically and analytically. We show that on real and synthetic applications, the "work" and  "critical-path length" of a Cilk computation can be used to model performance accurately.
</description>
<pubDate>Mon, 01 Jan 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149259</guid>
<dc:date>1996-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Brief Overview of the GSM Radio Interface</title>
<link>https://hdl.handle.net/1721.1/149258</link>
<description>A Brief Overview of the GSM Radio Interface
Turletti, Thierry
This technical memorandum contains a compilation of several papers, reports and books relative to the GSM-900 radio interface. It is not exhaustive and it is restricted to the Traffic Channel/Full-Rate Speech (TCH/FS).
</description>
<pubDate>Fri, 01 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149258</guid>
<dc:date>1996-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Spatially and Temporally Coherent Object Space Visibility Algorithm</title>
<link>https://hdl.handle.net/1721.1/149257</link>
<description>A Spatially and Temporally Coherent Object Space Visibility Algorithm
Coorg, Satyan; Teller, Seth
Efficiently identifying polygons that are visible from a changing synthetic viewpoint is an important problem in computer graphics. In many complex geometric models, most parts of the model are invisible from the instantaneous viewpoint. Despite this, hidden-surface algorithms like the z-buffer or BSP tree often expend significant computation processing invisible portions of the model.
</description>
<pubDate>Thu, 01 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149257</guid>
<dc:date>1996-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling and Verification of Automated Transit Systems, Using Timed Automata, Invariants and Simulations</title>
<link>https://hdl.handle.net/1721.1/149256</link>
<description>Modelling and Verification of Automated Transit Systems, Using Timed Automata, Invariants and Simulations
Lynch, Nancy A.
This paper contains an overview of recent and current work in the M.I.T. Theory of Distributed Systems research group on modelling, verifying and analyzing problems arising in automated transit systems. The problems we consider are inspired by design work in the Personal Rapid Transit (PRT) project at Raytheon (as described to us by Toy Johnson, Steve Spielman and Norm Delisle), and in the California PATH project (as described to us by Shankar Sastry, Datta Godbole and John Lygeros) [7, 6, 13, 3].
</description>
<pubDate>Fri, 01 Dec 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149256</guid>
<dc:date>1995-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid I/O Automata</title>
<link>https://hdl.handle.net/1721.1/149255</link>
<description>Hybrid I/O Automata
Lynch, Nancy A.; Segala, Roberto; Vaandrager, Frits; Weinberg, H. B.
We propose a new hybrid I/O automaton model that is capable of describing both continuous and discrete behavior. The model, which extends the timed I/O automaton model of [12, 7] and the phase transition system models of [15, 2], allows communication among components using both shared variables and shared actions.
</description>
<pubDate>Fri, 01 Dec 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149255</guid>
<dc:date>1995-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced Certificate Revocation System</title>
<link>https://hdl.handle.net/1721.1/149254</link>
<description>Enhanced Certificate Revocation System
Micali, Silvio
We apply off-line digital signatures to provide a novel approach to certificate revocation. Our approach dispenses with traditional CRLs and yields public-key infrastructures that are several hundred times cheaper to run than traditional ones. More generally, our technology also yields effective methods to lengthen the validity of a digital signature.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149254</guid>
</item>
<item>
<title>Symmetric Alternation Captures BPP</title>
<link>https://hdl.handle.net/1721.1/149253</link>
<description>Symmetric Alternation Captures BPP
Russell, Alexander; Sundaram, Ravi
We introduce the natural class Sp2 containing those languages which may be expressed in terms of two symmetric quantifiers. This class lies between Δp2 and Σp2 ∩ Πp2 and naturally generates a "symmetric" hierarchy corresponding to the polynomial-time hierarchy. We demonstrate, using the probabilistic method, new containment theorems for BPP.
</description>
<pubDate>Wed, 01 Nov 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149253</guid>
<dc:date>1995-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Temporally Coherent Conservative Visibility</title>
<link>https://hdl.handle.net/1721.1/149252</link>
<description>Temporally Coherent Conservative Visibility
Coorg, Satyan; Teller, Seth
Efficiently identifying polygons that are visible from a changing synthetic viewpoint is an important problem in computer graphics. Even with hardware support, simple algorithms like depth-buffering cannot achieve interactive frame rates when applied to geometric models with many polygons. However, a visibility algorithm that exploits the occlusion properties of the scene to identify a superset of visible polygons, without touching most invisible polygons, could achieve fast rates while viewing such models.
</description>
<pubDate>Wed, 01 Nov 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149252</guid>
<dc:date>1995-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Partitioning of Parallel Loops and Data Arrays for Distributed Shared-memory Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149251</link>
<description>Automatic Partitioning of Parallel Loops and Data Arrays for Distributed Shared-memory Multiprocessors
Agarwal, Anant; Kranz, David A.; Natarajan, Venkat
This paper presents a theoretical framework for automatically partitioning parallel loops to minimize cache coherency traffic on shared-memory multiprocessors.  While several previous papers have looked at hyperplane partitioning of iteration spaces to reduce communication traffic, the problem of deriving the optimal tiling parameters for minimal communication in loops with general affine index expressions had remained open. Our paper solves this open problem by presenting a method for deriving an optimal hyperparallelepiped tiling of iteration spaces for minimal communication in multiprocessors with caches. We show that the same theoretical framework can also be used to determine optimal tiling parameters for both data and loop partitioning in distributed memory multicomputers. Our framework uses matrices to represent iteration and data space mappings and the notion of uniformly intersecting references to capture temporal locality in array references. We introduce the notion of data footprints to estimate the communication traffic between processors and use linear algebraic methods and lattice theory to compute precisely the size of data footprints. We have implemented this framework in a compiler for Alewife, a distributed shared-memory multiprocessor.
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149251</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guaranteed Partial Key-Escrow</title>
<link>https://hdl.handle.net/1721.1/149250</link>
<description>Guaranteed Partial Key-Escrow
Micali, Silvio
</description>
<pubDate>Tue, 01 Aug 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149250</guid>
<dc:date>1995-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Redundancy Achieved by Huffman Codes</title>
<link>https://hdl.handle.net/1721.1/149249</link>
<description>On the Redundancy Achieved by Huffman Codes
De Prisco, Roberto; De Santis, Alfredo
It has been recently proved that the redundancy r of any discrete memoryless source satisfies r &lt; 1 - H(pn), where pn is the least likely source letter probability. This bound is achieved only by sources consisting of two letters. We prove a sharper bound if the number of source letters is greater than two. Also provided is a new upper bound on r, as a function of the two least likely source letter probabilities, which improves on previous results.
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149249</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Power of Team Exploration: Two Robots Can Learn Unlabeled Directed Graphs</title>
<link>https://hdl.handle.net/1721.1/149248</link>
<description>The Power of Team Exploration: Two Robots Can Learn Unlabeled Directed Graphs
Bender, Michael A.; Slonim, Donna K.
We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for robots, which helps the robots recognize certain previously-seen nodes. We present an algorithm in which the robots learn the graph and the homing sequence simultaneously by actively wandering through the graph.
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149248</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>What are principal typings and what are they good for?</title>
<link>https://hdl.handle.net/1721.1/149247</link>
<description>What are principal typings and what are they good for?
Jim, Trevor
We demonstrate the pragmatic value of the principal typing property, a property more general than ML's principal type property, by studying a type system with principal typings. The type system is based on rank 2 intersection types and is closely related to ML. Its principal typing property provides elegant support for separate compilation, including "smartest recompilation" and incremental type inference, and for accurate type error messages. Moreover, it motivates a novel rule for typing recursive definitions that can type many examples of polymorphic recursion. Type inference remains decidable; this is surprising, since type inference for ML plus polymorphic recursion is undecidable.
</description>
<pubDate>Tue, 01 Aug 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149247</guid>
<dc:date>1995-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rank 2 Type Systems and Recursive Definitions</title>
<link>https://hdl.handle.net/1721.1/149246</link>
<description>Rank 2 Type Systems and Recursive Definitions
Jim, Trevor
We demonstrate an equivalence between the rank 2 fragments of the polymorphic lambda calculus (System F) and the intersection type discipline: exactly the same terms are typable in each system. An immediate consequence is that typability in the rank 2 intersection system is DEXPTIME-complete. We introduce a rank 2 system combining intersections and polymorphism and prove that it types exactly the same terms as the other rank 2 systems. The combined system suggests a new rule for typing recursive definitions. The result is a rank 2 type system with decidable type inference that can type some interesting examples of polymorphic recursion. Finally, we discuss some applications of the type system in data representation optimizations such as unboxing and overloading.
</description>
<pubDate>Tue, 01 Aug 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149246</guid>
<dc:date>1995-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Charge-Based Proportional Scheduling</title>
<link>https://hdl.handle.net/1721.1/149245</link>
<description>Charge-Based Proportional Scheduling
Maheshwari, Umesh
Most priority-based schedulers lack the ability to control the relative execution rates of applications. A recent scheme, called lottery scheduling [WW94], uses randomization to control the execution rates of threads in proportion to the tickets allocated to them.
</description>
<pubDate>Wed, 01 May 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149245</guid>
<dc:date>1996-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stride Scheduling: Deterministic Proportional-Share Resource Management</title>
<link>https://hdl.handle.net/1721.1/149244</link>
<description>Stride Scheduling: Deterministic Proportional-Share Resource Management
Waldspurger, Carl A.; Weihl, William E.
This paper presents stride scheduling, a deterministic scheduling technique that efficiently supports the same flexible resource management abstractions introduced by lottery scheduling. Compared to lottery scheduling, stride scheduling achieves significantly improved accuracy over relative throughput rates, with significantly lower response time variability.
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149244</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local Rule Switching Mechanism for Viral Shell Geometry</title>
<link>https://hdl.handle.net/1721.1/149243</link>
<description>Local Rule Switching Mechanism for Viral Shell Geometry
Berger, Bonnie; Shor, Peter W.
In a previous paper [Berger et al., PNAS 91:7732, 1994] a theory of virus shell formation was proposed in which shell assembly is directed by local interactions of the coat and scaffolding subunits. This theory requires that the same chemical subunits assume different, stable conformations depending on their position in the shell.
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149243</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>'C: A Language for High-Level, Efficient, and Machine-independent Dynamic Code Generation</title>
<link>https://hdl.handle.net/1721.1/149242</link>
<description>'C: A Language for High-Level, Efficient, and Machine-independent Dynamic Code Generation
Engler, Dawson R.; Hsieh, Wilson C.; Kaashoek, M. Frans
Dynamic code generation allows specialized code sequences to be crafted using runtime information. Since this information is by definition not available statically, the use of dynamic code generation can achieve performance inherently beyond that of static code generation. Previous attempts to support dynamic code generation have been low-level, expensive, or machine-dependent.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149242</guid>
</item>
<item>
<title>Algorithms for Modeling and Measuring Proteins</title>
<link>https://hdl.handle.net/1721.1/149241</link>
<description>Algorithms for Modeling and Measuring Proteins
Slonim, Donna K.
In this paper we investigate efficient algorithms for computing the volume and surface area of protein molecules, which are modeled by sets of overlapping spheres in R3. We summarize and critique three papers in the field, and we add several new contributions of our own.
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149241</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Case Study of Shared Memory and Message Passing: The Triangle Puzzle</title>
<link>https://hdl.handle.net/1721.1/149240</link>
<description>A Case Study of Shared Memory and Message Passing: The Triangle Puzzle
Lew, Kevin
This thesis is the first controlled case study that compares shared-memory and message-passing implementations of an application that solves the triangle puzzle and runs on actual hardware: only the communication interfaces used by the implementations vary; all other system components remained fixed. The implementations run on the MIT Alewife machine, a cache-coherent, distributed-shared-memory multiprocessor that efficiently supports both the shared-memory and message-passing programming models.
</description>
<pubDate>Sun, 01 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149240</guid>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication-Minimal Partitioning of Parallel Loops and Data Arrays for Cache-Coherent Distributed-Memory Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149239</link>
<description>Communication-Minimal Partitioning of Parallel Loops and Data Arrays for Cache-Coherent Distributed-Memory Multiprocessors
Barua, Rajeev; Kranz, David; Agarwal, Anant
Harnessing the full performance potential of cache-coherent distributed shared memory multiprocessors without inordinate user effort requires a compilation technology that can automatically manage multiple levels of memory hierarchy. This paper describes a working compiler for such machines that automatically partitions loops and data arrays to optimize locality of access.
</description>
<pubDate>Sun, 01 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149239</guid>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing Partitioned Arrays in Distributed Memory Multiprocessors - the Software Virtual Memory Approach</title>
<link>https://hdl.handle.net/1721.1/149238</link>
<description>Addressing Partitioned Arrays in Distributed Memory Multiprocessors - the Software Virtual Memory Approach
Barua, Rajeev; Kranz, David; Agarwal, Anant
Harnessing the full performance potential of cache-coherent distributed shared memory multiprocessors without inordinate user effort requires a compilation technology that can automatically manage multiple levels of memory hierarchy. This paper describes a working compiler for such machines that automatically partitions loops and data arrays to optimize locality of access.
</description>
<pubDate>Thu, 01 Dec 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149238</guid>
<dc:date>1994-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetric Alternation Captures BPP</title>
<link>https://hdl.handle.net/1721.1/149237</link>
<description>Symmetric Alternation Captures BPP
Russell, Alexander; Sundaram, Ravi
We introduce the natural class Sp2 containing those languages which may be expressed in terms of two symmetric quantifiers. This class lies between Δp2 and Σp2 ∩ Πp2 and naturally generates a "symmetric" hierarchy corresponding to the polynomial-time hierarchy. We demonstrate, using the probabilistic method, new containment theorems for BPP.
</description>
<pubDate>Wed, 01 Nov 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149237</guid>
<dc:date>1995-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Mathematics of Virus Shell Assembly</title>
<link>https://hdl.handle.net/1721.1/149236</link>
<description>On the Mathematics of Virus Shell Assembly
Berger, Bonnie; Shor, Peter W.
A local rule theory is developed which shows that the self-assembly of icosahedral virus shells may depend on only the lower-level interactions of a protein subunit with its neighbors, i.e. local rules, rather than on larger structural building blocks. The local rule theory provides a framework for understanding the assembly of icosahedral viruses.
</description>
<pubDate>Fri, 01 Jul 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149236</guid>
<dc:date>1994-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Sequentially Consistent Shared Objects using Broadcast and Point-To-Point Communications</title>
<link>https://hdl.handle.net/1721.1/149235</link>
<description>Implementing Sequentially Consistent Shared Objects using Broadcast and Point-To-Point Communications
Fekete, Alan; Kaashoek, M. Frans; Lynch, Nancy A.
A distributed algorithm that implements a sequentially consistent collection of shared read/update objects using a combination of broadcast and point-to-point communication is presented and proved correct. This algorithm is a generalization of one used in the Orca shared object system. The algorithm caches objects in the local memory of processors according to application needs; each read operation accesses a single copy of the object, while each update accesses all copies. Copies of all the objects are kept consistent using a strategy based on sequence numbers for broadcasts.
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149235</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>CRL: High-Performance All-Software Distributed Shared Memory</title>
<link>https://hdl.handle.net/1721.1/149234</link>
<description>CRL: High-Performance All-Software Distributed Shared Memory
Johnson, Kirk L.; Kaashoek, M. Frans; Wallach, Deborah A.
This paper introduces the C Region Library (CRL), a new all-software distributed shared memory (DSM) system. CRL requires no special compiler, hardware, or operating system support beyond the ability to send and receive messages. It provides a simple, portable shared address space programming model that is capable of delivering good performance on a wide range of multiprocessor and distributed system architectures.
</description>
<pubDate>Wed, 01 Mar 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149234</guid>
<dc:date>1995-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Piecemeal Graph Exploration by a Mobile Robot</title>
<link>https://hdl.handle.net/1721.1/149233</link>
<description>Piecemeal Graph Exploration by a Mobile Robot
Awerbuch, Baruch; Betke, Margrit; Rivest, Ronald; Singh, Mona
We study how a mobile robot can piecemeal learn an unknown environment. The robot's goal is to learn a complete map of its environment, while satisfying the constraint that it must return every so often to its starting position (for refueling, say). The environment is modelled as an arbitrary, undirected graph, which is initially unknown to the robot.
</description>
<pubDate>Sun, 01 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149233</guid>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Non-interactive Proofs to Achieve Independence Efficiently and Securely</title>
<link>https://hdl.handle.net/1721.1/149232</link>
<description>Using Non-interactive Proofs to Achieve Independence Efficiently and Securely
Gennaro, Rosario
Independence or simultaneous broadcast is a fundamental tool to achieve security in fault-tolerant distributed computing. It allows n players to commit to independently chosen values. In this paper we present a constant-round protocol to perform this task. Previous solutions required O(log n) rounds. In the process we develop a new and stronger formal definition for this problem.
</description>
<pubDate>Tue, 01 Nov 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149232</guid>
<dc:date>1994-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of the Domain Name System for Dynamic References in an Online Library</title>
<link>https://hdl.handle.net/1721.1/149231</link>
<description>The Use of the Domain Name System for Dynamic References in an Online Library
Alavi, Ali
Persistent, dynamic references (or links) to remote documents are an essential part of an online library. This thesis examines two distributed database systems, X.500 and the Domain Name System (DNS), upon which to build dynamic references. DNS was chosen and was used to design a model and build a sample dynamic reference system. This system seems to exhibit the scalability, robustness, usability, and efficiency necessary for building global distributed online libraries.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149231</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Can We Compute with Arrays of Nanostructures?</title>
<link>https://hdl.handle.net/1721.1/149230</link>
<description>How Can We Compute with Arrays of Nanostructures?
Biafore, Michael
In part the goal of the Ultra Program is to extract useful computation from nanometer scale effects. To accomplish this goal those of us who are computer scientists must communicate clearly to those of you who are chemists and device physicists precisely what kinds of "computational primitives" you need to obtain from a nanoscale structure  before we can contemplate using it as a building block for ultra-dense ultra-fast computation.
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149230</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Generalized Railroad Crossing: A Case Study in Formal Verification of Real-time Systems</title>
<link>https://hdl.handle.net/1721.1/149229</link>
<description>The Generalized Railroad Crossing: A Case Study in Formal Verification of Real-time Systems
Heitmeyer, Constance; Lynch, Nancy A.
A new solution to the Generalized Railroad Crossing problem, based on timed automata, invariants and simulation mappings, is presented and evaluated. The solution shows formally the correspondence between four system descriptions: an axiomatic specification, an operational specification, a discrete system implementation, and a system implementation that works with a continuous gate model.
</description>
<pubDate>Tue, 01 Nov 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149229</guid>
<dc:date>1994-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Implementation of A Hierarchical Weighted Fair Queue Packet Scheduler</title>
<link>https://hdl.handle.net/1721.1/149228</link>
<description>An Efficient Implementation of A Hierarchical Weighted Fair Queue Packet Scheduler
Ndiaye, Oumar
The technical developments in computer networks in recent years have spawned the possibility of merging different services into a single Integrated Service Packet Network (ISPN). The types of service quality required by each of the individual services in an ISPN often differ greatly. Thus, the packet scheduling algorithms used in such networks  must be flexible enough to allocate the available link shares according to the service quality requirements of the different services.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149228</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Minimal Perfect Hashing in Main Memory Indexing</title>
<link>https://hdl.handle.net/1721.1/149227</link>
<description>Application of Minimal Perfect Hashing in Main Memory Indexing
Ho, Yuk
With the rapid decrease in the cost of random access memory (RAM), it will soon become economically feasible to place full-text indexes of a library in main memory. One essential component of the indexing system is a hashing algorithm, which maps a keyword into the memory address of the index information corresponding to that keyword. This thesis studies the application of the minimal perfect hashing algorithm in main memory indexing. This algorithm is integrated into the index search engine of the Library 2000 system, a digital on-line library system. The performance of this algorithm is compared with that of the open-address hashing scheme. We find that although the minimal perfect hashing algorithm needs fewer keyword comparisons per keyword search on average, its hashing performance is slower than the open-addressing scheme.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149227</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid Caching for Scalable Object Systems (Think Globally, Act Locally)</title>
<link>https://hdl.handle.net/1721.1/149226</link>
<description>Hybrid Caching for Scalable Object Systems (Think Globally, Act Locally)
O'Toole, James; Shrira, Liuba
Object-based client caching allows clients to keep more frequently accessed objects while discarding colder objects that reside on the same page. However, when these objects are modified and sent to the server, the server may need to read the corresponding page from disk to install the update. These installation reads are not required with a page-based cache because whole pages are sent to the server.
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149226</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Opportunistic Log: Efficient Installation Reads in a Reliable Object Server</title>
<link>https://hdl.handle.net/1721.1/149225</link>
<description>Opportunistic Log: Efficient Installation Reads in a Reliable Object Server
O'Toole, James; Shrira, Liuba
In a distributed storage system, client caches managed on the basis of small granularity objects can provide better memory utilization than page-based caches. However, object servers, unlike page servers, must perform additional disk reads. These installation reads are required to install modified objects onto their corresponding disk pages.
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149225</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordinated Resource Management in a Replicated Object Server</title>
<link>https://hdl.handle.net/1721.1/149224</link>
<description>Coordinated Resource Management in a Replicated Object Server
Ghemawat, Sanjay; Gruber, Robert; O'Toole, James, Jr.; Shrira, Liuba
We propose several new techniques for resource management in a replicated object server.  By coordinating cache and disk usage among the replicas, these techniques increase throughput and reduce fetch latency.  Cache splitting speeds up fetches by avoiding redundant cache entries, effectively increasing the cache size.  Coordinated writing schedules disk writes to ensure that one replica is always available to service fetches. We investigate the performance of a replicated server using these techniques, and we present simulation results showing that these techniques provide substantial performance improvements across a variety of workloads.
</description>
<pubDate>Tue, 01 Feb 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149224</guid>
<dc:date>1994-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Clock Synchronization Under Different Delay Assumptions</title>
<link>https://hdl.handle.net/1721.1/149223</link>
<description>Optimal Clock Synchronization Under Different Delay Assumptions
Attiya, Hagit; Herzberg, Amir; Rajsbaum, Sergio
The problem of achieving optimal clock synchronization in a communication network with arbitrary topology and perfect clocks (that do not drift) is studied. Clock synchronization algorithms are presented for a large family of delay assumptions. Our algorithms are modular and consist of three major components. The first component holds for any type of delay assumptions; the second component holds for a large, natural family of local delay assumptions; the third component has to be tailored for each specific delay assumption. Optimal clock synchronization algorithms are derived for several types of delay assumptions by appropriately tuning the third component. The delay assumptions include lower and upper delay bounds, no bounds at all, and bounds on the difference of the delay in opposite directions. In addition, our model handles systems where some processors are connected by broadcast networks in which every message arrives at all processors at approximately the same time. A composition theorem allows combinations of different assumptions for different links or even for the same link; such mixtures are common in practice. Our results achieve the best possible precision in each execution. This notion of optimality is stronger than the more common notion of worst-case optimality. The new notion of optimality applies to systems where the worst-case behavior of any clock synchronization algorithm is inherently unbounded.
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149223</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>FUGU: Implementing Translation and Protection in a Multiuser, Multimodel Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/149222</link>
<description>FUGU: Implementing Translation and Protection in a Multiuser, Multimodel Multiprocessor
Mackenzie, Kenneth; Kubiatowicz, John; Agarwal, Anant; Kaashoek, M. Frans
Multimodel multiprocessors provide both shared memory and message passing primitives to the user for efficient communication. In a multiuser machine, translation permits machine resources to be virtualized and protection permits users to be isolated. The challenge in a multiuser multiprocessor is to provide translation and protection sufficient for general-purpose computing without compromising communication performance, particularly the performance of communication between parallel threads belonging to the same computation.
</description>
<pubDate>Sat, 01 Oct 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149222</guid>
<dc:date>1994-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verifiable Secret Sharing as Secure Computation</title>
<link>https://hdl.handle.net/1721.1/149221</link>
<description>Verifiable Secret Sharing as Secure Computation
Gennaro, Rosario; Micali, Silvio
We present a stronger notion of verifiable secret sharing and exhibit a protocol implementing it.  We show that our new notion is preferable to the old ones whenever verifiable secret sharing is used as a tool within larger protocols, rather than being a goal in itself.
</description>
<pubDate>Tue, 01 Mar 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149221</guid>
<dc:date>1994-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Secure and Efficient Digital Signature Algorithm</title>
<link>https://hdl.handle.net/1721.1/149220</link>
<description>A Secure and Efficient Digital Signature Algorithm
Micali, Silvio
</description>
<pubDate>Tue, 01 Mar 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149220</guid>
<dc:date>1994-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>PAC-Learning Prolog Clauses With or Without Errors</title>
<link>https://hdl.handle.net/1721.1/149219</link>
<description>PAC-Learning Prolog Clauses With or Without Errors
Gennaro, Rosario
Recently researchers have been interested in trying to expand the domain of learnability to subsets of first-order logic, in particular Prolog programs. This new research area has been named Inductive Logic Programming (ILP).  In a nutshell we can describe a generic ILP problem as follows: given a set E of (positive and negative) examples of a target predicate, and some background knowledge B about the world (usually a logic program including facts and auxiliary predicates), the task is to find a logic program H (our hypothesis) such that all positive examples can be deduced from B and H, while no negative example can.  In this paper we review some of the results achieved in this area and discuss the techniques used. Moreover we prove the following new results:  (1) Predicates described by non-recursive, local clauses of at most k literals are PAC-learnable under any distribution. This generalizes a previous result that was valid only for constrained clauses.  (2) Predicates that are described by k non-recursive local clauses are PAC-learnable under any distribution. This generalizes a previous result that was non-constructive and valid only under some class of distributions.  Finally we introduce what we believe is the first theoretical framework for learning Prolog clauses in the presence of errors.  To this purpose we introduce a new noise model, which we call the fixed attribute noise model, for learning propositional concepts over the Boolean domain. This new noise model may be of independent interest.
</description>
<pubDate>Tue, 01 Feb 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149219</guid>
<dc:date>1994-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Comparison of Simulation Techniques and Algebraic Techniques for Verifying Concurrent Systems</title>
<link>https://hdl.handle.net/1721.1/149218</link>
<description>A Comparison of Simulation Techniques and Algebraic Techniques for Verifying Concurrent Systems
Lynch, Nancy A.; Segala, Roberto
Simulation-based assertional techniques and process algebraic techniques are two of the major methods that have been proposed for the verification of concurrent and distributed systems. It is shown how each of these techniques can be applied to the task of verifying systems described as input/output automata: first using forward simulations, an execution correspondence lemma, and a simple fairness argument, and second using deductions within the process algebra DIOA for I/O automata. An extended evaluation and comparison of the two methods is given.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149218</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anatomy of a Message in the Alewife Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/149217</link>
<description>Anatomy of a Message in the Alewife Multiprocessor
Kubiatowicz, John; Agarwal, Anant
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149217</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Multiprocessor Cache Behavior Through Data Reference Modeling</title>
<link>https://hdl.handle.net/1721.1/149216</link>
<description>Analyzing Multiprocessor Cache Behavior Through Data Reference Modeling
Tsai, Jory; Agarwal, Anant
This paper develops a data reference modeling technique to estimate with high accuracy the cache miss ratio in cache-coherent multiprocessors. The technique involves analyzing the dynamic data referencing behavior of parallel algorithms. Data reference modeling first identifies the different types of shared data blocks accessed during the execution of a parallel algorithm, then captures in a few parameters the cache behavior of each shared block as a function of the problem size, number of processors, and cache size, and finally constructs an analytical expression for each algorithm to estimate the cache miss ratio.
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149216</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulation Techniques for Proving Properties of Real-time Systems</title>
<link>https://hdl.handle.net/1721.1/149215</link>
<description>Simulation Techniques for Proving Properties of Real-time Systems
Lynch, Nancy A.
The method of simulations is an important technique for reasoning about real-time and other timing-based systems. It is adapted from an analogous method for untimed systems. This paper presents the simulation method in the context of a very general automaton (i.e., labelled transition system) model for timing-based systems. Sketches are presented of several typical examples for which the method has been used successfully. Other complementary tools are also described, in particular invariants for safety proofs, progress functions for timing proofs, and execution correspondences for liveness proofs.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149215</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Software-Extended Coherent Shared Memory: Performance and Cost</title>
<link>https://hdl.handle.net/1721.1/149214</link>
<description>Software-Extended Coherent Shared Memory: Performance and Cost
Chaiken, David; Agarwal, Anant
This paper evaluates the tradeoffs involved when designing a directory-based protocol that implements coherent shared memory through a combination of hardware and software mechanisms. The fundamental design decisions involve balancing the size and cost of the hardware directory and control, the complexity of the software interface, and the overall performance of the system. In order to study these design problems, we experiment with a spectrum of cache-coherence schemes, ranging from a full-map directory that supports all sharing patterns in hardware to an implementation that performs all memory-side actions in software.
</description>
<pubDate>Fri, 01 Oct 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149214</guid>
<dc:date>1993-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Revitalized Relationship Between Probabilistically Checkable Debate Systems, IP, and PSpace</title>
<link>https://hdl.handle.net/1721.1/149213</link>
<description>The Revitalized Relationship Between Probabilistically Checkable Debate Systems, IP, and PSpace
Russell, Alexander; Sundaram, Ravi
In 1990, PSPACE was shown to be identical to IP, the class of languages with interactive proofs [11, 2]. Recently, PSPACE was again recharacterized, this time in terms of (Random) Probabilistically Checkable Debate Systems [4, 5]. In particular, it was shown that PSPACE = PCDS[log n, 1] = RPCDS[log n, 1]. We study the relativized behaviour of the classes defined by these debate systems in comparison with the classes IP and PSPACE.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149213</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Wires: Overcoming Pin Limitations in FPGA-based Logic Emulators</title>
<link>https://hdl.handle.net/1721.1/149212</link>
<description>Virtual Wires: Overcoming Pin Limitations in FPGA-based Logic Emulators
Babb, Jonathan; Tessier, Russell; Agarwal, Anant
Existing FPGA-based logic emulators suffer from limited inter-chip communication bandwidth, resulting in low gate utilization (10 to 20 percent). This resource imbalance increases the number of chips needed to emulate a particular logic design and thereby decreases emulation speed, since signals must cross more chip boundaries. Current emulators only use a fraction of potential communication bandwidth because they dedicate each FPGA pin (physical wire) to a single emulated signal (logical wire). These logical wires are not all active simultaneously and are switched only at emulation clock speeds.
</description>
<pubDate>Sun, 01 Nov 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149212</guid>
<dc:date>1992-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compile-time Loop Splitting for Distributed Memory Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149211</link>
<description>Compile-time Loop Splitting for Distributed Memory Multiprocessors
Tanguay, Donald O., Jr.
In a distributed memory multiprocessor, a program's task is partitioned among the processors to exploit parallelism, and the data are partitioned to increase referential locality. Though the purpose of partitioning is to shorten the execution time of an algorithm, each data reference can become a complex expression based upon the data partitions. As an attempt to minimize the computation needed for array references, loop splitting can further divide a partitioned loop into segments that allow the code hoisting and strength reduction optimizations. This thesis introduces two methods of loop splitting, rational and interval. While rational splitting divides the loop into equal-length GCD segments, interval splitting specifies segments as an explicit list of intervals. These two methods have been implemented and studied. Under our execution model, the loop in the algorithms analyzed executes an average of 2 to 3 times faster after loop splitting.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149211</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Column-associative Caches: A Technique for Reducing the Miss Rate of Direct-mapped Caches</title>
<link>https://hdl.handle.net/1721.1/149210</link>
<description>Column-associative Caches: A Technique for Reducing the Miss Rate of Direct-mapped Caches
Agarwal, Anant; Pudar, Steven D.
Direct-mapped caches are a popular design choice for high-performance processors; unfortunately, direct-mapped caches suffer systematic interference misses when more than one address maps into the same cache set. This paper describes the design of column-associative caches, which minimize the conflicts that arise in direct-mapped accesses by allowing conflicting addresses to dynamically choose alternate hashing functions, so that most of the conflicting data can reside in the cache. At the same time, however, the critical hit access path is unchanged. The key to implementing this scheme efficiently is the addition to each cache set of a rehash bit, which indicates whether that set stores data that is referenced by an alternate hashing function. When multiple addresses map into the same location, these rehashed locations are preferentially replaced. We demonstrate using trace-driven simulations and an analytical model that a column-associative cache removes virtually all interference misses for large caches, without altering the critical hit access time.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149210</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Multiprogrammed Caches</title>
<link>https://hdl.handle.net/1721.1/149209</link>
<description>Modeling Multiprogrammed Caches
Agarwal, Anant
This paper presents a simple, yet accurate, model for multiprogrammed caches and validates it against trace-driven simulation. The model takes into account nonstationary behavior of processes and process sharing. By making judicious approximations, the paper shows that a very simple expression of the form u^2(p - 1)/tS accurately models the multiprogramming component of the miss rate of large direct-mapped caches. In the above expression, t is the context-switching interval, S is the cache size in blocks, p is the number of processes, and u is the number of unique blocks accessed by a process during the interval t.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149209</guid>
</item>
<item>
<title>Forward and Backward Simulations Part II: Timing-based Systems</title>
<link>https://hdl.handle.net/1721.1/149208</link>
<description>Forward and Backward Simulations Part II: Timing-based Systems
Lynch, Nancy A.; Vaandrager, Frits
A general automaton model for timing-based systems is presented and is used as the context for developing a variety of simulation proof techniques for such systems. These techniques include (1) refinements, (2) forward and backward simulations, (3) hybrid forward-backward and backward-forward simulations, and (4) history and prophecy relations. Relationships between the different types of simulations, as well as soundness and completeness results, are stated and proved.
</description>
<pubDate>Mon, 01 Mar 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149208</guid>
<dc:date>1993-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forward and Backward Simulations Part I: Untimed Systems (Replaces TM-486)</title>
<link>https://hdl.handle.net/1721.1/149207</link>
<description>Forward and Backward Simulations Part I: Untimed Systems (Replaces TM-486)
Lynch, Nancy A.; Vaandrager, Frits
A unified, comprehensive presentation of simulation techniques for verification of concurrent systems is given, in terms of a simple untimed automaton model. In particular, (1) refinements, (2) forward and backward simulations, (3) hybrid forward-backward and backward-forward simulations, and (4) history and prophecy relations are defined.
</description>
<pubDate>Mon, 01 Mar 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149207</guid>
<dc:date>1993-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Failsafe Key Escrow Systems (Extended Abstract)</title>
<link>https://hdl.handle.net/1721.1/149206</link>
<description>Failsafe Key Escrow Systems (Extended Abstract)
Leighton, Tom
This paper describes a method for escrowing cryptographic keys, which we call Failsafe Key Escrow (FKE). The method is substantially more secure than alternatives such as the Fair Public Key Cryptosystem approach advocated by Micali, and it is particularly well suited for use in escrowing DSS keys.
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149206</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Partitioning of Parallel Loops for Cache-coherent Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149205</link>
<description>Automatic Partitioning of Parallel Loops for Cache-coherent Multiprocessors
Agarwal, Anant; Kranz, David; Natarajan, Venkat
This paper presents a theoretical framework for automatically partitioning parallel loops to minimize cache coherency traffic on shared-memory multiprocessors.  The framework introduces the notion of uniformly intersecting references to capture temporal locality in array references, and the idea of data footprints to estimate the communication traffic between processors.  The framework uses lattice theory to compute the size of data footprints.  We demonstrate that algorithms based on our framework discover optimal partitions in many cases, such as non-communication-free parallelogram partitions of affine loop index functions, which were not handled by previous algorithms.  We also show that our framework correctly reproduces results from previous loop partitioning algorithms proposed by Abraham and Hudak and by Sadayappan and Ramanujam.  Because they deal only with index expressions, the algorithms are computationally efficient as well.  We have implemented a subset of this framework for rectangular partitioning in a compiler for the cache-coherent Alewife machine.
</description>
<pubDate>Tue, 01 Dec 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149205</guid>
<dc:date>1992-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Action Transducers and Timed Automata</title>
<link>https://hdl.handle.net/1721.1/149204</link>
<description>Action Transducers and Timed Automata
Lynch, Nancy A.; Vaandrager, Frits
The timed automaton model of [29, 30] is a general model for timing-based systems. A notion of timed action transducer is here defined as an automata-theoretic way of representing operations on timed automata. It is shown that two timed trace inclusion relations are substitutive with respect to operations that can be described by timed action transducers. Examples are given of operations that can be described in this way, and a preliminary proposal is given for an appropriate language of operators for describing timing-based systems.
</description>
<pubDate>Sun, 01 Nov 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149204</guid>
<dc:date>1992-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experience with Fine-grain Synchronization in MIMD Machines for Preconditioned Conjugate Gradient</title>
<link>https://hdl.handle.net/1721.1/149203</link>
<description>Experience with Fine-grain Synchronization in MIMD Machines for Preconditioned Conjugate Gradient
Yeung, Donald; Agarwal, Anant
This paper discusses our experience with fine-grain synchronization for the preconditioned conjugate gradient method using the modified incomplete Cholesky factorization of the coefficient matrix as a preconditioner.  This algorithm represents a large class of algorithms that have been widely used but are traditionally difficult to implement efficiently on vector and parallel machines.  Through a series of experiments conducted using a simulator of a distributed shared-memory multiprocessor, this paper addresses two major questions related to fine-grain synchronization in the context of this application.  First, what is the overall impact of fine-grain synchronization on performance?  Second, what are the individual contributions of the following three mechanisms typically provided to support fine-grain synchronization: language-level support, full-empty bits for compact storage and communication of synchronization state, and efficient processor operations on the state bits?  The experiments indicate that fine-grain synchronization improves overall performance by a factor of 3.7 on 16 processors using the largest problem size we could simulate; the paper also projects that a significant performance advantage will be sustained for larger problem sizes.  Preliminary experience shows that the bulk of the performance advantage for this application can be attributed to exposing increased parallelism through language-level expression of fine-grain synchronization.  A smaller fraction relies on a compact implementation of synchronization state, while an even smaller fraction results from efficient full-empty bit operations.  The paper also shows that the last two components are likely to have a greater impact on performance as mechanisms for latency tolerance are employed.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149203</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Message-passing and Shared-memory: Early Experience</title>
<link>https://hdl.handle.net/1721.1/149202</link>
<description>Integrating Message-passing and Shared-memory: Early Experience
Kranz, David; Johnson, Kirk; Agarwal, Anant; Kubiatowicz, John; Lim, Beng-Hong
This paper discusses some of the issues involved in implementing a shared-address space programming model on large-scale, distributed-memory multiprocessors.  Because message-passing mechanisms are much more efficient than shared-memory loads and stores for certain types of interprocessor communication and synchronization operations, we argue for building multiprocessors that efficiently support both shared-memory and message-passing mechanisms.  We describe an architecture, Alewife, that integrates support for shared-memory and message-passing through a simple interface.  We expect the compiler and runtime system to cooperate in using appropriate hardware mechanisms that are most efficient for specific operations.  We report on both integrated and exclusively shared-memory implementations of our runtime system and one complete application; the final paper will contain results for other applications as well.  The integrated runtime system drastically cuts down the cost of communication incurred by the scheduling, load balancing, and certain synchronization operations.  We also present some preliminary performance results comparing the two systems.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149202</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid Atomicity for Nested Transactions</title>
<link>https://hdl.handle.net/1721.1/149201</link>
<description>Hybrid Atomicity for Nested Transactions
Fekete, Alan; Lynch, Nancy A.; Weihl, William E.
This paper defines the notion of hybrid atomicity for nested transaction systems, and presents and verifies an algorithm providing this property. Hybrid atomicity is a modular property; it allows the correctness of a system to be deduced from the fact that each object is implemented to have the property. It allows more concurrency than dynamic atomicity by assigning timestamps to transactions at commit. The Avalon system provides exactly this facility.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149201</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>More Choices Allow More Faults: Set Consensus Problems in Totally Asynchronous Systems</title>
<link>https://hdl.handle.net/1721.1/149200</link>
<description>More Choices Allow More Faults: Set Consensus Problems in Totally Asynchronous Systems
Chaudhuri, Soma
We define the k-set consensus problem as an extension of the consensus problem, where each processor decides on a single value such that the set of decided values in any run is of size at most k. We require the condition that all values decided upon are initial values of some processor. We show that the problem has a simple (k - 1)-resilient protocol in a totally asynchronous system.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149200</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dribble-Back Registers: A Technique for Latency Tolerance in Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149199</link>
<description>Dribble-Back Registers: A Technique for Latency Tolerance in Multiprocessors
Soundararajan, Vijayaraghavan
As parallel machines grow in scale and complexity, latency tolerance of synchronization faults and remote memory accesses becomes increasingly important. One method for tolerating this latency is to multithread the processor and rapidly context switch between the resident threads. Fast context switching is most effective when the latencies being tolerated are short compared to the total run lengths of all the resident threads.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149199</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Integration of the Organization Engine and Library 2000</title>
<link>https://hdl.handle.net/1721.1/149198</link>
<description>The Integration of the Organization Engine and Library 2000
Weiss, Ron
In the contemporary research environment, users access and manipulate information gathered from diverse data sources. The Organization Engine is a prototype being developed at the Cambridge Research Lab of Digital Equipment Corporation for the incorporation of data from disparate sources into a local homogeneous framework. It relies on information management based on the notion of retrieval and manipulation through the organization of the data in a non-strict hierarchical structure.
</description>
<pubDate>Fri, 01 May 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149198</guid>
<dc:date>1992-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approximating the Minimum-cost Maximum Flow is P-Complete</title>
<link>https://hdl.handle.net/1721.1/149197</link>
<description>Approximating the Minimum-cost Maximum Flow is P-Complete
Stein, Clifford; Wein, Joel
We show that it is impossible, in NC, to approximate the value of the minimum-cost maximum flow unless P = NC.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149197</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Closing the Window of Vulnerability in Multiphase Memory Transactions</title>
<link>https://hdl.handle.net/1721.1/149196</link>
<description>Closing the Window of Vulnerability in Multiphase Memory Transactions
Kubiatowicz, John; Chaiken, David; Agarwal, Anant
Multiprocessor architects have begun to explore several mechanisms such as prefetching, context-switching and software-assisted dynamic cache-coherence, which transform single-phase memory transactions in conventional memory systems into multiphase operations. Multiphase operations introduce a window of vulnerability in which data can be lost before it is used, either through protocol invalidation or cache conflicts. Losing data introduces damaging livelock situations. This paper discusses the origins of the window of vulnerability and proposes an architectural framework that closes it. The framework is implemented in Alewife, a large-scale multiprocessor being built at MIT.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149196</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Low-cost Support for Fine-grain Synchronization in Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149195</link>
<description>Low-cost Support for Fine-grain Synchronization in Multiprocessors
Kranz, David; Lim, Beng-Hong; Agarwal, Anant
As multiprocessors scale beyond the limits of a few tens of processors, they must look beyond traditional methods of synchronization to minimize serialization and achieve the high degrees of parallelism required to utilize large machines. By allowing synchronization at the level of the smallest unit of memory, fine-grain synchronization achieves these goals. Unfortunately, supporting efficient fine-grain synchronization without inordinate amounts of hardware has remained a challenge.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149195</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compile-time Techniques for Processor Allocation in Macro Dataflow Graphs for Multiprocessors</title>
<link>https://hdl.handle.net/1721.1/149194</link>
<description>Compile-time Techniques for Processor Allocation in Macro Dataflow Graphs for Multiprocessors
Prasanna, G.N. Srinivasa; Agarwal, Anant
When compiling a program consisting of multiple nested loops for execution on a multiprocessor, processor allocation is the problem of determining the number of processors over which to partition each nested loop. This paper presents processor allocation techniques for compiling such programs for multiprocessors with local memory. Programs consisting of multiple loops, where the precedence constraints between the loops are known, can be viewed as macro dataflow graphs. Macro dataflow graphs comprise several macro nodes (or macro operations) that must be executed subject to prespecified precedence constraints. Optimal processor allocation specifies the number of processors computing each macro node and their sequencing to optimize run time. This paper presents computationally efficient techniques for determining the optimal processor allocation using estimated speedup functions of the macro nodes. These ideas have been implemented in a structure-driven compiler, SDC, for expressions of matrix operations. The paper presents the performance of the compiler for several matrix expressions on a simulator of the Alewife multiprocessor.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149194</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Communication Locality on Large-scale Multiprocessor Performance</title>
<link>https://hdl.handle.net/1721.1/149193</link>
<description>The Impact of Communication Locality on Large-scale Multiprocessor Performance
Johnson, Kirk L.
As multiprocessor sizes scale and computer architects turn to interconnection networks with non-uniform communication latencies, exploiting communication locality to increase performance becomes increasingly attractive. Models that accurately quantify locality effects provide invaluable insight into the importance of exploiting locality as machine sizes and features change. This paper presents a framework for modeling the impact of communication locality on system performance. The framework provides a means for combining simple models of application, processor, and network behavior to obtain a combined model that accurately reflects feedback effects between processors and networks. We introduce a model that characterizes application behavior with three parameters that capture computation grain, sensitivity to communication latency, and amount of locality present at execution time. The combined model is validated with measurements taken from a detailed simulator for a complete multiprocessor system. Using the combined model, we show that exploiting communication locality provides gains which are at most linear in the factor by which average communication distance is reduced when the number of outstanding communication transactions per processor is bounded. The combined model is also used to obtain rough upper bounds on the performance improvement from exploiting locality to minimize communication distance.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149193</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Compilation of Macro Dataflow Graphs for Multiprocessors with Local Memory</title>
<link>https://hdl.handle.net/1721.1/149192</link>
<description>Hierarchical Compilation of Macro Dataflow Graphs for Multiprocessors with Local Memory
Prasanna, G.N. Srinivasa; Agarwal, Anant; Musicus, Bruce R.
This paper presents a hierarchical approach for compiling macro dataflow graphs for multiprocessors with local memory. Macro dataflow graphs comprise several nodes (or macro operations) that must be executed subject to prespecified precedence constraints. Programs consisting of multiple nested loops, where the precedence constraints between the loops are known, can be viewed as macro dataflow graphs. The hierarchical compilation approach comprises a processor allocation phase followed by a partitioning phase. In the processor allocation phase, using estimated speedup functions for the macro nodes, computationally efficient techniques establish the sequencing and parallelism of macro operations for close-to-optimal run times. The second phase partitions the computations in each macro node to maximize communication locality for the level of parallelism determined by the processor allocation phase. The same approach can also be used for programs consisting of multiple loop nests, when each of the nested loops can be characterized by a speedup function. These ideas have been implemented in a prototype structure-driven compiler, SDC, for expressions of matrix operations. The paper presents the performance of the compiler for several matrix expressions on a simulator of the Alewife multiprocessor.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149192</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Memory Assignment for Multiprocessor Caches Through Graph Coloring</title>
<link>https://hdl.handle.net/1721.1/149191</link>
<description>Memory Assignment for Multiprocessor Caches Through Graph Coloring
Agarwal, Anant; Guttag, John; Papaefthymiou, Marios
It has become apparent that the achieved performance of multiprocessors is heavily dependent upon the quality of the available compilers. In this paper we are concerned with compile-time techniques that can be used to achieve better performance by improving cache utilization. Specifically, we investigate the problem of assigning data chunks to memory in a way that will minimize collisions in direct-mapped multiprocessor caches. We show that while this problem is computationally intractable, there are interesting special cases that can be solved in polynomial time. We also present several techniques that can be used when conflict-free assignment is not possible, or when finding a conflict-free assignment is computationally infeasible. These techniques include uniform decaching, which involves not caching specific data blocks, and data replication, which involves making multiple copies of read-only data. Finally, we present a memory assignment technique, grey coloring, that reduces latency in the presence of collisions by distributing cache misses among processors in a way that minimizes the total number of cache misses in any specific cache.
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149191</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Compilation of Macro Dataflow Graphs for Multiprocessors with Local Memory</title>
<link>https://hdl.handle.net/1721.1/149190</link>
<description>Hierarchical Compilation of Macro Dataflow Graphs for Multiprocessors with Local Memory
Prasanna, G.N. Srinivasa; Agarwal, Anant; Musicus, Bruce R.
This paper presents a hierarchical approach for compiling macro dataflow graphs for multiprocessors with local memory. Macro dataflow graphs comprise several nodes (or macro operations) that must be executed subject to prespecified precedence constraints. Programs consisting of multiple nested loops, where the precedence constraints between the loops are known, can be viewed as macro dataflow graphs. The hierarchical compilation approach comprises a processor allocation phase followed by a partitioning phase. In the processor allocation phase, using estimated speedup functions for the macro nodes, computationally efficient techniques establish the sequencing and parallelism of macro operations for close-to-optimal run times. The second phase partitions the computations in each macro node to maximize communication locality for the level of parallelism determined by the processor allocation phase. The same approach can also be used for programs consisting of multiple loop nests, when each of the nested loops can be characterized by a speedup function. These ideas have been implemented in a prototype structure-driven compiler, SDC, for expressions of matrix operations. The paper presents the performance of the compiler for several matrix expressions on a simulator of the Alewife multiprocessor.
</description>
<pubDate>Tue, 01 Dec 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149190</guid>
<dc:date>1992-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Communication Locality on Large-scale Multiprocessor Performance</title>
<link>https://hdl.handle.net/1721.1/149189</link>
<description>The Impact of Communication Locality on Large-scale Multiprocessor Performance
Johnson, Kirk L.
As multiprocessor sizes scale and computer architects turn to interconnection networks with non-uniform communication latencies, exploiting communication locality to increase performance becomes increasingly attractive. Models that accurately quantify locality effects provide invaluable insight into the importance of exploiting locality as machine sizes and features change. This paper presents a framework for modeling the impact of communication locality on system performance.
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149189</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Rabin's Randomized Mutual Exclusion Algorithm: Preliminary Report</title>
<link>https://hdl.handle.net/1721.1/149188</link>
<description>An Analysis of Rabin's Randomized Mutual Exclusion Algorithm: Preliminary Report
Lynch, Nancy A.; Saias, Isaac
In 1982, Michael Rabin published a randomized distributed algorithm implementing mutual exclusion for n processes using a read-modify-write primitive on a shared variable with O(log n) values. He claimed that this algorithm satisfied the following informally-stated strong probabilistic no-lockout property. Define the adversary to be the entity controlling the order in which processes take steps; then, for every adversary, any process competing for entrance to the critical section succeeds with probability Ω(1/m), where m is the number of competing processes. In this paper we consider several different ways in which this property can be expressed formally. We express explicitly the dependency of the probability on the adversary and show that this dependency is so strong that the algorithm does not satisfy any of these conditions. In fact, the algorithm does not even satisfy a much weaker Ω(1/n) property.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149188</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Deterministic Constructions of Low-Diameter Network Decompositions</title>
<link>https://hdl.handle.net/1721.1/149187</link>
<description>Fast Deterministic Constructions of Low-Diameter Network Decompositions
Berger, Bonnie; Cowen, Lenore
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149187</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linearizable Counting Networks</title>
<link>https://hdl.handle.net/1721.1/149186</link>
<description>Linearizable Counting Networks
Herlihy, Maurice; Shavit, Nir; Waarts, Orli
The counting problem requires n asynchronous processors to assign themselves successive values. A solution is linearizable if the order of the values assigned reflects the real-time order in which they were requested. Linearizable counting lies at the heart of concurrent timestamp generation, as well as concurrent implementations of shared counters, FIFO buffers, and similar data structures. We consider solutions to the linearizable counting problem in a multiprocessor architecture in which processors communicate by applying read-modify-write operations to a shared memory. Linearizable counting algorithms can be judged by three criteria: the memory contention produced, whether processors are required to wait for one another, and how long it takes a processor to choose a value (the latency). A solution is ideal if it has low contention, low latency, and it eschews waiting. The conventional software solution, where processors synchronize at a single variable, avoids waiting and has low latency, but has high contention. In this paper we give two new counting network constructions, one with low latency and low contention, but that requires processors to wait for one another, and one with low contention and no waiting, but that has high latency. Finally, we prove that these trade-offs are inescapable: an ideal linearizable counting algorithm is impossible. Since ideal non-linearizable counting algorithms exist, these results establish a substantial complexity gap between linearizable and non-linearizable counting.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149186</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forward and Backward Simulations for Timing-based Systems</title>
<link>https://hdl.handle.net/1721.1/149185</link>
<description>Forward and Backward Simulations for Timing-based Systems
Lynch, Nancy A.; Vaandrager, Frits
A general automaton model for timing-based systems is presented and is used as the context for developing a variety of simulation proof techniques for such systems. As a first step, a comprehensive overview of simulation techniques for simple untimed automata is given. In particular, soundness and completeness results for (1) refinements, (2) forward and backward simulations, (3) forward-backward and backward-forward simulations, and (4) history and prophecy relations are given.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149185</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Algorithm for the Tramp Steamer Problem Based on Mean-weight Cycles</title>
<link>https://hdl.handle.net/1721.1/149184</link>
<description>An Algorithm for the Tramp Steamer Problem Based on Mean-weight Cycles
Ishii, Alexander T.; Leiserson, Charles E.; Papaefthymiou, Marios C.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149184</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Replication in the Harp File System</title>
<link>https://hdl.handle.net/1721.1/149183</link>
<description>Replication in the Harp File System
Liskov, B.; Ghemawat, S.; Gruber, R.; Johnson, P.; Shrira, L.; Williams, M.
This paper describes the design and implementation of the Harp file system. Harp is a replicated Unix file system accessible via the VFS interface. It provides highly available and reliable storage for files and guarantees that file operations are executed atomically in spite of concurrency and failures. It uses a novel variation of the primary copy replication technique that provides good performance because it allows us to trade disk accesses for network communication. Harp is intended to be used within a file service in a distributed network; in our current implementation, it is accessed via NFS. Preliminary performance results indicate that Harp provides equal or better response time and system capacity than an unreplicated implementation of NFS that uses Unix files directly.
</description>
<pubDate>Thu, 01 Aug 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149183</guid>
<dc:date>1991-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Fast Multiport Memory Based on Single-port Memory Cells</title>
<link>https://hdl.handle.net/1721.1/149182</link>
<description>A Fast Multiport Memory Based on Single-port Memory Cells
Rivest, Ronald L.; Glasser, L.
We present a new design for dual-port memories that uses single-port memory cells but guarantees fast deterministic read/write access. The basic unit of storage is the word, rather than the bit, and address conflicts result in bit errors that are removed by correction circuitry. The addressing scheme uses Galois field arithmetic to guarantee that the maximum number of bit errors in any word accessed is one. These errors can be corrected every time with a simple correction scheme. The scheme can be generalized to an arbitrary number of ports.
</description>
<pubDate>Mon, 01 Jul 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149182</guid>
<dc:date>1991-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The MIT Alewife Machine: A Large-scale Distributed-memory Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/149181</link>
<description>The MIT Alewife Machine: A Large-scale Distributed-memory Multiprocessor
Agarwal, Anant; Chaiken, David; Johnson, Kirk; Kranz, David; Kubiatowicz, John; Kurihara, Kiyoshi; Lim, Beng-Hong; Maa, Gino; Nussbaum, Dan
The Alewife multiprocessor project focuses on the architecture and design of a large-scale parallel machine. The machine uses a low-dimension direct interconnection network to provide scalable communication bandwidth, while allowing the exploitation of locality. Despite its distributed memory architecture, Alewife allows efficient shared memory programming through a multilayered approach to locality management. A new scalable cache coherence scheme called LimitLESS directories allows the use of caches for reducing communication latency and network bandwidth requirements.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149181</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cost-sensitive Analysis of Communication Protocols</title>
<link>https://hdl.handle.net/1721.1/149180</link>
<description>Cost-sensitive Analysis of Communication Protocols
Awerbuch, Baruch; Baratz, Alan; Peleg, David
This paper introduces the notion of cost-sensitive communication complexity and exemplifies it on the following basic communication problems: computing a global function, network synchronization, clock synchronization, controlling protocols' worst-case execution, computing connected components, constructing a spanning tree, constructing a minimum spanning tree, and constructing a shortest path tree.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149180</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of Continuous Optimization</title>
<link>https://hdl.handle.net/1721.1/149179</link>
<description>The Complexity of Continuous Optimization
Rogaway, Phillip
Given a polynomial objective function f(x1,…,xn), we consider the problem of finding the maximum of this polynomial inside some convex set D = {x : Ax &lt;= B}. We show that, under a complexity assumption, this extremum cannot be approximated by any polynomial-time algorithm, even exceedingly poorly. This represents an unusual interplay of discrete and continuous mathematics: using a combinatorial argument to get a hardness result for a continuous optimization problem.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149179</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Counting Networks</title>
<link>https://hdl.handle.net/1721.1/149178</link>
<description>Counting Networks
Aspnes, James; Herlihy, Maurice; Shavit, Nir
Many fundamental multi-processor coordination problems can be expressed as counting problems: processes must cooperate to assign successive values from a given range, such as addresses in memory or destinations on an interconnection network. Conventional solutions to these problems perform poorly because of synchronization bottlenecks and high memory contention. Motivated by observations on the behavior of sorting networks, we offer a completely new approach to solving such problems. We introduce a new class of networks called counting networks, i.e., networks that can be used to count. We give two counting network constructions of depth log^2 n, using n log^2 n "gates," avoiding the sequential bottlenecks inherent to former solutions, and substantially lowering the memory contention. Finally, to show that counting networks are not merely mathematical creatures, we provide experimental evidence that they outperform conventional synchronization techniques under a variety of circumstances.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149178</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>APRIL: A Processor Architecture for Multiprocessing</title>
<link>https://hdl.handle.net/1721.1/149177</link>
<description>APRIL: A Processor Architecture for Multiprocessing
Agarwal, Anant; Lim, Beng-Hong; Kranz, David; Kubiatowicz, John
Processors in large-scale multiprocessors must be able to tolerate large communication latencies and synchronization delays. This paper describes the architecture of a rapid-context-switching processor called APRIL with support for fine-grain threads and synchronization. APRIL achieves high single-thread performance and supports virtual dynamic threads. A commercial RISC-based implementation of APRIL and a run-time software system that can switch contexts in about 10 cycles is described. Measurements taken for several parallel applications on an APRIL simulator show that the overhead for supporting parallel tasks based on futures is reduced by a factor of two over a corresponding implementation on the Encore Multimax. The scalability of a multiprocessor based on APRIL is explored using a performance model. We show that the SPARC-based implementation of APRIL can achieve close to 80% processor utilization with as few as three resident threads per processor in a large-scale cache-based machine with an average base network latency of 55 cycles.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149177</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lazy Task Creation: A Technique for Increasing the Granularity of Parallel Programs</title>
<link>https://hdl.handle.net/1721.1/149176</link>
<description>Lazy Task Creation: A Technique for Increasing the Granularity of Parallel Programs
Mohr, Eric; Kranz, David; Halstead, Robert H., Jr.
Many parallel algorithms are naturally expressed at a fine level of granularity, often finer than MIMD parallel systems can exploit efficiently. Most builders of parallel systems have looked to either the programmer or a parallelizing compiler to increase the granularity of such algorithms. In this paper we explore a third approach to the granularity problem by analyzing two strategies for combining parallel tasks dynamically at run-time.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149176</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Limitless Directories: A Scalable Cache Coherence Scheme</title>
<link>https://hdl.handle.net/1721.1/149175</link>
<description>Limitless Directories: A Scalable Cache Coherence Scheme
Chaiken, David; Kubiatowicz, John; Agarwal, Anant
Caches enhance the performance of multiprocessors by reducing network traffic and average memory access latency. However, cache-based systems must address the problem of cache coherence. We propose the LimitLESS directory protocol to solve this problem. The LimitLESS scheme uses a combination of hardware and software techniques to realize the performance of a full-map directory with the memory overhead of a limited directory. This protocol is supported by Alewife, a large-scale multiprocessor. We describe the architectural interfaces needed to implement the LimitLESS directory, and evaluate its performance through simulations of the Alewife machine.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149175</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliable Communication Over Unreliable Channels</title>
<link>https://hdl.handle.net/1721.1/149174</link>
<description>Reliable Communication Over Unreliable Channels
Afek, Yehuda; Attiya, Hagit; Fekete, Alan; Fischer, Michael; Lynch, Nancy A.; Mansour, Yishay; Wang, Da-Wei; Zuck, Lenore
Layered communication protocols frequently implement a FIFO message facility on top of an unreliable non-FIFO service such as that provided by a packet-switching network. This paper investigates the possibility of implementing a reliable message layer on top of an underlying layer that can lose packets and deliver them out of order, with the additional restriction that the implementation uses only a fixed finite number of different packets. A new formalism is presented to specify communication layers and their properties, the notion of their implementation by I/O automata, and the properties of such implementations. An I/O automaton that implements a reliable layer over an unreliable layer is presented. In this implementation, the number of packets needed to deliver each succeeding message increases permanently as additional packet-loss and reordering faults occur. A proof is given that no protocol can avoid such performance degradation.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149174</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Autoimmune Mechanism for AIDS' T4 Lymphopenia</title>
<link>https://hdl.handle.net/1721.1/149173</link>
<description>An Autoimmune Mechanism for AIDS' T4 Lymphopenia
Micali, Silvio
We put forward a new model for the T4 lymphopenia occurring in AIDS by suggesting a mechanism whose net effect is blocking the generation of T4 cells during HIV infection. Supporting evidence for this mechanism is derived from the experiments in the recent literature.
</description>
<pubDate>Mon, 01 Apr 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149173</guid>
<dc:date>1991-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of Decision Versus Search</title>
<link>https://hdl.handle.net/1721.1/149172</link>
<description>The Complexity of Decision Versus Search
Bellare, Mihir; Goldwasser, Shafi
A basic question about NP is whether or not search (the problem of finding a witness) reduces in polynomial time to decision (the problem of deciding whether there exists a witness). The fact that search does reduce to decision for SAT and other NP-complete problems (self-reducibility) is among the most well-known facts in the theory of computation. But the general question of whether search reduces to decision for every language in NP remains open. We indicate that the answer is negative: under a natural complexity assumption (that deterministic and nondeterministic double exponential time are unequal), we construct a language in NP for which search does not reduce to decision.
</description>
<pubDate>Mon, 01 Apr 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149172</guid>
<dc:date>1991-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Autoimmune Mechanism for AIDS' T4 Lymphopenia</title>
<link>https://hdl.handle.net/1721.1/149171</link>
<description>An Autoimmune Mechanism for AIDS' T4 Lymphopenia
Micali, Silvio
We put forward a new model for the T4 lymphopenia occurring in AIDS by suggesting a mechanism whose net effect is blocking the generation of T4 cells during HIV infection.
</description>
<pubDate>Fri, 01 Mar 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149171</guid>
<dc:date>1991-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Are Wait-free Algorithms Fast?</title>
<link>https://hdl.handle.net/1721.1/149170</link>
<description>Are Wait-free Algorithms Fast?
Attiya, Hagit; Lynch, Nancy A.; Shavit, Nir
The time complexity of wait-free algorithms in "normal" executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Ω(log n) time separation between the wait-free and non-wait-free computation models. On the positive side, we present an O(log n) time wait-free approximate agreement algorithm; the complexity of this algorithm is within a small constant of the lower bound.
</description>
<pubDate>Fri, 01 Mar 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149170</guid>
<dc:date>1991-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On-line Algorithms for 2-coloring Hypergraphs via Chip Games</title>
<link>https://hdl.handle.net/1721.1/149169</link>
<description>On-line Algorithms for 2-coloring Hypergraphs via Chip Games
Aslam, Javed A.; Dhagat, Aditi
</description>
<pubDate>Sat, 01 Dec 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149169</guid>
<dc:date>1990-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Massively Parallel Solution of The Assignment Problem</title>
<link>https://hdl.handle.net/1721.1/149168</link>
<description>On the Massively Parallel Solution of The Assignment Problem
Wein, Joel; Zenios, Stavros
In this paper we discuss the design, implementation and effectiveness of massively parallel algorithms for the solution of large-scale assignment problems. In particular, we study the auction algorithm of Bertsekas, an algorithm based on the method of multipliers of Hestenes and Powell, and an algorithm based on the alternating direction method of multipliers of Eckstein. We discuss alternative approaches to the massively parallel implementation of the auction algorithm, including Jacobi, Gauss-Seidel and a hybrid scheme. The hybrid scheme, in particular, exploits two different levels of parallelism and an efficient way of communicating the data between them without the need to perform general router operations across the hypercube network. We then study the performance of massively parallel implementations of the two methods of multipliers. Implementations are carried out on the Connection Machine CM-2, and the algorithms are evaluated empirically with the solution of large scale problems. The hybrid scheme significantly outperforms all of the other methods and gives the best computational results to date for a massively parallel solution to this problem.
</description>
<pubDate>Sat, 01 Dec 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149168</guid>
<dc:date>1990-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On-line Scheduling of Parallel Machines</title>
<link>https://hdl.handle.net/1721.1/149167</link>
<description>On-line Scheduling of Parallel Machines
Wein, Joel; Williamson, David P.
We study the problem of scheduling jobs on parallel machines in an on-line fashion, where the processing requirement of a job is not known until the job is completed. Despite this lack of knowledge of the future, we wish to schedule so as to minimize the completion time of the entire set of jobs. In general, the performance of an on-line algorithm is measured by its competitive ratio: the worst-case ratio of its performance to that of an optimal algorithm with total prior knowledge. We study two fundamental models for this problem, that of identical machines, where all the machines run at the same speed, and uniformly related machines, where the machines run at different speeds. Our results include: 1) Matching upper and lower bounds on the competitive ratio for the case of identical machines. 2) Upper and lower bounds that differ by a constant factor for uniformly related machines. 3) A lower bound for randomized algorithms for identical machines that nearly matches the deterministic upper bound. 4) Several upper and lower bounds for variations on these models.
</description>
<pubDate>Thu, 01 Nov 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149167</guid>
<dc:date>1990-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounds on the Time to Reach Agreement in the Presence of Timing Uncertainty</title>
<link>https://hdl.handle.net/1721.1/149166</link>
<description>Bounds on the Time to Reach Agreement in the Presence of Timing Uncertainty
Attiya, Hagit; Dwork, Cynthia; Lynch, Nancy A.; Stockmeyer, Larry
</description>
<pubDate>Thu, 01 Nov 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149166</guid>
<dc:date>1990-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The MD4 Message Digest Algorithm</title>
<link>https://hdl.handle.net/1721.1/149165</link>
<description>The MD4 Message Digest Algorithm
Rivest, Ronald L.
The MD4 message digest algorithm takes an input message of arbitrary length and produces an output 128-bit "fingerprint" or "message digest," in such a way that it is (hopefully) computationally infeasible to produce two messages having the same message digest, or to produce any message having a given prespecified target message digest. The MD4 algorithm is thus ideal for digital signature applications: a large file can be securely "compressed" with MD4 before being signed with (say) the RSA public-key cryptosystem. The MD4 algorithm is designed to be quite fast on 32-bit machines. For example, on a SUN Sparc station, MD4 runs at 1,450,000 bytes/second (11.6 Mbit/sec). In addition, the MD4 algorithm does not require any large substitution tables; the algorithm can be coded quite compactly. The MD4 algorithm is being placed in the public domain for review and possible adoption as a standard.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149165</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Randomness-efficient Sampling of Arbitrary Functions</title>
<link>https://hdl.handle.net/1721.1/149164</link>
<description>Randomness-efficient Sampling of Arbitrary Functions
Bellare, Mihir; Rompel, John
</description>
<pubDate>Sun, 01 Jul 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149164</guid>
<dc:date>1990-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Sign Given Any Trapdoor Permutation</title>
<link>https://hdl.handle.net/1721.1/149163</link>
<description>How to Sign Given Any Trapdoor Permutation
Bellare, Mihir; Micali, Silvio
We present a digital signature scheme which is based on the existence of any trapdoor permutation.  Our scheme is secure in the strongest possible natural sense: namely, it is secure against existential forgery under adaptive chosen message attack.
</description>
<pubDate>Fri, 01 Jun 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149163</guid>
<dc:date>1990-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomic Snapshots of Shared Memory</title>
<link>https://hdl.handle.net/1721.1/149162</link>
<description>Atomic Snapshots of Shared Memory
Afek, Yehuda; Attiya, Hagit; Dolev, Danny; Gafni, Eli; Merritt, Michael; Shavit, Nir
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149162</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling Shared State in a Shared Action Model</title>
<link>https://hdl.handle.net/1721.1/149161</link>
<description>Modelling Shared State in a Shared Action Model
Goldman, Kenneth; Lynch, Nancy A.
</description>
<pubDate>Thu, 01 Mar 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149161</guid>
<dc:date>1990-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-intrusive Synchronizers</title>
<link>https://hdl.handle.net/1721.1/149160</link>
<description>Non-intrusive Synchronizers
Awerbuch, Baruch; Peleg, David
</description>
<pubDate>Sun, 01 Apr 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149160</guid>
<dc:date>1990-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Workstation Services and Kerberos Authentication at Project Athena</title>
<link>https://hdl.handle.net/1721.1/149159</link>
<description>Workstation Services and Kerberos Authentication at Project Athena
Davis, Don; Swick, Ralph
This document proposes solutions for two problems obstructing Project Athena's implementation of workstation services.      The principal problem is that workstation services demand a more flexible mutual-authentication protocol than Kerberos currently provides.  The egregious X access-control hack, xhost, for example, has lack of authentication as its root cause. The protocol weakness is also the reason that public workstations can't accept authenticated connections from rlogin, rcp, rsh, etc. We propose an extension to the Kerberos Ticket Granting Service protocol, that cleanly supports user-to-user mutual authentication.    Our second proposal addresses the problem of ticket propagation. Currently, if a user wants tickets that are valid on a remote host, he has to run kinit in an encrypted login session, unless he's willing to send his password in cleartext. As an example of the use of our protocol extension, we describe a Kerberos application that would support a limited facility for secure ticket-propagation.
</description>
<pubDate>Wed, 01 Mar 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149159</guid>
<dc:date>1989-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sharing Memory Robustly in Message-passing Systems</title>
<link>https://hdl.handle.net/1721.1/149158</link>
<description>Sharing Memory Robustly in Message-passing Systems
Attiya, Hagit; Bar-Noy, Amotz; Dolev, Danny
</description>
<pubDate>Thu, 01 Feb 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149158</guid>
<dc:date>1990-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multivalued Possibilities Mappings</title>
<link>https://hdl.handle.net/1721.1/149157</link>
<description>Multivalued Possibilities Mappings
Lynch, Nancy A.
Abstraction mappings are one of the major tools used to construct correctness proofs for concurrent algorithms. Several examples are given of situations in which it is useful to allow the abstraction mappings to be multivalued. The examples involve algorithm optimization, algorithm distribution, and proofs of time bounds.
</description>
<pubDate>Wed, 01 Aug 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149157</guid>
<dc:date>1990-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Analysis of Qualitative Dynamics</title>
<link>https://hdl.handle.net/1721.1/149156</link>
<description>Stochastic Analysis of Qualitative Dynamics
Doyle, Jon; Sacks, Elisha P.
</description>
<pubDate>Fri, 01 Dec 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149156</guid>
<dc:date>1989-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of Efficient Drinking Philosophers Algorithms</title>
<link>https://hdl.handle.net/1721.1/149155</link>
<description>Synthesis of Efficient Drinking Philosophers Algorithms
Welch, Jennifer Lundelius; Lynch, Nancy A.
A variant of the drinking philosophers algorithm of Chandy and Misra is described and proved correct in a modular way, using the I/O automaton model of Lynch and Tuttle. The algorithm of Chandy and Misra is based on a particular dining philosophers algorithm, and relies on certain properties of its implementation. The drinking philosophers algorithm presented in this paper is able to use an arbitrary dining philosophers algorithm as a true subroutine; nothing about the implementation needs to be known, only that it solves the dining philosophers problem. An important advantage of this modularity is that by substituting a more time-efficient dining philosophers algorithm, a drinking philosophers algorithm with O(1) worst-case waiting time is obtained, whereas the drinking philosophers algorithm of Chandy and Misra has O(n) worst-case waiting time (for n philosophers). Formal definitions are given to distinguish the drinking and dining philosophers problems and to specify precisely varying degrees of concurrency.
</description>
<pubDate>Wed, 01 Nov 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149155</guid>
<dc:date>1989-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impediments to Universal Preference-based Default Theories</title>
<link>https://hdl.handle.net/1721.1/149154</link>
<description>Impediments to Universal Preference-based Default Theories
Doyle, Jon; Wellman, Michael
Research on nonmonotonic and default reasoning has identified several important criteria for preferring alternative default inferences.  The theories of reasoning based on each of these criteria may uniformly be viewed as theories of rational inference, in which the reasoner selects maximally preferred states of belief.  Though researchers have noted some cases of apparent conflict between the preferences supported by different theories, it has been hoped that these special theories of reasoning may be combined into a universal logic of nonmonotonic reasoning.  We show that the different categories of preferences conflict more than has been realized, and adapt formal results from social choice theory to prove that every universal theory of default reasoning will violate at least one reasonable principle of rational reasoning.  Our results can be interpreted as demonstrating that, within the preferential framework, we cannot expect much improvement on the rigid lexicographic priority mechanisms that have been proposed for conflict resolution.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149154</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Routing with Polynomial Communication-space Tradeoff</title>
<link>https://hdl.handle.net/1721.1/149153</link>
<description>Routing with Polynomial Communication-space Tradeoff
Awerbuch, Baruch; Peleg, David
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149153</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Online Tracking of Mobile Users</title>
<link>https://hdl.handle.net/1721.1/149152</link>
<description>Online Tracking of Mobile Users
Awerbuch, Baruch; Peleg, David
This paper deals with the problem of maintaining a distributed directory server, that enables us to keep track of mobile users in a distributed network. The paper introduces the graph-theoretic concept of regional matching, and demonstrates how finding a regional matching with certain parameters enables efficient tracking. A polynomial-time algorithm that constructs such a regional matching is presented. The communication overhead of our tracking mechanism is within a polylogarithmic factor of the lower bound.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149152</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nuclear Fusion Through Dimensional Confinement</title>
<link>https://hdl.handle.net/1721.1/149151</link>
<description>Nuclear Fusion Through Dimensional Confinement
Smith, Mark A.
A formal mechanism for enhancing nuclear fusion rates is proposed. The enhancement results whenever the reacting nuclei preferentially migrate in a restricted subspace of phase space - in particular, a fractal subspace. An extended Lawson criterion is derived, and the prospects for this mechanism in condensed matter are discussed.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149151</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Computation Group Research Summary June 1988 - July 1989</title>
<link>https://hdl.handle.net/1721.1/149150</link>
<description>Theory of Computation Group Research Summary June 1988 - July 1989
Theory of Computation Group
</description>
<pubDate>Sat, 01 Jul 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149150</guid>
<dc:date>1989-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time Bounds for Real-time Process Control in the Presence of Time Uncertainty</title>
<link>https://hdl.handle.net/1721.1/149149</link>
<description>Time Bounds for Real-time Process Control in the Presence of Time Uncertainty
Attiya, Hagit; Lynch, Nancy A.
</description>
<pubDate>Sat, 01 Jul 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149149</guid>
<dc:date>1989-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Hundred Impossibility Proofs for Distributed Computing</title>
<link>https://hdl.handle.net/1721.1/149148</link>
<description>A Hundred Impossibility Proofs for Distributed Computing
Lynch, Nancy A.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149148</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Type Abstraction Rules for References: A Comparison of Four Which have Achieved Notoriety</title>
<link>https://hdl.handle.net/1721.1/149147</link>
<description>Type Abstraction Rules for References: A Comparison of Four Which have Achieved Notoriety
O'Toole Jr., James William
I present four type abstraction rules which have been introduced by various authors to permit polymorphic type safety in the presence of mutable data. Each of the type abstraction rules is discussed in the context of the language in which it was introduced, and the various abstraction rules are compared.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149147</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Methods for Range Queries in Computational Geometry</title>
<link>https://hdl.handle.net/1721.1/149146</link>
<description>Three Methods for Range Queries in Computational Geometry
Kipnis, Shlomo
This paper surveys a variety of recent results addressing the problem of range queries in computational geometry. The major contribution of this paper is in identifying three general methods for range queries in computational geometry and in classifying many of the recent results into one or more of these approaches. The three methods discussed in this paper are random sampling, search-tree tables, and space-partition trees. This survey assumes some familiarity with basic computational geometry concepts and techniques.
</description>
<pubDate>Wed, 01 Mar 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149146</guid>
<dc:date>1989-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication Effects for Message-based Concurrency</title>
<link>https://hdl.handle.net/1721.1/149145</link>
<description>Communication Effects for Message-based Concurrency
Jouvelot, Pierre; Gifford, David K.
We describe a new framework for explicit concurrency that uses an effect system to describe the communication behavior of expressions in a typed polymorphic programming language. Concurrency occurs between processes connected by channels on which messages are transmitted. Communication operations are characterized by two communication effect constructors, out and in, depending on whether a message has been sent or received. Synchronization is only allowed by message passing along shared channels; communication via mutation of global variables is statically prohibited by our communication effect system, thus restricting the amount of non-determinacy in user programs. Unobservable communication effects can be masked by the effect system. We show that this system is powerful enough to express many other parallel paradigms, like systolic arrays or pipes, in a typed framework. The programmer can thus express concurrency in a rather flexible way while preserving the correctness of implicit detection of parallelism and optimization by the compiler. This new concurrency framework has been implemented in the FX-87 programming language.
</description>
<pubDate>Wed, 01 Feb 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149145</guid>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Random Numbers</title>
<link>https://hdl.handle.net/1721.1/149144</link>
<description>Natural Random Numbers
Gifford, David K.
We present a method for generating random numbers from natural noise sources that is able to produce random numbers to any desired level of perfection. The method works by transducing a physical noise source to generate a stream of biased natural bits, and then applying an unbiasing algorithm. The Wiener-Khinchine relation is used to derive the autocorrelation present in the stream of biased bits and to define a safe sampling rate. Experimental results from an implementation of our method support our analysis. One consequence of our analysis is that a broad class of natural random number generators, including ours, cannot generate absolutely perfect random numbers.
</description>
<pubDate>Thu, 01 Sep 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149144</guid>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Lattice-structured Proof Technique Applied to a Minimum Spanning Tree Algorithm</title>
<link>https://hdl.handle.net/1721.1/149143</link>
<description>A Lattice-structured Proof Technique Applied to a Minimum Spanning Tree Algorithm
Welch, Jennifer Lundelius; Lamport, Leslie; Lynch, Nancy A.
Highly-optimized concurrent algorithms are often hard to prove correct because they have no natural decomposition into separately provable parts. This paper presents a proof technique for the modular verification of such non-modular algorithms. It generalizes existing verification techniques based on a totally-ordered hierarchy of refinements to allow a partially-ordered hierarchy - that is, a lattice of different views of the algorithm. The technique is applied to the well-known distributed minimum spanning tree algorithm of Gallager, Humblet, and Spira, which has until recently lacked a rigorous proof.
</description>
<pubDate>Wed, 01 Jun 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149143</guid>
<dc:date>1988-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combinatorial Algorithms for the Generalized Circulation Problem</title>
<link>https://hdl.handle.net/1721.1/149142</link>
<description>Combinatorial Algorithms for the Generalized Circulation Problem
Goldberg, Andrew V.; Plotkin, Serge A.; Tardos, Eva
We consider a generalization of the maximum flow problem in which the amounts of flow entering and leaving an arc are linearly related. More precisely, if x(e) units of flow enter an arc e, then x(e)·γ(e) units arrive at the other end, where γ(e) is the multiplier of the arc. For instance, nodes of the graph can correspond to different currencies, with the multipliers being the exchange rates. We require conservation of flow at every node except a given source node. The goal is to maximize the amount of flow excess at the source. This problem is a special case of linear programming, and therefore can be solved in polynomial time. In this paper we present the first polynomial time combinatorial algorithms for this problem. The algorithms are simple and intuitive.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149142</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sublinear-time Parallel Algorithms for Matching and Related Problems</title>
<link>https://hdl.handle.net/1721.1/149141</link>
<description>Sublinear-time Parallel Algorithms for Matching and Related Problems
Goldberg, Andrew V.; Plotkin, Serge A.; Vaidya, Pravin
This paper presents the first sublinear-time deterministic parallel algorithms for bipartite matching and several related problems, including maximal node-disjoint paths, depth-first search, and flows in zero-one networks. Our results are based on a better understanding of the combinatorial structure of the above problems, which leads to new algorithmic techniques. In particular, we show how to use maximal matching to extend, in parallel, a current set of node-disjoint paths and how to take advantage of the parallelism that arises when a large number of nodes are "active" during an execution of a push/relabel network flow algorithm. We also show how to apply our techniques to design parallel algorithms for the weighted versions of the above problems. In particular, we present sublinear-time deterministic parallel algorithms for finding a minimum-weight bipartite matching and for finding a minimum-cost flow in a network with zero-one capacities, if the weights are polynomially bounded integers.
</description>
<pubDate>Wed, 01 Jun 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149141</guid>
<dc:date>1988-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semantical Paradigms: Notes for an Invited Lecture</title>
<link>https://hdl.handle.net/1721.1/149140</link>
<description>Semantical Paradigms: Notes for an Invited Lecture
Meyer, Albert R.; Cosmadakis, Stavros S.
It took me quite a few years to understand the point of the continuity in denotational semantics. I'm happy to report below on some recent results which justify my muddle-headedness and help to explain the point too. What follows are some global comments on denotational semantics of the kinds invited lecturers sometimes indulge themselves in, highlighting "goodness of fit" criteria between semantic domains and symbolic evaluators. For readers impatient with sketchy overviews, two appendices mostly by Cosmadakis provide the key parts of a long proof that Scott domains give a computationally adequate and fully abstract semantics for lambda calculus with simple recursive types.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149140</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>I/O Automata: A Model for Discrete Event Systems</title>
<link>https://hdl.handle.net/1721.1/149139</link>
<description>I/O Automata: A Model for Discrete Event Systems
Lynch, Nancy A.
</description>
<pubDate>Tue, 01 Mar 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149139</guid>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Modular Proof of Correctness for a Network Synchronizer</title>
<link>https://hdl.handle.net/1721.1/149138</link>
<description>A Modular Proof of Correctness for a Network Synchronizer
Fekete, A.; Lynch, N.; Shrira, L.
In this paper we offer a formal, rigorous proof of the correctness of Awerbuch's algorithm for network synchronization. We specify both the algorithm and the correctness condition using the I/O automaton model, which has previously been used to describe and verify algorithms for concurrency control and resource allocation. We show that the model is also a powerful tool for reasoning about distributed graph algorithms. Our proof of correctness follows closely the intuitive arguments made by the designer of the algorithm by exploiting the model's natural support for such important design techniques as stepwise refinement and modularity. In particular, since the algorithm uses simpler algorithms for synchronization within and between "clusters" of nodes, our proof can import as lemmas the correctness of these simpler algorithms.
</description>
<pubDate>Tue, 01 Sep 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149138</guid>
<dc:date>1987-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring Decision Trees Using the Minimum Description Length Principle</title>
<link>https://hdl.handle.net/1721.1/149137</link>
<description>Inferring Decision Trees Using the Minimum Description Length Principle
Quinlan, J. Ross; Rivest, Ronald L.
We explore the use of Rissanen's Minimum Description Length Principle for the construction of decision trees. Empirical results comparing this approach to other methods are given.
</description>
<pubDate>Tue, 01 Sep 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149137</guid>
<dc:date>1987-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lower Bounds for Recognizing Small Cliques on CRCW PRAM's</title>
<link>https://hdl.handle.net/1721.1/149136</link>
<description>Lower Bounds for Recognizing Small Cliques on CRCW PRAM's
Beame, Paul
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149136</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Semantics of Miranda's Algebraic Types</title>
<link>https://hdl.handle.net/1721.1/149135</link>
<description>The Semantics of Miranda's Algebraic Types
Bruce, Kim B.; Riecke, Jon G.
Miranda has two interesting features in its typing system: implicit polymorphism (also known as ML-style polymorphism) and algebraic types. Algebraic types create new types from old and can operate on arbitrary types. This paper argues that functions of types, or type constructors, best represent the meaning of algebraic types. Building upon this idea, we develop a denotational semantics for algebraic types. We first define a typed lambda calculus that specifies type constructors. A semantic model of type constructors is then built, using the ideal model as a basis. (The ideal model gives the most natural semantics for Miranda's implicit polymorphism.) The model is shown to be sound with respect to this lambda calculus. Finally, we demonstrate how to use the model to interpret algebraic types, and prove that the translation produces elements in the model.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149135</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Minimum-cost Circulations by Canceling Negative Cycles</title>
<link>https://hdl.handle.net/1721.1/149134</link>
<description>Finding Minimum-cost Circulations by Canceling Negative Cycles
Goldberg, Andrew V.; Tarjan, Robert E.
A classical algorithm for finding a minimum-cost circulation consists of repeatedly finding a residual cycle of negative cost and canceling it by pushing enough flow around the cycle to saturate an arc. We show that a judicious choice of cycles for canceling leads to a polynomial bound on the number of iterations in this algorithm. This gives a very simple strongly polynomial algorithm that uses no scaling. A variant of the algorithm that uses dynamic trees runs in O(nm(log n) min{log(nC), m log n}) time on a network of n vertices, m arcs, and arc costs of maximum absolute value C. This bound is comparable to those of the fastest previously known algorithms.
</description>
<pubDate>Wed, 01 Jul 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149134</guid>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Minimum-cost Circulations by Successive Approximation</title>
<link>https://hdl.handle.net/1721.1/149133</link>
<description>Finding Minimum-cost Circulations by Successive Approximation
Goldberg, Andrew V.; Tarjan, Robert E.
</description>
<pubDate>Wed, 01 Jul 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149133</guid>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formulation of Tradeoffs in Planning Under Uncertainty</title>
<link>https://hdl.handle.net/1721.1/149132</link>
<description>Formulation of Tradeoffs in Planning Under Uncertainty
Wellman, Michael P.
Planning under uncertainty with multiple, competing objectives is impossible when goals are represented as predicates and the effects of actions are modeled as deterministic functions of situations. Decision-theoretic models, on the other hand, do not address the problem of constructing strategies from more primitive representations of actions. In this proposal, I describe a method for formulating plans from large knowledge bases that can accommodate uncertain and partial satisfaction of goals. At the core of the planner is a dominance prover that derives admissibility properties of plan classes. The representation for the effects of actions is based on a qualitative formalism for asserting influences among variables. The planner makes decisions "up to tradeoffs," an intuitive description that seems to characterize the power of a dominance prover based on the qualitative influence formalism.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149132</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling Worst-case Performance of a Communication Protocol and Dynamic Resource Management</title>
<link>https://hdl.handle.net/1721.1/149131</link>
<description>Controlling Worst-case Performance of a Communication Protocol and Dynamic Resource Management
Awerbuch, Baruch
This paper raises a fundamental question, neglected so far in the literature: how to make a distributed algorithm robust against input errors and wrong probabilistic assumptions about the distribution of the inputs or of the link delays. We introduce a notion of a complexity-preserving protocol controller: this is an automatic procedure that controls the worst-case execution of any distributed algorithm. We then suggest a controller with poly-logarithmic overhead. We show that the problem of designing controllers is a special case of another problem, referred to as dynamic resource management. We generalize our solution to solve the latter problem. We believe that the techniques used are basic ones, and will be used to solve a variety of unrelated network problems. Our solution seems to be very practical, since the formal code of the protocol is very simple and thus easy to implement. The technique used in the solution appears to be interesting because a global resource is manipulated locally. This somewhat resembles the "parallel prefix" technique used extensively in parallel computing.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149131</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Space-efficient Algorithm for Finding the Connected Components of Rectangles in the Plane</title>
<link>https://hdl.handle.net/1721.1/149130</link>
<description>A Space-efficient Algorithm for Finding the Connected Components of Rectangles in the Plane
Leiserson, Charles E.; Phillips, Cynthia A.
We present an algorithm for determining the connectivity of a set of N rectangles in the plane, a problem central to avoiding aliasing in VLSI design rule checkers. Previous algorithms for this problem either worked slowly with a small amount of primary memory space, or worked quickly but used more space. Our algorithm uses O(W) primary memory space, where W, the scan width, is the maximum number of rectangles to cross any vertical cut. The algorithm runs in O(N lg N) time and requires no more than O(N) transfers between primary and secondary memory.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149130</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Multichip Partial Concentrator Switches</title>
<link>https://hdl.handle.net/1721.1/149129</link>
<description>Efficient Multichip Partial Concentrator Switches
Cormen, Thomas H.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149129</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Parallel Algorithms for (Δ+1)-coloring and Maximal Independent Set Problems</title>
<link>https://hdl.handle.net/1721.1/149128</link>
<description>Efficient Parallel Algorithms for (Δ+1)-coloring and Maximal Independent Set Problems
Goldberg, Andrew V.; Plotkin, Serge A.
We describe an efficient technique for breaking symmetry in parallel. The technique works especially well on rooted trees and on graphs with a small maximum degree. In particular, we can find a maximal independent set on a constant-degree graph in O(lg*n) time on an EREW PRAM using a linear number of processors. We show how to apply this technique to construct more efficient parallel algorithms for several problems, including coloring of planar graphs and (Δ+1)-coloring of constant-degree graphs. We also prove lower bounds for two related problems.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149128</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Murmur Clinic: An Auscultation Expert System</title>
<link>https://hdl.handle.net/1721.1/149127</link>
<description>Murmur Clinic: An Auscultation Expert System
Leong, Tze-Yun
Auscultation is a technique used in cardiac physical examination to detect irregularities by analyzing heart sounds. This paper reports on the development of Murmur Clinic, a cardiac auscultation expert system which is able to interpret and analyze auscultatory findings and to perform a tentative diagnosis based on a formalized diagnostic reasoning process. Descriptions of the scope addressed, the design, the diagnostic algorithm used, and the implementation of the system, as well as a sample session and a discussion of limitations and possible improvements, are presented.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149127</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication-efficient Parallel Graph Algorithms</title>
<link>https://hdl.handle.net/1721.1/149126</link>
<description>Communication-efficient Parallel Graph Algorithms
Leiserson, Charles E.; Maggs, Bruce M.
Communication bandwidth is a resource ignored by most parallel random-access machine (PRAM) models. This paper shows that many graph problems can be solved in parallel, not only with polylogarithmic performance, but with efficient communication at each step of the computation. We measure the communication requirements of an algorithm in a model called the distributed random-access machine (DRAM), in which communication cost is measured in terms of the congestion of memory access across cuts of an underlying network. The algorithms are based on a communication-efficient variant of the tree contraction technique due to Miller and Reif.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149126</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cellular Automata '86 Conference</title>
<link>https://hdl.handle.net/1721.1/149125</link>
<description>Cellular Automata '86 Conference
Bennett, Charles H.; Toffoli, Tommaso; Wolfram, Stephen
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149125</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Sharing in Group Work</title>
<link>https://hdl.handle.net/1721.1/149124</link>
<description>Data Sharing in Group Work
Greif, Irene; Sarin, Sunil
Data sharing is fundamental to computer-supported cooperative work: people share information through explicit communication channels and through their coordinated use of shared databases. Database support tools are therefore critical to the effective implementation of software for group work. This paper surveys data sharing requirements for group work and highlights new database technologies that are especially likely to affect our ability to build computer systems supporting group work.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149124</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomic Shared Register Access by Asynchronous Hardware</title>
<link>https://hdl.handle.net/1721.1/149123</link>
<description>Atomic Shared Register Access by Asynchronous Hardware
Vitányi, Paul M.B.; Awerbuch, Baruch
The contribution of this paper is two-fold. First, we describe two ways to construct multivalued atomic n-writer n-reader registers. The first solution uses atomic 1-writer 1-reader registers and unbounded tags. The other solution uses atomic 1-writer n-reader registers and bounded tags. The second part of the paper develops a general methodology to prove atomicity, by identifying a set of criteria which guarantee an effective construction for the required atomic mapping. We apply the method to prove atomicity of the two implementations for atomic multiwriter multireader registers.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149123</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Computation Group Research Summary June 1985 - July 1986</title>
<link>https://hdl.handle.net/1721.1/149122</link>
<description>Theory of Computation Group Research Summary June 1985 - July 1986
Theory of Computation Group
</description>
<pubDate>Fri, 01 Aug 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149122</guid>
<dc:date>1986-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Inequality Reasoning</title>
<link>https://hdl.handle.net/1721.1/149121</link>
<description>Hierarchical Inequality Reasoning
Sacks, Elisha P.
This paper describes a program called BOUNDER that proves inequalities between elementary functions over finite sets of constraints. Previous inequality algorithms perform well on some subset of the elementary functions, but poorly elsewhere. Although complex algorithms perform better than simple ones for most functions, exceptions exist. To overcome these problems, BOUNDER maintains a hierarchy of increasingly complex algorithms. When one fails to resolve an inequality, it tries the next. This strategy resolves more inequalities than any single algorithm. It also performs well on hard problems without wasting time on easier ones. The current hierarchy consists of four algorithms: bounds propagation, substitution, derivative inspection, and iterative approximation. Propagation is an extension of interval arithmetic that takes linear time, but ignores constraints between variables and multiple occurrences of variables. The remaining algorithms consider these factors, but require exponential time. Substitution is a new, provably correct algorithm for utilizing constraints between variables. An earlier attempt by Brooks does not terminate on all inputs and exploits fewer constraints. The final two algorithms analyze constraints between variables. Inspection examines the signs of partial derivatives. Iteration is based on several earlier algorithms from interval arithmetic.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149121</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Game Tree Searching by Min/Max Approximation</title>
<link>https://hdl.handle.net/1721.1/149120</link>
<description>Game Tree Searching by Min/Max Approximation
Rivest, Ronald L.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149120</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Artificial Intelligence Approach to Clinical Decision Making</title>
<link>https://hdl.handle.net/1721.1/149119</link>
<description>An Artificial Intelligence Approach to Clinical Decision Making
Szolovits, Peter; Kassirer, Jerome P.; Long, William J.; Moskowitz, Alan J.; Pauker, Stephen G.; Patil, Ramesh S.; Wellman, Michael P.
This memo is the text of a proposal from the MIT Laboratory for Computer Science Clinical Decision Making group to the National Library of Medicine, requesting support for a five-year program of research.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149119</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Retiming Synchronous Circuitry</title>
<link>https://hdl.handle.net/1721.1/149118</link>
<description>Retiming Synchronous Circuitry
Leiserson, Charles E.; Saxe, James B.
This paper shows how the technique of retiming can be used to transform a given synchronous circuit into a more efficient circuit under a variety of different cost criteria. We model a circuit as a graph, and we give an O(|V||E|log|V|) algorithm for determining an equivalent circuit with the smallest possible clock period. We show that the problem of determining an equivalent retimed circuit with minimum state (total number of registers) is polynomial-time solvable. This result yields a polynomial-time optimal solution to the problem of pipelining combinatorial circuitry with minimum register cost. We also give a characterization of optimal retiming based on an efficiently solvable mixed-integer linear programming problem.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149118</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Floyd-Hoare Logic Defines Semantics</title>
<link>https://hdl.handle.net/1721.1/149117</link>
<description>Floyd-Hoare Logic Defines Semantics
Meyer, Albert R.
The first-order partial correctness assertions provable in Floyd-Hoare logic about an uninterpreted while-program scheme determine the scheme up to equivalence. This settles an open problem of Meyer and Halpern. The simple proof of this fact carries over to other partial correctness axiomatizations given in the literature for wider classes of ALGOL-like program schemes.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149117</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Randomized Routing on Fat-trees</title>
<link>https://hdl.handle.net/1721.1/149116</link>
<description>Randomized Routing on Fat-trees
Greenberg, Ronald I.; Leiserson, Charles E.
Fat-trees are a class of routing networks for hardware-efficient parallel computation. This paper presents a randomized algorithm for routing messages on a fat-tree. The quality of the algorithm is measured in terms of the load factor of a set of messages to be routed, which is a lower bound on the time required to deliver the messages. We show that if a set of messages has load factor lambda on a fat-tree with n processors, the number of delivery cycles (routing attempts) that the algorithm requires is O(lambda + lg n lg lg n) with probability 1-O(1/n). The best previous bound was O(lambda lg n) for the off-line problem where switch settings can be determined in advance. In a VLSI-like model where hardware cost is equated with physical volume, the routing algorithm demonstrates that fat-trees are universal routing networks in the sense that any routing network can be efficiently simulated by a fat-tree of comparable hardware cost.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149116</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonsequential Computation and Laws of Nature</title>
<link>https://hdl.handle.net/1721.1/149115</link>
<description>Nonsequential Computation and Laws of Nature
Vitányi, Paul M.B.
Traditionally, computational complexity theory deals with sequential computations. In the computational models the underlying physics is hardly accounted for. This attitude has persisted in common models for parallel computations. Wrongly, we shall argue, since the laws of physics intrude forcefully when we want to obtain realistic estimates of the performance of parallel or distributed algorithms. First, we shall explain why it is reasonable to abstract away from the physical details in sequential computations. Second, we show why certain common approaches in the theory of parallel complexity do not give useful information about the actual complexity of the parallel computation. Third, we give some examples of the interplay between physical considerations and actual complexity of distributed computations.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149115</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing Change</title>
<link>https://hdl.handle.net/1721.1/149114</link>
<description>Representing Change
Sacks, Elisha
This paper evaluates knowledge representations for time-dependent information. It compares recent work by Moore, McDermott, and Allen with an earlier proposal by McCarthy and Hayes. Moore's formalism is faulted for its needless and unmotivated complexity and a simpler alternative is outlined. McDermott's formalism is proved inconsistent and unintuitive. Allen achieves the most by attempting the least. He proposes a simple plausible formalism, which makes few ontological or computational commitments. The paper concludes with a high-level discussion of the merits of formal logic as a representation for empirical knowledge.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149114</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Control in Computer Networks and Cross-sections of Colored Multidimensional Bodies</title>
<link>https://hdl.handle.net/1721.1/149113</link>
<description>Distributed Control in Computer Networks and Cross-sections of Colored Multidimensional Bodies
Kranakis, Evangelos; Vitányi, Paul M.B.
The number of messages to match a pair of processes in a multiprocessor network with mobile processes is a measure for the cost of setting up temporary communication between processes. We establish lower bounds on the average number of point-to-point transmissions between any pair of nodes in this context. The present analysis allows for the possibility of multiple transmissions (as opposed to a single one) between any two nodes, and also for the possibility of multiple queries (as opposed to the two, i.e., a post and a single query, considered before). Applications of the results include lower bounds on the number of messages for distributed s-matching, that is, matching a group of s processes, and distributed s-mutual exclusion, that is, s-1 processes may enter a critical section simultaneously, but s processes may not, for s &gt;= 2. The idea of the proof of the combinatorial result needed for this analysis is further extended to obtain a lower bound on the average number of colors occurring in random cross-sections of colored, multidimensional bodies in terms of the total (multidimensional) volume of each color in the whole body.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149113</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Power of the Queue</title>
<link>https://hdl.handle.net/1721.1/149112</link>
<description>The Power of the Queue
Li, Ming; Longpre, Luc; Vitányi, Paul M.B.
Queues, stacks (pushdown stores), and tapes are storage models which have direct applications in compiler design and the general design of algorithms. Whereas stacks (pushdown store or last-in-first-out storage) have been thoroughly investigated and are well understood, this is much less the case for queues (first-in-first-out storage). This paper contains a comprehensive study comparing queues to stacks and tapes. We address off-line machines with a one-way input, both deterministic and nondeterministic. The techniques rely on algorithmic information theory (Kolmogorov Complexity).
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149112</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Survey of Algorithms for Integrating Wafer-scale Systolic Arrays</title>
<link>https://hdl.handle.net/1721.1/149111</link>
<description>A Survey of Algorithms for Integrating Wafer-scale Systolic Arrays
Leighton, Tom; Leiserson, Charles
VLSI technologists are fast developing wafer-scale integration. Rather than partitioning a silicon wafer into chips as is usually done, the idea behind wafer-scale integration is to assemble an entire system (or network of chips) on a single wafer, thus avoiding the costs and performance loss associated with individual packaging of chips. A major problem with assembling a large system of microprocessors on a single wafer, however, is that some of the processors, or cells, on the wafer are likely to be defective. In this paper, we describe practical procedures for integrating wafer-scale systems "around" such faults. The procedures are designed to minimize the length of the longest wire in the system, thus minimizing the communication time between cells. Although the underlying network problems are NP-complete, we prove that the procedures are reliable by assuming a probabilistic model of cell failure.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149111</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interval and Recency-rank Source Coding: Two On-line Adaptive Variable-length Schemes</title>
<link>https://hdl.handle.net/1721.1/149110</link>
<description>Interval and Recency-rank Source Coding: Two On-line Adaptive Variable-length Schemes
Elias, Peter
In these schemes the encoder maps each message into a codeword in a prefix-free codeword set. In interval encoding the codeword is indexed by the interval since the last previous occurrence of that message, and the codeword set must be countably infinite. In recency rank encoding the codeword is indexed by the number of distinct messages in that interval, and there must be no fewer codewords than messages. The decoder decodes each codeword on receipt. Users need not know message probabilities but must agree on indexings, of the codeword set in an order of increasing length and of the message set in some arbitrary order. The average codeword length over a communications bout is never much larger than the value for an off-line scheme which maps the jth most frequent message in the bout into the jth shortest codeword in the given set, and is never too much larger than the value for off-line Huffman encoding of messages into the codeword set best for the bout message frequencies. Both schemes can do much better than Huffman coding when successive selections of each message type cluster much more than in the independent case.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149110</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge and Common Knowledge in a Byzantine Environment: Crash failures</title>
<link>https://hdl.handle.net/1721.1/149109</link>
<description>Knowledge and Common Knowledge in a Byzantine Environment: Crash failures
Dwork, Cynthia; Moses, Yoram
By analyzing the states of knowledge that the processors attain in an unreliable system of a simple type, we capture some of the basic underlying structure of such systems. In particular, we study what facts become common knowledge at various points in the execution of protocols in an unreliable system. This characterizes the simultaneous actions that can be carried out in such systems. For example, we obtain a complete characterization of the number of rounds required to reach Simultaneous Byzantine Agreement, given the pattern in which failures occur. From this we derive a new protocol for this problem that is optimal in all runs, rather than just always matching the worst-case lower bound. In some cases this protocol attains Simultaneous Byzantine Agreement in as few as 2 rounds. We also present a non-trivial simultaneous agreement problem called bivalent agreement for which there is a protocol that always halts in two rounds. Our analysis applies to simultaneous actions in general, and not just to Byzantine agreement. The lower bound proofs presented here generalize and simplify the previously known proofs.
</description>
<pubDate>Tue, 01 Jul 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149109</guid>
<dc:date>1986-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Application of Digital Broadcast Communication to Large Scale Information Systems</title>
<link>https://hdl.handle.net/1721.1/149108</link>
<description>An Application of Digital Broadcast Communication to Large Scale Information Systems
Gifford, David K.; Lucassen, John M.; Berline, Stephen T.
A new type of information system is described that combines personal computers, broadcast data communication, and bidirectional communication. The system is designed to use broadcast communication whenever possible to deliver information to personal computers, which are used for data storage, indexing, and retrieval. This paper starts with an overview of the system, and then discusses the problem of reliable digital broadcast communication in some detail. A parameterized broadcast protocol is described, and we show how to choose protocol parameters based on observed channel error characteristics. A flexible encryption-based protection system is included in the protocol. We discuss the implementation of the system on contemporary personal computers. A broadcast system based on these ideas is now operating in Boston area homes.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149108</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tight Bounds for Minimax Grid Matching, with Applications to the Average Case Analysis of Algorithms</title>
<link>https://hdl.handle.net/1721.1/149107</link>
<description>Tight Bounds for Minimax Grid Matching, with Applications to the Average Case Analysis of Algorithms
Leighton, Tom; Shor, Peter
The minimax grid matching problem is a fundamental combinatorial problem associated with the average case analysis of algorithms. The problem has arisen in a number of interesting and seemingly unrelated areas, including wafer-scale integration of systolic arrays, two-dimensional discrepancy problems, and testing pseudorandom number generators. However, the minimax grid matching problem is best known for its application to the maximum up-right matching problem. The maximum up-right matching problem was originally defined by Karp, Luby and Marchetti-Spaccamela in association with algorithms for 2-dimensional bin packing. More recently, the up-right matching problem has arisen in the average case analysis of on-line algorithms for 1-dimensional bin packing and dynamic allocation. In this paper, we solve both the minimax grid matching problem and the maximum up-right matching problem. As a direct result, we obtain tight upper bounds on the average case behavior of the best algorithms known for 2-dimensional bin packing, 1-dimensional on-line packing and on-line dynamic allocation. The results also solve a long-open question in mathematical statistics.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149107</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Randomized Data Structure for Ordered Sets</title>
<link>https://hdl.handle.net/1721.1/149106</link>
<description>A Randomized Data Structure for Ordered Sets
Bentley, Jon L.; Leighton, Frank Thomson; Lepley, Margaret; Stanat, Donald F.; Steele, J. Michael
In this paper, we consider a simple randomized data structure for representing ordered sets, and give a precise combinatorial analysis of the time required to perform various operations. In addition to a practical data structure, this work provides new and nontrivial probabilistic lower bounds and an instance of a practical problem whose randomized complexity is provably less than its deterministic complexity.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149106</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cellular Automata Supercomputers for Fluid Dynamics Modeling</title>
<link>https://hdl.handle.net/1721.1/149105</link>
<description>Cellular Automata Supercomputers for Fluid Dynamics Modeling
Margolis, Norman; Toffoli, Tommaso; Vichniac, Gerard
We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer.
</description>
<pubDate>Sun, 01 Dec 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149105</guid>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomic Data Abstractions in a Distributed Collaborative Editing System (Extended Abstract)</title>
<link>https://hdl.handle.net/1721.1/149104</link>
<description>Atomic Data Abstractions in a Distributed Collaborative Editing System (Extended Abstract)
Greif, Irene; Selinger, Robert; Weihl, William
This paper describes our experience implementing CES, a distributed Collaborative Editing System written in Argus, a language that includes facilities for managing long-lived distributed data. Argus provides atomic actions, which simplify the handling of concurrency and failures, and the mechanisms for implementing atomic data types, which ensure serializability and recoverability of actions that use them. This paper focuses on the support for atomicity in Argus, especially the support for building new atomic types. Overall the mechanisms in Argus made it relatively easy to build CES; however, we encountered interesting problems in several areas. For example, much of the processing of an atomic action in Argus is handled automatically by the run-time system; several examples are presented that illustrate areas where more explicit control in the implementations of atomic types would be useful.
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149104</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dataflow Architectures</title>
<link>https://hdl.handle.net/1721.1/149103</link>
<description>Dataflow Architectures
Arvind; Culler, David E.
Dataflow graphs are described as a machine language for parallel machines. Static and dynamic dataflow architectures are presented as two implementations of the abstract dataflow model. Static dataflow allows at most one token per arc in dataflow graphs and thus only approximates the abstract model where unbounded token storage per arc is assumed. Dynamic architectures tag each token and keep them in a common pool storage, thus permitting a better approximation of the abstract model. The relative merits of the two approaches are discussed. Functional data structures and I-structures are presented as two views of data structures which are both compatible with the dataflow model. These views are contrasted and compared in regard to efficiency and exploitation of potential parallelism in programs. A discussion of major dataflow projects and a prognosis for dataflow architectures are also presented.
</description>
<pubDate>Sun, 01 Jan 0002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149103</guid>
<dc:date>0002-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Width-3 Permutation Branching Programs</title>
<link>https://hdl.handle.net/1721.1/149102</link>
<description>Width-3 Permutation Branching Programs
Barrington, David A.
We consider a restricted class of width-3 branching programs where each column of nodes depends on a single variable, and the 0-edges and the 1-edges out of each column form a permutation. In this model, parity and the mod-3 function are easy to calculate, but the and-function is hard. We show that any function of n inputs can be calculated in length O(2^n), and that the and-function in particular requires length Ω(2^n) if the branching program has one accept node and one reject node.
</description>
<pubDate>Sun, 01 Dec 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149102</guid>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Packet Trains: Measurements and a New Model for Computer Network Traffic</title>
<link>https://hdl.handle.net/1721.1/149101</link>
<description>Packet Trains: Measurements and a New Model for Computer Network Traffic
Jain, Raj; Routhier, Shawn
Traffic measurements on a ring local area computer network at Massachusetts Institute of Technology are presented. The analysis of the arrival pattern shows that the arrival processes are neither Poisson nor Compound Poisson. An alternative model called "packet train" is proposed. In the train model, the traffic on the network consists of a number of packet streams between various pairs of nodes on the network. Each node-pair stream (or node-pair process, as we call it) consists of a number of trains. Each train consists of a number of packets (or cars) going in either direction (from node A to B or from node B to A). The inter-car gap is large (compared to packet transmission time) and random. The inter-train time is even larger. The Poisson and the Compound Poisson arrivals are shown to be special cases of the train arrival model. Another important observation is that the packet arrivals exhibit a "source locality." If a packet is seen on the network going from A to B, the probability of the next packet going from A to B or from B to A is very high. Implications of train arrivals and source locality for the design of bridges, gateways and reservation protocols are discussed. A number of open problems requiring development of analysis techniques for systems with train arrival processes are also described.
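As a toy illustration of the train model (function name and parameter values are ours, purely for exposition), one node-pair stream can be generated as:

```python
import random

# Toy generator for one node-pair stream under the train model: trains are
# separated by large inter-train gaps; within a train, cars (packets) are
# separated by smaller random inter-car gaps, and each car travels in one
# of the two directions (A-to-B or B-to-A).
def generate_stream(num_trains, cars_per_train, mean_intercar, mean_intertrain, seed=0):
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(num_trains):
        t += rng.expovariate(1.0 / mean_intertrain)    # big gap before each train
        for _ in range(cars_per_train):
            t += rng.expovariate(1.0 / mean_intercar)  # small gap between cars
            direction = rng.choice(["A-to-B", "B-to-A"])
            arrivals.append((t, direction))
    return arrivals

stream = generate_stream(num_trains=3, cars_per_train=4,
                         mean_intercar=1.0, mean_intertrain=100.0)
```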
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149101</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A New Max-flow Algorithm</title>
<link>https://hdl.handle.net/1721.1/149100</link>
<description>A New Max-flow Algorithm
Goldberg, Andrew V.
All previously known max-flow algorithms worked by finding augmenting paths, either one path at a time (the Ford and Fulkerson algorithm), or all shortest augmenting paths at once (by using the level network technique of Dinic). We introduce an alternative way of dealing with the problem. Our method is to push flow through the original network. The algorithm and its analysis are simple and intuitive, yet the algorithm does as well as any other network flow algorithm on dense graphs, achieving O(n^3) running time. The algorithm admits distributed and parallel implementations as well as a sequential implementation. The algorithm requires less storage than the only other parallel max-flow algorithm known (due to Shiloach and Vishkin), and its parallel running time is the same, O(n^2 log n). In fact, our algorithm uses a constant amount of storage for every edge or vertex of the network, allowing an implementation under a more realistic distributed model.
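The push-based method can be sketched compactly (a hedged, unoptimized rendering of the idea, not the paper's exact algorithm; `max_flow` and its adjacency-matrix interface are our own illustrative choices):

```python
def max_flow(cap, s, t):
    """Push-relabel sketch on an n x n capacity matrix; returns the s-t max-flow value."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n          # label: flow is only pushed "downhill" by exactly 1
    excess = [0] * n          # inflow minus outflow at each vertex
    height[s] = n
    for v in range(n):        # saturate every edge out of the source
        flow[s][v] = cap[s][v]
        flow[v][s] = -cap[s][v]
        excess[v] = cap[s][v]
    active = [v for v in range(n) if v != s and v != t and excess[v] > 0]
    while active:
        u = active[0]
        pushed = False
        for v in range(n):
            residual = cap[u][v] - flow[u][v]
            if residual > 0 and height[u] == height[v] + 1:
                d = min(excess[u], residual)     # push as much as possible
                flow[u][v] += d
                flow[v][u] -= d
                excess[u] -= d
                excess[v] += d
                if v != s and v != t and v not in active:
                    active.append(v)
                pushed = True
                if excess[u] == 0:
                    break
        if pushed:
            if excess[u] == 0:
                active.pop(0)
        else:                 # relabel: lift u just above its lowest residual neighbor
            height[u] = 1 + min(height[v] for v in range(n)
                                if cap[u][v] - flow[u][v] > 0)
    return sum(flow[s])

# Diamond network: s=0, t=3; two disjoint routes carrying 2 units each.
cap = [[0, 3, 2, 0],
       [0, 0, 0, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
assert max_flow(cap, 0, 3) == 4
```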
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149100</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed FIFO Allocation of Identical Resources Using Small Shared Space</title>
<link>https://hdl.handle.net/1721.1/149099</link>
<description>Distributed FIFO Allocation of Identical Resources Using Small Shared Space
Fischer, Michael J.; Lynch, Nancy A.; Burns, James; Borodin, Allan
We present a simple and efficient algorithm for the FIFO allocation of k identical resources among asynchronous processes which communicate via shared memory. The algorithm simulates a shared queue but uses exponentially fewer shared memory values, resulting in practical savings of time and space as well as program complexity. The algorithm is robust against process failure through unannounced stopping, making it attractive also for use in an environment of processes of widely differing speeds. In addition to its practical advantages, we show the algorithm is optimal (to within a constant factor) with respect to shared space complexity.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149099</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The CAM-7 Multiprocessor: A Cellular Automata Machine</title>
<link>https://hdl.handle.net/1721.1/149098</link>
<description>The CAM-7 Multiprocessor: A Cellular Automata Machine
Toffoli, Tommaso; Margolus, Norman
</description>
<pubDate>Sun, 01 Dec 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149098</guid>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dscribe: A Scribe Server</title>
<link>https://hdl.handle.net/1721.1/149097</link>
<description>Dscribe: A Scribe Server
Chung, Janice C.
This document gives a complete description of the design and implementation of Dscribe, the Scribe server. Dscribe is a program which allows users on a variety of hosts to have files processed remotely by the Scribe document preparation system. The first part of the document describes the functionality of Dscribe and the motivation for writing the program. It also gives an overview of how the program works. Later sections discuss important design issues and describe the implementation in detail.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149097</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Network Control by Bayesian Broadcast</title>
<link>https://hdl.handle.net/1721.1/149096</link>
<description>Network Control by Bayesian Broadcast
Rivest, Ronald L.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149096</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improvements of Yao's Results on Parity Circuits</title>
<link>https://hdl.handle.net/1721.1/149095</link>
<description>Improvements of Yao's Results on Parity Circuits
Hastad, Johan
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149095</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two Undecidability Results in Probabilistic Automata Theory</title>
<link>https://hdl.handle.net/1721.1/149094</link>
<description>Two Undecidability Results in Probabilistic Automata Theory
Kilian, Joseph J.
The language accepted by a probabilistic finite state acceptor with an isolated cutpoint is known to be regular. We show that determining if a cutpoint is isolated is undecidable.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149094</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Mixed Integer Linear Programming Problem Which is Efficiently Solvable</title>
<link>https://hdl.handle.net/1721.1/149093</link>
<description>A Mixed Integer Linear Programming Problem Which is Efficiently Solvable
Leiserson, Charles E.; Saxe, James B.
Efficient algorithms are known for the simple linear programming problem where each inequality is of the form xj-xi&lt;=aij. Furthermore, these techniques extend to the integer linear programming variant of the problem. This paper gives an efficient solution to the mixed-integer linear programming variant where some, but not necessarily all, of the unknowns are required to be integers. The algorithm we develop is based on a graph representation of the constraint system and runs in O(|V||E|+|V|^2 log|V|) time. It has several applications including optimal retiming of synchronous circuitry, VLSI layout compaction in the presence of power and ground buses, and PERT scheduling with periodic constraints.
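The known-efficient base case the abstract starts from can be made concrete (a minimal sketch under our own naming; the paper's contribution, handling mixed integer and real unknowns, is not reproduced here). A system of difference constraints is feasible iff its constraint graph has no negative cycle, and Bellman-Ford both tests this and produces a solution:

```python
def solve_difference_constraints(n, constraints):
    """Each constraint (i, j, a) asserts x[j] - x[i] ≤ a.  Returns a
    feasible assignment, or None if the system has no solution."""
    dist = [0.0] * n              # implicit zero-weight edges from a super-source
    for _ in range(n):            # Bellman-Ford passes over the constraint graph
        changed = False
        for i, j, a in constraints:
            if dist[j] > dist[i] + a:     # relax edge i → j of weight a
                dist[j] = dist[i] + a
                changed = True
        if not changed:
            return dist           # shortest-path distances satisfy every constraint
    return None                   # still changing: negative cycle, infeasible

# x1 - x0 ≤ 3, x2 - x1 ≤ -2, x2 - x0 ≤ 0
assert solve_difference_constraints(3, [(0, 1, 3), (1, 2, -2), (0, 2, 0)]) == [0.0, 0.0, -2.0]
```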
</description>
<pubDate>Mon, 01 Jul 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149093</guid>
<dc:date>1985-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unbiased Bits from Sources of Weak Randomness and Probabilistic Communication Complexity</title>
<link>https://hdl.handle.net/1721.1/149092</link>
<description>Unbiased Bits from Sources of Weak Randomness and Probabilistic Communication Complexity
Chor, Benny; Goldreich, Oded
A new model for weak random physical sources is presented. The new model strictly generalizes previous models (e.g. the Santha and Vazirani model [26]). The sources considered output strings according to probability distributions in which no single string is too probable. The new model provides a fruitful viewpoint on problems studied previously, such as: 1) Extracting almost perfect bits from sources of weak randomness: the question of possibility as well as the question of efficiency of such extraction schemes are addressed. 2) Probabilistic communication complexity: it is shown that most functions have linear communication complexity in a very strong probabilistic sense. 3) Robustness of BPP with respect to sources of weak randomness (generalizing a result of Vazirani and Vazirani [29]).
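A classical construction in this spirit, the inner product mod 2 of strings drawn from two independent weak sources, illustrates the kind of extraction scheme at issue (illustrative only; the paper's model and parameters are more general):

```python
def inner_product_bit(x_bits, y_bits):
    """One output bit: the inner product mod 2 of two equal-length bit strings,
    each assumed to come from an independent weak source."""
    total = 0
    for xb, yb in zip(x_bits, y_bits):
        total += xb * yb
    return total % 2

assert inner_product_bit([1, 0, 1, 1], [1, 1, 0, 1]) == 0   # 1+0+0+1 = 2, even
```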
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149092</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer-based Real-time Conferences</title>
<link>https://hdl.handle.net/1721.1/149091</link>
<description>Computer-based Real-time Conferences
Sarin, Sunil K.; Greif, Irene
A real-time conferencing system allows a group of users to conduct a problem-solving meeting from their workstations. Participants in such a conference use the computer to jointly view, edit, and process relevant information, and use voice communication to discuss the information they are sharing. General principles are presented in this paper for selecting a set of user functions in a real-time conferencing system. The available implementation strategies are reviewed and compared, with emphasis on the tradeoffs between reusing existing single-user interactive programs and writing new distributed multi-user programs. Network communication requirements for real-time conferences, and their potential impact on communication protocol standards, are discussed. Real-time conferencing is contrasted with asynchronous communication support such as electronic message systems and shared databases, and the need for the two to work together within the total system environment is emphasized.
</description>
<pubDate>Mon, 01 Jul 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149091</guid>
<dc:date>1985-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>What Price for Eliminating Expression Side-effects?</title>
<link>https://hdl.handle.net/1721.1/149090</link>
<description>What Price for Eliminating Expression Side-effects?
Hailperin, Max
Separating a programming language into side-effect-free expressions and effect-only statements should make the language more amenable to axiomatization, as well as providing benefits for style, pedagogy, and implementation efficiency (particularly in parallel-computing environments). This paper shows that such a division does not come at an unreasonable cost in programming convenience. First a dialect of Lisp is defined, in which a distinction is made between statements, which may have side-effects, and expressions, which may not. Next, a representative collection of examples from Abelson and Sussman's Structure and Interpretation of Computer Programs is coded in this dialect of Lisp. Most of the examples divide neatly into functional and imperative portions, and a few relatively clean transformations prove sufficient for the more stubborn cases.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149090</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Simulation in Medical Physiology: A Progress Report</title>
<link>https://hdl.handle.net/1721.1/149089</link>
<description>Qualitative Simulation in Medical Physiology: A Progress Report
Kuipers, Benjamin
This progress report describes the current status of the application of the QSIM qualitative simulation representation and algorithm to mechanisms drawn from medical physiology. QSIM takes a qualitative description of the structure of a mechanism and produces a qualitative description of its behavior. Here we apply it to a set of different, medically realistic examples, to represent the following kinds of knowledge: 1) Physiology: qualitative simulation handles the response of normally-functioning mechanisms for salt and water balance to a variety of different environmental perturbations. 2) Pathophysiology: local changes to the structure describing a normal mechanism produce a structure that accurately describes the pathophysiology of a set of diseases. 3) Abstraction: the complexity of human physiology can only be handled by organizing knowledge about it hierarchically. A hierarchy according to the temporal scale of equilibrium processes appears to be promising. 4) Cardiology: a complex structure describing maintenance of heart rate and blood pressure was adequately constructed during a short meeting with a set of computationally sophisticated physicians. 5) Future Directions: we can outline some of the representation barriers in the way of capturing a broader range of medical knowledge.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149089</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Analysis of a Network Resource Allocation Algorithm</title>
<link>https://hdl.handle.net/1721.1/149088</link>
<description>Probabilistic Analysis of a Network Resource Allocation Algorithm
Fischer, Michael J.; Griffeth, Nancy; Guibas, Leonidas J.; Lynch, Nancy A.
A distributed algorithm is presented for allocating a large number of identical resources (such as airline tickets) to requests which can arrive anywhere in a distributed network. Resources, once allocated, are never returned. The algorithm searches sequentially, exhausting certain neighborhoods of the request origin before proceeding to search at greater distances. Choice of search direction is made nondeterministically. Analysis of expected response time is simplified by assuming that the search direction is chosen probabilistically, that messages require constant time, that the network is a tree with all leaves at the same distance from the root, and that requests and resources occur only at leaves. It is shown that the response time is approximated by the number of messages sent during the execution of the algorithm, and that this number of messages is a nondecreasing function of the interarrival time for requests. Therefore, the worst case occurs when requests arrive so far apart that they are processed sequentially. The expected time for the sequential case of the algorithm is analyzed by standard techniques. This time is shown to be bounded by a constant, independent of the size of the network. It follows that the expected response time for the algorithm is bounded in the same way.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149088</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electing a Leader in a Synchronous Ring</title>
<link>https://hdl.handle.net/1721.1/149087</link>
<description>Electing a Leader in a Synchronous Ring
Frederickson, Greg N.; Lynch, Nancy A.
We consider the problem of electing a leader in a synchronous ring of n processors. We obtain both positive and negative results. On the one hand, we show that if processor ID's are chosen from some countable set, then there is an algorithm which uses only O(n) messages in the worst case. On the other hand, we obtain two lower bound results. If the algorithm is restricted to use only comparisons of ID's, then we obtain an Ω(n log n) lower bound for the number of messages required in the worst case. Alternatively, there is a (very fast-growing) function f with the following property. If the number of rounds is required to be bounded by some t in the worst case, and ID's are chosen from any set having at least f(n,t) elements, then any algorithm requires Ω(n log n) messages in the worst case.
</description>
<pubDate>Mon, 01 Jul 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149087</guid>
<dc:date>1985-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reaching Approximate Agreement in the Presence of Faults</title>
<link>https://hdl.handle.net/1721.1/149086</link>
<description>Reaching Approximate Agreement in the Presence of Faults
Dolev, Danny; Lynch, Nancy A.; Pinter, Shlomit S.; Stark, Eugene W.; Weihl, William E.
This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous, systems. The asynchronous agreement algorithm is an interesting contrast to a result of Fischer, Lynch, and Paterson, who show that exact agreement is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proven, and the algorithms presented are shown to be optimal.
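One round of such a successive-approximation scheme might look like the following (a hedged sketch of the general style, not the paper's exact rule; the function name is ours):

```python
# Each process collects everyone's current value, discards the t lowest and
# t highest entries (any of which could come from faulty processes), and
# moves to the mean of what remains; correct processes' values contract
# toward each other round by round.
def approx_agreement_round(values, t):
    trimmed = sorted(values)[t : len(values) - t]
    return sum(trimmed) / len(trimmed)

# 5 processes, at most t = 1 faulty (the outlier 1000.0):
values = [0.0, 1.0, 2.0, 3.0, 1000.0]
new_value = approx_agreement_round(values, 1)   # mean of [1.0, 2.0, 3.0]
assert new_value == 2.0
```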
</description>
<pubDate>Wed, 01 May 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149086</guid>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Byzantine Firing Squad Problem</title>
<link>https://hdl.handle.net/1721.1/149085</link>
<description>The Byzantine Firing Squad Problem
Burns, James E.; Lynch, Nancy A.
A new problem, the Byzantine Firing Squad problem, is defined and solved in two versions, Permissive and Strict. Both problems provide for synchronization of initially unsynchronized processors in a synchronous network, in the absence of a common clock and in the presence of a limited number of faulty processors. Solutions are given which take the same number of rounds as Byzantine Agreement but might transmit r times as many bits, where r is the number of rounds used. Additional solutions are provided which use at most one (Permissive) or two (Strict) additional rounds and send at most n^2 bits plus four times the number of bits sent by a chosen Byzantine Agreement algorithm.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149085</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Simulation of Mechanisms</title>
<link>https://hdl.handle.net/1721.1/149084</link>
<description>Qualitative Simulation of Mechanisms
Kuipers, Benjamin
Qualitative simulation is a key inference process in qualitative causal reasoning. However, the precise meaning of the different proposals and their relation to differential equations is often unclear. In this paper, we present a precise definition of qualitative structure and behavior descriptions as abstractions of differential equations and continuously differentiable functions. We present a new algorithm for qualitative simulation that generalizes the best features of existing algorithms, and allows direct comparisons among alternate approaches. Starting with a structural description abstracted from a differential equation, we prove that the QSIM algorithm is guaranteed to produce a qualitative behavior corresponding to any solution to the original equation. We also show that any qualitative simulation algorithm, because of its local point of view, will sometimes produce spurious qualitative behaviors: ones which do not correspond to any mechanism satisfying the structural description. These observations suggest specific types of care that must be taken in designing applications of qualitative causal reasoning systems, and in constructing and validating a knowledge base of mechanism descriptions.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149084</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalized Planar Matching</title>
<link>https://hdl.handle.net/1721.1/149083</link>
<description>Generalized Planar Matching
Berman, Fran; Leighton, Tom; Shor, Peter; Snyder, Larry
In this paper, we prove that maximum planar H-matching (the problem of determining the maximum number of node-disjoint copies of the fixed graph H contained in a variable planar graph G) is NP-complete for any connected planar graph H with three or more nodes. We also show that perfect planar H-matching is NP-complete for any connected outerplanar graph H with three or more nodes, and is, somewhat surprisingly, solvable in linear time for triangulated H with four or more nodes. The results generalize and unify several special-case results proved in the literature. The techniques can also be applied to solve a variety of problems, including the optimal tile salvage problem from wafer-scale integration. Although we prove that the optimal tile salvage problem and others like it are NP-complete, we also describe provably good approximation algorithms that are suitable for practical applications.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149083</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tight Bounds on the Complexity of Parallel Sorting</title>
<link>https://hdl.handle.net/1721.1/149082</link>
<description>Tight Bounds on the Complexity of Parallel Sorting
Leighton, Tom
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149082</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Patterns in Trees</title>
<link>https://hdl.handle.net/1721.1/149081</link>
<description>Patterns in Trees
Dershowitz, Nachum; Zaks, Shmuel
A very general enumeration formula for occurrences of a pattern, or set of patterns, in the class of ordered trees with a given number of edges is presented, and its wide usefulness is demonstrated.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149081</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Consensus in the Presence of Partial Synchrony</title>
<link>https://hdl.handle.net/1721.1/149080</link>
<description>Consensus in the Presence of Partial Synchrony
Dwork, Cynthia; Lynch, Nancy A.; Stockmeyer, Larry
The concept of partial synchrony in a distributed system is introduced. Partial synchrony lies between the cases of a synchronous system and an asynchronous system. In a synchronous system, there is a known fixed upper bound Δ on the time required for a message to be sent from one processor to another and a known fixed upper bound Φ on the relative speeds of different processors. In an asynchronous system, no fixed upper bounds Δ and Φ exist. In one version of partial synchrony, fixed bounds Δ and Φ exist but they are not known a priori. The problem is to design protocols which work correctly in the partially synchronous system regardless of the actual values of the bounds Δ and Φ. In another version of partial synchrony, the bounds are known but they are only guaranteed to hold starting at some unknown time T, and protocols must be designed to work correctly regardless of when the time T occurs. Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models. Lower bounds are also given which show in many cases that our protocols are optimal with respect to the number of faults tolerated. Our consensus protocols for partially synchronous processors use new protocols for fault-tolerant "distributed clocks" which allow partially synchronous processors to reach some approximately common notion of time.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149080</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Colored Ticket Algorithm</title>
<link>https://hdl.handle.net/1721.1/149079</link>
<description>The Colored Ticket Algorithm
Fischer, Michael J.; Lynch, Nancy A.; Burns, James; Borodin, Allan
Upper and lower bounds are proved for shared space requirements for solution of a problem involving resource allocation among asynchronous processes. The problem is to allocate some number, k≥1, of resources, in an environment in which processes can fail by stopping without warning. Allocation is to be as FIFO as possible, subject to variations imposed by the possibility of failures.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149079</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity of Network Synchronization</title>
<link>https://hdl.handle.net/1721.1/149078</link>
<description>Complexity of Network Synchronization
Awerbuch, Baruch
In this paper we investigate the problem of simulating a synchronous network by an asynchronous one. We propose a new simulation technique, referred to as a "Synchronizer", which is a new, simple methodology for designing efficient distributed algorithms in asynchronous networks. Our Synchronizer exhibits a trade-off between its communication and time complexities, which is proved to be within a constant factor of the lower bound.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149078</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal for a Small Scheme Implementation</title>
<link>https://hdl.handle.net/1721.1/149077</link>
<description>Proposal for a Small Scheme Implementation
Schooler, Richard; Stamos, James W.
Scheme is a lexically scoped dialect of LISP developed at MIT. In this report we determine the feasibility of implementing a Scheme-based programming/application environment on a contemporary personal computer such as the Apple Macintosh. The absence of virtual memory, coupled with a limitation on the maximum amount of physical memory, means that space is at a premium. We suggest the use of bytecodes and sketch a possible instruction set. Because of space constraints, tail-recursion optimization and an efficient mechanism for the reclamation of inaccessible contexts are also examined. Using the built-in operating system and user interface of the Macintosh realizes speed, functionality, and friendliness, but raises a number of interesting issues. For example, the Pascal and assembler routines make many assumptions about data representation, type checking, and parameter passing. Since an implementation of Scheme is likely to have radically different conventions, the two environments must be interfaced smoothly and efficiently. In addition to the bytecode instruction set, we specify the virtual machine informally, discuss the implementation of basic and advanced features, estimate the performance of such an implementation, and finally evaluate the proposed design.
</description>
<pubDate>Mon, 01 Oct 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149077</guid>
<dc:date>1984-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simple and Efficient Randomized Byzantine Agreement Algorithm</title>
<link>https://hdl.handle.net/1721.1/149076</link>
<description>A Simple and Efficient Randomized Byzantine Agreement Algorithm
Chor, Benny; Coan, Brian A.
A new randomized Byzantine agreement algorithm is presented. This algorithm operates in a synchronous system of n processors, at most t of which can fail. The algorithm reaches agreement in O(t/log n) expected rounds and O(n^2 t/log n) expected message bits independent of the distribution of processor failures. This performance is further improved to a constant expected number of rounds and O(n^2) message bits if the distribution of processor failures is assumed to be uniform. In either event, the algorithm improves on the known lower bound on rounds for deterministic algorithms. Some other advantages of the algorithm are that it requires no cryptographic techniques, that the amount of local computation is small, and that the expected number of random bits used per processor is only one. It is argued that in many practical applications of Byzantine agreement, the randomized algorithm of this paper achieves superior performance.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149076</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A New Fault-tolerant Algorithm for Clock Synchronization</title>
<link>https://hdl.handle.net/1721.1/149075</link>
<description>A New Fault-tolerant Algorithm for Clock Synchronization
Lundelius, Jennifer; Lynch, Nancy A.
We describe a new fault-tolerant algorithm for solving a variant of Lamport's clock synchronization problem. The algorithm is designed for a system of distributed processes that communicate by sending messages. Each process has its own read-only physical clock whose drift rate from real time is very small. By adding a value to its physical clock time, the process obtains its local time. The algorithm solves the problem of maintaining closely synchronized local times, assuming that processes' local times are closely synchronized initially. The algorithm is able to tolerate the failure of just under a third of the participating processes. It maintains synchronization to within a small constant, whose magnitude depends upon the rate of clock drift, the message delivery time, and the initial closeness of synchronization. We also give a characterization of how far the clocks drift from real time. Reintegration of a repaired process can be accomplished using a slight modification of the basic algorithm. A similar-style algorithm can also be used to achieve synchronization initially.
</description>
<pubDate>Sun, 01 Jul 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149075</guid>
<dc:date>1984-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Software for Interactive On-line Conferences</title>
<link>https://hdl.handle.net/1721.1/149074</link>
<description>Software for Interactive On-line Conferences
Sarin, Sunil K.; Greif, Irene
A layered architecture for the implementation of real-time conferences is presented. In a real-time conference a group of users, each at his or her own workstation, share identical views of on-line application information. The users cooperate in a problem solving task by interactively modifying or editing the shared view or the underlying information, and can use a voice communication channel for discussion and negotiation. The lower layer in this architecture, named Ensemble, supports the sharing of arbitrary application-defined objects among the participants of a conference, and the manipulation of these objects via one or more application-defined groups of commands called activities. Ensemble provides generic facilities for sharing objects and activities, and for dynamically adding and removing participants in a conference; these can be used in constructing real-time conferencing systems for many different applications. An example is presented of how the Ensemble functions can be used to implement a shared bitmap with independent participant cursors. The relation between this layered architecture and the ISO Open Systems Interconnection reference model is discussed. In particular, it is argued that Ensemble represents a plausible first step toward a Session-layer protocol for "multi-endpoint connections," a neglected area of communication protocol development.
</description>
<pubDate>Sun, 01 Jul 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149074</guid>
<dc:date>1984-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Naming and Directory Issues in Message Transfer Systems</title>
<link>https://hdl.handle.net/1721.1/149073</link>
<description>Naming and Directory Issues in Message Transfer Systems
Sirbu, Marvin A., Jr.; Sutherland, Juliet B.
A message transfer system requires some means for users to determine the addresses of their correspondents. A Directory Service aids users in identifying a particular correspondent and the correspondent's address. In this paper we discuss the technical, economic, organizational and political requirements which must be satisfied by a directory service. We develop a language for describing alternative architectures for directory service, borrowed from notions of hierarchical computer file system design. We propose a system of naming and directory services which meets the stated requirements, based on names which specify a path through a sequence of directories. Finally, we compare our proposal to several alternative designs for directory service which have appeared in the literature.
</description>
<pubDate>Sun, 01 Jul 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149073</guid>
<dc:date>1984-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three-dimensional Circuit Layouts</title>
<link>https://hdl.handle.net/1721.1/149072</link>
<description>Three-dimensional Circuit Layouts
Leighton, Tom; Rosenberg, Arnold
</description>
<pubDate>Fri, 01 Jun 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149072</guid>
<dc:date>1984-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Distributed Algorithms for Sorting and Ranking</title>
<link>https://hdl.handle.net/1721.1/149071</link>
<description>Optimal Distributed Algorithms for Sorting and Ranking
Zaks, Shmuel
We study the problems of sorting and ranking n processors that have initial values - not necessarily distinct - in a distributed system. Sorting means that the initial values have to move around in the network and be assigned to the processors according to their distinct identities, while ranking means that the numbers 1,2,...,n have to be assigned to the processors according to their initial values; ties between initial values can be broken in any chosen way. Assuming a tree network, and assuming that a message can contain an initial value, an identity or a rank, we present an algorithm for the ranking problem that uses, in the worst case, at most 1/2n^2 + O(n) such messages. The algorithm is then extended to perform sorting, using in the worst case at most 3/4n^2 + O(n) messages. Both algorithms use a total of O(n) space. The algorithms are extended to general networks. The expected behavior of these algorithms for three classes of trees is discussed. Assuming that the initial values, identities and ranks can only be compared within themselves, lower bounds of 1/2n^2 and 3/4n^2 messages are proved for a worst-case execution of any algorithm solving the ranking and sorting problems, respectively.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149071</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>RSA/Rabin Least Significant Bits Are 1/2 + 1/poly(log N) Secure</title>
<link>https://hdl.handle.net/1721.1/149070</link>
<description>RSA/Rabin Least Significant Bits Are 1/2 + 1/poly(log N) Secure
Chor, Benny; Goldreich, Oded
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149070</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Synchronous Communication on the Problem of Electing a Leader in a Ring</title>
<link>https://hdl.handle.net/1721.1/149069</link>
<description>The Impact of Synchronous Communication on the Problem of Electing a Leader in a Ring
Lynch, Nancy A.; Frederickson, Greg N.
We consider the problem of electing a leader in a synchronous ring of n processors. We obtain both positive and negative results. On the one hand, we show that if processor ID's are chosen from some countable set, then there is an algorithm which uses only O(n) messages in the worst case. On the other hand, we obtain two lower bound results. If the algorithm is restricted to use only comparisons of ID's, then we obtain an Ω(n log n) lower bound for the number of messages required in the worst case. Alternatively, there is a (very fast-growing) function f with the following property. If the number of rounds is required to be bounded by some t in the worst case, and ID's are chosen from any set having at least f(n,t) elements, then any algorithm requires Ω(n log n) messages in the worst case.
</description>
<pubDate>Sun, 01 Apr 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149069</guid>
<dc:date>1984-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Semantics of Local Storage, or What Makes the Free-list Free?</title>
<link>https://hdl.handle.net/1721.1/149068</link>
<description>The Semantics of Local Storage, or What Makes the Free-list Free?
Halpern, Joseph Y.; Meyer, Albert R.; Trakhtenbrot, B.A.
Denotational semantics for an ALGOL-like language with finite-mode procedures, blocks with local storage, and sharing (aliasing) is given by translating programs into an appropriately typed lambda-calculus. Procedures are entirely explained at a purely functional level - independent of the interpretation of program constructs - by continuous models for lambda-calculus. However, the usual (cpo) models are not adequate to model local storage allocation for blocks because storage overflow presents an apparent discontinuity. New domains of store models are offered to solve this problem.
</description>
<pubDate>Sun, 01 Apr 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149068</guid>
<dc:date>1984-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Sequential Nature of Unification</title>
<link>https://hdl.handle.net/1721.1/149067</link>
<description>On the Sequential Nature of Unification
Dwork, Cynthia; Kanellakis, Paris C.; Mitchell, John C.
The problem of unification of terms is log-space complete for P. In deriving this lower bound no use is made of the potentially concise representation of terms by directed acyclic graphs. In addition, the problem remains complete even if infinite substitutions are allowed. A consequence of this result is that parallelism cannot significantly improve on the best sequential solutions for unification. The "dual" problem of computing the congruence closure of an equivalence relation is also log-space complete for P. However, we show that for the problem of term matching, an important subcase of unification, there is a good parallel algorithm using O(log^2 n) time and n^O(1) processors on a PRAM. For the O(log^2 n) parallel time upper bound we assume that the terms are presented by directed acyclic graphs; if the longer string representation is used we obtain an O(log n) parallel time bound.
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149067</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Numbers of Close-and-equal Pairs of Bits in a String (with Implications on the Security of RSA'S L.S.B.)</title>
<link>https://hdl.handle.net/1721.1/149066</link>
<description>On the Numbers of Close-and-equal Pairs of Bits in a String (with Implications on the Security of RSA'S L.S.B.)
Goldreich, Oded
We consider the following problem: Let s be an n-bit string with m ones and n-m zeros. Denote by CE_t(s) the number of pairs of equal bits which are within distance t apart in the string s. What is the minimum value of CE_t(·), when the minimum is taken over all n-bit strings which consist of m ones and n-m zeros? We prove a (reasonably) tight lower bound for this combinatorial problem. Implications for the cryptographic security of the least significant bit of a message encrypted by the RSA scheme follow. E.g., under the assumption that the RSA is unbreakable, there exists no probabilistic polynomial-time algorithm which guesses the least significant bit of a message (correctly) with probability at least 0.725, when given the encryption of the message using the RSA. This is the best result known concerning the security of RSA's least significant bit.
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149066</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Assemble Tree Machines</title>
<link>https://hdl.handle.net/1721.1/149065</link>
<description>How to Assemble Tree Machines
Bhatt, Sandeep Nautam; Leiserson, Charles E.
Many researchers have proposed that ensembles of processing elements be organized as trees. This paper explores how large tree machines can be assembled efficiently from smaller components. A principal constraint considered is the limited number of external connections from an integrated circuit chip. We also explore the emerging capabilities of restructurable VLSI, which allows a chip to be customized after fabrication. We give a linear-area chip of m processors and only four off-chip connections which can be used as the sole building block to construct an arbitrarily large complete binary tree. We also present a restructurable linear-area layout of m processors with O(lg m) pins that can realize an arbitrary binary tree of any size. This layout is based on a solution to the following graph-theoretic problem: given a tree in which each vertex is either black or white, determine how many edges need to be cut in order to bisect the tree into equal-size components, each containing exactly half the black and half the white vertices. These ideas extend to more general graphs using separator theorems and bifurcators.
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149065</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empirical Analysis of a Token Ring Network</title>
<link>https://hdl.handle.net/1721.1/149064</link>
<description>Empirical Analysis of a Token Ring Network
Feldmeier, David C.
The MIT Laboratory for Computer Science 10 Megabit token ring local area network was monitored. Over a one-week period, 7 million packets and 1.3 billion bytes passed by the monitor. This thesis compares the MIT ring traffic with that observed on the Xerox Palo Alto Research Center experimental Ethernet by Shoch and Hupp.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149064</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Application of Number Theory to the Organization of Raster Graphics Memory</title>
<link>https://hdl.handle.net/1721.1/149063</link>
<description>An Application of Number Theory to the Organization of Raster Graphics Memory
Chor, Benny; Leiserson, Charles E.; Rivest, Ronald L.; Shearer, James B.
A high-resolution raster-graphics display is usually combined with processing power and a memory organization that facilitates basic graphics operations. For many applications, including interactive text processing, the ability to quickly move or copy small rectangles of pixels is essential. This paper proposes a novel organization of raster-graphics memory that permits all small rectangles to be moved efficiently. The memory organization is based on a doubly periodic assignment of pixels to M memory chips according to a "Fibonacci" lattice. The memory organization guarantees that if a rectilinearly oriented rectangle contains fewer than M/√5 pixels, then all pixels will reside in different memory chips, and thus can be accessed simultaneously. We also define a continuous analogue of the problem, which can be posed as, "What is the maximum density of a set of points in the plane such that no two points are contained in the interior of a rectilinearly oriented rectangle of unit area?" We show the existence of such a set with density 1/√5, and prove this is optimal by giving a matching upper bound.
</description>
<pubDate>Sun, 01 Apr 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149063</guid>
<dc:date>1984-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>On BPP</title>
<link>https://hdl.handle.net/1721.1/149062</link>
<description>On BPP
Zachos, Stathis K.; Heller, Hans
</description>
<pubDate>Thu, 01 Dec 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149062</guid>
<dc:date>1983-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reaching Approximate Agreement in the Presence of Faults</title>
<link>https://hdl.handle.net/1721.1/149061</link>
<description>Reaching Approximate Agreement in the Presence of Faults
Dolev, Danny; Lynch, Nancy A.; Pinter, Shlomit S.; Stark, Eugene W.; Weihl, William E.
This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous, systems. The asynchronous agreement algorithm is an interesting contrast to a result of Fischer, Lynch, and Paterson, who show that exact agreement is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proven, and the algorithms presented are shown to be optimal.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149061</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Concurrent Identification Protocols</title>
<link>https://hdl.handle.net/1721.1/149060</link>
<description>On Concurrent Identification Protocols
Goldreich, Oded
We consider communication networks in which it is not possible to identify the source of a message which is broadcast through the network. A natural question is whether it is possible for two users to identify each other concurrently, through a secure two-party protocol. We show that more than the existence of a secure Public Key Cryptosystem must be assumed in order to present a secure protocol for concurrent identification. We present two concurrent identification protocols: the first relies on the existence of a center who has distributed "identification tags" to the users, while the second relies on the distribution of "experimental sequences" by instances of a pre-protocol which have taken place between every two users.
</description>
<pubDate>Thu, 01 Dec 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149060</guid>
<dc:date>1983-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Markov Chain Tree Theorem</title>
<link>https://hdl.handle.net/1721.1/149059</link>
<description>The Markov Chain Tree Theorem
Leighton, Frank Thomson; Rivest, Ronald L.
Let M be a finite first-order stationary Markov chain. We define an arborescence to be a set of edges in the directed graph for M having at most one edge out of every vertex, no cycles, and maximum cardinality. The weight of an arborescence is defined to be the product over each edge in the arborescence of the probability of the transition associated with the edge. We prove that if M starts in state i, its limiting average probability of being in state j is proportional to the sum of the weights of all arborescences having a path from i to j and no edge out of j. We present two proofs. The first is derived from simple graph theoretic identities. The second is derived from the closely related Matrix Tree Theorem.
</description>
<pubDate>Tue, 01 Nov 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149059</guid>
<dc:date>1983-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating a Probability using Finite Memory</title>
<link>https://hdl.handle.net/1721.1/149058</link>
<description>Estimating a Probability using Finite Memory
Leighton, Frank Thomson; Rivest, Ronald L.
</description>
<pubDate>Tue, 01 Nov 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149058</guid>
<dc:date>1983-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Searching in Sorted Linked Lists</title>
<link>https://hdl.handle.net/1721.1/149057</link>
<description>Probabilistic Searching in Sorted Linked Lists
Leighton, Frank Thomson; Lepley, Margaret
Janko [2] and Bentley, Stanat, and Steele [1] have described probabilistic procedures for data manipulation in sorted linked lists. Their procedures are based on an algorithm which performs a Member search operation using 2N^1/2 + O(1) expected steps, where N is the number of elements in the list. In addition, Bentley, Stanat and Steele have shown that every Member search algorithm requires (2N)^1/2 + Ω(1) expected steps. In this paper, we improve the lower bound result in order to prove that the known algorithm for Member search is optimal.
</description>
<pubDate>Tue, 01 Nov 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149057</guid>
<dc:date>1983-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Denotational to Operational and Axiomatic Semantics for ALGOL-like Languages: An Overview</title>
<link>https://hdl.handle.net/1721.1/149056</link>
<description>From Denotational to Operational and Axiomatic Semantics for ALGOL-like Languages: An Overview
Trakhtenbrot, B.A.; Halpern, Joseph Y.; Meyer, Albert R.
The advantages of denotational over operational semantics are argued. A denotational semantics is provided for an ALGOL-like language with finite-mode procedures, blocks with local storage, and sharing (aliasing). Procedure declarations are completely explained in the usual framework of complete partial orders, but cpo's are inadequate for the semantics of blocks, and a new class of store models is developed. Partial correctness theory over store models is developed for commands which may contain calls to global procedures, but do not contain function procedures returning storable values.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149056</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding ALGOL: A View of a Recent Convert to Denotational Semantics</title>
<link>https://hdl.handle.net/1721.1/149055</link>
<description>Understanding ALGOL: A View of a Recent Convert to Denotational Semantics
Meyer, Albert R.
The advantages of denotational over copy-rule semantics are argued. A denotational semantics is indicated for an ALGOL-like language with finite-mode procedures, blocks with local storage, and sharing (aliasing). Procedure declarations are completely explained in the usual framework of complete partial orders, but cpo's are inadequate for the semantics of blocks, and a new class of store models is described. The semantics justifies a proof system for partial correctness of commands containing global procedures.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149055</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Construct Random Functions</title>
<link>https://hdl.handle.net/1721.1/149054</link>
<description>How to Construct Random Functions
Goldreich, Oded; Goldwasser, Shafi; Micali, Silvio
We assume that functions that are one-way in a very weak sense exist. We prove that in probabilistic polynomial time it is possible to construct deterministic polynomial time computable functions g:{1,…,2^k} -&gt; {1,…,2^k} that cannot be distinguished by any probabilistic polynomial time algorithm from a random function. Loosely speaking, g provides random access to a k2^k-bit long pad whose entries record the outcome of independent coin flips. This complexity theoretic result has many important applications in cryptography, protocols, and hashing.
</description>
<pubDate>Mon, 01 Nov 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149054</guid>
<dc:date>1982-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Demand-Driven Evaluation (II)</title>
<link>https://hdl.handle.net/1721.1/149053</link>
<description>Efficient Demand-Driven Evaluation (II)
Pingali, Keshav; Arvind
In Part I of this paper, we presented a scheme whereby a compiler could propagate demands through programs in a powerful stream language L. A data-driven evaluation of the transformed program performed exactly the same computation as a demand-driven evaluation of the original program. In this paper, we explore a different transformation which trades the complexity of demand propagation for a bounded amount of extra computation on some data lines.
</description>
<pubDate>Thu, 01 Sep 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149053</guid>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Demand-Driven Evaluation (I)</title>
<link>https://hdl.handle.net/1721.1/149052</link>
<description>Efficient Demand-Driven Evaluation (I)
Pingali, Keshav; Arvind
We describe a program transformation technique for programs in a general stream language L whereby a data-driven evaluation of the transformed program performs exactly the same computation as a demand-driven evaluation of the original program. The transformational technique suggests a simple denotational characterization of demand-driven evaluation.
</description>
<pubDate>Thu, 01 Sep 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149052</guid>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two Fundamental Issues in Multiprocessing: The Dataflow Solution</title>
<link>https://hdl.handle.net/1721.1/149051</link>
<description>Two Fundamental Issues in Multiprocessing: The Dataflow Solution
Arvind; Iannucci, Robert A.
To exploit the parallelism inherent in algorithms, any multiprocessor system must address two very basic issues - long memory latencies and waits for synchronization events. It is argued on the basis of the evolution of high performance computers that the processor idle time induced by memory latency and synchronization waits cannot be reduced simultaneously in von Neumann style multiprocessors. Dataflow architectures are offered as an alternative because, given enough parallelism in the program, they can reduce both latency and synchronization costs.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149051</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Program for Therapy of Acid-base and Electrolyte Disorders</title>
<link>https://hdl.handle.net/1721.1/149050</link>
<description>A Program for Therapy of Acid-base and Electrolyte Disorders
Bromley, Hank
This thesis describes work done on the therapy component of an on-going project for the diagnosis and management of acid-base and electrolyte disorders. Therapeutic interventions can be classified as symptomatic or etiologic, and as acute or chronic. We have focused on the problem of acute symptomatic therapy. Based on observation of clinical practice, we have developed a formalization of the domain-independent aspects of the task of acute symptomatic therapy, then applied the formalization to the particular field of acid-base and electrolyte disorders. A rule-based program named ABET (the Acid-Base and Electrolyte Therapy Advisor) has been designed and written to test this formalization. The thesis presents the methods used by ABET, the program's implementation, a sample session, and a discussion of limitations and possible improvements.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149050</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of an Office Analysis Methodology</title>
<link>https://hdl.handle.net/1721.1/149049</link>
<description>Evaluation of an Office Analysis Methodology
Sutherland, Juliet; Sirbu, Marvin
We have developed a model of the office that describes semi-structured office work. This model underlies an office analysis methodology and an office-specification language. An evaluation of the usefulness and practicality of the model, the specification language, and the methodology has shown that the model is clearly a useful approach to understanding offices, the specification language is interesting but not as useful in practice as we had hoped, and the methodology is useful but could be improved. We have developed a new methodology that addresses the issue of diagnosis as well as description. This new methodology is still being evaluated, but early results show that it is as useful for training new analysts as the old methodology.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149049</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Approximation Algorithm for Manhattan Routing</title>
<link>https://hdl.handle.net/1721.1/149048</link>
<description>An Approximation Algorithm for Manhattan Routing
Baker, Brenda S.; Bhatt, Sandeep N.; Leighton, Frank Thomson
Density has long been known to be an important measure of difficulty for Manhattan routing. In this paper, we identify a second important measure of difficulty, which we call flux. We show that flux, like density, is a lower bound on channel width. In addition, we present a linear-time algorithm which routes any multipoint net Manhattan routing problem with density d and flux f in a channel of width 2d+O(f). (For 2-point nets, the bound is d+O(f).) Thus we show that Manhattan routing is one of the NP-complete problems for which there is a provably good approximation algorithm.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149048</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planar Embedding of Planar Graphs</title>
<link>https://hdl.handle.net/1721.1/149047</link>
<description>Planar Embedding of Planar Graphs
Dolev, Danny; Leighton, Frank Thomson; Trickey, Howard
Planar embedding with minimal area of graphs on an integer grid is an interesting problem in VLSI theory. Valiant [V] gave an algorithm to construct a planar embedding for trees in linear area; he also proved that there are planar graphs that require quadratic area. We fill in a spectrum between Valiant's results by showing that an N-node planar graph has a planar embedding with area O(NF), where F is a bound on the path length from any node to the exterior face. In particular, an outerplanar graph can be embedded without crossovers in linear area. This bound is tight, up to constant factors: for any N and F, there exist graphs requiring Ω(NF) area for planar embedding. Also, finding a minimal embedding area is shown to be NP-complete for forests, and hence for more general types of graphs.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149047</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wafer-scale Integration of Systolic Arrays</title>
<link>https://hdl.handle.net/1721.1/149046</link>
<description>Wafer-scale Integration of Systolic Arrays
Leighton, Frank Thomson; Leiserson, Charles E.
VLSI technologists are fast developing wafer-scale integration. Rather than partitioning a silicon wafer into chips as is usually done, the idea behind wafer-scale integration is to assemble an entire system (or network of chips) on a single wafer, thus avoiding the costs and performance loss associated with individual packaging of chips. A major problem with assembling a large system of microprocessors on a single wafer, however, is that some of the processors, or cells, on the wafer are likely to be defective. In this paper, we describe practical procedures for integrating wafer-scale systems "around" such faults. The procedures are designed to minimize the length of the longest wire in the system, thus minimizing the communication time between cells. Although the underlying network problems are NP-complete, we prove that the procedures are reliable by assuming a probabilistic model of cell failure. We also discuss applications of this work to problems in VLSI layout theory, graph theory, fault-tolerant systems and planar geometry.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149046</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Implication Problem for Functional and Inclusion Dependencies</title>
<link>https://hdl.handle.net/1721.1/149045</link>
<description>The Implication Problem for Functional and Inclusion Dependencies
Mitchell, John C.
There are two implication problems for functional dependencies and inclusion dependencies: general implication and finite implication. Given a set of dependencies ∑∪{σ}, the problems are to determine whether σ holds in all databases satisfying ∑ or all finite databases satisfying ∑. Contrary to the possibility suggested in [5], there is a natural, complete axiom system for general implication. However, a simple observation shows that both implication problems are recursively unsolvable. It follows that there is no recursively enumerable set of axioms for finite implication.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149045</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Randomized Encryption Techniques</title>
<link>https://hdl.handle.net/1721.1/149044</link>
<description>Randomized Encryption Techniques
Rivest, Ronald L.; Sherman, Alan T.
A randomized encryption procedure enciphers a message by randomly choosing a ciphertext from a set of ciphertexts corresponding to the message under the current encryption key. At the cost of increasing the required bandwidth, such procedures may achieve greater cryptographic security than their deterministic counterparts by increasing the apparent size of the message space, eliminating the threat of chosen plaintext attacks, and improving the a priori statistics for the inputs to the encryption algorithms. In this paper we explore various ways of using randomization in encryption.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149044</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Internet Remote Login on a Personal Computer</title>
<link>https://hdl.handle.net/1721.1/149043</link>
<description>Implementing Internet Remote Login on a Personal Computer
Konopelski, Louis J.
This thesis demonstrates that a desktop personal computer can support an efficient internet remote login implementation with the same protocols used by large mainframes. It describes a project in which the Telnet remote login protocol, along with the supporting Transmission Control Protocol and Internet Protocol, were implemented on an IBM Personal Computer. The utility of the implementation depended heavily on the software speed. Strategies discussed to ensure quick performance included tailoring protocols to their clients' needs, sharing the overhead of asynchronous actions, and sharing data. A natural order in which to process the protocol data was identified, and two control structures were presented that allowed the protocol modules to run in this order. One of the control structures used procedures and processes, while the other used procedures alone. A full scale protocol was successfully placed in the personal computer. With some foreign hosts, the implementation echoed characters in less than a quarter of a second, and processed a screenful of data in less than three seconds. The protocol software overhead was never the dominating performance bottleneck. The serial line interface limited the character echoing performance, while the speed with which the processor could operate its display limited the processing speed of large amounts of data. Memory size was not a significant constraint.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149043</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>PLY: A System of Plausibility Inference with a Probabilistic Basis</title>
<link>https://hdl.handle.net/1721.1/149042</link>
<description>PLY: A System of Plausibility Inference with a Probabilistic Basis
Yeh, Alexander
An overview is given of a system of plausibility inference that will be developed for use in planning. This system, to be called PLY, will be specifically designed to work with propositions of the form "when A is true (occurs), B is likely to be true (to occur)." Previous systems performing similar functions have been designed as aids for such tasks as medical diagnosis (MYCIN and others) and mineral prospecting (PROSPECTOR). PLY will have a probabilistic basis. Intuitive assumptions to deal with knowledge not explicitly given to the system will be made with the aid of an information-theoretic measure of the amount of information in a probability distribution. Unlike many other systems, PLY will not use these assumptions when the given knowledge indicates they are not tenable. In addition to standard probabilities, PLY will be able to make use of knowledge (information) in the form of correlations and increased/decreased likelihoods, which most people find easier to estimate than probabilities. PLY's knowledge will be in an organized and structured form, which will help in knowledge acquisition and revision, facilitate system explanations, and lower the storage requirements of the system.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149042</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Asymptotically Optimal Layout for the Shuffle-exchange Graph</title>
<link>https://hdl.handle.net/1721.1/149041</link>
<description>An Asymptotically Optimal Layout for the Shuffle-exchange Graph
Kleitman, Daniel; Leighton, Frank Thomson; Lepley, Margaret; Miller, Gary L.
The shuffle-exchange graph is one of the best structures known for parallel computation. Among other things, a shuffle-exchange computer can be used to compute discrete Fourier transforms, multiply matrices, evaluate polynomials, perform permutations and sort lists. The algorithms needed for these operations are quite simple and many require no more than logarithmic time and constant space per processor. In this paper, we describe an O(N^2/log^2 N)-area layout for the shuffle-exchange graph on a two-dimensional grid. The layout is the first known to achieve Thompson's asymptotic lower bound.
</description>
<pubDate>Fri, 01 Oct 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149041</guid>
<dc:date>1982-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embedding Cryptographic Trapdoors in Arbitrary Knapsack Systems</title>
<link>https://hdl.handle.net/1721.1/149040</link>
<description>Embedding Cryptographic Trapdoors in Arbitrary Knapsack Systems
Shamir, Adi
In this paper we show that after sufficiently many modular multiplications, any knapsack system becomes a trapdoor system that can be used in public-key cryptography.
</description>
<pubDate>Wed, 01 Sep 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149040</guid>
<dc:date>1982-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of Evaluating Relational Queries</title>
<link>https://hdl.handle.net/1721.1/149039</link>
<description>The Complexity of Evaluating Relational Queries
Cosmadakis, Stavros S.
We show that, given a relation R, a relational query φ involving only projection and join, and a conjectured result r, testing whether φ(R)=r is D^p-complete. Bounding the size of φ(R) from below (above) is NP-hard (co-NP-hard), and bounding it both ways is D^p-hard. Computing the size of φ(R) is #P-hard. We also show that, given two relations R1 and R2 and two queries φ1 and φ2 as above, testing whether φ1(R1)⊆φ2(R2) and testing whether φ1(R1)=φ2(R2) are both Π^p_2-complete, even when R1=R2 or when φ1=φ2.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149039</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two Remarks on the Power of Counting</title>
<link>https://hdl.handle.net/1721.1/149038</link>
<description>Two Remarks on the Power of Counting
Papadimitriou, Christos H.; Zachos, Stathis K.
The relationship between the polynomial hierarchy and Valiant's class #P is at present unknown. We show that some low portions of the polynomial hierarchy, namely deterministic polynomial algorithms using an NP oracle at most a logarithmic number of times, can be simulated by one #P computation. We also show that the class of problems solvable by polynomial-time nondeterministic Turing machines which accept whenever there is an odd number of accepting computations is idempotent, that is, closed under usage of oracles from the same class.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149038</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Lower Bound Techniques For VLSI</title>
<link>https://hdl.handle.net/1721.1/149037</link>
<description>New Lower Bound Techniques For VLSI
Leighton, Frank Thomson
In this paper, we use crossing number and wire area arguments to find lower bounds on the layout area and maximum edge length of a variety of new and computationally useful networks. In particular, we describe 1) an N-node planar graph which has layout area Θ(N log N) and maximum edge length Θ(N^1/2/log^1/2 N), 2) an N-node graph with an O(x^1/2)-separator which has layout area Θ(N log^2 N) and maximum edge length Θ(N^1/2 log N/log log N), and 3) an N-node graph with an O(x^(1-1/r))-separator which has maximum edge length Θ(N^(1-1/r)) for any r≥3.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149037</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hoare's Logic Is Not Complete When It Could Be</title>
<link>https://hdl.handle.net/1721.1/149036</link>
<description>Hoare's Logic Is Not Complete When It Could Be
Bergstra, J.; Chmielinska, A.; Tiuryn, J.
It is known (cf. [2]) that if the Hoare rules are complete for a first-order structure A, then the set of partial correctness assertions true over A is recursive in the first-order theory of A. We show that the converse is not true. Namely, there is a first-order structure C such that the set of partial correctness assertions true over C is recursive in the theory of C, but the Hoare rules are not complete for C.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149036</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundations for Office Semantics</title>
<link>https://hdl.handle.net/1721.1/149035</link>
<description>Foundations for Office Semantics
Barber, Gerald R.; Hewitt, Carl
In this paper we develop the semantics of work in the office in terms of the concepts of application structure and organizational structure of the office. Application structure is concerned with the rules and constraints of the domain of the office work, such as accounting, law, or social security regulations. Organizational structure is concerned with the informal and formal social relationships within the organization. Detailed knowledge of office application structures and organizational structures is necessary in order to understand how they interact and evolve. Problem solving is a pervasive activity within offices which is performed when office workers apply general knowledge about office procedures to the specific cases encountered in their daily work. We discuss how a description system (named OMEGA) can aid in the construction of interactive systems whose intent is to describe the application and organization structures. Using the knowledge about the office embedded within itself, OMEGA can help support office workers in their problem solving processes.
</description>
<pubDate>Thu, 01 Jul 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149035</guid>
<dc:date>1982-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Supporting Organizational Problem Solving with a Workstation</title>
<link>https://hdl.handle.net/1721.1/149034</link>
<description>Supporting Organizational Problem Solving with a Workstation
Barber, Gerald R.
This paper describes an approach to supporting work in the office. Using and extending ideas from the field of Artificial Intelligence (AI), we describe office work as a problem solving activity. A knowledge embedding language called Omega is used to embed knowledge of the organization into an office worker's workstation in order to support the office worker in his or her problem solving. A particular approach to reasoning about change and contradiction is discussed. This approach uses Omega's viewpoint mechanism, which is a general contradiction handling facility. Unlike other knowledge representation systems, when a contradiction is reached the reasons for the contradiction can be analyzed by the deduction mechanisms without having to resort to a backtracking mechanism. The viewpoint mechanism is the heart of the Problem Solving Support Paradigm, which supplements the classical AI view of problem solving. Office workers are supported using the Problem Solving Support Paradigm. An example is presented where Omega's facilities are used to support an office worker's problem solving activities. The example illustrates the use of viewpoints and of Omega's capabilities to reason about its own reasoning process.
</description>
<pubDate>Thu, 01 Jul 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149034</guid>
<dc:date>1982-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Principled Design for an Integrated Computational Environment</title>
<link>https://hdl.handle.net/1721.1/149033</link>
<description>A Principled Design for an Integrated Computational Environment
diSessa, Andrea A.
Boxer is a computer language designed to be the base of an integrated computational environment providing a broad array of functionality -- from text editing to programming -- for naïve and novice users. It stands in the line of Lisp-inspired languages (Lisp, Logo, Scheme), but differs from these in achieving much of its understandability from pervasive use of a spatial metaphor reinforced through suitable graphics. This paper describes a set of learnability and understandability issues first and then uses them to motivate design decisions made concerning Boxer and the environment in which it is embedded.
</description>
<pubDate>Thu, 01 Jul 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149033</guid>
<dc:date>1982-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Telex Gateway for the Internet</title>
<link>https://hdl.handle.net/1721.1/149032</link>
<description>A Telex Gateway for the Internet
Meier zu Sieker, Friedrich
The design of a gateway connecting one of the networks of the MIT Laboratory for Computer Science to the telex network is discussed. A description of the telex network is given. The relationship of the gateway to other resources of the network environment is considered to obtain directions for the implementation of new resources. The implementation of the gateway on the UNIX operating system is outlined.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149032</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layouts for the Shuffle-Exchange Graph Based on the Complex Plane Diagram</title>
<link>https://hdl.handle.net/1721.1/149031</link>
<description>Layouts for the Shuffle-Exchange Graph Based on the Complex Plane Diagram
Leighton, Frank Thomson; Lepley, Margaret; Miller, Gary L.
The shuffle-exchange graph is one of the best structures known for parallel computation. Among other things, a shuffle-exchange computer can be used to compute discrete Fourier transforms, multiply matrices, evaluate polynomials, perform permutations and sort lists. The algorithms needed for these operations are extremely simple and many require no more than logarithmic time and constant space per processor. In this paper, we analyze the algebraic structure of the shuffle-exchange graph in order to find area-efficient embeddings of the graph in a two-dimensional grid. The results are applicable to the design of Very Large Scale Integration (VLSI) circuit layouts for a shuffle-exchange computer.
</description>
<pubDate>Tue, 01 Jun 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149031</guid>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Circuit Analysis of Self-timed Elements for NMOS VLSI Systems</title>
<link>https://hdl.handle.net/1721.1/149030</link>
<description>Circuit Analysis of Self-timed Elements for NMOS VLSI Systems
Chu, Tam-Anh
Scaling of VLSI digital systems introduces new problems to the design of synchronous systems, due to the disproportionate increase in wire delays with the decrease in transistor sizes. On the other hand, the asynchronous self-timed design approach, which has been traditionally less attractive, offers a number of advantages for VLSI. Also, this approach can be directly incorporated into a structured design methodology for Packet Communication Architectures. This paper considers a practical self-timed design methodology and studies its implementation in nMOS. The C-element and the arbiter circuit, two main circuit components of self-timed systems, are analyzed to allow the evaluation of the design approach.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149030</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recursive Decomposition Ordering and Multiset Orderings</title>
<link>https://hdl.handle.net/1721.1/149029</link>
<description>Recursive Decomposition Ordering and Multiset Orderings
Jouannaud, Jean-Pierre; Lescanne, Pierre; Reinig, Fernand
The Recursive Decomposition Ordering, a simplification ordering on terms, is useful to prove termination of term rewriting systems. In the first part of this paper we give the definition of the decomposition ordering and prove that it is a well-founded simplification ordering containing Dershowitz's Recursive Path Ordering. We also show that the Recursive Decomposition Ordering has a very interesting incremental property. In the second part, we propose two well-founded orderings on multisets that extend the Dershowitz-Manna ordering. Unlike the Dershowitz-Manna ordering, ours do not have a natural monotonicity property. This lack of monotonicity suggests using monotonicity to provide a new characterization of the Dershowitz-Manna ordering. Section 5 proposes an efficient and correct implementation of that ordering.
</description>
<pubDate>Tue, 01 Jun 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149029</guid>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cooperative Office Work, Teleconferencing and Calendar Management: A Collection of Papers</title>
<link>https://hdl.handle.net/1721.1/149028</link>
<description>Cooperative Office Work, Teleconferencing and Calendar Management: A Collection of Papers
Greif, Irene
This technical memo consists of a collection of papers that have been presented at conferences. They all present results of research in the "Multi-person Informational Work" project in the Office Automation Group.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149028</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A File Transfer Program for a Personal Computer</title>
<link>https://hdl.handle.net/1721.1/149027</link>
<description>A File Transfer Program for a Personal Computer
Wright, Karl D.
This thesis explores engineering decisions involved in implementing a network file transfer program on a personal computer in response to criteria of low cost and reasonable efficiency. The issues include choice of hardware, design of the network, choice of implementation language, choice of communication protocols, and choice of software structure. A machine-level protocol is designed. A project incorporating these and other ideas is undertaken and the ideas thus evaluated. Insight is gleaned into the performance expected under varying operating system and interrupt environments. A notion of an "ideal" operating system interface for applications similar to file transfer (which can exploit concurrency) is developed. Finally, possible improvements on the actual project are suggested based in part on the efficiency data obtained.
</description>
<pubDate>Thu, 01 Apr 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149027</guid>
<dc:date>1982-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table</title>
<link>https://hdl.handle.net/1721.1/149026</link>
<description>Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table
Church, Kenneth; Patil, Ramesh
Sentences are far more ambiguous than one might have thought. There may be hundreds, perhaps thousands of syntactic parse trees for certain very natural sentences of English. This fact has been a major problem confronting natural language processing because it indicates that it may require a long time to construct a list of all the parse trees, and furthermore, it isn't clear what to do with the list once it has been constructed. The list may be so large that it is probably not the most convenient representation for communication with the semantic and pragmatic processing modules. In this paper we propose some methods for dealing with syntactic ambiguity in ways that take advantage of certain regularities among the alternative parse trees. These regularities will be expressed as linear combinations of ATN networks, and also as sums and products of formal power series. We will suggest some ways that a practical processor can take advantage of this modularity in order to deal more efficiently with combinatoric ambiguity. In particular, we will show how a processor can efficiently compute the ambiguity of an input sentence (or any portion thereof). Furthermore, we will show how to compile certain grammars into a form that can be processed more efficiently. In some cases, including the "every way ambiguous" grammars (e.g., conjunction, prepositional phrases, noun-noun modification), processing time will be reduced from O(n^3) to O(n). Finally, we will show how to uncompile certain highly optimized grammars into a form suitable for linguistic analysis.
</description>
<pubDate>Thu, 01 Apr 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149026</guid>
<dc:date>1982-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Synchronous Systems</title>
<link>https://hdl.handle.net/1721.1/149025</link>
<description>Optimizing Synchronous Systems
Leiserson, Charles E.; Saxe, James B.
The complexity of integrated-circuit chips produced today makes it feasible to build inexpensive, special-purpose subsystems that rapidly solve sophisticated problems on behalf of a general-purpose host computer. This paper contributes to the design methodology of efficient VLSI algorithms. We present a transformation that converts synchronous systems into more time-efficient, systolic implementations by removing combinational rippling. The problem of determining the optimized system can be reduced to the graph-theoretic single-destination-shortest-paths problem. More importantly from an engineering standpoint, however, the kinds of rippling that can be removed from a circuit at essentially no cost can be easily characterized. For example, if the only global communication in a system is broadcasting from the host computer, the broadcast can always be replaced by local communication.
</description>
<pubDate>Mon, 01 Mar 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149025</guid>
<dc:date>1982-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Termination Assertions for Recursive Programs: Completeness and Axiomatic Definability</title>
<link>https://hdl.handle.net/1721.1/149024</link>
<description>Termination Assertions for Recursive Programs: Completeness and Axiomatic Definability
Meyer, Albert R.; Mitchell, John C.
The termination assertion p&lt;S&gt;q means that whenever the formula p is true, there is an execution of the possibly nondeterministic program S which terminates in a state in which q is true. A recursive program S may declare and use local variables and nondeterministic recursive procedures with call-by-address and call-by-value parameters, in addition to accessing undeclared variables and global procedures. Assertions p and q about calls to global procedures are first-order formulas extended to express hypotheses about the termination of calls to undeclared global procedures. A complete, effective axiom system with axioms corresponding to the syntax of the programming language is given for the termination assertions valid over all interpretations. Termination assertions define the semantics of recursive programs in the following sense: if two programs have different input-output semantics, then there is a termination assertion that is valid for one program but not for the other. Thus the complete axiomatization of termination assertions constitutes an axiomatic definition of the semantics of recursive programs.
</description>
<pubDate>Mon, 01 Mar 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149024</guid>
<dc:date>1982-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimax Optimal Universal Codeword Sets</title>
<link>https://hdl.handle.net/1721.1/149023</link>
<description>Minimax Optimal Universal Codeword Sets
Elias, Peter
In an interactive multi-user data-processing system a user knows the probabilities of his messages and must encode them into a fixed system-wide variable-length codeword set. He needs to receive the answer to his last message before selecting the next, so his encoding is one-shot. To minimize average codeword length he encodes his messages in order of decreasing probability into codewords in order of increasing length. I give an algorithm which, for each of several measures of performance, finds the codeword set best by that measure for the worst user, and some of the minimax optimal codeword sets the algorithm has found. Some of the results hold for all user distributions; others require, e.g., that all users send exactly or at most m distinct messages, and/or that there is an integer k such that no user has a message of probability greater than 1/k.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149023</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on Equivalences Among Logics of Programs</title>
<link>https://hdl.handle.net/1721.1/149022</link>
<description>A Note on Equivalences Among Logics of Programs
Meyer, Albert R.; Tiuryn, Jerzy
Several different first-order formal logics of programs -- Algorithmic Logic, Dynamic Logic, and the Logic of Effective Definitions -- are compared and shown to be equivalent to a fragment of constructive Lω1ω. When programs are modelled as effective flowcharts, the logics of deterministic and nondeterministic programs are equivalent.
</description>
<pubDate>Tue, 01 Dec 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149022</guid>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Software for the "Roles" People Play</title>
<link>https://hdl.handle.net/1721.1/149021</link>
<description>Software for the "Roles" People Play
Greif, Irene
Office work consists largely of cooperative efforts by numbers of people. To support such work, applications programs can be designed as "multi-person" systems organized around notions of "roles" and "working relationships." A group of co-workers can then describe to the system their agreed upon roles in a project as well as the working relationships among those roles. Based on this description, application software can provide support for communications protocols and access control that is tailored to the working situation. As working relationships evolve, these descriptions can be modified so that the software will continue to meet the needs of the users. The paper presents an approach to office systems research emphasizing the development of software modules that can be used to build end-user application programs. The requirements that "multi-person" applications place on this software architecture are discussed in the context of a series of examples of multi-person activities, including joint document writing and calendar management.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149021</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Complexity and the Traveling Salesman Problem</title>
<link>https://hdl.handle.net/1721.1/149020</link>
<description>Computational Complexity and the Traveling Salesman Problem
Johnson, David; Papadimitriou, Christos
</description>
<pubDate>Tue, 01 Dec 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149020</guid>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Traveling Salesman Problem with Many Visits to Few Cities</title>
<link>https://hdl.handle.net/1721.1/149019</link>
<description>The Traveling Salesman Problem with Many Visits to Few Cities
Cosmadakis, Stavros S.; Papadimitriou, Christos H.
We study the version of the traveling salesman problem in which a relatively small number of cities -- say, six -- must be visited a huge number of times -- e.g., several hundred times each. (It costs to go from one city to itself.) We develop an algorithm for this problem whose running time is exponential in the number of cities, but logarithmic in the number of visits. Our algorithm is a practical approach to the problem for instances of size in the range indicated above. The implementation and analysis of our algorithm give rise to a number of interesting graph-theoretic and counting problems.
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149019</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Power Set Models of Lambda-Calculus: Theories, Expansions, Isomorphisms</title>
<link>https://hdl.handle.net/1721.1/149018</link>
<description>Power Set Models of Lambda-Calculus: Theories, Expansions, Isomorphisms
Longo, Giuseppe
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149018</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Placement for River Routing</title>
<link>https://hdl.handle.net/1721.1/149017</link>
<description>Optimal Placement for River Routing
Leiserson, Charles E.; Pinter, Ron Y.
Programs for integrated circuit layout typically have two phases: placement and routing. The router should produce as efficient a layout as possible, but of course the quality of the routing depends heavily on the quality of the placement. On the other hand, the placement procedure ideally should know the quality of a routing before it routes the wires. In this talk we present an optimal solution for a practical, common version of this placement and routing problem. River routing is the problem of connecting in order a set of terminals a1,...,an on a line to another set b1,...,bn across a rectangular channel. Since the terminals are located on modules, the modules must be placed relative to one another before routing. This placement problem arises frequently in design systems like bristle-blocks, where stretch lines through a module can effectively break it into several chunks, each of which must be placed separately. In this talk, we shall present concise necessary and sufficient conditions for wirability which are applied to reduce the optimal placement problem to the graph-theoretic single-source-longest-paths problem. By exploiting the special structure of graphs that arise from the placement problem for rectilinear wiring, an optimal solution may be determined in linear time.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149017</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Circuit-size Lower Bounds and Non-reducibility to Sparse Sets</title>
<link>https://hdl.handle.net/1721.1/149016</link>
<description>Circuit-size Lower Bounds and Non-reducibility to Sparse Sets
Kannan, Ravindran
As remarked in Cook (1980), we do not know any nonlinear lower bound on the circuit-size of a language in P or even in NP. The best known lower bound seems to be due to Paul (1975). In this paper we show first that for each nonnegative integer k, there is a language Lk in Σ2∩Π2 (of the Meyer and Stockmeyer (1972) hierarchy) which does not have O(n^k)-size circuits. Using the same techniques, one is able to prove several similar results. For example, we show that for each nonnegative integer k, there is a language Lk in NP that does not have O(n^k)-size uniform circuits. This follows as a corollary of a stronger result shown in the paper. Finally, we note that existence of "small circuits" is in suitable contexts equivalent to being reducible to sparse sets. Using this, we are able to prove for example that for any time-constructible super-polynomial function f(n), NTIME(f(n)) contains a language which is not many-to-one p-time reducible to any sparse set.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149016</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Expressive Power of Dynamic Logic, II</title>
<link>https://hdl.handle.net/1721.1/149015</link>
<description>On the Expressive Power of Dynamic Logic, II
Halpern, Joseph Y.
</description>
<pubDate>Sat, 01 Aug 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149015</guid>
<dc:date>1981-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maclisp Extensions</title>
<link>https://hdl.handle.net/1721.1/149014</link>
<description>Maclisp Extensions
Bawden, Alan; Burke, Glenn S.; Hoffman, Carl W.
This document describes a common subset of selected facilities available in Maclisp and its derivatives: PDP-10 and Multics Maclisp, Lisp Machine Lisp (Zetalisp), and NIL. The object of this document is to aid people in writing code which can run compatibly in more than one of these environments.
</description>
<pubDate>Wed, 01 Jul 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149014</guid>
<dc:date>1981-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Communication Ring Initialization Without Central Control</title>
<link>https://hdl.handle.net/1721.1/149013</link>
<description>Communication Ring Initialization Without Central Control
Saltzer, Jerome H.
This short memorandum describes a novel combination of three well-known techniques; the combination provides a systematic way of initializing a local-area ring network without previous, static designation of a distinguished station. The result is a distributed algorithm that dynamically designates a distinguished station from among a group of stations whose ability to communicate is hampered by the fact that the ring is not yet initialized. An appendix describes how this approach could be implemented as part of the 10 Megabit/second (version 2) ring network currently being installed at the MIT Laboratory for Computer Science.
</description>
<pubDate>Tue, 01 Dec 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149013</guid>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>What is a Model of the Lambda Calculus? Expanded Version</title>
<link>https://hdl.handle.net/1721.1/149012</link>
<description>What is a Model of the Lambda Calculus? Expanded Version
Meyer, Albert R.
An elementary, purely algebraic definition of model for the untyped lambda calculus is given. This definition is shown to be equivalent to the natural semantic definition based on environments. These definitions of model are consistent with, and yield a completeness theorem for, the standard axioms for lambda convertibility. A simple construction of models for lambda calculus is reviewed. The algebraic formulation clarifies the relation between combinators and lambda terms.
</description>
<pubDate>Wed, 01 Jul 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149012</guid>
<dc:date>1981-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>LSB Manual</title>
<link>https://hdl.handle.net/1721.1/149011</link>
<description>LSB Manual
Burke, Glenn
LSB (for Layered System Building) is an integrated set of facilities for aiding in the construction of highly-modular, multi-layered, implementation-independent Lisp systems. It provides for conditional inclusion of source text, documentation production, automated declarations, and "high-level" definitions. Lisp code compiled with LSB in general does not require LSB in its run-time environment. LSB has been in use for some time in PDP-10 Maclisp, is operational in Multics Maclisp and Lisp Machine Lisp, and is being developed for NIL.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149011</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of the Word Problems for Commutative Semigroups and Polynomial Ideals</title>
<link>https://hdl.handle.net/1721.1/149010</link>
<description>The Complexity of the Word Problems for Commutative Semigroups and Polynomial Ideals
Mayr, Ernst W.; Meyer, Albert R.
Any decision procedure for the word problems for commutative semigroups and polynomial ideals inherently requires computational storage space growing exponentially with the size of the problem instance to which the procedure is applied. This bound is achieved by a simple procedure for the semigroup problem.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149010</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Propositional Dynamic Logic of Deterministic, Well-Structured Programs</title>
<link>https://hdl.handle.net/1721.1/149009</link>
<description>The Propositional Dynamic Logic of Deterministic, Well-Structured Programs
Halpern, Joseph Y.; Reif, John H.
We consider a restricted propositional dynamic logic, Strict Deterministic Propositional Dynamic Logic (SDPDL), which is appropriate for reasoning about deterministic well-structured programs. In contrast to PDL, for which the validity problem is known to be complete in deterministic exponential time, the validity problem for SDPDL is shown to be polynomial space complete. We also show that SDPDL is less expressive than PDL. The results rely on structure theorems for models of satisfiable SDPDL formulas, and the proofs give insight into the effects of nondeterminism on intractability and expressiveness in program logics.
</description>
<pubDate>Sun, 01 Mar 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149009</guid>
<dc:date>1981-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conservative Logic</title>
<link>https://hdl.handle.net/1721.1/149008</link>
<description>Conservative Logic
Fredkin, Edward; Toffoli, Tommaso
Conservative logic is a comprehensive model of computation which explicitly reflects a number of fundamental principles of physics, such as the reversibility of the dynamical laws and the conservation of certain additive quantities (among which energy plays a distinguished role). Because it more closely mirrors physics than traditional models of computation, conservative logic is in a better position to provide indications concerning the realization of high-performance computing systems, i.e., of systems that make very efficient use of the "computing resources" actually offered by nature. In particular, conservative logic shows that it is ideally possible to build sequential circuits with zero internal power dissipation. After establishing a general framework, we discuss two specific models of computation. The first uses binary variables and is the conservative-logic counterpart of switching theory; this model proves that universal computing capabilities are compatible with the reversibility and conservation constraints. The second model, which is a refinement of the first, constitutes a substantial breakthrough in establishing a correspondence between computation and physics. In fact, this model is based on elastic collisions of identical "balls," and thus is formally identical with the atomic model that underlies the (classical) kinetic theory of perfect gases. Quite literally, the functional behavior of a general-purpose digital computer can be reproduced by a perfect gas placed in a suitably shaped container and given appropriate initial conditions.
</description>
<pubDate>Fri, 01 May 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149008</guid>
<dc:date>1981-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Concentration and Connection Networks</title>
<link>https://hdl.handle.net/1721.1/149007</link>
<description>On Concentration and Connection Networks
Bhatt, Sandeep Nautam
This thesis deals with the structural complexity of switching networks which realize concentration and connection requests when operated in a rearrangeable or incremental manner. Some of the important results and constructions are briefly reviewed. On the basis of non-constructive proof techniques used to obtain linear upper bounds on the complexity of rearrangeable concentrators, it is shown not only that certain random graphs are very likely to be rearrangeably non-blocking concentrators, but that if a randomly constructed graph is not non-blocking, then, on the average, only a constant number of edges need be added to the graph to make it non-blocking. Although the problem of recognizing non-blocking networks appears to be a computationally hard problem, the extra edges may be added to the graph efficiently, during operation of the network. Finally, we obtain a constructive as well as an improved non-constructive upper bound on the complexity of incrementally non-blocking connection networks.
</description>
<pubDate>Sun, 01 Mar 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149007</guid>
<dc:date>1981-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Record of the Workshop on Research in Office Semantics</title>
<link>https://hdl.handle.net/1721.1/149006</link>
<description>Record of the Workshop on Research in Office Semantics
Barber, Gerald R.
This paper is a compendium of the ideas and issues presented at the Chatham Bars Workshop on Office Semantics. The intent of the workshop was to examine the state of the art in office systems and to elucidate the issues system designers were concerned with in developing next-generation office systems. The workshop involved a cross-section of people from government, industry and academia. Presentations in the form of talks and video tapes were made of prototypical systems.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149006</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recursion Theoretic Operators and Morphisms on Numbered Sets</title>
<link>https://hdl.handle.net/1721.1/149005</link>
<description>Recursion Theoretic Operators and Morphisms on Numbered Sets
Barendregt, Henk; Longo, Giuseppe
An operator is a map Φ: Pω-&gt;Pω. By embedding Pω in two natural ways into the λ-calculus model Pω^2 (and T^ω) the computable maps on this latter structure induce several classes of recursion theoretic operators.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149005</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algebraic Dependencies</title>
<link>https://hdl.handle.net/1721.1/149004</link>
<description>Algebraic Dependencies
Yannakakis, Mihalis; Papadimitriou, Christos H.
We propose a new kind of data dependencies called algebraic dependencies, which generalize all previous known kinds. We give a complete axiomatization of algebraic dependencies in terms of simple algebraic rewriting rules. In the process we characterize exactly the expressive power of tableaux, thus solving an open problem of Aho, Sagiv and Ullman; we show that it is NP-complete to tell whether a tableau is realizable by an expression; and we give an interesting dual interpretation of the chase procedure. We also show that algebraic dependencies over a language augmented to contain union and set difference can express arbitrary domain-independent predicates of finite index over finite relations. The class of embedded implicational dependencies recently - and independently - introduced by Fagin is shown to coincide with our algebraic dependencies. Based on this, we give a simple proof of Fagin's Armstrong relation theorem.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149004</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Deducibility Problem in Propositional Dynamic Logic</title>
<link>https://hdl.handle.net/1721.1/149003</link>
<description>The Deducibility Problem in Propositional Dynamic Logic
Meyer, Albert R.; Streett, Robert S.; Mirkowska, Grazina
The problem of whether an arbitrary formula of Propositional Dynamic Logic (PDL) is deducible from a fixed axiom scheme of PDL is Π¹₁-complete. This contrasts with the decidability of the problem when the axiom scheme is replaced by any single PDL formula.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149003</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Propositional Dynamic Logics of Programs: A Survey</title>
<link>https://hdl.handle.net/1721.1/149002</link>
<description>Propositional Dynamic Logics of Programs: A Survey
Parikh, Rohit
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149002</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deterministic Propositional Dynamic Logic: Finite Models, Complexity, and Completeness</title>
<link>https://hdl.handle.net/1721.1/149001</link>
<description>Deterministic Propositional Dynamic Logic: Finite Models, Complexity, and Completeness
Ben-Ari, Mordechai; Halpern, Joseph Y.; Pnueli, Amir
Let p be a formula in deterministic propositional dynamic logic. A decision procedure for the satisfiability of p is given, along with a construction of a finite model for every satisfiable p. The decision procedure runs in deterministic time 2^cn and the size of the model is bounded by n^2 * 4^n, where n is the length of p. Finally, a complete axiomatization for deterministic propositional dynamic logic is given, based on the Segerberg axioms for propositional dynamic logic.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149001</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Persistence of Vector Replacement Systems is Decidable</title>
<link>https://hdl.handle.net/1721.1/149000</link>
<description>Persistence of Vector Replacement Systems is Decidable
Mayr, Ernst
In a persistent vector replacement system (VRS) or Petri net, an enabled transition can become disabled only by firing itself. Here, an algorithm is presented which allows one to decide whether an arbitrary VRS is persistent or not and, if so, to construct a semilinear representation of the set of states reachable in the system.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/149000</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Effective Representation of the Reachability Set of Persistent Petri Nets</title>
<link>https://hdl.handle.net/1721.1/148999</link>
<description>An Effective Representation of the Reachability Set of Persistent Petri Nets
Mayr, Ernst
In a persistent Petri net, an enabled transition can become disabled only by firing itself. Here, an algorithm is presented which constructs a semilinear representation of the set of states reachable in an arbitrary persistent Petri net.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148999</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ω(n log n) Lower Bounds on Length of Boolean Formulas</title>
<link>https://hdl.handle.net/1721.1/148998</link>
<description>Ω(n log n) Lower Bounds on Length of Boolean Formulas
Fischer, Michael J.; Meyer, Albert R.; Paterson, Michael S.
A property of Boolean functions of n variables is described and shown to imply lower bounds as large as Ω(n log n) on the number of literals in any Boolean formula for any function with the property. Formulas over the full basis of binary operations (∧, ⊕, etc.) are considered. The lower bounds apply to all but a vanishing fraction of symmetric functions, in particular to all threshold functions with sufficiently large threshold and to the "congruent to zero modulo k" function for k&gt;2. In the case k = 4 the bound is optimal.
</description>
<pubDate>Sat, 01 Nov 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148998</guid>
<dc:date>1980-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>BRAND X Manual</title>
<link>https://hdl.handle.net/1721.1/148997</link>
<description>BRAND X Manual
Szolovits, Peter; Martin, William A.
BRAND X is a simple representation language implemented as a pure extension of LISP. BRAND X provides the following additional facilities over LISP: Unique and canonical structures, property lists for all objects, labels for all objects, and a syntax to express each of these, supported by a reader and printer. BRAND X is intended as an "assembly language" for representation languages, attempting to provide facilities generally found useful in the simplest manner, without any strong commitment to specific representational conventions.
</description>
<pubDate>Sat, 01 Nov 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148997</guid>
<dc:date>1980-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimality Theory of Concurrency Control for Databases</title>
<link>https://hdl.handle.net/1721.1/148996</link>
<description>An Optimality Theory of Concurrency Control for Databases
Kung, Hsing-Tsung; Papadimitriou, Christos H.
A concurrency control mechanism (or a scheduler) is the component of a database system that safeguards the consistency of the database in the presence of interleaved accesses and update requests. We formally show that the performance of a scheduler, i.e. the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler. We point out that most previous work on concurrency control is simply concerned with specific points of this basic trade-off between performance and information. In fact, several of these approaches are shown to be optimal for the amount of information that they use.
</description>
<pubDate>Sat, 01 Nov 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148996</guid>
<dc:date>1980-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some New Methods of Music Synthesis</title>
<link>https://hdl.handle.net/1721.1/148995</link>
<description>Some New Methods of Music Synthesis
Paseman, William Gerhard
There are two distinct sections to this thesis. The first section discusses music composition, shows why it is a useful domain for Artificial Intelligence research, and presents a set of "Design Rules" that facilitate research in the field of tonal music composition. It begins with a short chapter presenting a subset of music theory. This chapter assumes no prior knowledge of the subject, completely defines all terms used in the thesis, and is geared particularly toward those unfamiliar with music, those unwilling to learn standard music notation, and those interested in Artificial Intelligence research. Next, using the terms defined in the thesis, a context-sensitive generative grammar for producing pitch progressions in the major mode is introduced. It is seen that the grammar can be made context free by switching between two interpretations of the input string. A mechanism for switching from one interpretation to another when parsing sentences generated from this grammar is described. It is shown that a model of music composition, perception, and improvisation fits within the framework of the grammar. This multiple-view model and switching mechanism can be interpreted as a primitive "frame." The second section describes some of the problems and issues encountered while designing the initial hardware for the Music Aided Cognition Project at MIT. All of the developed hardware permits computer control, performance and recording of music in real time. The first chapter in this section discusses a machine called the Inexpensive Synthesizer/Recorder. It is capable of synthesizing 14 square wave voices, each voice having a range of 7 octaves, with each octave having 12 bits of frequency control. Its purpose is to allow the user to record key depression times, key release times and key impact velocities when playing a keyboard piece. Its primary constraint was low cost, allowing many copies to be made. Its microprocessor interface allows it to be easily controlled by many different means, including home computers. The complete schematics for the synthesizer and the controller are provided as an appendix. The next chapter discusses an oscillator which synthesizes sound using 32 sine or 8 FM waveforms. The machine can be easily expanded to produce 256 sine voices and 64 (or more) FM voices. All sine waveforms in both types of synthesis are weighted with two independent coefficients. Microprogrammable firmware allows one to produce sound by a limited number of methods other than sine summation or FM synthesis.
</description>
<pubDate>Fri, 01 Aug 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148995</guid>
<dc:date>1980-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Programs for Research in Gravitation and Differential Geometry</title>
<link>https://hdl.handle.net/1721.1/148994</link>
<description>Computer Programs for Research in Gravitation and Differential Geometry
Pavelle, Richard; Wester, Michael
This report contains a description of all current functions and features (with many examples) of the programs CTENSR and ITENSR which are available with MACSYMA. CTENSR is a standard Component TENSoR manipulation system, which means that geometrical tensor objects are represented as arrays or matrices. Tensor operations such as contraction or covariant differentiation are carried out by actually summing over repeated (dummy) indices with DO statements. ITENSR is a unique Indicial TENSoR manipulation system which is implemented by representing tensors as functions of their covariant, contravariant and derivative indices. Tensor operations such as contraction or covariant differentiation are performed by manipulating the indices themselves rather than the components to which they correspond. The programs are connected in the sense that one can obtain an expression in ITENSR and have the corresponding expression generated in the CTENSR format automatically.
</description>
<pubDate>Sun, 01 Jun 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148994</guid>
<dc:date>1980-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the Workshop on Self-timed Systems</title>
<link>https://hdl.handle.net/1721.1/148993</link>
<description>Report on the Workshop on Self-timed Systems
Bryant, Randal E.
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148993</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory and Practice of Text Editors or A Cookbook for an Emacs</title>
<link>https://hdl.handle.net/1721.1/148992</link>
<description>Theory and Practice of Text Editors or A Cookbook for an Emacs
Finseth, Craig A.
A comprehensive summary of the available technology for implementing text editors. It is written to be a guide for the implementor of a text editor. It does not provide a finished, polished algorithm for any part of a text editor. Rather, it provides a breakdown of the problems involved and discusses the pitfalls and the available tradeoffs to be considered when designing a text editor. Specific reference is made to the relevant tradeoffs for an Emacs-type editor, a character-oriented, extensible display editor.
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148992</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Cryptographic Security of Compact Knapsacks (Preliminary Report)</title>
<link>https://hdl.handle.net/1721.1/148991</link>
<description>The Cryptographic Security of Compact Knapsacks (Preliminary Report)
Shamir, Adi
In 1978, Merkle and Hellman introduced a knapsack-based public-key cryptosystem, which received widespread attention. The two major open problems concerning this cryptosystem are: (i) Security: How difficult are the Merkle-Hellman knapsacks? (ii) Efficiency: Can the huge key size be reduced? In this paper we analyze the cryptographic security of knapsack problems with small keys, develop a new (non-enumerative) type of algorithm for solving them, and use the algorithm to show that under certain assumptions it is as difficult to find the hidden trapdoors in Merkle-Hellman knapsacks as it is to solve general knapsack problems.
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148991</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Axiomatic Definitions of Programming Languages: A Theoretical Assessment</title>
<link>https://hdl.handle.net/1721.1/148990</link>
<description>Axiomatic Definitions of Programming Languages: A Theoretical Assessment
Meyer, Albert R.; Halpern, Joseph Y.
A precise definition is given of how partial correctness or termination assertions serve to define the semantics of classes of program schemes. Assertions involving only formulas of first order predicate calculus are proved capable of defining program scheme semantics, and effective axiom systems for deriving such assertions are described. Such axiomatic definitions are possible despite the limited expressive power of predicate calculus.
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148990</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Manager for Named, Permanent Objects</title>
<link>https://hdl.handle.net/1721.1/148989</link>
<description>A Manager for Named, Permanent Objects
Marcus, Alan Michael
Storing data in a computing system for a long time has been of interest ever since it was possible to do so. Classically, one stores bit- or byte-strings, or perhaps arrays of "records." Yet current programming philosophy stresses data abstraction techniques and concepts. This report describes an object-oriented filing system which stores abstract objects, and allows the user to view the system as though one were storing abstract objects, rather than storing some external representation of the abstractions. Names may be attached to the (permanent) objects, and objects may be contained in (and may contain) other objects. Furthermore, an object may be contained in more than one object, thereby allowing the naming structure to be a network.
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148989</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Critical Path Scheduling of Task Systems with Resource and Processor Constraints</title>
<link>https://hdl.handle.net/1721.1/148988</link>
<description>Critical Path Scheduling of Task Systems with Resource and Processor Constraints
Lloyd, Errol Lynn
Several papers over the past few years have investigated minimum execution time scheduling of unit execution time (UET) task systems with resources. Because such scheduling problems are, in general, NP-hard, a variety of heuristic methods for producing schedules have been studied, among them critical path scheduling. The strongest results to date have been for systems where there is no processor constraint. These results may be utilized for systems with a processor constraint by treating the processors as an additional resource. Unfortunately, in those cases where the number of processors is close to the number of resources, this results in an upper bound which is somewhat misleading. In this paper we investigate the performance of critical path scheduling for UET task systems with resources and a fixed number of processors. An upper bound for the worst case performance of critical path scheduling is given. This bound depends both on the number of processors and on the number of different resources. Moreover, we show that this is the best possible (asymptotic) upper bound.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148988</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Computational Complexity of Cardinality Constraints in Relational Databases</title>
<link>https://hdl.handle.net/1721.1/148987</link>
<description>On the Computational Complexity of Cardinality Constraints in Relational Databases
Kanellakis, Paris C.
We show that the problem of determining whether or not a lossless join property holds for a database, in the presence of key dependencies and cardinality constraints on the domains of the attributes, is NP-complete.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148987</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Algebras and the Nature of Induction</title>
<link>https://hdl.handle.net/1721.1/148986</link>
<description>Dynamic Algebras and the Nature of Induction
Pratt, Vaughan R.
Dynamic algebras constitute the variety (equationally defined class) of models of the Segerberg axioms for propositional dynamic logic. We obtain the following results (to within inseparability). (i) In any dynamic algebra * is reflexive transitive closure. (ii) Every free dynamic algebra can be factored into finite dynamic algebras. (iii) Every finite dynamic algebra is isomorphic to a Kripke structure. (ii) and (iii) imply Parikh's completeness theorem for the Segerberg axioms. We also present an approach to treating the inductive aspect of recursion within dynamic algebras.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148986</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semaphore Primitives and Starvation-free Mutual Exclusion</title>
<link>https://hdl.handle.net/1721.1/148985</link>
<description>Semaphore Primitives and Starvation-free Mutual Exclusion
Stark, Eugene William
Most discussions of semaphore primitives in the literature provide only an informal description of their behavior, rather than a more precise definition. These informal descriptions may be incorrect, incomplete, or subject to misinterpretation. As a result, the literature actually contains several different definitions of the semaphore primitives. The differences are important, since the particular choice of definition can affect whether a solution to the mutual exclusion problem using semaphore primitives allows the possibility of process starvation. This thesis attempts to alleviate some of the confusion by giving precise definitions of two varieties of semaphore primitives, here called weak and blocked-set primitives. It is then shown that under certain natural conditions, although it is possible to implement starvation-free mutual exclusion with blocked-set semaphores, it is not possible to do so with weak semaphores. Thus weak semaphores are strictly less "powerful" than blocked-set semaphores.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148985</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Expressive Power of Dynamic Logic</title>
<link>https://hdl.handle.net/1721.1/148984</link>
<description>On the Expressive Power of Dynamic Logic
Meyer, Albert R.; Winklmann, Karl
We show that "looping" of while-programs can be expressed in Regular First Order Dynamic Logic, disproving a conjecture made by Harel and Pratt. In addition we show that the expressive power of quantifier-free Dynamic Logic increases when nondeterminism is introduced in the programs that are part of formulae of Dynamic Logic. Allowing assignments of random values to variables also increases expressive power.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148984</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Definability in Dynamic Logic</title>
<link>https://hdl.handle.net/1721.1/148983</link>
<description>Definability in Dynamic Logic
Meyer, Albert R.; Parikh, Rohit
We study the expressive power of various versions of Dynamic Logic and compare them with each other as well as with standard languages in the logical literature. One version of Dynamic Logic is equivalent to the infinitary logic L^{CK}_{ω₁ω}, but regular Dynamic Logic is strictly less expressive. In particular, the ordinals ω^ω and ω^ω*2 are indistinguishable by formulas of regular Dynamic Logic.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148983</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Covering Graphs by Simple Circuits</title>
<link>https://hdl.handle.net/1721.1/148982</link>
<description>Covering Graphs by Simple Circuits
Itai, Alon; Lipton, Richard J.; Papadimitriou, Christos H.; Rodeh, M.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148982</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Linear Characterizations of Combinatorial Optimization Problems</title>
<link>https://hdl.handle.net/1721.1/148981</link>
<description>On Linear Characterizations of Combinatorial Optimization Problems
Karp, Richard M.; Papadimitriou, Christos H.
We show that there can be no computationally tractable description by linear inequalities of the polyhedron associated with any NP-complete combinatorial optimization problem unless NP=co-NP -- a very unlikely event. We also use the recent result by Khachiyan to present even stronger evidence that NP-complete combinatorial optimization problems cannot have efficient generators of violated inequalities.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148981</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Worst-case and Probabilistic Analysis of a Geometric Location Problem</title>
<link>https://hdl.handle.net/1721.1/148980</link>
<description>Worst-case and Probabilistic Analysis of a Geometric Location Problem
Papadimitriou, Christos H.
We consider the problem of choosing K "medians" among n points on the Euclidean plane such that the sum of the distances from each of the n points to its closest median is minimized. We show that this problem is NP-complete. We also present two heuristics that produce arbitrarily good solutions with probability going to 1. One is a partition heuristic, and works when K grows linearly -- or almost so -- with n. The other is the "honeycomb" heuristic, and is applicable to rates of growth of K of the form K ~ n^Є, 0&lt;Є&lt;1.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148980</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Complexity of Integer Programming</title>
<link>https://hdl.handle.net/1721.1/148979</link>
<description>On the Complexity of Integer Programming
Papadimitriou, Christos H.
We give a simple proof that integer programming is in NP. Our proof also establishes that there is a pseudopolynomial time algorithm for integer programming with any (fixed) number of constraints.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148979</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reversible Computing</title>
<link>https://hdl.handle.net/1721.1/148978</link>
<description>Reversible Computing
Toffoli, Tommaso
The theory of reversible computing is based on invertible primitives and composition rules that preserve invertibility. With these constraints, one can still satisfactorily deal with both functional and structural aspects of computing processes; at the same time, one attains a closer correspondence between the behavior of abstract computing systems and the microscopic physical laws (which are presumed to be strictly reversible) that underlie any concrete implementation of such systems. Here, we integrate into a comprehensive picture a variety of concepts and results. According to a physical interpretation, the central result of this paper is that it is ideally possible to build sequential circuits with zero internal power dissipation. Even when these circuits are interfaced with conventional ones, power dissipation at the interface would be at most proportional to the number of input/output lines, rather than to the number of logic gates as in conventional computers.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148978</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ten Thousand and One Logics of Programming</title>
<link>https://hdl.handle.net/1721.1/148977</link>
<description>Ten Thousand and One Logics of Programming
Meyer, Albert R.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148977</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Algorithm for Determining the Length of the Longest Dead Path in a "LIFO" Branch-and-bound Exploration Schema</title>
<link>https://hdl.handle.net/1721.1/148976</link>
<description>An Efficient Algorithm for Determining the Length of the Longest Dead Path in a "LIFO" Branch-and-bound Exploration Schema
Pallottino, Stefano; Toffoli, Tommaso
The length of the longest dead path (LLDP) is a widely used parameter in estimating the efficiency of branch-and-bound optimization algorithms that employ the LIFO exploration schema. Thanks to two original theorems, we are able to present a particularly attractive procedure for determining the LLDP. In fact, this procedure requires a number of storage variables which is independent of problem size and very small; moreover, the procedure is self-contained in the sense that it can be externally attached to any LIFO branch-and-bound program without interfering with its algorithms and data structures.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148976</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Space-Bounded Simulation of Multitape Turing Machines</title>
<link>https://hdl.handle.net/1721.1/148975</link>
<description>Space-Bounded Simulation of Multitape Turing Machines
Adleman, Leonard M.; Loui, Michael C.
A new proof of a theorem of Hopcroft, Paul, and Valiant is presented: every deterministic multitape Turing machine of time complexity T(n) can be simulated by a deterministic Turing machine of space complexity T(n)/log T(n). The proof includes an overlap argument.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148975</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A T=O(2^(n/2)), S=O(2^(n/4)) Algorithm for Certain NP-Complete Problems</title>
<link>https://hdl.handle.net/1721.1/148974</link>
<description>A T=O(2^(n/2)), S=O(2^(n/4)) Algorithm for Certain NP-Complete Problems
Schroeppel, Richard; Shamir, Adi
In this paper we develop a general purpose algorithm that can solve a number of NP-complete problems in time T=O(2^(n/2)) and space S=O(2^(n/4)). The algorithm can be generalized to a family of algorithms whose time and space complexities are related by T*S^2=O(2^n). The problems it can handle are characterized by a few decomposition axioms, and they include knapsack problems, exact satisfiability problems, set covering problems, etc. The new algorithm has considerable cryptanalytic significance, since it can break knapsack-based cryptosystems with up to n=100 generators.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148974</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Machine Language Instruction Set for a Data Flow Processor</title>
<link>https://hdl.handle.net/1721.1/148973</link>
<description>A Machine Language Instruction Set for a Data Flow Processor
Aoki, Donald J.
A data flow processor is a computer in which instructions are data driven and enabled for execution by the arrival of their operands. Data flow processors execute data flow programs, normally represented as program graphs, which represent the data dependencies between operations. This thesis presents a machine language instruction set for a Form 1 data flow machine based on the Dennis-Misunas design.
</description>
<pubDate>Sat, 01 Dec 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148973</guid>
<dc:date>1979-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Space Bound for One-tape Multidimensional Turing Machines</title>
<link>https://hdl.handle.net/1721.1/148972</link>
<description>A Space Bound for One-tape Multidimensional Turing Machines
Loui, Michael C.
Let L be a language recognized by a nondeterministic Turing machine with one d-dimensional worktape of time complexity T(n). Then L can be recognized by a deterministic Turing machine of space complexity (T(n) log T(n))^(d/(d+1)). The proof employs a generalized crossing sequence argument.
</description>
<pubDate>Thu, 01 Nov 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148972</guid>
<dc:date>1979-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent and Reliable Updates of Distributed Databases</title>
<link>https://hdl.handle.net/1721.1/148971</link>
<description>Concurrent and Reliable Updates of Distributed Databases
Takagi, Akihiro
A concurrent execution of transactions and various failures occurring during transaction processing in a distributed database system can lead to an inconsistent database state. In order to prevent such inconsistency from occurring, 1) the schedule of transactions must be equivalent to some serial schedule and 2) each transaction must be either completed or backed out. This paper develops a set of schemes that satisfy these requirements and still realize highly concurrent execution of transactions. This paper also shows how to incorporate these schemes into a multi-level distributed database system where there exists a hierarchy of transactions. Detailed algorithms for concurrent and reliable updates of distributed databases based on the proposed schemes are included in the appendix.
</description>
<pubDate>Thu, 01 Nov 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148971</guid>
<dc:date>1979-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Intermediate Form for Data Flow Programs</title>
<link>https://hdl.handle.net/1721.1/148970</link>
<description>An Intermediate Form for Data Flow Programs
Leth, James William
A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel (that is, a sequential ordering exists only between operations for which the result of one operation is an operand of the other). This thesis presents a form of computer representation of data flow programs (based on data flow graphs) that can serve as an intermediate form in the translation of source language code into machine code for a data flow computer. The proposed intermediate representation is implemented in the structured programming language CLU, and is designed to allow analysis and transformation of programs (for optimization purposes) to be performed either automatically or with programmer interaction.
</description>
<pubDate>Thu, 01 Nov 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148970</guid>
<dc:date>1979-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Data Bases with Incomplete Information</title>
<link>https://hdl.handle.net/1721.1/148969</link>
<description>On Data Bases with Incomplete Information
Lipski, Witold, Jr.
Semantic and logical problems arising in an incomplete information data base are investigated. A simple query language is described and its semantics is defined, which refers the queries to the information about reality contained in a data base, rather than to reality itself. This approach, called the internal interpretation, is shown to lead in a natural way to the notions of a topological Boolean algebra and a modal logic related to S4, in the same way as referring queries directly to reality (external interpretation) leads to Boolean algebras and classical logic. An axiom system is given for equivalent (with respect to the internal interpretation) transformation of queries, which is then exploited as a basic tool in a method for computing the internal interpretation for a broad class of queries. An interesting special case of the problem of determining the internal interpretation amounts to deciding whether an assertion about reality (a "yes-no" query) is consistent with the incomplete information about reality contained in a data base. We give a solution to this problem, which relies on the classical combinatorial problem of distinct representatives of subsets.
</description>
<pubDate>Mon, 01 Oct 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148969</guid>
<dc:date>1979-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Database Management System Architecture</title>
<link>https://hdl.handle.net/1721.1/148968</link>
<description>On Database Management System Architecture
Hammer, Michael; McLeod, Dennis
Despite the many advances that have been made in the field of database management in the last two decades, in many respects the paradigm of database management has not changed much since its inception. Several long-standing assumptions pervade the field and exert a great influence on the architecture of database management systems, their functions, and the kinds of databases that they manage. This paper reconsiders some of these assumptions and suggests certain alternatives to them. In particular, it is argued that the concept of an integrated database ought to be supplanted by that of a federated database, a loose assembly of semi-independent components; the position of the database management system in the context of a total information system is reexamined, and arguments are made for extending its functional capabilities; and controlled logical redundancy in the schema is introduced as a means of improving the usability of a database and of enhancing its life-cycle performance. An underlying theme throughout is that of the importance of a semantic schema of the database, which specifies enough of the meaning of the application domain to enable enhanced functionality to be achieved. A number of characteristics of a conceptual data model (in which this scheme would be expressed) are described.
</description>
<pubDate>Mon, 01 Oct 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148968</guid>
<dc:date>1979-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence and Clinical Problem Solving</title>
<link>https://hdl.handle.net/1721.1/148967</link>
<description>Artificial Intelligence and Clinical Problem Solving
Szolovits, Peter
An ambitious, but intriguing, possibility for radically increasing the availability and adequacy of health care, while containing cost, is to use the computer as a consultant to augment and extend the skills of all health care providers. We propose to pursue a program of fundamental research in representation of knowledge, decision-making, problem-solving, program explanation and clinical cognition, to understand how to construct computer programs that, as an integral part of the health care system, can amplify the knowledge and reasoning powers of medical decision makers. We plan to apply the techniques so developed to problems in the diagnosis and therapy of acid/base and electrolyte disturbances, diagnosis of birth defects using an existing data base of diseases and associated manifestations, the development of multi-modal cancer therapy protocols, and the application of the methods of decision analysis to produce general tools for physicians to use in analysing difficult clinical cases.
</description>
<pubDate>Sat, 01 Sep 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148967</guid>
<dc:date>1979-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Roles, Co-descriptors, and the Formal Representation of Quantified English Expressions</title>
<link>https://hdl.handle.net/1721.1/148966</link>
<description>Roles, Co-descriptors, and the Formal Representation of Quantified English Expressions
Martin, William A.
</description>
<pubDate>Sat, 01 Sep 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148966</guid>
<dc:date>1979-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Algebras: Examples, Constructions, Application</title>
<link>https://hdl.handle.net/1721.1/148965</link>
<description>Dynamic Algebras: Examples, Constructions, Application
Pratt, Vaughan R.
Dynamic algebras combine the classes of Boolean (B, ∨, ′, 0) and regular (R, ∪, ;, *) algebras into a single finitely axiomatized variety (B, R, ♦) resembling an R-module with "scalar" multiplication ♦. The basic result is that * is reflexive transitive closure, contrary to the intuition that this concept should require quantifiers for its definition. Using this result we give several examples of dynamic algebras arising naturally in connection with additive functions, binary relations, state trajectories, languages, and flowcharts. The main result is that free separable dynamic algebras are residually separable-and-finite, important because finite separable dynamic algebras are isomorphic to Kripke structures. Applications include a new completeness proof for the Segerberg axiomatization of propositional dynamic logic, and yet another notion of regular algebra.
</description>
<pubDate>Sun, 01 Jul 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148965</guid>
<dc:date>1979-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algorithms for Scheduling Tasks on Unrelated Processors</title>
<link>https://hdl.handle.net/1721.1/148964</link>
<description>Algorithms for Scheduling Tasks on Unrelated Processors
Davis, Ernest; Jaffe, Jeffrey M.
Several algorithms are presented for the nonpreemptive assignment of n independent tasks to m unrelated processors. One algorithm requires polynomial time in n and m, and is at most 2√m times worse than optimal in the worst case. This is the best polynomial time algorithm known for scheduling such sets of tasks. An algorithm with slightly better worst case performance requires polynomial time in n but exponential time in m. This is the best algorithm known that requires time O(nlog(n)) for every fixed value of m.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148964</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the Second Workshop on Data Flow Computer and Program Organization</title>
<link>https://hdl.handle.net/1721.1/148963</link>
<description>Report on the Second Workshop on Data Flow Computer and Program Organization
Misunas, David P.
The following report comprises an edited transcript of presentations made at the Second Workshop on Data Flow Computer and Program Organization, held at MIT on July 9-13, 1978, and co-sponsored by the Lawrence Livermore Laboratory (LLL) and the Department of Energy, Mathematical Sciences Branch. These informal transcriptions are only intended to provide a general picture of ongoing work in the area and, to that end, have been heavily edited and summarized.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148963</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Timestamps and Capability-based Protection in a Distributed Computer Facility</title>
<link>https://hdl.handle.net/1721.1/148962</link>
<description>Timestamps and Capability-based Protection in a Distributed Computer Facility
Wyleczuk, Rosanne H.
This thesis investigates the problems of supporting security requirements and providing protection mechanisms in a distributed computer facility. The nature of the environment necessitates examination of operating systems, data base systems, and computer networks. The capability approach to providing protection in a centralized system is chosen as the foundation for the protection mechanism of the distributed system. The thesis also relies on an interesting approach to the representation of objects in a computer system. An object is represented by a sequence of immutable versions that represent the state of the object over time; each version is the result of an update on the object. This approach to describing objects provides the basis for a flexible definition of the world in which timestamps are naturally associated with every object in the system. The development of a DCF capability mechanism resulted in the following discoveries: Capabilities need not become immediately effective upon their generation. It is not necessary that the object to which access is being authorized exist at the time the capability is generated. And, the revocation of access privileges and the control of capability propagation are not insurmountable problems even in a distributed environment.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148962</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Share a Secret</title>
<link>https://hdl.handle.net/1721.1/148961</link>
<description>How to Share a Secret
Shamir, Adi
In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k-1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148961</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Space Complexity of Two Pebbles Games on Trees</title>
<link>https://hdl.handle.net/1721.1/148960</link>
<description>The Space Complexity of Two Pebbles Games on Trees
Loui, Michael C.
In the standard pebble game the number of pebbles required to pebble the root of a tree can be computed in time linearly proportional to the number of nodes. For the black/white pebble game the number of pebbles necessary to pebble the root of a complete tree is derived.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148960</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a Program for Expert Diagnosis of Acid Base and Electrolyte Disturbances</title>
<link>https://hdl.handle.net/1721.1/148959</link>
<description>Design of a Program for Expert Diagnosis of Acid Base and Electrolyte Disturbances
Patil, Ramesh S.
This research develops the diagnostic component of an interactive system for providing expert advice for the diagnosis, therapy and ongoing management of patients with acid-base and electrolyte disturbances. We have developed a hierarchic representation of a patient's illness which unifies the known facts about the patient, their suspected interrelationships, the hypotheses, and how these hypotheses account for various known and hypothesized facts. An expectation-driven problem solver based on the hypothesize-and-reformulate paradigm performs the diagnosis.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148959</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time, Space and Randomness</title>
<link>https://hdl.handle.net/1721.1/148958</link>
<description>Time, Space and Randomness
Adleman, Leonard M.
Space and time are the fundamental parameters of complexity theory. The thesis of this paper is that randomness is of equal importance. We introduce a notion of randomness (based on Kolmogorov-Chaitin randomness), which we suspect will contribute to the understanding of some of the central problems in complexity theory. The purpose of this paper is primarily conceptual, though several easy theorems are given which clarify the relationship of this notion of randomness to the NP=P question, the complexity of integer factoring, and the sets computable in random polynomial time. Finally, using factoring as an example, we raise the possibility of performing experiments on functions of unknown complexity to indicate the extent of their tractability.
</description>
<pubDate>Thu, 01 Mar 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148958</guid>
<dc:date>1979-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Cryptocomplexity of Knapsack Systems</title>
<link>https://hdl.handle.net/1721.1/148957</link>
<description>On the Cryptocomplexity of Knapsack Systems
Shamir, Adi
A recent trend in cryptographic systems is to base their encryption/decryption functions on NP-complete problems, and in particular on the knapsack problem. To analyze the security of these systems, we need a complexity theory which is less worst-case oriented and which takes into account the extra conditions imposed on the problems to make them cryptographically useful. In this paper we consider the two classes of one-to-one and onto knapsack systems, analyze the complexity of recognizing them and of solving their instances, introduce a new complexity measure (median complexity), and show that this complexity is inversely proportional to the density of the knapsack system. The tradeoff result is based on a fast probabilistic knapsack solving algorithm which is applicable only to one-to-one systems, and it indicates that knapsack-based cryptographic systems in which one can both encrypt and sign messages are relatively insecure.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148957</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimum Register Allocation is Complete in Polynomial Space</title>
<link>https://hdl.handle.net/1721.1/148956</link>
<description>Minimum Register Allocation is Complete in Polynomial Space
Loui, Michael C.
The Minimum Register Allocation Problem is to determine the minimum number of registers required to evaluate an arithmetic expression. A pebble game on directed acyclic graphs is used to prove that this problem is complete in polynomial space.
</description>
<pubDate>Thu, 01 Mar 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148956</guid>
<dc:date>1979-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Network Traffic Generator for DECNET</title>
<link>https://hdl.handle.net/1721.1/148955</link>
<description>A Network Traffic Generator for DECNET
Strazdas, Richard J.
Computer network traffic generators provide a means for supplying benchmark results and for measuring computer network performance at all levels. Eventually they will also aid in fault diagnosis. The network traffic generator described in this thesis allows flexible yet convenient control over a number of parameters useful for generating loads over both test and real networks based on DEC's PDP-11 minicomputer. Implementation on a test network provides sample results. A discussion of design compromises, and recommendations for further study and design, point to various open issues.
</description>
<pubDate>Thu, 01 Mar 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148955</guid>
<dc:date>1979-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>With what Frequency are Apparently Intractable Problems Difficult?</title>
<link>https://hdl.handle.net/1721.1/148954</link>
<description>With what Frequency are Apparently Intractable Problems Difficult?
Meyer, A.R.; Paterson, M.S.
An algorithm is almost polynomial-time (apt) iff there is a polynomial p such that for all n, the algorithm halts within p(n) steps on all but at most p(n) inputs of size at most n. It is known that for NP-complete and polynomial space-complete problems, as well as certain other apparently intractable problems such as integer factoring, the following conditions are equivalent: (1) the problem is solvable by an apt algorithm, (2) the problem (or its complement) is polynomial-time transformable to a polynomial-sparse set, (3) the problem is solvable in polynomial time.
</description>
<pubDate>Thu, 01 Feb 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148954</guid>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mental Poker</title>
<link>https://hdl.handle.net/1721.1/148953</link>
<description>Mental Poker
Shamir, Adi; Rivest, Ronald L.; Adleman, Leonard M.
Can two potentially dishonest players play a fair game of poker without using any cards (e.g. over the phone)? This paper provides the following answers: 1. No. (Rigorous mathematical proof supplied.) 2. Yes. (Correct &amp; complete protocol given.)
</description>
<pubDate>Thu, 01 Feb 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148953</guid>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bicontinuous Extensions of Invertible Combinatorial Functions</title>
<link>https://hdl.handle.net/1721.1/148952</link>
<description>Bicontinuous Extensions of Invertible Combinatorial Functions
Toffoli, Tommaso
We discuss and solve the problem of constructing a diffeomorphic componentwise extension for an arbitrary invertible combinatorial function. Interpreted in physical terms, our solution constitutes a proof of the physical realizability of general computing mechanisms based on reversible primitives.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148952</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Improved Proof of the Rabin-Hartmanis-Stearns Conjecture</title>
<link>https://hdl.handle.net/1721.1/148951</link>
<description>An Improved Proof of the Rabin-Hartmanis-Stearns Conjecture
Perry, Harold M.
We offer an improved presentation of Aanderaa's constructive proof of the Rabin-Hartmanis-Stearns conjecture: For all k≥2, there exists a language Lk such that Lk can be recognized by a k-worktape real-time Turing machine but cannot be recognized by any (k-1)-worktape real-time Turing machine.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148951</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Scheduling of Tasks Without Full Use of Processor Resources</title>
<link>https://hdl.handle.net/1721.1/148950</link>
<description>Efficient Scheduling of Tasks Without Full Use of Processor Resources
Jaffe, Jeffrey
The nonpreemptive scheduling of a partially ordered set of tasks on a machine with m processors of different speeds is studied. Heuristics are presented which benefit from selective non-use of slow processors. The performance of these heuristics is asymptotically √m times worse than optimal, whereas demand-driven schedules are unboundedly worse than optimal for any fixed value of m. The algorithms are extended to the situation where functionally dedicated processors must process tasks of a given type. Here, too, the worst-case performance of the algorithms improves on the worst-case performance of known algorithms. The techniques of analyzing these schedules are used to obtain a bound on a large class of preemptive schedules.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148950</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Equivalence of R. E. Programs and Data Flow Schemes</title>
<link>https://hdl.handle.net/1721.1/148949</link>
<description>The Equivalence of R. E. Programs and Data Flow Schemes
Jaffe, Jeffrey
The expressive power of the data flow schemes of Dennis is evaluated. It is shown that data flow schemes have the power to express an arbitrary determinate functional. The proof involves a demonstration that "restricted data flow schemes" can simulate Turing machines. This provides a new, simple basis for computability.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148949</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Operational Semantics of a Data Flow Language</title>
<link>https://hdl.handle.net/1721.1/148948</link>
<description>Operational Semantics of a Data Flow Language
Brock, Jarvis D.
A data flow machine achieves high performance by the concurrent execution of machine code consisting of data flow graphs which explicitly represent the data dependencies among program instructions. This thesis presents the operational semantics of ADFL, an applicative data flow language with an iteration construct resembling tail recursion and an error-handling scheme appropriate to the concurrency of data flow. The operational semantics O∘T of ADFL are expressed as a two-step process: the translation algorithm T maps an ADFL expression into its graph implementation, and the semantic function O maps the graph into its semantic characterization. Data flow graphs are specified by use of a graph assembly language, and the semantics of these graphs are derived by use of Kahn's fixpoint theory of communicating processes.
</description>
<pubDate>Fri, 01 Dec 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148948</guid>
<dc:date>1978-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Security of the Merkle-Hellman Cryptographic Scheme</title>
<link>https://hdl.handle.net/1721.1/148947</link>
<description>On the Security of the Merkle-Hellman Cryptographic Scheme
Shamir, Adi; Zippel, Richard E.
In this paper we show that a simplified version of the Merkle-Hellman public-key cryptographic system is breakable. While their full-fledged system seems to be resistant to the cryptanalytic attack we propose, this result suggests some ways in which the security of their system can be further enhanced.
</description>
<pubDate>Fri, 01 Dec 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148947</guid>
<dc:date>1978-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Model Equivalence</title>
<link>https://hdl.handle.net/1721.1/148946</link>
<description>Data Model Equivalence
Borkin, Sheldon A.
The current proliferation of proposals for database system data models and the desire for database systems which support several different data models raise many questions concerning "equivalence properties" of different data models. To answer these questions, one first needs clear definitions of the concepts under discussion. This paper presents formal definitions of the terms database, operation, operation type, application model and data model. Using this formal framework, database state equivalence, operation equivalence, application model equivalence and data model equivalence are distinguished. Three types of application and data model equivalence are defined: isomorphic, composed operation, and state dependent. Possibilities for partial equivalences are mentioned. Implementation implications of these different equivalences are discussed. Examples are presented using two semantic data models, the semantic relation data model and the semantic graph data model.
</description>
<pubDate>Fri, 01 Dec 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148946</guid>
<dc:date>1978-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Six Lectures on Dynamic Logic</title>
<link>https://hdl.handle.net/1721.1/148945</link>
<description>Six Lectures on Dynamic Logic
Pratt, Vaughan R.
The distinction made in these lectures between static and dynamic logic has a very simple character, yet it can play a central and unifying role in logic as a vantage point from which one can compare propositional calculus, predicate calculus, intensional logics such as modal logic and temporal logic, various algorithmic logics (logics of programs), and Quine's notions of transparency and opacity.
</description>
<pubDate>Fri, 01 Dec 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148945</guid>
<dc:date>1978-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Modal Logic to Programming</title>
<link>https://hdl.handle.net/1721.1/148944</link>
<description>Applications of Modal Logic to Programming
Pratt, Vaughan R.
The modal logician's notion of possible world and the computer scientist's notion of state of a machine provide a point of commonality which can form the foundation of a logic of action. Extending ordinary modal logic with the calculus of binary relations leads to a very natural logic for describing the behavior of computer programs.
</description>
<pubDate>Fri, 01 Dec 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148944</guid>
<dc:date>1978-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent Programming</title>
<link>https://hdl.handle.net/1721.1/148943</link>
<description>Concurrent Programming
Bryant, Randal E.; Dennis, Jack B.
</description>
<pubDate>Sun, 01 Oct 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148943</guid>
<dc:date>1978-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Research Directions in Computer Architecture</title>
<link>https://hdl.handle.net/1721.1/148942</link>
<description>Research Directions in Computer Architecture
Dennis, Jack B.; Fuller, Samuel H.; Ackerman, William B.; Swan, Richard J.; Weng, Kung-Song
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148942</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Near-optimal Method for Reasoning About Action</title>
<link>https://hdl.handle.net/1721.1/148941</link>
<description>A Near-optimal Method for Reasoning About Action
Pratt, Vaughan R.
We give an algorithm for "before-after" reasoning about action. The algorithm decides satisfiability and validity of formulae of propositional dynamic logic, a recently developed logic of change of state that subsumes the zero-order component of most other action-oriented logics. The algorithm requires time at most proportional to an exponentially growing function of the length (number of occurrences of variables and connectives) of the input. Fischer and Ladner have shown that every algorithm for this problem must take exponential time, making this algorithm optimal to within a polynomial. No decision method for any other logic is known to be optimal to within less than an exponential. The typical running time of our algorithm makes it a heuristically efficient algorithm of considerable practical interest. Application areas include program verification, program synthesis, and discourse analysis. The algorithm is based on the method of semantic tableaux, appropriately generalized to dynamic logic. A novel treatment of Hintikka sets via theory algebras supplies the theoretical basis for our treatment of tableaux.
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148941</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Decidability Result for a Second Order Process Logic</title>
<link>https://hdl.handle.net/1721.1/148940</link>
<description>A Decidability Result for a Second Order Process Logic
Parikh, Rohit
We prove the decidability of the validity problem for a rather general language for talking about computations. As corollaries of our result, we obtain some decidability results of Pratt, Constable, Fischer-Ladner, and Pnueli and also a new decidability result for deterministic propositional dynamic logic.
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148940</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounds on the Scheduling of Typed Task Systems</title>
<link>https://hdl.handle.net/1721.1/148939</link>
<description>Bounds on the Scheduling of Typed Task Systems
Jaffe, Jeffrey M.
We study the scheduling of different types of tasks on different types of processors. If there are k types of tasks and m_i identical processors for tasks of type i, the finishing time of any demand-driven or list schedule is at most k+1-(1/max(m_1,…,m_k)) times worse than the optimal schedule. This bound is best possible. If the processors execute at different speeds, then the performance ratio of any list schedule (relative to the optimal schedule) is bounded by k plus the maximum ratio between the speeds of any two processors of the same type.
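As a toy illustration of the kind of schedule analyzed, here is a generic greedy list schedule on identical processors of a single type (the function name and example are ours, not the paper's construction, and precedence constraints are ignored):

```python
import heapq

def list_schedule(durations, m):
    # Greedy list schedule: scan tasks in list order, always assigning
    # the next task to the processor that becomes free earliest.
    free = [0] * m          # next-free times of the m identical processors
    heapq.heapify(free)
    finish = 0
    for d in durations:
        t = heapq.heappop(free)
        heapq.heappush(free, t + d)
        finish = max(finish, t + d)
    return finish           # the schedule's finishing time (makespan)

# On 2 processors, the list order [3, 3, 2, 2, 2] finishes at time 7,
# while an optimal schedule ([3, 3] vs. [2, 2, 2]) finishes at time 6,
# illustrating how list order alone can cost a constant factor.
```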
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148939</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Preemptive Multiprocessor Job Scheduling</title>
<link>https://hdl.handle.net/1721.1/148938</link>
<description>An Analysis of Preemptive Multiprocessor Job Scheduling
Jaffe, Jeffrey M.
The preemptive scheduling of a partially ordered set of tasks is studied. A class of scheduling heuristics is introduced, and the performance of schedules in this class is analyzed with respect to the least finishing time optimality criterion. If there are m processors, then the finishing time of any schedule in the class is at most √m + (1/2) times worse than optimal, independent of the speeds of the processors. Examples are given which indicate that there are schedules which may be as bad as √m-1 times worse than optimal even for machines with one fast processor.
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148938</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effectiveness</title>
<link>https://hdl.handle.net/1721.1/148937</link>
<description>Effectiveness
Parikh, Rohit
Church's thesis equates the intuitive notion "effective" with the mathematical notion "recursive." In order for this thesis to provide any information to us we have to have a clear understanding of both notions. We consider one of the prevalent definitions of "effective" and compare it with the notions of syntactic and semantic consequence to see which one it corresponds to better. The notion of syntactic consequence, while useful, is subservient to the semantic notion, and when we go from one language to another we expect to have to change the syntactic notion of consequence, if we are lucky enough to have one at all. Similarly, the prevalent notion of effectiveness is a restricted one and has had the effect of limiting our view. At the end of section 3, we give a more general analysis of effectiveness and propose a mathematical theory. In section 4 we consider the question whether the set of grammatical sentences of English is recursive. We show that this question is not well posed and that the arguments in favour of a positive answer are question begging. We reformulate this question in the form "How recursive is the set of grammatical sentences of English?", and propose a way of turning it into a precise technical problem. The method used is a generalisation of the Kolmogorov-Chaitin theory of randomness, which is briefly sketched.
</description>
<pubDate>Sat, 01 Jul 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148937</guid>
<dc:date>1978-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of the Solovay and Strassen Test for Primality</title>
<link>https://hdl.handle.net/1721.1/148936</link>
<description>An Analysis of the Solovay and Strassen Test for Primality
Baratz, Alan E.
In this paper we analyze the performance of the Solovay and Strassen probabilistic primality testing algorithm. We show that iterating Solovay and Strassen's algorithm r times, using independent random numbers at each iteration, results in a test for the primality of any positive odd integer, n&gt;2, with error probability 0 (if n is prime), error probability at most 4^-r (if n is composite and non-Carmichael), and error probability at most 2^-r (if n is composite and Carmichael).
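A minimal sketch of the test being analyzed, in our own illustrative Python (the standard Jacobi-symbol computation plus Euler's criterion; the parameter r is the iteration count from the abstract):

```python
import random

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n, via quadratic reciprocity.
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # 0 signals gcd(a, n) != 1

def solovay_strassen(n, r=20):
    # Declares n prime unless some random witness a refutes Euler's
    # criterion a^((n-1)/2) = (a/n) mod n; error probability at most 2^-r.
    if n in (0, 1):
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for _ in range(r):
        a = random.randrange(2, n)
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False            # definitely composite
    return True                     # probably prime
```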
</description>
<pubDate>Sat, 01 Jul 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148936</guid>
<dc:date>1978-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Fast Signature Scheme</title>
<link>https://hdl.handle.net/1721.1/148935</link>
<description>A Fast Signature Scheme
Shamir, Adi
In this paper we propose a new scheme for generating and verifying "electronic signatures" in public-key communications. The scheme is based on the difficulty of solving the knapsack problem, and its two main advantages over previous schemes are speed and simplicity.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148935</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Completeness Result for a Propositional Dynamic Logic</title>
<link>https://hdl.handle.net/1721.1/148934</link>
<description>A Completeness Result for a Propositional Dynamic Logic
Parikh, Rohit
Propositional modal logic of programs has been introduced by Fischer and Ladner [1], following ideas of Pratt [4]. We shall call it propositional dynamic logic (PDL), following the terminology of Harel, Meyer and Pratt. In the following we prove the completeness of a rather natural set of axioms for this logic and for an extension of it obtained by allowing the inverse operation, which converts a program into its inverse.
</description>
<pubDate>Sat, 01 Jul 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148934</guid>
<dc:date>1978-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Faster Algorithm Computing String Edit Distances</title>
<link>https://hdl.handle.net/1721.1/148933</link>
<description>A Faster Algorithm Computing String Edit Distances
Masek, William J.; Paterson, Michael S.
The edit-distance between two character strings can be defined as the minimum cost of a sequence of editing operations which transforms one string into the other. The operations allowed are deleting, inserting, and replacing one symbol at a time, with possibly different costs for each of these operations. The problem of finding the longest common subsequence of two strings is a special case of the problem of computing edit-distances. We describe an algorithm for computing the edit-distance between two strings of length n and m, n&gt;=m, which requires O(nm/min(log n, m)) steps whenever the costs of edit-operations are integral multiples of a single positive real number and the alphabet for the strings is finite. These conditions are necessary for the algorithm to achieve the time bound.
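For reference, the classic quadratic-time dynamic program that such algorithms improve upon can be sketched as follows (our own sketch; the unit default costs and function name are illustrative):

```python
def edit_distance(s, t, del_cost=1, ins_cost=1, rep_cost=1):
    # Classic dynamic program: d[i][j] is the cheapest way to turn
    # the first i symbols of s into the first j symbols of t.
    n, m = len(s), len(t)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * del_cost              # delete everything
    for j in range(1, m + 1):
        d[0][j] = j * ins_cost              # insert everything
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            rep = 0 if s[i - 1] == t[j - 1] else rep_cost
            d[i][j] = min(d[i - 1][j - 1] + rep,       # replace or match
                          d[i - 1][j] + del_cost,      # delete s[i-1]
                          d[i][j - 1] + ins_cost)      # insert t[j-1]
    return d[n][m]

# edit_distance("kitten", "sitting") evaluates to 3
# (replace k, replace e, insert g).
```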
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148933</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of Queues in the Parallel Data Flow Evaluation of "If-Then-While" Programs</title>
<link>https://hdl.handle.net/1721.1/148932</link>
<description>The Use of Queues in the Parallel Data Flow Evaluation of "If-Then-While" Programs
Jaffe, Jeffrey
A property of a model of parallel computation is analyzed. We show that the use of queues may speed up the execution of well formed data flow schemas by an arbitrarily large factor. A general model of data flow computation is presented to provide a framework for the comparison of data flow models. In particular, formal definitions of a data flow version of the Computation Graphs of Karp and Miller and of the Data Flow Schemas of Dennis are provided within the context of this model.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148932</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Arithmetical Completeness in Logics of Programs</title>
<link>https://hdl.handle.net/1721.1/148931</link>
<description>Arithmetical Completeness in Logics of Programs
Harel, David
We consider the problem of designing arithmetically complete axiom systems for proving general properties of programs; i.e., axiom systems which are complete over arithmetical universes, when all first-order formulae which are valid in such universes are taken as axioms. We prove a general Theorem of Completeness which takes care of a major part of the work involved in designing such systems. It is then shown that what is left to do in order to establish an arithmetical completeness result, such as those appearing in [12] and [14] for the logics DL and DL+, can be described as a chain of reasoning which involves some simple utilizations of arithmetical induction. An immediate application of these observations is given in the form of an arithmetical completeness result for a new logic similar to that of Salwicki [22]. Finally, we contrast this discipline with Cook's [5] notion of relative completeness.
</description>
<pubDate>Sat, 01 Apr 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148931</guid>
<dc:date>1978-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lower Bounds on Information Transfer in Distributed Computations</title>
<link>https://hdl.handle.net/1721.1/148930</link>
<description>Lower Bounds on Information Transfer in Distributed Computations
Abelson, Harold
We derive a lower bound on the interprocessor information transfer required for computing a function in a distributed network. The bound is expressed in terms of the function's derivatives, and we use it to exhibit functions whose computation requires a great deal of interprocess communication. As a sample application, we give lower bounds on information transfer in the distributed computation of some typical matrix operations.
</description>
<pubDate>Sat, 01 Apr 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148930</guid>
<dc:date>1978-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Descriptions and the Specialization of Concepts</title>
<link>https://hdl.handle.net/1721.1/148929</link>
<description>Descriptions and the Specialization of Concepts
Martin, William A.
The OWL II System computes with expressions which describe an object from a particular viewpoint. These partial descriptions form a tree structure under the specialization operation, which preserves intensional properties. The descriptions are also related in terms of their extensions by characterization and exemplar links. Descriptions of individuals must always specify a context of the description. Eight ways in which one description can be a specialization of another are distinguished.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148929</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computer Architecture for Data-flow Computation</title>
<link>https://hdl.handle.net/1721.1/148928</link>
<description>A Computer Architecture for Data-flow Computation
Misunas, David P.
The structure of a computer which utilizes a data-flow program representation as its base language is described. The use of the data-flow representation allows full exploitation by the processor of the parallelism and concurrency achievable through the data-flow form. The unique architecture of the processor avoids the usual problems of processor switching and memory/processor interconnection through the use of interconnection networks which have a great deal of inherent parallelism. The structure of the processor allows a large number of instructions to be active simultaneously. These active instructions pass through the interconnection networks concurrently and form streams of instructions for the pipelined functional units. Due to the cyclic nature of an iterative computation, the possibility of deadlock can arise in the performance of such a computation within the data-flow architecture. A deadlock is caused by the interaction of several simultaneously active cycles of the same iterative computation. The use of a recursive rather than iterative representation of a computation avoids the deadlock problem and provides a more efficient implementation of the computation within the architecture. For this reason, a program executed by the data-flow processor is restricted to an acyclic directed graph representation.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148928</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Subgraph Homeomorphism Problem</title>
<link>https://hdl.handle.net/1721.1/148927</link>
<description>The Subgraph Homeomorphism Problem
LaPaugh, Andrea Suzanne
The problem investigated in this thesis is that of finding homeomorphic images of a given graph, called the pattern graph, in a larger graph. A homeomorphism is a pair of mappings, (v,a), such that v maps the nodes of the pattern graph to nodes of the larger graph, and a maps the edges of the pattern graph to (edge or node) disjoint paths in the larger graph. A homeomorphism represents a similarity of structure between the graphs involved. Therefore, it is an important concept for both graph theory and applications such as programming schemas. We give a formal definition of the subgraph homeomorphism problem. In our investigation, we focus on algorithms which depend on the pattern graph and allow the node mapping, v, to be partially or totally specified. Reductions between node disjoint and edge disjoint formulations of the problem are discussed. Also, reductions facilitating the solution of given subgraph homeomorphism problems are formulated. A linear time algorithm for finding a cycle in a graph containing three given nodes of the graph is presented. Finally, the two disjoint paths problem, an open problem, is discussed in detail.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148927</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nondeterminism in Logics of Programs</title>
<link>https://hdl.handle.net/1721.1/148926</link>
<description>Nondeterminism in Logics of Programs
Harel, David; Pratt, Vaughan R.
We investigate the principles underlying reasoning about nondeterministic programs, and present a logic to support this kind of reasoning. Our logic, an extension of dynamic logic ([22] and [12]), subsumes most existing first-order logics of nondeterministic programs, including that developed by Dijkstra based on the concept of weakest precondition. A significant feature is the strict separation between the two kinds of nonterminating computations: infinite computations and failures. The logic has a Tarskian truth-value semantics, an essential prerequisite to establishing completeness of axiomatizations of the logic. We give an axiomatization for flowchart (regular) programs that is complete relative to arithmetic in the sense of Cook. Having a satisfactory tool at hand, we turn to the clarification of the concept of the total correctness of nondeterministic programs, providing, in passing, a critical evaluation of the widely used "predicate transformer" approach to the definition of programming constructs, initiated by Dijkstra [5]. Our axiom system supplies a complete axiomatization of wp.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148926</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computability and Completeness in Logics of Programs</title>
<link>https://hdl.handle.net/1721.1/148925</link>
<description>Computability and Completeness in Logics of Programs
Harel, David; Meyer, Albert R.; Pratt, Vaughan R.
Dynamic logic is a generalization of first order logic in which quantifiers of the form "for all X…" are replaced by phrases of the form "after executing program α…". This logic subsumes most existing first-order logics of programs that manipulate their environment, including Floyd's and Hoare's logics of partial correctness and Manna and Waldinger's logic of total correctness, yet is more closely related to classical first-order logic than any other proposed logic of programs. We consider two issues: how hard is the validity problem for the formulae of dynamic logic, and how might one axiomatize dynamic logic? We give bounds on the validity problem for some special cases, including a Π^0_2-completeness result for the partial correctness theories of uninterpreted flowchart programs and a Π^1_1-completeness result for unrestricted dynamic logic. We also demonstrate the completeness of an axiomatization of dynamic logic relative to arithmetic.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148925</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Complete Axiomatic System for Proving Deductions About Recursive Programs</title>
<link>https://hdl.handle.net/1721.1/148924</link>
<description>A Complete Axiomatic System for Proving Deductions About Recursive Programs
Harel, David; Pnueli, Amir; Stavi, Jonathan
Denoting by H a version of Hoare's system for proving partial correctness of recursive programs, we present an extension D which may be thought of as H ∪ {∧,∨,∃,∀} ∪ H^-1, including the rules of H, four special purpose rules, and inverses of Hoare's rules. D is shown to be a complete system (in Cook's sense) for proving deductions of the form σ1,…,σn ⊢ σ over a language whose wffs are assertions in some assertion language L and partial correctness specifications of the form p{α}q. All valid formulae of L are taken as axioms of D. It is shown that D is sufficient for proving partial correctness, total correctness, and program equivalence, as well as other important properties of programs, the proofs of which are impossible in H. The entire presentation is worked out in the framework of nondeterministic programs employing iteration and mutually recursive procedures.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148924</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Second Order Logic with First Order Quantifiers</title>
<link>https://hdl.handle.net/1721.1/148923</link>
<description>Characterizing Second Order Logic with First Order Quantifiers
Harel, David
A language Q is defined and given semantics, the formulae of which are quantifier-free first-order matrices prefixed by combinations of finite partially ordered first-order quantifiers. It is shown that Q is equivalent in expressive power to second order logic by establishing the equivalence of alternating second order quantifiers and forming conjunctions of partially ordered first-order quantifiers.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148923</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Dynamic Debugging System for MDL</title>
<link>https://hdl.handle.net/1721.1/148922</link>
<description>A Dynamic Debugging System for MDL
Berez, Joel M.
Program debugging is a time consuming process. Conventional debugging techniques and aids typically give the user a narrow view of the program's operation, making debugging difficult. A debugging system that would present a clear overall picture of a program's behavior and would be both flexible and simple to operate would be a valuable tool. Such a system was designed and implemented in and for MDL, a high-level applicative programming language. This report discusses the design alternatives considered during the debugging system's design and implementation phases, the reasons for the resulting design choices, and the system attributes. A major attribute of the system (MEND) is that it does not simulate the program being debugged but instead monitors it from another process. This attribute results in a robust and viable debugging system, because MEND need not be modified in order to handle each new extension to MDL and/or each new user-defined primitive.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148922</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Logic Design for the Cell Block of a Data-flow Processor</title>
<link>https://hdl.handle.net/1721.1/148921</link>
<description>A Logic Design for the Cell Block of a Data-flow Processor
Amikura, Katsuhiko
Recent studies on parallel computation architecture have yielded a new type of computer architecture known as the data-flow processor. As part of the effort in realizing the data-flow processor, a logic design for the Cell Block of the basic data-flow processor is proposed in this thesis. The resulting design has a modular structure which is derived from a top-down decomposition of the specification given in an Architecture Description Language. The desired speed of operation of the Cell Block is obtained by exploiting the parallelism inherent in its operation. The logic design is carried out using electronic devices available commercially today, but is based on an asynchronous communication protocol.
</description>
<pubDate>Thu, 01 Dec 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148921</guid>
<dc:date>1977-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the Workshop on Data Flow Computer and Program Organization</title>
<link>https://hdl.handle.net/1721.1/148920</link>
<description>Report on the Workshop on Data Flow Computer and Program Organization
Misunas, David P.
The following report comprises an edited transcript of presentations made at the Workshop on Data Flow Computer and Program Organization, held at MIT on July 10-14, 1977 and co-sponsored by the Lawrence Livermore Laboratory (LLL) and the Department of Energy, Mathematical Sciences Branch. These informal transcriptions are only intended to provide a general picture of ongoing work in the area and, to that end, have been heavily edited and summarized. For further details, the interested reader should consult the bibliography at the end of the report.
</description>
<pubDate>Tue, 01 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148920</guid>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Factoring Numbers in O(log n) Arithmetic Steps</title>
<link>https://hdl.handle.net/1721.1/148919</link>
<description>Factoring Numbers in O(log n) Arithmetic Steps
Shamir, Adi
In this paper we show that a non-trivial factor of a composite number n can be found by performing a number of arithmetic steps proportional to the number of bits in n, and thus there are extremely short straight-line factoring programs. However, this theoretical result does not imply that natural numbers can be factored in polynomial time in the Turing-Machine model of complexity, since the numbers operated on can be as big as 2^cn^2, thus requiring exponentially many bit operations.
</description>
<pubDate>Tue, 01 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148919</guid>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Computer Decentralization</title>
<link>https://hdl.handle.net/1721.1/148918</link>
<description>An Analysis of Computer Decentralization
D'Oliveira, Cecilia R.
This thesis is concerned with the recent trend toward decentralization of the computer facility. We conjecture that there are strong forces in many organizations leading towards decentralization, which have been held in check by technological and economic constraints that are beginning to relax. This conjecture is explored by analyzing approximately forty case studies of decentralization decisions. The results indicate that (1) strong decentralization forces do exist in many organizations; the forces derived from these particular case studies are classified as either functional, economic, or psychological; and (2) the drop in hardware costs allows decentralization to occur at the initiative of lower level managers. The consequences could include disintegration of the organization's information system. Decisions by lower level managers may overlook the technological constraints of decentralization, especially the problems of networking loosely coupled computers. This could result in a future inability to share data or programs among organizational units. Because of the many functional advantages it provides, we do not feel that top level management should discourage decentralization. However, top level management must be aware that the technological constraints require that decentralization occur with their guidance and their perspective of the entire organization.
</description>
<pubDate>Sat, 01 Oct 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148918</guid>
<dc:date>1977-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring User Characteristics on the MULTICS System</title>
<link>https://hdl.handle.net/1721.1/148917</link>
<description>Measuring User Characteristics on the Multics System
Rodriguez, Humberto, Jr.
One of the problems in measuring the performance of a computer system is in defining its normal workload. In the case of timesharing systems, it is necessary to develop a behavioral model of the average user. This thesis presents a study of several parameters that characterize user behavior on the Multics timesharing system at MIT. Data was gathered by monitoring the logon sessions of three different groups of users. The results are presented and comparisons are made between the command usage of the groups. Some patterns of usage do appear in the results, but it is unclear if they can be applied in other situations. A probability distribution of the think time between commands is shown and compared with other distributions. The benchmark program currently used on the Multics system is also compared with the user model described in this study. The capability to monitor user behavior and characteristics is shown to be useful and worth installing in the system.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148917</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Triangulations of a Set of Points in the Plane</title>
<link>https://hdl.handle.net/1721.1/148916</link>
<description>On Triangulations of a Set of Points in the Plane
Lloyd, Errol Lynn
A set, V, of points in the plane is triangulated by a subset, T, of the straight line segments whose endpoints are in V, if T is a maximal subset such that the line segments in T intersect only at their endpoints. The weight of any triangulation is the sum of the Euclidean lengths of the line segments in the triangulation. We examine two problems involving triangulations. We discuss several aspects of the problem of finding a minimum weight triangulation among all triangulations of a set of points and give counterexamples to two published solutions to this problem. Secondly, we show that the problem of determining the existence of a triangulation in a given subset of the straight line segments whose endpoints are in V is NP-Complete.
</description>
<pubDate>Fri, 01 Jul 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148916</guid>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ancillary Reports: Kernel Design Project</title>
<link>https://hdl.handle.net/1721.1/148915</link>
<description>Ancillary Reports: Kernel Design Project
Clark, David D; Saltzer, Jerome H.; Voydock, V.L.; Janson, P.A.; Hunt, D.H.; Forsdick, H.C.; Reed, D.P.; Frankston, R.M.; Mabee, R.F.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148915</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Overview of OWL, A Language for Knowledge Representation</title>
<link>https://hdl.handle.net/1721.1/148914</link>
<description>An Overview of OWL, A Language for Knowledge Representation
Szolovits, Peter; Hawkinson, Lowell B.; Martin, William A.
We describe the motivation and overall organization of the OWL language for knowledge representation. OWL consists of a memory of concepts in terms of which all English phrases and all knowledge of an application domain are represented, a theory of English grammar which tells how to map English phrases into concepts, a parser to perform that mapping for individual sentences, and an interpreter to carry out procedures which are written in the same representational formalism. The system has been applied to the study of interactive dialogs, explanations of its own reasoning, and question answering.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148914</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Minimum Cutsets in Reducible Graphs</title>
<link>https://hdl.handle.net/1721.1/148913</link>
<description>Finding Minimum Cutsets in Reducible Graphs
Shamir, Adi
The analysis of many processes modelled by directed graphs requires the selection of a subset of vertices which cut all the cycles in the graph. Reducing the size of such a cutset usually leads to a simpler and more efficient analysis, but the problem of finding minimum cutsets in general directed graphs is known to be NP-complete. In this paper we show that in reducible graphs (and thus in almost all the "practical" flowcharts of programs), minimum cutsets can be found in linear time. An immediate application of this result is in program verification systems based on Floyd's inductive assertions method.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148913</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Mutual Exclusion Problem for Unreliable Processes</title>
<link>https://hdl.handle.net/1721.1/148912</link>
<description>The Mutual Exclusion Problem for Unreliable Processes
Rivest, Ronald L.; Pratt, Vaughan R.
Consider n processes operating asynchronously in parallel, each of which maintains a single "public" variable which can be read (but not written) by the other processes. We show that the processes can synchronize their actions by the basic operations of (1) reading each other's public variables, and (2) setting their own public variable to some value. A process may "die" (fail) at any time, when its public variable is (automatically) set to a special "dead" value. A dead process may revive. Reading a public variable which is being simultaneously updated returns either the old or the new value. Each process may be in a certain "critical" state (which it leaves if it dies). We present a synchronization scheme with the following properties. (1) At most one process is ever in its critical state at a time. (2) If a process wants to enter its critical state, it may do so before any other process enters its critical state more than once. (3) The public variables assume only a finite number of values. (4) A process wanting to enter its critical state can always make progress towards that goal. (5) The various processes may run at arbitrary speeds relative to one another. By the definition of the problem, no process can prevent another from entering its critical state by repeatedly failing and restarting. In the case of two processes, what makes our solution of particular interest is its remarkable simplicity when compared with the extant solutions to this problem. Our n-process solution uses the two-process solution as a subroutine, and is not quite as elegant as the two-process solution.
</description>
<pubDate>Fri, 01 Apr 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148912</guid>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Construction and Analysis of a Network Flow Problem Which Forces the Karzanov Algorithm to O(n^3) Running Time</title>
<link>https://hdl.handle.net/1721.1/148911</link>
<description>Construction and Analysis of a Network Flow Problem Which Forces the Karzanov Algorithm to O(n^3) Running Time
Baratz, Alan E.
The intent of this paper is to demonstrate the construction of a network flow problem which will force the Karzanov "Preflow" algorithm to run in its theoretical worst-case time O(n^3). Once such a "bad case" network has been constructed, an analysis is performed to determine the exact time required by the algorithm to compute the maximum flow through the network.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148911</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Method for Obtaining Digital Signatures and Public-key Cryptosystems</title>
<link>https://hdl.handle.net/1721.1/148910</link>
<description>A Method for Obtaining Digital Signatures and Public-key Cryptosystems
Rivest, Ronald L.; Shamir, Adi; Adleman, Len
We present an encryption method with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences. 1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. 2. A message can be "signed" using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in "electronic mail" and "electronic funds transfer" systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product n of two large secret prime numbers p and q. Decryption is similar; only a different, secret power d is used, where e*d = 1 (mod (p-1)*(q-1)). The security of the system rests in part on the difficulty of factoring the published divisor, n.
</description>
<pubDate>Fri, 01 Apr 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148910</guid>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardware Estimation of a Process' Primary Memory Requirements</title>
<link>https://hdl.handle.net/1721.1/148909</link>
<description>Hardware Estimation of a Process' Primary Memory Requirements
Gifford, David K.
It is shown that a process' primary memory requirements can be approximated by use of the miss rate in the Honeywell 6180's page table word associative memory. This primary memory requirement estimate was employed by an experimental version of Multics to control the level of multiprogramming in the system, and bill for memory usage. The resultant system's tuning parameters were shown to be configuration insensitive, and it was conjectured that the system would also track shifts in the referencing characteristics of its workload and keep the system in tune. The limitations of the assumptions made about a process' referencing characteristics are examined, and directions for future research are outlined.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148909</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Max Flow Algorithm of Dinic and Karzanov: An Exposition</title>
<link>https://hdl.handle.net/1721.1/148908</link>
<description>The Max Flow Algorithm of Dinic and Karzanov: An Exposition
Even, Shimon
Recently A.V. Karzanov improved Dinic's algorithm to run in time O(n^3) for networks of n vertices. For the benefit of those who do not read Russian, the Dinic-Karzanov algorithm is explained and proved. In addition to being the best algorithm known for network flow, this algorithm is unique in that it does not use path augmentation.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148908</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A System to Process Dialogue: A Progress Report</title>
<link>https://hdl.handle.net/1721.1/148907</link>
<description>A System to Process Dialogue: A Progress Report
Brown, Gretchen P.
This is a progress report on work toward an English-language interface for expert systems. A framework for handling mixed-initiative English dialogue in a console session environment is discussed, with special emphasis placed on recognition. The ideas presented here are being implemented in a prototype system called Susie Software, which is embedded in the OWL system. OWL is currently under development in the Automatic Programming Group at the MIT Laboratory for Computer Science. We are using OWL to explore the problems of constructing expert systems, and for Susie Software the domain of expertise is programming. In the Susie effort to date, major emphasis has been placed on the construction of a computational model for the structural aspects of English dialogue; it is this structural model that will be discussed.
</description>
<pubDate>Fri, 01 Oct 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148907</guid>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Information Storage Reliability Using a Data Network</title>
<link>https://hdl.handle.net/1721.1/148906</link>
<description>Improving Information Storage Reliability Using a Data Network
Benjamin, Arthur J.
Backup and recovery methods using magnetic tapes are common in computer utilities, since information stored on-line is subject to damage. The serial access nature of the tape medium severely restricts the flexibility and simplicity of accessing and managing the stored data. A method using a data network is described: a backup mechanism which takes advantage of a large, inexpensive, random access remote data storage facility to provide data access and management functions that are more flexible than those provided by a traditional backup facility. Although data transfer rates will be reduced, data access and management will be simplified, and system availability will be improved. The work described is based on a network backup facility built for the Multics computer utility, using the ARPAnet.
</description>
<pubDate>Fri, 01 Oct 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148906</guid>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task Scheduling in the Control Robotics Environment</title>
<link>https://hdl.handle.net/1721.1/148905</link>
<description>Task Scheduling in the Control Robotics Environment
Mok, Aloysius Ka-Lau
Scheduling problems involved in Control Robotics, a software approach to control engineering, are studied. The capability of a multiprocessor system to handle tasks with hard, real-time deadlines is investigated according to whether complete or partial a priori knowledge of the deadlines, computation times and frequencies of occurrence of individual tasks is available. A model of preemptive scheduling, the "scheduling game", is introduced to explore mathematical relationships for different scheduling situations. A necessary and sufficient condition for scheduling tasks with simultaneous requests or deadlines is derived. Partial solutions and the difficulties involved in scheduling tasks with distributed requests are discussed. It is shown that in the most general case, there is no globally optimal algorithm in the absence of a priori knowledge about the distribution of requests of future tasks in time.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148905</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on the Average Time to Compute Transitive Closures</title>
<link>https://hdl.handle.net/1721.1/148904</link>
<description>A Note on the Average Time to Compute Transitive Closures
Bloniarz, P.A.; Fischer, M.J.; Meyer, A.R.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148904</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>K+1 Heads are Better Than K</title>
<link>https://hdl.handle.net/1721.1/148903</link>
<description>K+1 Heads are Better Than K
Yao, Andrew C.; Rivest, Ronald L.
There are languages which can be recognized by a deterministic (k+1)-headed one-way finite automaton but which cannot be recognized by a k-headed one-way (deterministic or non-deterministic) finite automaton. Furthermore, there is a language accepted by a 2-headed nondeterministic finite automaton which is accepted by no k-headed deterministic finite automaton.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148903</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design of a Modular Laboratory for Control Robotics</title>
<link>https://hdl.handle.net/1721.1/148902</link>
<description>The Design of a Modular Laboratory for Control Robotics
Malvania, Nikhil
Computers have been used for the control of physical processes since the early sixties. In this thesis, we look at Control Robotics, the procedural control of physical processes. Based upon this new approach, a design for a modular laboratory is proposed. The laboratory consists of a set of experiments which can be synthesized using certain conversion and processing modules. The laboratory also entails the generation of algorithms and programs for each experiment. Experiments are proposed and analysed, and a common and, in a sense, minimal set of hardware modules is selected using a minimax approach. Power, torque, strength, resolution and other similar requirements for the modules are discussed. A theoretical model is developed for predicting and analyzing the capability of a processor to perform real-time control. The model is based upon the so-called Earliest Deadline algorithm for scheduling a number of tasks on a single processor. The model relates the bandwidths of different tasks a processor can perform to the total number of tasks, the average instruction execution time for the processor, and the complexity of its instruction set. This model is used to exhibit and compare the controlling capacities of two processors, Digital Equipment Corporation's PDP 11/45 and Intel's 8080. It is also used to predict the processor requirements for the experiments of the proposed modular laboratory. Thesis results include measures of the relative power of the tested processors in the context of real-time control, and their capabilities in carrying out the experiments of the proposed laboratory.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148902</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Arrangement of Keys in a Hash Table</title>
<link>https://hdl.handle.net/1721.1/148901</link>
<description>Optimal Arrangement of Keys in a Hash Table
Rivest, Ronald L.
When open addressing is used to resolve collisions in a hash table, a given set of keys may be arranged in many ways; typically this depends on the order in which the keys are inserted. We show that arrangements minimizing either the average or worst-case number of probes required to retrieve any key in the table can be found using an algorithm for the assignment problem. The worst-case retrieval time can be reduced to O(log2(M)) with probability 1-E(M), when storing M keys in a table of size M, where E(M) -&gt; 0 as M -&gt; infinity. We also examine insertion algorithms to see how to apply these ideas for a dynamically changing set of keys.
</description>
<pubDate>Thu, 01 Jul 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148901</guid>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protosystem I: An Automatic Programming System Prototype</title>
<link>https://hdl.handle.net/1721.1/148900</link>
<description>Protosystem I: An Automatic Programming System Prototype
Ruth, Gregory R.
A model of the data processing system writing process is given in terms of development stages. These stages correspond to the progression in the implementation and design process from the highest level of abstraction (English system specifications) to the lowest level (machine code). The issues and goals (including optimization of the product data processing systems) involved in automating these stages are discussed, and strategies and methodologies used for doing so are developed. Protosystem I, an automatic programming system prototype, is described. The completed (and working) part automates three of the five stages identified in the proposed model of the system writing process. The basic theory, methods and structure of this part of the automatic programming system are presented.
</description>
<pubDate>Thu, 01 Jul 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148900</guid>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Worst-case Behavior of String-searching Algorithms</title>
<link>https://hdl.handle.net/1721.1/148899</link>
<description>On the Worst-case Behavior of String-searching Algorithms
Rivest, Ronald L.
Any algorithm for finding a pattern of length k in a string of length n must examine at least n-k+1 of the characters of the string in the worst case. By considering the pattern 00…0, we prove that this is the best possible result. Therefore there do not exist pattern matching algorithms whose worst-case behavior is "sublinear" in n (that is, linear with constant less than one), in contrast with the situation for average behavior (the Boyer-Moore algorithm is known to be sublinear on the average).
</description>
<pubDate>Thu, 01 Apr 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148899</guid>
<dc:date>1976-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Design of Data Processing Systems</title>
<link>https://hdl.handle.net/1721.1/148898</link>
<description>Automatic Design of Data Processing Systems
Ruth, Gregory R.
The design of data organization and data accessing procedures for data processing systems operating on large keyed fields of data is a common and recurrent activity in modern data processing applications. A considerable amount of understanding and expertise in this area has been developed, and it is time to begin codifying and automating this process. It should be possible to develop a system where the user has merely to specify the characteristics of his data objects and their interrelations, and the system will automatically determine the data organizations and accessing procedures that are optimal for his application. The optimizer for Protosystem I (an automatic programming system prototype at MIT) provides an example of how such automation can be accomplished.
</description>
<pubDate>Sun, 01 Feb 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148898</guid>
<dc:date>1976-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improved Bounds on the Costs of Optimal and Balanced Binary Search Trees</title>
<link>https://hdl.handle.net/1721.1/148897</link>
<description>Improved Bounds on the Costs of Optimal and Balanced Binary Search Trees
Bayer, Paul J.
A binary search tree can be used to store data in a computer system for retrieval by name. Different elements in the tree may be referenced with different probabilities. If we define the cost of the tree as the average number of elements which must be examined in searching for an element, then different trees have different costs. We show that two particular types of trees, weight balanced trees and min-max trees, which are easily constructed from the probability distribution on the elements, are close to optimal. Specifically, we show that for any probability distribution with entropy H, H - log2(H) - (log2(e) - 1) &lt;= Copt &lt;= Cwb &lt;= H+2 and Cmm &lt;= H+2, where Copt, Cwb, and Cmm are the optimal, weight balanced, and min-max costs. We gain some added insight by deriving an expression for the expected value of the entropy of a random probability distribution.
</description>
<pubDate>Sat, 01 Nov 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148897</guid>
<dc:date>1975-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stream-oriented Computation in Recursive Data Flow Schemas</title>
<link>https://hdl.handle.net/1721.1/148896</link>
<description>Stream-oriented Computation in Recursive Data Flow Schemas
Weng, Kung-Song
In this thesis we present a parallel programming language based on a parallel computation model known as data flow schemas. Syntactically, the language resembles programming languages such as Algol 60, but does not have GOTO's, WHILE-loops, and non-local variables. The attractiveness of this approach lies in the inherently determinate nature of data flow schemas and the possibility of formalizing the semantics of the language within the formalism suggested by Scott and Strachey. The language provides programming features for stream-oriented computation and intercommunicating systems. We introduce the notions of proper initialization and termination of such systems. A subclass of determinate systems in which these properties can be easily checked is defined and a translation into recursive data flow schemas is given.
</description>
<pubDate>Wed, 01 Oct 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148896</guid>
<dc:date>1975-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Complexity of the Word Problem for Commutative Semigroups</title>
<link>https://hdl.handle.net/1721.1/148895</link>
<description>Computational Complexity of the Word Problem for Commutative Semigroups
Cardoza, Edward W.
We analyze the computational complexity of some decision problems for commutative semigroups in terms of time and space on a Turing machine. The main result we present is that any decision procedure for the word problem for commutative semigroups requires storage space at least proportional to n/log n on a multitape Turing machine. This implies that the word problem is polynomial space hard (and in particular that it is at least NP-hard). We comment on the close relation of commutative semigroups to vector addition systems and Petri nets. We also show that the lower bound of space n/log n can be extended to certain other natural algorithmic problems for commutative semigroups. Finally we show that for several other algorithmic problems for commutative semigroups there exist polynomial time algorithms.
</description>
<pubDate>Wed, 01 Oct 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148895</guid>
<dc:date>1975-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formal Properties of Well-formed Data Flow Schemas</title>
<link>https://hdl.handle.net/1721.1/148894</link>
<description>Formal Properties of Well-formed Data Flow Schemas
Leung, Clement Kin Cho
This thesis presents some results in comparative schematology and some undecidability results for two models of computer programs: the class of flowchart schemas and the class of well-formed data flow schemas (wfdfs's). Algorithms are given for translating a schema in each class into an equivalent schema in the other class. The properties of freedom, _-freedom, openness, and completeness are defined and studied. For every path P in a free flowchart schema S, there exists an interpretation under which the flow of control through S is along P. _-freedom is a generalization of freedom and captures the notion of freedom for wfdfs's. An open schema is one in which no basic component is redundant, and a complete schema contains no subschema which, whenever enabled, does not terminate. A comparison of the expressive power of subclasses of flowchart schemas and wfdfs's possessing various combinations of these properties is made. It is shown that the class of free flowchart schemas properly contains the classes of free and _-free wfdfs's, and that the class of open and complete flowchart schemas is equivalent in expressive power to the class of open and complete wfdfs's. Three undecidability results for open and complete program schemas are established: openness is undecidable for complete program schemas, completeness is undecidable for open program schemas, and equivalence is undecidable for open and complete program schemas.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148894</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Complexity of Negation-limited Networks: A Brief Survey</title>
<link>https://hdl.handle.net/1721.1/148893</link>
<description>The Complexity of Negation-limited Networks: A Brief Survey
Fischer, Michael J.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148893</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Isomorph Classes for Combinatorial Structures</title>
<link>https://hdl.handle.net/1721.1/148892</link>
<description>Finding Isomorph Classes for Combinatorial Structures
Weiss, Randell B.
A common problem in combinatorial analysis is finding isomorph classes of combinatorial objects. This process is sometimes known as isomorph rejection. In graph theory, it is used to count labelled and unlabelled graphs with certain properties. In chemistry, it is used to count the number of structures with the same chemical formula. In computer science it is used in counting arguments in proofs in complexity theory. In coding theory, it is used to partition sets of vectors into easy to handle sets. This thesis presents three different algorithms for solving this type of problem and compares their timing and memory use. Some examples are given of how to apply the algorithms to graph theory and coding theory.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148892</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Encryption Schemes for Computer Confidentiality</title>
<link>https://hdl.handle.net/1721.1/148891</link>
<description>Encryption Schemes for Computer Confidentiality
Pless, Vera
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148891</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Asynchronous Logic Array</title>
<link>https://hdl.handle.net/1721.1/148890</link>
<description>An Asynchronous Logic Array
Patil, Suhas S.
A new asynchronous logic array for the general synthesis of asynchronous digital circuits is presented. The parallel and asynchronous nature of the array gives the realized systems the speed and characteristics of hardwired circuits even though they are implemented in a uniform diode array with appropriate terminating circuits. The logic array is particularly suited for implementing control structures and should help extend the field of micro-control to asynchronous and parallel computers.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148890</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>First Version of a Data Flow Procedure Language</title>
<link>https://hdl.handle.net/1721.1/148889</link>
<description>First Version of a Data Flow Procedure Language
Dennis, Jack B.
A language for representing computational procedures based on the concept of data flow is presented in terms of a semantic model that permits concurrent execution of noninterfering program parts. Procedures in the language operate on elementary and structured values, and always define functional transformations of values. The language is equivalent in expressive power to a block structured language with internal procedure variables and is a generalization of pure Lisp. The language is being used as a model for study of fundamental semantic constructs for programming, as a target language for evaluating translatability of programs expressed at the user-language level, and as a guide for research in advanced computer architecture.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148889</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>CAMAC: Group Manipulation System</title>
<link>https://hdl.handle.net/1721.1/148888</link>
<description>CAMAC: Group Manipulation System
Weiss, Randell B.
CAMAC is a collection of group manipulation programs with an easy-to-use interface. With groups defined either by generating permutations or by generators and relations, the system can find coset tables, normalizers, centralizers, stabilizers, orbits, conjugacy classes, and isomorph classes of combinatorial objects, etc.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148888</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decision Problems for Petri Nets and Vector Addition Systems</title>
<link>https://hdl.handle.net/1721.1/148887</link>
<description>Decision Problems for Petri Nets and Vector Addition Systems
Hack, Michael
Petri Nets, Generalized Petri Nets, and Vector Addition Systems can represent each other and thus have common decidability problems. The graphical appeal of Petri Nets is used in a new presentation of the classical problems of boundedness (decidable) and inclusion (undecidable). Various forms of the Reachability Problem are shown to be recursively equivalent to the Liveness Problem for Petri Nets. The decidability of these questions is still open, and some arguments both for and against the decidability of Liveness are presented.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148887</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decidability of Equivalence for a Class of Data Flow Schemas</title>
<link>https://hdl.handle.net/1721.1/148886</link>
<description>Decidability of Equivalence for a Class of Data Flow Schemas
Qualitz, Joseph E.
In this paper we examine a class of computation schemas and consider the problem of deciding when pairs of elements in this class represent equivalent programs. We are able to show that equivalence is decidable for a non-trivial class of unary operator data flow schemas, and consider the applicability of this result to the problem of deciding equivalence in related models of computation. The model described below is a restricted version of the data flow schema described by Dennis and Fosseen in [1]. The reader is referred to that source for a more complete discussion of the properties of data flow schemas.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148886</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Bateson's Logical Levels of Learning Theory</title>
<link>https://hdl.handle.net/1721.1/148885</link>
<description>On Bateson's Logical Levels of Learning Theory
Levin, Michael
</description>
<pubDate>Sat, 01 Feb 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148885</guid>
<dc:date>1975-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Research on Expert Systems</title>
<link>https://hdl.handle.net/1721.1/148884</link>
<description>Research on Expert Systems
Gorry, G. Anthony
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148884</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Class of Boolean Functions with Linear Combinatorial Complexity</title>
<link>https://hdl.handle.net/1721.1/148883</link>
<description>A Class of Boolean Functions with Linear Combinatorial Complexity
Hsieh, W. N.; Harper, L.H.; Savage, J.E.
In this paper we investigate the combinatorial complexity of Boolean functions satisfying a certain property, P^n_{k,m}. A function of n variables has the P^n_{k,m} property if there are at least m functions obtainable from each way of restricting it to a subset of n-k variables. We show that the complexity of a P^n_{3,5} function is no less than (7n-4)/6, and this bound cannot be much improved. Further, we find that for each k, there are P^n_{k,2^k} functions with complexity linear in n.
</description>
<pubDate>Tue, 01 Oct 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148883</guid>
<dc:date>1974-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Inherent Computational Complexity of Theories of Ordered Sets: A Brief Survey</title>
<link>https://hdl.handle.net/1721.1/148882</link>
<description>The Inherent Computational Complexity of Theories of Ordered Sets: A Brief Survey
Meyer, Albert R.
</description>
<pubDate>Tue, 01 Oct 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148882</guid>
<dc:date>1974-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>MDC-Programmer: A Muddle-to-datalanguage Translator for Information Retrieval</title>
<link>https://hdl.handle.net/1721.1/148881</link>
<description>MDC-Programmer: A Muddle-to-datalanguage Translator for Information Retrieval
Bengelloun, Safwan A.
This memo describes a practical application within the framework of the ARPA computer network of the philosophy that a fully developed computer network should appear as a virtual extension of the user's own software environment. The application involves the design and implementation of a software facility that will permit users at MIT's Dynamic Modeling System to consider the retrieval component of the Datacomputer (developed and run by the Computer Corporation of America) as an extension of the Muddle environment. This facility generates efficient Datalanguage retrieval code, handles inter-process control of the Datacomputer, and manages all the necessary network connections.
</description>
<pubDate>Tue, 01 Oct 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148881</guid>
<dc:date>1974-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computing in Logarithmic Space</title>
<link>https://hdl.handle.net/1721.1/148880</link>
<description>Computing in Logarithmic Space
Lind, John C.
The set logspace, of logarithmic space computable string functions, is defined. It is easily seen that logspace ⊆ polytime, the set of polynomial time computable functions. Logspace is shown to equal L, the smallest class of recursive string functions containing concatenation and the equality function, and closed under explicit transformation, substitution of a function for a variable, and two restricted types of recursion on notation. The first is called recursion of concatenation and only allows top-level concatenation of the value of the recursive call. The second, called log-bounded recursion on notation, will only define string functions whose length is bounded by O(log n) on arguments of length n. Some additional closure properties of logspace are also described.
</description>
<pubDate>Sun, 01 Sep 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148880</guid>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Investigation of Current Language Support for the Data Requirements of Structured Programming</title>
<link>https://hdl.handle.net/1721.1/148879</link>
<description>An Investigation of Current Language Support for the Data Requirements of Structured Programming
Aiello, Jack M.
Structured programming is a new method for constructing reliable programs. It relies upon a systematic technique of top-down development which involves the refinement of both control structures and data structures. With possibly some limitations and extensions, existing languages can support control structure refinement. On the other hand, many believe that present-day languages cannot satisfactorily represent data structure refinement. Before accepting this view, it is wise to explore its validity. Therefore this thesis investigates whether existing languages, with possibly slight modifications, are adequate for supporting the data requirements of structured programming.
</description>
<pubDate>Sun, 01 Sep 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148879</guid>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Enciphering Module for Multics</title>
<link>https://hdl.handle.net/1721.1/148878</link>
<description>An Enciphering Module for Multics
Benedict, G. Gordon
Recently IBM Corporation has declassified an algorithm for encryption usable for computer-to-computer or computer-to-terminal communications. Their algorithm was implemented in a hardware device called Lucifer. A software implementation of Lucifer for Multics is described. A proof of the algorithm's reversibility for deciphering is provided. A special hand-coded (assembly language) version of Lucifer is described whose goal is to attain performance as close as possible to that of the hardware device. Performance measurements of this program are given. Questions addressed are: How complex is it to implement in software an algorithm designed primarily for digital hardware? Can such a program perform well enough for use in the I/O system of a large time-sharing system?
</description>
<pubDate>Mon, 01 Jul 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148878</guid>
<dc:date>1974-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complete Classification of (24,12) and (22,11) Self-dual Codes</title>
<link>https://hdl.handle.net/1721.1/148877</link>
<description>Complete Classification of (24,12) and (22,11) Self-dual Codes
Pless, Vera; Sloane, N.J.A.
A complete classification is given of all [22, 11] and [24, 12] self-dual codes. For each code we give the order of its group, the number of codes equivalent to it, and its weight distribution. There is a unique [24, 12, 6] self-dual code. Several theorems on the enumeration of self-orthogonal codes are used, including formulas for the number of such codes with minimum distance ≥ 4, and for the sum of the weight enumerators of all self-dual codes.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148877</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Reduction Method for Establishing Lower Bounds on the Number of Additions</title>
<link>https://hdl.handle.net/1721.1/148876</link>
<description>The Reduction Method for Establishing Lower Bounds on the Number of Additions
Kedem, Zvi M.
A method for establishing lower bounds on the number of multiplications and divisions has been developed by Pan, Winograd and Strassen. A similar method is developed for establishing lower bounds on the number of additions and subtractions. The results obtained partially overlap those of Belaga, Winograd and Kirkpatrick.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148876</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Dimensionality and Rate of Growth Arguments for Establishing Lower Bounds on Number of Multiplications</title>
<link>https://hdl.handle.net/1721.1/148875</link>
<description>Combining Dimensionality and Rate of Growth Arguments for Establishing Lower Bounds on Number of Multiplications
Kedem, Zvi M.
In this paper we describe a new method for establishing lower bounds for the number of multiplications and divisions required to compute rational functions. We shall start by reminding the reader of some standard notations.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148875</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast On-line Integer Multiplication</title>
<link>https://hdl.handle.net/1721.1/148874</link>
<description>Fast On-line Integer Multiplication
Fischer, Michael J.; Stockmeyer, Larry J.
A Turing machine multiplies binary integers on-line if it receives its inputs low-order digits first and produces the jth digit of the product before reading in the (j+1)st digits of the two inputs. We present a general method for converting any off-line multiplication algorithm which forms the product of two n-digit binary numbers in time F(n) into an on-line method which uses time only O(F(n) log n), assuming that F is monotone and satisfies n ≤ F(n) ≤ F(2n)/2 ≤ kF(n) for some constant k.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148874</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetry Codes and Their Invariant Subcodes</title>
<link>https://hdl.handle.net/1721.1/148873</link>
<description>Symmetry Codes and Their Invariant Subcodes
Pless, Vera
We define and study the invariant subcodes of the symmetry codes in order to determine the algebraic properties of these codes. An infinite family of self-orthogonal rate 1/2 codes over GF(3), called symmetry codes, was constructed in [3].
</description>
<pubDate>Fri, 01 Feb 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148873</guid>
<dc:date>1974-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Super-exponential Complexity of Presburger Arithmetic</title>
<link>https://hdl.handle.net/1721.1/148872</link>
<description>Super-exponential Complexity of Presburger Arithmetic
Fischer, Michael J.; Rabin, Michael O.
Lower bounds are established on the computational complexity of the decision problem and on the inherent lengths of proofs for two classical decidable theories of logic: the first order theory of the real numbers under addition, and Presburger arithmetic -- the first order theory of addition on the natural numbers. There is a fixed constant c &gt; 0 such that for every (non-deterministic) decision procedure for determining the truth of sentences of real addition and for all sufficiently large n, there is a sentence of length n for which the decision procedure runs for more than 2^cn steps.
</description>
<pubDate>Fri, 01 Feb 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148872</guid>
<dc:date>1974-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Complexity of the Theories of Weak Direct Products</title>
<link>https://hdl.handle.net/1721.1/148871</link>
<description>On the Complexity of the Theories of Weak Direct Products
Rackoff, Charles
Let N be the set of nonnegative integers and let &lt;N*,+&gt; be the weak direct product of &lt;N,+&gt; with itself. Mostowski [9] shows that the theory of &lt;N*,+&gt; is decidable, but his decision procedure isn't elementary recursive. We present here a more efficient procedure which operates within space 2^2^cn. As corollaries we obtain the same upper bound for the theory of finite abelian groups, the theory of finitely generated abelian groups, and the theory of the structure &lt;N*,·&gt; of positive ...
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148871</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>String-matching and Other Products</title>
<link>https://hdl.handle.net/1721.1/148870</link>
<description>String-matching and Other Products
Fischer, Michael J.; Paterson, Michael S.
The string-matching problem considered here is to find all occurrences of a given pattern as a substring of another longer string. When the pattern is simply a given string of symbols, there is an algorithm due to Morris, Knuth and Pratt which has a  running time proportional to the total  length of the pattern and long string together. This time may be achieved even on a Turing machine. The more difficult  case where either string may have "don't care" symbols which are deemed to match with all symbols is also considered. By exploiting the formal similarity of string-matching with integer multiplication, a new algorithm has been obtained with a running time which is only slightly worse than linear.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148870</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Improved Overlap Argument for On-line Multiplication</title>
<link>https://hdl.handle.net/1721.1/148869</link>
<description>An Improved Overlap Argument for On-line Multiplication
Paterson, Michael S.; Fischer, Michael J.; Meyer, Albert R.
A lower bound of cN log N is proved for the mean time complexity of an on-line multitape Turing machine performing the multiplication of N-digit binary integers. For a more general class of machines the corresponding bound is cN log N/log log N. These bounds compare favorably with known upper bounds of the form cN(log N)^k, and for some classes the upper and lower bounds coincide. The proofs are based on the "overlap" argument due to Cook and Aanderaa.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148869</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discrete Computation: Theory and Open Problems</title>
<link>https://hdl.handle.net/1721.1/148868</link>
<description>Discrete Computation: Theory and Open Problems
Meyer, Albert R.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148868</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Weak Monadic Second Order Theory of Successor is not Elementary-recursive</title>
<link>https://hdl.handle.net/1721.1/148867</link>
<description>Weak Monadic Second Order Theory of Successor is not Elementary-recursive
Meyer, Albert R.
Let L(S1S) be the set of formulas expressible in a weak monadic second order logic using only the predicates [x = y+1] and [x ∈ z]. Büchi and Elgot [3,4] have shown that the truth of sentences in L(S1S) (under the standard interpretation &lt;N, successor&gt;, with second order variables interpreted as ranging over finite sets) is decidable. We refer to the true sentences in L(S1S) as WS1S. We shall prove that WS1S is not elementary-recursive in the sense of Kalmar. In fact, we claim a stronger result:
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148867</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-time Simulation of Multidimensional Turing Machines by Storage Modification Machines</title>
<link>https://hdl.handle.net/1721.1/148866</link>
<description>Real-time Simulation of Multidimensional Turing Machines by Storage Modification Machines
Schönhage, A.
In [1] the author introduced a new machine model, now called the Storage Modification Machine (SMM). It was claimed, but not proved, that SMM's can simulate all sorts of Turing machines-- those with multidimensional worktapes in particular -- in real time.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148866</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A User's Guide to the Macro Control Language</title>
<link>https://hdl.handle.net/1721.1/148865</link>
<description>A User's Guide to the Macro Control Language
Geiger, Steven P.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148865</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Interactive Implementation of the Todd-Coxeter Algorithm</title>
<link>https://hdl.handle.net/1721.1/148864</link>
<description>An Interactive Implementation of the Todd-Coxeter Algorithm
Bonneau, Richard  J.
The Todd-Coxeter algorithm provides a systematic approach to the enumeration of cosets of a finitely presented group. This memo describes an interactive implementation of the algorithm, including a manual on its use, examples, and methods of accessing the program. Applications of this algorithm are also discussed.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148864</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polynomial Exponentiation: The Fast Fourier Transform Revisited</title>
<link>https://hdl.handle.net/1721.1/148863</link>
<description>Polynomial Exponentiation: The Fast Fourier Transform Revisited
Bonneau, Richard J.
</description>
<pubDate>Fri, 01 Jun 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148863</guid>
<dc:date>1973-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Decision Procedure for the First Order Theory of Real Addition with Order</title>
<link>https://hdl.handle.net/1721.1/148862</link>
<description>A Decision Procedure for the First Order Theory of Real Addition with Order
Ferrante, Jeanne; Rackoff, Charles
Consider the first order theory of the real numbers with the predicates + (plus) and &lt; (less than). Let S be the set of true sentences. We first present an elimination of quantifiers decision procedure for S, and then analyse it to show that it takes at most time 2^2^cn, c a constant, to decide sentences of length n. Looking more closely at this procedure, we arrive at a second procedure by showing that a given sentence doesn't change in truth value when each of the quantifiers is limited to range over an appropriately chosen finite set of rationals. This fact leads to a decision procedure for S which takes space 2^cn. We also remark that our methods lead to a decision procedure for Presburger arithmetic which operates in space 2^2^cn. These upper bounds should be compared with the results of Fischer and Rabin (Proceedings of AMS Symp. on Complexity of Real Computation Processes, to appear) that for some constant c, time 2^cn for real addition, and time 2^2^cn for Presburger arithmetic, is required to decide some sentences of length n for infinitely many n.
</description>
<pubDate>Tue, 01 May 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148862</guid>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Operator Embedding Theorem for Complexity Classes of Recursive Functions</title>
<link>https://hdl.handle.net/1721.1/148861</link>
<description>An Operator Embedding Theorem for Complexity Classes of Recursive Functions
Moll, Robert
Let F(t) be the set of functions computable by some machine using no more than t(x) machine steps on all but finitely many arguments x. If we order the F(t) classes under set inclusion as t varies over the recursive functions, then it is natural to ask how rich a structure is obtained. We show that this structure is very rich indeed. If R is any countable partial order and F is any total effective operator, then we show that there is a recursively enumerable sequence of...
</description>
<pubDate>Tue, 01 May 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148861</guid>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Class of Finite Computations Structures Supporting the Fast Fourier Transform</title>
<link>https://hdl.handle.net/1721.1/148860</link>
<description>A Class of Finite Computations Structures Supporting the Fast Fourier Transform
Bonneau, Richard J.
The Fast Fourier Transform (FFT) and modular arithmetic are two distinct techniques which recently have been employed to increase the efficiency of numerous algorithms in the area of symbolic and algebraic manipulation.
</description>
<pubDate>Thu, 01 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148860</guid>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>SIM360: A S/360 Simulator</title>
<link>https://hdl.handle.net/1721.1/148859</link>
<description>SIM360: A S/360 Simulator
McCray, Wm. Arthur
Modern, large-scale computer systems typically operate under the control of an operating system or executive program, and reserve for the exclusive use of the operating system a set of privileged instructions, which the normal users may not issue. This very necessary arrangement produces a problem of equipment availability for those who wish to develop or investigate operating systems programs, because such programs cannot be run as normal user jobs under an executive program. This thesis describes SIM360, a detailed simulator of the representative IBM S/360 computer, written to run student programs assigned as machine problems for a course in operating systems. The simulator allows programs to issue all of the privileged instructions of the S/360, and thus provides a readily available tool for the study of operating systems programs.
</description>
<pubDate>Mon, 01 May 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148859</guid>
<dc:date>1972-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Emptiness Problem for Automata on Infinite Trees</title>
<link>https://hdl.handle.net/1721.1/148858</link>
<description>The Emptiness Problem for Automata on Infinite Trees
Hossley, Robert; Rackoff, Charles
The purpose of this paper is to give an alternative proof of the decidability of the emptiness problem for tree automata, as shown in Rabin [4]. The proof reduces the emptiness problem for automata on infinite trees to that for automata on finite trees, by showing that any automata-definable set of infinite trees must contain a finitely-generable tree.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148858</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Construction Heuristics for Geometry and a Vector Algebra Representation of Geometry</title>
<link>https://hdl.handle.net/1721.1/148857</link>
<description>Construction Heuristics for Geometry and a Vector Algebra Representation of Geometry
Wong, Richard
Heuristics for generating constructions to help solve high school geometry problems are given. Many examples of the use of these heuristics are given. A method of translating geometry problems into vector algebra problems is discussed. The solution of these vector algebra geometry problems is analyzed. The use of algebraic constructions to help solve these vector problems is also discussed.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148857</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Economy of Descriptions and Minimal Indices</title>
<link>https://hdl.handle.net/1721.1/148856</link>
<description>Economy of Descriptions and Minimal Indices
Bagchi, Amitava
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148856</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Helping People Think</title>
<link>https://hdl.handle.net/1721.1/148855</link>
<description>Helping People Think
Goldstein, Robert C.
Everyone, today, is familiar with the use of machines to ease physical burdens. Since the dawn of civilization, man's progress in gaining control over his environment has been largely determined by the power and sophistication of the machines that he has been able to command. Furthermore, since simple machines can be used to construct more complicated ones, this process, once begun, tends to advance at an accelerating rate.
</description>
<pubDate>Thu, 01 Apr 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148855</guid>
<dc:date>1971-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Macaims Data Management System</title>
<link>https://hdl.handle.net/1721.1/148854</link>
<description>The Macaims Data Management System
Goldstein, Robert C.; Strnad, Alois J.
MacAIMS (MAC Advanced Interactive Management System) is a relatively small research project that was initiated in the summer of 1968 to investigate the feasibility of using some of the then existing computer facilities at M.I.T. to aid in the management of Project MAC. Several interesting and useful interactive programs were developed and are currently in use.
</description>
<pubDate>Thu, 01 Apr 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148854</guid>
<dc:date>1971-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Relational Approach to the Management of Data Bases</title>
<link>https://hdl.handle.net/1721.1/148853</link>
<description>The Relational Approach to the Management of Data Bases
Strnad, Alois J.
The ultimate goal of Project MacAIMS (MAC Advanced Interactive Management System) is to build a computer facility which will be able to support non-trivial decision making processes. (See reference 4). In the early stages of our experiments we discovered that traditional approaches to the management of data bases do not satisfy our needs. We have determined the following requirements for the management of Large Data Bases (LDB) in a dynamically varying  environment such as an interactive Management  Information System.
</description>
<pubDate>Thu, 01 Apr 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148853</guid>
<dc:date>1971-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transmission of Information Between a Man-machine Decision System and its Environment</title>
<link>https://hdl.handle.net/1721.1/148852</link>
<description>Transmission of Information Between a Man-machine Decision System and its Environment
Wells, Douglas M.
</description>
<pubDate>Thu, 01 Apr 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148852</guid>
<dc:date>1971-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Substantive Use of Computers for Intellectual Activities</title>
<link>https://hdl.handle.net/1721.1/148851</link>
<description>The Substantive Use of Computers for Intellectual Activities
Goldstein, Robert C.
</description>
<pubDate>Thu, 01 Apr 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148851</guid>
<dc:date>1971-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computer Model of Simple Forms of Learning</title>
<link>https://hdl.handle.net/1721.1/148850</link>
<description>A Computer Model of Simple Forms of Learning
Jones, Thomas L.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148850</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A New List-tracing Algorithm</title>
<link>https://hdl.handle.net/1721.1/148849</link>
<description>A New List-tracing Algorithm
Fenichel, Robert R.
List-processing systems have each allowed use of only a  single size and configuration of list cell. This paper describes a system which allows use of arbitrarily many different sizes and configurations of list cell, possibly not specified until run time.
</description>
<pubDate>Thu, 01 Oct 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148849</guid>
<dc:date>1970-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Code-generation from an Object-machine Description</title>
<link>https://hdl.handle.net/1721.1/148848</link>
<description>Automatic Code-generation from an Object-machine Description
Miller, Perry L.
This memo outlines the basic elements of a macro code-generating system, and develops an informal machine-independent model of a code generator. Then the memo discusses how an implementation of this model could be set up to generate code for a particular machine from machine-dependent information given in descriptive form.
</description>
<pubDate>Thu, 01 Oct 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148848</guid>
<dc:date>1970-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity Measures for Programming Languages</title>
<link>https://hdl.handle.net/1721.1/148847</link>
<description>Complexity Measures for Programming Languages
Goodman, Leonard I.
A theory of complexity is developed for algorithms implemented in typical programming languages. A means of measuring a specific type of complexity is a complexity measure -- some function of the amount of a particular resource used by a program in processing an input. Typical resources would be execution time, core, I/O devices, and channels.
</description>
<pubDate>Wed, 01 Sep 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148847</guid>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pseudo-random Sequences</title>
<link>https://hdl.handle.net/1721.1/148846</link>
<description>Pseudo-random Sequences
Bruere-Dawson, Gerard
The purpose of this paper is to study some notions of randomness for infinite sequences of 0's and 1's.
</description>
<pubDate>Thu, 01 Oct 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148846</guid>
<dc:date>1970-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Expansion of the Data Structuring Capabilities of PAL</title>
<link>https://hdl.handle.net/1721.1/148845</link>
<description>An Expansion of the Data Structuring Capabilities of PAL
Zilles, Stephen N.
</description>
<pubDate>Thu, 01 Oct 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148845</guid>
<dc:date>1970-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Suspension of Processes in a Multiprocessing Computer System</title>
<link>https://hdl.handle.net/1721.1/148844</link>
<description>Suspension of Processes in a Multiprocessing Computer System
Vogt, Carla M.
</description>
<pubDate>Tue, 01 Sep 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148844</guid>
<dc:date>1970-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Use of High Level Languages for Systems Programming</title>
<link>https://hdl.handle.net/1721.1/148843</link>
<description>Use of High Level Languages for Systems Programming
Graham, Robert M.
(This paper is a slightly edited version of a transcript, so it still retains the colloquial flavor of the oral presentation.)  I'm going to talk about languages for systems programming: what they can do for us, and what we might expect from them in the future. These comments are largely based on my experience with the Multics System, and I'll quote a few figures from Multics as we go along. I'm concerned particularly with large, complex systems.
</description>
<pubDate>Tue, 01 Sep 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148843</guid>
<dc:date>1970-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>File Management and Related Topics, June 12, 1970</title>
<link>https://hdl.handle.net/1721.1/148842</link>
<description>File Management and Related Topics, June 12, 1970
Graham, Robert M.
The subject of these notes is file management. We will develop the problems of file management within the environment of a large information and computing service, often called a computer utility or general-purpose time-sharing system. We do this for two reasons. First, this environment imposes the most severe constraints; other environments are obtained by relaxing these constraints. Second, large information and computing services will become more prevalent in the years to come.  Let us first look briefly at those objectives of an information and computing service which are significant to this discussion.
</description>
<pubDate>Tue, 01 Sep 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148842</guid>
<dc:date>1970-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Description and Flow Chart of the PDP-7/9 Communications Package</title>
<link>https://hdl.handle.net/1721.1/148841</link>
<description>Description and Flow Chart of the PDP-7/9 Communications Package
Ward, Philip W.
The PDP-7/9 Communications Package was written to provide data transfers between the buffer controller (PDP-7 or PDP-9) of an ESL Display Console and a host computer via a 50-kilobit serial Dataphone link. Initially, only one of the displays (with a PDP-9 buffer controller) was to be operated remotely over a 50-kilobit line, and the only feasible access to the 7094 CTSS host computer was via the PDP-7 buffer controller of the other display, which is directly connected to CTSS channel D. For this connection, the PDP-7 could be looked upon as the "host" for the PDP-9, although it merely served as a message-handling intermediary for the real host, the 7094.
</description>
<pubDate>Wed, 01 Jul 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148841</guid>
<dc:date>1970-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Design Coordination for the Building Industry</title>
<link>https://hdl.handle.net/1721.1/148840</link>
<description>Interactive Design Coordination for the Building Industry
Jackson, James N.
The problem of effective communication in the process of building design and construction is widely recognized. The involvement of several design disciplines, combined with the tendency for designers to work in distinct offices, leaves them little capacity to investigate the influence of their design decisions on other design areas.  One response to the need for effective interaction in the use of computers for design projects is the supersystem concept proposed for ICES, the Integrated Civil Engineering System. The supersystem is defined as the cooperative effort on the part of the designers of several problem-oriented computer capabilities to implement project capabilities by allowing each of their problem-oriented subsystems to reference a single file of project data. The supersystem would allow design interaction by having each of the problem-oriented computer subsystems reference a single file of information specifying the project.  Future work in the application of computers to interactive and project-oriented design in the building industry will have to concentrate on the file structure to be used in the implementation of a computer building design supersystem.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148840</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Exposure Notification for COVID-19</title>
<link>https://hdl.handle.net/1721.1/148149</link>
<description>Automated Exposure Notification for COVID-19
Rivest, Ronald; Schiefelbein, M. Curran; Zissman, Marc A.; Bay, Jason; Bugnion, Edouard; Finnerty, Jill; Liccardi, Ilaria; Nelson, Brad; Norige, Adam S.; Shen, Emily H.; Wanger, Jenny; Yahalom, Raphael; Alekseyev, Jesslyn D.; Brubaker, Chad; Ferretti, Luca; Ishikawa, Charlie; Raykova, Mariana; Schlaman, Brendan; Schwartz, Robert X.; Sudduth, Emma; Tessaro, Stefano
Private Automated Contact Tracing (PACT) was a collaborative effort formed at the beginning of the Coronavirus Disease 2019 (COVID-19) pandemic. PACT’s mission was to enhance contact tracing in pandemic response by designing exposure-detection functions in personal digital communication devices that have maximal public health utility while preserving privacy. PACT had four major lines of effort: proximity detection efficacy, privacy, public health integration, and public health efficacy. In support of these lines of effort, PACT executed several cross-layer activities that helped demonstrate public health efficacy. These included prototype development and demonstrations; system analysis; data collection and experimentation; and large-scale deployment support. PACT convened two scientific workshops relating to privacy-preserving automated exposure notification (AEN): one virtual workshop in April 2020 and a second, hybrid workshop in October 2021. This report is an outcome of the second workshop and serves as PACT’s final report. It seeks to explain and discuss the use of automated exposure notification during the COVID-19 pandemic and to provide some recommendations for those who may try to design and deploy similar technologies in future pandemics.
The authors were among the 70+ in-person and virtual participants in the October 2021 ImPACT 2021 workshop. This final report has been heavily influenced by the discussion at that workshop.
</description>
<pubDate>Wed, 22 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/148149</guid>
<dc:date>2023-02-22T00:00:00Z</dc:date>
</item>
<item>
<title>Neurosymbolic Programming for Science</title>
<link>https://hdl.handle.net/1721.1/145783</link>
<description>Neurosymbolic Programming for Science
Sun, Jennifer J; Tjandrasuwita, Megan; Sehgal, Atharva; Solar-Lezama, Armando; Chaudhuri, Swarat; Yue, Yisong; Costilla Reyes, Omar
Neurosymbolic Programming (NP) techniques have the potential to accelerate scientific discovery across fields. These models combine neural and symbolic components to learn complex patterns and representations from data, using high-level concepts or known constraints. As a result, NP techniques can interface with symbolic domain knowledge from scientists, such as prior knowledge and experimental context, to produce interpretable outputs. Here, we identify opportunities and challenges between current NP models and scientific workflows, with real-world examples from behavior analysis in science. We define concrete next steps to move the NP for science field forward, to enable its use broadly for workflows across the natural and social sciences.
</description>
<pubDate>Wed, 12 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145783</guid>
<dc:date>2022-10-12T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-modal and Inertial sensor Solutions for Navigation-type Factor Graphs</title>
<link>https://hdl.handle.net/1721.1/145253</link>
<description>Multi-modal and Inertial sensor Solutions for Navigation-type Factor Graphs
Fourie, Dehann
This thesis presents a sum-product inference algorithm for platform navigation called Multi-modal iSAM (incremental smoothing and mapping). Common Gaussian-only likelihoods are restrictive and require complex front-end processes to deal with non-Gaussian measurements. Instead, our approach allows the front-end to defer ambiguities with non-Gaussian measurement models. We retain the acyclic Bayes tree (and incremental update strategy) from the predecessor iSAM2 max-product algorithm [Kaess et al., IJRR 2012]. The approach propagates continuous beliefs on the Bayes (Junction) tree, which is an efficient symbolic refactorization&#13;
of the nonparametric factor graph, and asymptotically approximates the underlying Chapman-Kolmogorov equations. Our method tracks dominant modes in the marginal posteriors of all variables with minimal approximation error, while suppressing almost all low likelihood modes (in a non-permanent manner). Keeping with existing inertial navigation, we present a novel, continuous-time, retroactively calibrating inertial odometry residual function, using preintegration to seamlessly incorporate pure inertial sensor measurements into a factor graph. We centralize around a factor graph (with starved graph databases) to separate elements of the navigation into an ecosystem of processes. Practical examples are included, such as how to infer multi-modal marginal posterior belief estimates for ambiguous loop closures; raw beam-formed acoustic measurements; or conventional parametric likelihoods, and others.
</description>
<pubDate>Thu, 31 Aug 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145253</guid>
<dc:date>2017-08-31T00:00:00Z</dc:date>
</item>
<item>
<title>Universal Motion Generator: Trajectory Autocompletion by Motion Prompts</title>
<link>https://hdl.handle.net/1721.1/143430</link>
<description>Universal Motion Generator: Trajectory Autocompletion by Motion Prompts
Wang, Yanwei; Shah, Julie
Foundation models, which are large neural networks trained on massive datasets, have shown&#13;
impressive generalization in both the language and the vision domain. While fine-tuning foundation&#13;
models for new tasks at test-time is impractical due to billions of parameters in those models, prompts&#13;
have been employed to re-purpose models for test-time tasks on the fly. In this report, we ideate the equivalent foundation model for motion generation and the corresponding formats of prompt that can condition such a model. The central goal is to learn a behavior prior for motion generation that can be re-used in a novel scene.
</description>
<pubDate>Wed, 15 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143430</guid>
<dc:date>2022-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Active Loop Detection for Applications that Access Databases</title>
<link>https://hdl.handle.net/1721.1/138144</link>
<description>Active Loop Detection for Applications that Access Databases
Shen, Jiasi; Rinard, Martin
We present Shear, a new system that observes and manipulates the interaction between an application and its surrounding environment to learn a model of the behavior of the application. Shear implements active loop detection to infer the loop structures in the application. This technique repeatedly presents the application with the same input, altering the program's interaction with the environment at precisely chosen execution points to elicit different program behaviors depending on the loop structure in the application. The ability to alter interactions between the application and the environment enables Shear to infer a broader range of loop structures otherwise undetectable given only the ability to observe application behavior. Active loop detection therefore enables Shear to infer a broader range of loop structures than previous approaches.
</description>
<pubDate>Mon, 15 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138144</guid>
<dc:date>2021-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Active Loop Detection for Applications that Access Databases</title>
<link>https://hdl.handle.net/1721.1/131244</link>
<description>Active Loop Detection for Applications that Access Databases
Shen, Jiasi; Rinard, Martin
We present Shear, a new system that observes and manipulates the interaction between an application and its surrounding environment to learn a model of the behavior of the application. Shear implements active loop detection to infer the looping structure in the application. This technique repeatedly presents the application with the same input, altering the program's interaction with the environment at precisely chosen execution points to elicit different program behaviors depending on the loop structure in the application. The ability to alter interactions between the application and the environment enables Shear to infer a broader range of looping structures otherwise undetectable given only the ability to observe application behavior. Active loop detection therefore enables Shear to infer a broader range of looping structures than previous approaches.
</description>
<pubDate>Thu, 09 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/131244</guid>
<dc:date>2021-09-09T00:00:00Z</dc:date>
</item>
<item>
<title>Bucket Elimination Algorithm for Dynamic Controllability Checking of Simple Temporal Networks with Uncertainty</title>
<link>https://hdl.handle.net/1721.1/130057</link>
<description>Bucket Elimination Algorithm for Dynamic Controllability Checking of Simple Temporal Networks with Uncertainty
Zhang, Yuening
Simple Temporal Networks with Uncertainty (STNU) can represent temporal problems where the duration between events may be uncontrollable, e.g. when the event is caused by nature. An STNU is dynamically controllable (DC) if it can be successfully scheduled online. In this paper, we introduce a novel usage of bucket elimination algorithms for DC checking that matches the state of the art in achieving O(n^3) performance. Bucket elimination algorithms exist for STNs (path consistency and Fourier algorithms), but adapting them to STNUs is non-trivial. As a result, consistency checking becomes a special case of our algorithm. Because bucket elimination algorithms are widely familiar, the final algorithm is easier to understand and implement. Additionally, conflict extraction is also easily supported in this framework.
</description>
<pubDate>Tue, 02 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130057</guid>
<dc:date>2021-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>Lower Bounds on the Column Sparsity of Compressed Sensing Matrices</title>
<link>https://hdl.handle.net/1721.1/130056</link>
<description>Lower Bounds on the Column Sparsity of Compressed Sensing Matrices
Nachin, Mergen
</description>
<pubDate>Tue, 02 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130056</guid>
<dc:date>2021-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>Comprehensive Java Metadata Tracking for Attack Detection and Repair</title>
<link>https://hdl.handle.net/1721.1/122969</link>
<description>Comprehensive Java Metadata Tracking for Attack Detection and Repair
Perkins, Jeff; Eikenberry, Jordan; Coglio, Alessandro; Rinard, Martin
We present ClearTrack, a system that tracks 32 bits of metadata for each primitive value in Java programs to detect and nullify a range of vulnerabilities such as integer overflow and underflow vulnerabilities, SQL injection vulnerabilities, and command injection vulnerabilities. Contributions include new techniques for eliminating false positives associated with benign integer overflows and underflows, new metadata-aware techniques for detecting and nullifying SQL and command injection attacks, and results from an evaluation of ClearTrack performed by a Test and Evaluation team hired by the sponsor of this research (an anonymous agency of the United States government). These results show that 1) ClearTrack operates successfully on Java programs comprising hundreds of thousands of lines of code (including instrumented jar files and Java system libraries; the majority of the applications comprise over 3 million lines of code), 2) because of computations such as cryptography and hash table calculations, these applications perform millions of benign integer overflows and underflows, and 3) ClearTrack successfully detects and nullifies all tested integer overflow and underflow, SQL injection, and command injection vulnerabilities in the benchmark applications.
</description>
<pubDate>Tue, 19 Nov 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122969</guid>
<dc:date>2019-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Precise and Comprehensive Provenance Tracking for Android Devices</title>
<link>https://hdl.handle.net/1721.1/122968</link>
<description>Precise and Comprehensive Provenance Tracking for Android Devices
Gordon, Michael; Eikenberry, Jordan; Eden, Anthony; Perkins, Jeff; Rinard, Martin
Detailed information about the paths that data take through a system is invaluable for understanding sources and behaviors of complex exfiltration malware. We present a new system, ClearScope, that tracks, at the level of individual bytes, the complete paths that data follow through Android systems. These paths include the original source where data entered the device (such as sensors or network connections), files in which the data was temporarily stored, applications that the data traversed during its time in the device, and sinks through which the data left the device.&#13;
&#13;
The ClearScope system design enables this unprecedented level of provenance tracking detail by 1) structuring the provenance representation as references, via provenance tags, to provenance events that record the movement of data between system components and into or out of the device and 2) adopting a split design in which provenance events are streamed to a remote server for storage, with only the minimal information required to generate the tagged stream of events retained on the device. ClearScope also includes compiler optimizations that enable efficient provenance tracking within applications by eliminating unnecessary provenance tracking computations and adopting an efficient aggregate provenance representation for arrays when all array elements have the same provenance.&#13;
&#13;
Experience using ClearScope to analyze the notorious Adups FOTA malware highlights the significant benefits that this level of comprehensive detail can bring. Performance experiments with the Caffeine Mark benchmarks show that the overall ClearScope provenance tracking overhead on this benchmark suite is 14%.
</description>
<pubDate>Tue, 19 Nov 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122968</guid>
<dc:date>2019-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Faster Dynamic Controllability Checking in Temporal Networks with Integer Bounds</title>
<link>https://hdl.handle.net/1721.1/121993</link>
<description>Faster Dynamic Controllability Checking in Temporal Networks with Integer Bounds
Bhargava, Nikhil; Williams, Brian C.
Simple Temporal Networks with Uncertainty (STNUs) provide a useful formalism with which to reason about events and the temporal constraints that apply to them. STNUs are notable in particular because they facilitate reasoning over stochastic, or uncontrollable, actions and their corresponding durations. To evaluate the feasibility of a set of constraints associated with an STNU, one checks the network's dynamic controllability, which determines whether an adaptive schedule can be constructed on-the-fly. Our work provides a dynamic controllability checker that is able to quickly refute the controllability of an STNU with integer bounds, such as those found in planning problems. Our work is faster than the existing best runtime for networks with integer bounds and executes in O(min(mn, m√n log N) + km + k^2n + kn log n) time. Our approach pre-processes the STNU using an existing O(n^3) dynamic controllability checking algorithm and provides tighter bounds on its runtime. This makes our work easily adaptable to other algorithms that rely on checking variants of dynamic controllability.
</description>
<pubDate>Thu, 01 Aug 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121993</guid>
<dc:date>2019-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Exploitation of Fully Randomized Executables</title>
<link>https://hdl.handle.net/1721.1/121246</link>
<description>Automatic Exploitation of Fully Randomized Executables
Gadient, Austin; Ortiz, Baltazar; Barrato, Ricardo; Davis, Eli; Perkins, Jeff; Rinard, Martin
We present Marten, a new end-to-end system for automatically discovering, exploiting, and combining information leakage and buffer overflow vulnerabilities to derandomize and exploit remote, fully randomized processes. Results from two case studies highlight Marten’s ability to generate short, robust ROP chain exploits that bypass address space layout randomization and other modern defenses to download and execute injected code selected by an attacker.
We present an automated system, Marten, that automatically generates control flow hijacking exploits against fully randomized executables by combining information leakage and buffer overflow exploits.
</description>
<pubDate>Tue, 11 Jun 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121246</guid>
<dc:date>2019-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>Gen: A General-Purpose Probabilistic Programming System with Programmable Inference</title>
<link>https://hdl.handle.net/1721.1/119255</link>
<description>Gen: A General-Purpose Probabilistic Programming System with Programmable Inference
Cusumano-Towner, Marco F.; Saad, Feras A.; Lew, Alexander; Mansinghka, Vikash K.
Probabilistic modeling and inference are central to many fields. A key challenge for wider adoption of probabilistic programming languages is designing systems that are both flexible and performant. This paper introduces Gen, a new probabilistic programming system with novel language constructs for modeling and for end-user customization and optimization of inference. Gen makes it practical to write probabilistic programs that solve problems from multiple fields. Gen programs can combine generative models written in Julia, neural networks written in TensorFlow, and custom inference algorithms based on an extensible library of Monte Carlo and numerical optimization techniques. This paper also presents techniques that enable Gen’s combination of flexibility and performance: (i) the generative function interface, an abstraction for encapsulating probabilistic and/or differentiable computations; (ii) domain-specific languages with custom compilers that strike different flexibility/performance tradeoffs; (iii) combinators that encode common patterns of conditional independence and repeated computation, enabling speedups from caching; and (iv) a standard inference library that supports custom proposal distributions also written as programs in Gen. This paper shows that Gen outperforms state-of-the-art probabilistic programming systems, sometimes by multiple orders of magnitude, on problems such as nonlinear state-space modeling, structure learning for real-world time series data, robust regression, and 3D body pose estimation from depth images.
</description>
<pubDate>Mon, 26 Nov 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/119255</guid>
<dc:date>2018-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Understanding Generalization via Analytical Learning Theory</title>
<link>https://hdl.handle.net/1721.1/118307</link>
<description>Towards Understanding Generalization via Analytical Learning Theory
Kawaguchi, Kenji; Bengio, Yoshua; Verma, Vikas; Kaelbling, Leslie Pack
This paper introduces a novel measure-theoretic theory for machine learning&#13;
that does not require statistical assumptions. Based on this theory, a new&#13;
regularization method in deep learning is derived and shown to outperform&#13;
previous methods in CIFAR-10, CIFAR-100, and SVHN. Moreover, the proposed&#13;
theory provides a theoretical basis for a family of practically successful&#13;
regularization methods in deep learning. We discuss several consequences of&#13;
our results on one-shot learning, representation learning, deep learning,&#13;
and curriculum learning. Unlike statistical learning theory, the proposed&#13;
learning theory analyzes each problem instance individually via measure&#13;
theory, rather than a set of problem instances via statistics. As a result,&#13;
it provides different types of results and insights when compared to&#13;
statistical learning theory.
</description>
<pubDate>Mon, 01 Oct 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/118307</guid>
<dc:date>2018-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Dynamic Monitoring to Synthesize Models of Applications That  Access Databases</title>
<link>https://hdl.handle.net/1721.1/118184</link>
<description>Using Dynamic Monitoring to Synthesize Models of Applications That  Access Databases
Shen, Jiasi; Rinard, Martin
We previously developed Konure, a tool that uses active learning to &#13;
infer the functionality of database applications. An alternative &#13;
approach is to observe the inputs, outputs, and database traffic from a &#13;
running system in normal use and then synthesize a model of the &#13;
application from this information.  To evaluate these two approaches, we &#13;
present Etch, which uses information from typical usage scenarios to &#13;
synthesize a model of the functionality of database applications whose &#13;
computation can be expressed in the Konure DSL.
</description>
<pubDate>Thu, 27 Sep 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/118184</guid>
<dc:date>2018-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Using Active Learning to Synthesize Models of Applications That  Access Databases</title>
<link>https://hdl.handle.net/1721.1/117593</link>
<description>Using Active Learning to Synthesize Models of Applications That  Access Databases
Shen, Jiasi; Rinard, Martin
We present a new technique that uses active learning to infer models of &#13;
applications that manipulate relational databases. This technique &#13;
comprises a domain-specific language for modeling applications that &#13;
access databases (each model is a program in this language) and an &#13;
associated inference algorithm that infers models of applications whose &#13;
behavior can be expressed in this language. The inference algorithm &#13;
generates test inputs and database configurations, runs the application, &#13;
then observes the resulting database traffic and outputs to &#13;
progressively refine its current model hypothesis.  The end result is a &#13;
model that completely captures the behavior of the application.  Because &#13;
the technique works only with the externally observable inputs, outputs, &#13;
and databases, it can infer the behavior of applications written in &#13;
arbitrary languages using arbitrary coding styles (as long as the &#13;
behavior of the application is expressible in the domain-specific language).&#13;
&#13;
We also present a technique for automatically regenerating an &#13;
implementation from the inferred model. The regenerator can produce a &#13;
translated implementation in a different language and systematically &#13;
include relevant security and error checks.
</description>
<pubDate>Tue, 28 Aug 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/117593</guid>
<dc:date>2018-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>Data and Code for "A New Approach to Animacy Detection"</title>
<link>https://hdl.handle.net/1721.1/116172</link>
<description>Data and Code for "A New Approach to Animacy Detection"
Jahan, Labiba; Chauhan, Geeticka; Finlayson, Mark A.
This archive contains the code and data for the workshop article "A New Approach to Animacy Detection," published in 2018 at the 27th International Conference on Computational Linguistics (COLING 2018), in Santa Fe, NM. The root of the archive contains a readme file which explains the archive contents. Furthermore, the archive can be imported directly into the Eclipse IDE as a project encapsulating the executable code and data required to reproduce the results of the paper; the code compiles with Java 1.8. The archive also contains a copy of the near-final version of the paper for reference.
</description>
<pubDate>Thu, 07 Jun 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/116172</guid>
<dc:date>2018-06-07T00:00:00Z</dc:date>
</item>
<item>
<title>Best-first Enumeration Based on Bounding Conflicts, and its Application to Large-scale Hybrid Estimation</title>
<link>https://hdl.handle.net/1721.1/115882</link>
<description>Best-first Enumeration Based on Bounding Conflicts, and its Application to Large-scale Hybrid Estimation
Timmons, Eric; Williams, Brian C.
With the rise of autonomous systems, there is a need for them to have high levels of robustness and safety. This robustness can be achieved through systems that are self-repairing. Underlying this is the ability to diagnose subtle failures. Likewise, online planners can generate novel responses to exceptional situations. These planners require an accurate estimate of state. Estimation methods based on hybrid discrete/continuous state models have emerged as a method of computing precise state estimates, which can be employed for either diagnosis or planning in hybrid domains. However, existing methods have difficulty scaling to systems with more than a handful of components. Discrete state estimation capabilities can scale to this level by combining best-first enumeration and conflict-directed search. Best-first methods have been developed for hybrid estimation, but the creation of conflict-directed methods has previously been elusive. While conflicts are used to learn from constraint violation, probabilistic hybrid estimation is relatively unconstrained. In this paper we present an approach to hybrid estimation that unifies best-first enumeration and conflict-directed search through the concept of "bounding" conflicts, an extension of conflicts that represent tighter bounds on the cost of regions of the search space. This paper presents a general best-first search and enumeration algorithm based on bounding conflicts (A*BC) and a hybrid estimation method based on this enumeration algorithm. Experiments show that an A*BC powered state estimator produces estimates faster than the current state of the art, particularly on large systems.
</description>
<pubDate>Thu, 24 May 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/115882</guid>
<dc:date>2018-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Models of Sequential Decision-Making without Complete State Specification using Bayesian Nonparametric Inference and Active Querying</title>
<link>https://hdl.handle.net/1721.1/115482</link>
<description>Learning Models of Sequential Decision-Making without Complete State Specification using Bayesian Nonparametric Inference and Active Querying
Unhelkar, Vaibhav V.; Shah, Julie A.
Learning models of decision-making behavior during sequential tasks is useful across a variety of applications, including human-machine interaction. In this paper, we present an approach to learning such models within Markovian domains based on observing and querying a decision-making agent. In contrast to classical approaches to behavior learning, we do not assume complete knowledge of the state features that impact an agent's decisions. Using tools from Bayesian nonparametric inference and time series of agents' decisions, we first provide an inference algorithm to identify the presence of any unmodeled state features that impact decision making, as well as likely candidate models. In order to identify the best model among these candidates, we next provide an active querying approach that resolves model ambiguity by querying the decision maker. Results from our evaluations demonstrate that, using the proposed algorithms, an observer can identify the presence of latent state features, recover their dynamics, and estimate their impact on decisions during sequential tasks.
</description>
<pubDate>Thu, 17 May 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/115482</guid>
<dc:date>2018-05-17T00:00:00Z</dc:date>
</item>
<item>
<title>Generalization in Deep Learning</title>
<link>https://hdl.handle.net/1721.1/115274</link>
<description>Generalization in Deep Learning
Kawaguchi, Kenji; Kaelbling, Leslie Pack; Bengio, Yoshua
With a direct analysis of neural networks, this paper presents a mathematically tight generalization theory to partially address an open problem regarding the generalization of deep learning. Unlike previous bound-based theory, our main theory is quantitatively as tight as possible for every dataset individually, while producing competitive qualitative insights. Our results give insight into why and how deep learning can generalize well, despite its large capacity, complexity, possible algorithmic instability, nonrobustness, and sharp minima, answering an open question in the literature. We also discuss limitations of our results and propose additional open problems.
</description>
<pubDate>Tue, 01 May 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/115274</guid>
<dc:date>2018-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Natural Language Interface for Mobile Devices</title>
<link>https://hdl.handle.net/1721.1/113912</link>
<description>A Natural Language Interface for Mobile Devices
Katz, Boris; Borchardt, Gary; Felshin, Sue; Mora, Federico
Creating a robust, automated capability to respond to natural language requests has been a longstanding goal in the development of intelligent systems. This article describes the StartMobile system, originally developed in 2005-2007, which has served as an important precursor to Apple's Siri system and other commercial natural language interfaces to mobile devices and computational resources. The article begins with a discussion of goals in creating natural language interfaces, continues with a description of the general-purpose START information access system, describes the StartMobile system and its capabilities, and concludes with a discussion of current commercial systems and future directions.
</description>
<pubDate>Thu, 01 Mar 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113912</guid>
<dc:date>2018-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous Relaxation to Over-constrained Temporal Plans</title>
<link>https://hdl.handle.net/1721.1/113372</link>
<description>Continuous Relaxation to Over-constrained Temporal Plans
Yu, Peng
When humans fail to understand the capabilities of an autonomous system or its environmental limitations, they can jeopardize their objectives and the system by asking for unrealistic goals. The objective of this thesis is to enable consensus between humans and autonomous systems, by giving autonomous systems the ability to communicate to the user the reasons for goal failure and the relaxations to goals that achieve feasibility. We represent our problem in the context of over-constrained temporal plans. They are commonly encountered while operating autonomous and decision support systems, when user objectives are in conflict with the environment. Over-constrained plans are addressed by relaxing goals and/or constraints, such as delaying the arrival time of a trip, with some candidate relaxations being preferable to others. In this thesis we present Uhura, a temporal plan diagnosis and relaxation algorithm that is designed to take over-constrained input plans with temporal flexibility and contingencies, and generate temporal relaxations that make the input plan executable. We introduce two innovative approaches within Uhura: collaborative plan diagnosis and continuous relaxation. Uhura focuses on novel ways of satisfying three goals to make the plan relaxation process more convenient for the users: small perturbation, quick response and simple interaction. We have incorporated Uhura within an autonomous executive that collaborates with human operators to resolve over-constrained temporal plans. Its effectiveness has been demonstrated both in simulation and in hardware on a Personal Transportation System concept. We believe that Uhura's collaborative temporal plan diagnosis capability can benefit a wide range of applications, both in industrial settings and in our daily lives.
SM thesis
</description>
<pubDate>Fri, 25 Jan 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113372</guid>
<dc:date>2013-01-25T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Allocation for Temporal Risk Assessment</title>
<link>https://hdl.handle.net/1721.1/113371</link>
<description>Risk Allocation for Temporal Risk Assessment
Wang, Andrew J.
Temporal uncertainty arises when performing any activity in the natural world. When activities are composed into temporal plans, then, there is a risk of not meeting the plan requirements. Currently, we do not have quantitatively precise methods for assessing temporal risk of a plan. Existing methods that deal with temporal uncertainty either forgo probabilistic models or try to optimize a single objective, rather than satisfy multiple objectives. This thesis offers a method for evaluating whether a schedule exists that meets a set of temporal constraints, with acceptable risk of failure. Our key insight is to assume a form of risk allocation to each source of temporal uncertainty in our plan, such that we may reformulate the probabilistic plan into a Simple Temporal Network with Uncertainty (STNU) parameterized on the risk allocation. We show that the problem becomes a deterministic one of finding a risk allocation which implies a schedulable STNU within acceptable risk. By leveraging the principles behind STNU analysis, we derive conditions which encode this problem as a convex feasibility program over risk allocations. Furthermore, these conditions may be learned incrementally as temporal conflicts. Thus, to boost computational efficiency, we employ a generate-and-test approach to determine whether a schedule may be found.
MEng thesis
</description>
<pubDate>Thu, 31 Jan 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113371</guid>
<dc:date>2013-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Energy-efficient Control of a Smart Grid with Sustainable Homes based on Distributing Risk</title>
<link>https://hdl.handle.net/1721.1/113370</link>
<description>Energy-efficient Control of a Smart Grid with Sustainable Homes based on Distributing Risk
Ono, Masahiro
The goal of this thesis is to develop a distributed control system for a smart grid with sustainable homes. A central challenge is how to enhance energy efficiency in the presence of uncertainty. A major source of uncertainty in a smart grid is intermittent energy production by renewable energy sources. In the face of global climate change, it is crucial to reduce dependence on fossil fuels and shift to renewable energy sources, such as wind and solar. However, a large-scale introduction of wind and solar generation to an electrical grid poses a significant risk of blackouts, since the energy supplied by the renewables is unpredictable and intermittent. Therefore, an important challenge is to develop an intelligent control mechanism for the electrical grid that is both reliable and efficient. Uncertain weather conditions and human behavior pose challenges for a smart home. For example, autonomous room temperature control of a residential building may occasionally make the room environment uncomfortable for residents. Autonomous controllers must be able to take residents' preferences as an input, and to control the indoor environment in an energy-efficient manner while limiting the risk of failure to meet the residents' requirements in the presence of uncertainties. In order to overcome these challenges, we propose a distributed robust control method for a smart grid that includes smart homes as its building components. The proposed method consists of three algorithms: 1) a market-based contingent energy dispatcher for an electrical grid, 2) a risk-sensitive plan executive for temperature control of a residential building, and 3) a chance-constrained model-predictive controller with a probabilistic guarantee of constraint satisfaction, which can control continuously operating systems such as an electrical grid and a building.
We build the three algorithms upon the chance-constrained programming framework: minimization of a given cost function with chance constraints, which bound the probability of failure to satisfy given state constraints. Although these technologies provide promising capabilities, they cannot contribute to sustainability unless they are accepted by society. In this thesis we specify policy challenges for a smart grid and a smart home, and discuss policy options that give economic and regulatory incentives for society to introduce these technologies on a large scale.
SM thesis
</description>
<pubDate>Fri, 20 Jan 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113370</guid>
<dc:date>2012-01-20T00:00:00Z</dc:date>
</item>
<item>
<title>Robust, Goal-directed Plan Execution with Bounded Risk</title>
<link>https://hdl.handle.net/1721.1/113369</link>
<description>Robust, Goal-directed Plan Execution with Bounded Risk
Ono, Masahiro
There is an increasing need for robust optimal plan execution for multi-agent systems in uncertain environments, while guaranteeing an acceptable probability of success. For example, a fleet of unmanned aerial vehicles (UAVs) and autonomous underwater vehicles (AUVs) are required to operate autonomously for an extensive mission duration in an uncertain environment. Previous work introduced the concept of a model-based executive, which increases the level of autonomy, elevating the level at which systems are commanded. This thesis develops model-based executives that reason explicitly from a stochastic plant model to find the optimal course of action, while ensuring that the probability of failure is within a user-specified risk bound. This thesis presents two robust model-based executives: probabilistic Sulu, or p-Sulu, and distributed probabilistic Sulu, or dp-Sulu. The objective for p-Sulu and dp-Sulu is to allow users to command continuous, stochastic multi-agent systems in a manner that is both intuitive and safe. The user specifies the desired evolution of the plant state, as well as the acceptable probabilities of failure, as a temporal plan on states called a chance-constrained qualitative state plan (CCQSP). An example of a CCQSP statement is "go to A through B within 30 minutes, with less than 0.001% probability of failure." p-Sulu and dp-Sulu take a CCQSP, a continuous plant model with stochastic uncertainty, and an objective function as inputs, and output an optimal continuous control sequence, as well as an optimal discrete schedule. The difference between p-Sulu and dp-Sulu is that p-Sulu plans in a centralized manner while dp-Sulu plans in a distributed manner. dp-Sulu enables robust CCQSP execution for multi-agent systems.
We solve the problem based on the key concept of risk allocation, which achieves tractability by allocating the specified risk to individual constraints and mapping the result into an equivalent deterministic constrained optimization problem. Risk allocation also enables a distributed plan execution for multi-agent systems by distributing the risk among agents to decompose the optimization problem. Building upon the risk allocation approach, we develop our first CCQSP executive, p-Sulu, in four spirals. First, we develop the Convex Risk Allocation (CRA) algorithm, which can solve a CCQSP planning problem with a convex state space and a fixed schedule, highlighting the capability of optimally allocating risk to individual constraints. Second, we develop the Non-convex Iterative Risk Allocation (NIRA) algorithm, which can handle non-convex state space. Third, we build upon NIRA a full-horizon CCQSP planner, p-Sulu FH, which can optimize not only the control sequence but also the schedule. Fourth, we develop p-Sulu, which enables the real-time execution of CCQSPs by employing the receding horizon approach. Our second CCQSP executive, dp-Sulu, is developed in two spirals. First, we develop the Market-based Iterative Risk Allocation (MIRA) algorithm, which can control a multi-agent system in a distributed manner by optimally distributing risk among agents through the market-based method called tatonnement. Second and finally, we integrate the capability of MIRA into p-Sulu to build the robust model-based executive, dp-Sulu, which can execute CCQSPs on multi-agent systems in a distributed manner. Our simulation results demonstrate that our executives can efficiently execute CCQSP planning problems with significantly reduced suboptimality compared to prior art.
PhD thesis
</description>
<pubDate>Thu, 02 Feb 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113369</guid>
<dc:date>2012-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>Unsupervised Learning and Recognition of Physical Activity Plans</title>
<link>https://hdl.handle.net/1721.1/113368</link>
<description>Unsupervised Learning and Recognition of Physical Activity Plans
Dong, Shuonan
This thesis aims to enable a new kind of interaction between humans and computational agents, such as robots or computers, by allowing the agent to anticipate and adapt to human intent. In the future, more robots may be deployed in situations that require collaboration with humans, such as scientific exploration, search and rescue, hospital assistance, and even domestic care. These situations require robots to work together with humans, as part of a team, rather than as a stand-alone tool. The intent recognition capability is necessary for computational agents to play a more collaborative role in human-robot interactions, moving beyond the standard master-slave relationship of humans and computers today. We provide an innovative capability for recognizing human intent, through statistical plan learning and online recognition. We approach the plan learning problem by employing unsupervised learning to automatically determine the activities in a plan based on training data. The plan activities are described by a mixture of multivariate probability densities. The number of distributions in the mixture used to describe the data is assumed to be given. The training data trajectories are fed again through the activities' density distributions to determine each possible sequence of activities that make up a plan. These activity sequences are then summarized with temporal information in a temporal plan network, which consists of a network of all possible plans. Our approach to plan recognition begins with formulating the temporal plan network as a hidden Markov model. Next, we determine the most likely path using the Viterbi algorithm. Finally, we refer back to the temporal plan network to obtain predicted future activities. Our research presents several innovations: First, we introduce a modified representation of temporal plan networks that incorporates probabilistic information into the state space and temporal representations.
Second, we learn plans from actual data, such that the notion of an activity is not arbitrarily or manually defined, but is determined by the characteristics of the data. Third, we develop a recognition algorithm that can perform recognition continuously by making probabilistic updates. Finally, our recognizer not only identifies previously executed activities, but also predicts future activities based on the plan network. We demonstrate the capabilities of our algorithms on motion capture data. Our results show that the plan learning algorithm is able to generate reasonable temporal plan networks, depending on the dimensions of the training data and the recognition resolution used. The plan recognition algorithm is also successful in recognizing the correct activity sequences in the temporal plan network corresponding to the observed test data.
SM thesis
</description>
<pubDate>Thu, 23 Aug 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113368</guid>
<dc:date>2007-08-23T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and recognition of hybrid manipulation tasks in variable environments using probabilistic flow tubes</title>
<link>https://hdl.handle.net/1721.1/113367</link>
<description>Learning and recognition of hybrid manipulation tasks in variable environments using probabilistic flow tubes
Dong, Shuonan
Robots can act as proxies for human operators in environments where a human operator is not present or cannot directly perform a task, such as in dangerous or remote situations. Teleoperation is a common interface for controlling robots that are designed to be human proxies. Unfortunately, teleoperation may fail to preserve the natural fluidity of human motions due to interface limitations such as communication delays, non-immersive sensing, and controller uncertainty. I envision a robot that can learn a set of motions that a teleoperator commonly performs, so that it can autonomously execute routine tasks or recognize a user's motion in real time. Tasks can be either primitive activities or compound plans. During online operation, the robot can recognize a user's teleoperated motions on the fly and offer real-time assistance, for example, by autonomously executing the remainder of the task. I realize this vision by addressing three main problems: (1) learning primitive activities by identifying significant features of the example motions and generalizing the behaviors from user demonstration trajectories; (2) recognizing activities in real time by determining the likelihood that a user is currently executing one of several learned activities; and (3) learning complex plans by generalizing a sequence of activities, through auto-segmentation and incremental learning of previously unknown activities. To solve these problems, I first present an approach to learning activities from human demonstration that (1) provides flexibility and robustness when encoding a user's demonstrated motions by using a novel representation called a probabilistic flow tube, and (2) automatically determines the relevant features of a motion so that they can be preserved during autonomous execution in new situations. 
I next introduce an approach to real-time motion recognition that (1) uses temporal information to successfully model motions that may be non-Markovian, (2) provides fast real-time recognition of motions in progress by using an incremental temporal alignment approach, and (3) leverages the probabilistic flow tube representation to ensure robustness during recognition against varying environment states. Finally, I develop an approach to learn combinations of activities that (1) automatically determines where activities should be segmented in a sequence and (2) learns previously unknown activities on the fly. I demonstrate the results of autonomously executing motions learned by my approach on two different robotic platforms supporting user-teleoperated manipulation tasks in a variety of environments. I also present the results of real-time recognition in different scenarios, including a robotic hardware platform. Systematic testing in a two-dimensional environment shows up to a 27% improvement in activity recognition rates over prior art, while maintaining average computing times for incremental recognition of less than half of human reaction time.
PhD thesis
</description>
<pubDate>Thu, 23 Aug 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113367</guid>
<dc:date>2012-08-23T00:00:00Z</dc:date>
</item>
<item>
<title>Risk-minimizing program execution in robotic domains</title>
<link>https://hdl.handle.net/1721.1/113366</link>
<description>Risk-minimizing program execution in robotic domains
Effinger, Robert
In this thesis, we argue that autonomous robots operating in hostile and uncertain environments can improve robustness by computing and reasoning explicitly about risk. Autonomous robots with a keen sensitivity to risk can be trusted with critical missions, such as exploring deep space and assisting on the battlefield. We introduce a novel, risk-minimizing approach to program execution that utilizes program flexibility and estimation of risk in order to make runtime decisions that minimize the probability of program failure. Our risk-minimizing executive, called Murphy, utilizes two forms of program flexibility, 1) flexible scheduling of activity timing, and 2) redundant choice between subprocedures, in order to minimize two forms of program risk, 1) exceptions arising from activity failures, and 2) exceptions arising from timing constraint violations in a program. Murphy takes two inputs, a program written in a nondeterministic variant of the Reactive Model-based Programming Language (RMPL) and a set of stochastic activity failure models, one for each activity in a program, and computes two outputs, a risk-minimizing decision policy and value function. The decision policy informs Murphy which decisions to make at runtime in order to minimize risk, while the value function quantifies risk. In order to execute with low latency, Murphy computes the decision policy and value function offline, as a compilation step prior to program execution. In this thesis, we develop three approaches to RMPL program execution. First, we develop an approach that is guaranteed to minimize risk. For this approach, we reason probabilistically about risk by framing program execution as a Markov Decision Process (MDP). Next, we develop an approach that avoids risk altogether. For this approach, we frame program execution as a novel form of constraint-based temporal reasoning. Finally, we develop an execution approach that trades optimality in risk avoidance for tractability. 
For this approach, we leverage prior work in hierarchical decomposition of MDPs in order to mitigate complexity. We benchmark the tractability of each approach on a set of representative RMPL programs, and we demonstrate the applicability of the approach on a humanoid robot simulator.
PhD thesis
</description>
<pubDate>Thu, 02 Feb 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113366</guid>
<dc:date>2012-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Temporal Planning at Reactive Time Scales via Dynamic Backtracking Branch and Bound</title>
<link>https://hdl.handle.net/1721.1/113365</link>
<description>Optimal Temporal Planning at Reactive Time Scales via Dynamic Backtracking Branch and Bound
Effinger, Robert
Autonomous robots are being considered for increasingly capable roles in our society, such as urban search and rescue, automation for assisted living, and lunar habitat construction. To fulfill these roles, teams of autonomous robots will need to cooperate together to accomplish complex mission objectives in uncertain and dynamic environments. In these environments, autonomous robots face a host of new challenges, such as responding robustly to timing uncertainties and perturbations, task and coordination failures, and equipment malfunctions. In order to address these challenges, this thesis advocates a novel planning approach, called temporally-flexible contingent planning. A temporally-flexible contingent plan is a compact encoding of methods for achieving the mission objectives which incorporates robustness through flexible task durations, redundant methods, constraints on when methods are applicable, and preferences between methods. This approach enables robots to adapt to unexpected changes on-the-fly by selecting alternative methods at runtime in order to satisfy as best possible the mission objectives. The drawback to this approach, however, is the computational overhead involved in selecting alternative methods at runtime in response to changes. If a robot takes too long to select a new plan, it could fail to achieve its near-term mission objectives and potentially incur damage. To alleviate this problem, and extend the range of applicability of temporally-flexible contingent planning to more demanding real-time systems, this thesis proposes a temporally-flexible contingent plan executive that selects new methods quickly and optimally in response to changes in a robot's health and environment. We enable fast and optimal method selection through two complementary approaches. First, we frame optimal method selection as a constraint satisfaction problem (CSP) variant, called an Optimal Conditional CSP (OCCSP).
Second, we extend fast CSP search algorithms, such as Dynamic Backtracking and Branch-and-Bound Search, to solve OCCSPs. Experiments on an autonomous rover test-bed and on randomly generated plans show that these contributions significantly improve the speed at which robots perform optimal method selection in response to changes in their health status and environment.
SM thesis
</description>
<pubDate>Fri, 25 Aug 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113365</guid>
<dc:date>2006-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>Fast, Approximate State Estimation of Concurrent Probabilistic Hybrid Automata</title>
<link>https://hdl.handle.net/1721.1/113364</link>
<description>Fast, Approximate State Estimation of Concurrent Probabilistic Hybrid Automata
Timmons, Eric
It is an undeniable fact that autonomous systems are simultaneously becoming more commonplace, more complex, and deployed in more inhospitable environments. Examples include smart homes, smart cars, Mars rovers, unmanned aerial vehicles, and autonomous underwater vehicles. A common theme that all of these autonomous systems share is that in order to appropriately control them and prevent mission failure, they must be able to quickly estimate their internal state and the state of the world. A natural representation of many real world systems is to describe them in terms of a mixture of continuous and discrete variables. Unfortunately, hybrid estimation is typically intractable due to the large space of possible assignments to the discrete variables. In this thesis, we investigate how to incorporate conflict-directed techniques from the consistency-based, model-based diagnosis community into a hybrid framework that is no longer purely consistency based. We introduce a novel search algorithm, A&#8727; with Bounding Conflicts, that uses conflicts to not only record infeasibilities, but also learn where in the search space the heuristic function provided to the A&#8727; search is weak (possibly due to heavy to moderate sensor or process noise). Additionally, we describe a hybrid state estimation algorithm that uses this new search to perform estimation on hybrid discrete/continuous systems.
SM thesis
</description>
<pubDate>Wed, 11 Dec 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113364</guid>
<dc:date>2013-12-11T00:00:00Z</dc:date>
</item>
<item>
<title>Decision Uncertainty Minimization and Autonomous Information Gathering</title>
<link>https://hdl.handle.net/1721.1/113363</link>
<description>Decision Uncertainty Minimization and Autonomous Information Gathering
Bush, Lawrence A. M.
Over the past several decades, technologies for remote sensing and exploration have become increasingly powerful but continue to face limitations in the areas of information gathering and analysis. These limitations affect technologies that use autonomous agents, which are devices that can make routine decisions independent of operator instructions. Bandwidth and other communications limitations require that autonomous agents differentiate between relevant and irrelevant information in a computationally efficient manner. This thesis presents a novel approach to this problem by framing it as an adaptive sensing problem. Adaptive sensing allows agents to modify their information collection strategies in response to the information gathered in real time. We developed and tested optimization algorithms that apply information guides to Monte Carlo planners. Information guides provide a mechanism by which the algorithms may blend online (real-time) and offline (previously simulated) planning in order to incorporate uncertainty into the decision-making process. This greatly reduces computational operations as well as decisional and communications overhead. We begin by introducing a 3-level hierarchy that visualizes adaptive sensing at synoptic (global), mesoscale (intermediate) and microscale (close-up) levels (a spatial hierarchy). We then introduce new algorithms for decision uncertainty minimization (DUM) and representational uncertainty minimization (RUM). Finally, we demonstrate the utility of this approach to real-world sensing problems, including bathymetric mapping and disaster relief. We also examine its potential in space exploration tasks by describing its use in a hypothetical aerial exploration of Mars. Our ultimate goal is to facilitate future large-scale missions to extraterrestrial objects for the purposes of scientific advancement and human exploration.
PhD thesis
</description>
<pubDate>Thu, 22 Aug 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113363</guid>
<dc:date>2013-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>Delay Controllability: Multi-Agent Coordination under Communication Delay</title>
<link>https://hdl.handle.net/1721.1/113340</link>
<description>Delay Controllability: Multi-Agent Coordination under Communication Delay
Bhargava, Nikhil; Muise, Christian; Vaquero, Tiago; Williams, Brian
Simple Temporal Networks with Uncertainty provide a useful framework for modeling temporal constraints and, importantly, for modeling actions with uncertain durations. To determine whether we can construct a schedule for a given network, we typically consider one of two types of controllability: dynamic or strong. These controllability checks have strict conditions on how uncertainty is resolved; uncertain outcomes are either recognized immediately or not at all. In this paper, we introduce delay controllability, a novel generalization of both strong and dynamic controllability that additionally exposes a large range of controllability classes in between. To do so, we use a delay function to parameterize our controllability checking. This delay function represents the difference between when an event happens and the time that it is observed. We also provide a single unified algorithm for checking delay controllability that runs in O(n^3) time, matching the best known runtime for dynamic controllability, which we use to motivate the decision to generalize dynamic and strong controllability. We conclude by providing an empirical evaluation of delay controllability, demonstrating its superior accuracy and practical efficiency as compared to other existing approximations.
New version posted April 19, 2019 with slight tweaks to the algorithm and added clarity based on reviewer feedback.
</description>
<pubDate>Mon, 29 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113340</guid>
<dc:date>2018-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Privacy and Security Risks for National Health Records Systems</title>
<link>https://hdl.handle.net/1721.1/113292</link>
<description>Privacy and Security Risks for National Health Records Systems
Alawaji, Ahmed; Sollins, Karen
A review of national health records (NEHR) systems shows that privacy and security risks have a profound impact on the success of such projects. Countries take different approaches when dealing with privacy and security considerations. The aim of this study was to explore how governments can design secure national health records systems. To do that systematically, we developed a framework for analyzing NEHR systems. We then applied the framework to investigate the privacy and security risks in these systems. The studied systems demonstrate that getting privacy and security right has a considerable impact on the success of NEHR projects. Our study also reveals that the healthcare system structure has a substantial impact on the adoption and usage rates of the system. The studied cases uncover many opportunities for improving privacy and security measures in future projects, and the three cases demonstrate the utility of the framework.
SM thesis
</description>
<pubDate>Wed, 24 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113292</guid>
<dc:date>2018-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Component-based Supervised Learning Programs From Crowdsourced Examples</title>
<link>https://hdl.handle.net/1721.1/112949</link>
<description>Generating Component-based Supervised Learning Programs From Crowdsourced Examples
Cambronero, Jose; Rinard, Martin
We present CrowdLearn, a new system that processes an existing corpus of crowdsourced machine learning programs to learn how to generate effective pipelines for solving supervised machine learning problems. CrowdLearn uses a probabilistic model of program likelihood, conditioned on the current sequence of pipeline components and on the characteristics of the input data to the next component in the pipeline, to predict candidate pipelines. Our results highlight the effectiveness of this technique in leveraging existing crowdsourced programs to generate pipelines that work well on a range of supervised learning problems.
</description>
<pubDate>Thu, 21 Dec 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/112949</guid>
<dc:date>2017-12-21T00:00:00Z</dc:date>
</item>
<item>
<title>Typesafety for Explicitly-Coded Probabilistic Inference Procedures</title>
<link>https://hdl.handle.net/1721.1/112172</link>
<description>Typesafety for Explicitly-Coded Probabilistic Inference Procedures
Atkinson, Eric; Carbin, Michael
Researchers have recently proposed several systems that ease the process of developing Bayesian probabilistic inference algorithms. These include systems for automatic inference algorithm synthesis as well as stronger abstractions for manual algorithm development. However, existing systems whose performance relies on the developer manually constructing a part of the inference algorithm have limited support for reasoning about the correctness of the resulting algorithm. In this paper, we present Shuffle, a programming language for developing manual inference algorithms that enforces 1) the basic rules of probability theory and 2) statistical dependencies of the algorithm's corresponding probabilistic model. We have used Shuffle to develop inference algorithms for several standard probabilistic models. Our results demonstrate that Shuffle enables a developer to deliver performant implementations of these algorithms with the added benefit of Shuffle's correctness guarantees.
</description>
<pubDate>Thu, 09 Nov 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/112172</guid>
<dc:date>2017-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>The Interval Programming Model Solution Algorithm Experimentation Tools and Results</title>
<link>https://hdl.handle.net/1721.1/111117</link>
<description>The Interval Programming Model Solution Algorithm Experimentation Tools and Results
Benjamin, Michael R.
Interval programming (IvP) is a model for representing multi-objective optimization problems, along with a set of solution algorithms. This paper describes a set of IvP solution experiments run over randomly generated problem instances, using five different versions of the Recursive Interval Programming ALgorithm (RIPAL). The final version is the algorithm used most extensively in practice; the first four are provided mostly for comparison as the final version is built up in complexity. The full details of the algorithms are outside the scope of this paper; the focus here is on the experimental results and on the software tools and techniques used in generating the problem instances. Additional tools are described for facilitating the experiments, including visualization tools and tools for generating the plots and tables shown in this document. All software tools are available under an open source license, and all problem instances reported here are also available online. This document is meant to supplement other discussions of the IvP model, algorithms, and IvP applications, providing a level of detailed reporting that would not be possible under the length restrictions of other papers.
</description>
<pubDate>Fri, 01 Sep 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/111117</guid>
<dc:date>2017-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference and Regeneration of Programs that Manipulate Relational Databases</title>
<link>https://hdl.handle.net/1721.1/111067</link>
<description>Inference and Regeneration of Programs that Manipulate Relational Databases
Shen, Jiasi; Rinard, Martin
We present a new technique that infers models of programs that manipulate relational databases. This technique generates test databases and input commands, runs the program, then observes the resulting outputs and updated databases to infer the model. Because the technique works only with the externally observable inputs, outputs, and databases, it can infer the behavior of programs written in arbitrary languages using arbitrary coding styles and patterns. We also present a technique for automatically regenerating an implementation of the program based on the inferred model. The regenerator can produce a translated implementation in a different language and systematically include relevant security and error checks. We present results that illustrate the use of the technique to eliminate SQL injection vulnerabilities and the translation of applications from Java and Ruby on Rails to Python.
</description>
<pubDate>Tue, 29 Aug 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/111067</guid>
<dc:date>2017-08-29T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Fill Estimation Algorithm for Sparse Matrices and Tensors in Blocked Formats</title>
<link>https://hdl.handle.net/1721.1/109792</link>
<description>An Efficient Fill Estimation Algorithm for Sparse Matrices and Tensors in Blocked Formats
Ahrens, Willow; Schiefer, Nicholas; Xu, Helen
Tensors, linear-algebraic extensions of matrices in arbitrary dimensions, have numerous applications in computer science and computational science. Many tensors are sparse, containing more than 90% zero entries. Efficient algorithms can leverage sparsity to do less work, but the irregular locations of the nonzero entries pose challenges to performance engineers. Many tensor operations such as tensor-vector multiplications can be sped up substantially by breaking the tensor into equally sized blocks (only storing blocks which contain nonzeros) and performing operations in each block using carefully tuned code. However, selecting the best block size is computationally challenging. Previously, Vuduc et al. defined the fill of a sparse tensor to be the number of stored entries in the blocked format divided by the number of nonzero entries, and showed that the fill can be used as an effective heuristic to choose a good block size. However, they gave no accuracy bounds for their method for estimating the fill, and it is vulnerable to adversarial examples. In this paper, we present a sampling-based method for finding a (1 + epsilon)-approximation to the fill of an order N tensor for all block sizes less than B, with probability at least 1 - delta, using O(B^(2N) log(B^N / delta) / epsilon^2) samples for each block size. We introduce an efficient routine to sample for all B^N block sizes at once in O(N B^N) time. We extend our concentration bounds to a more efficient bound based on sampling without replacement, using the recent Hoeffding-Serfling inequality. We then implement our algorithm and compare our scheme to that of Vuduc, as implemented in the Optimized Sparse Kernel Interface (OSKI) library. We find that our algorithm provides faster estimates of the fill at all accuracy levels, providing evidence that this is both a theoretical and practical improvement. Our code is available under the BSD 3-clause license at https://github.com/peterahrens/FillEstimation.
</description>
<pubDate>Fri, 09 Jun 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/109792</guid>
<dc:date>2017-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Unit Auction Revenue with Possibilistic Beliefs</title>
<link>https://hdl.handle.net/1721.1/109726</link>
<description>Multi-Unit Auction Revenue with Possibilistic Beliefs
Micali, Silvio; Vlachos, Georgios
The revenue of traditional auction mechanisms is benchmarked solely against the players' own valuations, despite the fact that the players may also have valuable beliefs about each other's valuations. Not much is known about generating revenue in auctions of multiple identical copies of the same good. (In particular, the celebrated Vickrey mechanism has no revenue guarantees.) For such auctions, we (1) put forward an attractive revenue benchmark, based on the players' possibilistic beliefs about each other, and (2) construct a mechanism that achieves this benchmark, assuming that the players are two-level rational (where rationality is in the sense of Aumann).
</description>
<pubDate>Mon, 05 Jun 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/109726</guid>
<dc:date>2017-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous COLREGS Modes and Velocity Functions</title>
<link>https://hdl.handle.net/1721.1/109146</link>
<description>Autonomous COLREGS Modes and Velocity Functions
Benjamin, Michael R.
This paper concerns an implementation of an autonomy system for unmanned surface vessels operating in accordance with the Coast Guard Collision Regulations (COLREGS). The autonomy system is implemented by associating a dedicated ownship behavior module with each contact for collision avoidance. For each behavior, a mode determination is made based on the COLREGS rules, the ownship position and trajectory, and the contact position and trajectory. Based on the mode, an appropriate objective function is generated over the set of possible ownship maneuvers to bias the vehicle in accordance with the COLREGS. The focus of this paper is solely on (a) the mode determination algorithms, (b) the requisite ownship and contact terms regarding position, trajectory and relative position utilized in the mode determination algorithms, and (c) the form and equations used in constructing the objective functions associated with each mode.
</description>
<pubDate>Tue, 16 May 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/109146</guid>
<dc:date>2017-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Inference of Code Transforms and Search Spaces for Automatic Patch Generation Systems</title>
<link>https://hdl.handle.net/1721.1/108619</link>
<description>Automatic Inference of Code Transforms and Search Spaces for Automatic Patch Generation Systems
Long, Fan; Amidon, Peter; Rinard, Martin
We present a new system, Genesis, that processes sets of human patches to automatically infer code transforms and search spaces for automatic patch generation. We present results that characterize the effectiveness of the Genesis inference algorithms and the resulting complete Genesis patch generation system working with real-world patches and errors collected from the top 1000 GitHub Java software development projects. To the best of our knowledge, Genesis is the first system to automatically infer patch generation transforms or candidate patch search spaces from successful patches.
</description>
<pubDate>Fri, 08 Jul 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/108619</guid>
<dc:date>2016-07-08T00:00:00Z</dc:date>
</item>
<item>
<title>Planning Robust Strategies for Constructing Multi-object Arrangements</title>
<link>https://hdl.handle.net/1721.1/108510</link>
<description>Planning Robust Strategies for Constructing Multi-object Arrangements
Anders, Ariel; Kaelbling, Leslie; Lozano-Perez, Tomas
A crucial challenge in robotics is achieving reliable results in spite of sensing and control uncertainty. A prominent strategy for dealing with uncertainty is to construct a feedback policy, where actions are chosen as a function of the current state estimate. However, constructing such policies is computationally very difficult. An alternative strategy is conformant planning, which finds open-loop action sequences that achieve the goal for all input states and action outcomes. In this work, we investigate the conformant planning approach to robot manipulation. In particular, we tackle the problem of pushing multiple objects simultaneously to achieve a specified arrangement. Conformant planning is a belief-state planning problem. A belief state is the set of all possible states of the world, and the goal is to find a sequence of actions that will bring an initial belief state to a goal belief state. To do forward belief-state planning, we created a deterministic belief-state transition model from supervised learning based on physics simulations. A key pitfall in conformant planning is that the complexity of the belief state tends to increase with each operation, making it increasingly harder to compute the effect of actions. This work explores the idea that we can construct conformant plans for robot manipulation by only using actions resulting in compact belief states.
</description>
<pubDate>Mon, 30 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/108510</guid>
<dc:date>2017-01-30T00:00:00Z</dc:date>
</item>
<item>
<title>Inference and Regeneration of Programs that Store and Retrieve Data</title>
<link>https://hdl.handle.net/1721.1/108383</link>
<description>Inference and Regeneration of Programs that Store and Retrieve Data
Rinard, Martin; Shen, Jiasi
As modern computation platforms become increasingly complex, their programming interfaces are increasingly difficult to use. This complexity is especially inappropriate given the relatively simple core functionality that many of the computations implement. We present a new approach for obtaining software that executes on modern computing platforms with complex programming interfaces. Our approach starts with a simple seed program, written in the language of the developer's choice, that implements the desired core functionality. It then systematically generates inputs and observes the resulting outputs to learn the core functionality. It finally automatically regenerates new code that implements the learned core functionality on the target computing platform. This regenerated code contains both (a) boilerplate code for the complex programming interfaces that the target computing platform presents and (b) systematic error and vulnerability checking code that makes the new implementations robust and secure. By providing a productive new mechanism for capturing and encapsulating knowledge about how to use modern complex interfaces, this new approach promises to greatly reduce the developer effort required to obtain secure, robust software that executes on modern computing platforms.
</description>
<pubDate>Mon, 24 Apr 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/108383</guid>
<dc:date>2017-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>On the Non-Existence of Blockwise 2-Local PRGs with Applications to Indistinguishability Obfuscation</title>
<link>https://hdl.handle.net/1721.1/107928</link>
<description>On the Non-Existence of Blockwise 2-Local PRGs with Applications to Indistinguishability Obfuscation
Lombardi, Alex; Vaikuntanathan, Vinod
Lin and Tessaro (Eprint 2017/250) recently proposed indistinguishability obfuscation and functional encryption candidates and proved their security based on a standard assumption on bilinear maps and a non-standard assumption on ``Goldreich-like'' pseudorandom generators (PRG). In a nutshell, they require the existence of pseudo-random generators $G:\Sigma^n \to \{0,1\}^m$ for some $\mathsf{poly}(n)$-size alphabet $\Sigma$ where each output bit depends on at most two input alphabet symbols, and which achieve sufficiently large stretch. We show a polynomial-time attack against such generators. Our attack uses tools from the literature on two-source extractors (Chor and Goldreich, SICOMP 1988) and efficient refutation of 2-CSPs over large alphabets (Allen, O'Donnell and Witmer, FOCS 2015). Finally, we propose new ways to instantiate the Lin-Tessaro construction that do not immediately fall to our attacks. While we cannot say with any confidence that these modifications are secure, they certainly deserve further cryptanalysis.
</description>
<pubDate>Thu, 06 Apr 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/107928</guid>
<dc:date>2017-04-06T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal and Player-Replaceable Consensus with an Honest Majority</title>
<link>https://hdl.handle.net/1721.1/107927</link>
<description>Optimal and Player-Replaceable Consensus with an Honest Majority
Micali, Silvio; Vaikuntanathan, Vinod
We construct a Byzantine Agreement protocol that tolerates t &lt; n/2 corruptions, is very efficient in terms of the number of rounds and the number of bits of communication, and satisfies a strong notion of robustness called player replaceability (defined in [Mic16]). We provide an analysis of our protocol when executed on real-world networks such as the ones employed in the bitcoin protocol.
</description>
<pubDate>Fri, 31 Mar 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/107927</guid>
<dc:date>2017-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>The Tensor Algebra Compiler</title>
<link>https://hdl.handle.net/1721.1/107013</link>
<description>The Tensor Algebra Compiler
Kjolstad, Fredrik; Kamil, Shoaib; Chou, Stephen; Lugato, David; Amarasinghe, Saman
Tensor and linear algebra is pervasive in data analytics and the physical sciences. Often the tensors, matrices or even vectors are sparse. Computing expressions involving a mix of sparse and dense tensors, matrices and vectors requires writing kernels for every operation and combination of formats of interest. The number of possibilities is infinite, which makes it impossible to write library code for all. This problem cries out for a compiler approach. This paper presents a new technique that compiles compound tensor algebra expressions combined with descriptions of tensor formats into efficient loops. The technique is evaluated in a prototype compiler called taco, demonstrating competitive performance to best-in-class hand-written codes for tensor and matrix operations.
</description>
<pubDate>Fri, 17 Feb 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/107013</guid>
<dc:date>2017-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Collaborative Diagnosis of Over-Subscribed Temporal Plans</title>
<link>https://hdl.handle.net/1721.1/106886</link>
<description>Collaborative Diagnosis of Over-Subscribed Temporal Plans
Yu, Peng
Over-subscription, that is, being assigned too many tasks or requirements that are too demanding, is commonly encountered in temporal planning problems. As human beings, we often want to do more than we can and ask for things that may not be available, while underestimating how long it takes to perform each task. It is often difficult for us to detect the causes of failure in such situations and then find resolutions that are effective. We can greatly benefit from tools that assist us by looking out for these plan failures, by identifying their root causes, and by proposing preferred resolutions to these failures that lead to feasible plans. In recent literature, several approaches have been developed to resolve such over-subscribed problems, which are often framed as over-constrained scheduling, configuration design or optimal planning problems. Most of them take an all-or-nothing approach, in which over-subscription is resolved through suspending constraints or dropping goals. While helpful, in real-world scenarios we often want to preserve our plan goals as much as possible. As human beings, we know that slightly weakening the requirements of a travel plan, or replacing one of its destinations with an alternative, is often sufficient to resolve an over-subscription problem, no matter whether the requirement being weakened is the duration of a deep-sea survey being planned for or the restaurant cuisine for a dinner date. The goal of this thesis is to develop domain-independent relaxation algorithms that perform this type of slight weakening of constraints, which we formalize as continuous relaxation, and to embody them in a computational aid, Uhura, that performs tasks akin to those of an experienced travel agent or ocean scientist. In over-subscribed situations, Uhura helps us diagnose the causes of failure, suggests alternative plans, and collaborates with us to resolve conflicting requirements in the most preferred way.
Most importantly, the algorithms underlying Uhura support the weakening, instead of suspending, of constraints and variable domains in a temporally flexible plan. The contribution of this thesis is two-fold. First, we developed an algorithmic framework, called Best-first Conflict-Directed Relaxation (BCDR), for performing plan relaxation. Second, we use the BCDR framework to perform relaxation for several different families of plan representations involving different types of constraints. These include temporal constraints, chance constraints and variable domain constraints, and we incorporate several specialized conflict detection and resolution algorithms in support of their continuous weakening. The key idea behind BCDR's approach to continuous relaxation is to generalize the concepts of discrete conflicts and relaxations, first introduced by the model-based diagnosis community, to hybrid conflicts and relaxations, which denote minimal inconsistencies and minimal relaxations to both discrete and continuous relaxable constraints.
PhD thesis
</description>
<pubDate>Fri, 14 Oct 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/106886</guid>
<dc:date>2016-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>SE-Sync: A Certifiably Correct Algorithm for Synchronization over the Special Euclidean Group</title>
<link>https://hdl.handle.net/1721.1/106885</link>
<description>SE-Sync: A Certifiably Correct Algorithm for Synchronization over the Special Euclidean Group
Rosen, David M.; Carlone, Luca; Bandeira, Afonso S.; Leonard, John J.
Many important geometric estimation problems naturally take the form of synchronization over the special Euclidean group: estimate the values of a set of unknown poses given noisy measurements of a subset of their pairwise relative transforms. Examples of this class include the foundational problems of pose-graph simultaneous localization and mapping (SLAM) (in robotics), camera motion estimation (in computer vision), and sensor network localization (in distributed sensing), among others. This inference problem is typically formulated as a nonconvex maximum-likelihood estimation that is computationally hard to solve in general. Nevertheless, in this paper we present an algorithm that is able to efficiently recover certifiably globally optimal solutions of the special Euclidean synchronization problem in a non-adversarial noise regime. The crux of our approach is the development of a semidefinite relaxation of the maximum-likelihood estimation whose minimizer provides an exact MLE so long as the magnitude of the noise corrupting the available measurements falls below a certain critical threshold; furthermore, whenever exactness obtains, it is possible to verify this fact a posteriori, thereby certifying the optimality of the recovered estimate. We develop a specialized optimization scheme for solving large-scale instances of this semidefinite relaxation by exploiting its low-rank, geometric, and graph-theoretic structure to reduce it to an equivalent optimization problem defined on a low-dimensional Riemannian manifold, and then design a Riemannian truncated-Newton trust-region method to solve this reduction efficiently. Finally, we combine this fast optimization approach with a simple rounding procedure to produce our algorithm, SE-Sync. 
Experimental evaluation on a variety of simulated and real-world pose-graph SLAM datasets shows that SE-Sync is capable of recovering certifiably globally optimal solutions when the available measurements are corrupted by noise up to an order of magnitude greater than that typically encountered in robotics and computer vision applications, and does so more than an order of magnitude faster than the Gauss-Newton-based approach that forms the basis of current state-of-the-art techniques.
</description>
<pubDate>Sun, 05 Feb 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/106885</guid>
<dc:date>2017-02-05T00:00:00Z</dc:date>
</item>
<item>
<title>Propositional and Activity Monitoring Using Qualitative Spatial Reasoning</title>
<link>https://hdl.handle.net/1721.1/105848</link>
<description>Propositional and Activity Monitoring Using Qualitative Spatial Reasoning
Lane, Spencer Dale
Communication is the key to effective teamwork regardless of whether the team members are humans or machines. Much of the communication that makes human teams so effective is non-verbal; team members are able to recognize the actions that other members are performing and take their own actions in order to assist. A robotic team member should be able to make the same inferences, observing the state of the environment and inferring what actions are being taken. In this thesis I introduce a novel approach to the combined problem of activity recognition and propositional monitoring. This approach breaks down the problem into smaller sub-tasks. First, the raw sensor input is parsed into simple, easy-to-understand primitive semantic relationships known as qualitative spatial relations (QSRs). These primitives are then combined to estimate the state of the world in the same language used by most planners: Planning Domain Definition Language (PDDL) propositions. Both the primitives and propositions are combined to infer the status of the actions that the human is taking. I describe an algorithm for solving each of these smaller problems and describe the modeling process for a variety of tasks from an abstracted electronic component assembly (ECA) scenario. I implemented this scenario on a robotic testbed and collected data from a human performing the example actions.
SM thesis
</description>
<pubDate>Wed, 14 Dec 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/105848</guid>
<dc:date>2016-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Sound and Complete Runtime Security Monitor for Application Software</title>
<link>https://hdl.handle.net/1721.1/105847</link>
<description>Sound and Complete Runtime Security Monitor for Application Software
Khan, M. Taimoor; Serpanos, Dimitrios; Shrobe, Howard
We present a run-time security monitor that detects both known and unknown cyber attacks by checking that the run-time behavior of the application is consistent with the expected behavior modeled by an application specification. This is crucial because, even if the implementation is consistent with its specification, the application may still be vulnerable due to flaws in the supporting infrastructure. This run-time security monitor is sound and complete, eliminating false alarms, as well as efficient, so that it does not limit run-time application performance and so that it supports real-time systems. Importantly, this monitor is readily applicable to both legacy and new system platforms. The security monitor takes as input the application specification and the application implementation, which may be expressed in different languages. The security monitor detects attacks by systematically comparing the application execution and specification behaviors at run-time, even though they operate at two different levels of abstraction. We define the denotational semantics of the specification language and prove that the monitor is sound and complete, i.e., if the application is consistent with its specification, the security monitor will produce no false alarms (soundness), and it will detect any deviation of the application from the behavior sanctioned by the specification language (completeness). Importantly, the application specification language enables the description of known or potential attack plans, enabling not only attack detection but attack characterization as well.
</description>
<pubDate>Thu, 15 Dec 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/105847</guid>
<dc:date>2016-12-15T00:00:00Z</dc:date>
</item>
<item>
<title>Oort: User-Centric Cloud Storage with Global Queries</title>
<link>https://hdl.handle.net/1721.1/105802</link>
<description>Oort: User-Centric Cloud Storage with Global Queries
Chajed, Tej; Gjengset, Jon; Kaashoek, M. Frans; Mickens, James; Morris, Robert; Zeldovich, Nickolai
In principle, the web should provide the perfect stage for user-generated content, allowing users to share their data seamlessly with other users across services and applications. In practice, the web fragments a user's data over many sites, each exposing only limited APIs for sharing. This paper describes Oort, a new cloud storage system that organizes data primarily by user rather than by application or web site. Oort allows users to choose which web software to use with their data and which other users to share it with, while giving applications powerful tools to query that data. Users rent space from providers that cooperate to provide a global, federated, general-purpose storage system. To support large-scale, multi-user applications such as Twitter and e-mail, Oort provides global queries that find and combine data from relevant users across all providers. Oort makes global query execution efficient by recognizing and merging similar queries issued by many users' application instances, largely eliminating the per-user factor in the global complexity of queries. Our evaluation predicts that an Oort implementation could handle traffic similar to that seen by Twitter using a hundred cooperating Oort servers, and that applications with other sharing patterns, like e-mail, can also be executed efficiently.
</description>
<pubDate>Thu, 08 Dec 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/105802</guid>
<dc:date>2016-12-08T00:00:00Z</dc:date>
</item>
<item>
<title>Data and Code for "Automatic Identification of Narrative Diegesis and Point of View"</title>
<link>https://hdl.handle.net/1721.1/105279</link>
<description>Data and Code for "Automatic Identification of Narrative Diegesis and Point of View"
Eisenberg, Joshua D.; Finlayson, Mark A.
This archive contains the code and data for the workshop article "Automatic Identification of Narrative Diegesis and Point of View," published in 2016 in the 2nd Workshop for Computing News Storylines (CNewsStory 2016), co-located with EMNLP 2016 in Austin, TX. The root of the archive contains a README file which explains the archive contents. Furthermore, the archive can be imported directly into the Eclipse IDE as a project encapsulating the executable code required to reproduce the results of the paper; the code compiles with Java 1.8. The archive also contains a copy of the final version of the paper for reference.
</description>
<pubDate>Wed, 09 Nov 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/105279</guid>
<dc:date>2016-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the 2015 NSF Workshop on Unified Annotation Tooling</title>
<link>https://hdl.handle.net/1721.1/105270</link>
<description>Report on the 2015 NSF Workshop on Unified Annotation Tooling
Finlayson, Mark Alan
On March 30 &amp; 31, 2015, an international group of twenty-three researchers with expertise in linguistic annotation convened in Sunny Isles Beach, Florida to discuss problems with and potential solutions for the state of linguistic annotation tooling. The participants comprised 14 researchers from the U.S. and 9 from outside the U.S., with 7 countries and 4 continents represented, and hailed from fields and specialties including computational linguistics, artificial intelligence, speech processing, multi-modal data processing, clinical &amp; medical natural language processing, linguistics, documentary linguistics, sign-language linguistics, corpus linguistics, and the digital humanities. The motivating problem of the workshop was the balkanization of annotation tooling, namely, that even though linguistic annotation requires sophisticated tool support to efficiently generate high-quality data, the landscape of tools for the field is fractured, incompatible, inconsistent, and lacks key capabilities. The overall goal of the workshop was to chart the way forward, centering on five key questions: (1) What are the problems with the current tool landscape? (2) What are the possible benefits of solving some or all of these problems? (3) What capabilities are most needed? (4) How should we go about implementing these capabilities? And, (5) How should we ensure longevity and sustainability of the solution? I surveyed the participants before their arrival, which provided significant raw material for ideas, and the workshop discussion itself resulted in the identification of ten specific classes of problems and five sets of most-needed capabilities. Importantly, we identified annotation project managers in computational linguistics as the key recipients and users of any solution, thereby succinctly addressing questions about the scope and audience of potential solutions. We discussed management and sustainability of potential solutions at length.
The participants agreed on sixteen recommendations for future work. This technical report contains a detailed discussion of all these topics, a point-by-point review of the discussion in the workshop as it unfolded, detailed information on the participants and their expertise, and the summarized data from the surveys.
</description>
<pubDate>Tue, 08 Nov 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/105270</guid>
<dc:date>2016-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Alpenhorn: Bootstrapping Secure Communication without Leaking Metadata</title>
<link>https://hdl.handle.net/1721.1/105093</link>
<description>Alpenhorn: Bootstrapping Secure Communication without Leaking Metadata
Lazar, David; Zeldovich, Nickolai
Alpenhorn is the first system for initiating an encrypted connection between two users that provides strong privacy and forward secrecy guarantees for metadata (i.e., information about which users connected to each other) and that does not require out-of-band communication other than knowing the other user's Alpenhorn username (email address). This resolves a significant shortcoming in all prior works on private messaging, which assume an out-of-band key distribution mechanism. Alpenhorn's design builds on three ideas. First, Alpenhorn provides each user with an address book of friends that the user can call to establish a connection. Second, when a user adds a friend for the first time, Alpenhorn ensures the adversary does not learn the friend's identity, by using identity-based encryption in a novel way to privately determine the friend's public key. Finally, when calling a friend, Alpenhorn ensures forward secrecy of metadata by storing pairwise shared secrets in friends' address books, and evolving them over time, using a new keywheel construction. Alpenhorn relies on a number of servers, but operates in an anytrust model, requiring just one of the servers to be honest. We implemented a prototype of Alpenhorn, and integrated it into the Vuvuzela private messaging system (which did not previously provide privacy or forward secrecy of metadata when initiating conversations). Experimental results show that Alpenhorn can scale to many users, supporting 10 million users on three Alpenhorn servers with an average call latency of 150 seconds and a client bandwidth overhead of 3.7 KB/sec.
</description>
<pubDate>Wed, 05 Oct 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/105093</guid>
<dc:date>2016-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Examining Key Mobility Resources through Denial of Service Attacks on proposed Global Name Resolution Services</title>
<link>https://hdl.handle.net/1721.1/104385</link>
<description>Examining Key Mobility Resources through Denial of Service Attacks on proposed Global Name Resolution Services
Rock, Colleen T.
The problem we address in this thesis is to uncover the design elements in a network architecture design that may open it up to denial of service (DoS) attacks and to expose the tradeoffs in mitigating those DoS opportunities. We take as our candidate network architecture design the Future Internet Architecture project MobilityFirst. MobilityFirst's overarching goal, driven by increasingly available wireless communication, is the support of mobility in an Internet architecture. At its core, MobilityFirst separates identification from location, as distinct from the current Internet architecture, and postulates the existence of globally unique, flat identifiers. In order to support mobility in this context, it also postulates a global name resolution service (GNRS). In this thesis we examine three alternative designs for the GNRS and the opportunities they expose for DoS attacks. We consider each one in depth analytically. As an example, we then study one particular attack in depth and are forced to conclude that approaches to mitigating this attack would have a significant negative impact on the support of mobility, thus exposing the dilemma in such system design tradeoffs.
MEng thesis
</description>
<pubDate>Mon, 26 Sep 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/104385</guid>
<dc:date>2016-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>Flowtune: Flowlet Control for Datacenter Networks</title>
<link>https://hdl.handle.net/1721.1/103920</link>
<description>Flowtune: Flowlet Control for Datacenter Networks
Perry, Jonathan; Balakrishnan, Hari; Shah, Devavrat
Rapid convergence to a desired allocation of network resources to endpoint traffic has been a long-standing challenge for packet-switched networks. The reason for this is that congestion control decisions are distributed across the endpoints, which vary their offered load in response to changes in application demand and network feedback on a packet-by-packet basis. We propose a different approach for datacenter networks, flowlet control, in which congestion control decisions are made at the granularity of a flowlet, not a packet. With flowlet control, allocations have to change only when flowlets arrive or leave. We have implemented this idea in a system called Flowtune using a centralized allocator that receives flowlet start and end notifications from endpoints. The allocator computes optimal rates using a new, fast method for network utility maximization, and updates endpoint congestion-control parameters. Experiments show that Flowtune outperforms DCTCP, pFabric, sfqCoDel, and XCP on tail packet delays in various settings, converging to optimal rates within a few packets rather than over several RTTs. Our implementation of Flowtune handles 10.4x more throughput per core and scales to 8x more cores than Fastpass, for an 83-fold throughput gain.
</description>
<pubDate>Mon, 15 Aug 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/103920</guid>
<dc:date>2016-08-15T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Inference of Code Transforms and Search Spaces for Automatic Patch Generation Systems</title>
<link>https://hdl.handle.net/1721.1/103556</link>
<description>Automatic Inference of Code Transforms and Search Spaces for Automatic Patch Generation Systems
Long, Fan; Amidon, Peter; Rinard, Martin
We present a new system, Genesis, that processes sets of human patches to automatically infer code transforms and search spaces for automatic patch generation. We present results that characterize the effectiveness of the Genesis inference algorithms and the resulting complete Genesis patch generation system working with real-world patches and errors collected from the top 1,000 GitHub Java software development projects. To the best of our knowledge, Genesis is the first system to automatically infer patch generation transforms or candidate patch search spaces from successful patches.
</description>
<pubDate>Fri, 08 Jul 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/103556</guid>
<dc:date>2016-07-08T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Caching Mechanisms In Future Internet Architectures</title>
<link>https://hdl.handle.net/1721.1/103381</link>
<description>Evaluating Caching Mechanisms In Future Internet Architectures
Jing, Yuxin
This thesis seeks to test and evaluate the effects of in-network storage in novel proposed Internet architectures in terms of their performance. In a world where more and more people are mobile and connected to the Internet, we look at how the added variable of user mobility can affect how these architectures perform under different loads. Evaluating the effects of in-network storage and caching in these novel architectures will provide another facet to understanding how viable an alternative they would be to the current TCP/IP paradigm of today's Internet. In Named Data Networking, where storage is used to directly cache content, we see how its use of storage affects the locality of content, while in MobilityFirst, where storage is used to cache chunks to provide robust delivery, we look at how its different layers work together during a mobility event.
MEng thesis
</description>
<pubDate>Tue, 28 Jun 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/103381</guid>
<dc:date>2016-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Network User Behavior: Various Approaches</title>
<link>https://hdl.handle.net/1721.1/103379</link>
<description>Modeling Network User Behavior: Various Approaches
Xu, Shidan
This project involves learning to predict users' mobility within the network topology. Topological mobility, as opposed to physical mobility, can be substantial as a user switches from an LTE to a Wi-Fi network while moving minimally in physical space. Our dataset consists of email IMAP logs, which document associated client IP addresses as well as the clients' identifiers. Prediction of online mobility is of particular interest to the networking community: if we can predict online mobility with high probability, then new network architectures can be designed to optimize caching by minimizing packet retransmissions. We used various approaches and techniques to model user behavior, including probabilistic programming, regression, neural nets, and clustering algorithms. We compare and contrast how the models differ in their prediction accuracy, speed of convergence, and algorithmic complexity.
MEng thesis
</description>
<pubDate>Tue, 28 Jun 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/103379</guid>
<dc:date>2016-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Practical Theory: Bayesian Optimization and Optimal Exploration</title>
<link>https://hdl.handle.net/1721.1/102796</link>
<description>Towards Practical Theory: Bayesian Optimization and Optimal Exploration
Kawaguchi, Kenji
This thesis discusses novel principles to improve the theoretical analyses of a class of methods, aiming to provide theoretically driven yet practically useful methods. The thesis focuses on a class of methods, called bound-based search, which includes several planning algorithms (e.g., the A* algorithm and the UCT algorithm), several optimization methods (e.g., Bayesian optimization and Lipschitz optimization), and some learning algorithms (e.g., PAC-MDP algorithms). For Bayesian optimization, this work solves an open problem and achieves an exponential convergence rate. For learning algorithms, this thesis proposes a new analysis framework, called PAC-RMDP, and improves the previous theoretical bounds. The PAC-RMDP framework also provides a unifying view of some previous near-Bayes optimal and PAC-MDP algorithms. All proposed algorithms derived on the basis of the new principles produced competitive results in our numerical experiments with standard benchmark tests.
SM thesis
</description>
<pubDate>Thu, 26 May 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/102796</guid>
<dc:date>2016-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning without Poor Local Minima</title>
<link>https://hdl.handle.net/1721.1/102665</link>
<description>Deep Learning without Poor Local Minima
Kawaguchi, Kenji
In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths, and no unrealistic assumptions. As a result, we present an instance for which we can answer the following question: how difficult is it, in theory, to train a deep model directly? It is more difficult than for classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice.
</description>
<pubDate>Mon, 23 May 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/102665</guid>
<dc:date>2016-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Delphi: A Software Controller for Mobile Network Selection</title>
<link>https://hdl.handle.net/1721.1/101636</link>
<description>Delphi: A Software Controller for Mobile Network Selection
Deng, Shuo; Sivaraman, Anirudh; Balakrishnan, Hari
This paper presents Delphi, a mobile software controller that helps applications select the best network among available choices for their data transfers. Delphi optimizes a specified objective such as transfer completion time, energy per byte transferred, or the monetary cost of a transfer. It has four components: a performance predictor that uses features gathered by a network monitor, and a traffic profiler to estimate transfer sizes near the start of a transfer, all fed into a network selector that uses the prediction and transfer size estimate to optimize an objective. For each transfer, Delphi either recommends the best single network to use, or recommends Multi-Path TCP (MPTCP), but crucially selects the network for MPTCP's primary subflow. The choice of primary subflow has a strong impact on the transfer completion time, especially for short transfers. We designed and implemented Delphi in Linux. It requires no application modifications. Our evaluation shows that Delphi reduces application network transfer time by 46% for Web browsing and by 49% for video streaming, compared with Android's default policy of always using Wi-Fi when it is available. Delphi can also be configured to achieve high throughput while being battery-efficient: in this configuration, it achieves 1.9x the throughput of Android's default policy while consuming only 6% more energy.
</description>
<pubDate>Thu, 25 Feb 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/101636</guid>
<dc:date>2016-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of the Search Spaces for Generate and Validate Patch Generation Systems</title>
<link>https://hdl.handle.net/1721.1/101211</link>
<description>An Analysis of the Search Spaces for Generate and Validate Patch Generation Systems
Long, Fan; Rinard, Martin
We present the first systematic analysis of the characteristics of patch search spaces for automatic patch generation systems. We analyze the search spaces of two current state-of-the-art systems, SPR and Prophet, with 16 different search space configurations. Our results are derived from an analysis of 1104 different search spaces and 768 patch generation executions. Together these experiments consumed over 9000 hours of CPU time on Amazon EC2. The analysis shows that 1) correct patches are sparse in the search spaces (typically at most one correct patch per search space per defect), 2) incorrect patches that nevertheless pass all of the test cases in the validation test suite are typically orders of magnitude more abundant, and 3) leveraging information other than the test suite is therefore critical for enabling the system to successfully isolate correct patches. We also characterize a key tradeoff in the structure of the search spaces. Larger and richer search spaces that contain correct patches for more defects can actually cause systems to find fewer, not more, correct patches. We identify two reasons for this phenomenon: 1) increased validation times because of the presence of more candidate patches and 2) more incorrect patches that pass the test suite and block the discovery of correct patches. These fundamental properties, which are all characterized for the first time in this paper, help explain why past systems often fail to generate correct patches and help identify challenges, opportunities, and productive future directions for the field.
</description>
<pubDate>Thu, 18 Feb 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/101211</guid>
<dc:date>2016-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>Outlier Detection in Heterogeneous Datasets using Automatic Tuple Expansion</title>
<link>https://hdl.handle.net/1721.1/101150</link>
<description>Outlier Detection in Heterogeneous Datasets using Automatic Tuple Expansion
Pit-Claudel, Clément; Mariet, Zelda; Harding, Rachael; Madden, Sam
Rapidly developing areas of information technology are generating massive amounts of data. Human errors, sensor failures, and other unforeseen circumstances unfortunately tend to undermine the quality and consistency of these datasets by introducing outliers -- data points that exhibit surprising behavior when compared to the rest of the data. Characterizing, locating, and in some cases eliminating these outliers offers interesting insight about the data under scrutiny and reinforces the confidence that one may have in conclusions drawn from otherwise noisy datasets. In this paper, we describe a tuple expansion procedure which reconstructs rich information from semantically poor SQL data types such as strings, integers, and floating point numbers. We then use this procedure as the foundation of a new user-guided outlier detection framework, dBoost, which relies on inference and statistical modeling of heterogeneous data to flag suspicious fields in database tuples. We show that this novel approach achieves good classification performance, both in traditional numerical datasets and in highly non-numerical contexts such as mostly textual datasets. Our implementation is publicly available, under version 3 of the GNU General Public License.
</description>
<pubDate>Mon, 08 Feb 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/101150</guid>
<dc:date>2016-02-08T00:00:00Z</dc:date>
</item>
<item>
<title>Initial report on Object Spreadsheets</title>
<link>https://hdl.handle.net/1721.1/100803</link>
<description>Initial report on Object Spreadsheets
McCutchen, Richard Matthew; Itzhaky, Shachar; Jackson, Daniel
There is a growing demand for data-driven web applications that help automate organizational and business processes of low to medium complexity by letting users view and update structured data in controlled ways. We present Object Spreadsheets, an end-user development tool that combines a spreadsheet interface with a rich data model to help process administrators build the logic for such applications themselves. Its all-in-one interface with immediate feedback has the potential to bring more complex tasks within reach of end-user developers, compared to existing approaches. Our data model is based on the structure of entity-relationship models and directly supports nested variable-size collections and object references, which are common in web applications but poorly accommodated by traditional spreadsheets. Object Spreadsheets has a formula language suited to the data model and supports stored procedures to specify the forms of updates that application users may make. Formulas can be used to assemble data in the exact structure in which it is to be shown in the application UI, simplifying the task of UI building; we intend for Object Spreadsheets to be integrated with a UI builder to provide a complete solution for application development. We describe our prototype implementation and several example applications we built to demonstrate the applicability of the tool.
</description>
<pubDate>Tue, 12 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/100803</guid>
<dc:date>2016-01-12T00:00:00Z</dc:date>
</item>
<item>
<title>Filtered Iterators For Safe and Robust Programs in RIFL</title>
<link>https://hdl.handle.net/1721.1/100542</link>
<description>Filtered Iterators For Safe and Robust Programs in RIFL
Shen, Jiasi; Rinard, Martin
We present a new language construct, filtered iterators, for safe and robust input processing. Filtered iterators are designed to eliminate many common input-processing errors while enabling robust continued execution. The design is inspired by (a) observed common input-processing errors and (b) continued execution strategies that are implemented by developers fixing input validation errors. Filtered iterators decompose inputs into input units, atomically and automatically discarding units that trigger errors. Statistically significant results from a developer study highlight the difficulties that developers encounter when developing input-processing code using standard language constructs. These results also demonstrate the effectiveness of filtered iterators in eliminating many of these difficulties and enabling developers to produce safe and robust input-processing code.
</description>
<pubDate>Sun, 27 Dec 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/100542</guid>
<dc:date>2015-12-27T00:00:00Z</dc:date>
</item>
<item>
<title>Jenga: Harnessing Heterogeneous Memories through Reconfigurable Cache Hierarchies</title>
<link>https://hdl.handle.net/1721.1/100466</link>
<description>Jenga: Harnessing Heterogeneous Memories through Reconfigurable Cache Hierarchies
Beckmann, Nathan; Tsai, Po-An; Sanchez, Daniel
Conventional memory systems are organized as a rigid hierarchy, with multiple levels of progressively larger and slower memories. Hierarchy allows a simple, fixed design to benefit a wide range of applications, because working sets settle at the smallest (and fastest) level they fit in. However, rigid hierarchies also cause significant overheads, because each level adds latency and energy even when it does not capture the working set. In emerging systems with heterogeneous memory technologies such as stacked DRAM, these overheads often limit performance and efficiency. We propose Jenga, a reconfigurable cache hierarchy that avoids these pathologies and approaches the performance of a hierarchy optimized for each application. Jenga monitors application behavior and dynamically builds virtual cache hierarchies out of heterogeneous, distributed cache banks. Jenga uses simple hardware support and a novel software runtime to configure virtual cache hierarchies. On a 36-core CMP with a 1 GB stacked-DRAM cache, Jenga outperforms a combination of state-of-the-art techniques by 10% on average and by up to 36%, and does so while saving energy, improving system-wide energy-delay product by 29% on average and by up to 96%.
</description>
<pubDate>Sat, 19 Dec 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/100466</guid>
<dc:date>2015-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging Theory and Practice in Cache Replacement</title>
<link>https://hdl.handle.net/1721.1/100465</link>
<description>Bridging Theory and Practice in Cache Replacement
Beckmann, Nathan; Sanchez, Daniel
Much prior work has studied processor cache replacement policies, but a large gap remains between theory and practice. The optimal policy (MIN) requires unobtainable knowledge of the future, and prior theoretically-grounded policies use reference models that do not match real programs. Meanwhile, practical policies are designed empirically. Lacking a strong theoretical foundation, they do not make the best use of the information available to them. This paper bridges theory and practice. We propose that practical policies should replace lines based on their economic value added (EVA), the difference of their expected hits from the average. We use Markov decision processes to show that EVA is optimal under some reasonable simplifications. We present an inexpensive, practical implementation of EVA and evaluate it exhaustively over many cache sizes. EVA outperforms prior practical policies and saves area at iso-performance. These results show that formalizing cache replacement yields practical benefits.
</description>
<pubDate>Sat, 19 Dec 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/100465</guid>
<dc:date>2015-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Cache Calculus: Modeling Caches through Differential Equations</title>
<link>https://hdl.handle.net/1721.1/100464</link>
<description>Cache Calculus: Modeling Caches through Differential Equations
Beckmann, Nathan; Sanchez, Daniel
Caches are critical to performance, yet their behavior is hard to understand and model. In particular, prior work does not provide closed-form solutions of cache performance, i.e. simple expressions for the miss rate of a specific access pattern. Existing cache models instead use numerical methods that, unlike closed-form solutions, are computationally expensive and yield limited insight. We present cache calculus, a technique that models cache behavior as a system of ordinary differential equations, letting standard calculus techniques find simple and accurate solutions of cache performance for common access patterns.
</description>
<pubDate>Sat, 19 Dec 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/100464</guid>
<dc:date>2015-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Supplementary materials for "ProppLearner: Deeply Annotating a Corpus of Russian Folktales to Enable the Machine Learning of a Russian Formalist Theory"</title>
<link>https://hdl.handle.net/1721.1/100054</link>
<description>Supplementary materials for "ProppLearner: Deeply Annotating a Corpus of Russian Folktales to Enable the Machine Learning of a Russian Formalist Theory"
Finlayson, Mark Alan
This archive contains the supplementary material for the journal article "ProppLearner: Deeply Annotating a Corpus of Russian Folktales to Enable the Machine Learning of a Russian Formalist Theory", published in the Journal of Digital Scholarship in the Humanities (DSH), ca. 2016. The archive contains several different types of files. First, it contains the annotation guides that were used to train the annotators. The guides are numbered to match the team numbers in Table 6. Included here are not only detailed guides for some layers, as produced by the original developers of the specification, but also our synopsis guides for each layer, which were used as a reference and further training material for the annotators. Also of interest are the general annotator and adjudicator training guides, which outline the general procedures followed by the teams when conducting annotation. Those who are organizing their own annotation projects may find this material useful. Second, the archive contains a comprehensive manifest, in Excel spreadsheet format, listing the word counts, sources, types, and titles (in both Russian and English) of all the texts that are part of the corpus. Finally, the archive contains the actual corpus data files, in Story Workbench format, an XML-encoded stand-off annotation scheme. The scheme is described in the file format specification file, also included in the archive. These files can be parsed with the aid of any normal XML reading software, or can be loaded and edited easily with the Story Workbench annotation tool, also freely available.
</description>
<pubDate>Wed, 02 Dec 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/100054</guid>
<dc:date>2015-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Representation Discovery for Kernel-Based Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/100053</link>
<description>Representation Discovery for Kernel-Based Reinforcement Learning
Zewdie, Dawit H.; Konidaris, George
Recent years have seen increased interest in non-parametric reinforcement learning. There are now practical kernel-based algorithms for approximating value functions; however, kernel regression requires that the underlying function being approximated be smooth on its domain. Few problems of interest satisfy this requirement in their natural representation. In this paper we define Value-Consistent Pseudometric (VCPM), the distance function corresponding to a transformation of the domain into a space where the target function is maximally smooth and thus well-approximated by kernel regression. We then present DKBRL, an iterative batch RL algorithm interleaving steps of Kernel-Based Reinforcement Learning and distance metric adjustment. We evaluate its performance on Acrobot and PinBall, continuous-space reinforcement learning domains with discontinuous value functions.
</description>
<pubDate>Tue, 24 Nov 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/100053</guid>
<dc:date>2015-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Prefetching of Data Tiles for Interactive Visualization</title>
<link>https://hdl.handle.net/1721.1/99361</link>
<description>Dynamic Prefetching of Data Tiles for Interactive Visualization
Battle, Leilani; Chang, Remco; Stonebraker, Michael
In this paper, we present ForeCache, a general-purpose tool for exploratory browsing of large datasets. ForeCache utilizes a client-server architecture, where the user interacts with a lightweight client-side interface to browse datasets, and the data to be browsed is retrieved from a DBMS running on a back-end server. We assume a detail-on-demand browsing paradigm, and optimize the back-end support for this paradigm by inserting a separate middleware layer in front of the DBMS. To improve response times, the middleware layer fetches data ahead of the user as she explores a dataset. We consider two different mechanisms for prefetching: (a) learning what to fetch from the user's recent movements, and (b) using data characteristics (e.g., histograms) to find data similar to what the user has viewed in the past. We incorporate these mechanisms into a single prediction engine that adjusts its prediction strategies over time, based on changes in the user's behavior. We evaluated our prediction engine with a user study, and found that our dynamic prefetching strategy provides: (1) significant improvements in overall latency when compared with non-prefetching systems (430% improvement); and (2) substantial improvements in both prediction accuracy (25% improvement) and latency (88% improvement) relative to existing prefetching techniques.
</description>
<pubDate>Mon, 19 Oct 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/99361</guid>
<dc:date>2015-10-19T00:00:00Z</dc:date>
</item>
<item>
<title>Big Data Privacy Scenarios</title>
<link>https://hdl.handle.net/1721.1/99127</link>
<description>Big Data Privacy Scenarios
Bruce, Elizabeth; Sollins, Karen; Vernon, Mona; Weitzner, Danny
This paper is the first in a series on privacy in Big Data. As an outgrowth of a series of workshops on the topic, the Big Data Privacy Working Group undertook a study of a series of use scenarios to highlight the challenges to privacy that arise in the Big Data arena. This is a report on those scenarios. The deeper question explored by this exercise is what is distinctive about privacy in the context of Big Data. In addition, we discuss an initial list of issues for privacy that derive specifically from the nature of Big Data. These derive from observations across the real-world scenarios and use cases explored in this project, as well as wider reading and discussions:
* Scale: The sheer size of the datasets leads to challenges in creating, managing, and applying privacy policies.
* Diversity: The increased likelihood of more, and more diverse, participants in Big Data collection, management, and use leads to differing agendas and objectives. By nature, this is likely to lead to contradictory agendas and objectives.
* Integration: With increased data management technologies (e.g., cloud services, data lakes, and so forth) and integration across datasets, with new and often surprising opportunities for cross-product inferences, will also come new information about individuals and their behaviors.
* Impact on secondary participants: Because many pieces of information reflect not only the targeted subject but also secondary, often unintended, participants, the inferences and resulting information will increasingly reflect other people not originally considered as subjects of privacy concerns and approaches.
* Need for emergent policies for emergent information: As inferences over merged data sets occur, emergent information or understanding will occur. Although each unique data set may have existing privacy policies and enforcement mechanisms, it is not clear that the requisite and appropriate emergent privacy policies, and appropriate enforcement of them, can be developed automatically.
</description>
<pubDate>Thu, 01 Oct 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/99127</guid>
<dc:date>2015-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing a Context-Sensitive Context Detection Service for Mobile Devices</title>
<link>https://hdl.handle.net/1721.1/98905</link>
<description>Designing a Context-Sensitive Context Detection Service for Mobile Devices
Chen, Tiffany Yu-Han; Sivaraman, Anirudh; Das, Somak; Ravindranath, Lenin; Balakrishnan, Hari
This paper describes the design, implementation, and evaluation of Amoeba, a context-sensitive context detection service for mobile devices. Amoeba exports an API that allows a client to express interest in one or more context types (activity, indoor/outdoor, and entry/exit to/from named regions), subscribe to specific modes within each context (e.g., "walking" or "running", but no other activity), and specify a response latency (i.e., how often the client is notified). Each context has a detector that returns its estimate of the mode. The detectors take both the desired subscriptions and the current context detection into account, adjusting both the types of sensors and the sampling rates to achieve high accuracy and low energy consumption. We have implemented Amoeba on Android. Experiments with Amoeba on 45+ hours of data show that our activity detector achieves an accuracy between 92% and 99%, outperforming previous proposals like UCLA* (59%), EEMSS (82%) and SociableSense (72%), while consuming 4 to 6× less energy.
</description>
<pubDate>Thu, 24 Sep 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/98905</guid>
<dc:date>2015-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>Network Maximal Correlation</title>
<link>https://hdl.handle.net/1721.1/98878</link>
<description>Network Maximal Correlation
Feizi, Soheil; Makhdoumi, Ali; Duffy, Ken; Kellis, Manolis; Medard, Muriel
Identifying nonlinear relationships in large datasets is a daunting task particularly when the form of the nonlinearity is unknown. Here, we introduce Network Maximal Correlation (NMC) as a fundamental measure to capture nonlinear associations in networks without the knowledge of underlying nonlinearity shapes. NMC infers, possibly nonlinear, transformations of variables with zero means and unit variances by maximizing total nonlinear correlation over the underlying network. For the case of having two variables, NMC is equivalent to the standard Maximal Correlation. We characterize a solution of the NMC optimization using geometric properties of Hilbert spaces for both discrete and jointly Gaussian variables. For discrete random variables, we show that the NMC optimization is an instance of the Maximum Correlation Problem and provide necessary conditions for its global optimal solution. Moreover, we propose an efficient algorithm based on Alternating Conditional Expectation (ACE) which converges to a local NMC optimum. For this algorithm, we provide guidelines for choosing appropriate starting points to jump out of local maximizers. We also propose a distributed algorithm to compute a 1-$\epsilon$ approximation of the NMC value for large and dense graphs using graph partitioning. For jointly Gaussian variables, under some conditions, we show that the NMC optimization can be simplified to a Max-Cut problem, where we provide conditions under which an NMC solution can be computed exactly. Under some general conditions, we show that NMC can infer the underlying graphical model for functions of latent jointly Gaussian variables. These functions are unknown, bijective, and can be nonlinear. This result broadens the family of continuous distributions whose graphical models can be characterized efficiently. We illustrate the robustness of NMC in real world applications by showing its continuity with respect to small perturbations of joint distributions. 
We also show that sample NMC (NMC computed using empirical distributions) converges exponentially fast to the true NMC value. Finally, we apply NMC to different cancer datasets including breast, kidney and liver cancers, and show that NMC infers gene modules that are significantly associated with survival times of individuals while they are not detected using linear association measures.
</description>
<pubDate>Mon, 21 Sep 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/98878</guid>
<dc:date>2015-09-21T00:00:00Z</dc:date>
</item>
<item>
<title>Prophet: Automatic Patch Generation via Learning from Successful Patches</title>
<link>https://hdl.handle.net/1721.1/97735</link>
<description>Prophet: Automatic Patch Generation via Learning from Successful Patches
Long, Fan; Rinard, Martin
We present Prophet, a novel patch generation system that learns a probabilistic model over candidate patches from a database of past successful patches. Prophet defines the probabilistic model as the combination of a distribution over program points based on defect localization algorithms and a parameterized log-linear distribution over modification operations. It then learns the model parameters via maximum log-likelihood, which identifies important characteristics of the previous successful patches in the database. For a new defect, Prophet generates a search space that contains many candidate patches, applies the learned model to prioritize those potentially correct patches that are consistent with the identified successful patch characteristics, and then validates the candidate patches with a user supplied test suite. The experimental results indicate that these techniques enable Prophet to generate correct patches for 15 out of 69 real-world defects in eight open source projects. The previous state-of-the-art generate-and-validate system, which uses a set of hand-coded heuristics to prioritize the search, generates correct patches for 11 of these same 69 defects.
</description>
<pubDate>Mon, 13 Jul 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97735</guid>
<dc:date>2015-07-13T00:00:00Z</dc:date>
</item>
<item>
<title>Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications</title>
<link>https://hdl.handle.net/1721.1/97690</link>
<description>Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications
Abelson, Harold; Anderson, Ross; Bellovin, Steven M.; Benaloh, Josh; Blaze, Matt; Diffie, Whitfield; Gilmore, John; Green, Matthew; Landau, Susan; Neumann, Peter G.; Rivest, Ronald L.; Schiller, Jeffrey I.; Schneier, Bruce; Specter, Michael; Weitzner, Daniel J.
Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels going dark, these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates. We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today's Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse forward secrecy design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
</description>
<pubDate>Mon, 06 Jul 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97690</guid>
<dc:date>2015-07-06T00:00:00Z</dc:date>
</item>
<item>
<title>PhD Thesis Proposal: Human-Machine Collaborative Optimization via Apprenticeship Scheduling</title>
<link>https://hdl.handle.net/1721.1/97689</link>
<description>PhD Thesis Proposal: Human-Machine Collaborative Optimization via Apprenticeship Scheduling
Gombolay, Matthew C.
Resource optimization in health care, manufacturing, and military operations requires the careful choreography of people and equipment to effectively fulfill the responsibilities of the profession. However, resource optimization is a computationally challenging problem, and poorly utilizing resources can have drastic consequences. Within these professions, there are human domain experts who are able to learn from experience to develop strategies, heuristics, and rules-of-thumb to effectively utilize the resources at their disposal. Manually codifying these heuristics within a computational tool is a laborious process and leaves much to be desired. Even with a codified set of heuristics, it is not clear how to best insert an autonomous decision-support system into the human decision-making process. The aim of this thesis is to develop an autonomous computational method for learning domain-expert heuristics from demonstration that can support the human decision-making process. We propose a new framework, called apprenticeship scheduling, which learns and embeds these heuristics within a scalable resource optimization algorithm for real-time decision-support. Our initial investigation, comprising the development of scalable methods for scheduling and a study of shared control in human-machine collaborative resource optimization, inspires the development of our apprenticeship scheduling approach. We present a promising, initial prototype for learning heuristics from demonstration and outline a plan for our continuing work.
</description>
<pubDate>Thu, 02 Jul 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97689</guid>
<dc:date>2015-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Guaranteeing Spoof-Resilient Multi-Robot Networks</title>
<link>https://hdl.handle.net/1721.1/97442</link>
<description>Guaranteeing Spoof-Resilient Multi-Robot Networks
Gil, Stephanie; Kumar, Swarun; Mazumder, Mark; Katabi, Dina; Rus, Daniela
Multi-robot networks use wireless communication to provide wide-ranging services such as aerial surveillance and unmanned delivery. However, effective coordination between multiple robots requires trust, making them particularly vulnerable to cyber-attacks. Specifically, such networks can be gravely disrupted by the Sybil attack, where even a single malicious robot can spoof a large number of fake clients. This paper proposes a new solution to defend against the Sybil attack, without requiring expensive cryptographic key-distribution. Our core contribution is a novel algorithm implemented on commercial Wi-Fi radios that can "sense" spoofers using the physics of wireless signals. We derive theoretical guarantees on how this algorithm bounds the impact of the Sybil Attack on a broad class of robotic coverage problems. We experimentally validate our claims using a team of AscTec quadrotor servers and iRobot Create ground clients, and demonstrate spoofer detection rates over 96%.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97442</guid>
</item>
<item>
<title>Value-Deviation-Bounded Serial Data Encoding for Energy-Efficient Approximate Communication</title>
<link>https://hdl.handle.net/1721.1/97180</link>
<description>Value-Deviation-Bounded Serial Data Encoding for Energy-Efficient Approximate Communication
Stanley-Marbell, Phillip; Rinard, Martin
Transferring data between ICs accounts for a growing proportion of system power in wearable and mobile systems. Reducing signal transitions reduces the dynamic power dissipated in this data transfer, but traditional approaches cannot be applied when the transfer interfaces are serial buses. To address this challenge, we present a family of optimal value-deviation-bounded approximate serial encoders (VDBS encoders) that significantly reduce signal transitions (and hence, dynamic power) for bit-serial communication interfaces. When the data in transfer are from sensors, VDBS encoding enables a tradeoff between power efficiency and application fidelity, by exploiting the tolerance of many of the typical algorithms consuming sensor data to deviations in values. We derive analytic formulations for the family of VDBS encoders and introduce an efficient algorithm that performs close to the Pareto-optimal encoders. We evaluate the algorithm in two applications: Encoding data between a camera and processor in a text-recognition system, and between an accelerometer and processor in a pedometer system. For the text recognizer, the algorithm reduces signal transitions by 55% on average, while maintaining OCR accuracy at over 90% for previously-correctly-recognized text. For the pedometer, the algorithm reduces signal transitions by an average of 54% in exchange for step count errors of under 5%.
</description>
<pubDate>Thu, 04 Jun 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97180</guid>
<dc:date>2015-06-04T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems</title>
<link>https://hdl.handle.net/1721.1/97130</link>
<description>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems
Qi, Zichao; Long, Fan; Achour, Sara; Rinard, Martin
We analyze reported patches for three existing generate-and-validate patch generation systems (GenProg, RSRepair, and AE). The basic principle behind generate-and-validate systems is to accept only plausible patches that produce correct outputs for all inputs in the test suite used to validate the patches. Because of errors in the patch evaluation infrastructure, the majority of the reported patches are not plausible -- they do not produce correct outputs even for the inputs in the validation test suite. The overwhelming majority of the reported patches are not correct and are equivalent to a single modification that simply deletes functionality. Observed negative effects include the introduction of security vulnerabilities and the elimination of desirable standard functionality. We also present Kali, a generate-and-validate patch generation system that only deletes functionality. Working with a simpler and more effectively focused search space, Kali generates at least as many correct patches as prior GenProg, RSRepair, and AE systems. Kali also generates at least as many patches that produce correct outputs for the inputs in the validation test suite as the three prior systems. We also discuss the patches produced by ClearView, a generate-and-validate binary hot patching system that leverages learned invariants to produce patches that enable systems to survive otherwise fatal defects and security attacks. Our analysis indicates that ClearView successfully patches 9 of the 10 security vulnerabilities used to evaluate the system. At least 4 of these patches are correct.
</description>
<pubDate>Fri, 29 May 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97130</guid>
<dc:date>2015-05-29T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems</title>
<link>https://hdl.handle.net/1721.1/97089</link>
<description>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems
Qi, Zichao; Long, Fan; Achour, Sara; Rinard, Martin
We analyze reported patches for three existing generate-and-validate patch generation systems (GenProg, RSRepair, and AE). The basic principle behind generate-and-validate systems is to accept only plausible patches that produce correct outputs for all inputs in the test suite used to validate the patches. Because of errors in the patch evaluation infrastructure, the majority of the reported patches are not plausible --- they do not produce correct outputs even for the inputs in the validation test suite. The overwhelming majority of the reported patches are not correct and are equivalent to a single modification that simply deletes functionality. Observed negative effects include the introduction of security vulnerabilities and the elimination of desirable standard functionality. We also present Kali, a generate-and-validate patch generation system that only deletes functionality. Working with a simpler and more effectively focused search space, Kali generates at least as many correct patches as prior GenProg, RSRepair, and AE systems. Kali also generates at least as many patches that produce correct outputs for the inputs in the validation test suite as the three prior systems. We also discuss patches produced by ClearView, a generate-and-validate binary hot patching system that leverages learned invariants to produce patches that enable systems to survive otherwise fatal defects and security attacks. Our analysis indicates that ClearView successfully patches 9 of the 10 security vulnerabilities used to evaluate the system. At least 4 of these patches are correct.
</description>
<pubDate>Tue, 26 May 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97089</guid>
<dc:date>2015-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Prophet: Automatic Patch Generation via Learning from Successful Human Patches</title>
<link>https://hdl.handle.net/1721.1/97088</link>
<description>Prophet: Automatic Patch Generation via Learning from Successful Human Patches
Long, Fan; Rinard, Martin
We present Prophet, a novel patch generation system that learns a probabilistic model over candidate patches from a large code database that contains many past successful human patches. It defines the probabilistic model as the combination of a distribution over program points based on error localization algorithms and a parameterized log-linear distribution over modification operations. It then learns the model parameters via maximum log-likelihood, which identifies important characteristics of the successful human patches. For a new defect, Prophet generates a search space that contains many candidate patches, applies the learned model to prioritize those potentially correct patches that are consistent with the identified successful patch characteristics, and then validates the candidate patches with a user supplied test suite.
</description>
<pubDate>Tue, 26 May 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97088</guid>
<dc:date>2015-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Discovery and Patching of Buffer and Integer Overflow Errors</title>
<link>https://hdl.handle.net/1721.1/97087</link>
<description>Automatic Discovery and Patching of Buffer and Integer Overflow Errors
Sidiroglou-Douskos, Stelios; Lahtinen, Eric; Rinard, Martin
We present Targeted Automatic Patching (TAP), an automatic buffer and integer overflow discovery and patching system. Starting with an application and a seed input that the application processes correctly, TAP dynamically analyzes the execution of the application to locate target memory allocation sites and statements that access dynamically or statically allocated blocks of memory. It then uses targeted error-discovery techniques to automatically generate inputs that trigger integer and/or buffer overflows at the target sites. When it discovers a buffer or integer overflow error, TAP automatically matches and applies patch templates to generate patches that eliminate the error. Our experimental results show that TAP successfully discovers and patches two buffer and six integer overflow errors in six real-world applications.
</description>
<pubDate>Tue, 26 May 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97087</guid>
<dc:date>2015-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Simit: A Language for Physical Simulation</title>
<link>https://hdl.handle.net/1721.1/97075</link>
<description>Simit: A Language for Physical Simulation
Kjolstad, Fredrik; Kamil, Shoaib; Ragan-Kelley, Jonathan; Levin, David I.W.; Sueda, Shinjiro; Chen, Desai; Vouga, Etienne; Kaufman, Danny M.; Kanwar, Gurtej; Matusik, Wojciech; Amarasinghe, Saman
Using existing programming tools, writing high-performance simulation code is labor intensive and requires sacrificing readability and portability. The alternative is to prototype simulations in a high-level language like Matlab, thereby sacrificing performance. The Matlab programming model naturally describes the behavior of an entire physical system using the language of linear algebra. However, simulations also manipulate individual geometric elements, which are best represented using linked data structures like meshes. Translating between the linked data structures and linear algebra comes at significant cost, both to the programmer and the machine. High-performance implementations avoid the cost by rephrasing the computation in terms of linked or index data structures, leaving the code complicated and monolithic, often increasing its size by an order of magnitude. In this paper, we present Simit, a new language for physical simulations that lets the programmer view the system both as a linked data structure in the form of a hypergraph, and as a set of global vectors, matrices and tensors depending on what is convenient at any given time. Simit provides a novel assembly construct that makes it conceptually easy and computationally efficient to move between the two abstractions. Using the information provided by the assembly construct, the compiler generates efficient in-place computation on the graph. We demonstrate that Simit is easy to use: a Simit program is typically shorter than a Matlab program; that it is high-performance: a Simit program running sequentially on a CPU performs comparably to hand-optimized simulations; and that it is portable: Simit programs can be compiled for GPUs with no change to the program, delivering 5-25x speedups over our optimized CPU code.
</description>
<pubDate>Tue, 26 May 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97075</guid>
<dc:date>2015-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems (Supplementary Material)</title>
<link>https://hdl.handle.net/1721.1/97051</link>
<description>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems (Supplementary Material)
Qi, Zichao; Long, Fan; Achour, Sara; Rinard, Martin
We analyze reported patches for three prior generate-and-validate patch generation systems (GenProg, RSRepair, and AE). Because of errors in the patch evaluation infrastructure, the majority of the reported patches violate the basic principle behind the design of these systems: they do not produce correct outputs even for the inputs in the test suite used to validate the patches. We also show that the overwhelming majority of the accepted patches are not correct and are equivalent to a single modification that simply deletes functionality. We also present Kali, a generate-and-validate patch generation system that only deletes functionality. Working with a simpler and more effectively focused search space, Kali generates at least as many correct patches as prior GenProg, RSRepair, and AE systems. Kali also generates at least as many patches that produce correct outputs for the inputs in the validation test suite as the three prior systems. We also discuss the patches produced by ClearView, a generate-and-validate binary hot patching system that leverages learned invariants to produce patches that enable systems to survive otherwise fatal defects and security attacks. Our analysis indicates that ClearView successfully patches 9 of the 10 security vulnerabilities used to evaluate the system. At least 4 of these patches are correct.
</description>
<pubDate>Thu, 21 May 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97051</guid>
<dc:date>2015-05-21T00:00:00Z</dc:date>
</item>
<item>
<title>A (Truly) Local Broadcast Layer for Unreliable Radio Networks</title>
<link>https://hdl.handle.net/1721.1/97014</link>
<description>A (Truly) Local Broadcast Layer for Unreliable Radio Networks
Lynch, Nancy; Newport, Calvin
In this paper, we implement an efficient local broadcast service for the dual graph model, which describes communication in a radio network with both reliable and unreliable links. Our local broadcast service offers probabilistic latency guarantees for: (1) message delivery to all reliable neighbors (i.e., neighbors connected by reliable links), and (2) receiving some message when one or more reliable neighbors are broadcasting. This service significantly simplifies the design and analysis of algorithms for the otherwise challenging dual graph model. To this end, we also note that our solution can be interpreted as an implementation of the abstract MAC layer specification---therefore translating the growing corpus of algorithmic results studied on top of this layer to the dual graph model. At the core of our service is a seed agreement routine which enables nodes in the network to achieve "good enough" coordination to overcome the difficulties of unpredictable link behavior. Because this routine has potential application to other problems in this setting, we capture it with a formal specification---simplifying its reuse in other algorithms. Finally, we note that in a break from much work on distributed radio network algorithms, our problem definitions (including error bounds), implementation, and analysis do not depend on global network parameters such as the network size, a goal which required new analysis techniques. We argue that breaking the dependence of these algorithms on global parameters makes more sense and aligns better with the rise of ubiquitous computing, where devices will be increasingly working locally in an otherwise massive network. Our push for locality, in other words, is a contribution independent of the specific radio network model and problem studied here.
</description>
<pubDate>Mon, 18 May 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97014</guid>
<dc:date>2015-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Non-Essential Communication in Mobile Applications</title>
<link>https://hdl.handle.net/1721.1/96909</link>
<description>Non-Essential Communication in Mobile Applications
Rubin, Julia; Gordon, Michael I.; Nguyen, Nguyen; Rinard, Martin
This paper studies communication patterns in mobile applications. Our analysis shows that 65% of the HTTP, socket, and RPC communication in top-popular Android applications from Google Play has no effect on the user-observable application functionality. We present a static analysis that is able to detect non-essential communication with 84%-90% precision and 63%-64% recall, depending on whether advertisement content is interpreted as essential or not. We use our technique to analyze the 500 top-popular Android applications from Google Play and determine that more than 80% of the connection statements in these applications are non-essential.
</description>
<pubDate>Mon, 04 May 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/96909</guid>
<dc:date>2015-05-04T00:00:00Z</dc:date>
</item>
<item>
<title>Markov Chain Hallway and Poisson Forest Environment Generating Distributions</title>
<link>https://hdl.handle.net/1721.1/96879</link>
<description>Markov Chain Hallway and Poisson Forest Environment Generating Distributions
Richter, Charles; Vega-Brown, William; Roy, Nicholas
We document two environment-generating distributions used for sampling random 2D maps. The first generates random hallway environments based on a Markov chain and the second generates random forest environments based on the Poisson distribution.
</description>
<pubDate>Mon, 27 Apr 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/96879</guid>
<dc:date>2015-04-27T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Error Elimination by Horizontal Code Transfer Across Multiple Applications</title>
<link>https://hdl.handle.net/1721.1/96625</link>
<description>Automatic Error Elimination by Horizontal Code Transfer Across Multiple Applications
Sidiroglou-Douskos, Stelios; Lahtinen, Eric; Long, Fan; Rinard, Martin
We present Code Phage (CP), a system for automatically transferring correct code from donor applications into recipient applications that process the same inputs to successfully eliminate errors in the recipient. Experimental results using seven donor applications to eliminate ten errors in seven recipient applications highlight the ability of CP to transfer code across applications to eliminate out of bounds access, integer overflow, and divide by zero errors. Because CP works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, CP is the first system to automatically transfer code across multiple applications.
</description>
<pubDate>Wed, 15 Apr 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/96625</guid>
<dc:date>2015-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>Horizontal Code Transfer via Program Fracture and Recombination</title>
<link>https://hdl.handle.net/1721.1/96585</link>
<description>Horizontal Code Transfer via Program Fracture and Recombination
Sidiroglou-Douskos, Stelios; Davis, Eli; Rinard, Martin
We present a new horizontal code transfer technique, program fracture and recombination, for automatically replacing, deleting, and/or combining code from multiple applications. Benefits include automatic generation of new applications incorporating the best or most desirable functionality developed anywhere, the automatic elimination of security vulnerabilities, effective software rejuvenation, the automatic elimination of obsolete or undesirable functionality, and improved performance, simplicity, analyzability, and clarity.
</description>
<pubDate>Tue, 14 Apr 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/96585</guid>
<dc:date>2015-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Cache Model for Modern Processors</title>
<link>https://hdl.handle.net/1721.1/96525</link>
<description>A Cache Model for Modern Processors
Beckmann, Nathan; Sanchez, Daniel
Modern processors use high-performance cache replacement policies that outperform traditional alternatives like least-recently used (LRU). Unfortunately, current cache models use stack distances to predict LRU or its variants, and cannot capture these high-performance policies. Accurate predictions of cache performance enable many optimizations in multicore systems. For example, cache partitioning uses these predictions to divide capacity among applications in order to maximize performance, guarantee quality of service, or achieve other system objectives. Without an accurate model for high-performance replacement policies, these optimizations are unavailable to modern processors. We present a new probabilistic cache model designed for high-performance replacement policies. This model uses absolute reuse distances instead of stack distances, which makes it applicable to arbitrary age-based replacement policies. We thoroughly validate our model on several high-performance policies on synthetic and real benchmarks, where its median error is less than 1%. Finally, we present two case studies showing how to use the model to improve shared and single-stream cache performance.
</description>
<pubDate>Thu, 09 Apr 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/96525</guid>
<dc:date>2015-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>iBCM: Interactive Bayesian Case Model Empowering Humans via Intuitive Interaction</title>
<link>https://hdl.handle.net/1721.1/96315</link>
<description>iBCM: Interactive Bayesian Case Model Empowering Humans via Intuitive Interaction
Kim, Been; Glassman, Elena; Johnson, Brittney; Shah, Julie
Clustering methods optimize the partitioning of data points with respect to an internal metric, such as likelihood, in order to approximate the goodness of clustering. However, this internal metric does not necessarily translate into effective clustering from the user's perspective. This work presents the interactive Bayesian Case Model (iBCM), a model that opens a communication channel between the clustering model and the user. Users can provide direct input to iBCM in order to achieve effective clustering results, and iBCM optimizes the clustering by creating a balance between what the data indicate and what makes the most sense to the user. This model provides feedback for users and does not assume any prior knowledge of machine learning on their part. We provide quantitative evidence that users are able to obtain more satisfactory clustering results through iBCM than without an interactive model. We also demonstrate the use of this method in a real-world setting where computer language class teachers utilize iBCM to cluster students' coding assignments for grading.
</description>
<pubDate>Wed, 01 Apr 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/96315</guid>
<dc:date>2015-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Suite of Techniques for Describing Activity in Terms of Events</title>
<link>https://hdl.handle.net/1721.1/96300</link>
<description>A Suite of Techniques for Describing Activity in Terms of Events
Borchardt, Gary C.
This report presents a set of software techniques that support the tasks of event recognition, summarization of event sequences, explanation of recognized events, explanation of non-recognized events, prediction of event completions, and question answering by leveraging language-encoded human knowledge of what typically happens during various types of events. The techniques operate on sequences of timestamped, three-dimensional positions and contacts for humans, body parts, and objects, provided by a Microsoft Kinect sensor plus associated software. Appendices describe 64 activity sequences used for development and testing of the techniques and 102 event models created as part of the effort.
</description>
<pubDate>Mon, 30 Mar 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/96300</guid>
<dc:date>2015-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Staged Program Repair in SPR</title>
<link>https://hdl.handle.net/1721.1/95970</link>
<description>Staged Program Repair in SPR
Long, Fan; Rinard, Martin
We present SPR, a new program repair system that uses condition synthesis to instantiate transformation schemas to repair program defects. SPR's staged repair strategy combines a rich space of potential repairs with a targeted search algorithm that makes this space viably searchable in practice. This strategy enables SPR to successfully find correct program repairs within a space that contains many meaningful and useful patches. The majority of these correct repairs are not within the search spaces of previous automatic program repair systems.
</description>
<pubDate>Wed, 11 Mar 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/95970</guid>
<dc:date>2015-03-11T00:00:00Z</dc:date>
</item>
<item>
<title>Staged Program Repair in SPR (Supplementary Material)</title>
<link>https://hdl.handle.net/1721.1/95963</link>
<description>Staged Program Repair in SPR (Supplementary Material)
Long, Fan; Rinard, Martin
We present SPR, a new program repair system that uses condition synthesis to instantiate transformation schemas to repair program defects. SPR's staged repair strategy combines a rich space of potential repairs with a targeted search algorithm that makes this space viably searchable in practice. This strategy enables SPR to successfully find correct program repairs within a space that contains many correct patches. The majority of these correct patches are not within the search spaces of previous automatic program repair systems.
</description>
<pubDate>Thu, 05 Mar 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/95963</guid>
<dc:date>2015-03-05T00:00:00Z</dc:date>
</item>
<item>
<title>Consensus using Asynchronous Failure Detectors</title>
<link>https://hdl.handle.net/1721.1/95775</link>
<description>Consensus using Asynchronous Failure Detectors
Lynch, Nancy; Sastry, Srikanth
The FLP result shows that crash-tolerant consensus is impossible to solve in asynchronous systems, and several solutions have been proposed for crash-tolerant consensus under alternative (stronger) models. One popular approach is to augment the asynchronous system with appropriate failure detectors, which provide (potentially unreliable) information about process crashes in the system, to circumvent the FLP impossibility. In this paper, we demonstrate the exact mechanism by which (sufficiently powerful) asynchronous failure detectors enable solving crash-tolerant consensus. Our approach, which borrows arguments from the FLP impossibility proof and the famous CHT result, which shows that Omega is a weakest failure detector to solve consensus, also yields a natural proof that Omega is a weakest asynchronous failure detector to solve consensus. The use of I/O automata theory in our approach enables us to model execution in a more detailed fashion than CHT and also addresses the latent assumptions and assertions in the original CHT result.
</description>
<pubDate>Mon, 02 Mar 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/95775</guid>
<dc:date>2015-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>On the Formal Semantics of the Cognitive Middleware AWDRAT</title>
<link>https://hdl.handle.net/1721.1/95774</link>
<description>On the Formal Semantics of the Cognitive Middleware AWDRAT
Khan, Muhammad Taimoor; Serpanos, Dimitrios; Shrobe, Howard
The purpose of this work is twofold: on one hand, we want to formalize the behavior of critical components of the self-generating and adapting cognitive middleware AWDRAT, such that the formalism not only helps to understand the semantics and technical details of the middleware but also opens an opportunity to extend the middleware to support other complex application domains of cybersecurity; on the other hand, the formalism serves as a prerequisite for our proof of the behavioral correctness of the critical components, to ensure the safety of the middleware itself. Here, however, we focus only on the core and critical component of the middleware, i.e., the Execution Monitor, which is part of the module "Architectural Differencer" of AWDRAT. The role of the Execution Monitor is to identify inconsistencies between run-time observations of the target system and predictions of the System Architectural Model. To achieve this goal, we first define the formal (denotational) semantics of the observations (run-time events) and predictions (executable specifications of the System Architectural Model); then, based on the aforementioned formal semantics, we formalize the behavior of the "Execution Monitor" of the middleware.
</description>
<pubDate>Tue, 03 Mar 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/95774</guid>
<dc:date>2015-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Spectral Alignment of Networks</title>
<link>https://hdl.handle.net/1721.1/94606</link>
<description>Spectral Alignment of Networks
Feizi, Soheil; Quon, Gerald; Medard, Muriel; Kellis, Manolis; Jadbabaie, Ali
Network alignment refers to the problem of finding a bijective mapping across vertices of two or more graphs to maximize the number of overlapping edges and/or to minimize the number of mismatched interactions across networks. This paper introduces a network alignment algorithm inspired by eigenvector analysis which creates a simple relaxation for the underlying quadratic assignment problem. Our method relaxes binary assignment constraints along the leading eigenvector of an alignment matrix which captures the structure of matched and mismatched interactions across networks. Our proposed algorithm, denoted EigenAlign, has two steps. First, it computes the Perron-Frobenius eigenvector of the alignment matrix. Second, it uses this eigenvector in a linear optimization framework of maximum weight bipartite matching to infer bijective mappings across vertices of two graphs. Unlike existing network alignment methods, EigenAlign considers both matched and mismatched interactions in its optimization and is therefore effective in aligning networks even with low similarity. We show that, when certain technical conditions hold, the relaxation given by EigenAlign is asymptotically exact over Erdos-Renyi graphs with high probability. Moreover, for modular network structures, we show that EigenAlign can be used to split the large quadratic assignment optimization into small subproblems, enabling the use of computationally expensive but tight semidefinite relaxations over each subproblem. Through simulations, we show the effectiveness of the EigenAlign algorithm in aligning various network structures including Erdos-Renyi, power law, and stochastic block models, under different noise models. Finally, we apply EigenAlign to compare gene regulatory networks across human, fly and worm species, which we infer by integrating genome-wide functional and physical genomics datasets from the ENCODE and modENCODE consortia.
EigenAlign infers conserved regulatory interactions across these species despite large evolutionary distances spanned. We find strong conservation of centrally-connected genes and some biological pathways, especially for human-fly comparisons.
</description>
<pubDate>Wed, 18 Feb 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/94606</guid>
<dc:date>2015-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Program Repair with Condition Synthesis and Compound Mutations</title>
<link>https://hdl.handle.net/1721.1/94520</link>
<description>Automatic Program Repair with Condition Synthesis and Compound Mutations
Long, Fan; Qi, Zichao; Achour, Sara; Rinard, Martin
We present PCR, a new automatic patch generation system. PCR uses a new condition synthesis technique to efficiently discover logical expressions that generate desired control-flow transfer patterns. Presented with a set of test cases, PCR deploys condition synthesis to find and repair incorrect if conditions that cause the application to produce the wrong result for one or more of the test cases. PCR also leverages condition synthesis to obtain a set of compound modifications that generate a rich, productive, and tractable search space of candidate patches. We evaluate PCR on a set of 105 defects from the GenProg benchmark set. For 40 of these defects, PCR generates plausible patches (patches that generate correct outputs for all inputs in the test suite used to validate the patch). For 12 of these defects, PCR generates correct patches that are functionally equivalent to developer patches that appear in subsequent versions. For comparison purposes, GenProg generates plausible patches for only 18 defects and correct patches for only 2 defects. AE generates plausible patches for only 27 defects and correct patches for only 3 defects.
</description>
<pubDate>Thu, 12 Feb 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/94520</guid>
<dc:date>2015-02-12T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems</title>
<link>https://hdl.handle.net/1721.1/94337</link>
<description>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems
Qi, Zichao; Long, Fan; Achour, Sara; Rinard, Martin
We analyze reported patches for three prior generate-and-validate patch generation systems (GenProg, RSRepair, and AE). Because of experimental error, the majority of the reported patches violate the basic principle behind the design of these systems -- they do not produce correct outputs even for the inputs in the test suite used to validate the patches. We also show that the overwhelming majority of the accepted patches are not correct and are equivalent to a single modification that simply deletes functionality. We also present Kali, a generate-and-validate patch generation system that simply deletes functionality. Working with a simpler and more effectively focused search space, Kali generates at least as many correct patches as prior GenProg, RSRepair, and AE systems. Kali also generates at least as many plausible patches that produce correct outputs for the inputs in the validation test suite as the three prior systems. We also discuss the patches produced by ClearView, a generate-and-validate binary hot patching system that leverages learned invariants to produce patches that enable systems to survive otherwise fatal defects and security attacks.
</description>
<pubDate>Tue, 10 Feb 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/94337</guid>
<dc:date>2015-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems (Supplementary Material)</title>
<link>https://hdl.handle.net/1721.1/93255</link>
<description>An Analysis of Patch Plausibility and Correctness for Generate-And-Validate Patch Generation Systems (Supplementary Material)
Qi, Zichao; Long, Fan; Achour, Sara; Rinard, Martin
We analyze reported patches for three prior generate-and-validate patch generation systems (GenProg, RSRepair, and AE). Because of experimental error, the majority of the reported patches violate the basic principle behind the design of these systems -- they do not produce correct outputs even for the inputs in the test suite used to validate the patches. We also show that the overwhelming majority of the accepted patches are not correct and are equivalent to a single modification that simply deletes functionality. We also present Kali, a generate-and-validate patch generation system that simply deletes functionality. Working with a simpler and more effectively focused search space, Kali produces more correct patches and at least as many patches that produce correct outputs for the inputs in the validation test suite as prior GenProg, RSRepair, and AE systems.
</description>
<pubDate>Mon, 02 Feb 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/93255</guid>
<dc:date>2015-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>Improved Caching Strategies for Publish/Subscribe Internet Networking</title>
<link>https://hdl.handle.net/1721.1/93253</link>
<description>Improved Caching Strategies for Publish/Subscribe Internet Networking
Beckler, Kendra K.
The systemic structure of TCP/IP is outdated; a new scheme for data transportation is needed in order to make the internet more adaptive to modern demands of mobility, information-driven demand, an ever-increasing quantity of users and data, and performance requirements. While an information-centric networking system addresses these issues, one required component for publish/subscribe or content-addressed internet networking systems to work properly is an improved caching system. This allows publish/subscribe internet networking to dynamically route packets to mobile users, as an improvement over pure hierarchical or pure distributed caching systems. To this end, I proposed, implemented, and analyzed the workings of a superdomain caching system. The superdomain caching system is a hybrid of hierarchical and dynamic caching systems designed to continue reaping the benefits of the caching system for mobile users (who may move between neighboring domains in the midst of a network transaction) while minimizing the latency inherent in any distributed caching system to improve upon the content-addressed system.
MEng thesis
</description>
<pubDate>Sat, 31 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/93253</guid>
<dc:date>2015-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Efficiently Solving Repeated Integer Linear Programming Problems by Learning Solutions of Similar Linear Programming Problems using Boosting Trees</title>
<link>https://hdl.handle.net/1721.1/93099</link>
<description>Efficiently Solving Repeated Integer Linear Programming Problems by Learning Solutions of Similar Linear Programming Problems using Boosting Trees
Banerjee, Ashis Gopal; Roy, Nicholas
It is challenging to obtain online solutions of large-scale integer linear programming (ILP) problems that occur frequently in slightly different forms during planning for autonomous systems. We refer to such ILP problems as repeated ILP problems. The branch-and-bound (BAB) algorithm is commonly used to solve ILP problems, and a significant amount of computation time is expended in solving numerous relaxed linear programming (LP) problems at the nodes of the BAB trees. We observe that the relaxed LP problems, both within a particular BAB tree and across multiple trees for repeated ILP problems, are similar to each other in the sense that they contain almost the same number of constraints, similar objective function and constraint coefficients, and an identical number of decision variables. We present a boosting tree-based regression technique for learning a set of functions that map the objective function and the constraints to the decision variables of such a system of similar LP problems; this enables us to efficiently infer approximately optimal solutions of the repeated ILP problems. We provide theoretical performance guarantees on the predicted values and demonstrate the effectiveness of the algorithm in four representative domains involving a library of benchmark ILP problems, aircraft carrier deck scheduling, vehicle routing, and vehicle control.
</description>
<pubDate>Wed, 21 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/93099</guid>
<dc:date>2015-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>Supplementary Materials for "A Survey of Corpora in Computational and Cognitive Narrative Science"</title>
<link>https://hdl.handle.net/1721.1/92563</link>
<description>Supplementary Materials for "A Survey of Corpora in Computational and Cognitive Narrative Science"
Finlayson, Mark Alan
This archive contains supplementary materials for the article titled "A Survey of Corpora in Computational and Cognitive Narrative Science" by Mark A. Finlayson, published in the journal *Sprache und Datenverarbeitung*. The archive contains two files. The first file is the raw bibliographic data of the survey, containing 2600+ citations. The second file is a spreadsheet with the coded features of each corpus, plus the analyses that underlie sections 3 &amp; 4 of the paper.
</description>
<pubDate>Tue, 30 Dec 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/92563</guid>
<dc:date>2014-12-30T00:00:00Z</dc:date>
</item>
<item>
<title>Queueing Theory Analysis of Labor &amp; Delivery at a Tertiary Care Center</title>
<link>https://hdl.handle.net/1721.1/92354</link>
<description>Queueing Theory Analysis of Labor &amp; Delivery at a Tertiary Care Center
Gombolay, Matthew; Golen, Toni; Shah, Neel; Shah, Julie
Labor and Delivery is a complex clinical service requiring the support of highly trained healthcare professionals from Obstetrics, Anesthesiology, and Neonatology, as well as access to a finite set of valuable resources. In the United States, the rate of cesarean sections on labor floors is approximately twice as high as is considered appropriate for patient care. We analyze one month of data from a Boston-area hospital to assess how well the labor and delivery process can be modelled with tools from queueing theory. We find that the labor and delivery process is highly amenable to analysis under queueing theory models. We also investigate the problem of high cesarean section rates and the potential effects on resource utilization of lowering the rate of cesarean sections.
</description>
<pubDate>Tue, 16 Dec 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/92354</guid>
<dc:date>2014-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>Network Infusion to Infer Information Sources in Networks</title>
<link>https://hdl.handle.net/1721.1/92031</link>
<description>Network Infusion to Infer Information Sources in Networks
Feizi, Soheil; Duffy, Ken; Kellis, Manolis; Medard, Muriel
Several models exist for diffusion of signals across biological, social, or engineered networks. However, the inverse problem of identifying the source of such propagated information appears more difficult even in the presence of multiple network snapshots, and especially for the single-snapshot case, given the many alternative, often similar, progressions of diffusion that may lead to the same observed snapshots. Mathematically, this problem can be undertaken using a diffusion kernel that represents diffusion processes in a given network, but computing this kernel is computationally challenging in general. Here, we propose a path-based network diffusion kernel which considers edge-disjoint shortest paths among pairs of nodes in the network and can be computed efficiently for both homogeneous and heterogeneous continuous-time diffusion models. We use this network diffusion kernel to solve the inverse diffusion problem, which we term Network Infusion (NI), using both likelihood maximization and error minimization. The minimum-error NI algorithm is based on an asymmetric Hamming premetric function and can balance between false positive and false negative error types. We apply this framework for both single-source and multi-source diffusion, for both single-snapshot and multi-snapshot observations, and using both uninformative and informative prior probabilities for candidate source nodes. We also provide proofs that under a standard susceptible-infected diffusion model, (1) the maximum-likelihood NI is mean-field optimal for tree structures or sufficiently sparse Erdos-Renyi graphs, (2) the minimum-error algorithm is mean-field optimal for regular tree structures, and (3) for sufficiently-distant sources, the multi-source solution is mean-field optimal in the regular tree structure. Moreover, we provide techniques to learn diffusion model parameters such as observation times.
We apply NI to several synthetic networks and compare its performance to centrality-based and distance-based methods for Erdos-Renyi graphs, power-law networks, and symmetric and asymmetric grids. Moreover, we use NI in two real-world applications. First, we identify the news sources for 3,553 stories in the Digg social news network, and validate our results based on annotated information that was not provided to our algorithm. Second, we use NI to identify infusion hubs of human diseases, defined as gene candidates that can explain the connectivity pattern of disease-related genes in the human regulatory network. NI identifies infusion hubs of several human diseases including T1D, Parkinson, MS, SLE, Psoriasis and Schizophrenia. We show that the inferred infusion hubs are biologically relevant and often not identifiable using the raw p-values.
</description>
<pubDate>Tue, 02 Dec 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/92031</guid>
<dc:date>2014-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>tBurton: A Divide and Conquer Temporal Planner</title>
<link>https://hdl.handle.net/1721.1/91170</link>
<description>tBurton: A Divide and Conquer Temporal Planner
Wang, David; Williams, Brian C.
Planning for and controlling a network of interacting devices requires a planner that accounts for the automatic timed transitions of devices while meeting deadlines and achieving durative goals. For example, a planner for an imaging satellite with a camera intolerant of exhaust would need to determine that opening a valve causes a chain reaction that ignites the engine, and thus needs to shield its camera. While planners exist that support deadlines and durative goals, currently, no planners can handle automatic timed transitions. We present tBurton, a temporal planner that supports these features while additionally producing a temporally least-commitment plan. tBurton uses a divide and conquer approach: dividing the problem using causal-graph decomposition and conquering each factor with heuristic forward search. The "sub-plans" from each factor are unified in a conflict-directed search, guided by the causal graph structure. We describe why tBurton is fast and efficient and present its efficacy on benchmarks from the International Planning Competition.
</description>
<pubDate>Fri, 24 Oct 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/91170</guid>
<dc:date>2014-10-24T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Error Elimination by Multi-Application Code Transfer</title>
<link>https://hdl.handle.net/1721.1/91150</link>
<description>Automatic Error Elimination by Multi-Application Code Transfer
Sidiroglou-Douskos, Stelios; Lahtinen, Eric; Rinard, Martin
We present pDNA, a system for automatically transferring correct code from donor applications into recipient applications to successfully eliminate errors in the recipient. Experimental results using six donor applications to eliminate nine errors in six recipient applications highlight the ability of pDNA to transfer code across applications to eliminate otherwise fatal integer and buffer overflow errors. Because pDNA works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, pDNA is the first system to eliminate software errors via the successful transfer of correct code across applications.
</description>
<pubDate>Thu, 02 Oct 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/91150</guid>
<dc:date>2014-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Error Elimination by Multi-Application Code Transfer</title>
<link>https://hdl.handle.net/1721.1/91149</link>
<description>Automatic Error Elimination by Multi-Application Code Transfer
Sidiroglou-Douskos, Stelios; Lahtinen, Eric; Long, Fan; Piselli, Paolo; Rinard, Martin
We present pDNA, a system for automatically transferring correct code from donor applications into recipient applications to successfully eliminate errors in the recipient. Experimental results using six donor applications to eliminate nine errors in six recipient applications highlight the ability of pDNA to transfer code across applications to eliminate otherwise fatal integer and buffer overflow errors. Because pDNA works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, pDNA is the first system to eliminate software errors via the successful transfer of correct code across applications.
</description>
<pubDate>Tue, 30 Sep 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/91149</guid>
<dc:date>2014-09-30T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Error Elimination by Multi-Application Code Transfer</title>
<link>https://hdl.handle.net/1721.1/91148</link>
<description>Automatic Error Elimination by Multi-Application Code Transfer
Sidiroglou-Douskos, Stelios; Lahtinen, Eric; Long, Fan; Piselli, Paolo; Rinard, Martin
We present pDNA, a system for automatically transferring correct code from donor applications into recipient applications to successfully eliminate errors in the recipient. Experimental results using three donor applications to eliminate seven errors in four recipient applications highlight the ability of pDNA to transfer code across applications to eliminate otherwise fatal integer overflow errors at critical memory allocation sites. Because pDNA works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, pDNA is the first system to eliminate software errors via the successful transfer of correct code across applications.
</description>
<pubDate>Mon, 11 Aug 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/91148</guid>
<dc:date>2014-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>An Analyst's Assistant for the Interpretation of Vehicle Track Data</title>
<link>https://hdl.handle.net/1721.1/90812</link>
<description>An Analyst's Assistant for the Interpretation of Vehicle Track Data
Borchardt, Gary; Katz, Boris; Nguyen, Hong-Linh; Felshin, Sue; Senne, Ken; Wang, Andy
This report describes the Analyst's Assistant, a software system for language-interactive, collaborative user-system interpretation of events, specifically targeting vehicle events that can be recognized on the basis of vehicle track data. The Analyst's Assistant utilizes language not only as a means of interaction, but also as a basis for internal representation of scene information, background knowledge, and results of interpretation. Building on this basis, the system demonstrates emerging intelligent systems techniques related to event recognition, summarization of events, partitioning of subtasks between user and system, and handling of language and graphical references to scene entities during interactive analysis.
</description>
<pubDate>Wed, 08 Oct 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/90812</guid>
<dc:date>2014-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Error Elimination by Multi-Application Code Transfer</title>
<link>https://hdl.handle.net/1721.1/90561</link>
<description>Automatic Error Elimination by Multi-Application Code Transfer
Sidiroglou-Douskos, Stelios; Lahtinen, Eric; Rinard, Martin
We present Code Phage (CP), a system for automatically transferring correct code from donor applications into recipient applications to successfully eliminate errors in the recipient. Experimental results using six donor applications to eliminate nine errors in six recipient applications highlight the ability of CP to transfer code across applications to eliminate otherwise fatal integer and buffer overflow errors. Because CP works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, CP is the first system to eliminate software errors via the successful transfer of correct code across applications.
</description>
<pubDate>Thu, 02 Oct 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/90561</guid>
<dc:date>2014-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Constraint Generation for the Jeeves Privacy Language</title>
<link>https://hdl.handle.net/1721.1/90560</link>
<description>Constraint Generation for the Jeeves Privacy Language
Rose, Eva
Our goal is to present a completed, semantic formalization of the Jeeves privacy language evaluation engine, based on the original Jeeves constraint semantics defined by Yang et al. at POPL'12, but sufficiently strong to support a first complete implementation thereof. Specifically, we present and implement a syntactically and semantically completed concrete syntax for Jeeves that meets the example criteria given in the paper. We also present and implement the associated translation to J, here formulated by a completed and decompositional operational semantic formulation. Finally, we present an enhanced and decompositional, non-substitutional operational semantic formulation and implementation of the J evaluation engine (the dynamic semantics) with privacy constraints. In particular, we show how implementing the constraints can be defined as a monad, and evaluation can be defined as a monadic operation on the constraint environment. The implementations are all completed in Haskell, utilizing its almost one-to-one capability to transparently reflect the underlying semantic reasoning when formalized this way. In practice, we have applied the "literate" program facility of Haskell to this report, a feature that enables the LaTeX source to also serve as the source code for the implementation (skipping the report-parts as comment regions). The implementation is published as a github project.
</description>
<pubDate>Wed, 01 Oct 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/90560</guid>
<dc:date>2014-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>OpLog: a library for scaling update-heavy data structures</title>
<link>https://hdl.handle.net/1721.1/89653</link>
<description>OpLog: a library for scaling update-heavy data structures
Boyd-Wickizer, Silas; Kaashoek, M. Frans; Morris, Robert; Zeldovich, Nickolai
Existing techniques (e.g., RCU) can achieve good multi-core scaling for read-mostly data, but for update-heavy data structures only special-purpose techniques exist. This paper presents OpLog, a general-purpose library supporting good scalability for update-heavy data structures. OpLog achieves scalability by logging each update in a low-contention per-core log; it combines logs only when required by a read to the data structure. OpLog achieves generality by logging operations without having to understand them, to ease application to existing data structures. OpLog can further increase performance if the programmer indicates which operations can be combined in the logs. An evaluation shows how to apply OpLog to three update-heavy Linux kernel data structures. Measurements on a 48-core AMD server show that the result significantly improves the performance of the Apache web server and the Exim mail server under certain workloads.
</description>
<pubDate>Tue, 16 Sep 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/89653</guid>
<dc:date>2014-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Alloy*: A Higher-Order Relational Constraint Solver</title>
<link>https://hdl.handle.net/1721.1/89157</link>
<description>Alloy*: A Higher-Order Relational Constraint Solver
Milicevic, Aleksandar; Near, Joseph P.; Kang, Eunsuk; Jackson, Daniel
The last decade has seen a dramatic growth in the use of constraint solvers as a computational mechanism, not only for analysis and synthesis of software, but also at runtime. Solvers are available for a variety of logics but are generally restricted to first-order formulas. Some tasks, however, most notably those involving synthesis, are inherently higher order; these are typically handled by embedding a first-order solver (such as a SAT or SMT solver) in a domain-specific algorithm. Using strategies similar to those used in such algorithms, we show how to extend a first-order solver (in this case Kodkod, a model finder for relational logic used as the engine of the Alloy Analyzer) so that it can handle quantifications over higher-order structures. The resulting solver is sufficiently general that it can be applied to a range of problems; it is higher order, so that it can be applied directly, without embedding in another algorithm; and it performs well enough to be competitive with specialized tools on standard benchmarks. Although the approach is demonstrated for a particular relational logic, the principles behind it could be applied to other first-order solvers. Just as the identification of first-order solvers as reusable backends advanced the performance of specialized tools and simplified their architecture, factoring out higher-order solvers may bring similar benefits to a new class of tools.
</description>
<pubDate>Tue, 02 Sep 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/89157</guid>
<dc:date>2014-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>Motion Compatibility for Indoor Localization</title>
<link>https://hdl.handle.net/1721.1/89075</link>
<description>Motion Compatibility for Indoor Localization
Park, Jun-geun; Teller, Seth
Indoor localization -- a device's ability to determine its location within an extended indoor environment -- is a fundamental enabling capability for mobile context-aware applications. Many proposed applications assume localization information from GPS, or from WiFi access points. However, GPS fails indoors and in urban canyons, and current WiFi-based methods require an expensive, and manually intensive, mapping, calibration, and configuration process performed by skilled technicians to bring the system online for end users. We describe a method that estimates indoor location with respect to a prior map consisting of a set of 2D floorplans linked through horizontal and vertical adjacencies. Our main contribution is the notion of "path compatibility," in which the sequential output of a classifier of inertial data producing low-level motion estimates (standing still, walking straight, going upstairs, turning left, etc.) is examined for agreement with the prior map. Path compatibility is encoded in an HMM-based matching model, from which the method recovers the user's location trajectory from the low-level motion estimates. To recognize user motions, we present a motion labeling algorithm, extracting fine-grained user motions from sensor data of handheld mobile devices. We propose "feature templates," which allow the motion classifier to learn the optimal window size for a specific combination of a motion and a sensor feature function. We show that, using only proprioceptive data of the quality typically available on a modern smartphone, our motion labeling algorithm classifies user motions with 94.5% accuracy, and our trajectory matching algorithm can recover the user's location to within 5 meters on average after one minute of movement from an unknown starting location. Prior information, such as a known starting floor, further decreases the time required to obtain a precise location estimate.
</description>
<pubDate>Tue, 26 Aug 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/89075</guid>
<dc:date>2014-08-26T00:00:00Z</dc:date>
</item>
<item>
<title>Energy-Efficient Approximate Computation in Topaz</title>
<link>https://hdl.handle.net/1721.1/88926</link>
<description>Energy-Efficient Approximate Computation in Topaz
Achour, Sara; Rinard, Martin
We present Topaz, a new task-based language for computations that execute on approximate computing platforms that may occasionally produce arbitrarily inaccurate results. The Topaz implementation maps approximate tasks onto the approximate machine and integrates the approximate results into the main computation, deploying a novel outlier detection and reliable reexecution mechanism to prevent unacceptably inaccurate results from corrupting the overall computation. Topaz therefore provides the developers of approximate hardware with substantial freedom in producing designs with little or no precision or accuracy guarantees. Experimental results from our set of benchmark applications demonstrate the effectiveness of Topaz and the Topaz implementation in enabling developers to productively exploit emerging approximate hardware platforms.
</description>
<pubDate>Tue, 19 Aug 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/88926</guid>
<dc:date>2014-08-19T00:00:00Z</dc:date>
</item>
<item>
<title>A Coded Shared Atomic Memory Algorithm for Message Passing Architectures</title>
<link>https://hdl.handle.net/1721.1/88551</link>
<description>A Coded Shared Atomic Memory Algorithm for Message Passing Architectures
Cadambe, Viveck R.; Lynch, Nancy; Medard, Muriel; Musial, Peter
This paper considers the communication and storage costs of emulating atomic (linearizable) multi-writer multi-reader shared memory in distributed message-passing systems. The paper contains three main contributions: (1) We present an atomic shared-memory emulation algorithm that we call Coded Atomic Storage (CAS). This algorithm uses erasure coding methods. In a storage system with 'N' servers that is resilient to 'f' server failures, we show that the communication cost of CAS is N/(N-2f). The storage cost of CAS is unbounded. (2) We present a modification of the CAS algorithm known as CAS with Garbage Collection (CASGC). The CASGC algorithm is parametrized by an integer 'd' and has a bounded storage cost. We show that in every execution where the number of write operations concurrent with a read operation is no greater than 'd', the CASGC algorithm with parameter 'd' satisfies atomicity and liveness. We explicitly characterize the storage cost of CASGC, and show that it has the same communication cost as CAS. (3) We describe an algorithm known as the Communication Cost Optimal Atomic Storage (CCOAS) algorithm that achieves a smaller communication cost than CAS and CASGC. In particular, CCOAS incurs read and write communication costs of N/(N-f) measured in terms of number of object values. We also discuss drawbacks of CCOAS as compared with CAS and CASGC.
</description>
<pubDate>Fri, 01 Aug 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/88551</guid>
<dc:date>2014-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Autotuning Algorithmic Choice for Input Sensitivity</title>
<link>https://hdl.handle.net/1721.1/88083</link>
<description>Autotuning Algorithmic Choice for Input Sensitivity
Ding, Yufei; Ansel, Jason; Veeramachaneni, Kalyan; Shen, Xipeng; O'Reilly, Una-May; Amarasinghe, Saman
Empirical autotuning is increasingly being used in many domains to achieve optimized performance in a variety of different execution environments. A daunting challenge faced by such autotuners is input sensitivity, where the best autotuned configuration may vary with different input sets. In this paper, we propose a two level solution that: first, clusters to find input sets that are similar in input feature space; then, uses an evolutionary autotuner to build an optimized program for each of these clusters; and, finally, builds an adaptive overhead aware classifier which assigns each input to a specific input optimized program. Our approach addresses the complex trade-off between using expensive features, to accurately characterize an input, and cheaper features, which can be computed with less overhead. Experimental results show that by adapting to different inputs one can obtain up to a 3x speedup over using a single configuration for all inputs.
</description>
<pubDate>Mon, 23 Jun 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/88083</guid>
<dc:date>2014-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Possibilistic Beliefs and Higher-Level Rationality</title>
<link>https://hdl.handle.net/1721.1/87727</link>
<description>Possibilistic Beliefs and Higher-Level Rationality
Chen, Jing; Micali, Silvio; Pass, Rafael
We consider rationality and rationalizability for normal-form games of incomplete information in which the players have possibilistic beliefs about their opponents. In this setting, we prove that the strategies compatible with the players being level-k rational coincide with the strategies surviving a natural k-step iterated elimination procedure. We view the latter strategies as the (level-k) rationalizable ones in our possibilistic setting.
</description>
<pubDate>Mon, 09 Jun 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/87727</guid>
<dc:date>2014-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Possibilistic Beliefs and Higher-Level Rationality</title>
<link>https://hdl.handle.net/1721.1/87710</link>
<description>Possibilistic Beliefs and Higher-Level Rationality
Chen, Jing; Micali, Silvio; Pass, Rafael
We consider rationality and rationalizability for normal-form games of incomplete information in which the players have possibilistic beliefs about their opponents. In this setting, we prove that the strategies compatible with the players being level-k rational coincide with the strategies surviving a natural k-step iterated elimination procedure. We view the latter strategies as the (level-k) rationalizable ones in our possibilistic setting.
</description>
<pubDate>Mon, 09 Jun 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/87710</guid>
<dc:date>2014-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Latent Case Model: A Generative Approach for  Case-Based Reasoning and Prototype Classification</title>
<link>https://hdl.handle.net/1721.1/87548</link>
<description>Latent Case Model: A Generative Approach for  Case-Based Reasoning and Prototype Classification
Kim, Been; Rudin, Cynthia; Shah, Julie
We present a general framework for Bayesian case-based reasoning and prototype classification and clustering -- the Latent Case Model (LCM). LCM learns the most representative prototype observations of a dataset by performing joint inference on cluster prototypes and features. Simultaneously, LCM pursues sparsity by learning subspaces, the sets of few features that play important roles in characterizing the prototypes. The prototype and subspace representation preserves interpretability in high-dimensional data. We validate that the approach preserves classification accuracy on standard data sets, and verify through human subject experiments that the output of LCM produces statistically significant improvements in participants' performance on a task requiring an understanding of clusters within a dataset.
</description>
<pubDate>Mon, 26 May 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/87548</guid>
<dc:date>2014-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Quaternionic Representation of the Riesz Pyramid for Video Magnification</title>
<link>https://hdl.handle.net/1721.1/86300</link>
<description>Quaternionic Representation of the Riesz Pyramid for Video Magnification
Wadhwa, Neal; Rubinstein, Michael; Durand, Fredo; Freeman, William T.
Recently, we presented a new image pyramid, called the Riesz pyramid, that uses the Riesz transform to manipulate the phase in non-oriented sub-bands of an image sequence to produce real-time motion-magnified videos. In this report we give a quaternionic formulation of the Riesz pyramid, and show how several seemingly heuristic choices in how to use the Riesz transform for phase-based video magnification fall out of this formulation in a natural and principled way. We intend this report to accompany the original paper on the Riesz pyramid for video magnification.
</description>
<pubDate>Sat, 26 Apr 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/86300</guid>
<dc:date>2014-04-26T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Person Motion Tracking via RF Body Reflections</title>
<link>https://hdl.handle.net/1721.1/86299</link>
<description>Multi-Person Motion Tracking via RF Body Reflections
Adib, Fadel; Kabelac, Zachary; Katabi, Dina
Recently, we have witnessed the emergence of technologies that can localize a user and track her gestures based purely on radio reflections off the person's body. These technologies work even if the user is behind a wall or obstruction. However, for these technologies to be fully practical, they need to address major challenges such as scaling to multiple people, accurately localizing them and tracking their gestures, and localizing static users as opposed to requiring the user to move to be detectable. This paper presents WiZ, the first multi-person centimeter-scale motion tracking system that pinpoints people's locations based purely on RF reflections off their bodies. WiZ can also locate static users by sensing minute changes in their RF reflections due to breathing. Further, it can track concurrent gestures made by different individuals, even when they carry no wireless device on them. We implement a prototype of WiZ and show that it can localize up to five users each with a median accuracy of 8-18 cm and 7-11 cm in the x and y dimensions respectively. WiZ can also detect 3D pointing gestures of multiple users with a median orientation error of 8-16 degrees for each of them. Finally, WiZ can track breathing motion and output the breath count of multiple people with high accuracy.
</description>
<pubDate>Sat, 26 Apr 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/86299</guid>
<dc:date>2014-04-26T00:00:00Z</dc:date>
</item>
<item>
<title>One Clock to Rule Them All: A Primitive for Distributed Wireless Protocols at the Physical Layer</title>
<link>https://hdl.handle.net/1721.1/86298</link>
<description>One Clock to Rule Them All: A Primitive for Distributed Wireless Protocols at the Physical Layer
Abari, Omid; Rahul, Hariharan; Katabi, Dina
Implementing distributed wireless protocols at the physical layer today is challenging because different nodes have different clocks, each of which has slightly different frequencies. This causes the nodes to have frequency offset relative to each other, as a result of which transmitted signals from these nodes do not combine in a predictable manner over time. Past work tackles this challenge and builds distributed PHY layer systems by attempting to address the effects of the frequency offset and compensating for it in the transmitted signals. In this paper, we address this challenge by addressing the root cause - the different clocks with different frequencies on the different nodes. We present AirClock, a new wireless coordination primitive that enables multiple nodes to act as if they are driven by a single clock that they receive wirelessly over the air. AirClock presents a synchronized abstraction to the physical layer, and hence enables direct implementation of diverse kinds of distributed PHY protocols. We illustrate AirClock's versatility by using it to build three different systems: distributed MIMO, distributed rate adaptation for wireless sensors, and pilotless OFDM, and show that they can provide significant performance benefits over today's systems.
</description>
<pubDate>Sun, 27 Apr 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/86298</guid>
<dc:date>2014-04-27T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Execution for (Almost) Free: Hijacking an Existing Implementation to Perform Symbolic Execution</title>
<link>https://hdl.handle.net/1721.1/86235</link>
<description>Symbolic Execution for (Almost) Free: Hijacking an Existing Implementation to Perform Symbolic Execution
Near, Joseph P.; Jackson, Daniel
Symbolic execution of a language is traditionally achieved by replacing the language's interpreter with an entirely new interpreter. This may be an unnecessary burden, and it is tempting instead to try to use as much of the existing interpreter infrastructure as possible, both for handling aspects of the computation that are not symbolic, and for propagating symbolic ones. This approach was used to implement Rubicon, a bounded verification system for Ruby on Rails web applications, in less than 1000 lines of Ruby code. Rubicon uses symbolic execution to derive verification conditions from Rails applications and an off-the-shelf solver to check them. Despite its small size, Rubicon has been used to find previously unknown bugs in open-source Rails applications. The key idea is to encode symbolic values and operations in a library written in the target language itself, overriding only a small part of the standard interpreter. We formalize this approach, showing that replacing a few key operators with symbolic versions in a standard interpreter gives the same effect as replacing the entire interpreter with a symbolic one.
</description>
<pubDate>Tue, 22 Apr 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/86235</guid>
<dc:date>2014-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Moebius Language Reference, Version 1.2</title>
<link>https://hdl.handle.net/1721.1/86174</link>
<description>Moebius Language Reference, Version 1.2
Borchardt, Gary C.
Moebius is a representation and interface language based on a subset of English. It is designed for use as a means of encoding information and as a means of conveying information between software components and other software components, between software components and humans, and between data repositories and their users -- human or machine. This report describes the structure and use of the Moebius language and presents three applications of the language to date.
</description>
<pubDate>Wed, 09 Apr 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/86174</guid>
<dc:date>2014-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Sloth: Being Lazy is a Virtue (When Issuing Database Queries)</title>
<link>https://hdl.handle.net/1721.1/86173</link>
<description>Sloth: Being Lazy is a Virtue (When Issuing Database Queries)
Cheung, Alvin; Madden, Samuel; Solar-Lezama, Armando
Many web applications store persistent data in databases. During execution, such applications spend a significant amount of time communicating with the database, retrieving and storing persistent data over the network. These network round trips represent a significant fraction of the overall execution time for many applications and as a result increase their latency. While prior work has aimed to eliminate round trips by batching queries, these approaches are limited by (1) a requirement that developers manually identify batching opportunities, or (2) the use of static program analysis techniques that cannot exploit many opportunities for batching. In this paper, we present Sloth, a new system that extends traditional lazy evaluation to expose query batching opportunities during application execution, even across loops, branches, and method boundaries. We evaluated Sloth using over 100 benchmarks from two large-scale open-source applications, and achieved up to a 3x reduction in page load time by delaying computation.
</description>
<pubDate>Mon, 14 Apr 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/86173</guid>
<dc:date>2014-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Cicada: Predictive Guarantees for Cloud Network Bandwidth</title>
<link>https://hdl.handle.net/1721.1/85975</link>
<description>Cicada: Predictive Guarantees for Cloud Network Bandwidth
LaCurts, Katrina; Mogul, Jeffrey C.; Balakrishnan, Hari; Turner, Yoshio
In cloud-computing systems, network-bandwidth guarantees have been shown to improve predictability of application performance and cost. Most previous work on cloud-bandwidth guarantees has assumed that cloud tenants know what bandwidth guarantees they want. However, application bandwidth demands can be complex and time-varying, and many tenants might lack sufficient information to request a bandwidth guarantee that is well-matched to their needs. A tenant's lack of accurate knowledge about its future bandwidth demands can lead to over-provisioning (and thus reduced cost-efficiency) or under-provisioning (and thus poor user experience in latency-sensitive user-facing applications). We analyze traffic traces gathered over six months from an HP Cloud Services datacenter, finding that application bandwidth consumption is both time-varying and spatially inhomogeneous. This variability makes it hard to predict requirements. To solve this problem, we develop a prediction algorithm usable by a cloud provider to suggest an appropriate bandwidth guarantee to a tenant. The key idea in the prediction algorithm is to treat a set of previously observed traffic matrices as "experts" and learn online the best weighted linear combination of these experts to make its prediction. With tenant VM placement using these predictive guarantees, we find that the inter-rack network utilization in certain datacenter topologies can be more than doubled.
</description>
<pubDate>Mon, 24 Mar 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/85975</guid>
<dc:date>2014-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>The N2 Corpus v1.0</title>
<link>https://hdl.handle.net/1721.1/85893</link>
<description>The N2 Corpus v1.0
Finlayson, Mark A.; Halverson, Jeffry R.; Corman, Steven R.
The N2 Corpus (Narrative Networks Corpus) comprises 100 story texts (42,480 words) relevant to Islamist Extremism, drawn from religious stories, online material, and promotional magazines. The corpus has been annotated for 14 different layers of syntax and semantics. This v1.0 version is missing 33 texts that will be added in later versions. The corpus is described in: Mark A. Finlayson, Jeffry R. Halverson, and Steven R. Corman (2014) "The N2 Corpus: A semantically annotated collection of Islamist extremist stories", Proceedings of the 9th Language Resources and Evaluation Conference (LREC), Reykjavik, Iceland.
</description>
<pubDate>Sat, 22 Mar 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/85893</guid>
<dc:date>2014-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>An Architecture for Online Affordance-based Perception and Whole-body Planning</title>
<link>https://hdl.handle.net/1721.1/85690</link>
<description>An Architecture for Online Affordance-based Perception and Whole-body Planning
Fallon, Maurice; Kuindersma, Scott; Karumanchi, Sisir; Antone, Matthew; Schneider, Toby; Dai, Hongkai; Perez D'Arpino, Claudia; Deits, Robin; DiCicco, Matt; Fourie, Dehann; Koolen, Twan; Marion, Pat; Posa, Michael; Valenzuela, Andres; Yu, Kuan-Ting; Shah, Julie; Iagnemma, Karl; Tedrake, Russ; Teller, Seth
The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.
</description>
<pubDate>Sun, 16 Mar 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/85690</guid>
<dc:date>2014-03-16T00:00:00Z</dc:date>
</item>
<item>
<title>PIKA: A Network Service for Multikernel Operating Systems</title>
<link>https://hdl.handle.net/1721.1/84608</link>
<description>PIKA: A Network Service for Multikernel Operating Systems
Beckmann, Nathan Z.; Gruenwald III, Charles; Johnson, Christopher R.; Kasture, Harshad; Sironi, Filippo; Agarwal, Anant; Kaashoek, M. Frans; Zeldovich, Nickolai
PIKA is a network stack designed for multikernel operating systems that target potential future architectures lacking cache-coherent shared memory but supporting message passing. PIKA splits the network stack into several servers that communicate using a low-overhead message passing layer. A key challenge faced by PIKA is the maintenance of shared state, such as a single accept queue and load balance information. PIKA addresses this challenge using a speculative 3-way handshake for connection acceptance, and a new distributed load balancing scheme for spreading connections. A PIKA prototype achieves competitive performance, excellent scalability, and low service times under load imbalance on commodity hardware. Finally, we demonstrate that splitting network stack processing by function across separate cores is a net loss on commodity hardware, and we describe conditions under which it may be advantageous.
</description>
<pubDate>Tue, 28 Jan 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/84608</guid>
<dc:date>2014-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Reliability-Aware Optimization of Approximate Computational Kernels with Rely</title>
<link>https://hdl.handle.net/1721.1/83843</link>
<description>Reliability-Aware Optimization of Approximate Computational Kernels with Rely
Misailovic, Sasa; Carbin, Michael; Achour, Sara; Qi, Zichao; Rinard, Martin
Emerging high-performance architectures are anticipated to contain unreliable components (e.g., ALUs) that offer low power consumption at the expense of soft errors. Some applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors and can therefore trade accuracy of their results for reduced energy consumption by utilizing these unreliable hardware components. We present and evaluate a technique for reliability-aware optimization of approximate computational kernel implementations. Our technique takes a standard implementation of a computation and automatically replaces some of its arithmetic operations with unreliable versions that consume less power, but may produce incorrect results with some probability. Our technique works with a developer-provided specification of the required reliability of a computation -- the probability that it returns the correct result -- and produces an unreliable implementation that satisfies that specification. We evaluate our approach on five applications from the image processing, numerical analysis, and financial analysis domains and demonstrate how our technique enables automatic exploration of the trade-off between the reliability of a computation and its performance.
</description>
<pubDate>Thu, 09 Jan 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/83843</guid>
<dc:date>2014-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of Randomized Accuracy-Aware Map-Fold Programs</title>
<link>https://hdl.handle.net/1721.1/83397</link>
<description>Synthesis of Randomized Accuracy-Aware Map-Fold Programs
Misailovic, Sasa; Rinard, Martin
We present Syndy, a technique for automatically synthesizing randomized map/fold computations that trade accuracy for performance. Given a specification of a fully accurate computation, Syndy automatically synthesizes approximate implementations of map and fold tasks, explores the approximate computation space that these approximations induce, and derives an accuracy versus performance tradeoff curve that characterizes the explored space. Each point on the curve corresponds to an approximate randomized program configuration that realizes the probabilistic error and time bounds associated with that point.
</description>
<pubDate>Sun, 29 Dec 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/83397</guid>
<dc:date>2013-12-29T00:00:00Z</dc:date>
</item>
<item>
<title>3D Tracking via Body Radio Reflections</title>
<link>https://hdl.handle.net/1721.1/82913</link>
<description>3D Tracking via Body Radio Reflections
Adib, Fadel; Kabelac, Zach; Katabi, Dina; Miller, Robert C.
This paper introduces WiTrack, a system that tracks the 3D motion of a user from the radio signals reflected off her body. It works even if the person is occluded from the WiTrack device or in a different room. WiTrack does not require the user to carry any wireless device, yet its accuracy exceeds current RF localization systems, which require the user to hold a transceiver. Empirical measurements with a WiTrack prototype show that, on average, it localizes the center of a human body to within 10 to 13 cm in the x and y dimensions, and 21 cm in the z dimension. It also provides coarse tracking of body parts, identifying the direction of a pointing hand with a median of 11.2 degrees. WiTrack bridges a gap between RF-based localization systems, which locate a user through walls and occlusions, and human-computer interaction systems like Kinect, which can track a user without instrumenting her body but require the user to stay within the direct line of sight of the device.
</description>
<pubDate>Wed, 11 Dec 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/82913</guid>
<dc:date>2013-12-11T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging Utility Maximization and Regret Minimization</title>
<link>https://hdl.handle.net/1721.1/82632</link>
<description>Bridging Utility Maximization and Regret Minimization
Chiesa, Alessandro; Micali, Silvio; Zhu, Zeyuan Allen
We relate the strategies obtained by (1) utility maximizers who use regret to refine their set of undominated strategies, and (2) regret minimizers who use weak domination to refine their sets of regret-minimizing strategies.
</description>
<pubDate>Tue, 03 Dec 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/82632</guid>
<dc:date>2013-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>GenBase: A Complex Analytics Genomics Benchmark</title>
<link>https://hdl.handle.net/1721.1/82517</link>
<description>GenBase: A Complex Analytics Genomics Benchmark
Taft, Rebecca; Vartak, Manasi; Satish, Nadathur Rajagopalan; Sundaram, Narayanan; Madden, Samuel; Stonebraker, Michael
This paper introduces a new benchmark, designed to test database management system (DBMS) performance on a mix of data management tasks (joins, filters, etc.) and complex analytics (regression, singular value decomposition, etc.). Such mixed workloads are prevalent in a number of application areas, including most science workloads and web analytics. As a specific use case, we have chosen genomics data for our benchmark, and have constructed a collection of typical tasks in this area. In addition to being representative of a mixed data management and analytics workload, this benchmark is also meant to scale to large dataset sizes and multiple nodes across a cluster. Besides presenting this benchmark, we have run it on a variety of storage systems including traditional row stores, newer column stores, Hadoop, and an array DBMS. We present performance numbers on all systems on single and multiple nodes, and show that performance differs by orders of magnitude between the various solutions. In addition, we demonstrate that most platforms have scalability issues. We also test offloading the analytics onto a coprocessor. The intent of this benchmark is to focus research interest in this area; to this end, all of our data, data generators, and scripts are available on our web site.
</description>
<pubDate>Tue, 19 Nov 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/82517</guid>
<dc:date>2013-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>On Randomized Path Coverage of Configuration Spaces</title>
<link>https://hdl.handle.net/1721.1/82462</link>
<description>On Randomized Path Coverage of Configuration Spaces
Perez, Alejandro
We present a sampling-based algorithm that generates a set of locally-optimal paths that differ in visibility.
</description>
<pubDate>Mon, 18 Nov 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/82462</guid>
<dc:date>2013-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>OpenTuner: An Extensible Framework for Program Autotuning</title>
<link>https://hdl.handle.net/1721.1/81958</link>
<description>OpenTuner: An Extensible Framework for Program Autotuning
Ansel, Jason; Kamil, Shoaib; Veeramachaneni, Kalyan; O'Reilly, Una-May; Amarasinghe, Saman
Program autotuning has been shown to achieve better or more portable performance in a number of domains. However, autotuners themselves are rarely portable between projects, for a number of reasons: using a domain-informed search space representation is critical to achieving good results; search spaces can be intractably large and require advanced machine learning techniques; and the landscape of search spaces can vary greatly between different problems, sometimes requiring domain specific search techniques to explore efficiently. This paper introduces OpenTuner, a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. We demonstrate the efficacy and generality of OpenTuner by building autotuners for 6 distinct projects and 14 total benchmarks, showing speedups over prior techniques of these projects of up to 2.8x with little programmer effort.
</description>
<pubDate>Fri, 01 Nov 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/81958</guid>
<dc:date>2013-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Code for Java Libraries for Accessing the Princeton Wordnet: Comparison and Evaluation</title>
<link>https://hdl.handle.net/1721.1/81949</link>
<description>Code for Java Libraries for Accessing the Princeton Wordnet: Comparison and Evaluation
Finlayson, Mark Alan
This archive contains the code and data for running the evaluations described in: Finlayson, Mark Alan (2014) "Java Libraries for Accessing the Princeton Wordnet: Comparison and Evaluation" in Proceedings of the 7th Global Wordnet Conference (GWC 2014). Tartu, Estonia, 25-29 January 2014. The archive contains five Eclipse projects (compatible with Eclipse 3.8.0) that may be imported directly into an Eclipse workspace. You will need a Java 1.4, 1.5, and 1.6 JRE to run all the code in the archive. Paper abstract: Java is a popular programming language for natural language processing. I compare and evaluate 12 Java libraries designed to access the information in the original Princeton Wordnet databases. From this comparison emerges a set of decision criteria that will enable a user to pick the library most suited to their purposes. I identify five deciding features: (1) availability of similarity metrics; (2) support for editing; (3) availability via Maven; (4) compatibility with retired Java versions; and (5) support for Enterprise Java. I also provide a comparison of other features of each library, the information exposed by each API, and the versions of Wordnet each library supports, and I evaluate each library for the speed of various retrieval operations. In the case that the user's application does not require one of the deciding features, I show that my library, JWI, the MIT Java Wordnet Interface, is the highest-performance, widest-coverage, easiest-to-use library available.
</description>
<pubDate>Fri, 01 Nov 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/81949</guid>
<dc:date>2013-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Asynchronous Failure Detectors</title>
<link>https://hdl.handle.net/1721.1/81371</link>
<description>Asynchronous Failure Detectors
Cornejo, Alejandro; Lynch, Nancy; Sastry, Srikanth
Failure detectors -- oracles that provide information about process crashes -- are an important abstraction for crash tolerance in distributed systems. The generality of failure-detector theory, while providing great expressiveness, poses significant challenges in developing a robust hierarchy of failure detectors. We address some of these challenges by proposing (1) a variant of failure detectors called asynchronous failure detectors and (2) an associated modeling framework. Unlike the traditional failure-detector framework, our framework eschews real-time completely. We show that asynchronous failure detectors are sufficiently expressive to include several popular failure detectors including, but not limited to, the canonical Chandra-Toueg failure detectors, Sigma and other quorum failure detectors, Omega, anti-Omega, Omega^k, and Psi_k. Additionally, asynchronous failure detectors satisfy many desirable properties: they are self-implementable, guarantee that stronger asynchronous failure-detectors solve harder problems, and ensure that their outputs encode no information other than the set of crashed processes. We introduce the notion of a failure detector being representative for a problem to capture the idea that some problems encode the same information about process crashes as their weakest failure detectors do. We show that a large class of problems, called bounded problems, do not have representative failure detectors. Finally, we use the asynchronous failure-detector framework to show how sufficiently strong AFDs circumvent the impossibility of consensus in asynchronous systems.
This report supersedes MIT-CSAIL-TR-2013-002.
</description>
<pubDate>Thu, 10 Oct 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/81371</guid>
<dc:date>2013-10-10T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Shared State with History Maintenance</title>
<link>https://hdl.handle.net/1721.1/81365</link>
<description>Distributed Shared State with History Maintenance
Panchekha, Pavel; Brodsky, Micah Z. (Micah Zev)
Shared mutable state is challenging to maintain in a distributed environment. We develop a technique, based on the Operational Transform, that guides independent agents into producing consistent states through inconsistent but equivalent histories of operations. Our technique, history maintenance, extends and streamlines the Operational Transform for general distributed systems. We describe how to use history maintenance to create eventually-consistent, strongly-consistent, and hybrid systems whose correctness is easy to reason about.
</description>
<pubDate>Tue, 08 Oct 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/81365</guid>
<dc:date>2013-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Mouse Behavior Recognition with The Wisdom of Crowd</title>
<link>https://hdl.handle.net/1721.1/80815</link>
<description>Mouse Behavior Recognition with The Wisdom of Crowd
Ni, Yuzhao; Frogner, Charles A.; Poggio, Tomaso A.
In this thesis, we designed and implemented a crowdsourcing system to annotate mouse behaviors in videos. This involves the development of a novel clip-based video labeling tool, which is more efficient than traditional labeling tools on crowdsourcing platforms, as well as the design of probabilistic inference algorithms that predict the true labels and the workers' expertise from multiple workers' responses. Our algorithms are shown to outperform the majority-vote heuristic. We also carried out extensive experiments to determine the effectiveness of our labeling tool, inference algorithms, and the overall system.
</description>
<pubDate>Thu, 19 Sep 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/80815</guid>
<dc:date>2013-09-19T00:00:00Z</dc:date>
</item>
<item>
<title>Harvesting Application Information for Industry-Scale Relational Schema Matching</title>
<link>https://hdl.handle.net/1721.1/80380</link>
<description>Harvesting Application Information for Industry-Scale Relational Schema Matching
Kushman, Nate; Adib, Fadel; Katabi, Dina; Barzilay, Regina
Consider the problem of migrating a company's CRM or ERP database from one application to another, or integrating two such databases as a result of a merger. This problem requires matching two large relational schemas with hundreds and sometimes thousands of fields. Further, the correct match is likely complex: rather than a simple one-to-one alignment, some fields in the source database may map to multiple fields in the target database, and others may have no equivalent fields in the target database. Despite major advances in schema matching, fully automated solutions to large relational schema matching problems are still elusive. This paper focuses on improving the accuracy of automated large relational schema matching. Our key insight is the observation that modern database applications have a rich user interface that typically exhibits more consistency across applications than the underlying schemas. We associate UI widgets in the application with the underlying database fields on which they operate and demonstrate that this association delivers new information useful for matching large and complex relational schemas. Additionally, we show how to formalize the schema matching problem as a quadratic program, and solve it efficiently using standard optimization and machine learning techniques. We evaluate our approach on real-world CRM applications with hundreds of fields and show that it improves the accuracy by a factor of 2-4x.
</description>
<pubDate>Tue, 10 Sep 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/80380</guid>
<dc:date>2013-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Bidirectional Rapidly-Exploring Random Trees</title>
<link>https://hdl.handle.net/1721.1/79884</link>
<description>Optimal Bidirectional Rapidly-Exploring Random Trees
Jordan, Matthew; Perez, Alejandro
In this paper we present a simple, computationally-efficient, two-tree variant of the RRT* algorithm along with several heuristics.
</description>
<pubDate>Thu, 15 Aug 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79884</guid>
<dc:date>2013-08-15T00:00:00Z</dc:date>
</item>
<item>
<title>Does invariant recognition predict tuning of neurons in sensory cortex?</title>
<link>https://hdl.handle.net/1721.1/79828</link>
<description>Does invariant recognition predict tuning of neurons in sensory cortex?
Poggio, Tomaso; Mutch, Jim; Anselmi, Fabio; Tacchetti, Andrea; Rosasco, Lorenzo; Leibo, Joel Z.
Tuning properties of simple cells in cortical V1 can be described in terms of a "universal shape" characterized by parameter values which hold across different species. This puzzling set of findings begs for a general explanation grounded on an evolutionarily important computational function of the visual cortex. We ask here whether these properties are predicted by the hypothesis that the goal of the ventral stream is to compute for each image a "signature" vector which is invariant to geometric transformations, with the additional assumption that the mechanism for continuously learning and maintaining invariance consists of the memory storage of a sequence of neural images of a few objects undergoing transformations (such as translation, scale changes and rotation) via Hebbian synapses. For V1 simple cells the simplest version of this hypothesis is the online Oja rule which implies that the tuning of neurons converges to the eigenvectors of the covariance of their input. Starting with a set of dendritic fields spanning a range of sizes, simulations supported by a direct mathematical analysis show that the solution of the associated "cortical equation" provides a set of Gabor-like wavelets with parameter values that are in broad agreement with the physiology data. We show however that the simple version of the Hebbian assumption does not predict all the physiological properties. The same theoretical framework also provides predictions about the tuning of cells in V4 and in the face patch AL which are in qualitative agreement with physiology data.
</description>
<pubDate>Tue, 06 Aug 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79828</guid>
<dc:date>2013-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>Sound Input Filter Generation for Integer Overflow Errors</title>
<link>https://hdl.handle.net/1721.1/79827</link>
<description>Sound Input Filter Generation for Integer Overflow Errors
Long, Fan; Sidiroglou-Douskos, Stelios; Kim, Deokhwan; Rinard, Martin
We present a system, SIFT, for generating input filters that nullify integer overflow errors associated with critical program sites such as memory allocation or block copy sites. SIFT uses a static program analysis to generate filters that discard inputs that may trigger integer overflow errors in the computations of the sizes of allocated memory blocks or the number of copied bytes in block copy operations. The generated filters are sound -- if an input passes the filter, it will not trigger an integer overflow error for any analyzed site. Our results show that SIFT successfully analyzes (and therefore generates sound input filters for) 52 out of 58 memory allocation and block memory copy sites in analyzed input processing modules from five applications (VLC, Dillo, Swfdec, Swftools, and GIMP). These nullified errors include six known integer overflow vulnerabilities. Our results also show that applying these filters to 62895 real-world inputs produces no false positives. The analysis and filter generation times are all less than a second.
</description>
<pubDate>Tue, 06 Aug 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79827</guid>
<dc:date>2013-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>Conceptual Design of Software: A Research Agenda</title>
<link>https://hdl.handle.net/1721.1/79826</link>
<description>Conceptual Design of Software: A Research Agenda
Jackson, Daniel
A research agenda in software design is outlined, focusing on the role of concepts. The notions of concepts as "abstract affordances" and of conceptual integrity are discussed, and a series of small examples of conceptual models is given.
</description>
<pubDate>Thu, 08 Aug 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79826</guid>
<dc:date>2013-08-08T00:00:00Z</dc:date>
</item>
<item>
<title>Jigsaw: Scalable Software-Defined Caches (Extended Version)</title>
<link>https://hdl.handle.net/1721.1/79746</link>
<description>Jigsaw: Scalable Software-Defined Caches (Extended Version)
Beckmann, Nathan; Sanchez, Daniel
Shared last-level caches, widely used in chip-multiprocessors (CMPs), face two fundamental limitations. First, the latency and energy of shared caches degrade as the system scales up. Second, when multiple workloads share the CMP, they suffer from interference in shared cache accesses. Unfortunately, prior research addressing one issue either ignores or worsens the other: NUCA techniques reduce access latency but are prone to hotspots and interference, and cache partitioning techniques only provide isolation but do not reduce access latency. We present Jigsaw, a technique that jointly addresses the scalability and interference problems of shared caches. Hardware lets software define shares, collections of cache bank partitions that act as virtual caches, and map data to shares. Shares give software full control over both data placement and capacity allocation. Jigsaw implements efficient hardware support for share management, monitoring, and adaptation. We propose novel resource-management algorithms and use them to develop a system-level runtime that leverages Jigsaw to both maximize cache utilization and place data close to where it is used. We evaluate Jigsaw using extensive simulations of 16- and 64-core tiled CMPs. Jigsaw improves performance by up to 2.2x (18% avg) over a conventional shared cache, and significantly outperforms state-of-the-art NUCA and partitioning techniques.
</description>
<pubDate>Sun, 01 Sep 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79746</guid>
<dc:date>2013-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coded Emulation of Shared Atomic Memory for Message Passing Architectures</title>
<link>https://hdl.handle.net/1721.1/79606</link>
<description>Coded Emulation of Shared Atomic Memory for Message Passing Architectures
Cadambe, Viveck R.; Lynch, Nancy; Medard, Muriel; Musial, Peter
This paper considers the communication and storage costs of emulating atomic (linearizable) read/write shared memory in distributed message-passing systems. We analyze the costs of previously-proposed algorithms by Attiya, Bar-Noy, and Dolev (the ABD algorithm) and by Fan and Lynch (the LDR algorithm), and develop new coding-based algorithms that significantly reduce these costs. The paper contains three main contributions: (1) We present a new shared-memory algorithm that we call CAS, for Coded Atomic Storage. This algorithm uses erasure coding methods. (2) In a storage system with N servers that is resilient to f server failures, we show that the communication costs for the ABD and LDR algorithms, measured in terms of number of object values, are both at least f + 1, whereas the communication cost for CAS is N/(N-2f). (3) We also explicitly quantify the storage costs of the ABD, LDR, and CAS algorithms. The storage cost of the ABD algorithm, measured in terms of number of object values, is N; whereas the storage costs of the LDR and CAS algorithms are both unbounded. We present a modification of the CAS algorithm based on the idea of garbage collection. The modified version of CAS has a storage cost of (d + 1) N/(N-2f), where d is an upper bound on the number of operations that are concurrent with a read operation. Thus, if d is sufficiently small, the storage cost of CAS is lower than those of both the ABD and LDR algorithms.
</description>
<pubDate>Wed, 17 Jul 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79606</guid>
<dc:date>2013-07-17T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Input/Output Automata: a Formal and Compositional Model for Dynamic Systems</title>
<link>https://hdl.handle.net/1721.1/79420</link>
<description>Dynamic Input/Output Automata: a Formal and Compositional Model for Dynamic Systems
Attie, Paul C.; Lynch, Nancy A.
We present dynamic I/O automata (DIOA), a compositional model of dynamic systems, based on I/O automata. In our model, automata can be created and destroyed dynamically, as computation proceeds. In addition, an automaton can dynamically change its signature, that is, the set of actions in which it can participate. This allows us to model mobility, by enforcing the constraint that only automata at the same location may synchronize on common actions. Our model features operators for parallel composition, action hiding, and action renaming. It also features a notion of automaton creation, and a notion of trace inclusion from one dynamic system to another, which can be used to prove that one system implements the other. Our model is hierarchical: a dynamically changing system of interacting automata is itself modeled as a single automaton that is "one level higher." This can be repeated, so that an automaton that represents such a dynamic system can itself be created and destroyed. We can thus model the addition and removal of entire subsystems with a single action. We establish fundamental compositionality results for DIOA: if one component is replaced by another whose traces are a subset of the former, then the set of traces of the system as a whole can only be reduced, and not increased, i.e., no new behaviors are added. That is, parallel composition, action hiding, and action renaming, are all monotonic with respect to trace inclusion. We also show that, under certain technical conditions, automaton creation is monotonic with respect to trace inclusion: if a system creates automaton Ai instead of (previously) creating automaton A'i, and the traces of Ai are a subset of the traces of A'i, then the set of traces of the overall system is possibly reduced, but not increased. Our trace inclusion results imply that trace equivalence is a congruence relation with respect to parallel composition, action hiding, and action renaming.
Our trace inclusion results enable a design and refinement methodology based solely on the notion of externally visible behavior, and which is therefore independent of specific methods of establishing trace inclusion. It permits the refinement of components and subsystems in isolation from the entire system, and provides more flexibility in refinement than a methodology which is, for example, based on the monotonicity of forward simulation with respect to parallel composition. In the latter, every automaton must be refined using forward simulation, whereas in our framework different automata can be refined using different methods. The DIOA model was defined to support the analysis of mobile agent systems, in a joint project with researchers at Nippon Telegraph and Telephone. It can also be used for other forms of dynamic systems, such as systems described by means of object-oriented programs, and systems containing services with changing access permissions.
</description>
<pubDate>Mon, 08 Jul 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79420</guid>
<dc:date>2013-07-08T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying Quantitative Reliability of Programs That Execute on Unreliable Hardware</title>
<link>https://hdl.handle.net/1721.1/79355</link>
<description>Verifying Quantitative Reliability of Programs That Execute on Unreliable Hardware
Carbin, Michael; Misailovic, Sasa; Rinard, Martin
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and recovery from soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. In this paper we present Rely, a programming language that enables developers to reason about the quantitative reliability of an application -- namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification, that characterizes the reliability of the underlying hardware components, and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented in Rely.
</description>
<pubDate>Wed, 19 Jun 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79355</guid>
<dc:date>2013-06-19T00:00:00Z</dc:date>
</item>
<item>
<title>Body-form and body-pose recognition with a hierarchical model of the ventral stream</title>
<link>https://hdl.handle.net/1721.1/79354</link>
<description>Body-form and body-pose recognition with a hierarchical model of the ventral stream
Kim, Heejung; Wohlwend, Jeremy; Leibo, Joel Z.; Poggio, Tomaso
When learning to recognize a novel body shape, e.g., a panda bear, we are not misled by changes in its pose. A "jumping panda bear" is readily recognized, despite having no prior visual experience with the conjunction of these concepts. Likewise, a novel pose can be estimated in an invariant way, with respect to the actor's body shape. These body and pose recognition tasks require invariance to non-generic transformations that previous models of the ventral stream do not have. We show that the addition of biologically plausible, class-specific mechanisms associating previously-viewed actors in a range of poses enables a hierarchical model of object recognition to account for this human capability. These associations could be acquired in an unsupervised manner from past experience.
</description>
<pubDate>Tue, 18 Jun 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79354</guid>
<dc:date>2013-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Reactive Integrated Motion Planning and Execution Using Chekhov</title>
<link>https://hdl.handle.net/1721.1/79078</link>
<description>Reactive Integrated Motion Planning and Execution Using Chekhov
Shroff, Ameya
We envision a world in which robots and humans can collaborate to perform complex tasks in real-world environments. Current motion planners successfully generate trajectories for a robot with multiple degrees of freedom, in a cluttered environment, and ensure that the robot can achieve its goal while avoiding all the obstacles in the environment. However, these planners are not practical in real world scenarios that involve unstructured, dynamic environments, for three primary reasons. First, these motion planners assume that the environment in which the robot is functioning is well-known and static, both during plan generation and plan execution. Second, these planners do not support temporal constraints, which are crucial for planning in a rapidly-changing environment and for allowing task synchronisation between the robot and other agents, like a human or even another robot. Third, the current planners do not adequately represent the requirements of the task. They often over-constrain the task description and are hence unable to take advantage of task flexibility which may aid in optimising energy efficiency or robustness. In this thesis we present Chekhov, a reactive, integrated motion planning and execution executive that addresses these shortcomings using four key innovations. First, unlike traditional planners, the planning and execution components of Chekhov are very closely integrated. This close coupling blurs the traditional, sharp boundary between the two components and allows for optimal collaboration. Second, Chekhov represents temporal constraints, which allows it to perform operations that are temporally synchronised with external events. Third, Chekhov uses an incremental search algorithm which allows it to rapidly generate a new plan if a disturbance is encountered that threatens the execution of the existing plan. 
Finally, unlike standard planners which generate a single reference trajectory from the start pose to the goal pose, Chekhov generates a Qualitative Control Plan using Flow Tubes that represent families of feasible trajectories and associated control policies. These flow tubes provide Chekhov with a flexibility that is extremely valuable and serve as Chekhov's first line of defence.
MEng thesis
</description>
<pubDate>Thu, 06 Jun 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79078</guid>
<dc:date>2013-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>A Publish-Subscribe Implementation of Network Management</title>
<link>https://hdl.handle.net/1721.1/79060</link>
<description>A Publish-Subscribe Implementation of Network Management
Simosa, Jorge D.
As modern networks become highly integrated, heterogeneous, and experience exponential growth, the task of network management becomes increasingly unmanageable for network administrators and designers. The Knowledge Plane (KP) is designed to support a self-managing network, given the organizational constraints of network management, as well as to create synergy and exploit commonality among network applications. In this thesis, to build an Information Plane that is suitable to the requirements of the KP, we propose a publish/subscribe system that provides a clear and systematic framework for resolving tussles in the network. To evaluate the effectiveness of this design, we configured a network of PlanetLab nodes and conducted experiments involving a variety of file sizes and source-destination pairs. The results suggest that the system's performance is not only comparable to existing file transfer services, but that the system also introduces several performance gains that are unattainable with current network architectures.
MEng thesis
</description>
<pubDate>Tue, 04 Jun 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79060</guid>
<dc:date>2013-06-04T00:00:00Z</dc:date>
</item>
<item>
<title>BigBand: GHz-Wide Sensing and Decoding on Commodity Radios</title>
<link>https://hdl.handle.net/1721.1/79058</link>
<description>BigBand: GHz-Wide Sensing and Decoding on Commodity Radios
Hassanieh, Haitham; Shi, Lixin; Abari, Omid; Hamed, Ezzeldine; Katabi, Dina
The goal of this paper is to make sensing and decoding GHz of spectrum simple, cheap, and low power. Our thesis is simple: if we can build a technology that captures GHz of spectrum using commodity Wi-Fi radios, it will have the right cost and power budget to enable a variety of new applications such as GHz-wide dynamic access and concurrent decoding of diverse technologies. This vision will change today's situation where only expensive power-hungry spectrum analyzers can capture GHz-wide spectrum. Towards this goal, the paper harnesses the sparse Fourier transform to compute the frequency representation of a sparse signal without sampling it at full bandwidth. The paper makes the following contributions. First, it presents BigBand, a receiver that can sense and decode a sparse spectrum wider than its own digital bandwidth. Second, it builds a prototype of its design using 3 USRPs that each samples the spectrum at 50 MHz, producing a device that captures 0.9 GHz -- i.e., 6x larger bandwidth than the three USRPs combined. Finally, it extends its algorithm to enable spectrum sensing in scenarios where the spectrum is not sparse.
</description>
<pubDate>Wed, 22 May 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79058</guid>
<dc:date>2013-05-22T00:00:00Z</dc:date>
</item>
<item>
<title>Organon: A Symbolic Constraint Framework &amp; Solver</title>
<link>https://hdl.handle.net/1721.1/79057</link>
<description>Organon: A Symbolic Constraint Framework &amp; Solver
Evans, Isaac; Lynch, Joseph
Organon is an open source system for expressing and solving complex symbolic constraints between generic entities. Our design avoids restricting the programmer's ability to phrase constraints; Organon acts purely as a framework that defines and holds together the key concepts of forms, constraints, and solvers. It has three main components: (1) Forms: Abstract representations of the entities to be constrained. (2) Constraints: Functions that symbolically express requirements on the relationships between forms as well as provide information a solver can use to improve the constraint's satisfaction. (3) Solvers: Functions which inspect instantiations of forms and manipulate them in an attempt to satisfy a set of objective constraints.
</description>
<pubDate>Fri, 24 May 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/79057</guid>
<dc:date>2013-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>High Spatial Resolution BRDFs with Metallic powders Using Wave Optics Analysis</title>
<link>https://hdl.handle.net/1721.1/78590</link>
<description>High Spatial Resolution BRDFs with Metallic powders Using Wave Optics Analysis
Levin, Anat; Glasner, Daniel; Xiong, Ying; Durand, Fredo; Freeman, William; Matusik, Wojciech; Zickler, Todd
This manuscript completes the analysis of our SIGGRAPH 2013 paper "Fabricating BRDFs at High Spatial Resolution Using Wave Optics" in which photolithography fabrication was used for manipulating reflectance effects. While photolithography allows for precise reflectance control, it is costly to fabricate. Here we explore an inexpensive alternative to micro-fabrication, in the form of metallic powders. Such powders are readily available at a variety of particle sizes and morphologies. Using an analysis similar to the micro-fabrication paper, we provide guidelines for the relation between the particles' shape and size and the reflectance functions they can produce.
</description>
<pubDate>Wed, 24 Apr 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/78590</guid>
<dc:date>2013-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>Compositional Policy Priors</title>
<link>https://hdl.handle.net/1721.1/78573</link>
<description>Compositional Policy Priors
Wingate, David; Diuk, Carlos; O'Donnell, Timothy; Tenenbaum, Joshua; Gershman, Samuel
This paper describes a probabilistic framework for incorporating structured inductive biases into reinforcement learning. These inductive biases arise from policy priors, probability distributions over optimal policies. Borrowing recent ideas from computational linguistics and Bayesian nonparametrics, we define several families of policy priors that express compositional, abstract structure in a domain. Compositionality is expressed using probabilistic context-free grammars, enabling a compact representation of hierarchically organized sub-tasks. Useful sequences of sub-tasks can be cached and reused by extending the grammars nonparametrically using Fragment Grammars. We present Monte Carlo methods for performing inference, and show how structured policy priors lead to substantially faster learning in complex domains compared to methods without inductive biases.
</description>
<pubDate>Fri, 12 Apr 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/78573</guid>
<dc:date>2013-04-12T00:00:00Z</dc:date>
</item>
<item>
<title>Task-Structured Probabilistic I/O Automata</title>
<link>https://hdl.handle.net/1721.1/78359</link>
<description>Task-Structured Probabilistic I/O Automata
Canetti, Ran; Cheung, Ling; Kaynar, Dilsun; Liskov, Moses; Lynch, Nancy; Pereira, Olivier; Segala, Roberto
Modeling frameworks such as Probabilistic I/O Automata (PIOA) and Markov Decision Processes permit both probabilistic and nondeterministic choices. In order to use these frameworks to express claims about probabilities of events, one needs mechanisms for resolving nondeterministic choices. For PIOAs, nondeterministic choices have traditionally been resolved by schedulers that have perfect information about the past execution. However, these schedulers are too powerful for certain settings, such as cryptographic protocol analysis, where information must sometimes be hidden. Here, we propose a new, less powerful nondeterminism-resolution mechanism for PIOAs, consisting of tasks and local schedulers. Tasks are equivalence classes of system actions that are scheduled by oblivious, global task sequences. Local schedulers resolve nondeterminism within system components, based on local information only. The resulting task-PIOA framework yields simple notions of external behavior and implementation, and supports simple compositionality results. We also define a new kind of simulation relation, and show it to be sound for proving implementation. We illustrate the potential of the task-PIOA framework by outlining its use in verifying an Oblivious Transfer protocol.
"May 28, 2009."
</description>
<pubDate>Thu, 01 Jan 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/78359</guid>
<dc:date>2009-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracking 3-D Rotations with the Quaternion Bingham Filter</title>
<link>https://hdl.handle.net/1721.1/78248</link>
<description>Tracking 3-D Rotations with the Quaternion Bingham Filter
Glover, Jared; Kaelbling, Leslie Pack
A deterministic method for sequential estimation of 3-D rotations is presented. The Bingham distribution is used to represent uncertainty directly on the unit quaternion hypersphere. Quaternions avoid the degeneracies of other 3-D orientation representations, while the Bingham distribution allows tracking of large-error (high-entropy) rotational distributions. Experimental comparison to a leading EKF-based filtering approach on both synthetic signals and a ball-tracking dataset shows that the Quaternion Bingham Filter (QBF) has lower tracking error than the EKF, particularly when the state is highly dynamic. We present two versions of the QBF, suitable for tracking the state of first- and second-order rotating dynamical systems.
</description>
<pubDate>Wed, 27 Mar 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/78248</guid>
<dc:date>2013-03-27T00:00:00Z</dc:date>
</item>
<item>
<title>Faces as a "Model Category" for Visual Object Recognition</title>
<link>https://hdl.handle.net/1721.1/77936</link>
<description>Faces as a "Model Category" for Visual Object Recognition
Tan, Cheston; Poggio, Tomaso
Visual recognition is an important ability that is central to many everyday tasks such as reading, navigation and social interaction, and is therefore actively studied in neuroscience, cognitive psychology and artificial intelligence. There exist thousands of object categories, all of which pose similar challenges to biological and artificial visual systems: accurate recognition under varying location, scale, view angle, illumination and clutter. In many areas of science, important discoveries have been made using "model organisms" such as fruit flies, mice and macaques. For the thousands of object categories, the important and well-studied category of faces could potentially serve as a "model category" upon which efforts are focused, and from which fundamental insights are drawn. However, it has been hotly debated whether faces are processed by the brain in a manner fundamentally different from other categories. Here we show that "neural tuning size" -- a single parameter in a computational model of object processing -- is able to account for important face-specific phenomena. Thus, surprisingly, "face-like" processing is explainable by physiological mechanisms that differ only quantitatively from "object-like" processing. Our computational proof-of-principle provides specific neural tuning properties that correspond to the so-far qualitative and controversial notion of "holistic" face processing. Overall, faces may be a viable model category. Since faces are highly amenable to complementary experimental techniques like functional MRI, electrophysiology, electroencephalography and transcranial magnetic stimulation, this further raises the odds that the algorithms and neural circuits underlying visual recognition may first be solved for faces. With faces serving as a model category, the great scientific challenge of understanding and reverse-engineering general visual recognition can be greatly accelerated.
</description>
<pubDate>Mon, 18 Mar 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/77936</guid>
<dc:date>2013-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>A Plan for Optimizing Network-Intensive Cloud Applications</title>
<link>https://hdl.handle.net/1721.1/77238</link>
<description>A Plan for Optimizing Network-Intensive Cloud Applications
LaCurts, Katrina; Deng, Shuo; Balakrishnan, Hari
A significant and growing number of applications deployed on cloud infrastructures are network-intensive. These applications are frequently bottlenecked by the speed of network connections between the machines on which they are deployed. Due to the complexity and size of cloud networks, such applications often run slowly or have unpredictable completion times and/or throughput, both of which can result in increased cost to the customer. In this paper, we argue that cloud customers should be able to express the demands and objectives of their applications. We outline an architecture that allows for this type of expression, and distributes applications within the cloud network such that the application's objectives are met. We discuss some of the key questions that need to be addressed to implement the architecture, as well as the interactions between optimizations done by clients and by cloud providers. We also present preliminary results that indicate that these types of systems are feasible and improve performance.
</description>
<pubDate>Tue, 12 Feb 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/77238</guid>
<dc:date>2013-02-12T00:00:00Z</dc:date>
</item>
<item>
<title>Asynchronous Failure Detectors</title>
<link>https://hdl.handle.net/1721.1/76716</link>
<description>Asynchronous Failure Detectors
Cornejo, Alejandro; Lynch, Nancy; Sastry, Srikanth
Failure detectors -- oracles that provide information about process crashes -- are an important abstraction for crash tolerance in distributed systems. The generality of failure-detector theory, while providing great expressiveness, poses significant challenges in developing a robust hierarchy of failure detectors. We address some of these challenges by proposing (1) a variant of failure detectors called asynchronous failure detectors and (2) an associated modeling framework. Unlike the traditional failure-detector framework, our framework eschews real-time completely. We show that asynchronous failure detectors are sufficiently expressive to include several popular failure detectors including, but not limited to, the canonical Chandra-Toueg failure detectors, Sigma and other quorum failure detectors, Omega, anti-Omega, Omega^k, and Psi_k. Additionally, asynchronous failure detectors satisfy many desirable properties: they are self-implementable, guarantee that stronger asynchronous failure-detectors solve harder problems, and ensure that their outputs encode no information other than the set of crashed processes. We introduce the notion of a failure detector being representative for a problem to capture the idea that some problems encode the same information about process crashes as their weakest failure detectors do. We show that a large class of problems, called bounded problems, do not have representative failure detectors. Finally, we use the asynchronous failure-detector framework to show how sufficiently strong AFDs circumvent the impossibility of consensus in asynchronous systems.
This report is superseded by MIT-CSAIL-TR-2013-025.
</description>
<pubDate>Wed, 30 Jan 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/76716</guid>
<dc:date>2013-01-30T00:00:00Z</dc:date>
</item>
<item>
<title>Securing Deployed RFIDs by Randomizing the Modulation and the Channel</title>
<link>https://hdl.handle.net/1721.1/76260</link>
<description>Securing Deployed RFIDs by Randomizing the Modulation and the Channel
Wang, Jue; Hassanieh, Haitham; Katabi, Dina; Kohno, Tadayoshi
RFID cards are widely used today in sensitive applications such as access control, payment systems, and asset tracking. Past work shows that an eavesdropper snooping on the communication between a card and its legitimate reader can break their cryptographic protocol and obtain their secret keys. One solution for this problem is to install stronger cryptographic protocols on the cards. However, RFIDs' size, power, and cost limitations do not allow for conventional cryptographic protocols. Further, installing new protocols requires revoking billions of cards in consumers' hands and facilities worldwide, which is costly and impractical. In this paper, we ask whether one can secure RFIDs from such attacks without revoking or changing the insecure cards. We propose LocRF, a solution that changes the signal used to read the RFID cards but does not require any changes to the cards themselves. LocRF introduces a new approach that randomizes the modulation of the RFID signal as well as the wireless channel. This design protects RFIDs from eavesdroppers even if they use multi-antenna MIMO receivers. We built a prototype of LocRF on software-defined radios and used it to secure the communication of off-the-shelf cards. Both our analysis and empirical evaluation demonstrate the effectiveness of LocRF.
</description>
<pubDate>Sat, 12 Jan 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/76260</guid>
<dc:date>2013-01-12T00:00:00Z</dc:date>
</item>
<item>
<title>The computational magic of the ventral stream: sketch of a theory (and why some deep architectures work).</title>
<link>https://hdl.handle.net/1721.1/76248</link>
<description>The computational magic of the ventral stream: sketch of a theory (and why some deep architectures work).
Poggio, Tomaso; Mutch, Jim; Leibo, Joel; Rosasco, Lorenzo; Tacchetti, Andrea
This paper explores the theoretical consequences of a simple assumption: the computational goal of the feedforward path in the ventral stream -- from V1, V2, V4 and to IT -- is to discount image transformations, after learning them during development.
</description>
<pubDate>Sat, 29 Dec 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/76248</guid>
<dc:date>2012-12-29T00:00:00Z</dc:date>
</item>
<item>
<title>5D Covariance Tracing for Efficient Defocus and Motion Blur</title>
<link>https://hdl.handle.net/1721.1/74662</link>
<description>5D Covariance Tracing for Efficient Defocus and Motion Blur
Belcour, Laurent; Soler, Cyril; Subr, Kartic; Holzschuch, Nicolas; Durand, Fredo
The rendering of effects such as motion blur and depth-of-field requires costly 5D integrals. We dramatically accelerate their computation through adaptive sampling and reconstruction based on the prediction of the anisotropy and bandwidth of the integrand. For this, we develop a new frequency analysis of the 5D temporal light-field, and show that first-order motion can be handled through simple changes of coordinates in 5D. We further introduce a compact representation of the spectrum using the covariance matrix and Gaussian approximations. We derive update equations for the 5 × 5 covariance matrices for each atomic light transport event, such as transport, occlusion, BRDF, texture, lens, and motion. The focus on atomic operations makes our work general, and removes the need for special-case formulas. We present a new rendering algorithm that computes 5D covariance matrices on the image plane by tracing paths through the scene, focusing on the single-bounce case. This allows us to reduce sampling rates when appropriate and perform reconstruction of images with complex depth-of-field and motion blur effects.
</description>
<pubDate>Fri, 16 Nov 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/74662</guid>
<dc:date>2012-11-16T00:00:00Z</dc:date>
</item>
<item>
<title>Monitoring the Execution of Temporal Plans for Robotic Systems</title>
<link>https://hdl.handle.net/1721.1/73686</link>
<description>Monitoring the Execution of Temporal Plans for Robotic Systems
Levine, Steven J.
To achieve robustness in dynamic and uncertain environments, robotic systems must monitor the progress of their plans during execution. This thesis develops a plan executive called Pike that is capable of executing and monitoring plans. The execution monitor at its core quickly and efficiently detects relevant disturbances that threaten future actions in the plan. We present a set of novel offline algorithms that extract sets of candidate causal links from temporally-flexible plans. A second set of algorithms uses these causal links to monitor the execution online and detect problems with low latency. We additionally introduce the TBurton executive, a system capable of robustly meeting a user's high-level goals through the combined use of Pike and a temporal generative planner. An innovative voice-commanded robot is demonstrated in hardware and simulation that robustly meets high level goals and verbalizes any causes of failure using the execution monitor.
MEng thesis
</description>
<pubDate>Thu, 04 Oct 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/73686</guid>
<dc:date>2012-10-04T00:00:00Z</dc:date>
</item>
<item>
<title>A Gaussian Approximation of Feature Space for Fast Image Similarity</title>
<link>https://hdl.handle.net/1721.1/73685</link>
<description>A Gaussian Approximation of Feature Space for Fast Image Similarity
Gharbi, Michael; Malisiewicz, Tomasz; Paris, Sylvain; Durand, Frédo
We introduce a fast technique for the robust computation of image similarity. It builds on a re-interpretation of the recent exemplar-based SVM approach, where a linear SVM is trained at a query point and distance is computed as the dot product with the normal to the separating hyperplane. Although exemplar-based SVM is slow because it requires a new training for each exemplar, the latter approach has shown robustness for image retrieval and object classification, yielding state-of-the-art performance on the PASCAL VOC 2007 detection task despite its simplicity. We re-interpret it by viewing the SVM between a single point and the set of negative examples as the computation of the tangent to the manifold of images at the query. We show that, in a high-dimensional space such as that of image features, all points tend to lie at the periphery and that they are usually separable from the rest of the set. We then use a simple Gaussian approximation to the set of all images in feature space, and fit it by computing the covariance matrix on a large training set. Given the covariance matrix, the computation of the tangent or normal at a point is straightforward and is a simple multiplication by the inverse covariance. This allows us to dramatically speed up image retrieval tasks, going from more than ten minutes to a single second. We further show that our approach is equivalent to feature-space whitening and has links to image saliency.
</description>
<pubDate>Mon, 01 Oct 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/73685</guid>
<dc:date>2012-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Tracking for Real-Time Dense RGB-D Mapping with Kintinuous</title>
<link>https://hdl.handle.net/1721.1/73167</link>
<description>Robust Tracking for Real-Time Dense RGB-D Mapping with Kintinuous
Whelan, Thomas; Johannsson, Hordur; Kaess, Michael; Leonard, John J.; McDonald, John
This paper describes extensions to the Kintinuous algorithm for spatially extended KinectFusion, incorporating the following additions: (i) the integration of multiple 6DOF camera odometry estimation methods for robust tracking; (ii) a novel GPU-based implementation of an existing dense RGB-D visual odometry algorithm; (iii) advanced fused real-time surface coloring. These extensions are validated with extensive experimental results, both quantitative and qualitative, demonstrating the ability to build dense fully colored models of spatially extended environments for robotics and virtual reality applications while remaining robust against scenes with challenging sets of geometric and visual features.
</description>
<pubDate>Mon, 17 Sep 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/73167</guid>
<dc:date>2012-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>Aeolus Reference Manual</title>
<link>https://hdl.handle.net/1721.1/73017</link>
<description>Aeolus Reference Manual
Liskov, Barbara
This document describes the interface that the Aeolus information flow platform provides for users who are implementing applications using Java. The document explains how the Aeolus features are made available by means of a Java library.
</description>
<pubDate>Fri, 14 Sep 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/73017</guid>
<dc:date>2012-09-14T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale Geometric Methods for Data Sets I: Multiscale SVD, Noise and Curvature</title>
<link>https://hdl.handle.net/1721.1/72597</link>
<description>Multiscale Geometric Methods for Data Sets I: Multiscale SVD, Noise and Curvature
Little, Anna V.; Maggioni, Mauro; Rosasco, Lorenzo
Large data sets are often modeled as being noisy samples from probability distributions in R^D, with D large. It has been noticed that oftentimes the support M of these probability distributions seems to be well-approximated by low-dimensional sets, perhaps even by manifolds. We shall consider sets that are locally well approximated by k-dimensional planes, with k &lt;&lt; D, with k-dimensional manifolds isometrically embedded in R^D being a special case. Samples from this distribution are furthermore corrupted by D-dimensional noise. Certain tools from multiscale geometric measure theory and harmonic analysis seem well-suited to be adapted to the study of samples from such probability distributions, in order to yield quantitative geometric information about them. In this paper we introduce and study multiscale covariance matrices, i.e. covariances corresponding to the distribution restricted to a ball of radius r, with a fixed center and varying r, and under rather general geometric assumptions we study how their empirical, noisy counterparts behave. We prove that in the range of scales where these covariance matrices are most informative, the empirical, noisy covariances are close to their expected, noiseless counterparts. In fact, this is true as soon as the number of samples in the balls where the covariance matrices are computed is linear in the intrinsic dimension of M. As an application, we present an algorithm for estimating the intrinsic dimension of M.
</description>
<pubDate>Sat, 08 Sep 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/72597</guid>
<dc:date>2012-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>A Social-Welfare Optimal Probabilistic Mechanism for Knightian Single-Good Auctions</title>
<link>https://hdl.handle.net/1721.1/72584</link>
<description>A Social-Welfare Optimal Probabilistic Mechanism for Knightian Single-Good Auctions
Chiesa, Alessandro; Micali, Silvio; Zhu, Zeyuan Allen
We provide an optimal probabilistic mechanism for maximizing social welfare in single-good auctions when each player does not know his true valuation for the good, but only a set of valuations that is guaranteed to include his true one.
</description>
<pubDate>Fri, 07 Sep 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/72584</guid>
<dc:date>2012-09-07T00:00:00Z</dc:date>
</item>
<item>
<title>From Formal Methods to Executable Code</title>
<link>https://hdl.handle.net/1721.1/72537</link>
<description>From Formal Methods to Executable Code
Musial, Peter M.
The objective of this work is the derivation of software that is verifiably correct. Our approach is to abstract system specifications and model these in a formal framework called Timed Input/Output Automata, which provides a notation for expressing distributed systems and mathematical support for reasoning about their properties. Although formal reasoning is easier at an abstract level, it is not clear how to transform these abstractions into executable code. During system implementation, when an abstract system specification is left up to human interpretation, this opens a possibility of undesirable behaviors being introduced into the final code, thereby nullifying all formal efforts. This manuscript addresses this issue and presents a set of transformation methods for systems described as a network of timed automata into Java code for distributed platforms. We prove that the presented transformation methods preserve guarantees of the source specifications, and therefore, result in code that is correct by construction.
Note: the cover page of this report shows an incorrect title.  The title given on the first page of the document itself is correct.
</description>
<pubDate>Mon, 27 Aug 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/72537</guid>
<dc:date>2012-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Bounded-Contention Coding for Wireless Networks in the High SNR Regime</title>
<link>https://hdl.handle.net/1721.1/72536</link>
<description>Bounded-Contention Coding for Wireless Networks in the High SNR Regime
Censor-Hillel, Keren; Haeupler, Bernhard; Lynch, Nancy; Medard, Muriel
Efficient communication in wireless networks is typically challenged by the possibility of interference among several transmitting nodes. Much important research has been invested in decreasing the number of collisions in order to obtain faster algorithms for communication in such networks. This paper proposes a novel approach for wireless communication, which embraces collisions rather than avoiding them, over an additive channel. It introduces a coding technique called Bounded-Contention Coding (BCC) that allows collisions to be successfully decoded by the receiving nodes into the original transmissions and whose complexity depends on a bound on the contention among the transmitters. BCC enables deterministic local broadcast in a network with n nodes and at most a transmitters with information of L bits each within O(a log n + aL) bits of communication with full-duplex radios, and O((a log n + aL)(log n)) bits, with high probability, with half-duplex radios. When combined with random linear network coding, BCC gives global broadcast within O((D + a + log n)(a log n + L)) bits, with high probability. This also holds in dynamic networks that can change arbitrarily over time by a worst-case adversary. When no bound on the contention is given, it is shown how to probabilistically estimate it and obtain global broadcast that is adaptive to the true contention in the network.
</description>
<pubDate>Mon, 27 Aug 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/72536</guid>
<dc:date>2012-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Using Program Synthesis for Social Recommendations</title>
<link>https://hdl.handle.net/1721.1/72106</link>
<description>Using Program Synthesis for Social Recommendations
Cheung, Alvin; Solar-Lezama, Armando; Madden, Samuel
This paper presents a new approach to select events of interest to a user in a social media setting where events are generated by the activities of the user's friends through their mobile devices. We argue that given the unique requirements of the social media setting, the problem is best viewed as an inductive learning problem, where the goal is to first generalize from the users' expressed "likes" and "dislikes" of specific events, then to produce a program that can be manipulated by the system and distributed to the collection devices to collect only data of interest. The key contribution of this paper is a new algorithm that combines existing machine learning techniques with new program synthesis technology to learn users' preferences. We show that when compared with the more standard approaches, our new algorithm provides up to order-of-magnitude reductions in model training time, and significantly higher prediction accuracies for our target application. The approach also improves on standard machine learning techniques in that it produces clear programs that can be manipulated to optimize data collection and filtering.
</description>
<pubDate>Mon, 13 Aug 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/72106</guid>
<dc:date>2012-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>The Order Independence of Iterated Dominance in Extensive Games, with Connections to Mechanism Design and Backward Induction</title>
<link>https://hdl.handle.net/1721.1/71953</link>
<description>The Order Independence of Iterated Dominance in Extensive Games, with Connections to Mechanism Design and Backward Induction
Chen, Jing; Micali, Silvio
Shimoji and Watson (1998) prove that a strategy of an extensive game is rationalizable in the sense of Pearce if and only if it survives the maximal elimination of conditionally dominated strategies. Briefly, this process iteratively eliminates conditionally dominated strategies according to a specific order, which is also the start of an order of elimination of weakly dominated strategies. Since the final set of possible payoff profiles, or terminal nodes, surviving iterated elimination of weakly dominated strategies may be order-dependent, one may suspect that the same holds for conditional dominance. We prove that, although the sets of strategy profiles surviving two arbitrary elimination orders of conditional dominance may be very different from each other, they are equivalent in the following sense: for each player i and each pair of elimination orders, there exists a function phi_i mapping each strategy of i surviving the first order to a strategy of i surviving the second order, such that, for every strategy profile s surviving the first order, the profile (phi_i(s_i))_i induces the same terminal node as s does. To prove our results we put forward a new notion of dominance and an elementary characterization of extensive-form rationalizability (EFR) that may be of independent interest. We also establish connections between EFR and other existing iterated dominance procedures, using our notion of dominance and our characterization of EFR.
</description>
<pubDate>Tue, 31 Jul 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71953</guid>
<dc:date>2012-07-31T00:00:00Z</dc:date>
</item>
<item>
<title>Patch complexity, finite pixel correlations and optimal denoising</title>
<link>https://hdl.handle.net/1721.1/71919</link>
<description>Patch complexity, finite pixel correlations and optimal denoising
Levin, Anat; Nadler, Boaz; Durand, Fredo; Freeman, William T.
Image restoration tasks are ill-posed problems, typically solved with priors. Since the optimal prior is the exact unknown density of natural images, actual priors are only approximate and typically restricted to small patches. This raises several questions: How much may we hope to improve current restoration results with future sophisticated algorithms? And more fundamentally, even with perfect knowledge of natural image statistics, what is the inherent ambiguity of the problem? In addition, since most current methods are limited to finite support patches or kernels, what is the relation between the patch complexity of natural images, patch size, and restoration errors? Focusing on image denoising, we make several contributions. First, in light of computational constraints, we study the relation between denoising gain and sample size requirements in a non-parametric approach. We present a law of diminishing returns, namely that with increasing patch size, rare patches not only require a much larger dataset, but also gain little from it. This result suggests novel adaptive variable-sized patch schemes for denoising. Second, we study absolute denoising limits, regardless of the algorithm used, and the convergence rate to them as a function of patch size. Scale invariance of natural images plays a key role here and implies both a strictly positive lower bound on denoising and a power law convergence. Extrapolating this parametric law gives a ballpark estimate of the best achievable denoising, suggesting that some improvement, although modest, is still possible.
</description>
<pubDate>Sun, 07 Oct 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71919</guid>
<dc:date>2012-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>Viewstamped Replication Revisited</title>
<link>https://hdl.handle.net/1721.1/71763</link>
<description>Viewstamped Replication Revisited
Liskov, Barbara; Cowling, James
This paper presents an updated version of Viewstamped Replication, a replication technique that handles failures in which nodes crash. It describes how client requests are handled, how the group reorganizes when a replica fails, and how a failed replica is able to rejoin the group. The paper also describes a number of important optimizations and presents a protocol for handling reconfigurations that can change both the group membership and the number of failures the group is able to handle.
</description>
<pubDate>Mon, 23 Jul 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71763</guid>
<dc:date>2012-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Kintinuous: Spatially Extended KinectFusion</title>
<link>https://hdl.handle.net/1721.1/71756</link>
<description>Kintinuous: Spatially Extended KinectFusion
Whelan, Thomas; Kaess, Michael; Fallon, Maurice; Johannsson, Hordur; Leonard, John; McDonald, John
In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended scale environments in real-time. This is achieved through (i) altering the original algorithm such that the region of space being mapped by the KinectFusion algorithm can vary dynamically, (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume due to this variation, and, (iii) incrementally adding the resulting points to a triangular mesh representation of the environment. The system is implemented as a set of hierarchical multi-threaded components which are capable of operating in real-time. The architecture facilitates the creation and integration of new modules with minimal impact on the performance of the dense volume tracking and surface reconstruction modules. We provide experimental results demonstrating the system's ability to map areas considerably beyond the scale of the original KinectFusion algorithm, including a two story apartment and an extended sequence taken from a car at night. In order to overcome failure of the iterative closest point (ICP) based odometry in areas of low geometric features, we have evaluated the Fast Odometry from Vision (FOVIS) system as an alternative. We provide a comparison between the two approaches in which we show a trade off between the reduced drift of the visual odometry approach and the higher local mesh quality of the ICP-based approach. Finally we present ongoing work on incorporating full simultaneous localisation and mapping (SLAM) pose-graph optimisation.
</description>
<pubDate>Thu, 19 Jul 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71756</guid>
<dc:date>2012-07-19T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated robot task and motion planning in belief space</title>
<link>https://hdl.handle.net/1721.1/71529</link>
<description>Integrated robot task and motion planning in belief space
Kaelbling, Leslie Pack; Lozano-Perez, Tomas
In this paper, we describe an integrated strategy for planning, perception, state-estimation and action in complex mobile manipulation domains. The strategy is based on planning in the belief space of probability distributions over states. Our planning approach is based on hierarchical goal regression (pre-image back-chaining). We develop a vocabulary of fluents that describe sets of belief states, which are goals and subgoals in the planning process. We show that a relatively small set of symbolic operators leads to task-oriented perception in support of the manipulation goals. An implementation of this method is demonstrated in simulation and on a real PR2 robot, showing robust, flexible solution of mobile manipulation problems with multiple objects and substantial uncertainty.
</description>
<pubDate>Tue, 03 Jul 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71529</guid>
<dc:date>2012-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated Robot Task and Motion Planning in the Now</title>
<link>https://hdl.handle.net/1721.1/71521</link>
<description>Integrated Robot Task and Motion Planning in the Now
Kaelbling, Leslie Pack; Lozano-Perez, Tomas
This paper provides an approach to integrating geometric motion planning with logical task planning for long-horizon tasks in domains with many objects. We propose a tight integration between the logical and geometric aspects of planning. We use a logical representation which includes entities that refer to poses, grasps, paths and regions, without the need for a priori discretization. Given this representation and some simple mechanisms for geometric inference, we characterize the pre-conditions and effects of robot actions in terms of these logical entities. We then reason about the interaction of the geometric and non-geometric aspects of our domains using the general-purpose mechanism of goal regression (also known as pre-image backchaining). We propose an aggressive mechanism for temporal hierarchical decomposition, which postpones the pre-conditions of actions to create an abstraction hierarchy that both limits the lengths of plans that need to be generated and limits the set of objects relevant to each plan. We describe an implementation of this planning method and demonstrate it in a simulated kitchen environment in which it solves problems that require approximately 100 individual pick or place operations for moving multiple objects in a complex domain.
</description>
<pubDate>Fri, 29 Jun 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71521</guid>
<dc:date>2012-06-29T00:00:00Z</dc:date>
</item>
<item>
<title>Epistemic Implementation and The Arbitrary-Belief Auction</title>
<link>https://hdl.handle.net/1721.1/71232</link>
<description>Epistemic Implementation and The Arbitrary-Belief Auction
Chen, Jing; Micali, Silvio; Pass, Rafael
In settings of incomplete information we put forward an epistemic framework for designing mechanisms that successfully leverage the players' arbitrary higher-order beliefs, even when such beliefs are totally wrong, and even when the players are rational in a very weak sense. Following Aumann (1995), we consider a player i rational if he uses a pure strategy s_i such that no alternative pure strategy s_i' performs better than s_i in every world i considers possible, and consider him order-k rational if he is rational and believes that all other players are order-(k-1) rational. We then introduce an iterative deletion procedure of dominated strategies and use it to precisely characterize the strategies consistent with the players being order-k rational. We exemplify the power of our framework in single-good auctions by introducing and achieving a new class of revenue benchmarks, defined over the players' arbitrary beliefs, that can be much higher than classical ones, and are unattainable by traditional mechanisms. Namely, we exhibit a mechanism that, for every k greater than or equal to 0 and epsilon&gt;0 and whenever the players are order-(k+1) rational, guarantees revenue greater than or equal to G^k-epsilon, where G^k is the second highest belief about belief about ... (k times) about the highest valuation of some player, even when such a player's identity is not precisely known. Importantly, our mechanism is possibilistic interim individually rational. Essentially this means that, based on his beliefs, a player's utility is non-negative not in expectation, but in each world he believes possible. We finally show that our benchmark G^k is so demanding that it separates the revenue achievable with order-k rational players from that achievable with order-(k+1) rational ones. That is, no possibilistic interim individually rational mechanism can guarantee revenue greater than or equal to G^k-c, for any constant c&gt;0, when the players are only order-k rational.
</description>
<pubDate>Fri, 22 Jun 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71232</guid>
<dc:date>2012-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>Throwing Down the Visual Intelligence Gauntlet</title>
<link>https://hdl.handle.net/1721.1/71199</link>
<description>Throwing Down the Visual Intelligence Gauntlet
Tan, Cheston; Leibo, Joel Z; Poggio, Tomaso
In recent years, scientific and technological advances have produced artificial systems that have matched or surpassed human capabilities in narrow domains such as face detection and optical character recognition. However, the problem of producing truly intelligent machines still remains far from being solved. In this chapter, we first describe some of these recent advances, and then review one approach to moving beyond these limited successes---the neuromorphic approach of studying and reverse-engineering the networks of neurons in the human brain (specifically, the visual system). Finally, we discuss several possible future directions in the quest for visual intelligence.
</description>
<pubDate>Sun, 01 Jan 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71199</guid>
<dc:date>2012-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Parametric Auctions</title>
<link>https://hdl.handle.net/1721.1/71170</link>
<description>Optimal Parametric Auctions
Azar, Pablo Daniel; Micali, Silvio
We study the problem of an auctioneer who wants to maximize her profits. In our model, there are n buyers with private valuations drawn from independent distributions F_1,...,F_n. When these distributions are known to the seller, Myerson's optimal auction is a well known mechanism that maximizes revenue. However, in many cases it is too strong to assume that the seller knows these distributions. We propose an alternative model where the seller only knows the mean mu_i and variance sigma_i^2 of each distribution F_i. We call mechanisms that only use this information parametric auctions. We construct such auctions for all single-dimensional downward closed environments. For a very large class of distributions, including (but not limited to) distributions with a monotone hazard rate, our auctions achieve a constant fraction of the revenue of Myerson's auction. When the seller has absolutely no knowledge about the distributions, it is well known that no auction can achieve a constant fraction of the optimal revenue when the players are not identically distributed. Our parametric model gives the seller a small amount of extra information, allowing her to construct auctions for which (1) she does not know the full distribution of valuations, (2) no two bidders need to be drawn from identical distributions and (3) the revenue obtained is a constant fraction of the revenue in Myerson's optimal auction. For digital goods environments we present a different parametric auction that not only gives a better approximation to the optimal auction, but is also optimal in a new sense, which we call maximin optimality. Informally, an auction is maximin optimal if it maximizes revenue in the worst case over an adversary's choice of the distribution. We show that our digital parametric auction is maximin optimal among the class of posted price mechanisms.
</description>
<pubDate>Thu, 14 Jun 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/71170</guid>
<dc:date>2012-06-14T00:00:00Z</dc:date>
</item>
<item>
<title>The Levels of Understanding framework, revised</title>
<link>https://hdl.handle.net/1721.1/70970</link>
<description>The Levels of Understanding framework, revised
Poggio, Tomaso
I discuss the "levels of understanding" framework described in Marr's Vision and propose a revised and updated version of it to capture the changes in computation and neuroscience over the last 30 years.
</description>
<pubDate>Thu, 31 May 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/70970</guid>
<dc:date>2012-05-31T00:00:00Z</dc:date>
</item>
<item>
<title>Temporally Scalable Visual SLAM using a Reduced Pose Graph</title>
<link>https://hdl.handle.net/1721.1/70952</link>
<description>Temporally Scalable Visual SLAM using a Reduced Pose Graph
Johannsson, Hordur; Kaess, Michael; Fallon, Maurice; Leonard, John J.
In this paper, we demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation. Unlike previous visual SLAM approaches that use keyframes, our approach continually uses new measurements to improve the map, yet achieves efficiency by avoiding adding redundant frames and not using marginalization to reduce the graph. To evaluate our approach, we present results using an online binocular visual SLAM system that uses place recognition for both robustness and multi-session operation. To allow large-scale indoor mapping, our system automatically handles elevator rides based on accelerometer data. We demonstrate long-term mapping in a large multi-floor building, using approximately nine hours of data collected over the course of six months. Our results illustrate the capability of our visual SLAM system to scale in size with the area of exploration instead of the time of exploration.
</description>
<pubDate>Fri, 25 May 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/70952</guid>
<dc:date>2012-05-25T00:00:00Z</dc:date>
</item>
<item>
<title>A Case for Fine-Grain Adaptive Cache Coherence</title>
<link>https://hdl.handle.net/1721.1/70909</link>
<description>A Case for Fine-Grain Adaptive Cache Coherence
Kurian, George; Khan, Omer; Devadas, Srinivas
As transistor density continues to grow geometrically, processor manufacturers are already able to place a hundred cores on a chip (e.g., Tilera TILE-Gx 100), with massive multicore chips on the horizon. Programmers now need to invest more effort in designing software capable of exploiting multicore parallelism. The shared memory paradigm provides a convenient layer of abstraction to the programmer, but will current memory architectures scale to hundreds of cores? This paper directly addresses the question of how to enable scalable memory systems for future multicores. We develop a scalable, efficient shared memory architecture that enables seamless adaptation between private and logically shared caching at the fine granularity of cache lines. Our data-centric approach relies on in-hardware runtime profiling of the locality of each cache line and only allows private caching for data blocks with high spatio-temporal locality. This allows us to better exploit on-chip cache capacity and enable low-latency memory access in large-scale multicores.
</description>
<pubDate>Tue, 22 May 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/70909</guid>
<dc:date>2012-05-22T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Parametric Auctions</title>
<link>https://hdl.handle.net/1721.1/70556</link>
<description>Optimal Parametric Auctions
Azar, Pablo; Micali, Silvio
Theory of Computation
We study the problem of profit maximization in auctions of one good where the buyers' valuations are drawn from independent distributions. When these distributions are known to the seller, Myerson's optimal auction is a well-known mechanism for maximizing revenue. In many cases, however, the seller may not know the buyers' distributions. We propose an alternative model where the seller only knows the mean and the variance of each distribution. We call an auction parametric if its mechanism uses only these parameters. We construct parametric auctions both when the seller only has one copy of the good to sell, and when she has an infinite number of identical copies (i.e., when the good is digital). For a very large class of distributions, including (but not limited to) distributions with a monotone hazard rate, our auctions achieve a constant fraction of the revenue of Myerson's auction. When the seller has absolutely no knowledge about the distributions, it is well known that no auction can achieve a constant fraction of the optimal revenue when the players are not identically distributed. Our parametric model gives the seller a small amount of extra information, allowing her to construct auctions for which (1) no two bidders need to be drawn from identical distributions and (2) the revenue obtained is a constant fraction of the revenue in Myerson's optimal auction.
</description>
<pubDate>Tue, 08 May 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/70556</guid>
<dc:date>2012-05-08T00:00:00Z</dc:date>
</item>
<item>
<title>Preliminary MEG decoding results</title>
<link>https://hdl.handle.net/1721.1/70170</link>
<description>Preliminary MEG decoding results
Isik, Leyla; Meyers, Ethan M.; Leibo, Joel Z.; Poggio, Tomaso
Decoding analysis has been applied to electrophysiology and fMRI data to study the visual system; however, this method has only been applied to MEG visual data in a few instances. Here we use the Neural Decoding Toolbox for Matlab to show that it is possible to decode visual stimuli based on MEG data.
</description>
<pubDate>Fri, 20 Apr 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/70170</guid>
<dc:date>2012-04-20T00:00:00Z</dc:date>
</item>
<item>
<title>A Method for Fast, High-Precision Characterization of Synthetic Biology Devices</title>
<link>https://hdl.handle.net/1721.1/69973</link>
<description>A Method for Fast, High-Precision Characterization of Synthetic Biology Devices
Beal, Jacob; Weiss, Ron; Yaman, Fusun; Davidsohn, Noah; Adler, Aaron
Engineering biological systems with predictable behavior is a foundational goal of synthetic biology. To accomplish this, it is important to accurately characterize the behavior of biological devices. Prior characterization efforts, however, have generally not yielded enough high-quality information to enable compositional design. In the TASBE (A Tool-Chain to Accelerate Synthetic Biological Engineering) project we have developed a new characterization technique capable of producing such data. This document describes the techniques we have developed, along with examples of their application, so that the techniques can be accurately used by others.
</description>
<pubDate>Sat, 07 Apr 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/69973</guid>
<dc:date>2012-04-07T00:00:00Z</dc:date>
</item>
<item>
<title>Cryptographic Treatment of CryptDB's Adjustable Join</title>
<link>https://hdl.handle.net/1721.1/69859</link>
<description>Cryptographic Treatment of CryptDB's Adjustable Join
Popa, Raluca Ada; Zeldovich, Nickolai
In this document, we provide a cryptographic treatment of the adjustable join protocol from CryptDB. We also discuss how our scheme could be used outside of CryptDB because it provides a simple functionality that may be needed in other settings. Intuitively, it is a pseudorandom permutation where an external party not knowing the secret key can nonetheless adjust a ciphertext under one key to a ciphertext under a different key, given an adjustment token from a party that knows the secret key.
</description>
<pubDate>Sun, 25 Mar 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/69859</guid>
<dc:date>2012-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>A Lossy, Synchronization-Free, Race-Full, But Still Acceptably Accurate Parallel Space-Subdivision Tree Construction Algorithm</title>
<link>https://hdl.handle.net/1721.1/69177</link>
<description>A Lossy, Synchronization-Free, Race-Full, But Still Acceptably Accurate Parallel Space-Subdivision Tree Construction Algorithm
Rinard, Martin
We present a new synchronization-free space-subdivision tree construction algorithm. Despite data races, this algorithm produces trees that are consistent enough for the client Barnes-Hut center of mass and force computation phases to use successfully. Our performance results show that eliminating synchronization improves the performance of the parallel algorithm by approximately 20%. End-to-end accuracy results show that the resulting partial data structure corruption has a negligible effect on the overall accuracy of the Barnes-Hut N-body simulation. We note that many data structure manipulation algorithms use many of the same basic operations (linked data structure updates and array insertions) as our tree construction algorithm. We therefore anticipate that the basic principles that we develop in this paper may effectively guide future efforts in this area.
</description>
<pubDate>Thu, 23 Feb 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/69177</guid>
<dc:date>2012-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>DSENT - A Tool Connecting Emerging Photonics with Electronics for Opto-Electronic Networks-on-Chip Modeling</title>
<link>https://hdl.handle.net/1721.1/69050</link>
<description>DSENT - A Tool Connecting Emerging Photonics with Electronics for Opto-Electronic Networks-on-Chip Modeling
Sun, Chen; Chen, Chia-Hsin Owen; Kurian, George; Wei, Lan; Miller, Jason; Agarwal, Anant; Peh, Li-Shiuan; Stojanovic, Vladimir
With the advent of many-core chips that place substantial demand on the NoC, photonics has been investigated as a promising alternative to electrical NoCs. While numerous opto-electronic NoCs have been proposed, their evaluations tend to be based on fixed numbers for both photonic and electrical components, making it difficult to co-optimize. Through our own forays into opto-electronic NoC design, we observe that photonics and electronics are very much intertwined, reflecting a strong need for a NoC modeling tool that accurately models parameterized electronic and photonic components within a unified framework, capturing their interactions faithfully. In this paper, we present a tool, DSENT, for design space exploration of electrical and opto-electrical networks. We form a framework that constructs basic NoC building blocks from electrical and photonic technology parameters. To demonstrate potential use cases, we perform a network case study illustrating data-rate tradeoffs, a comparison with scaled electrical technology, and sensitivity to photonics parameters.
</description>
<pubDate>Wed, 08 Feb 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/69050</guid>
<dc:date>2012-02-08T00:00:00Z</dc:date>
</item>
<item>
<title>GURLS: a Toolbox for Regularized Least Squares Learning</title>
<link>https://hdl.handle.net/1721.1/69034</link>
<description>GURLS: a Toolbox for Regularized Least Squares Learning
Tacchetti, Andrea; Mallapragada, Pavan S.; Santoro, Matteo; Rosasco, Lorenzo
We present GURLS, a toolbox for supervised learning based on the regularized least squares algorithm. The toolbox takes advantage of all the favorable properties of least squares and is tailored to deal in particular with multi-category/multi-label problems. One of the main advantages of GURLS is that it allows training and tuning a multi-category classifier at essentially the same cost as a single binary classifier. The toolbox provides a set of basic functionalities including different training strategies and routines to handle computations with very large matrices by means of both memory-mapped storage and distributed task execution. The system is modular and can serve as a basis for easily prototyping new algorithms. The toolbox is available for download, easy to set up and use.
</description>
<pubDate>Tue, 31 Jan 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/69034</guid>
<dc:date>2012-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Probabilistic Approach to Acquiring Information from Human Partners Using Language</title>
<link>https://hdl.handle.net/1721.1/68651</link>
<description>Toward a Probabilistic Approach to Acquiring Information from Human Partners Using Language
Tellex, Stefanie; Thaker, Pratiksha; Deits, Robin; Simeonov, Dimitar; Kollar, Thomas; Roy, Nicholas
Our goal is to build robots that can robustly interact with humans using natural language. This problem is extremely challenging because human language is filled with ambiguity, and furthermore, the robot's model of the environment might be much more limited than the human partner's. When humans encounter ambiguity in dialog with each other, a key strategy to resolve it is to ask clarifying questions about what they do not understand. This paper describes an approach for enabling robots to take the same approach: asking the human partner clarifying questions about ambiguous commands in order to infer better actions. The robot fuses information from the command, the question, and the answer by creating a joint probabilistic graphical model in the Generalized Grounding Graph framework. We demonstrate that by performing inference using information from the command, question and answer, the robot is able to infer object groundings and follow commands with higher accuracy than by using the command alone.
</description>
<pubDate>Mon, 23 Jan 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/68651</guid>
<dc:date>2012-01-23T00:00:00Z</dc:date>
</item>
<item>
<title>A Benchmark of Computational Models of Saliency to Predict Human Fixations</title>
<link>https://hdl.handle.net/1721.1/68590</link>
<description>A Benchmark of Computational Models of Saliency to Predict Human Fixations
Judd, Tilke; Durand, Frédo; Torralba, Antonio
Many computational models of visual attention have been created from a wide variety of different approaches to predict where people look in images. Each model is usually introduced by demonstrating its performance on new images, which makes immediate comparisons between models difficult. To alleviate this problem, we propose a benchmark data set containing 300 natural images with eye tracking data from 39 observers to compare model performances. We calculate the performance of 10 models at predicting ground truth fixations using three different metrics. We provide a way for people to submit new models for evaluation online. We find that the Judd et al. and Graph-based visual saliency models perform best. In general, models with blurrier maps and models that include a center bias perform well. We add and optimize a blur and center bias for each model and show improvements. We compare performances to baseline models of chance, center, and human performance. We show that human performance increases with the number of humans, up to a limit. We analyze the similarity of different models using multidimensional scaling and explore the relationship between model performance and fixation consistency. Finally, we offer observations about how to improve saliency models in the future.
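A common way to score a saliency map against eye-tracking data (a generic sketch, not necessarily one of the three metrics used in this benchmark) is the ROC area in its Mann-Whitney form: the probability that a fixated pixel receives a higher saliency value than a non-fixated one.

```python
import numpy as np

def fixation_auc(saliency, fixations):
    """ROC area for a saliency map against a binary fixation map:
    the probability that a randomly chosen fixated pixel outscores a
    randomly chosen non-fixated pixel (ties count half)."""
    s = saliency.ravel()
    fix = fixations.ravel().astype(bool)
    pos, neg = s[fix], s[~fix]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# A map that ranks every fixated pixel above every other pixel scores 1.0;
# chance level is 0.5.
saliency = np.array([[0.9, 0.1], [0.8, 0.2]])
fixations = np.array([[1, 0], [1, 0]])
auc = fixation_auc(saliency, fixations)
```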
</description>
<pubDate>Fri, 13 Jan 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/68590</guid>
<dc:date>2012-01-13T00:00:00Z</dc:date>
</item>
<item>
<title>Structuring Unreliable Radio Networks</title>
<link>https://hdl.handle.net/1721.1/67885</link>
<description>Structuring Unreliable Radio Networks
Censor-Hillel, Keren; Gilbert, Seth; Kuhn, Fabian; Lynch, Nancy; Newport, Calvin
In this paper we study the problem of building a connected dominating set with constant degree (CCDS) in the dual graph radio network model.  This model includes two types of links: reliable links, which always deliver messages, and unreliable links, which sometimes fail to deliver messages.  Real networks compensate for this differing quality by deploying low-layer detection protocols to filter unreliable from reliable links.  With this in mind, we begin by presenting an algorithm that solves the CCDS problem in the dual graph model under the assumption that every process u is provided with a local "link detector set" consisting of every neighbor connected to u by a reliable link.  The algorithm solves the CCDS problem in O((Delta log^2(n)/b) + log^3(n)) rounds, with high probability, where Delta is the maximum degree in the reliable link graph, n is the network size, and b is an upper bound in bits on the message size.  The algorithm works by first building a Maximal Independent Set (MIS) in O(log^3(n)) time, and then leveraging the local topology knowledge to efficiently connect nearby MIS processes.  A natural follow-up question is whether the link detector must be perfectly reliable to solve the CCDS problem.  To answer this question, we first describe an algorithm that builds a CCDS in O(Delta polylog(n)) time under the assumption of O(1) unreliable links included in each link detector set.  We then prove this algorithm to be (almost) tight by showing that the possible inclusion of only a single unreliable link in each process's local link detector set is sufficient to require Omega(Delta) rounds to solve the CCDS problem, regardless of message size.  We conclude by discussing how to apply our algorithm in the setting where the topology of reliable and unreliable links can change over time.
</description>
<pubDate>Thu, 22 Dec 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/67885</guid>
<dc:date>2011-12-22T00:00:00Z</dc:date>
</item>
<item>
<title>A Frequency Analysis of Monte-Carlo and other Numerical Integration Schemes</title>
<link>https://hdl.handle.net/1721.1/67677</link>
<description>A Frequency Analysis of Monte-Carlo and other Numerical Integration Schemes
Durand, Frédo
The numerical calculation of integrals is central to many computer graphics algorithms such as Monte-Carlo Ray Tracing. We show that such methods can be studied using Fourier analysis. Numerical error is shown to correspond to aliasing and the link between properties of the sampling pattern and the integrand is studied. The approach also permits the unified study of image aliasing and numerical integration, by considering a multidimensional domain where some dimensions are integrated while others are sampled.
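As a concrete baseline for the analysis above, a plain Monte-Carlo estimator of a 1-D integral looks as follows (a generic sketch, not code from the paper); its error shrinks roughly like 1/sqrt(n), and the Fourier view explains how structured sampling patterns change this picture by shifting aliasing energy:

```python
import math
import random

def mc_integrate(f, n, rng):
    """Monte-Carlo estimate of the integral of f over [0, 1]:
    the average of f at n uniformly random sample points."""
    return sum(f(rng.random()) for _ in range(n)) / n

# Integrand with a known answer: the integral of sin(pi*x) over [0,1] is 2/pi.
f = lambda x: math.sin(math.pi * x)
rng = random.Random(0)
est = mc_integrate(f, 10_000, rng)
err = abs(est - 2 / math.pi)  # small, and shrinks roughly like 1/sqrt(n)
```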
</description>
<pubDate>Wed, 14 Dec 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/67677</guid>
<dc:date>2011-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>CPHash: A Cache-Partitioned Hash Table</title>
<link>https://hdl.handle.net/1721.1/67296</link>
<description>CPHash: A Cache-Partitioned Hash Table
Metreveli, Zviad; Zeldovich, Nickolai; Kaashoek, M. Frans
CPHash is a concurrent hash table for multicore processors. CPHash partitions its table across the caches of cores and uses message passing to transfer lookups/inserts to a partition. CPHash's message passing avoids the need for locks, pipelines batches of asynchronous messages, and packs multiple messages into a single cache line transfer. Experiments on an 80-core machine with 2 hardware threads per core show that CPHash has ~1.6x higher throughput than a hash table implemented using fine-grained locks. An analysis shows that CPHash wins because it experiences fewer cache misses, and its cache misses are less expensive owing to reduced contention for the on-chip interconnect and DRAM. CPServer, a key/value cache server using CPHash, achieves ~5% higher throughput than a key/value cache server that uses a hash table with fine-grained locks, and both achieve better throughput and scalability than memcached. Finally, the throughput of CPHash and CPServer scales near-linearly with the number of cores.
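The message-passing idea can be illustrated with a toy sketch (Python threads and queues standing in for cores and cache-line transfers; the class and structure are ours, not the CPHash implementation): each partition is owned by one server thread, so no locks are needed on the table itself.

```python
import queue
import threading

class PartitionedHash:
    """Toy cache-partitioned hash table: each partition has a single owner
    thread, and clients send lookup/insert messages instead of locking."""

    def __init__(self, nparts=4):
        self.queues = [queue.Queue() for _ in range(nparts)]
        self.parts = [{} for _ in range(nparts)]
        for i in range(nparts):
            threading.Thread(target=self._serve, args=(i,), daemon=True).start()

    def _serve(self, i):
        part = self.parts[i]  # only this thread ever touches this dict
        while True:
            op, key, val, reply = self.queues[i].get()
            if op == "insert":
                part[key] = val
                reply.put(None)
            else:  # "lookup"
                reply.put(part.get(key))

    def _send(self, op, key, val=None):
        reply = queue.Queue()
        self.queues[hash(key) % len(self.parts)].put((op, key, val, reply))
        return reply.get()

    def insert(self, key, val):
        self._send("insert", key, val)

    def lookup(self, key):
        return self._send("lookup", key)

h = PartitionedHash()
h.insert("a", 1)
h.insert("b", 2)
```

The real system additionally pipelines and batches messages into cache-line-sized transfers, which a sketch like this does not capture.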
</description>
<pubDate>Sat, 26 Nov 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/67296</guid>
<dc:date>2011-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Reasoning about Relaxed Programs</title>
<link>https://hdl.handle.net/1721.1/67031</link>
<description>Reasoning about Relaxed Programs
Carbin, Michael; Kim, Deokhwan; Misailovic, Sasa; Rinard, Martin C.
A number of approximate program transformations have recently emerged that enable transformed programs to trade accuracy of their results for increased performance by dynamically and nondeterministically modifying variables that control program execution. We call such transformed programs relaxed programs -- they have been extended with additional nondeterminism to relax their semantics and offer greater execution flexibility. We present programming language constructs for developing relaxed programs and proof rules for reasoning about properties of relaxed programs. Our proof rules enable programmers to directly specify and verify acceptability properties that characterize the desired correctness relationships between the values of variables in a program's original semantics (before transformation) and its relaxed semantics. Our proof rules also support the verification of safety properties (which characterize desirable properties involving values in individual executions). The rules are designed to support a reasoning approach in which the majority of the reasoning effort uses the original semantics. This effort is then reused to establish the desired properties of the program under the relaxed semantics. We have formalized the dynamic semantics of our target programming language and the proof rules in Coq, and verified that the proof rules are sound with respect to the dynamic semantics. Our Coq implementation enables developers to obtain fully machine-checked verifications of their relaxed programs.
</description>
<pubDate>Tue, 15 Nov 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/67031</guid>
<dc:date>2011-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Fast and Robust Pyramid-based Image Processing</title>
<link>https://hdl.handle.net/1721.1/67030</link>
<description>Fast and Robust Pyramid-based Image Processing
Aubry, Mathieu; Paris, Sylvain; Hasinoff, Samuel W.; Kautz, Jan; Durand, Frédo
Multi-scale manipulations are central to image editing, but they are also prone to halos. Achieving artifact-free results requires sophisticated edge-aware techniques and careful parameter tuning. These shortcomings were recently addressed by the local Laplacian filters, which can achieve a broad range of effects using standard Laplacian pyramids. However, these filters are slow to evaluate and their relationship to other approaches is unclear. In this paper, we show that they are closely related to anisotropic diffusion and to bilateral filtering. Our study also leads to a variant of the bilateral filter that produces cleaner edges while retaining its speed. Building upon this result, we describe an acceleration scheme for local Laplacian filters that yields speed-ups on the order of 50x. Finally, we demonstrate how to use local Laplacian filters to alter the distribution of gradients in an image. We illustrate this property with a robust algorithm for photographic style transfer.
</description>
<pubDate>Tue, 15 Nov 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/67030</guid>
<dc:date>2011-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>SEEC: A General and Extensible Framework for Self-Aware Computing</title>
<link>https://hdl.handle.net/1721.1/67020</link>
<description>SEEC: A General and Extensible Framework for Self-Aware Computing
Hoffmann, Henry; Maggio, Martina; Santambrogio, Marco D.; Leva, Alberto; Agarwal, Anant
Modern systems require applications to balance competing goals, e.g. achieving high performance and low power. Achieving this balance places an unrealistic burden on application programmers, who must understand the power and performance implications of a variety of application and system actions (e.g. changing algorithms or allocating cores). To address this problem, we propose the Self-aware Computing framework, or SEEC. SEEC automatically and dynamically schedules actions to meet application-specified goals. While other self-aware implementations have been proposed, SEEC is uniquely distinguished by its decoupled approach, which allows application and systems programmers to separately specify observations and actions, according to their expertise. SEEC's runtime decision engine observes the system and schedules actions automatically, reducing programmer burden. This general and extensible decision engine employs both control theory and machine learning to reason about previously unseen applications and actions while automatically adapting to changes in both application and system models. This paper describes the SEEC framework and evaluates it in several case studies. SEEC is used to build an adaptive system that optimizes performance per watt for the PARSEC benchmarks on multiple machines, achieving results at least 1.65x better than a classical control system. Additional studies show how SEEC can learn optimal resource allocation online and respond to fluctuations in the underlying hardware while managing multiple applications.
</description>
<pubDate>Mon, 07 Nov 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/67020</guid>
<dc:date>2011-11-07T00:00:00Z</dc:date>
</item>
<item>
<title>Leader Election Using Loneliness Detection</title>
<link>https://hdl.handle.net/1721.1/66224</link>
<description>Leader Election Using Loneliness Detection
Ghaffari, Mohsen; Lynch, Nancy; Sastry, Srikanth
We consider the problem of leader election (LE) in single-hop radio networks with synchronized time slots for transmitting and receiving messages. We assume that the actual number n of processes is unknown, while the size u of the ID space is known, but is possibly much larger. We consider two types of collision detection: strong (SCD), whereby all processes detect collisions, and weak (WCD), whereby only non-transmitting processes detect collisions. We introduce loneliness detection (LD) as a key subproblem for solving LE in WCD systems. LD informs all processes whether the system contains exactly one process or more than one. We show that LD captures the difference in power between SCD and WCD, by providing an implementation of SCD over WCD and LD. We present two algorithms that solve deterministic and probabilistic LD in WCD systems with time costs of O(log(u/n)) and O(min(log(u/n), log(1/epsilon)/n)), respectively, where epsilon is the error probability. We also provide matching lower bounds. We present two algorithms that solve deterministic and probabilistic LE in SCD systems with time costs of O(log u) and O(min(log u, loglog n + log(1/epsilon))), respectively, where epsilon is the error probability. We provide matching lower bounds.
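For intuition about the SCD model (this toy is ours and is not one of the paper's algorithms), a classic splitting protocol elects a leader when strong collision detection is available: candidates flip coins each round, and a collision tells the colliders to keep competing.

```python
import random

def elect_leader(n, seed=0):
    """Toy leader election under strong collision detection: each round
    every live candidate transmits with probability 1/2; a lone
    transmitter wins, colliding transmitters stay live, and silence
    leaves the candidate set unchanged."""
    rng = random.Random(seed)
    candidates = list(range(n))
    rounds = 0
    while True:
        rounds += 1
        transmitters = [p for p in candidates if rng.random() < 0.5]
        if len(transmitters) == 1:
            return transmitters[0], rounds
        if transmitters:  # collision detected by everyone (SCD)
            candidates = transmitters

leader, rounds = elect_leader(8)
```

Each collision roughly halves the candidate set, so this takes O(log n) expected rounds; the paper's algorithms achieve the tighter bounds quoted above by also exploiting the known ID space of size u.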
</description>
<pubDate>Wed, 12 Oct 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/66224</guid>
<dc:date>2011-10-12T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Input Rectification</title>
<link>https://hdl.handle.net/1721.1/66170</link>
<description>Automatic Input Rectification
Long, Fan; Ganesh, Vijay; Carbin, Michael; Sidiroglou, Stelios; Rinard, Martin
We present a novel technique, automatic input rectification, and a prototype implementation called SOAP. SOAP learns a set of constraints characterizing typical inputs that an application is highly likely to process correctly. When given an atypical input that does not satisfy these constraints, SOAP automatically rectifies the input (i.e., changes the input so that it satisfies the learned constraints). The goal is to automatically convert potentially dangerous inputs into typical inputs that the program is highly likely to process correctly. Our experimental results show that, for a set of benchmark applications (namely, Google Picasa, ImageMagick, VLC, Swfdec, and Dillo), this approach effectively converts malicious inputs (which successfully exploit vulnerabilities in the application) into benign inputs that the application processes correctly. Moreover, a manual code analysis shows that, if an input does satisfy the learned constraints, it is incapable of exploiting these vulnerabilities. We also present the results of a user study designed to evaluate the subjective perceptual quality of outputs from benign but atypical inputs that have been automatically rectified by SOAP to conform to the learned constraints. Specifically, we obtained benign inputs that violate learned constraints, used our input rectifier to obtain rectified inputs, then paid Amazon Mechanical Turk users to provide their subjective qualitative perception of the difference between the outputs from the original and rectified inputs. The results indicate that rectification can often preserve much, and in many cases all, of the desirable data in the original input.
</description>
<pubDate>Mon, 03 Oct 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/66170</guid>
<dc:date>2011-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Class Learning: Simplex Coding And Relaxation Error</title>
<link>https://hdl.handle.net/1721.1/66085</link>
<description>Multi-Class Learning: Simplex Coding And Relaxation Error
Mroueh, Youssef; Poggio, Tomaso; Rosasco, Lorenzo; Slotine, Jean-Jacques E.
We study multi-category classification in the framework of computational learning theory. We show how a relaxation approach, which is commonly used in binary classification, can be generalized to the multi-class setting. We propose a vector coding, namely the simplex coding, that allows us to introduce a new notion of multi-class margin and cast multi-category classification as a vector-valued regression problem. The relaxation error can be quantified, and the binary case is recovered as a special case of our theory. From a computational point of view, we show that using the simplex coding we can design regularized learning algorithms for multi-category classification that can be trained at a complexity independent of the number of classes.
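A standard construction of such a coding (our sketch, following the usual definition of a regular simplex code rather than the paper's exact procedure) embeds the c classes as unit vectors in R^(c-1) whose pairwise inner products all equal -1/(c-1):

```python
import numpy as np

def simplex_coding(c):
    """Rows are c unit-norm class codes in R^(c-1) forming a regular
    simplex: every pair of distinct codes has inner product -1/(c-1)."""
    centered = np.eye(c) - np.ones((c, c)) / c   # rows sum to zero, rank c-1
    # Orthonormal basis of the hyperplane orthogonal to the all-ones vector
    Q, _ = np.linalg.qr(centered)
    V = centered @ Q[:, :c - 1]                  # coordinates in R^(c-1)
    return V / np.linalg.norm(V, axis=1, keepdims=True)

V = simplex_coding(4)
G = V @ V.T  # Gram matrix: diagonal 1, every off-diagonal entry -1/3
```

A multi-class margin can then be read off as the gap between an example's score along its own class code and its best score along any competing code.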
</description>
<pubDate>Tue, 27 Sep 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/66085</guid>
<dc:date>2011-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Nonparametric Sparsity and Regularization</title>
<link>https://hdl.handle.net/1721.1/65964</link>
<description>Nonparametric Sparsity and Regularization
Mosci, Sofia; Rosasco, Lorenzo; Santoro, Matteo; Verri, Alessandro; Villa, Silvia
In this work we are interested in the problems of supervised learning and variable selection when the input-output dependence is described by a nonlinear function depending on a few variables. Our goal is to consider a sparse nonparametric model, hence avoiding linear or additive models. The key idea is to measure the importance of each variable in the model by making use of partial derivatives. Based on this intuition we propose and study a new regularizer and a corresponding least squares regularization scheme. Using concepts and results from the theory of reproducing kernel Hilbert spaces and proximal methods, we show that the proposed learning algorithm corresponds to a minimization problem which can be provably solved by an iterative procedure. The consistency properties of the obtained estimator are studied both in terms of prediction and selection performance. An extensive empirical analysis shows that the proposed method performs favorably with respect to the state-of-the-art.
</description>
<pubDate>Mon, 26 Sep 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/65964</guid>
<dc:date>2011-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>A hypothesis-based algorithm for planning and control in non-Gaussian belief spaces</title>
<link>https://hdl.handle.net/1721.1/65856</link>
<description>A hypothesis-based algorithm for planning and control in non-Gaussian belief spaces
Platt, Robert, Jr.; Kaelbling, Leslie; Lozano-Perez, Tomas; Tedrake, Russ
We consider the partially observable control problem where it is potentially necessary to perform complex information-gathering operations in order to localize state. One approach to solving these problems is to create plans in belief-space, the space of probability distributions over the underlying state of the system. The belief-space plan encodes a strategy for performing a task while gaining information as necessary. Most approaches to belief-space planning rely upon representing belief state in a particular way (typically as a Gaussian). Unfortunately, this can lead to large errors between the assumed density representation and the true belief state. We propose a new computationally efficient algorithm for planning in non-Gaussian belief spaces. We propose a receding horizon re-planning approach where planning occurs in a low-dimensional sampled representation of belief state while the true belief state of the system is monitored using an arbitrarily accurate high-dimensional representation. Our key contribution is a planning problem that, when solved optimally on each re-planning step, is guaranteed, under certain conditions, to enable the system to gain information. We prove that when these conditions are met, the algorithm converges with probability one. We characterize algorithm performance for different parameter settings in simulation and report results from a robot experiment that illustrates the application of the algorithm to robot grasping.
</description>
<pubDate>Sat, 27 Aug 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/65856</guid>
<dc:date>2011-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and disrupting invariance in visual recognition</title>
<link>https://hdl.handle.net/1721.1/65646</link>
<description>Learning and disrupting invariance in visual recognition
Isik, Leyla; Leibo, Joel Z; Poggio, Tomaso
Learning by temporal association rules such as Foldiak's trace rule is an attractive hypothesis that explains the development of invariance in visual recognition. Consistent with these rules, several recent experiments have shown that invariance can be broken by appropriately altering the visual environment but found puzzling differences in the effects at the psychophysical versus single-cell level. We show a) that associative learning provides appropriate invariance in models of object recognition inspired by Hubel and Wiesel, b) that we can replicate the "invariance disruption" experiments using these models with a temporal association learning rule to develop and maintain invariance, and c) that we can thereby explain the apparent discrepancies between psychophysical and single-cell effects. We argue that these models account for the stability of perceptual invariance despite the underlying plasticity of the system, the variability of the visual world and expected noise in the biological mechanisms.
</description>
<pubDate>Sat, 10 Sep 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/65646</guid>
<dc:date>2011-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Tragedy of the routing table: An analysis of collective action amongst Internet network operators</title>
<link>https://hdl.handle.net/1721.1/65591</link>
<description>Tragedy of the routing table: An analysis of collective action amongst Internet network operators
Woodrow, Stephen Robert
This thesis analyzes and discusses the effectiveness of social efforts to achieve collective action amongst Internet network operators in order to manage the growth of the Internet routing table. The size and rate of growth of the Internet routing table are an acknowledged challenge impeding the scalability of our BGP interdomain routing architecture. While most of the work towards a solution to this problem has focused on architectural improvements, an effort launched in the 1990s called the CIDR Report attempts to incentivize route aggregation using social forces and norms in the Internet operator community. This thesis analyzes the behavior of Internet network operators in response to the CIDR Report from 1997 to 2011 to determine whether the Report was effective in achieving this goal. While it is difficult to causally attribute aggregation behavior to appearance on the CIDR Report, there is a trend for networks to improve their prefix aggregation following an appearance on the CIDR Report compared to untreated networks. This suggests that the CIDR Report did affect network aggregation behavior, although the routing table continued to grow. This aggregation improvement is most prevalent early in the study period and becomes less apparent as time goes on. Potential causes of the apparent change in efficacy of the Report are discussed and examined using Ostrom's Common Pool Resource framework. The thesis then concludes with a discussion of options for mitigating routing table growth, including the continued use of community forces to better manage the Internet routing table.
S.M. thesis
</description>
<pubDate>Sat, 06 Aug 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/65591</guid>
<dc:date>2011-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>MOOS-IvP Autonomy Tools Users Manual Release 4.2.1</title>
<link>https://hdl.handle.net/1721.1/65074</link>
<description>MOOS-IvP Autonomy Tools Users Manual Release 4.2.1
Benjamin, Michael R.
This document describes 19 MOOS-IvP autonomy tools. uHelmScope provides a run-time scoping window into the state of an active IvP Helm executing its mission. pMarineViewer is a geo-based GUI tool for rendering marine vehicles and geometric data in their operational area. uXMS is a terminal based tool for scoping on a MOOSDB process. uTermCommand is a terminal based tool for poking a MOOSDB with a set of MOOS file pre-defined variable-value pairs selectable with aliases from the command-line. pEchoVar provides a way of echoing a post to one MOOS variable with a new post having the same value to a different variable. uProcessWatch monitors the presence or absence of a set of MOOS processes and summarizes the collective status in a single MOOS variable. uPokeDB provides a way of poking the MOOSDB from the command line with one or more variable-value pairs without any pre-existing configuration of a MOOS file. uTimerScript executes a pre-defined, pausable timed script of variable-value pair pokes to a MOOSDB. pNodeReporter summarizes a platform's critical information into a single node report string for sharing beyond the vehicle. pBasicContactMgr provides a basic contact management service with the ability to generate range-dependent configurable alerts. uSimMarine provides a simple marine vehicle simulator. uSimBeaconRange and uSimContactRange provide further simulation for range-only sensors. The Alog Toolbox is a set of offline tools for analyzing and manipulating log files in the .alog format.
</description>
<pubDate>Thu, 28 Jul 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/65074</guid>
<dc:date>2011-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>An Overview of MOOS-IvP and a Users Guide to the IvP Helm - Release 4.2.1</title>
<link>https://hdl.handle.net/1721.1/65073</link>
<description>An Overview of MOOS-IvP and a Users Guide to the IvP Helm - Release 4.2.1
Benjamin, Michael R.; Schmidt, Henrik; Newman, Paul; Leonard, John J.
This document describes the IvP Helm - an Open Source behavior-based autonomy application for unmanned vehicles. IvP is short for interval programming - a technique for representing and solving multi-objective optimization problems. Behaviors in the IvP Helm are reconciled using multi-objective optimization when in competition with each other for influence of the vehicle. The IvP Helm is written as a MOOS application, where MOOS is a set of Open Source publish-subscribe autonomy middleware tools. This document describes the configuration and use of the IvP Helm, provides examples of simple missions, and gives information on how to download and build the software from the MOOS-IvP server at www.moos-ivp.org.
</description>
<pubDate>Wed, 03 Aug 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/65073</guid>
<dc:date>2011-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Vote the OS off your Core</title>
<link>https://hdl.handle.net/1721.1/64977</link>
<description>Vote the OS off your Core
Belay, Adam; Wentzlaff, David; Agarwal, Anant
Recent trends in OS research have shown evidence that there are performance benefits to running OS services on different cores than the user applications that rely on them. We quantitatively evaluate this claim in terms of one of the most significant architectural constraints: memory performance. To this end, we have created CachEMU, an open-source memory trace generator and cache simulator built as an extension to QEMU for working with system traces. Using CachEMU, we determined that for five common Linux test workloads, it was best to run the OS close, but not too close: on the same package, but not on the same core.
</description>
<pubDate>Wed, 27 Jul 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/64977</guid>
<dc:date>2011-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>A Scalable Information Theoretic Approach to Distributed Robot Coordination</title>
<link>https://hdl.handle.net/1721.1/64821</link>
<description>A Scalable Information Theoretic Approach to Distributed Robot Coordination
Julian, Brian J.; Angermann, Michael; Schwager, Mac; Rus, Daniela
This paper presents a scalable information theoretic approach to infer the state of an environment by distributively controlling robots equipped with sensors. The robots iteratively estimate the environment state using a recursive Bayesian filter, while continuously moving to improve the quality of the estimate by following the gradient of mutual information. Both the filter and the controller use a novel algorithm for approximating the robots' joint measurement probabilities, which combines consensus (for decentralization) and sampling (for scalability). The approximations are shown to approach the true joint measurement probabilities as the number of consensus rounds grows or as the network becomes complete. The resulting gradient controller runs in constant time with respect to the number of robots, and linear time with respect to the number of sensor measurements and environment discretization cells, while traditional mutual information methods are exponential in all of these quantities. Furthermore, the controller is proven to be convergent between consensus rounds and, under certain conditions, is locally optimal. The complete distributed inference and coordination algorithm is demonstrated in experiments with five quad-rotor flying robots and simulations with 100 robots.
</description>
<pubDate>Sun, 25 Sep 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/64821</guid>
<dc:date>2011-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>Kernels for Vector-Valued Functions: a Review</title>
<link>https://hdl.handle.net/1721.1/64731</link>
<description>Kernels for Vector-Valued Functions: a Review
Alvarez, Mauricio A.; Rosasco, Lorenzo; Lawrence, Neil D.
Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective they are the key in the context of Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs, and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partly by frameworks like multitask learning. In this paper, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
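The simplest bridge between the probabilistic and functional views is the separable (intrinsic coregionalization) multi-output kernel, where a scalar input kernel is multiplied by an output covariance matrix. A minimal sketch (our construction; the RBF input kernel and the function name are assumptions, not notation from the review):

```python
import numpy as np

def icm_kernel(X1, X2, B, lengthscale=1.0):
    """Separable multi-output kernel K((x,i),(x',j)) = k(x,x') * B[i,j]:
    the Kronecker product of the scalar RBF Gram matrix with the T x T
    output covariance B. Returns an (n*T) x (m*T) matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * lengthscale ** 2))
    return np.kron(k, B)

X = np.array([[0.0], [1.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])  # covariance between the two outputs
K = icm_kernel(X, X, B)  # 4 x 4, symmetric positive semidefinite
```

In the Gaussian process reading, B is the covariance between outputs; in the regularization reading, the same matrix couples the components of the vector-valued hypothesis space.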
</description>
<pubDate>Thu, 30 Jun 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/64731</guid>
<dc:date>2011-06-30T00:00:00Z</dc:date>
</item>
<item>
<title>A Software Approach to Unifying Multicore Caches</title>
<link>https://hdl.handle.net/1721.1/64698</link>
<description>A Software Approach to Unifying Multicore Caches
Boyd-Wickizer, Silas; Kaashoek, M. Frans; Morris, Robert; Zeldovich, Nickolai
Multicore chips will have large amounts of fast on-chip cache memory, along with relatively slow DRAM interfaces. The on-chip cache memory, however, will be fragmented and spread over the chip; this distributed arrangement is hard for certain kinds of applications to exploit efficiently, and can lead to needless slow DRAM accesses. First, data accessed from many cores may be duplicated in many caches, reducing the amount of distinct data cached. Second, data in a cache distant from the accessing core may be slow to fetch via the cache coherence protocol. Third, software on each core can only allocate space in the small fraction of total cache memory that is local to that core. A new approach called software cache unification (SCU) addresses these challenges for applications that would be better served by a large shared cache. SCU chooses the on-chip cache in which to cache each item of data. As an application thread reads data items, SCU moves the thread to the core whose on-chip cache contains each item. This allows the thread to read the data quickly if it is already on-chip; if it is not, moving the thread causes the data to be loaded into the chosen on-chip cache. A new file cache for Linux, called MFC, uses SCU to improve performance of file-intensive applications, such as Unix file utilities. An evaluation on a 16-core AMD Opteron machine shows that MFC improves the throughput of file utilities by a factor of 1.6. Experiments with a platform that emulates future machines with less DRAM throughput per core shows that MFC will provide benefit to a growing range of applications.
</description>
<pubDate>Tue, 28 Jun 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/64698</guid>
<dc:date>2011-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>A hierarchical model of peripheral vision</title>
<link>https://hdl.handle.net/1721.1/64621</link>
<description>A hierarchical model of peripheral vision
Isik, Leyla; Leibo, Joel Z.; Mutch, Jim; Lee, Sang Wan; Poggio, Tomaso
We present a peripheral vision model inspired by the cortical architecture discovered by Hubel and Wiesel. As with existing cortical models, this model contains alternating layers of simple cells, which employ tuning functions to increase specificity, and complex cells, which pool over simple cells to increase invariance. To extend the traditional cortical model, we introduce the option of eccentricity-dependent pooling and tuning parameters within a given model layer. This peripheral vision system can be used to model physiological data where receptive field sizes change as a function of eccentricity. This gives the user flexibility to test different theories about filtering and pooling ranges in the periphery. In a specific instantiation of the model, pooling and tuning parameters can increase linearly with eccentricity to model physiological data found in different layers of the visual cortex. Additionally, it can be used to introduce pre-cortical model layers such as retina and LGN. We have tested the model's response with different parameters on several natural images to demonstrate its effectiveness as a research tool. The peripheral vision model presents a useful tool to test theories about crowding, attention, visual search, and other phenomena of peripheral vision.
</description>
<pubDate>Fri, 17 Jun 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/64621</guid>
<dc:date>2011-06-17T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Information-Sharing Network Management</title>
<link>https://hdl.handle.net/1721.1/63260</link>
<description>Scalable Information-Sharing Network Management
Guo, Nina X.
This thesis analyzes scalable information-sharing network management. It examines one of the major problems in network management today: finding information across different network domains. Information-sharing network management is one method of solving this problem, though it is important to make it scalable. The proposed solution uses the Publish-Subscribe Internet Routing Paradigm (PSIRP) inter-domain design as the base structure. The design borrows ideas from the Border Gateway Protocol and uses the Chord protocol as one of the key methods of finding information. The conclusion after analyzing the scalability of PSIRP is that its use of Chord gives it an advantage that allows an O(log^2 N) tradeoff between performance and distribution.
MEng thesis
</description>
<pubDate>Tue, 07 Jun 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/63260</guid>
<dc:date>2011-06-07T00:00:00Z</dc:date>
</item>
<item>
<title>Regularization Predicts While Discovering Taxonomy</title>
<link>https://hdl.handle.net/1721.1/63175</link>
<description>Regularization Predicts While Discovering Taxonomy
Mroueh, Youssef; Poggio, Tomaso; Rosasco, Lorenzo
In this work we discuss a regularization framework to solve multi-category classification when the classes are described by an underlying class taxonomy. In particular we discuss how to learn the class taxonomy while learning a multi-category classifier.
</description>
<pubDate>Fri, 03 Jun 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/63175</guid>
<dc:date>2011-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the Performance of Broadband Networks through the Statistical Analysis of Speed Tests - Supplemental materials</title>
<link>https://hdl.handle.net/1721.1/62812</link>
<description>Understanding the Performance of Broadband Networks through the Statistical Analysis of Speed Tests - Supplemental materials
García, Rubén
Supplemental materials for the master thesis "Understanding the Performance of Broadband Networks Through the Statistical Analysis of Speed Tests", by Rubén García, submitted in May 2011 for the S.M. in Technology and Policy. Supplemental materials include: Source_code: Folder containing the source code for the statistical analysis of NDT speed tests, written for the R statistical package; NDT_data: Folder containing the following datasets (1) ndt4.h5: Initial NDT data that we used for the analysis; (2) ndt3.h5: Reduced version of the ndt4 dataset (same tests but less variables), also contains the 'whois' file that we combine with the NDT data in order to add location information; (3) comcast-ndt.h5: dataset containing the speed tests of a controlled experiment that we ran using different test durations; Aggregated_datasets: Versions of the ndt4.h5 dataset aggregated by IP and by Autonomous System.
</description>
<pubDate>Tue, 10 May 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62812</guid>
<dc:date>2011-05-10T00:00:00Z</dc:date>
</item>
<item>
<title>jMWE v1.0.0</title>
<link>https://hdl.handle.net/1721.1/62793</link>
<description>jMWE v1.0.0
Finlayson, Mark Alan; Kulkarni, Nidhi
jMWE is a Java library for constructing and testing Multi-Word Expression detectors. The library has three main facilities: (1) a detector API, (2) a MWE index facility, and (3) a test harness. This is version 1.0.0 of the library. It contains the source code, compiled binary files, javadocs, a user's manual (pdf), and data for constructing a default MWE index. The freely available version of jMWE is licensed for use for non-commercial purposes only, as long as proper acknowledgment is made. Details can be found in the license, which is included at the end of this document. The copyright on the software is owned by MIT; if you wish to use the software for commercial purposes, please contact the MIT Technology Licensing Office for more information on how to obtain a commercial license.
"June 2011."
</description>
<pubDate>Sat, 01 Jan 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62793</guid>
<dc:date>2011-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Source code and data for MWE'2011 papers</title>
<link>https://hdl.handle.net/1721.1/62792</link>
<description>Source code and data for MWE'2011 papers
Finlayson, Mark Alan; Kulkarni, Nidhi
Contains the source code and data necessary to run all computations described in the following two papers: Finlayson, Mark A. and Kulkarni, Nidhi (2011) "Detecting Multi-Word Expressions improves Word Sense Disambiguation", in Proceedings of the 2011 Workshop on Multiword Expressions, held at ACL'2011 in Portland, OR; Kulkarni, Nidhi and Finlayson, Mark A. (2011) "jMWE: A Java Toolkit for Detecting Multi-Word Expressions" in Proceedings of the 2011 Workshop on Multiword Expressions, held at ACL'2011 in Portland, OR.
</description>
<pubDate>Mon, 09 May 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62792</guid>
<dc:date>2011-05-09T00:00:00Z</dc:date>
</item>
<item>
<title>Library Cache Coherence</title>
<link>https://hdl.handle.net/1721.1/62580</link>
<description>Library Cache Coherence
Shim, Keun Sup; Cho, Myong Hyon; Lis, Mieszko; Khan, Omer; Devadas, Srinivas
Directory-based cache coherence is a popular mechanism for chip multiprocessors and multicores. The directory protocol, however, requires multicast for invalidation messages and the collection of acknowledgement messages, which can be expensive in terms of latency and network traffic. Furthermore, the size of the directory increases with the number of cores. We present Library Cache Coherence (LCC), which requires neither broadcast/multicast for invalidations nor waiting for invalidation acknowledgements. A library is a set of timestamps that are used to auto-invalidate shared cache lines, and delay writes on the lines until all shared copies expire. The size of the library is independent of the number of cores. By removing the complex invalidation process of directory-based cache coherence protocols, LCC generates fewer network messages. At the same time, LCC also allows reads on a cache block to take place while a write to the block is being delayed, without breaking sequential consistency. As a result, LCC achieves 1.85X lower average memory latency than a MESI directory-based protocol on our set of benchmarks, even with a simple timestamp choosing algorithm; moreover, our experimental results on LCC with an ideal timestamp scheme (though not implementable) show the potential of further improvement for LCC with more sophisticated timestamp schemes.
</description>
<pubDate>Mon, 02 May 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62580</guid>
<dc:date>2011-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of User Traffic Characteristics on Mobile-Access versus Fixed-Access Networks</title>
<link>https://hdl.handle.net/1721.1/62579</link>
<description>Comparison of User Traffic Characteristics on Mobile-Access versus Fixed-Access Networks
Heikkinen, Mikko V. J.; Berger, Arthur W.
We compare Web traffic characteristics of mobile- versus fixed-access end-hosts, where herein the term "mobile" refers to access via cell towers, using for example the 3G/UMTS standard, and the term "fixed" includes Wi-Fi access. It is well-known that connection speeds are in general slower over mobile-access networks, and also that often there is higher packet loss. We were curious whether this leads mobile-access users to have smaller connections. We examined the distribution of the number of bytes-per-connection, and packet loss from a sampling of logs from servers of Akamai Technologies. We obtained 149 million connections, across 57 countries. The mean bytes-per-connection was typically larger for fixed-access: for two-thirds of the countries, it was at least one-third larger. Regarding distributions, we found that the difference between the bytes-per-connection for mobile- versus fixed-access, as well as the packet loss, was statistically significant for each of the countries; however the visual difference in plots is typically small. For some countries, mobile-access had the larger connections. As expected, mobile-access often had higher loss than fixed-access, but the reverse pertained for some countries. Typically packet loss increased during the busy period of the day, when mobile-access had a larger increase. Comparing our results from 2010 to those from 2009 of the same time period, we found that connections have become a bit smaller.
</description>
<pubDate>Tue, 03 May 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62579</guid>
<dc:date>2011-05-03T00:00:00Z</dc:date>
</item>
<item>
<title>ARBAC Policy for a Large Multi-National Bank</title>
<link>https://hdl.handle.net/1721.1/62562</link>
<description>ARBAC Policy for a Large Multi-National Bank
Jayaraman, Karthick; Ganesh, Vijay; Tripunitara, Mahesh; Rinard, Martin C.; Chapin, Steve J.
Administrative role-based access control (ARBAC) is the first comprehensive administrative model proposed for role-based access control (RBAC). ARBAC has several features for designing highly expressive policies, but current work has not highlighted the utility of these expressive policies. In this report, we present a case study of designing an ARBAC policy for a bank comprising 18 branches. Using this case study we provide an assessment about the features of ARBAC that are likely to be used in realistic policies.
</description>
<pubDate>Wed, 27 Apr 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62562</guid>
<dc:date>2011-04-27T00:00:00Z</dc:date>
</item>
<item>
<title>Collusive Dominant-Strategy Truthfulness</title>
<link>https://hdl.handle.net/1721.1/62301</link>
<description>Collusive Dominant-Strategy Truthfulness
Chen, Jing; Micali, Silvio
Fifty years ago, Vickrey published his famous mechanism for auctioning a single good in limited supply. The main property of Vickrey's mechanism is efficiency in dominant strategies. In the absence of collusion, this is a wonderful efficiency guarantee. We note, however, that collusion is far from rare in auctions, and if some colluders exist and have some wrong beliefs, then the Vickrey mechanism dramatically loses its efficiency. Accordingly, we put forward a new mechanism that, despite unconstrained collusion, guarantees efficiency by providing a richer set of strategies and ensuring that it is dominant for every player to reveal truthfully not only his own valuation, but also with whom he is colluding, if he is indeed colluding with someone else. Our approach meaningfully bypasses prior impossibility proofs.
</description>
<pubDate>Fri, 22 Apr 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62301</guid>
<dc:date>2011-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanism Design with Approximate Valuations</title>
<link>https://hdl.handle.net/1721.1/62296</link>
<description>Mechanism Design with Approximate Valuations
Chiesa, Alessandro; Micali, Silvio; Zhu, Zeyuan Allen
In mechanism design, we replace the strong assumption that each player knows his own payoff type EXACTLY with the more realistic assumption that he knows it only APPROXIMATELY. Specifically, we study the classical problem of maximizing social welfare in single-good auctions when players know their true valuations only within a constant multiplicative factor d in (0,1). Our approach is deliberately non-Bayesian and very conservative: each player i only knows that his true valuation is one among finitely many values in a d-APPROXIMATE SET, Ki, and his true valuation is ADVERSARIALLY and SECRETLY chosen in Ki at the beginning of the auction. We prove tight upper and lower bounds for the fraction of the maximum social welfare achievable in our model, in either dominant or undominated strategies, both via deterministic and probabilistic mechanisms. The landscape emerging is quite unusual and intriguing.
</description>
<pubDate>Wed, 16 Feb 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62296</guid>
<dc:date>2011-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>Partial Reversal Acyclicity</title>
<link>https://hdl.handle.net/1721.1/62295</link>
<description>Partial Reversal Acyclicity
Radeva, Tsvetomira; Lynch, Nancy
Partial Reversal (PR) is a link reversal algorithm which ensures that the underlying graph structure is destination-oriented and acyclic. These properties of PR make it useful in routing protocols and algorithms for solving leader election and mutual exclusion. While proofs exist to establish the acyclicity property of PR, they rely on assigning labels to either the nodes or the edges in the graph. In this work we present a simpler, direct proof of the acyclicity property of partial reversal without using any external or dynamic labeling mechanism. First, we provide a simple variant of the PR algorithm, and show that it maintains acyclicity. Next, we present a binary relation which maps the original PR algorithm to the new algorithm, and finally, we conclude that the acyclicity proof applies to the original PR algorithm as well.
</description>
<pubDate>Thu, 14 Apr 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62295</guid>
<dc:date>2011-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Gasping for AIR: Why we need Linked Rules and Justifications on the Semantic Web</title>
<link>https://hdl.handle.net/1721.1/62294</link>
<description>Gasping for AIR: Why we need Linked Rules and Justifications on the Semantic Web
Kagal, Lalana; Jacobi, Ian; Khandelwal, Ankesh
The Semantic Web is a distributed model for publishing, utilizing and extending structured information using Web protocols. One of the main goals of this technology is to automate the retrieval and integration of data and to enable the inference of interesting results. This automation requires logics and rule languages that make inferences, choose courses of action, and answer questions. The openness of the Web, however, leads to several issues including the handling of inconsistencies, integration of diverse information, and the determination of the quality and trustworthiness of the data. AIR is a Semantic Web-based rule language that provides this functionality while focusing on generating and tracking explanations for its inferences and actions as well as conforming to Linked Data principles. AIR supports Linked Rules, which allow rules to be combined, re-used and extended in a manner similar to Linked Data. Additionally, AIR explanations themselves are Semantic Web data so they can be used for further reasoning. In this paper we present an overview of AIR, discuss its potential as a Web rule language by providing examples of how its features can be leveraged for different inference requirements, and describe how justifications are represented and generated.
</description>
<pubDate>Sat, 16 Apr 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62294</guid>
<dc:date>2011-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Approximations in the HMAX Model</title>
<link>https://hdl.handle.net/1721.1/62293</link>
<description>Approximations in the HMAX Model
Chikkerur, Sharat; Poggio, Tomaso
The HMAX model is a biologically motivated architecture for computer vision whose components are in close agreement with existing physiological evidence. The model is capable of achieving close to human level performance on several rapid object recognition tasks. However, the model is computationally bound and has limited engineering applications in its current form. In this report, we present several approximations in order to increase the efficiency of the HMAX model. We outline approximations at several levels of the hierarchy and empirically evaluate the trade-offs between efficiency and accuracy. We also explore ways to quantify the representation capacity of the model.
</description>
<pubDate>Thu, 14 Apr 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62293</guid>
<dc:date>2011-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Marginal Likelihood Optimization in Blind Deconvolution</title>
<link>https://hdl.handle.net/1721.1/62035</link>
<description>Efficient Marginal Likelihood Optimization in Blind Deconvolution
Levin, Anat; Weiss, Yair; Durand, Fredo; Freeman, William T.
In blind deconvolution one aims to estimate from an input blurred image y a sharp image x and an unknown blur kernel k. Recent research shows that a key to success is to consider the overall shape of the posterior distribution p(x, k|y) and not only its mode. This leads to a distinction between MAPx,k strategies which estimate the mode pair x, k and often lead to undesired results, and MAPk strategies which select the best k while marginalizing over all possible x images. The MAPk principle is significantly more robust than the MAPx,k one, yet, it involves a challenging marginalization over latent images. As a result, MAPk techniques are considered complicated, and have not been widely exploited. This paper derives a simple approximated MAPk algorithm which involves only a modest modification of common MAPx,k algorithms. We show that MAPk can, in fact, be optimized easily, with no additional computational complexity.
</description>
<pubDate>Mon, 04 Apr 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62035</guid>
<dc:date>2011-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>A Comparison of Autonomic Decision Making Techniques</title>
<link>https://hdl.handle.net/1721.1/62020</link>
<description>A Comparison of Autonomic Decision Making Techniques
Maggio, Martina; Hoffmann, Henry; Santambrogio, Marco D.; Agarwal, Anant; Leva, Alberto
Autonomic computing systems are capable of adapting their behavior and resources thousands of times a second to automatically decide the best way to accomplish a given goal despite changing environmental conditions and demands. Different decision mechanisms are considered in the literature, but in the vast majority of the cases a single technique is applied to a given instance of the problem. This paper proposes a comparison of some state of the art approaches for decision making, applied to a self-optimizing autonomic system that allocates resources to a software application, which provides direct performance feedback at runtime. The Application Heartbeats framework is used to provide the sensor data (feedback), and a variety of decision mechanisms, from heuristics to control-theory and machine learning, are investigated. The results obtained with these solutions are compared by means of case studies using standard benchmarks.
</description>
<pubDate>Fri, 01 Apr 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62020</guid>
<dc:date>2011-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Oblivious Storage: Making Oblivious RAM Practical</title>
<link>https://hdl.handle.net/1721.1/62006</link>
<description>Remote Oblivious Storage: Making Oblivious RAM Practical
Boneh, Dan; Mazieres, David; Popa, Raluca Ada
Remote storage of data has become an increasingly attractive and advantageous option, especially due to cloud systems. While encryption protects the data, it does not hide the access pattern to the data. A natural solution is to access remote storage using an Oblivious RAM (ORAM), which provably hides all access patterns. While ORAM is asymptotically efficient, the best existing scheme (Pinkas and Reinman, Crypto'10) still has considerable overhead for a practical implementation: for M stored items, it stores 4 times and sometimes 6 times more items remotely, requires O(log^2 M) round trips to the storage server per request, and periodically blocks all data requests to shuffle all storage (which is a lengthy process). In this paper, we first define a notion related to ORAM, oblivious storage (OS), which captures more accurately and naturally the security setting of remote storage. Then, we propose a new ORAM/OS construction that solves the practicality issues just outlined: it has a storage constant of ~ 1, achieves O(1) round trips to the storage server per request, and allows requests to happen concurrently with the shuffle without jeopardizing security. Our construction consists of a new organization of server memory into a flat main part and a hierarchical shelter, a client-side index for rapidly locating identifiers at the server, a new shelter serving requests concurrent with the shuffle, and a data structure for locating items efficiently in a partially shuffled storage.
</description>
<pubDate>Wed, 30 Mar 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/62006</guid>
<dc:date>2011-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Multicore Performance Optimization Using Partner Cores</title>
<link>https://hdl.handle.net/1721.1/61978</link>
<description>Multicore Performance Optimization Using Partner Cores
Lau, Eric; Miller, Jason E; Choi, Inseok; Yeung, Donald; Amarasinghe, Saman; Agarwal, Anant
As the push for parallelism continues to increase the number of cores on a chip, and add to the complexity of system design, the task of optimizing performance at the application level becomes nearly impossible for the programmer. Much effort has been spent on developing techniques for optimizing performance at runtime, but many techniques for modern processors employ the use of speculative threads or performance counters. These approaches result in stolen cycles, or the use of an extra core, and such expensive penalties put demanding constraints on the gains provided by such methods. While processors have grown in power and complexity, the technology for small, efficient cores has emerged. We introduce the concept of Partner Cores for maximizing hardware power efficiency; these are low-area, low-power cores situated on-die, tightly coupled to each main processor core. We demonstrate that such cores enable performance improvement without incurring expensive penalties, and enable applications that are impossible on a traditional chip multiprocessor.
</description>
<pubDate>Fri, 25 Mar 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61978</guid>
<dc:date>2011-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>SEEC: A Framework for Self-aware Management of Multicore Resources</title>
<link>https://hdl.handle.net/1721.1/61950</link>
<description>SEEC: A Framework for Self-aware Management of Multicore Resources
Hoffmann, Henry; Maggio, Martina; Santambrogio, Marco D.; Leva, Alberto; Agarwal, Anant
This paper presents SEEC, a self-aware programming model, designed to reduce programming effort in modern multicore systems. In the SEEC model, application programmers specify application goals and progress, while systems programmers separately specify actions system software and hardware can take to affect an application (e.g. resource allocation). The SEEC runtime monitors applications and dynamically selects actions to meet application goals optimally (e.g. meeting performance while minimizing power consumption). The SEEC runtime optimizes system behavior for the application rather than requiring the application programmer to optimize for the system. This paper presents a detailed discussion of the SEEC model and runtime as well as several case studies demonstrating their benefits. SEEC is shown to optimize performance per Watt for a video encoder, find optimal resource allocation for an application with complex resource usage, and maintain the goals of multiple applications in the face of environmental fluctuations.
</description>
<pubDate>Thu, 24 Mar 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61950</guid>
<dc:date>2011-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>Intel Concurrent Collections for Haskell</title>
<link>https://hdl.handle.net/1721.1/61759</link>
<description>Intel Concurrent Collections for Haskell
Newton, Ryan; Chen, Chih-Ping; Marlow, Simon
Intel Concurrent Collections (CnC) is a parallel programming model in which a network of steps (functions) communicate through message-passing as well as a limited form of shared memory. This paper describes a new implementation of CnC for Haskell. Compared to existing parallel programming models for Haskell, CnC occupies a useful point in the design space: pure and deterministic like Evaluation Strategies, but more explicit about granularity and the structure of the parallel computation, which affords the programmer greater control over parallel performance. We present results on 4, 8, and 32-core machines demonstrating parallel speedups over 20x on non-trivial benchmarks.
</description>
<pubDate>Tue, 22 Mar 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61759</guid>
<dc:date>2011-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>BOOM: Broadcast Optimizations for On-chip Meshes</title>
<link>https://hdl.handle.net/1721.1/61695</link>
<description>BOOM: Broadcast Optimizations for On-chip Meshes
Krishna, Tushar; Beckmann, Bradford M.; Peh, Li-Shiuan; Reinhardt, Steven K.
Future many-core chips will require an on-chip network that can support broadcasts and multicasts at good power-performance. A vanilla on-chip network would send multiple unicast packets for each broadcast packet, resulting in latency, throughput and power overheads. Recent research in on-chip multicast support has proposed forking of broadcast/multicast packets within the network at the router buffers, but these techniques are far from ideal, since they increase buffer occupancy which lowers throughput, and packets incur delay and power penalties at each router. In this work, we analyze an ideal broadcast mesh; show the substantial gaps between state-of-the-art multicast NoCs and the ideal; then propose BOOM, which comprises a WHIRL routing protocol that ideally load balances broadcast traffic, a mXbar multicast crossbar circuit that enables multicast traversal at similar energy-delay as unicasts, and speculative bypassing of buffering for multicast flits. Together, they enable broadcast packets to approach the delay, energy, and throughput of the ideal fabric. Our simulations show BOOM realizing an average network latency that is 5% off ideal, attaining 96% of ideal throughput, with energy consumption that is 9% above ideal. Evaluations using synthetic traffic show BOOM achieving a latency reduction of 61%, throughput improvement of 63%, and buffer power reduction of 80% as compared to a baseline broadcast. Simulations with PARSEC benchmarks show BOOM reducing average request and network latency by 40% and 15% respectively.
</description>
<pubDate>Mon, 14 Mar 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61695</guid>
<dc:date>2011-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Fleets: Scalable Services in a Factored Operating System</title>
<link>https://hdl.handle.net/1721.1/61640</link>
<description>Fleets: Scalable Services in a Factored Operating System
Wentzlaff, David; Gruenwald, Charles, III; Beckmann, Nathan; Belay, Adam; Kasture, Harshad; Modzelewski, Kevin; Youseff, Lamia; Miller, Jason E.; Agarwal, Anant
Current monolithic operating systems are designed for uniprocessor systems, and their architecture reflects this. The rise of multicore and cloud computing is drastically changing the tradeoffs in operating system design. The culture of scarce computational resources is being replaced with one of abundant cores, where spatial layout of processes supplants time multiplexing as the primary scheduling concern. Efforts to parallelize monolithic kernels have been difficult and only marginally successful, and new approaches are needed. This paper presents fleets, a novel way of constructing scalable OS services. With fleets, traditional OS services are factored out of the kernel and moved into user space, where they are further parallelized into a distributed set of concurrent, message-passing servers. We evaluate fleets within fos, a new factored operating system designed from the ground up with scalability as the first-order design constraint. This paper details the main design principles of fleets, and how the system architecture of fos enables their construction. We describe the design and implementation of three critical fleets (network stack, page allocation, and file system) and compare with Linux. These comparisons show that fos achieves superior performance and has better scalability than Linux for large multicores; at 32 cores, fos's page allocator performs 4.5 times better than Linux, and fos's network stack performs 2.5 times better. Additionally, we demonstrate how fleets can adapt to changing resource demand, and the importance of spatial scheduling for good performance in multicores.
</description>
<pubDate>Wed, 09 Mar 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61640</guid>
<dc:date>2011-03-09T00:00:00Z</dc:date>
</item>
<item>
<title>Werner Reichardt: the man and his scientific legacy</title>
<link>https://hdl.handle.net/1721.1/61424</link>
<description>Werner Reichardt: the man and his scientific legacy
Poggio, Tomaso; Geiger, Gadi
Excerpts from a talk given by Tomaso Poggio in Tübingen on the opening of the Werner Reichardt Centrum für Integrative Neurowissenschaften, December 8, 2008.
</description>
<pubDate>Fri, 04 Mar 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61424</guid>
<dc:date>2011-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Decomposing Broadcast Algorithms Using Abstract MAC Layers</title>
<link>https://hdl.handle.net/1721.1/61391</link>
<description>Decomposing Broadcast Algorithms Using Abstract MAC Layers
Khabbazian, Majid; Kowalski, Dariusz; Kuhn, Fabian; Lynch, Nancy
In much of the theoretical literature on global broadcast algorithms for wireless networks, issues of message dissemination are considered together with issues of contention management. This combination leads to complicated algorithms and analysis, and makes it difficult to extend the work to more difficult communication problems. In this paper, we present results aimed at simplifying such algorithms and analysis by decomposing the treatment into two levels, using abstract "MAC layer" specifications to encapsulate contention management. We use two different abstract MAC layers: the basic layer of Kuhn, Lynch, and Newport, and a new probabilistic layer. We first present a typical randomized contention-management algorithm for a standard graph-based radio network model and show that it implements both abstract MAC layers. Then we combine this algorithm with greedy algorithms for single-message and multi-message global broadcast and analyze the combinations, using both abstract MAC layers as intermediate layers. Using the basic MAC layer, we prove a bound of O(D log(n / epsilon) log(Delta)) for the time to deliver a single message everywhere with probability 1 - epsilon, where D is the network diameter, n is the number of nodes, and Delta is the maximum node degree. Using the probabilistic layer, we prove a bound of O((D + log(n/epsilon)) log(Delta)), which matches the best previously-known bound for single-message broadcast over the physical network model. For multi-message broadcast, we obtain bounds of O((D + k Delta) log(n/epsilon) log(Delta)) using the basic layer and O((D + k Delta log(n/epsilon)) log(Delta)) using the probabilistic layer, for the time to deliver a message everywhere in the presence of at most k concurrent messages.
</description>
<pubDate>Wed, 23 Feb 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61391</guid>
<dc:date>2011-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>SoftCast: Clean-slate Scalable Wireless Video</title>
<link>https://hdl.handle.net/1721.1/61009</link>
<description>SoftCast: Clean-slate Scalable Wireless Video
Jakubczak, Szymon; Katabi, Dina
Video broadcast and mobile video challenge the conventional wireless design. In broadcast and mobile scenarios the bit rate supported by the channel differs across receivers and varies quickly over time. The conventional design, however, forces the source to pick a single bit rate and degrades sharply when the channel cannot support the chosen bit rate. This paper presents SoftCast, a clean-slate design for wireless video where the source transmits one video stream that each receiver decodes to a video quality commensurate with its specific instantaneous channel quality. To do so, SoftCast ensures the samples of the digital video signal transmitted on the channel are linearly related to the pixels' luminance. Thus, when channel noise perturbs the transmitted signal samples, the perturbation naturally translates into approximation in the original video pixels. Hence, a receiver with a good channel (low noise) obtains a high fidelity video, and a receiver with a bad channel (high noise) obtains a low fidelity video. We implement SoftCast using the GNURadio software and the USRP platform. Results from a 20-node testbed show that SoftCast improves the average video quality (i.e., PSNR) across broadcast receivers in our testbed by up to 5.5 dB. Even for a single receiver, it eliminates video glitches caused by mobility and increases robustness to packet loss by an order of magnitude.
</description>
<pubDate>Tue, 15 Feb 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61009</guid>
<dc:date>2011-02-15T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanism Design With Approximate Player Types</title>
<link>https://hdl.handle.net/1721.1/61008</link>
<description>Mechanism Design With Approximate Player Types
Chiesa, Alessandro; Micali, Silvio; Zhu, Zeyuan Allen
We investigate mechanism design when the players do not exactly know their types, but have instead only partial information about them.
</description>
<pubDate>Wed, 16 Feb 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/61008</guid>
<dc:date>2011-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Understanding Hierarchical Natural Language Commands for Robotic Navigation and Manipulation</title>
<link>https://hdl.handle.net/1721.1/60883</link>
<description>Towards Understanding Hierarchical Natural Language Commands for Robotic Navigation and Manipulation
Kollar, Thomas; Dickerson, Steven; Tellex, Stefanie; Banerjee, Ashis Gopal; Walter, Matthew R.; Teller, Seth; Roy, Nicholas
We describe a new model for understanding hierarchical natural language commands for robot navigation and manipulation. The model has three components: a semantic structure that captures the hierarchical structure of language; a cost function that maps the command's semantic structure to the robot's sensorimotor capabilities; and an efficient search method for finding the lowest-cost plan. We present a proof-of-concept system that carries out navigation commands in a simulated setting.
</description>
<pubDate>Tue, 01 Feb 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60883</guid>
<dc:date>2011-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>What is Decidable about Strings?</title>
<link>https://hdl.handle.net/1721.1/60877</link>
<description>What is Decidable about Strings?
Ganesh, Vijay; Minnes, Mia; Solar-Lezama, Armando; Rinard, Martin
We prove several decidability and undecidability results for the satisfiability/validity problem of formulas over a language of finite-length strings and integers (interpreted as lengths of strings). The atomic formulas over this language are equality over string terms (word equations), linear inequality over the length function (length constraints), and membership predicate over regular expressions (r.e.). These decidability questions are important in logic, program analysis and formal verification. Logicians have been attempting to resolve some of these questions for many decades, while practical satisfiability procedures for these formulas are increasingly important in the analysis of string-manipulating programs such as web applications and scripts. We prove three main theorems. First, we consider Boolean combination of quantifier-free formulas constructed out of word equations and length constraints. We show that if word equations can be converted to a solved form, a form relevant in practice, then the satisfiability problem for Boolean combination of word equations and length constraints is decidable. Second, we show that the satisfiability problem for word equations in solved form that are regular, length constraints and r.e. membership predicate is also decidable. Third, we show that the validity problem for the set of sentences written as a forall-exists quantifier alternation applied to positive word equations is undecidable. A corollary of this undecidability result is that this set is undecidable even for sentences with at most two occurrences of a string variable.
</description>
<pubDate>Tue, 01 Feb 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60877</guid>
<dc:date>2011-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>CryptDB: A Practical Encrypted Relational DBMS</title>
<link>https://hdl.handle.net/1721.1/60876</link>
<description>CryptDB: A Practical Encrypted Relational DBMS
Popa, Raluca Ada; Zeldovich, Nickolai; Balakrishnan, Hari
CryptDB is a DBMS that provides provable and practical privacy in the face of a compromised database server or curious database administrators. CryptDB works by executing SQL queries over encrypted data. At its core are three novel ideas: an SQL-aware encryption strategy that maps SQL operations to encryption schemes, adjustable query-based encryption which allows CryptDB to adjust the encryption level of each data item based on user queries, and onion encryption to efficiently change data encryption levels. CryptDB only empowers the server to execute queries that the users requested, and achieves maximum privacy given the mix of queries issued by the users. The database server fully evaluates queries on encrypted data and sends the result back to the client for final decryption; client machines do not perform any query processing and client-side applications run unchanged. Our evaluation shows that CryptDB has modest overhead: on the TPC-C benchmark on Postgres, CryptDB reduces throughput by 27% compared to regular Postgres. Importantly, CryptDB does not change the innards of existing DBMSs: we realized the implementation of CryptDB using client-side query rewriting/encrypting, user-defined functions, and server-side tables for public key information. As such, CryptDB is portable; porting CryptDB to MySQL required changing 86 lines of code, mostly at the connectivity layer.
</description>
<pubDate>Wed, 26 Jan 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60876</guid>
<dc:date>2011-01-26T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Output Learning via Spectral Filtering</title>
<link>https://hdl.handle.net/1721.1/60875</link>
<description>Multi-Output Learning via Spectral Filtering
Baldassarre, Luca; Rosasco, Lorenzo; Barla, Annalisa; Verri, Alessandro
In this paper we study a class of regularized kernel methods for vector-valued learning which are based on filtering the spectrum of the kernel matrix. The considered methods include Tikhonov regularization as a special case, as well as interesting alternatives such as vector-valued extensions of L2 boosting. Computational properties are discussed for various examples of kernels for vector-valued functions and the benefits of iterative techniques are illustrated. Generalizing previous results for the scalar case, we show finite sample bounds for the excess risk of the obtained estimator and, in turn, these results allow us to prove consistency both for regression and multi-category classification. Finally, we present some promising results of the proposed algorithms on artificial and real data.
</description>
<pubDate>Mon, 24 Jan 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60875</guid>
<dc:date>2011-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic and Statistical Analysis of Perforated Patterns</title>
<link>https://hdl.handle.net/1721.1/60675</link>
<description>Probabilistic and Statistical Analysis of Perforated Patterns
Misailovic, Sasa; Roy, Daniel M.; Rinard, Martin
We present a new foundation for the analysis and transformation of computer programs. Standard approaches involve the use of logical reasoning to prove that the applied transformation does not change the observable semantics of the program. Our approach, in contrast, uses probabilistic and statistical reasoning to justify the application of transformations that may change, within probabilistic bounds, the result that the program produces. Loop perforation transforms loops to execute fewer iterations. We show how to use our basic approach to justify the application of loop perforation to a set of computational patterns. Empirical results from computations drawn from the PARSEC benchmark suite demonstrate that these computational patterns occur in practice. We also outline a specification methodology that enables the transformation of subcomputations and discuss how to automate the approach.
</description>
<pubDate>Wed, 19 Jan 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60675</guid>
<dc:date>2011-01-19T00:00:00Z</dc:date>
</item>
<item>
<title>Flexible Execution of Plans with Choice and Uncertainty</title>
<link>https://hdl.handle.net/1721.1/60674</link>
<description>Flexible Execution of Plans with Choice and Uncertainty
Conrad, Patrick R; Williams, Brian C
Dynamic plan execution strategies allow an autonomous agent to respond to uncertainties, while improving robustness and reducing the need for an overly conservative plan. Executives have improved robustness by expanding the types of choices made dynamically, such as selecting alternate methods. However, in some approaches to date, these additional choices often induce significant storage requirements to make flexible execution possible. This paper presents a novel system called Drake, which is able to dramatically reduce the storage requirements in exchange for increased execution time for some computations. Drake frames a plan as a collection of related Simple Temporal Problems, and executes the plan with a fast dynamic scheduling algorithm. This scheduling algorithm leverages prior work in Assumption-based Truth Maintenance Systems to compactly record and reason over the family of Simple Temporal Problems. We also allow Drake to reason over temporal uncertainty and choices by using prior work in Simple Temporal Problems with Uncertainty, which can guarantee correct execution, regardless of the uncertain outcomes. On randomly generated structured plans with choice, framed as either Temporal Plan Networks or Disjunctive Temporal Problems, we show a reduction in the size of the solution set of around four orders of magnitude, compared to prior art.
</description>
<pubDate>Sat, 15 Jan 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60674</guid>
<dc:date>2011-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>Neurons That Confuse Mirror-Symmetric Object Views</title>
<link>https://hdl.handle.net/1721.1/60379</link>
<description>Neurons That Confuse Mirror-Symmetric Object Views
Mutch, Jim; Leibo, Joel Z; Smale, Steve; Rosasco, Lorenzo; Poggio, Tomaso
Neurons in inferotemporal cortex that respond similarly to many pairs of mirror-symmetric images -- for example, 45 degree and -45 degree views of the same face -- have often been reported. The phenomenon seemed to be an interesting oddity. However, the same phenomenon has also emerged in simple hierarchical models of the ventral stream. Here we state a theorem characterizing sufficient conditions for this curious invariance to occur in a rather large class of hierarchical networks and demonstrate it with simulations.
</description>
<pubDate>Fri, 31 Dec 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60379</guid>
<dc:date>2010-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Generic Invariances in Object Recognition:  Translation and Scale</title>
<link>https://hdl.handle.net/1721.1/60378</link>
<description>Learning Generic Invariances in Object Recognition:  Translation and Scale
Leibo, Joel Z; Mutch, Jim; Rosasco, Lorenzo; Ullman, Shimon; Poggio, Tomaso
Invariance to various transformations is key to object recognition but existing definitions of invariance are somewhat confusing while discussions of invariance are often confused. In this report, we provide an operational definition of invariance by formally defining perceptual tasks as classification problems. The definition should be appropriate for physiology, psychophysics and computational modeling. For any specific object, invariance can be trivially ``learned'' by memorizing a sufficient number of example images of the transformed object. While our formal definition of invariance also covers such cases, this report focuses instead on invariance from very few images and mostly on invariances from one example. Image-plane invariances -- such as translation, rotation and scaling -- can be computed from a single image for any object. They are called generic since in principle they can be hardwired or learned (during development) for any object. In this perspective, we characterize the invariance range of a class of feedforward architectures for visual recognition that mimic the hierarchical organization of the ventral stream. We show that this class of models achieves essentially perfect translation and scaling invariance for novel images. In this architecture a new image is represented in terms of weights of "templates" (e.g. "centers" or "basis functions") at each level in the hierarchy. Such a representation inherits the invariance of each template, which is implemented through replication of the corresponding "simple" units across positions or scales and their "association" in a "complex" unit. We show simulations on real images that characterize the type and number of templates needed to support the invariant recognition of novel objects. We find that 1) the templates need not be visually similar to the target objects and that 2) a very small number of them is sufficient for good recognition. 
These somewhat surprising empirical results have intriguing implications for the learning of invariant recognition during the development of a biological organism, such as a human baby. In particular, we conjecture that invariance to translation and scale may be learned by the association -- through temporal contiguity -- of a small number of primal templates, that is patches extracted from the images of an object moving on the retina across positions and scales. The number of templates can later be augmented by bootstrapping mechanisms using the correspondence provided by the primal templates -- without the need of temporal contiguity.
</description>
<pubDate>Thu, 30 Dec 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60378</guid>
<dc:date>2010-12-30T00:00:00Z</dc:date>
</item>
<item>
<title>Conservative Rationalizability and The Second-Knowledge Mechanism</title>
<link>https://hdl.handle.net/1721.1/60371</link>
<description>Conservative Rationalizability and The Second-Knowledge Mechanism
Chen, Jing; Micali, Silvio
In mechanism design, the traditional way of modeling the players' incomplete information about their opponents is "assuming a Bayesian." This assumption, however, is very strong and does not hold in many real applications. Accordingly, we put forward (1) a set-theoretic way to model the knowledge that a player might have about his opponents, and (2) a new class of mechanisms capable of leveraging such more conservative knowledge in a robust way. In auctions of a single good, we show that such a new mechanism can perfectly guarantee a revenue benchmark (always lying in between the second highest and the highest valuation) that no classical mechanism can even approximate in any robust way.
</description>
<pubDate>Mon, 20 Dec 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60371</guid>
<dc:date>2010-12-20T00:00:00Z</dc:date>
</item>
<item>
<title>Conservative-Bayesian Mechanism Design</title>
<link>https://hdl.handle.net/1721.1/60370</link>
<description>Conservative-Bayesian Mechanism Design
Azar, Pablo; Chen, Jing; Micali, Silvio
Classical Bayesian mechanism design is "centralized," that is, the designer is assumed to know the distribution D from which the players' type profile has been drawn. We instead investigate a very "decentralized" Bayesian model, where the designer has no knowledge at all, and each player only has some probabilistic information about D. For this decentralized model and many contexts of interest, where the goal is to maximize revenue, we show that, for arbitrary type distributions D (in particular, correlated ones), it is possible to design mechanisms matching to a significant extent the performance of the optimal centralized mechanisms. Our results are "existential" for a broad class of contexts (including combinatorial auctions) and "constructive" for auctions of a single good.
</description>
<pubDate>Mon, 20 Dec 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60370</guid>
<dc:date>2010-12-20T00:00:00Z</dc:date>
</item>
<item>
<title>Heracles: Fully Synthesizable Parameterized MIPS-Based Multicore System</title>
<link>https://hdl.handle.net/1721.1/60266</link>
<description>Heracles: Fully Synthesizable Parameterized MIPS-Based Multicore System
Kinsy, Michel; Pellauer, Michael
Heracles is an open-source complete multicore system written in Verilog. It is fully parameterized and can be reconfigured and synthesized into different topologies and sizes. Each processing node has a fully bypassed, 7-stage-pipeline microprocessor running the MIPS-III ISA; a 4-stage, input-buffered, virtual-channel router; and a local variable-size shared memory. Our design is highly modular with clear interfaces between the core, the memory hierarchy, and the on-chip network. In the baseline design, the microprocessor is attached to two caches, one instruction cache and one data cache, which are oblivious to the global memory organization. The memory system in Heracles can be configured as one single global shared memory (SM), or distributed shared memory (DSM), or any combination thereof. Each core is connected to the rest of the network of processors by a parameterized, realistic, wormhole router. We show different topology configurations of the system, and their synthesis results on the Xilinx Virtex-5 LX330T FPGA board. We also provide a small MIPS cross-compiler toolchain to assist in developing software for Heracles.
</description>
<pubDate>Wed, 08 Dec 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60266</guid>
<dc:date>2010-12-08T00:00:00Z</dc:date>
</item>
<item>
<title>From primal templates to invariant recognition</title>
<link>https://hdl.handle.net/1721.1/60216</link>
<description>From primal templates to invariant recognition
Leibo, Joel Z; Mutch, Jim; Ullman, Shimon; Poggio, Tomaso
We can immediately recognize novel objects seen only once before -- in different positions on the retina and at different scales (distances). Is this ability hardwired by our genes or learned during development -- and if so how? We present a computational proof that developmental learning of invariance in recognition is possible and can emerge rapidly. This computational work sets the stage for experiments on the development of object invariance while suggesting a specific mechanism that may be critically tested.
</description>
<pubDate>Sat, 04 Dec 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60216</guid>
<dc:date>2010-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Verification of Semantic Commutativity Conditions and Inverse Operations on Linked Data Structures</title>
<link>https://hdl.handle.net/1721.1/60078</link>
<description>Verification of Semantic Commutativity Conditions and Inverse Operations on Linked Data Structures
Kim, Deokhwan; Rinard, Martin C.
Commuting operations play a critical role in many parallel computing systems. We present a new technique for verifying commutativity conditions, which are logical formulas that characterize when operations commute. Because our technique reasons with the abstract state of verified linked data structure implementations, it can verify commuting operations that produce semantically equivalent (but not identical) data structure states in different execution orders. We have used this technique to verify sound and complete commutativity conditions for all pairs of operations on a collection of linked data structure implementations, including data structures that export a set interface (ListSet and HashSet) as well as data structures that export a map interface (AssociationList, HashTable, and ArrayList). This effort involved the specification and verification of 765 commutativity conditions. Many speculative parallel systems need to undo the effects of speculatively executed operations. Inverse operations, which undo these effects, are often more efficient than alternate approaches (such as saving and restoring data structure state). We present a new technique for verifying such inverse operations. We have specified and verified, for all of our linked data structure implementations, an inverse operation for every operation that changes the data structure state. Together, the commutativity conditions and inverse operations provide a key resource that language designers and system developers can draw on to build parallel languages and systems with strong correctness guarantees.
</description>
<pubDate>Fri, 03 Dec 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60078</guid>
<dc:date>2010-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>LEAP Scratchpads: Automatic Memory and Cache Management for Reconfigurable Logic [Extended Version]</title>
<link>https://hdl.handle.net/1721.1/60045</link>
<description>LEAP Scratchpads: Automatic Memory and Cache Management for Reconfigurable Logic [Extended Version]
Adler, Michael; Fleming, Kermin E.; Parashar, Angshuman; Pellauer, Michael; Emer, Joel
Developers accelerating applications on FPGAs or other reconfigurable logic have nothing but raw memory devices in their standard toolkits. Each project typically includes tedious development of single-use memory management. Software developers expect a programming environment to include automatic memory management. Virtual memory provides the illusion of very large arrays and processor caches reduce access latency without explicit programmer instructions. LEAP scratchpads for reconfigurable logic dynamically allocate and manage multiple, independent, memory arrays in a large backing store. Scratchpad accesses are cached automatically in multiple levels, ranging from shared on-board, RAM-based, set-associative caches to private caches stored in FPGA RAM blocks. In the LEAP framework, scratchpads share the same interface as on-die RAM blocks and are plug-in replacements. Additional libraries support heap management within a storage set. Like software developers, accelerator authors using scratchpads may focus more on core algorithms and less on memory management. Two uses of FPGA scratchpads are analyzed: buffer management in an H.264 decoder and memory management within a processor microarchitecture timing model.
CORRECTION: The authors for entry [4] in the references should have been "E. S. Chung, &#13;
J. C. Hoe, and K. Mai".
</description>
<pubDate>Tue, 23 Nov 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60045</guid>
<dc:date>2010-11-23T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable directoryless shared memory coherence using execution migration</title>
<link>https://hdl.handle.net/1721.1/60039</link>
<description>Scalable directoryless shared memory coherence using execution migration
Lis, Mieszko; Shim, Keun Sup; Cho, Myong Hyon; Khan, Omer; Devadas, Srinivas
We introduce the concept of deadlock-free migration-based coherent shared memory to the NUCA family of architectures. Migration-based architectures move threads among cores to guarantee sequential semantics in large multicores. Using an execution migration (EM) architecture, we achieve performance comparable to directory-based architectures without using directories: avoiding automatic data replication significantly reduces cache miss rates, while a fast network-level thread migration scheme takes advantage of shared data locality to reduce remote cache accesses that limit traditional NUCA performance. EM area and energy consumption are very competitive, and, on average, it outperforms a directory-based MOESI baseline by 6.8% and a traditional S-NUCA design by 9.2%. We argue that with EM scaling performance has much lower cost and design complexity than in directory-based coherence and traditional NUCA architectures: by merely scaling network bandwidth from 128 to 256 (512) bit flits, the performance of our architecture improves by an additional 8% (12%), while the baselines show negligible improvement.
</description>
<pubDate>Mon, 22 Nov 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60039</guid>
<dc:date>2010-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>One-Shot Learning with a Hierarchical Nonparametric Bayesian Model</title>
<link>https://hdl.handle.net/1721.1/60025</link>
<description>One-Shot Learning with a Hierarchical Nonparametric Bayesian Model
Salakhutdinov, Ruslan; Tenenbaum, Josh; Torralba, Antonio
We develop a hierarchical Bayesian model that learns to learn categories from single training examples. The model transfers acquired knowledge from previously learned categories to a novel category, in the form of a prior over category means and variances. The model discovers how to group categories into meaningful super-categories that express different priors for new classes. Given a single example of a novel category, we can efficiently infer which super-category the novel category belongs to, and thereby estimate not only the new category's mean but also an appropriate similarity metric based on parameters inherited from the super-category. On MNIST and MSR Cambridge image datasets the model learns useful representations of novel categories based on just a single training example, and performs significantly better than simpler hierarchical Bayesian approaches. It can also discover new categories in a completely unsupervised fashion, given just one or a few examples.
</description>
<pubDate>Wed, 13 Oct 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60025</guid>
<dc:date>2010-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Generalization and Properties of the Neural Response</title>
<link>https://hdl.handle.net/1721.1/60024</link>
<description>Generalization and Properties of the Neural Response
Bouvrie, Jake; Poggio, Tomaso; Rosasco, Lorenzo; Smale, Steve; Wibisono, Andre
Hierarchical learning algorithms have enjoyed tremendous growth in recent years, with many new algorithms being proposed and applied to a wide range of applications. However, despite the apparent success of hierarchical algorithms in practice, the theory of hierarchical architectures remains at an early stage. In this paper we study the theoretical properties of hierarchical algorithms from a mathematical perspective. Our work is based on the framework of hierarchical architectures introduced by Smale et al. in the paper "Mathematics of the Neural Response", Foundations of Computational Mathematics, 2010. We propose a generalized definition of the neural response and derived kernel that allows us to integrate some of the existing hierarchical algorithms in practice into our framework. We then use this generalized definition to analyze the theoretical properties of hierarchical architectures. Our analysis focuses on three particular aspects of the hierarchy. First, we show that a wide class of architectures suffers from range compression; essentially, the derived kernel becomes increasingly saturated at each layer. Second, we show that the complexity of a linear architecture is constrained by the complexity of the first layer, and in some cases the architecture collapses into a single-layer linear computation. Finally, we characterize the discrimination and invariance properties of the derived kernel in the case when the input data are one-dimensional strings. We believe that these theoretical results will provide a useful foundation for guiding future developments within the theory of hierarchical algorithms.
</description>
<pubDate>Fri, 19 Nov 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/60024</guid>
<dc:date>2010-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>A Tree-Based Context Model for Object Recognition</title>
<link>https://hdl.handle.net/1721.1/59799</link>
<description>A Tree-Based Context Model for Object Recognition
Choi, Myung Jin; Lim, Joseph J.; Torralba, Antonio; Willsky, Alan S.
There has been a growing interest in exploiting contextual information in addition to local features to detect and localize multiple object categories in an image. A context model can rule out some unlikely combinations or locations of objects and guide detectors to produce a semantically coherent interpretation of a scene. However, the performance benefit of context models has been limited because most of the previous methods were tested on datasets with only a few object categories, in which most images contain one or two object categories. In this paper, we introduce a new dataset with images that contain many instances of different object categories, and propose an efficient model that captures the contextual information among more than a hundred object categories using a tree structure. Our model incorporates global image features, dependencies between object categories, and outputs of local detectors into one probabilistic framework. We demonstrate that our context model improves object recognition performance and provides a coherent interpretation of a scene, which enables a reliable image querying system by multiple object categories. In addition, our model can be applied to scene understanding tasks that local detectors alone cannot solve, such as detecting objects out of context or querying for the most typical and the least typical scenes in a dataset.
</description>
<pubDate>Fri, 29 Oct 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/59799</guid>
<dc:date>2010-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>SEEC: A Framework for Self-aware Computing</title>
<link>https://hdl.handle.net/1721.1/59519</link>
<description>SEEC: A Framework for Self-aware Computing
Hoffmann, Henry; Maggio, Martina; Santambrogio, Marco D.; Leva, Alberto; Agarwal, Anant
As the complexity of computing systems increases, application programmers must be experts in their application domain and have the systems knowledge required to address the problems that arise from parallelism, power, energy, and reliability concerns. One approach to relieving this burden is to make use of self-aware computing systems, which automatically adjust their behavior to help applications achieve their goals. This paper presents the SEEC framework, a unified computational model designed to enable self-aware computing in both applications and system software. In the SEEC model, applications specify goals, system software specifies possible actions, and the SEEC framework is responsible for deciding how to use the available actions to meet the application-specified goals. The SEEC framework is built around a general and extensible control system which provides predictable behavior and allows SEEC to make decisions that achieve goals while optimizing resource utilization. To demonstrate the applicability of the SEEC framework, this paper presents five different self-aware systems built using SEEC. Case studies demonstrate how these systems can control the performance of the PARSEC benchmarks, optimize performance per Watt for a video encoder, and respond to unexpected changes in the underlying environment. In general these studies demonstrate that systems built using the SEEC framework are goal-oriented, predictable, adaptive, and extensible.
</description>
<pubDate>Wed, 13 Oct 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/59519</guid>
<dc:date>2010-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Audit Trails in the Aeolus Distributed Security Platform</title>
<link>https://hdl.handle.net/1721.1/58772</link>
<description>Audit Trails in the Aeolus Distributed Security Platform
Popic, Victoria
This thesis provides a complete design and implementation of audit trail collection and storage for Aeolus, a distributed security platform based on information flow control. An information flow control system regulates all activities that concern information security. By recording all the operations monitored by Aeolus, our audit trails capture all actions that can affect system security. In our system, event records are collected on each system node and shipped to a centralized location, where they are stored and processed. To correlate audit trail events of different system nodes we store event dependencies directly in the event records. Each audit trail record keeps links to its immediate predecessors. Therefore, our audit trails form dependency graphs that capture the causal relationship among system events. These graphs can be used to reconstruct the chains of events leading to a given system state. Our results show that audit trail collection imposes a small overhead on system performance.
MEng thesis
</description>
<pubDate>Wed, 29 Sep 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/58772</guid>
<dc:date>2010-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian perceptual inference in linear Gaussian models</title>
<link>https://hdl.handle.net/1721.1/58669</link>
<description>Bayesian perceptual inference in linear Gaussian models
Battaglia, Peter W.
The aim of this paper is to provide perceptual scientists with a quantitative framework for modeling a variety of common perceptual behaviors, and to unify various perceptual inference tasks by exposing their common computational underpinnings. This paper derives a model Bayesian observer for perceptual contexts with linear Gaussian generative processes. I demonstrate the relationship between four fundamental perceptual situations by expressing their corresponding posterior distributions as consequences of the model's predictions under their respective assumptions.
</description>
<pubDate>Tue, 21 Sep 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/58669</guid>
<dc:date>2010-09-21T00:00:00Z</dc:date>
</item>
<item>
<title>A File Location, Replication, and Distribution System for Network Information to Aid Network Management</title>
<link>https://hdl.handle.net/1721.1/58668</link>
<description>A File Location, Replication, and Distribution System for Network Information to Aid Network Management
Cheng, Tiffany
This thesis demonstrates and evaluates the design, architecture, and implementation of a file location, replication, and distribution system built with the objective of managing information in an Internet network. The system's goal is to enable the availability of information by providing alternative locations for files in case of situations where the original piece of information cannot be found in the network due to failures or other problems. The system provides the mechanism for duplicating files and executes the act of placing them in multiple locations according to predefined rules for distribution. The resulting system is a working model for a file management system that can exist over the Internet and will aid in overall network management by organizing and overseeing the information found within a network.
MEng thesis
</description>
<pubDate>Wed, 22 Sep 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/58668</guid>
<dc:date>2010-09-22T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Solutions of Similar Linear Programming Problems using Boosting Trees</title>
<link>https://hdl.handle.net/1721.1/58609</link>
<description>Learning Solutions of Similar Linear Programming Problems using Boosting Trees
Banerjee, Ashis Gopal; Roy, Nicholas
In many optimization problems, similar linear programming (LP) problems occur in the nodes of the branch and bound trees that are used to solve integer (mixed or pure, deterministic or stochastic) programming problems. Similar LP problems are also found in problem domains where the objective function and constraint coefficients vary due to uncertainties in the operating conditions. In this report, we present a regression technique for learning a set of functions that map the objective function and the constraints to the decision variables of such an LP system by modifying boosting trees, an algorithm we term the Boost-LP algorithm. Matrix transformations and geometric properties of boosting trees are utilized to provide theoretical performance guarantees on the predicted values. The standard form of the loss function is altered to reduce the possibility of generating infeasible LP solutions. Experimental results on three different problems, one each on scheduling, routing, and planning, demonstrate the effectiveness of the Boost-LP algorithm in providing significant computational benefits over regular optimization solvers without generating solutions that deviate appreciably from the optimum values.
</description>
<pubDate>Sat, 18 Sep 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/58609</guid>
<dc:date>2010-09-18T00:00:00Z</dc:date>
</item>
<item>
<title>Conservative-Bayesian Mechanisms</title>
<link>https://hdl.handle.net/1721.1/58486</link>
<description>Conservative-Bayesian Mechanisms
Azar, Pablo; Chen, Jing; Micali, Silvio
We put forward a new class of mechanisms. In this extended abstract, we exemplify our approach only for single-good auctions in what we call a conservative-Bayesian setting. (Essentially, no common-knowledge about the underlying distribution of the players' valuations is required.) We prove that our mechanism is optimal in this challenging and realistic setting.
</description>
<pubDate>Wed, 08 Sep 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/58486</guid>
<dc:date>2010-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Practical Color-Based Motion Capture</title>
<link>https://hdl.handle.net/1721.1/58485</link>
<description>Practical Color-Based Motion Capture
Wang, Robert; Paris, Sylvain; Popovic, Jovan
Motion capture systems have been widely used for high quality content creation and virtual reality but are rarely used in consumer applications due to their price and setup cost. In this paper, we propose a motion capture system built from commodity components that can be deployed in a matter of minutes. Our approach uses one or more webcams and a color shirt to track the upper-body at interactive rates. We describe a robust color calibration system that enables our color-based tracking to work against cluttered backgrounds and under multiple illuminants. We demonstrate our system in several real-world indoor and outdoor settings.
</description>
<pubDate>Fri, 10 Sep 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/58485</guid>
<dc:date>2010-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Reliably Detecting Connectivity using Local Graph Traits</title>
<link>https://hdl.handle.net/1721.1/58484</link>
<description>Reliably Detecting Connectivity using Local Graph Traits
Cornejo, Alejandro; Lynch, Nancy
Local distributed algorithms can only gather sufficient information to identify local graph traits, that is, properties that hold within the local neighborhood of each node. However, it is frequently the case that global graph properties (connectivity, diameter, girth, etc) have a large influence on the execution of a distributed algorithm. This paper studies local graph traits and their relationship with global graph properties. Specifically, we focus on graph k-connectivity. First we prove a negative result that shows there does not exist a local graph trait which perfectly captures graph k-connectivity. We then present three different local graph traits which can be used to reliably predict the k-connectivity of a graph with varying degrees of accuracy. As a simple application of these results, we present upper and lower bounds for a local distributed algorithm which determines if a graph is k-connected. As a more elaborate application of local graph traits, we describe, and prove the correctness of, a local distributed algorithm that preserves k-connectivity in mobile ad hoc networks while allowing nodes to move independently whenever possible.
</description>
<pubDate>Thu, 09 Sep 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/58484</guid>
<dc:date>2010-09-09T00:00:00Z</dc:date>
</item>
<item>
<title>An Overview of MOOS-IvP and a Users Guide to the IvP Helm Autonomy Software</title>
<link>https://hdl.handle.net/1721.1/57583</link>
<description>An Overview of MOOS-IvP and a Users Guide to the IvP Helm Autonomy Software
Benjamin, Michael R.; Newman, Paul; Schmidt, Henrik; Leonard, John J.
This document describes the IvP Helm -- an Open Source behavior-based autonomy application for unmanned vehicles. IvP is short for interval programming -- a technique for representing and solving multi-objective optimization problems. Behaviors in the IvP Helm are reconciled using multi-objective optimization when in competition with each other for influence over the vehicle. The IvP Helm is written as a MOOS application where MOOS is a set of Open Source publish-subscribe autonomy middleware tools. This document describes the configuration and use of the IvP Helm, provides examples of simple missions and information on how to download and build the software from the MOOS-IvP server at www.moosivp.org.
</description>
<pubDate>Fri, 27 Aug 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57583</guid>
<dc:date>2010-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>The Abstract MAC Layer</title>
<link>https://hdl.handle.net/1721.1/57577</link>
<description>The Abstract MAC Layer
Kuhn, Fabian; Lynch, Nancy; Newport, Calvin
A diversity of possible communication assumptions complicates the study of algorithms and lower bounds for radio networks. We address this problem by defining an abstract MAC layer. This service provides reliable local broadcast communication, with timing guarantees stated in terms of a collection of abstract delay functions applied to the relevant contention. Algorithm designers can analyze their algorithms in terms of these functions, independently of specific channel behavior. Concrete implementations of the abstract MAC layer over basic radio network models generate concrete definitions for these delay functions, automatically adapting bounds proven for the abstract service to bounds for the specific radio network under consideration. To illustrate this approach, we use the abstract MAC layer to study the new problem of Multi-Message Broadcast, a generalization of standard single-message broadcast in which multiple messages can originate at different times and locations in the network. We present and analyze two algorithms for Multi-Message Broadcast in static networks: a simple greedy algorithm and one that uses regional leaders. We then indicate how these results can be extended to mobile networks.
</description>
<pubDate>Thu, 26 Aug 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57577</guid>
<dc:date>2010-08-26T00:00:00Z</dc:date>
</item>
<item>
<title>MOOS-IvP Autonomy Tools Users Manual</title>
<link>https://hdl.handle.net/1721.1/57509</link>
<description>MOOS-IvP Autonomy Tools Users Manual
Benjamin, Michael R.
This document describes fifteen MOOS-IvP autonomy tools. uHelmScope provides a run-time scoping window into the state of an active IvP Helm executing its mission. pMarineViewer is a geo-based GUI tool for rendering marine vehicles and geometric data in their operational area. uXMS is a terminal based tool for scoping on a MOOSDB process. uTermCommand is a terminal based tool for poking a MOOSDB with a set of MOOS file pre-defined variable-value pairs selectable with aliases from the command-line. pEchoVar provides a way of echoing a post to one MOOS variable with a new post having the same value to a different variable. uProcessWatch monitors the presence or absence of a set of MOOS processes and summarizes the collective status in a single MOOS variable. uPokeDB provides a way of poking the MOOSDB from the command line with one or more variable-value pairs without any pre-existing configuration of a MOOS file. uTimerScript will execute a pre-defined timed pausable script of poking variable-value pairs to a MOOSDB. pNodeReporter summarizes a platform's critical information into a single node report string for sharing beyond the vehicle. pBasicContactMgr provides a basic contact management service with the ability to generate range-dependent configurable alerts. The Alog Toolbox is a set of offline tools for analyzing and manipulating log files in the .alog format.
</description>
<pubDate>Mon, 23 Aug 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57509</guid>
<dc:date>2010-08-23T00:00:00Z</dc:date>
</item>
<item>
<title>UCM/MIT Indications, Referring Expressions, and Coreference Corpus (UMIREC corpus) v1.1</title>
<link>https://hdl.handle.net/1721.1/57507</link>
<description>UCM/MIT Indications, Referring Expressions, and Coreference Corpus (UMIREC corpus) v1.1
Finlayson, Mark Alan; Hervas, Raquel
The corpus comprises 62 files in "Story Workbench" annotation format: 30 folktales in English from a variety of sources, and 32 Wall Street Journal articles selected to coincide with articles found in the Penn Treebank. The files are annotated with the location of referring expressions, coreference relations between the referring expressions, and so-called "indication structures", which split referring expressions into constituents (nuclei and modifiers) and mark each constituent as either 'distinctive' or 'descriptive', indicating whether or not the constituent contains information required for uniquely identifying the referent. The files distributed in this corpus archive are the gold-standard files, which were constructed by merging annotations done by two trained annotators. The contents of this corpus, the annotation procedure, and the indication structures are described in more detail in a paper titled "The Prevalence of Descriptive Referring Expressions in News and Narrative" published in the proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, held in July 2010 in Uppsala, Sweden (ACL-2010). A near-final version of the paper is included in the doc/ directory of the compressed corpus archive file.&#13;
This is version 1.1 of the UMIREC corpus, in which the coreference annotations have been fixed relative to version 1.0. UMIREC v1.0 suffered from a bug in the export script that corrupted the coreference data.
</description>
<pubDate>Wed, 12 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57507</guid>
<dc:date>2010-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Parallelizing Sequential Programs With Statistical Accuracy Tests</title>
<link>https://hdl.handle.net/1721.1/57475</link>
<description>Parallelizing Sequential Programs With Statistical Accuracy Tests
Misailovic, Sasa; Kim, Deokhwan; Rinard, Martin
We present QuickStep, a novel system for parallelizing sequential programs. QuickStep deploys a set of parallelization transformations that together induce a search space of candidate parallel programs. Given a sequential program, representative inputs, and an accuracy requirement, QuickStep uses performance measurements, profiling information, and statistical accuracy tests on the outputs of candidate parallel programs to guide its search for a parallelization that maximizes performance while preserving acceptable accuracy. When the search completes, QuickStep produces an interactive report that summarizes the applied parallelization transformations, performance, and accuracy results for the automatically generated candidate parallel programs. In our envisioned usage scenarios, the developer examines this report to evaluate the acceptability of the final parallelization and to obtain insight into how the original sequential program responds to different parallelization strategies. It is also possible for the developer (or even a user of the program who has no software development expertise whatsoever) to simply use the best parallelization out of the box without examining the report or further investigating the parallelization. Results from our benchmark set of applications show that QuickStep can automatically generate accurate and efficient parallel programs---the automatically generated parallel versions of five of our six benchmark applications run between 5.0 and 7.7 times faster on 8 cores than the original sequential versions. Moreover, a comparison with the Intel icc compiler highlights how QuickStep can effectively parallelize applications with features (such as the use of modern object-oriented programming constructs or desirable parallelizations with infrequent but acceptable data races) that place them inherently beyond the reach of standard approaches.
</description>
<pubDate>Thu, 05 Aug 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57475</guid>
<dc:date>2010-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Learning Procedure for Deep Boltzmann Machines</title>
<link>https://hdl.handle.net/1721.1/57474</link>
<description>An Efficient Learning Procedure for Deep Boltzmann Machines
Salakhutdinov, Ruslan; Hinton, Geoffrey
We present a new learning algorithm for Boltzmann Machines that contain many layers of hidden variables. Data-dependent statistics are estimated using a variational approximation that tends to focus on a single mode, and data-independent statistics are estimated using persistent Markov chains. The use of two quite different techniques for estimating the two types of statistic that enter into the gradient of the log likelihood makes it practical to learn Boltzmann Machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer "pre-training" phase that initializes the weights sensibly. The pre-training also allows the variational inference to be initialized sensibly with a single bottom-up pass. We present results on the MNIST and NORB datasets showing that Deep Boltzmann Machines learn very good generative models of hand-written digits and 3-D objects. We also show that the features discovered by Deep Boltzmann Machines are a very effective way to initialize the hidden layers of feed-forward neural nets which are then discriminatively fine-tuned.
</description>
<pubDate>Wed, 04 Aug 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57474</guid>
<dc:date>2010-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>MAC Design for Analog Network Coding</title>
<link>https://hdl.handle.net/1721.1/57473</link>
<description>MAC Design for Analog Network Coding
Khabbazian, Majid; Kuhn, Fabian; Lynch, Nancy; Medard, Muriel; ParandehGheibi, Ali
Most medium access control mechanisms discard collided packets and consider interference harmful. Recent work on Analog Network Coding (ANC) suggests a different approach, in which multiple interfering transmissions are strategically scheduled. The received collisions are collected and then used in a decoding process, such as the ZigZag decoding process, where the packets involved in the collisions are extracted. In this paper, we present an algebraic representation of collisions and describe a general approach to recovering collisions using ANC. To study the effect of using ANC on the performance of MAC layers, we develop an ANC-based algorithm that implements an abstract MAC layer service, as defined in [1, 2], and analyze its performance. This study proves that ANC can significantly improve the performance of MAC layer services, in terms of probabilistic time guarantees for packet delivery. We illustrate how this improvement at the MAC layer can translate into faster higher-level algorithms, by analyzing the time complexity of a multiple-message network-wide broadcast algorithm that uses our ANC-based MAC service.
</description>
<pubDate>Mon, 02 Aug 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57473</guid>
<dc:date>2010-08-02T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and Invariance in a Family of Hierarchical Kernels</title>
<link>https://hdl.handle.net/1721.1/57464</link>
<description>Learning and Invariance in a Family of Hierarchical Kernels
Wibisono, Andre; Bouvrie, Jake; Rosasco, Lorenzo; Poggio, Tomaso
Understanding invariance and discrimination properties of hierarchical models is arguably the key to understanding how and why such models, of which the mammalian visual system is one instance, can lead to good generalization properties and reduce the sample complexity of a given learning task. In this paper we explore invariance to transformation and the role of layer-wise embeddings within an abstract framework of hierarchical kernels motivated by the visual cortex. Here a novel form of invariance is induced by propagating the effect of locally defined, invariant kernels throughout a hierarchy. We study this notion of invariance empirically. We then present an extension of the abstract hierarchical modeling framework to incorporate layer-wise embeddings, which we demonstrate can lead to improved generalization and scalable algorithms. Finally we analyze experimentally sample complexity properties as a function of architectural parameters.
</description>
<pubDate>Fri, 30 Jul 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57464</guid>
<dc:date>2010-07-30T00:00:00Z</dc:date>
</item>
<item>
<title>Examining high level neural representations of cluttered scenes</title>
<link>https://hdl.handle.net/1721.1/57463</link>
<description>Examining high level neural representations of cluttered scenes
Meyers, Ethan; Embark, Hamdy; Freiwald, Winrich; Serre, Thomas; Kreiman, Gabriel; Poggio, Tomaso
Humans and other primates can rapidly categorize objects even when they are embedded in complex visual scenes (Thorpe et al., 1996; Fabre-Thorpe et al., 1998). Studies by Serre et al., 2007 have shown that the ability of humans to detect animals in brief presentations of natural images decreases as the size of the target animal decreases and the amount of clutter increases, and additionally, that a feedforward computational model of the ventral visual system, originally developed to account for physiological properties of neurons, shows a similar pattern of performance. Motivated by these studies, we recorded single- and multi-unit neural spiking activity from macaque superior temporal sulcus (STS) and anterior inferior temporal cortex (AIT), as a monkey passively viewed images of natural scenes. The stimuli consisted of 600 images of animals in natural scenes, and 600 images of natural scenes without animals in them, captured at four different viewing distances, and were the same images used by Serre et al. to allow for a direct comparison between human psychophysics, computational models, and neural data. To analyze the data, we applied population "readout" techniques (Hung et al., 2005; Meyers et al., 2008) to decode from the neural activity whether an image contained an animal or not. The decoding results showed a similar pattern of degraded decoding performance with increasing clutter as was seen in the human psychophysics and computational model results. However, overall the decoding accuracies from the neural data were lower than those seen in the computational model, and the latencies of information in IT were long (~125ms) relative to behavioral measures obtained from primates in other studies.
Additional tests also showed that the responses of the model units were not capturing several properties of the neural responses, and that detecting animals in cluttered scenes using simple model units based on V1 cells worked almost as well as using more complex model units that were designed to model the responses of IT neurons. While these results suggest AIT might not be the primary brain region involved in this form of rapid categorization, additional studies are needed before drawing strong conclusions.
</description>
<pubDate>Thu, 29 Jul 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57463</guid>
<dc:date>2010-07-29T00:00:00Z</dc:date>
</item>
<item>
<title>Characteristics of Small Social Networks</title>
<link>https://hdl.handle.net/1721.1/57462</link>
<description>Characteristics of Small Social Networks
Richards, Whitman; Macindoe, Owen
Two dozen networks are analyzed using three parameters that attempt to capture important properties of social networks: leadership L, member bonding B, and diversity of expertise D. The first two of these parameters have antecedents, the third is new. A key part of the analysis is to examine networks at multiple scales by dissecting the entire network into its n subgraphs of a given radius of two edge steps about each of the n nodes. This scale-based analysis reveals constraints on what we have dubbed "cognitive" networks, as contrasted with biological or physical networks. Specifically, "cognitive" networks appear to maximize bonding and diversity over a range of leadership dominance. Asymptotic relations between the bonding and diversity measures are also found when small, nearly complete subgraphs are aggregated to form larger networks. This aggregation probably underlies changes in a regularity among the LBD parameters; this regularity is a U-shaped function of network size, n, which is minimal for networks around 80 or so nodes.
</description>
<pubDate>Tue, 27 Jul 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57462</guid>
<dc:date>2010-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Language and Compiler Support for Auto-Tuning Variable-Accuracy Algorithms</title>
<link>https://hdl.handle.net/1721.1/57461</link>
<description>Language and Compiler Support for Auto-Tuning Variable-Accuracy Algorithms
Ansel, Jason; Wong, Yee Lok; Chan, Cy; Olszewski, Marek; Edelman, Alan; Amarasinghe, Saman
Approximating ideal program outputs is a common technique for solving computationally difficult problems, for adhering to processing or timing constraints, and for performance optimization in situations where perfect precision is not necessary. To this end, programmers often use approximation algorithms, iterative methods, data resampling, and other heuristics. However, programming such variable accuracy algorithms presents difficult challenges since the optimal algorithms and parameters may change with different accuracy requirements and usage environments. This problem is further compounded when multiple variable accuracy algorithms are nested together due to the complex way that accuracy requirements can propagate across algorithms and because of the resulting size of the set of allowable compositions. As a result, programmers often deal with this issue in an ad-hoc manner that can sometimes violate sound programming practices such as maintaining library abstractions. In this paper, we propose language extensions that expose trade-offs between time and accuracy to the compiler. The compiler performs fully automatic compile-time and install-time autotuning and analyses in order to construct optimized algorithms to achieve any given target accuracy. We present novel compiler techniques and a structured genetic tuning algorithm to search the space of candidate algorithms and accuracies in the presence of recursion and sub-calls to other variable accuracy code. These techniques benefit both the library writer, by providing an easy way to describe and search the parameter and algorithmic choice space, and the library user, by allowing high level specification of accuracy requirements which are then met automatically without the need for the user to understand any algorithm-specific parameters. Additionally, we present a new suite of benchmarks, written in our language, to examine the efficacy of our techniques. 
Our experimental results show that by relaxing accuracy requirements, we can easily obtain performance improvements ranging from 1.1x to orders of magnitude.
</description>
<pubDate>Tue, 27 Jul 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/57461</guid>
<dc:date>2010-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>ChitChat: Making Video Chat Robust to Packet Loss</title>
<link>https://hdl.handle.net/1721.1/56252</link>
<description>ChitChat: Making Video Chat Robust to Packet Loss
Wang, Jue; Katabi, Dina
Video chat is increasingly popular among Internet users. Often, however, chatting sessions suffer from packet loss, which causes video outage and poor quality. Existing solutions however are unsatisfying. Retransmissions increase the delay and hence can interact negatively with the strict timing requirements of interactive video. FEC codes introduce extra overhead and hence reduce the bandwidth available for video data even in the absence of packet loss. This paper presents ChitChat, a new approach for reliable video chat that neither delays frames nor introduces bandwidth overhead. The key idea is to ensure that the information in each packet describes the whole frame. As a result, even when some packets are lost, the receiver can still use the received packets to decode a smooth version of the original frame. This reduces frame loss and the resulting video freezes and improves the perceived video quality. We have implemented ChitChat and evaluated it over multiple Internet paths. In comparison to Windows Live Messenger 2009, our method reduces the occurrences of video outage events by more than an order of magnitude.
</description>
<pubDate>Mon, 05 Jul 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/56252</guid>
<dc:date>2010-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>EM2: A Scalable Shared-Memory Multicore Architecture</title>
<link>https://hdl.handle.net/1721.1/55944</link>
<description>EM2: A Scalable Shared-Memory Multicore Architecture
Khan, Omer; Lis, Mieszko; Devadas, Srini
We introduce the Execution Migration Machine (EM2), a novel, scalable shared-memory architecture for large-scale multicores constrained by off-chip memory bandwidth. EM2 reduces cache miss rates, and consequently off-chip memory usage, by permitting only one copy of data to be stored anywhere in the system: when a thread wishes to access an address not locally cached on the core it is executing on, it migrates to the appropriate core and continues execution. Using detailed simulations of a range of 256-core configurations on the SPLASH-2 benchmark suite, we show that EM2 improves application completion times by 18% on average while remaining competitive with traditional architectures in silicon area.
</description>
<pubDate>Sat, 12 Jun 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/55944</guid>
<dc:date>2010-06-12T00:00:00Z</dc:date>
</item>
<item>
<title>Broadcasting in Unreliable Radio Networks</title>
<link>https://hdl.handle.net/1721.1/55721</link>
<description>Broadcasting in Unreliable Radio Networks
Oshman, Rotem; Richa, Andrea; Newport, Calvin; Lynch, Nancy; Kuhn, Fabian
Practitioners agree that unreliable links, which fluctuate between working and not working, are an important characteristic of wireless networks. In contrast, most theoretical models of radio networks fix a static set of links and assume that these links work reliably throughout an execution. This gap between theory and practice motivates us to investigate how unreliable links affect theoretical bounds on broadcast in radio networks. To that end we consider a model that includes two types of links: reliable links, which always deliver messages, and unreliable links, which sometimes deliver messages and sometimes do not. It is assumed that the graph induced by the reliable links is connected, and unreliable links are controlled by a worst-case adversary. In the new model we show an Ω(n log n) lower bound on deterministic broadcast in undirected graphs, even when all processes are initially awake and have collision detection, and an Ω(n) lower bound on randomized broadcast in undirected networks of constant diameter. This clearly separates the new model from the classical, reliable model. On the positive side, we give two algorithms that tolerate the inherent unreliability: an O(n^(3/2) √(log n))-time deterministic algorithm and a randomized algorithm which terminates in O(n log^2 n) rounds with high probability.
</description>
<pubDate>Tue, 08 Jun 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/55721</guid>
<dc:date>2010-06-08T00:00:00Z</dc:date>
</item>
<item>
<title>iJam: Jamming Oneself for Secure Wireless Communication</title>
<link>https://hdl.handle.net/1721.1/55650</link>
<description>iJam: Jamming Oneself for Secure Wireless Communication
Katabi, Dina; Gollakota, Shyamnath
Wireless is inherently less secure than wired networks because of its broadcast nature. Attacks that simply snoop on the wireless medium successfully defeat the security of even 802.11 networks using the most recent security standards (WPA2-PSK). In this paper we ask the following question: Can we prevent this kind of eavesdropping from happening? If so, we can potentially defeat the entire class of attacks that rely on snooping. This paper presents iJam, a PHY-layer protocol for OFDM-based wireless systems. iJam ensures that an eavesdropper cannot successfully demodulate a wireless signal not intended for it. To achieve this, iJam strategically introduces interference that prevents an eavesdropper from decoding the data, while allowing the intended receiver to decode it. iJam exploits the properties of 802.11's OFDM signals to ensure that an eavesdropper cannot even tell which parts of the signal are jammed. We implement iJam and evaluate it in a testbed of GNURadios with an 802.11-like physical layer. We show that iJam makes the data bits at the adversary look random, i.e., the BER becomes close to 50%, whereas the receiver can perfectly decode the data.
</description>
<pubDate>Mon, 07 Jun 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/55650</guid>
<dc:date>2010-06-07T00:00:00Z</dc:date>
</item>
<item>
<title>Power-Aware Computing with Dynamic Knobs</title>
<link>https://hdl.handle.net/1721.1/54799</link>
<description>Power-Aware Computing with Dynamic Knobs
Misailovic, Sasa; Agarwal, Anant; Carbin, Michael; Sidiroglou, Stelios; Hoffmann, Henry; Rinard, Martin
We present PowerDial, a system for dynamically adapting application behavior to execute successfully in the face of load and power fluctuations. PowerDial transforms static configuration parameters into dynamic knobs that the PowerDial control system can manipulate to dynamically trade off the accuracy of the computation in return for reductions in the computational resources that the application requires to produce its results. These reductions translate into power savings. Our experimental results show that PowerDial can enable our benchmark applications to execute responsively in the face of power caps (imposed, for example, in response to cooling system failures) that would otherwise significantly impair the delivered performance. They also show that PowerDial can reduce the number of machines required to meet peak load, in our experiments enabling up to a 75% reduction in direct power and capital costs.
</description>
<pubDate>Fri, 14 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54799</guid>
<dc:date>2010-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>SIFT Flow: Dense Correspondence across Scenes and its Applications</title>
<link>https://hdl.handle.net/1721.1/54787</link>
<description>SIFT Flow: Dense Correspondence across Scenes and its Applications
Freeman, William T.; Torralba, Antonio; Yuen, Jenny; Liu, Ce
While image alignment has been studied in different areas of computer vision for decades, aligning images depicting different scenes remains a challenging problem. Analogous to optical flow where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its nearest neighbors in a large image corpus containing a variety of scenes. The SIFT flow algorithm consists of matching densely sampled, pixel-wise SIFT features between two images, while preserving spatial discontinuities. The SIFT features allow robust matching across different scene/object appearances, whereas the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach robustly aligns complex scene pairs containing significant spatial differences. Based on SIFT flow, we propose an alignment-based large database framework for image analysis and synthesis, where image information is transferred from the nearest neighbors to a query image according to the dense scene correspondence. This framework is demonstrated through concrete applications, such as motion field prediction from a single image, motion synthesis via object transfer, satellite image registration and face recognition.
</description>
<pubDate>Sat, 08 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54787</guid>
<dc:date>2010-05-08T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Task and Motion Planning in the Now</title>
<link>https://hdl.handle.net/1721.1/54780</link>
<description>Hierarchical Task and Motion Planning in the Now
Kaelbling, Leslie Pack; Lozano-Perez, Tomas
In this paper we outline an approach to the integration of task planning and motion planning that has the following key properties: It is aggressively hierarchical. It makes choices and commits to them in a top-down fashion in an attempt to limit the length of plans that need to be constructed, and thereby exponentially decrease the amount of search required. Importantly, our approach also limits the need to project the effect of actions into the far future. It operates on detailed, continuous geometric representations and partial symbolic descriptions. It does not require a complete symbolic representation of the input geometry or of the geometric effect of the task-level operations.
Workshop on Mobile Manipulation, IEEE International Conference on Robotics and Automation
</description>
<pubDate>Fri, 07 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54780</guid>
<dc:date>2010-05-07T00:00:00Z</dc:date>
</item>
<item>
<title>UCM/MIT Indications, Referring Expressions, and Coreference Corpus (UMIREC corpus)</title>
<link>https://hdl.handle.net/1721.1/54766</link>
<description>UCM/MIT Indications, Referring Expressions, and Coreference Corpus (UMIREC corpus)
Hervas, Raquel; Finlayson, Mark Alan
This version of the UMIREC corpus has been superseded by version 1.1, found at http://hdl.handle.net/1721.1/57507.  Please do not use version 1.0, as it contains corrupted coreference information.  The correct, uncorrupted data is found in version 1.1.
</description>
<pubDate>Wed, 12 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54766</guid>
<dc:date>2010-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Annotation Guide for the UCM/MIT Indications, Referential Expressions, and Coreference Corpus (UMIREC Corpus)</title>
<link>https://hdl.handle.net/1721.1/54765</link>
<description>Annotation Guide for the UCM/MIT Indications, Referential Expressions, and Coreference Corpus (UMIREC Corpus)
Hervas, Raquel; Finlayson, Mark Alan
This is the annotation guide given to the annotators who created the UCM/MIT Indications, Referring Expressions, and Coreference (UMIREC) Corpus version 1.0. The corpus comprises texts annotated for referring expressions, coreference relations between the referring expressions, and so-called "indication structures", which split referring expressions into constituents (nuclei and modifiers) and mark each constituent as either 'distinctive' or 'descriptive', which indicate whether or not the constituent contains information required for uniquely identifying the referent. The contents of this corpus, the annotation procedure, and the indication structures are described in more detail in a paper titled "The Prevalence of Descriptive Referring Expressions in News and Narrative" published in the proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, held in July 2010 in Uppsala, Sweden (ACL-2010).
</description>
<pubDate>Wed, 12 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54765</guid>
<dc:date>2010-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>A User Study Comparing 3D Modeling with Silhouettes and Google SketchUp</title>
<link>https://hdl.handle.net/1721.1/54731</link>
<description>A User Study Comparing 3D Modeling with Silhouettes and Google SketchUp
Igarashi, Takeo; Durand, Fredo; Rivers, Alec
We describe a user study comparing 3D Modeling with Silhouettes and Google SketchUp. In the user study, ten users were asked to create 3D models of three different objects, using either 3D Modeling with Silhouettes or Google SketchUp. Ten different users were then asked to rank images of the models produced by the first group. We show that the models made with 3D Modeling with Silhouettes were ranked significantly higher on average than those made with Google SketchUp.
</description>
<pubDate>Wed, 05 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54731</guid>
<dc:date>2010-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Error Finding in Access-Control Policies</title>
<link>https://hdl.handle.net/1721.1/54730</link>
<description>Automatic Error Finding in Access-Control Policies
Jayaraman, Karthick; Rinard, Martin C.; Tripunitara, Mahesh; Ganesh, Vijay; Chapin, Steve
Access-control policies are a key infrastructural technology for computer security. However, a significant problem is that system administrators need to be able to automatically verify whether their policies capture the intended security goals. To address this important problem, researchers have proposed many automated verification techniques. Despite considerable progress in verification techniques, scalability is still a significant issue. Hence, in this paper we propose that error finding complements verification, and is a fruitful way of checking whether or not access control policies implement the security intent of system administrators. Error finding is more scalable (at the cost of completeness), and allows for the use of a wider variety of techniques. In this paper, we describe an abstraction-refinement based technique and its implementation, the Mohawk tool, aimed at finding errors in ARBAC access-control policies. The key insight behind our abstraction-refinement technique is that it is more efficient to look for errors in an abstract policy (with successive refinements, if necessary) than its complete counterpart. Mohawk accepts as input an access-control policy and a safety question. If Mohawk finds an error in the input policy, it terminates with a sequence of actions that cause the error. We provide an extensive comparison of Mohawk with the current state-of-the-art analysis tools. We show that Mohawk scales very well as the size and complexity of the input policies increase, and is orders of magnitude faster than competing tools. The Mohawk tool is open source and available from the Google Code website: http://code.google.com/p/mohawk/
</description>
<pubDate>Wed, 05 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54730</guid>
<dc:date>2010-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>The Bayes Tree: Enabling Incremental Reordering and Fluid Relinearization for Online Mapping</title>
<link>https://hdl.handle.net/1721.1/54717</link>
<description>The Bayes Tree: Enabling Incremental Reordering and Fluid Relinearization for Online Mapping
Kaess, Michael; Dellaert, Frank; Roberts, Richard; Ila, Viorela
In this paper we present a novel data structure, the Bayes tree, which exploits the connections between graphical model inference and sparse linear algebra. The proposed data structure provides a new perspective on an entire class of simultaneous localization and mapping (SLAM) algorithms. Similar to a junction tree, a Bayes tree encodes a factored probability density, but unlike the junction tree it is directed and maps more naturally to the square root information matrix of the SLAM problem. This makes it eminently suited to encode the sparse nature of the problem, especially in a smoothing and mapping (SAM) context. The inherent sparsity of SAM has already been exploited in the literature to produce efficient solutions in both batch and online mapping. The graphical model perspective allows us to develop a novel incremental algorithm that seamlessly incorporates reordering and relinearization. This obviates the need for expensive periodic batch operations from previous approaches, which negatively affect the performance and detract from the intended online nature of the algorithm. The new method is evaluated using simulated and real-world datasets in both landmark and pose SLAM settings.
</description>
<pubDate>Fri, 29 Jan 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54717</guid>
<dc:date>2010-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing MapReduce for Multicore Architectures</title>
<link>https://hdl.handle.net/1721.1/54692</link>
<description>Optimizing MapReduce for Multicore Architectures
Kaashoek, Frans; Morris, Robert; Mao, Yandong
MapReduce is a programming model for data-parallel programs originally intended for data centers. MapReduce simplifies parallel programming, hiding synchronization and task management. These properties make it a promising programming model for future processors with many cores, and existing MapReduce libraries such as Phoenix have demonstrated that applications written with MapReduce perform competitively with those written with Pthreads. This paper explores the design of the MapReduce data structures for grouping intermediate key/value pairs, which is often a performance bottleneck on multicore processors. The paper finds the best choice depends on workload characteristics, such as the number of keys used by the application, the degree of repetition of keys, etc. This paper also introduces a new MapReduce library, Metis, with a compromise data structure designed to perform well for most workloads. Experiments with the Phoenix benchmarks on a 16-core AMD-based server show that Metis's data structure performs better than simpler alternatives, including Phoenix.
</description>
<pubDate>Sun, 02 May 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/54692</guid>
<dc:date>2010-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Instruction-Level Execution Migration</title>
<link>https://hdl.handle.net/1721.1/53748</link>
<description>Instruction-Level Execution Migration
Devadas, Srinivas; Lis, Mieszko; Khan, Omer
We introduce the Execution Migration Machine (EM²), a novel data-centric multicore memory system architecture based on computation migration. Unlike traditional distributed memory multicores, which rely on complex cache coherence protocols to move the data to the core where the computation is taking place, our scheme always moves the computation to the core where the data resides. By doing away with the cache coherence protocol, we are able to boost the effectiveness of per-core caches while drastically reducing hardware complexity. To evaluate the potential of EM² architectures, we developed a series of PIN/Graphite-based models of an EM² multicore with 64 x86 cores and, under some simplifying assumptions (a timing model restricted to data memory performance, no instruction cache modeling, high-bandwidth fixed-latency interconnect allowing concurrent migrations), compared them against corresponding directory-based cache-coherent architecture models. We justify our assumptions and show that our conclusions are valid even if our assumptions are removed. Experimental results on a range of SPLASH-2 and PARSEC benchmarks indicate that EM2 can significantly improve per-core cache performance, decreasing overall miss rates by as much as 84% and reducing average memory latency by up to 58%.
</description>
<pubDate>Sat, 17 Apr 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/53748</guid>
<dc:date>2010-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>Kongming: A Generative Planner for Hybrid Systems with Temporally Extended Goals</title>
<link>https://hdl.handle.net/1721.1/53720</link>
<description>Kongming: A Generative Planner for Hybrid Systems with Temporally Extended Goals
Li, Hui X.
Most unmanned missions in space and undersea are commanded by a "script" that specifies a sequence of discrete commands and continuous actions. Currently such scripts are mostly hand-generated by human operators. This introduces inefficiency, puts a significant cognitive burden on the engineers, and prevents re-planning in response to environment disturbances or plan execution failure. For discrete systems, the field of autonomy has elevated the level of commanding by developing goal-directed systems, to which human operators specify a series of temporally extended goals to be accomplished, and the goal-directed systems automatically output the correct, executable command sequences. Increasingly, the control of autonomous systems involves performing actions with a mix of discrete and continuous effects. For example, a typical autonomous underwater vehicle (AUV) mission involves discrete actions, like get GPS and take sample, and continuous actions, like descend and ascend, which are influenced by the dynamical model of the vehicle. A hybrid planner generates a sequence of discrete and continuous actions that achieve the mission goals. In this thesis, I present a novel approach to solve the generative planning problem for temporally extended goals for hybrid systems, involving both continuous and discrete actions. The planner, Kongming, incorporates two innovations. First, it employs a compact representation of all hybrid plans, called a Hybrid Flow Graph, which combines the strengths of a Planning Graph for discrete actions and Flow Tubes for continuous actions. Second, it engages novel reformulation schemes to handle temporally flexible actions and temporally extended goals. I have successfully demonstrated controlling an AUV in the Atlantic Ocean using mission scripts solely generated by Kongming.
I have also empirically evaluated Kongming on various real-world scenarios in the underwater domain and the air vehicle domain, and found it successfully and efficiently generates valid and optimal plans.
PhD thesis
</description>
<pubDate>Fri, 09 Apr 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/53720</guid>
<dc:date>2010-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Generalized Conflict Learning For Hybrid Discrete Linear Optimization</title>
<link>https://hdl.handle.net/1721.1/53718</link>
<description>Generalized Conflict Learning For Hybrid Discrete Linear Optimization
Li, Hui X.
Conflict-directed search algorithms have formed the core of practical, model-based reasoning systems for the last three decades. In many of these applications there is a series of discrete constraint optimization problems and a conflict-directed search algorithm, which uses conflicts in the forward search step to focus search away from known infeasibilities and towards the optimal solution. In the arena of model-based autonomy, discrete systems, like deep space probes, have given way to more agile systems, such as coordinated vehicle control, which must robustly control their continuous dynamics. Controlling these systems requires optimizing over continuous, as well as discrete variables, using linear and non-linear as well as logical constraints. This thesis explores the development of algorithms for solving hybrid discrete/linear optimization problems that use conflicts in the forward search direction, generalizing from the conflict-directed search algorithms of model-based reasoning. We introduce a novel algorithm called Generalized Conflict-directed Branch and Bound (GCD-BB). GCD-BB extends traditional Branch and Bound (B&amp;B), by first constructing conflicts from nodes of the search tree that are found to be infeasible or sub-optimal, and then by using these conflicts to guide the forward search away from known infeasible and sub-optimal states. We evaluate GCD-BB empirically on a range of test problems of coordinated air vehicle control. GCD-BB demonstrates a substantial improvement in performance compared to a traditional B&amp;B algorithm, applied to either disjunctive linear programs or an equivalent binary integer program encoding.
SM thesis
</description>
<pubDate>Fri, 20 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/53718</guid>
<dc:date>2005-05-20T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Re-Photography</title>
<link>https://hdl.handle.net/1721.1/53705</link>
<description>Computational Re-Photography
Agarwala, Aseem; Bae, Soonmin; Durand, Fredo
Rephotographers aim to recapture an existing photograph from the same viewpoint. A historical photograph paired with a well-aligned modern rephotograph can serve as a remarkable visualization of the passage of time. However, the task of rephotography is tedious and often imprecise, because reproducing the viewpoint of the original photograph is challenging. The rephotographer must disambiguate between the six degrees of freedom of 3D translation and rotation, and the confounding similarity between the effects of camera zoom and dolly. We present a real-time estimation and visualization technique for rephotography that helps users reach a desired viewpoint during capture. The input to our technique is a reference image taken from the desired viewpoint. The user moves through the scene with a camera and follows our visualization to reach the desired viewpoint. We employ computer vision techniques to compute the relative viewpoint difference. We guide 3D movement using two 2D arrows. We demonstrate the success of our technique by rephotographing historical images and conducting user studies.
</description>
<pubDate>Wed, 07 Apr 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/53705</guid>
<dc:date>2010-04-07T00:00:00Z</dc:date>
</item>
<item>
<title>Decoupled Sampling for Real-Time Graphics Pipelines</title>
<link>https://hdl.handle.net/1721.1/53330</link>
<description>Decoupled Sampling for Real-Time Graphics Pipelines
Ragan-Kelley, Jonathan; Doggett, Michael; Lehtinen, Jaakko; Chen, Jiawen; Durand, Fredo
We propose decoupled sampling, an approach that decouples shading from visibility sampling in order to enable motion blur and depth-of-field at reduced cost. More generally, it enables extensions of modern real-time graphics pipelines that provide controllable shading rates to trade off quality for performance. It can be thought of as a generalization of GPU-style multisample antialiasing (MSAA) to support unpredictable shading rates, with arbitrary mappings from visibility to shading samples as introduced by motion blur, depth-of-field, and adaptive shading. It is inspired by the Reyes architecture in offline rendering, but targets real-time pipelines by driving shading from visibility samples as in GPUs, and removes the need for micropolygon dicing or rasterization. Decoupled Sampling works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. We present extensions of two modern GPU pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion blur and depth-of-field, as well as variable and adaptive shading rates.
</description>
<pubDate>Mon, 29 Mar 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/53330</guid>
<dc:date>2010-03-29T00:00:00Z</dc:date>
</item>
<item>
<title>Relational Cloud: The Case for a Database Service</title>
<link>https://hdl.handle.net/1721.1/52606</link>
<description>Relational Cloud: The Case for a Database Service
Wu, Eugene; Madden, Samuel; Zhang, Yang; Jones, Evan; Curino, Carlo
In this paper, we make the case for “databases as a service” (DaaS), with two target scenarios in mind: (i) consolidation of data management functionality for large organizations and (ii) outsourcing data management to a cloud-based service provider for small/medium organizations. We analyze the many challenges to be faced, and discuss the design of a database service we are building, called Relational Cloud. The system has been designed from scratch and combines many recent advances and novel solutions. The prototype we present exploits multiple dedicated storage engines, provides high-availability via transparent replication, supports automatic workload partitioning and live data migration, and provides serializable distributed transactions. While the system is still under active development, we are able to present promising initial results that showcase the key features of our system. The tests are based on TPC benchmarks and real-world data from epinions.com, and show our partitioning, scalability and balancing capabilities.
</description>
<pubDate>Sun, 14 Mar 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/52606</guid>
<dc:date>2010-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>CNS: a GPU-based framework for simulating cortically-organized networks</title>
<link>https://hdl.handle.net/1721.1/51839</link>
<description>CNS: a GPU-based framework for simulating cortically-organized networks
Poggio, Tomaso; Knoblich, Ulf; Mutch, Jim
Computational models whose organization is inspired by the cortex are increasing in both number and popularity. Current instances of such models include convolutional networks, HMAX, Hierarchical Temporal Memory, and deep belief networks. These models present two practical challenges. First, they are computationally intensive. Second, while the operations performed by individual cells, or units, are typically simple, the code needed to keep track of network connectivity can quickly become complicated, leading to programs that are difficult to write and to modify. Massively parallel commodity computing hardware has recently become available in the form of general-purpose GPUs. This helps address the first problem but exacerbates the second. GPU programming adds an extra layer of difficulty, further discouraging exploration. To address these concerns, we have created a programming framework called CNS ('Cortical Network Simulator'). CNS models are automatically compiled and run on a GPU, typically 80-100x faster than on a single CPU, without the user having to learn any GPU programming. A novel scheme for the parametric specification of network connectivity allows the user to focus on writing just the code executed by a single cell. We hope that the ability to rapidly define and run cortically-inspired models will facilitate research in the cortical modeling community. CNS is available under the GNU General Public License.
</description>
<pubDate>Fri, 26 Feb 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51839</guid>
<dc:date>2010-02-26T00:00:00Z</dc:date>
</item>
<item>
<title>Performance and error analysis of three part of speech taggers on health texts</title>
<link>https://hdl.handle.net/1721.1/51833</link>
<description>Performance and error analysis of three part of speech taggers on health texts
Zeng, Qing; Curtis, Dorothy
Increasingly, natural language processing (NLP) techniques are being developed and utilized in a variety of biomedical domains. Part of speech tagging is a critical step in many NLP applications. Currently, we are developing an NLP tool for text simplification. As part of this effort, we set off to evaluate several part of speech (POS) taggers. We selected 120 sentences (2375 tokens) from a corpus of six types of diabetes-related health texts and asked human reviewers to tag each word in these sentences to create a "Gold Standard." We then tested each of the three POS taggers against the "Gold Standard." One tagger (dTagger) had been trained on health texts and the other two (MaxEnt and Curran &amp; Clark) were trained on general news articles. We analyzed the errors and placed them into five categories: systematic, close, subtle, difficult source, and other. The three taggers have relatively similar rates of success: dTagger, MaxEnt, and Curran &amp; Clark had 87%, 89% and 90% agreement with the gold standard, respectively. These rates of success are lower than published rates for these taggers. This is probably due to our testing them on a corpus that differs significantly from their training corpora. The taggers made different errors: the dTagger, which had been trained on a set of medical texts (MedPost), made fewer errors on medical terms than MaxEnt and Curran &amp; Clark. The latter two taggers performed better on non-medical terms and we found the difference between their performance and that of dTagger was statistically significant. Our findings suggest that the three POS taggers have similar correct tagging rates, though they differ in the types of errors they make. For the task of text simplification, we are inclined to perform additional training of the Curran &amp; Clark tagger with the MedPost corpus because both the fine grained tagging provided by this tool and the correct recognition of medical terms are equally important.
</description>
<pubDate>Thu, 25 Feb 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51833</guid>
<dc:date>2010-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Cache Coherence on Manycore Optical Networks</title>
<link>https://hdl.handle.net/1721.1/51734</link>
<description>Efficient Cache Coherence on Manycore Optical Networks
Psota, James; Agarwal, Anant; Miller, Jason; Beckmann, Nathan; Kurian, George
Ever since industry turned to parallelism instead of frequency scaling to improve processor performance, multicore processors have continued to scale to larger and larger numbers of cores. Some believe that multicores will have 1000 cores or more by the middle of the next decade. However, their promise of increased performance will only be reached if their inherent scaling challenges are overcome. One such major scaling challenge is the viability of efficient cache coherence with large numbers of cores. Meanwhile, recent advances in nanophotonic device manufacturing are making CMOS-integrated optics a reality: an interconnect technology which can provide significantly more bandwidth at lower power than conventional electrical analogs. The contributions of this paper are two-fold. (1) It presents ATAC, a new manycore architecture that augments an electrical mesh network with an optical network that performs highly efficient broadcasts. (2) It introduces ACKwise, a novel directory-based cache coherence protocol that provides high performance and scalability on any large-scale manycore interconnection network with broadcast capability. Performance evaluation studies using analytical models show that (i) a 1024-core ATAC chip using ACKwise achieves a speedup of 3.9x compared to a similarly-sized pure electrical mesh manycore with a conventional limited directory protocol; (ii) the ATAC chip with ACKwise achieves a speedup of 1.35x compared to the electrical mesh chip with ACKwise; and (iii) a pure electrical mesh chip with ACKwise achieves a speedup of 2.9x over the same chip using a conventional limited directory protocol.
</description>
<pubDate>Thu, 11 Feb 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51734</guid>
<dc:date>2010-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>Core Count vs Cache Size for Manycore Architectures in the Cloud</title>
<link>https://hdl.handle.net/1721.1/51733</link>
<description>Core Count vs Cache Size for Manycore Architectures in the Cloud
Agarwal, Anant; Miller, Jason; Beckmann, Nathan; Wentzlaff, David
The number of cores which fit on a single chip is growing at an exponential rate while off-chip main memory bandwidth is growing at a linear rate at best. This core count to off-chip bandwidth disparity causes per-core memory bandwidth to decrease as process technology advances. Continuing per-core off-chip bandwidth reduction will cause multicore and manycore chip architects to rethink the optimal grain size of a core and the on-chip cache configuration in order to save main memory bandwidth. This work introduces an analytic model to study the tradeoffs of utilizing increased chip area for larger caches versus more cores. We focus this study on constructing manycore architectures well suited for the emerging application space of cloud computing where many independent applications are consolidated onto a single chip. This cloud computing application mix favors small, power-efficient cores. The model is exhaustively evaluated across a large range of cache and core-count configurations utilizing SPEC Int 2000 miss rates and CACTI timing and area models to determine the optimal cache configurations and the number of cores across four process nodes. The model maximizes aggregate computational throughput and is applied to SRAM and logic process DRAM caches. As an example, our study demonstrates that the optimal manycore configuration in the 32nm node for a 200 mm^2 die uses on the order of 158 cores, with each core containing a 64KB L1I cache, a 16KB L1D cache, and a 1MB L2 embedded-DRAM cache. This study finds that the optimal cache size will continue to grow as process technology advances, but the choice between more cores and larger caches remains complex in the face of limited off-chip bandwidth and the non-linearities of cache miss rates and memory controller queuing delay.
</description>
<pubDate>Thu, 11 Feb 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51733</guid>
<dc:date>2010-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Parallelization With Statistical Accuracy Bounds</title>
<link>https://hdl.handle.net/1721.1/51680</link>
<description>Automatic Parallelization With Statistical Accuracy Bounds
Kim, Deokhwan; Misailovic, Sasa; Rinard, Martin
Traditional parallelizing compilers are designed to generate parallel programs that produce outputs identical to those of the original sequential program. The difficulty of performing the program analysis required to satisfy this goal and the restricted space of possible target parallel programs have both posed significant obstacles to the development of effective parallelizing compilers. The QuickStep compiler is instead designed to generate parallel programs that satisfy statistical accuracy guarantees. The freedom to generate parallel programs whose output may differ (within statistical accuracy bounds) from the output of the sequential program enables a dramatic simplification of the compiler and a significant expansion in the range of parallel programs that it can legally generate. QuickStep exploits this flexibility to take a fundamentally different approach from traditional parallelizing compilers. It applies a collection of transformations (loop parallelization, loop scheduling, synchronization introduction, and replication introduction) to generate a search space of parallel versions of the original sequential program. It then searches this space (prioritizing the parallelization of the most time-consuming loops in the application) to find a final parallelization that exhibits good parallel performance and satisfies the statistical accuracy guarantee. At each step in the search it performs a sequence of trial runs on representative inputs to examine the performance, accuracy, and memory accessing characteristics of the current generated parallel program. An analysis of these characteristics guides the steps the compiler takes as it explores the search space of parallel programs. Results from our benchmark set of applications show that QuickStep can automatically generate parallel programs with good performance and statistically accurate outputs.
For two of the applications, the parallelization introduces noise into the output, but the noise remains within acceptable statistical bounds. The simplicity of the compilation strategy and the performance and statistical acceptability of the generated parallel programs demonstrate the advantages of the QuickStep approach.
</description>
<pubDate>Wed, 10 Feb 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51680</guid>
<dc:date>2010-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Cost of Global Broadcast Using Abstract MAC Layers</title>
<link>https://hdl.handle.net/1721.1/51667</link>
<description>The Cost of Global Broadcast Using Abstract MAC Layers
Lynch, Nancy; Kuhn, Fabian; Kowalski, Dariusz; Khabbazian, Majid
We analyze greedy algorithms for broadcasting messages throughout a multi-hop wireless network, using a slot-based model that includes message collisions without collision detection. Our algorithms are split formally into two pieces: a high-level piece for broadcast and a low-level piece for contention management. We accomplish the split using abstract versions of the MAC layer to encapsulate the contention management. We use two different abstract MAC layers: a basic non-probabilistic one, which our contention management algorithm implements with high probability, and a probabilistic one, which our contention management algorithm implements precisely. Using this approach, we obtain the following complexity bounds: Single-message broadcast, using the basic abstract MAC layer, takes time O(D log(n/epsilon) log(Delta)) to deliver the message everywhere with probability 1 - epsilon, where D is the network diameter, n is the number of nodes, and Delta is the maximum node degree. Single-message broadcast, using the probabilistic abstract MAC layer, takes time only O((D + log(n/epsilon)) log(Delta)). For multi-message broadcast, the bounds are O((D + k' Delta) log(n/epsilon) log(Delta)) using the basic layer and O((D + k' Delta log(n/epsilon)) log(Delta)) using the probabilistic layer, for the time to deliver a single message everywhere in the presence of at most k' concurrent messages.
</description>
<pubDate>Tue, 09 Feb 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51667</guid>
<dc:date>2010-02-09T00:00:00Z</dc:date>
</item>
<item>
<title>An Operating System for Multicore and Clouds: Mechanisms and Implementation</title>
<link>https://hdl.handle.net/1721.1/51381</link>
<description>An Operating System for Multicore and Clouds: Mechanisms and Implementation
Modzelewski, Kevin; Miller, Jason; Belay, Adam; Beckmann, Nathan; Gruenwald, Charles, III; Wentzlaff, David; Youseff, Lamia; Agarwal, Anant
Cloud computers and multicore processors are two emerging classes of computational hardware that have the potential to provide unprecedented compute capacity to the average user. In order for the user to effectively harness all of this computational power, operating systems (OSes) for these new hardware platforms are needed. Existing multicore operating systems do not scale to large numbers of cores, and do not support clouds. Consequently, current-day cloud systems push much complexity onto the user, requiring the user to manage individual Virtual Machines (VMs) and deal with many system-level concerns. In this work we describe the mechanisms and implementation of a factored operating system named fos. fos is a single system image operating system across both multicore and Infrastructure as a Service (IaaS) cloud systems. fos tackles OS scalability challenges by factoring the OS into its component system services. Each system service is further factored into a collection of Internet-inspired servers which communicate via messaging. Although designed in a manner similar to distributed Internet services, OS services instead provide traditional kernel services such as file systems, scheduling, memory management, and access to hardware. fos also implements new classes of OS services like fault tolerance and demand elasticity. In this work, we describe our working fos implementation, and provide early performance measurements of fos for both intra-machine and inter-machine operations.
</description>
<pubDate>Mon, 08 Feb 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51381</guid>
<dc:date>2010-02-08T00:00:00Z</dc:date>
</item>
<item>
<title>Submodular Secretary Problem and Extensions</title>
<link>https://hdl.handle.net/1721.1/51336</link>
<description>Submodular Secretary Problem and Extensions
Zadimoghaddam, Morteza; Hajiaghayi, MohammadTaghi; Bateni, MohammadHossein
Online auctions are the essence of many modern markets, particularly networked markets, in which information about goods, agents, and outcomes is revealed over a period of time, and the agents must make irrevocable decisions without knowing future information. Optimal stopping theory, especially the classic "secretary problem", is a powerful tool for analyzing such online scenarios, which generally require optimizing an objective function over the input. The secretary problem and its generalization, the "multiple-choice secretary problem", have been studied thoroughly in the literature. In this paper, we consider a very general setting of the latter problem called the "submodular secretary problem", in which the goal is to select k secretaries so as to maximize the expectation of a (not necessarily monotone) submodular function which defines efficiency of the selected secretarial group based on their overlapping skills. We present the first constant-competitive algorithm for this case. In a more general setting in which selected secretaries should form an independent (feasible) set in each of l given matroids as well, we obtain an O(l log^2 r)-competitive algorithm generalizing several previous results, where r is the maximum rank of the matroids. Another generalization is to consider l knapsack constraints instead of the matroid constraints, for which we present an O(l)-competitive algorithm. In sharp contrast, we show that for the more general "subadditive secretary problem", there is no o~(sqrt(n))-competitive algorithm and thus submodular functions are the most general functions to consider for constant competitiveness in our setting. We complement this result by giving a matching O(sqrt(n))-competitive algorithm for the subadditive case. Finally, we consider some special cases of our general setting as well.
</description>
<pubDate>Mon, 01 Feb 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51336</guid>
<dc:date>2010-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>SWIFT: A Narrowband-Friendly Cognitive Wideband Network</title>
<link>https://hdl.handle.net/1721.1/51335</link>
<description>SWIFT: A Narrowband-Friendly Cognitive Wideband Network
Sodini, Charles; Edalat, Farinaz; Katabi, Dina; Kushman, Nate; Rahul, Hariharan
Wideband technologies in the unlicensed spectrum can satisfy the ever-increasing demands for wireless bandwidth created by emerging rich media applications. The key challenge for such systems, however, is to allow narrowband technologies that share these bands (say, 802.11 a/b/g/n, Zigbee) to achieve their normal performance, without compromising the throughput or range of the wideband network. This paper presents SWIFT, the first system where high-throughput wideband nodes are shown in a working deployment to coexist with unknown narrowband devices, while forming a network of their own. Prior work avoids narrowband devices by operating below the noise level and limiting itself to a single contiguous unused band. While this achieves coexistence, it sacrifices the throughput and operating distance of the wideband device. In contrast, SWIFT creates high throughput wireless links by weaving together non-contiguous unused frequency bands that change as narrowband devices enter or leave the environment. This design principle of cognitive aggregation allows SWIFT to achieve coexistence, while operating at normal power, and thereby obtaining higher throughput and greater operating range. We implement SWIFT on a wideband hardware platform, and evaluate it in the presence of 802.11 devices. In comparison to a baseline that coexists with narrowband devices by operating below their noise level, SWIFT is equally narrowband-friendly but achieves 3.6x-10.5x higher throughput and 6x greater range.
</description>
<pubDate>Sun, 17 Aug 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/51335</guid>
<dc:date>2008-08-17T00:00:00Z</dc:date>
</item>
<item>
<title>Selective Vectorization for Short-Vector Instructions</title>
<link>https://hdl.handle.net/1721.1/50235</link>
<description>Selective Vectorization for Short-Vector Instructions
Amarasinghe, Saman; Rabbah, Rodric; Larsen, Samuel
Multimedia extensions are nearly ubiquitous in today's general-purpose processors. These extensions consist primarily of a set of short-vector instructions that apply the same opcode to a vector of operands. Vector instructions introduce a data-parallel component to processors that exploit instruction-level parallelism, and present an opportunity for increased performance. In fact, ignoring a processor's vector opcodes can leave a significant portion of the available resources unused. In order for software developers to find short-vector instructions generally useful, however, the compiler must target these extensions with complete transparency and consistent performance. This paper describes selective vectorization, a technique for balancing computation across a processor's scalar and vector units. Current approaches for targeting short-vector instructions directly adopt vectorizing technology first developed for supercomputers. Traditional vectorization, however, can lead to a performance degradation since it fails to account for a processor's scalar resources. We formulate selective vectorization in the context of software pipelining. Our approach creates software pipelines with shorter initiation intervals, and therefore, higher performance. A key aspect of selective vectorization is its ability to manage transfer of operands between vector and scalar instructions. Even when operand transfer is expensive, our technique is sufficiently sophisticated to achieve significant performance gains. We evaluate selective vectorization on a set of SPEC FP benchmarks. On a realistic VLIW processor model, the approach achieves whole-program speedups of up to 1.35x over existing approaches. For individual loops, it provides speedups of up to 1.75x.
</description>
<pubDate>Fri, 18 Dec 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/50235</guid>
<dc:date>2009-12-18T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing Computational Models of Narrative</title>
<link>https://hdl.handle.net/1721.1/50232</link>
<description>Advancing Computational Models of Narrative
Richards, Whitman; Winston, Patrick Henry; Finlayson, Mark Alan
Report of a Workshop held at the Wylie Center, Beverly, MA, Oct 8-10 2009
</description>
<pubDate>Thu, 17 Dec 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/50232</guid>
<dc:date>2009-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>The Video Mesh: A Data Structure for Image-based Video Editing</title>
<link>https://hdl.handle.net/1721.1/50231</link>
<description>The Video Mesh: A Data Structure for Image-based Video Editing
Durand, Fredo; Cohen, Michael; Chen, Jiawen; Paris, Sylvain; Wang, Jue; Matusik, Wojciech
This paper introduces the video mesh, a data structure for representing video as 2.5D "paper cutouts." The video mesh allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. The video mesh sparsely encodes optical flow as well as depth, and handles occlusion using local layering and alpha mattes. Motion is described by a sparse set of points tracked over time. Each point also stores a depth value. The video mesh is a triangulation over this point set and per-pixel information is obtained by interpolation. The user rotoscopes occluding contours and we introduce an algorithm to cut the video mesh along them. Object boundaries are refined with per-pixel alpha values. Because the video mesh is at its core a set of texture-mapped triangles, we leverage graphics hardware to enable interactive editing and rendering of a variety of effects. We demonstrate the effectiveness of our representation with a number of special effects including 3D viewpoint changes, object insertion, and depth-of-field manipulation.
</description>
<pubDate>Wed, 16 Dec 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/50231</guid>
<dc:date>2009-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>Perfect and General Virtual Implementation For Perfectly Informed Players</title>
<link>https://hdl.handle.net/1721.1/49869</link>
<description>Perfect and General Virtual Implementation For Perfectly Informed Players
Micali, Silvio; Chen, Jing
We show that, when the players are perfectly informed about each other, essentially all social-choice functions can be rationally robustly implemented via an extensive-form public-action mechanism that (1) is perfectly robust against collusion, (2) requires only a linear number of computation steps and communication bits, and (3) preserves the privacy of the players' types to a very high extent.
</description>
<pubDate>Fri, 04 Dec 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49869</guid>
<dc:date>2009-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Sufficient Conditions for Uniform Stability of Regularization Algorithms</title>
<link>https://hdl.handle.net/1721.1/49868</link>
<description>Sufficient Conditions for Uniform Stability of Regularization Algorithms
Poggio, Tomaso; Rosasco, Lorenzo; Wibisono, Andre
In this paper, we study the stability and generalization properties of penalized empirical-risk minimization algorithms. We propose a set of properties of the penalty term that is sufficient to ensure uniform β-stability: we show that if the penalty function satisfies a suitable convexity property, then the induced regularization algorithm is uniformly β-stable. In particular, our results imply that regularization algorithms with penalty functions which are strongly convex on bounded domains are β-stable. In view of the results in [3], uniform stability implies generalization, and moreover, consistency results can be easily obtained. We apply our results to show that ℓ_p regularization for 1 &lt; p &lt;= 2 and elastic-net regularization are uniformly β-stable, and therefore generalize.
</description>
<pubDate>Tue, 01 Dec 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49868</guid>
<dc:date>2009-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Operating System for Clouds and Manycore: fos</title>
<link>https://hdl.handle.net/1721.1/49844</link>
<description>A Unified Operating System for Clouds and Manycore: fos
Modzelewski, Kevin; Miller, Jason; Belay, Adam; Beckmann, Nathan; Gruenwald, Charles, III; Wentzlaff, David; Youseff, Lamia; Agarwal, Anant
Single chip processors with thousands of cores will be available in the next ten years and clouds of multicore processors afford the operating system designer thousands of cores today. Constructing operating systems for manycore and cloud systems faces similar challenges. This work identifies these shared challenges and introduces our solution: a factored operating system (fos) designed to meet the scalability, faultiness, variability of demand, and programming challenges of OSes for single-chip thousand-core manycore systems as well as current day cloud computers. Current monolithic operating systems are not well suited for manycores and clouds as they have taken an evolutionary approach to scaling such as adding fine-grained locks and redesigning subsystems; however, these approaches do not increase scalability quickly enough. fos addresses the OS scalability challenge by using a message passing design and is composed of a collection of Internet-inspired servers. Each operating system service is factored into a set of communicating servers which in aggregate implement a system service. These servers are designed much in the way that distributed Internet services are designed, but provide traditional kernel services instead of Internet services. Also, fos embraces the elasticity of cloud and manycore platforms by adapting resource utilization to match demand. fos facilitates writing applications across the cloud by providing a single system image across both future 1000+ core manycores and current day Infrastructure as a Service cloud computers. In contrast, current cloud environments do not provide a single system image and introduce complexity for the user by requiring different programming models for intra- vs inter-machine communication, and by requiring the use of non-OS standard management tools.
</description>
<pubDate>Fri, 20 Nov 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49844</guid>
<dc:date>2009-11-20T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Computation in Dynamic Networks</title>
<link>https://hdl.handle.net/1721.1/49814</link>
<description>Distributed Computation in Dynamic Networks
Oshman, Rotem; Lynch, Nancy; Kuhn, Fabian
In this report we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model is intended to capture mobile networks and wireless networks, in which mobility and interference render communication unpredictable. The model allows the study of the fundamental computation power of dynamic networks. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T-interval connectivity (for T &gt;= 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. Algorithms for the dynamic graph model must cope with these unceasing changes. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any computable function of their initial inputs in O(n^2) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T &gt; 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n^2 / T) rounds using messages of size O(log n + d). We also give two lower bounds on the gossip problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. 
We show an Omega(n log k) bound on gossip in 1-interval connected graphs against centralized algorithms, and an Omega(n + nk / T) bound on exchanging k pieces of information in T-interval connected graphs for a restricted class of randomized distributed algorithms. The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.
</description>
<pubDate>Tue, 10 Nov 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49814</guid>
<dc:date>2009-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Rational Robustness for Mechanism Design</title>
<link>https://hdl.handle.net/1721.1/49810</link>
<description>Rational Robustness for Mechanism Design
Micali, Silvio; Chen, Jing
Theory of Computation
The currently prevailing equilibrium-based approach to mechanism design suffers from a plurality of fundamental problems, and new conceptual frameworks are needed to solve or sufficiently alleviate them. In this paper, we put forward rational robustness, a new solution concept/implementation notion that is not equilibrium-based; prove its fundamental structural theorems; and compare it with prior notions. Our notion of implementation is specifically built so as to be robust against the problem of equilibrium selection. In separate papers, we prove it robust against other fundamental problems as well.
first draft
</description>
<pubDate>Tue, 10 Nov 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49810</guid>
<dc:date>2009-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Graphite: A Distributed Parallel Simulator for Multicores</title>
<link>https://hdl.handle.net/1721.1/49809</link>
<description>Graphite: A Distributed Parallel Simulator for Multicores
Beckmann, Nathan; Eastep, Jonathan; Gruenwald, Charles, III; Kurian, George; Kasture, Harshad; Miller, Jason E.; Celio, Christopher; Agarwal, Anant
This paper introduces the open-source Graphite distributed parallel multicore simulator infrastructure. Graphite is designed from the ground up for exploration of future multicore processors containing dozens, hundreds, or even thousands of cores. It provides high performance for fast design space exploration and software development for future processors. Several techniques are used to achieve this performance including: direct execution, multi-machine distribution, analytical modeling, and lax synchronization. Graphite is capable of accelerating simulations by leveraging several machines. It can distribute simulation of an off-the-shelf threaded application across a cluster of commodity Linux machines with no modification to the source code. It does this by providing a single, shared address space and consistent single-process image across machines. Graphite is designed to be a simulation framework, allowing different component models to be easily replaced to either model different architectures or trade off accuracy for performance. We evaluate Graphite from a number of perspectives and demonstrate that it can simulate target architectures containing over 1000 cores on ten 8-core servers. Performance scales well as more machines are added with near linear speedup in many cases. Simulation slowdown is as low as 41x versus native execution for some applications. The Graphite infrastructure and existing models will be released as open-source software to allow the community to simulate their own architectures and extend and improve the framework.
</description>
<pubDate>Mon, 09 Nov 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49809</guid>
<dc:date>2009-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>Smartlocks: Self-Aware Synchronization through Lock Acquisition Scheduling</title>
<link>https://hdl.handle.net/1721.1/49808</link>
<description>Smartlocks: Self-Aware Synchronization through Lock Acquisition Scheduling
Agarwal, Anant; Santambrogio, Marco D.; Wingate, David; Eastep, Jonathan
As multicore processors become increasingly prevalent, system complexity is skyrocketing. The advent of the asymmetric multicore compounds this -- it is no longer practical for an average programmer to balance the system constraints associated with today's multicores and worry about new problems like asymmetric partitioning and thread interference. Adaptive, or self-aware, computing has been proposed as one method to help application and system programmers confront this complexity. These systems take some of the burden off of programmers by monitoring themselves and optimizing or adapting to meet their goals. This paper introduces an open-source self-aware synchronization library for multicores and asymmetric multicores called Smartlocks. Smartlocks is a spin-lock library that adapts its internal implementation during execution using heuristics and machine learning to optimize toward a user-defined goal, which may relate to performance, power, or other problem-specific criteria. Smartlocks builds upon adaptation techniques from prior work like reactive locks, but introduces a novel form of adaptation designed for asymmetric multicores that we term lock acquisition scheduling. Lock acquisition scheduling chooses which waiter will get the lock next for the best long-term effect when multiple threads (or processes) are spinning for a lock. Our results demonstrate empirically that lock scheduling is important for asymmetric multicores and that Smartlocks significantly outperform conventional and reactive locks for asymmetries like dynamic variations in processor clock frequencies caused by thermal throttling events.
</description>
<pubDate>Mon, 09 Nov 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49808</guid>
<dc:date>2009-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>Automated home-cage behavioral phenotyping of mice</title>
<link>https://hdl.handle.net/1721.1/49527</link>
<description>Automated home-cage behavioral phenotyping of mice
Yu, Xinlin; Steele, Andrew D.; Khilnani, Vinita; Garrote, Estibaliz; Jhuang, Hueihan; Serre, Thomas; Poggio, Tomaso
We describe a trainable computer vision system enabling the automated analysis of complex mouse behaviors. We provide software and a very large manually annotated video database used for training and testing the system. Our system outperforms leading commercial software and performs on par with human scoring, as measured from the ground-truth manual annotations of thousands of clips of freely behaving animals. We show that the home-cage behavior profiles provided by the system are sufficient to accurately predict the strain identity of individual animals in the case of two standard inbred and two non-standard mouse strains. Our software should complement existing sensor-based automated approaches and help develop an adaptable, comprehensive, high-throughput, fine-grained, automated analysis of rodent behavior.
</description>
<pubDate>Mon, 26 Oct 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49527</guid>
<dc:date>2009-10-26T00:00:00Z</dc:date>
</item>
<item>
<title>Co-Clustering with Generative Models</title>
<link>https://hdl.handle.net/1721.1/49526</link>
<description>Co-Clustering with Generative Models
Golland, Polina; Lashkari, Danial
In this paper, we present a generative model for co-clustering and develop algorithms based on the mean field approximation for the corresponding modeling problem. These algorithms can be viewed as generalizations of the traditional model-based clustering; they extend hard co-clustering algorithms such as Bregman co-clustering to include soft assignments. We show empirically that these model-based algorithms offer better performance than their hard-assignment counterparts, especially with increasing problem complexity.
</description>
<pubDate>Tue, 03 Nov 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49526</guid>
<dc:date>2009-11-03T00:00:00Z</dc:date>
</item>
<item>
<title>Propagation Networks: A Flexible and Expressive Substrate for Computation</title>
<link>https://hdl.handle.net/1721.1/49525</link>
<description>Propagation Networks: A Flexible and Expressive Substrate for Computation
Radul, Alexey
I propose a shift in the foundations of computation. Practically all ideas of general-purpose computation today are founded either on execution of sequences of atomic instructions, i.e., assembly languages, or on evaluation of tree-structured expressions, i.e., most higher level programming languages. Both have served us well in the past, but it is increasingly clear that we need something more. I suggest that we can build general-purpose computation on propagation of information through networks of stateful cells interconnected with stateless autonomous asynchronous computing elements. Various forms of this general idea have been used with great success for various special purposes; perhaps the most immediate example is constraint propagation in constraint satisfaction systems. These special-purpose systems, however, are all complex and all different, and neither compose well, nor interoperate well, nor generalize well. A foundational layer is missing. The key insight in this work is that a cell should not be seen as storing a value, but as accumulating information about a value. The cells should never forget information -- such monotonicity prevents race conditions in the behavior of the network. Monotonicity of information need not be a severe restriction: for example, carrying reasons for believing each thing makes it possible to explore but then possibly reject tentative hypotheses, thus appearing to undo something, while maintaining monotonicity. Accumulating information is a broad enough design principle to encompass arbitrary computation.
The object of this dissertation is therefore to architect a general-purpose computing system based on propagation networks; to subsume expression evaluation under propagation just as instruction execution is subsumed under expression evaluation; to demonstrate that a general-purpose propagation system can recover all the benefits that have been derived from special-purpose propagation systems, allow them to compose and interoperate, and offer further expressive power beyond what we have known in the past; and finally to contemplate the lessons that such a fundamental shift can teach us about the deep nature of computation.
PhD thesis
</description>
<pubDate>Tue, 03 Nov 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49525</guid>
<dc:date>2009-11-03T00:00:00Z</dc:date>
</item>
<item>
<title>Shape from Sheen</title>
<link>https://hdl.handle.net/1721.1/49511</link>
<description>Shape from Sheen
Adelson, Edward H.; Torralba, Antonio; Fleming, Roland W.
</description>
<pubDate>Thu, 22 Oct 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49511</guid>
<dc:date>2009-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Iterative Projection Methods for Structured Sparsity Regularization</title>
<link>https://hdl.handle.net/1721.1/49428</link>
<description>Iterative Projection Methods for Structured Sparsity Regularization
Rosasco, Lorenzo; Verri, Alessandro; Santoro, Matteo; Mosci, Sofia; Villa, Silvia
In this paper we propose a general framework to characterize and solve the optimization problems underlying a large class of sparsity based regularization algorithms. More precisely, we study the minimization of learning functionals that are sums of a differentiable data term and a convex non-differentiable penalty. These latter penalties have recently become popular in machine learning since they make it possible to enforce various kinds of sparsity properties in the solution. Leveraging the theory of Fenchel duality and subdifferential calculus, we derive explicit optimality conditions for the regularized solution and propose a general iterative projection algorithm whose convergence to the optimal solution can be proved. The generality of the framework is illustrated, considering several examples of regularization schemes, including l1 regularization (and several variants), multiple kernel learning and multi-task learning. Finally, some features of the proposed framework are empirically studied.
</description>
<pubDate>Wed, 14 Oct 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49428</guid>
<dc:date>2009-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Supporting Directed Content Sharing on the Web</title>
<link>https://hdl.handle.net/1721.1/49426</link>
<description>Understanding and Supporting Directed Content Sharing on the Web
Miller, Rob; Karger, David; Marcus, Adam; Bernstein, Michael
To find interesting, personally relevant web content, we often rely on friends and colleagues to pass links along as they encounter them. In this paper, we study and augment link-sharing via e-mail, the most popular means of sharing web content today. Armed with survey data indicating that active sharers of novel web content are often those that actively seek it out, we present FeedMe, a plug-in for Google Reader that makes directed sharing of content a more salient part of the user experience. Our survey research indicates that sharing is moderated by concern about relevancy to the recipient, a desire to send only novel content to the recipient, and the effort required to share. FeedMe allays these concerns by recommending friends who may be interested in seeing the content, providing information on what the recipient has seen and how many emails they have received recently, and giving recipients the opportunity to provide lightweight feedback when they appreciate shared content. FeedMe introduces a novel design space for mixed-initiative social recommenders: friends who know the user voluntarily vet the material on the user's behalf. We present a two-week field experiment (N=60) demonstrating that FeedMe's recommendations and social awareness features made it easier and more enjoyable to share content that recipients appreciated and would not have found otherwise.
</description>
<pubDate>Wed, 07 Oct 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49426</guid>
<dc:date>2009-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>Notes on the Shannon Entropy of the Neural Response</title>
<link>https://hdl.handle.net/1721.1/49425</link>
<description>Notes on the Shannon Entropy of the Neural Response
Shakhnarovich, Greg; Bouvrie, Jake; Rosasco, Lorenzo; Smale, Steve
In these notes we focus on the concept of Shannon entropy in an attempt to provide a systematic way of assessing the discrimination properties of the neural response, and quantifying the role played by the number of layers and the number of templates.
</description>
<pubDate>Fri, 09 Oct 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49425</guid>
<dc:date>2009-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>A Bayesian inference theory of attention: neuroscience and algorithms</title>
<link>https://hdl.handle.net/1721.1/49416</link>
<description>A Bayesian inference theory of attention: neuroscience and algorithms
Chikkerur, Sharat; Serre, Thomas; Poggio, Tomaso
The past four decades of research in visual neuroscience have generated a large and disparate body of literature on the role of attention [Itti et al., 2005]. Although several models have been developed to describe specific properties of attention, a theoretical framework that explains the computational role of attention and is consistent with all known effects is still needed. Recently, several authors have suggested that visual perception can be interpreted as a Bayesian inference process [Rao et al., 2002, Knill and Richards, 1996, Lee and Mumford, 2003]. Within this framework, top-down priors via cortical feedback help disambiguate noisy bottom-up sensory input signals. Building on earlier work by Rao [2005], we show that this Bayesian inference proposal can be extended to explain the role and predict the main properties of attention: namely to facilitate the recognition of objects in clutter. Visual recognition proceeds by estimating the posterior probabilities for objects and their locations within an image via an exchange of messages between ventral and parietal areas of the visual cortex. Within this framework, spatial attention is used to reduce the uncertainty in feature information; feature-based attention is used to reduce the uncertainty in location information. In conjunction, they are used to recognize objects in clutter. Here, we find that several key attentional phenomena such as pop-out, multiplicative modulation and change in contrast response emerge naturally as a property of the network. We explain the idea in three stages. First, we develop a simplified model of attention in the brain, identifying the primary areas involved and their interconnections. Second, we propose a Bayesian network where each node has direct neural correlates within our simplified biological model. Finally, we elucidate the properties of the resulting model, showing that the predictions are consistent with physiological and behavioral evidence.
</description>
<pubDate>Sat, 03 Oct 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49416</guid>
<dc:date>2009-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Attentive processing improves object recognition</title>
<link>https://hdl.handle.net/1721.1/49415</link>
<description>Attentive processing improves object recognition
Chikkerur, Sharat; Poggio, Tomaso; Serre, Thomas
The human visual system can recognize several thousand object categories irrespective of their position and size. This combination of selectivity and invariance is built up gradually across several stages of visual processing. However, the recognition of multiple objects in cluttered visual scenes presents a difficult problem for human as well as machine vision systems. The human visual system has evolved to perform two stages of visual processing: a pre-attentive parallel processing stage, in which the entire visual field is processed at once, and a slow serial attentive processing stage, in which a region of interest in an input image is selected for "specialized" analysis by an attentional spotlight. We argue that this strategy evolved to overcome the limitation of purely feed forward processing in the presence of clutter and crowding. Using a Bayesian model of attention along with a hierarchical model of feed forward recognition on a data set of real world images, we show that this two stage attentive processing can improve recognition in cluttered and crowded conditions.
</description>
<pubDate>Fri, 02 Oct 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/49415</guid>
<dc:date>2009-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient POMDP Forward Search by Predicting the Posterior Belief Distribution</title>
<link>https://hdl.handle.net/1721.1/46820</link>
<description>Efficient POMDP Forward Search by Predicting the Posterior Belief Distribution
Roy, Nicholas; He, Ruijie
Online, forward-search techniques have demonstrated promising results for solving problems in partially observable environments. These techniques depend on the ability to efficiently search and evaluate the set of beliefs reachable from the current belief. However, enumerating or sampling action-observation sequences to compute the reachable beliefs is computationally demanding; coupled with the need to satisfy real-time constraints, existing online solvers can only search to a limited depth. In this paper, we propose that policies can be generated directly from the distribution of the agent's posterior belief. When the underlying state distribution is Gaussian, and the observation function is an exponential family distribution, we can calculate this distribution of beliefs without enumerating the possible observations. This property not only enables us to plan in problems with large observation spaces, but also allows us to search deeper by considering policies composed of multi-step action sequences. We present the Posterior Belief Distribution (PBD) algorithm, an efficient forward-search POMDP planner for continuous domains, demonstrating that better policies are generated when we can perform deeper forward search.
</description>
<pubDate>Wed, 23 Sep 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46820</guid>
<dc:date>2009-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>Whanaungatanga: Sybil-proof routing with social networks</title>
<link>https://hdl.handle.net/1721.1/46819</link>
<description>Whanaungatanga: Sybil-proof routing with social networks
Lesniewski-Laas, Chris; Kaashoek, M. Frans
Decentralized systems, such as distributed hash tables, are subject to the Sybil attack, in which an adversary creates many false identities to increase its influence. This paper proposes a routing protocol for a distributed hash table that is strongly resistant to the Sybil attack. This is the first solution to this problem with sublinear run time and space usage. The protocol uses the social connections between users to build routing tables that enable Sybil-resistant distributed hash table lookups. With a social network of N well-connected honest nodes, the protocol can tolerate up to O(N/log N) "attack edges" (social links from honest users to phony identities). This means that an adversary has to fool a large fraction of the honest users before any lookups will fail. The protocol builds routing tables that contain O(N log^(3/2) N) entries per node. Lookups take O(1) time. Simulation results, using social network graphs from LiveJournal, Flickr, and YouTube, confirm the analytical results.
</description>
<pubDate>Thu, 24 Sep 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46819</guid>
<dc:date>2009-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>Finding aircraft collision-avoidance strategies using policy search methods</title>
<link>https://hdl.handle.net/1721.1/46722</link>
<description>Finding aircraft collision-avoidance strategies using policy search methods
Kaelbling, Leslie Pack; Lozano-Perez, Tomas
A progress report describing the application of policy gradient and policy search by dynamic programming methods to an aircraft collision avoidance problem inspired by the requirements of next-generation TCAS.
</description>
<pubDate>Sat, 12 Sep 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46722</guid>
<dc:date>2009-09-12T00:00:00Z</dc:date>
</item>
<item>
<title>Dependency-Directed Backtracking in Non-Deterministic Scheme</title>
<link>https://hdl.handle.net/1721.1/46712</link>
<description>Dependency-Directed Backtracking in Non-Deterministic Scheme
Zabih, Ramin
Non-deterministic LISP can be used to describe a search problem without specifying the method used to solve the problem. We show that SCHEMER, a non-deterministic dialect of SCHEME, can support dependency-directed backtracking as well as chronological backtracking. Full code for a working SCHEMER interpreter that provides dependency-directed backtracking is included.
This is a greatly revised version of a thesis submitted to the Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science on January 2, 1987, in partial fulfillment of the requirements for the degree of Master of Science.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46712</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Code for LOLCAT Method (Variant of Gillespie Algorithm)</title>
<link>https://hdl.handle.net/1721.1/46710</link>
<description>Code for LOLCAT Method (Variant of Gillespie Algorithm)
Beal, Jacob; Indurkhya, Sagar
This is the publicly available code and data for the LOLCAT Method developed by Sagar Indurkhya and Jacob Beal in the paper: "Reaction factoring and bipartite update graphs accelerate the Gillespie algorithm for large-scale biochemical systems."
</description>
<pubDate>Fri, 04 Sep 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46710</guid>
<dc:date>2009-09-04T00:00:00Z</dc:date>
</item>
<item>
<title>Using Code Perforation to Improve Performance, Reduce Energy Consumption, and Respond to Failures</title>
<link>https://hdl.handle.net/1721.1/46709</link>
<description>Using Code Perforation to Improve Performance, Reduce Energy Consumption, and Respond to Failures
Hoffmann, Henry; Misailovic, Sasa; Sidiroglou, Stelios; Agarwal, Anant; Rinard, Martin
Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use ad-hoc, domain-specific techniques developed specifically for the computation at hand. We present a new general technique, code perforation, for automatically augmenting existing computations with the capability of trading off accuracy in return for performance. In contrast to existing approaches, which typically require the manual development of new algorithms, our implemented SpeedPress compiler can automatically apply code perforation to existing computations with no developer intervention whatsoever. The result is a transformed computation that can respond almost immediately to a range of increased performance demands while keeping any resulting output distortion within acceptable user-defined bounds. We have used SpeedPress to automatically apply code perforation to applications from the PARSEC benchmark suite. The results show that the transformed applications can run as much as two to three times faster than the original applications while distorting the output by less than 10%. Because the transformed applications can operate successfully at many points in the performance/accuracy tradeoff space, they can (dynamically and on demand) navigate the tradeoff space to either maximize performance subject to a given accuracy constraint, or maximize accuracy subject to a given performance constraint. We also demonstrate the SpeedGuard runtime system which uses code perforation to enable applications to automatically adapt to challenging execution environments such as multicore machines that suffer core failures or machines that dynamically adjust the clock speed to reduce power consumption or to protect the machine from overheating.
</description>
<pubDate>Thu, 03 Sep 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46709</guid>
<dc:date>2009-09-03T00:00:00Z</dc:date>
</item>
<item>
<title>Lightweight Communications and Marshalling for Low-Latency Interprocess Communication</title>
<link>https://hdl.handle.net/1721.1/46708</link>
<description>Lightweight Communications and Marshalling for Low-Latency Interprocess Communication
Moore, David; Olson, Edwin; Huang, Albert
We describe the Lightweight Communications and Marshalling (LCM) library for message passing and data marshalling. The primary goal of LCM is to simplify the development of low-latency message passing systems, targeted at real-time robotics applications. LCM is comprised of several components: a data type specification language, a message passing system, logging/playback tools, and real-time analysis tools. LCM provides a platform- and language-independent type specification language. These specifications can be compiled into platform and language specific implementations, eliminating the need for users to implement marshalling code while guaranteeing run-time type safety. Messages can be transmitted between different processes using LCM's message-passing system, which implements a publish/subscribe model. LCM's implementation is notable in providing low-latency messaging and eliminating the need for a central communications "hub". This architecture makes it easy to mix simulated, recorded, and live data sources. A number of logging, playback, and traffic inspection tools simplify common development and debugging tasks. LCM is targeted at robotics and other real-time systems where low latency is critical; its messaging model permits dropping messages in order to minimize the latency of new messages. In this paper, we explain LCM's design, evaluate its performance, and describe its application to a number of autonomous land, underwater, and aerial robots.
</description>
<pubDate>Wed, 02 Sep 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46708</guid>
<dc:date>2009-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>Information Flow for Secure Distributed Applications</title>
<link>https://hdl.handle.net/1721.1/46700</link>
<description>Information Flow for Secure Distributed Applications
Cheng, Winnie Wing-Yee
Private and confidential information is increasingly stored online and increasingly being exposed due to human errors as well as malicious attacks. Information leaks threaten confidentiality, lead to lawsuits, damage enterprise reputations, and cost billions of dollars. While distributed computing architectures provide data and service integration, they also create information flow control problems due to the interaction complexity among service providers. A main problem is the lack of an appropriate programming model to capture expected information flow behaviors in these large distributed software infrastructures. This research tackles this problem by proposing a programming methodology and enforcement platform for application developers to protect and share their sensitive data. We introduce Aeolus, a new platform intended to make it easier to build distributed applications that avoid the unauthorized release of information. The Aeolus security model is based on information flow control but differs from previous work in ways that we believe make it easier to use and understand. In addition, Aeolus provides a number of new mechanisms (anonymous closures, compound tags, boxes, and shared volatile state) to ease the job of writing applications. This thesis provides examples to show how Aeolus features support secure distributed applications. It describes the system design issues and solutions in designing a prototype implementation and presents performance results that show our platform has low overhead.
PhD thesis
</description>
<pubDate>Thu, 27 Aug 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46700</guid>
<dc:date>2009-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>AvatarSAT: An Auto-tuning Boolean SAT Solver</title>
<link>https://hdl.handle.net/1721.1/46691</link>
<description>AvatarSAT: An Auto-tuning Boolean SAT Solver
Ganesh, Vijay; Singh, Rishabh; Near, Joseph P.; Rinard, Martin
We present AvatarSAT, a SAT solver that uses machine-learning classifiers to automatically tune the heuristics of an off-the-shelf SAT solver on a per-instance basis. The classifiers use features of both the input and conflict clauses to select parameter settings for the solver's tunable heuristics. On a randomly selected set of SAT problems chosen from the 2007 and 2008 SAT competitions, AvatarSAT is, on average, over two times faster than MiniSAT based on the geometric mean speedup measure and 50% faster based on the arithmetic mean speedup measure. Moreover, AvatarSAT is hundreds to thousands of times faster than MiniSAT on many hard SAT instances and is never more than twenty times slower than MiniSAT on any SAT instance.
</description>
<pubDate>Wed, 26 Aug 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46691</guid>
<dc:date>2009-08-26T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting Hazardous Intensive Care Patient Episodes Using Real-time Mortality Models</title>
<link>https://hdl.handle.net/1721.1/46690</link>
<description>Detecting Hazardous Intensive Care Patient Episodes Using Real-time Mortality Models
Hug, Caleb
The modern intensive care unit (ICU) has become a complex, expensive, data-intensive environment. Caregivers maintain an overall assessment of their patients based on important observations and trends. If an advanced monitoring system could also reliably provide a systemic interpretation of a patient's observations it could help caregivers interpret these data more rapidly and perhaps more accurately. In this thesis I use retrospective analysis of mixed medical/surgical intensive care patients to develop predictive models. Logistic regression is applied to 7048 development patients with several hundred candidate variables. These candidate variables range from simple vitals to long term trends and baseline deviations. Final models are selected by backward elimination on top cross-validated variables and validated on 3018 additional patients. The real-time acuity score (RAS) that I develop demonstrates strong discrimination ability for patient mortality, with an ROC area (AUC) of 0.880. The final model includes a number of variables known to be associated with mortality, but also computationally intensive variables absent in other severity scores. In addition to RAS, I also develop secondary outcome models that perform well at predicting pressor weaning (AUC=0.825), intra-aortic balloon pump removal (AUC=0.816), the onset of septic shock (AUC=0.843), and acute kidney injury (AUC=0.742). Real-time mortality prediction is a feasible way to provide continuous risk assessment for ICU patients. RAS offers similar discrimination ability when compared to models computed once per day, based on aggregate data over that day. Moreover, RAS mortality predictions are better at discrimination than a customized SAPS II score (Day 3 AUC=0.878 vs AUC=0.849, p &lt; 0.05). The secondary outcome models also provide interesting insights into patient responses to care and patient risk profiles.
While models trained for specifically recognizing secondary outcomes consistently outperform the RAS model at their specific tasks, RAS provides useful baseline risk estimates throughout these events and in some cases offers a notable level of predictive utility.
PhD thesis
</description>
<pubDate>Wed, 26 Aug 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46690</guid>
<dc:date>2009-08-26T00:00:00Z</dc:date>
</item>
<item>
<title>Extending a MOOS-IvP Autonomy System and Users Guide to the IvPBuild Toolbox</title>
<link>https://hdl.handle.net/1721.1/46361</link>
<description>Extending a MOOS-IvP Autonomy System and Users Guide to the IvPBuild Toolbox
Benjamin, Michael R.; Newman, Paul M.; Schmidt, Henrik; Leonard, John J.
This document describes how to extend the suite of MOOS applications and IvP Helm behaviors distributed with the MOOS-IvP software bundle from www.moos-ivp.org. It covers (a) a straw-man repository with a place-holder MOOS application and IvP Behavior, with a working CMake build structure, (b) a brief overview of the MOOS application class with an example application, (c) an overview of the IvP Behavior class with an example behavior, and (d) the IvPBuild Toolbox for generation of objective functions within behaviors.
</description>
<pubDate>Thu, 20 Aug 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46361</guid>
<dc:date>2009-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>Guaranteed in-order packet delivery using Exclusive Dynamic Virtual Channel Allocation</title>
<link>https://hdl.handle.net/1721.1/46353</link>
<description>Guaranteed in-order packet delivery using Exclusive Dynamic Virtual Channel Allocation
Devadas, Srinivas; Cho, Myong Hyon; Shim, Keun Sup; Lis, Mieszko
In-order packet delivery, a critical abstraction for many higher-level protocols, can severely limit the performance potential in low-latency networks (common, for example, in network-on-chip designs with many cores). While basic variants of dimension-order routing guarantee in-order delivery, improving performance by adding multiple dynamically allocated virtual channels or using other routing schemes compromises this guarantee. Although this can be addressed by reordering out-of-order packets at the destination core, such schemes incur significant overheads, and, in the worst case, raise the specter of deadlock or require expensive retransmission. We present Exclusive Dynamic VCA, an oblivious virtual channel allocation scheme which combines the performance advantages of dynamic virtual allocation with in-network, deadlock-free in-order delivery. At the same time, our scheme reduces head-of-line blocking, often significantly improving throughput compared to equivalent baseline (out-of-order) dimension-order routing when multiple virtual channels are used, and so may be desirable even when in-order delivery is not required. Implementation requires only minor, inexpensive changes to traditional oblivious dimension-order router architectures, more than offset by the removal of packet reorder buffers and logic.
</description>
<pubDate>Tue, 18 Aug 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46353</guid>
<dc:date>2009-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Application Heartbeats for Software Performance and Health</title>
<link>https://hdl.handle.net/1721.1/46351</link>
<description>Application Heartbeats for Software Performance and Health
Miller, Jason; Agarwal, Anant; Santambrogio, Marco; Eastep, Jonathan; Hoffmann, Henry
Adaptive, or self-aware, computing has been proposed as one method to help application programmers confront the growing complexity of multicore software development. However, existing approaches to adaptive systems are largely ad hoc and often do not manage to incorporate the true performance goals of the applications they are designed to support. This paper presents an enabling technology for adaptive computing systems: Application Heartbeats. The Application Heartbeats framework provides a simple, standard programming interface that applications can use to indicate their performance and system software (and hardware) can use to query an application's performance. Several experiments demonstrate the simplicity and efficacy of the Application Heartbeat approach. First, the PARSEC benchmark suite is instrumented with Application Heartbeats to show the broad applicability of the interface. Then, an adaptive H.264 encoder is developed to show how applications might use Application Heartbeats internally. Next, an external resource scheduler is developed which assigns cores to an application based on its performance as specified with Application Heartbeats. Finally, the adaptive H.264 encoder is used to illustrate how Application Heartbeats can aid fault tolerance.
</description>
<pubDate>Fri, 07 Aug 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46351</guid>
<dc:date>2009-08-07T00:00:00Z</dc:date>
</item>
<item>
<title>CG2Real: Improving the Realism of Computer Generated Images using a Large Collection of Photographs</title>
<link>https://hdl.handle.net/1721.1/46335</link>
<description>CG2Real: Improving the Realism of Computer Generated Images using a Large Collection of Photographs
Pfister, Hanspeter; Freeman, William T.; Avidan, Shai; Dale, Kevin; Johnson, Micah K.; Matusik, Wojciech
Computer Graphics (CG) has achieved a high level of realism, producing strikingly vivid images. This realism, however, comes at the cost of long and often expensive manual modeling, and most often humans can still distinguish between CG images and real images. We present a novel method to make CG images look more realistic that is simple and accessible to novice users. Our system uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a novel mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our improved CG images appear more realistic than the originals.
</description>
<pubDate>Wed, 15 Jul 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46335</guid>
<dc:date>2009-07-15T00:00:00Z</dc:date>
</item>
<item>
<title>The Guided Improvement Algorithm for Exact, General-Purpose, Many-Objective Combinatorial Optimization</title>
<link>https://hdl.handle.net/1721.1/46322</link>
<description>The Guided Improvement Algorithm for Exact, General-Purpose, Many-Objective Combinatorial Optimization
Jackson, Daniel; Estler, H.-Christian; Rayside, Derek
This paper presents a new general-purpose algorithm for the exact solving of combinatorial many-objective optimization problems. We call this new algorithm the guided improvement algorithm. The algorithm is implemented on top of the non-optimizing relational constraint solver Kodkod. We compare the performance of this new algorithm against two algorithms from the literature [Gavanelli 2002, Lukasiewycz et al. 2007, Laumanns et al. 2006] on three micro-benchmark problems (n-Queens, n-Rooks, and knapsack) and on two aerospace case studies. Results indicate that the new algorithm is better for the kinds of many-objective problems that our aerospace collaborators are interested in solving. The new algorithm returns Pareto-optimal solutions as it computes.
</description>
<pubDate>Fri, 03 Jul 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/46322</guid>
<dc:date>2009-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Programming Manifolds</title>
<link>https://hdl.handle.net/1721.1/45652</link>
<description>Programming Manifolds
Bachrach, Jonathan; Beal, Jacob
Many programming domains involve the manipulation of values distributed through a manifold - examples include sensor networks, smart materials, and biofilms. This paper describes a programming semantics for manifolds based on the amorphous medium abstraction, which places a computational device at every point in the manifold. This abstraction enables the creation of programs that automatically scale to networks of different size and device density. This semantics is currently implemented in our language Proto and compiles for execution on Mica2 Motes and several other platforms.
</description>
<pubDate>Mon, 01 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45652</guid>
<dc:date>2007-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Visual Histories for Vector Graphics</title>
<link>https://hdl.handle.net/1721.1/45600</link>
<description>Interactive Visual Histories for Vector Graphics
Scull, Craig; Johnson, Steve; Aliaga, Frederick; Paris, Sylvain; Su, Sara L.; Durand, Fredo
Presentation and graphics software enables users to experiment with variations of illustrations. They can revisit recent editing operations using the ubiquitous undo command, but they are limited to sequential exploration. We propose a new interaction metaphor and visualization for operation history. While editing, a user can access a history mode in which actions are denoted by graphical depictions appearing on top of the document. Our work is inspired by the visual language of film storyboards and assembly instructions. Our storyboard provides an interactive visual history, summarizing the editing of a document or a selected object. Each view is composed of action depictions representing the user's editing actions and enables the user to consider the operation history in context rather than in a disconnected list view. This metaphor provides instant access to any past action and we demonstrate that this is an intuitive interface to a selective undo mechanism.
</description>
<pubDate>Wed, 24 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45600</guid>
<dc:date>2009-06-24T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced Visual Authoring Using Operation History</title>
<link>https://hdl.handle.net/1721.1/45599</link>
<description>Enhanced Visual Authoring Using Operation History
Su, Sara L.
Graphical editors have introduced great flexibility to the designer's workflow, providing powerful digital tools and enabling the creation of complex and compelling designs. This thesis presents methods for improving these interactions by leveraging operation history. Much instrumentation and activity logging in software has been for the purpose of debugging, that is, for the benefit of the programmer or analyst. Our work addresses the mining of operation history for the benefit of the end user. We present three main contributions in this area. First, we introduce selection expansion, a method for facilitating the reuse of complex multiple-item selections by identifying items that are likely to be edited together. We then discuss an extension of this work, soft grouping, which gives users more control than standard selection and more flexibility than standard grouping. Finally, we present an interactive visualization of operation history, interactive storyboards, which enables in-context browsing and manipulation of operation history. We demonstrate these approaches in the context of vector graphics editing and present the results of pilot studies using our software implementation. While this thesis focuses on the usage patterns of graphic designers, many of the strategies could be generalized to other domains.
PhD thesis
</description>
<pubDate>Wed, 24 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45599</guid>
<dc:date>2009-06-24T00:00:00Z</dc:date>
</item>
<item>
<title>An integrated model of visual attention using shape-based features</title>
<link>https://hdl.handle.net/1721.1/45598</link>
<description>An integrated model of visual attention using shape-based features
Poggio, Tomaso; Serre, Thomas; Tan, Cheston; Chikkerur, Sharat
Apart from helping shed some light on human perceptual mechanisms, modeling visual attention has important applications in computer vision. It has been shown to be useful in priming object detection, pruning interest points, quantifying visual clutter as well as predicting human eye movements. Prior work has either relied on purely bottom-up approaches or top-down schemes using simple low-level features. In this paper, we outline a top-down visual attention model based on shape-based features. The same shape-based representation is used to represent both the objects and the scenes that contain them. The spatial priors imposed by the scene and the feature priors imposed by the target object are combined in a Bayesian framework to generate a task-dependent saliency map. We show that our approach can predict the location of objects as well as match eye movements (92% overlap with human observers). We also show that the proposed approach performs better than existing bottom-up and top-down computational models.
</description>
<pubDate>Sat, 20 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45598</guid>
<dc:date>2009-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>An Overview of MOOS-IvP and a Brief Users Guide to the IvP Helm Autonomy Software</title>
<link>https://hdl.handle.net/1721.1/45569</link>
<description>An Overview of MOOS-IvP and a Brief Users Guide to the IvP Helm Autonomy Software
Benjamin, Michael R.; Leonard, John J.; Schmidt, Henrik; Newman, Paul M.
This document describes the IvP Helm - an Open Source behavior-based autonomy application for unmanned vehicles. IvP is short for interval programming - a technique for representing and solving multi-objective optimization problems. Behaviors in the IvP Helm are reconciled using multi-objective optimization when in competition with each other for influence over the vehicle. The IvP Helm is written as a MOOS application, where MOOS is a set of Open Source publish-subscribe autonomy middleware tools. This document describes the configuration and use of the IvP Helm, provides examples of simple missions, and explains how to download and build the software from the MOOS-IvP server at www.moosivp.org.
</description>
<pubDate>Thu, 18 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45569</guid>
<dc:date>2009-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Keeping Mobile Robots Connected</title>
<link>https://hdl.handle.net/1721.1/45568</link>
<description>Keeping Mobile Robots Connected
Lynch, Nancy; Ley-Wild, Ruy; Kuhn, Fabian; Cornejo, Alejandro
Designing robust algorithms for mobile agents with reliable communication is difficult due to the distributed nature of computation; in mobile ad hoc networks (MANETs) the matter is exacerbated by the need to ensure connectivity. Existing distributed algorithms provide coordination but typically assume connectivity is ensured by other means. We present a connectivity service that encapsulates an arbitrary motion planner and can refine any plan to preserve connectivity (the graph of agents remains connected) and ensure progress (the agents advance towards their goal). The service is realized by a distributed algorithm that is modular in that it makes no assumptions about the motion-planning mechanism except the ability of an agent to query its position and intended goal position, local in that it uses 1-hop broadcast to communicate with nearby agents but does not need any network routing infrastructure, and oblivious in that it does not depend on previous computations. We prove that the progress of the algorithm in one round is at least Omega(min(d,r)), where d is the minimum distance between an agent and its target and r is the communication radius. We characterize the worst-case configuration and show that when d &gt;= r this bound is tight and the algorithm is optimal, since no algorithm can guarantee greater progress. Finally, we show that all agents get epsilon-close to their targets within O(D_0/r + n^2/epsilon) rounds, where n is the number of agents and D_0 is the initial distance to the targets.
</description>
<pubDate>Wed, 17 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45568</guid>
<dc:date>2009-06-17T00:00:00Z</dc:date>
</item>
<item>
<title>Partitioning Strategies for Concurrent Programming</title>
<link>https://hdl.handle.net/1721.1/45567</link>
<description>Partitioning Strategies for Concurrent Programming
Devadas, Srinivas; Agarwal, Anant; Hoffmann, Henry
This work presents four partitioning strategies, or patterns, useful for decomposing a serial application into multiple concurrently executing parts. These partitioning strategies augment the commonly used task and data parallel design patterns by recognizing that applications are spatiotemporal in nature. Therefore, data and instruction decomposition are further distinguished by whether the partitioning is done in the spatial or the temporal dimension. Thus, this work describes four decomposition strategies: spatial data partitioning (SDP), temporal data partitioning (TDP), spatial instruction partitioning (SIP), and temporal instruction partitioning (TIP), while cataloging the benefits and drawbacks of each. In addition, the practical use of these strategies is demonstrated through a case study in which they are applied to implement several different parallelizations of a multicore H.264 encoder for HD video. This case study illustrates both the application of the patterns and their effects on the performance of the encoder.
</description>
<pubDate>Tue, 16 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45567</guid>
<dc:date>2009-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>A Useful Homomorphic Encryption Method</title>
<link>https://hdl.handle.net/1721.1/45566</link>
<description>A Useful Homomorphic Encryption Method
Micali, Silvio
</description>
<pubDate>Mon, 15 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45566</guid>
<dc:date>2009-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Simple LCD Transmitter Camera Receiver Data Link</title>
<link>https://hdl.handle.net/1721.1/45565</link>
<description>Simple LCD Transmitter Camera Receiver Data Link
Katabi, Dina; Raskar, Ramesh; Mohan, Ankit; Woo, Grace
We demonstrate a free-space optical system for indoor environments using a consumer camera and projector, readily available devices for visual computing. Through the design of, prototyping of, and experimentation with this commodity hardware, we analyze a practical optical solution to current wireless challenges unmet by classic RF communication, as well as its drawbacks. We also summarize and introduce some new applications enabled by similar setups.
</description>
<pubDate>Mon, 15 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45565</guid>
<dc:date>2009-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent Reaction</title>
<link>https://hdl.handle.net/1721.1/45563</link>
<description>Coherent Reaction
Edwards, Jonathan
Side effects are both the essence and bane of imperative programming. The programmer must carefully coordinate actions to manage their side effects upon each other. Such coordination is complex, error-prone, and fragile. Coherent reaction is a new model of change-driven computation that coordinates effects automatically. State changes trigger events called reactions that in turn change other states. A coherent execution order is one in which each reaction executes before any others that are affected by its changes. A coherent order is discovered iteratively by detecting incoherencies as they occur and backtracking their effects. Unlike alternative solutions, much of the power of imperative programming is retained, as is the common sense notion of mutable state. Automatically coordinating actions lets the programmer express what to do, not when to do it. Coherent reactions are embodied in the Coherence language, which is specialized for interactive applications like those common on the desktop and web. The fundamental building block of Coherence is the dynamically typed mutable tree. The fundamental abstraction mechanism is the virtual tree, whose value is lazily computed, and whose behavior is generated by coherent reactions.
</description>
<pubDate>Fri, 12 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45563</guid>
<dc:date>2009-06-12T00:00:00Z</dc:date>
</item>
<item>
<title>The Measurement of Visual Motion</title>
<link>https://hdl.handle.net/1721.1/45554</link>
<description>The Measurement of Visual Motion
Hildreth, Ellen C.; Ullman, Shimon
The analysis of visual motion divides naturally into two stages: the first is the measurement of motion, for example, the assignment of direction and magnitude of velocity to elements in the image, on the basis of the changing intensity pattern; the second is the use of motion measurements, for example, to separate the scene into distinct objects, and infer their three-dimensional structure. In this paper, we present a computational study of the measurement of motion. Similar to other visual processes, the motion of elements is not determined uniquely by information in the changing image; additional constraint is required to compute a unique velocity field. Given this global ambiguity of motion, local measurements from the changing image, such as those provided by directionally-selective simple cells in primate visual cortex, cannot possibly specify a unique local velocity vector, and in fact, specify only one component of velocity. Computation of the full two-dimensional velocity field requires the integration of local motion measurements, either over an area, or along contours in the image. We will examine possible algorithms for computing motion, based on a range of additional constraints. Finally, we will present implications for the biological computation of motion.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45554</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Radio Networks</title>
<link>https://hdl.handle.net/1721.1/45553</link>
<description>Modeling Radio Networks
Lynch, Nancy; Newport, Calvin
We describe a modeling framework and collection of foundational composition results for the study of probabilistic distributed algorithms in synchronous radio networks. Existing results in this setting rely on informal descriptions of the channel behavior and therefore lack easy comparability and are prone to error caused by definition subtleties. Our framework rectifies these issues by providing: (1) a method to precisely describe a radio channel as a probabilistic automaton; (2) a mathematical notion of implementing one channel using another channel, allowing for direct comparisons of channel strengths and a natural decomposition of problems into implementing a more powerful channel and solving the problem on the powerful channel; (3) a mathematical definition of a problem and solving a problem; (4) a pair of composition results that simplify the tasks of proving properties about channel implementation algorithms and combining problems with channel implementations. Our goal is to produce a model streamlined for the needs of the radio network algorithms community.
</description>
<pubDate>Thu, 04 Jun 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45553</guid>
<dc:date>2009-06-04T00:00:00Z</dc:date>
</item>
<item>
<title>Gradient Clock Synchronization in Dynamic Networks</title>
<link>https://hdl.handle.net/1721.1/45549</link>
<description>Gradient Clock Synchronization in Dynamic Networks
Locher, Thomas; Kuhn, Fabian; Oshman, Rotem
In recent years, large-scale decentralized computer networks such as peer-to-peer and mobile ad hoc networks have become increasingly prevalent. The topologies of many of these networks are often highly dynamic. This is especially true for ad hoc networks formed by mobile wireless devices. In this paper, we study the fundamental problem of clock synchronization in dynamic networks. We show that there is an inherent trade-off between the skew S guaranteed along sufficiently old links and the time needed to guarantee a small skew along new links. For any sufficiently large initial skew on a new link, there are executions in which the time required to reduce the skew on the link to O(S) is at least Omega(n/S). We show that this bound is tight for moderately small values of S. Assuming a fixed set of n nodes and an arbitrary pattern of edge insertions and removals, a weak dynamic connectivity requirement suffices to prove the following results. We present an algorithm that always maintains a skew of O(n) between any two nodes in the network. For a parameter S = Omega(sqrt{rho n}), where rho is the maximum hardware clock drift, it is further guaranteed that if a communication link between two nodes u, v persists in the network for at least Omega(n/S) time, the clock skew between u and v is reduced to no more than O(S).
</description>
<pubDate>Fri, 29 May 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45549</guid>
<dc:date>2009-05-29T00:00:00Z</dc:date>
</item>
<item>
<title>Sepia: a Framework for Natural Language Semantics</title>
<link>https://hdl.handle.net/1721.1/45548</link>
<description>Sepia: a Framework for Natural Language Semantics
Marton, Gregory Adam; Westrick, Linda Brown
To help explore linguistic semantics in the context of computational natural language understanding, Sepia provides a realization of the central theoretical idea of categorial grammar: linking words and phrases to compositional lambda semantics. The Sepia framework provides a language in which to express complex transformations from text to data structures, and tools surrounding that language for parsing and machine learning. Lambda semantics are expressed as arbitrary Scheme programs, unlimited in the semantic representations they may build, and the rules for transformation are expressed in Combinatory Categorial Grammar, though the details of the grammar formalism may be easily changed. This report explains the major design decisions and is meant to teach the reader how to understand Sepia semantics and how to create lexical items for a new language-understanding task.
Source code and technical description
</description>
<pubDate>Thu, 28 May 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45548</guid>
<dc:date>2009-05-28T00:00:00Z</dc:date>
</item>
<item>
<title>Scene Classification with a Biologically Inspired Method</title>
<link>https://hdl.handle.net/1721.1/45516</link>
<description>Scene Classification with a Biologically Inspired Method
Terashima, Yoshito
We present a biologically motivated method for scene image classification. The core of the method is to use a shape-based image property provided by a hierarchical feedforward model of the visual cortex [18]. Edge-based and color-based image properties are additionally used to improve the accuracy. The method consists of two stages of image analysis. In the first stage, each of three classification paths uses one image property (i.e., shape-, edge-, or color-based features) independently. In the second stage, a single classifier assigns the category of an image based on the probability distributions of the first-stage classifier outputs. Experiments show that the method boosts the classification accuracy over the shape-based model. We demonstrate that this method achieves a high accuracy comparable to other reported methods on a publicly available color image dataset.
</description>
<pubDate>Sun, 10 May 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45516</guid>
<dc:date>2009-05-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Abstract MAC Layer</title>
<link>https://hdl.handle.net/1721.1/45515</link>
<description>The Abstract MAC Layer
Kuhn, Fabian; Newport, Calvin; Lynch, Nancy
A diversity of possible communication assumptions complicates the study of algorithms and lower bounds for radio networks. We address this problem by defining an Abstract MAC Layer. This service provides reliable local broadcast communication, with timing guarantees stated in terms of a collection of abstract delay functions applied to the relevant contention. Algorithm designers can analyze their algorithms in terms of these functions, independently of specific channel behavior. Concrete implementations of the Abstract MAC Layer over basic radio network models generate concrete definitions for these delay functions, automatically adapting bounds proven for the abstract service to bounds for the specific radio network under consideration. To illustrate this approach, we use the Abstract MAC Layer to study the new problem of Multi-Message Broadcast, a generalization of standard single-message broadcast, in which any number of messages arrive at any processes at any times. We present and analyze two algorithms for Multi-Message Broadcast in static networks: a simple greedy algorithm and one that uses regional leaders. We then indicate how these results can be extended to mobile networks.
</description>
<pubDate>Mon, 11 May 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45515</guid>
<dc:date>2009-05-11T00:00:00Z</dc:date>
</item>
<item>
<title>4D Frequency Analysis of Computational Cameras for Depth of Field Extension</title>
<link>https://hdl.handle.net/1721.1/45513</link>
<description>4D Frequency Analysis of Computational Cameras for Depth of Field Extension
Levin, Anat; Hasinoff, Samuel W.; Freeman, William T.; Green, Paul; Durand, Fredo
Depth of field (DOF), the range of scene depths that appear sharp in a photograph, poses a fundamental tradeoff in photography---wide apertures are important to reduce imaging noise, but they also increase defocus blur. Recent advances in computational imaging modify the acquisition process to extend the DOF through deconvolution. Because deconvolution quality is a tight function of the frequency power spectrum of the defocus kernel, designs with high spectra are desirable. In this paper we study how to design effective extended-DOF systems, and show an upper bound on the maximal power spectrum that can be achieved. We analyze defocus kernels in the 4D light field space and show that in the frequency domain, only a low-dimensional 3D manifold contributes to focus. Thus, to maximize the defocus spectrum, imaging systems should concentrate their limited energy on this manifold. We review several computational imaging systems and show either that they spend energy outside the focal manifold or do not achieve a high spectrum over the DOF. Guided by this analysis we introduce the lattice-focal lens, which concentrates energy at the low-dimensional focal manifold and achieves a higher power spectrum than previous designs. We have built a prototype lattice-focal lens and present extended depth of field results.
</description>
<pubDate>Fri, 08 May 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45513</guid>
<dc:date>2009-05-08T00:00:00Z</dc:date>
</item>
<item>
<title>ATAC: A Manycore Processor with On-Chip Optical Network</title>
<link>https://hdl.handle.net/1721.1/45510</link>
<description>ATAC: A Manycore Processor with On-Chip Optical Network
Liu, Jifeng; Psota, James; Beckmann, Nathan; Miller, Jason; Michel, Jurgen; Eastep, Jonathan; Kurian, George; Kimerling, Lionel; Agarwal, Anant; Beals, Mark
Ever since industry turned to parallelism instead of frequency scaling to improve processor performance, multicore processors have continued to scale to larger and larger numbers of cores. Some believe that multicores will have 1000 cores or more by the middle of the next decade. However, their promise of increased performance will only be reached if their inherent scaling and programming challenges are overcome. Meanwhile, recent advances in nanophotonic device manufacturing are making chip-stack optics a reality: an interconnect technology that can provide significantly more bandwidth at lower power than its conventional electrical analogs. Perhaps more importantly, optical interconnect also has the potential to enable new, easy-to-use programming models through an inexpensive broadcast mechanism. This paper introduces ATAC, a new manycore architecture that capitalizes on the recent advances in optics to address a number of the challenges that future manycore designs will face. The new constraints and opportunities associated with on-chip optical interconnect are presented and explored in the design of ATAC. Furthermore, this paper introduces ACKwise, a novel directory-based cache coherence protocol that takes advantage of the special properties of ATAC to achieve high performance and scalability on large-scale manycores. Early performance results show that a 1000-core ATAC chip achieves a speedup of as much as 39% when compared with a similarly sized manycore with an electrical mesh network.
</description>
<pubDate>Tue, 05 May 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45510</guid>
<dc:date>2009-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Store Programming: Mechanisms and Performance</title>
<link>https://hdl.handle.net/1721.1/45509</link>
<description>Remote Store Programming: Mechanisms and Performance
Wentzlaff, David; Agarwal, Anant; Hoffmann, Henry
This paper presents remote store programming (RSP). This paradigm combines usability and efficiency through the exploitation of a simple hardware mechanism, the remote store, which can easily be added to existing multicores. Remote store programs are marked by fine-grained, one-sided communication, which results in a stream of data flowing from the registers of a sending process to the cache of a destination process. The RSP model and its hardware implementation trade a relatively high store latency for a low load latency because loads are more common than stores, and it is easier to tolerate store latency than load latency. This paper demonstrates the performance advantages of remote store programming by comparing it to both cache-coherent shared memory and direct memory access (DMA) based approaches using the TILEPro64 processor. The paper studies two applications: a two-dimensional Fast Fourier Transform (2D FFT) and an H.264 encoder for high-definition video. For a 2D FFT using 56 cores, RSP is 1.64x faster than DMA and 4.4x faster than shared memory. For an H.264 encoder using 40 cores, RSP achieves the same performance as DMA and 4.8x the performance of shared memory. Along with these performance advantages, RSP requires the least hardware support of the three. RSP's features, performance, and hardware simplicity make it well suited to the embedded processing domain.
</description>
<pubDate>Tue, 05 May 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45509</guid>
<dc:date>2009-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Allocation for Multi-agent Systems using Tatonnement</title>
<link>https://hdl.handle.net/1721.1/45142</link>
<description>Risk Allocation for Multi-agent Systems using Tatonnement
Williams, Brian C.; Ono, Masahiro
This paper proposes a new market-based distributed planning algorithm for multi-agent systems under uncertainty, called MIRA (Market-based Iterative Risk Allocation). In large coordination problems, from power grid management to multi-vehicle missions, multiple agents act collectively in order to optimize the performance of the system, while satisfying mission constraints. These optimal plans are particularly susceptible to risk when uncertainty is introduced. We present a distributed planning algorithm that minimizes the system cost while ensuring that the probability of violating mission constraints is below a user-specified level. We build upon the paradigm of risk allocation (Ono and Williams, AAAI-08), in which the planner optimizes not only the sequence of actions, but also its allocation of risk among each constraint at each time step. We extend the concept of risk allocation to multi-agent systems by highlighting risk as a good that is traded in a computational market. The equilibrium price of risk that balances the supply and demand is found by an iterative price adjustment process called tatonnement (also known as Walrasian auction). The simulation results demonstrate the efficiency and optimality of the proposed distributed planner.
</description>
<pubDate>Wed, 22 Apr 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45142</guid>
<dc:date>2009-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Computing Network Coordinates in the Presence of Byzantine Faults</title>
<link>https://hdl.handle.net/1721.1/45141</link>
<description>Computing Network Coordinates in the Presence of Byzantine Faults
Zhou, You
Network coordinate systems allow for efficient construction of large-scale distributed systems on the Internet. Coordinates provide locality information in a compact way, without requiring each node to contact every potential neighbor; distances between two nodes' coordinates represent estimates of the network latency between them. Past work on network coordinates has assumed that all nodes in the system behave correctly. The techniques in these systems do not behave well when nodes are Byzantine. These Byzantine failures, wherein a faulty node can behave arbitrarily, can make the coordinate-based distance estimates meaningless. For example, a Byzantine node can delay responding to some other node, thus distorting that node's computation of its own location. We present a network coordinate system based on landmarks, reference nodes that are used for measurements, some of which may be Byzantine faulty. It scales linearly in the number of clients computing their coordinates and does not require excessive network traffic to allow clients to do so. Our results show that our system is able to compute accurate coordinates even when some landmarks are exhibiting Byzantine faults.
MEng thesis
</description>
<pubDate>Thu, 16 Apr 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/45141</guid>
<dc:date>2009-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and evaluating blind deconvolution algorithms</title>
<link>https://hdl.handle.net/1721.1/44964</link>
<description>Understanding and evaluating blind deconvolution algorithms
Freeman, William; Durand, Fredo; Weiss, Yair; Levin, Anat
Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. On the other hand, we show that since the kernel size is often smaller than the image size, a MAP estimation of the kernel alone can be well constrained and accurately recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. We have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrates that the shift-invariant blur assumption made by most algorithms is often violated.
</description>
<pubDate>Tue, 31 Mar 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44964</guid>
<dc:date>2009-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>Fragment Grammars: Exploring Computation and Reuse in Language</title>
<link>https://hdl.handle.net/1721.1/44963</link>
<description>Fragment Grammars: Exploring Computation and Reuse in Language
O'Donnell, Timothy J.; Tenenbaum, Joshua B.; Goodman, Noah D.
Language relies on a division of labor between stored units and structure building operations which combine the stored units into larger structures. This division of labor leads to a tradeoff: more structure-building means less need to store while more storage means less need to compute structure. We develop a hierarchical Bayesian model called fragment grammar to explore the optimum balance between structure-building and reuse. The model is developed in the context of stochastic functional programming (SFP) and in particular using a probabilistic variant of Lisp known as the Church programming language (Goodman, Mansinghka, Roy, Bonawitz, &amp; Tenenbaum, 2008). We show how to formalize several probabilistic models of language structure using Church, and how fragment grammar generalizes one of them---adaptor grammars (Johnson, Griffiths, &amp; Goldwater, 2007). We conclude with experimental data with adults and preliminary evaluations of the model on natural language corpus data.
</description>
<pubDate>Tue, 31 Mar 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44963</guid>
<dc:date>2009-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>Representing Small Group Evolution</title>
<link>https://hdl.handle.net/1721.1/44959</link>
<description>Representing Small Group Evolution
Wormald, Nicholas; Richards, Whitman
Understanding the dynamics of network evolution rests in part on the representation chosen to characterize the evolutionary process. We offer a simple, three-parameter representation based on subgraphs that capture three important properties of social networks: leadership, team alignment or bonding among members, and diversity of expertise. When plotted in this representation, the evolution of a typical small group, such as a start-up or a street gang, follows a spiral trajectory, moving toward a tentative fixed point as membership increases to two dozen or so. We show that a simple probabilistic model of recruitment and bonding cannot explain these observations, and suggest that strategic moves among group members may come into play.
</description>
<pubDate>Mon, 30 Mar 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44959</guid>
<dc:date>2009-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Oblivious Routing in On-Chip Bandwidth-Adaptive Networks</title>
<link>https://hdl.handle.net/1721.1/44958</link>
<description>Oblivious Routing in On-Chip Bandwidth-Adaptive Networks
Kinsy, Michel; Wen, Tina; Shim, Keun Sup; Lis, Mieszko; Cho, Myong Hyon; Devadas, Srinivas
Oblivious routing can be implemented on simple router hardware, but network performance suffers when routes become congested. Adaptive routing attempts to avoid hot spots by re-routing flows, but requires more complex hardware to determine and configure new routing paths. We propose on-chip bandwidth-adaptive networks to mitigate the performance problems of oblivious routing and the complexity issues of adaptive routing. In a bandwidth-adaptive network, the bisection bandwidth of the network can adapt to changing network conditions. We describe one implementation of a bandwidth-adaptive network in the form of a two-dimensional mesh with adaptive bidirectional links, where the bandwidth of a link in one direction can be increased at the expense of the other direction. Efficient local intelligence is used to reconfigure each link, and this reconfiguration can be done very rapidly in response to changing traffic demands. We compare the hardware designs of a unidirectional and a bidirectional link, and evaluate the performance gains provided by a bandwidth-adaptive network in comparison to a conventional network under uniform and bursty traffic when oblivious routing is used.
</description>
<pubDate>Fri, 27 Mar 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44958</guid>
<dc:date>2009-03-27T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Bugs in Web Applications Using Dynamic Test Generation and Explicit State Model Checking</title>
<link>https://hdl.handle.net/1721.1/44956</link>
<description>Finding Bugs in Web Applications Using Dynamic Test Generation and Explicit State Model Checking
Tip, Frank; Ernst, Michael D.; Dig, Danny; Dolby, Julian; Kiezun, Adam; Artzi, Shay; Paradkar, Amit
Web script crashes and malformed dynamically-generated web pages are common errors, and they seriously impact the usability of web applications. Current tools for web-page validation cannot handle the dynamically generated pages that are ubiquitous on today's Internet. We present a dynamic test generation technique for the domain of dynamic web applications. The technique utilizes both combined concrete and symbolic execution and explicit-state model checking. The technique generates tests automatically, runs the tests capturing logical constraints on inputs, and minimizes the conditions on the inputs to failing tests, so that the resulting bug reports are small and useful in finding and fixing the underlying faults. Our tool Apollo implements the technique for the PHP programming language. Apollo generates test inputs for a web application, monitors the application for crashes, and validates that the output conforms to the HTML specification. This paper presents Apollo's algorithms and implementation, and an experimental evaluation that revealed 302 faults in 6 PHP web applications.
</description>
<pubDate>Thu, 26 Mar 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44956</guid>
<dc:date>2009-03-26T00:00:00Z</dc:date>
</item>
<item>
<title>The Abstract MAC Layer</title>
<link>https://hdl.handle.net/1721.1/44620</link>
<description>The Abstract MAC Layer
Newport, Calvin; Lynch, Nancy; Kuhn, Fabian
A diversity of possible communication assumptions complicates the study of algorithms and lower bounds for radio networks. We address this problem by defining an Abstract MAC Layer. This service provides reliable local broadcast communication, with timing guarantees stated in terms of a collection of abstract delay functions applied to the relevant contention. Algorithm designers can analyze their algorithms in terms of these functions, independently of specific channel behavior. Concrete implementations of the Abstract MAC Layer over basic radio network models generate concrete definitions for these delay functions, automatically adapting bounds proven for the abstract service to bounds for the specific radio network under consideration. To illustrate this approach, we use the Abstract MAC Layer to study the new problem of Multi-Message Broadcast, a generalization of standard single-message broadcast, in which any number of messages arrive at any processes at any times. We present and analyze two algorithms for Multi-Message Broadcast in static networks: a simple greedy algorithm and one that uses regional leaders. We indicate how these results can be extended to mobile networks.
</description>
<pubDate>Sat, 21 Feb 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44620</guid>
<dc:date>2009-02-21T00:00:00Z</dc:date>
</item>
<item>
<title>Overcoming the Antennas-Per-Node Throughput Limit in MIMO LANs</title>
<link>https://hdl.handle.net/1721.1/44616</link>
<description>Overcoming the Antennas-Per-Node Throughput Limit in MIMO LANs
Perli, Samuel David; Gollakota, Shyamnath; Katabi, Dina
Today, the number of concurrent packets in a MIMO LAN is limited by the number of antennas on the AP. This paper shows how to overcome this limit. It presents a new design where multiple client-AP pairs can communicate concurrently, on the same 802.11 channel. We demonstrate both analytically and experimentally that our design almost doubles the throughput of a MIMO LAN.  The key idea underlying our approach is Interference Alignment and Cancellation (IAC), a novel technique for decoding concurrent sender-receiver pairs in MIMO LANs. It exploits two basic properties of MIMO LANs. First, MIMO transmitters can control the alignment of their signals at a receiver. Second, APs are typically connected to a backend Ethernet, which they can use for coordination. Hence, in IAC, transmitters align their signals such that the first AP can decode at least one of the concurrent packets. Once a packet is decoded, it is sent over the Ethernet to the second AP, which subtracts it from its received signal to decode a second packet, which it sends to the third AP to decode the next packet, and so on. We implement our technique in 2x2 MIMO GNU Radios, and demonstrate via wireless experiments that IAC increases the average throughput of a MIMO LAN by 1.5x on the downlink and 2x on the uplink.
</description>
<pubDate>Wed, 18 Feb 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44616</guid>
<dc:date>2009-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Class-Specific 3D Reconstruction from a Single Image</title>
<link>https://hdl.handle.net/1721.1/44615</link>
<description>Automatic Class-Specific 3D Reconstruction from a Single Image
Lozano-Perez, Tomas; Kaelbling, Leslie Pack; Chiu, Han-Pang
Our goal is to automatically reconstruct 3D objects from a single image, by using prior 3D shape models of classes. The shape models, defined as a collection of oriented primitive shapes centered at fixed 3D positions, can be learned from a few labeled images for each class. The 3D class model can then be used to estimate the 3D shape of an object instance, including occluded parts, from a single image. We provide a quantitative evaluation of the shape estimation process on real objects and demonstrate its usefulness in three applications: robot manipulation, object detection, and generating 3D 'pop-up' models from photos.
</description>
<pubDate>Wed, 18 Feb 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44615</guid>
<dc:date>2009-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>A Tour of MOOS-IvP Autonomy Software Modules</title>
<link>https://hdl.handle.net/1721.1/44590</link>
<description>A Tour of MOOS-IvP Autonomy Software Modules
Benjamin, Michael R.; Leonard, John J.; Schmidt, Henrik; Newman, Paul M.
This paper provides an overview of the MOOS-IvP autonomy software modules. The MOOS-IvP collection of software (i.e., codebase) described here has been developed and is currently maintained by three organizations: Oxford University, the Massachusetts Institute of Technology (MIT), and the Naval Undersea Warfare Center (NUWC) Division Newport, Rhode Island. The objective of this paper is to provide a comprehensive list of modules and, for each, (a) a general description of functionality, (b) dependency relationships to other modules, (c) a rough order of magnitude of complexity or size, (d) authorship, and (e) current and planned distribution access.
</description>
<pubDate>Fri, 13 Feb 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44590</guid>
<dc:date>2009-02-13T00:00:00Z</dc:date>
</item>
<item>
<title>SoftCast: One Video to Serve All Wireless Receivers</title>
<link>https://hdl.handle.net/1721.1/44585</link>
<description>SoftCast: One Video to Serve All Wireless Receivers
Katabi, Dina; Rahul, Hariharan; Jakubczak, Szymon
The main challenge in wireless video multicast is to scalably serve multiple receivers who have different channel characteristics. Current wireless transmission schemes, however, cannot support smooth degradation. Specifically, each packet is transmitted at a particular bitrate and is decodable only by receivers that support the chosen bitrate. Broadcasting a video stream to all receivers requires transmitting at the lowest bitrate, and hence reduces everyone to the performance of the worst receiver in the multicast group. This paper introduces SoftCast, an alternative design for wireless video multicast, in which a sender broadcasts a single stream and each receiver watches a video quality that matches its channel quality. SoftCast achieves this by making the magnitude of the transmitted signal proportional to the pixel value. Hence, channel noise directly translates to a small perturbation in pixel values, allowing graceful degradation with increasing noise. SoftCast introduces a novel power allocation scheme that allows the transmission of real-valued video signals in a compact and resilient manner. We implement SoftCast on the WARP radio platform. Our results show that SoftCast improves the average video quality across multicast receivers by 3-7dB over the current approach. Further, it stays competitive with the current approach even for regular unicast.
</description>
<pubDate>Sat, 07 Feb 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44585</guid>
<dc:date>2009-02-07T00:00:00Z</dc:date>
</item>
<item>
<title>HAMPI: A Solver for String Constraints</title>
<link>https://hdl.handle.net/1721.1/44584</link>
<description>HAMPI: A Solver for String Constraints
Ernst, Michael D.; Kiezun, Adam; Ganesh, Vijay; Guo, Philip J.; Hooimeijer, Pieter
Many automatic testing, analysis, and verification techniques for programs can be effectively reduced to a constraint-generation phase followed by a constraint-solving phase. This separation of concerns often leads to more effective and maintainable tools. The increasing efficiency of off-the-shelf constraint solvers makes this approach even more compelling. However, there are few, if any, effective and sufficiently expressive off-the-shelf solvers for the string constraints generated by analysis techniques for string-manipulating programs. We designed and implemented Hampi, a solver for string constraints over bounded string variables. Hampi constraints express membership in regular languages and bounded context-free languages. Hampi constraints may contain context-free-language definitions, regular-language definitions and operations, and the membership predicate. Given a set of constraints, Hampi outputs a string that satisfies all the constraints, or reports that the constraints are unsatisfiable. Hampi is expressive and efficient, and can be successfully applied to the testing and analysis of real programs. Our experiments use Hampi in static and dynamic analyses for finding SQL injection vulnerabilities in Web applications, and in automated bug finding in C programs using systematic testing; we also compare Hampi with another string solver. Hampi's source code, documentation, and the experimental data are available at http://people.csail.mit.edu/akiezun/hampi.
</description>
<pubDate>Wed, 04 Feb 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44584</guid>
<dc:date>2009-02-04T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Stabilizing Message Routing in Mobile ad hoc Networks</title>
<link>https://hdl.handle.net/1721.1/44516</link>
<description>Self-Stabilizing Message Routing in Mobile ad hoc Networks
Lynch, Nancy; Lahiani, Limor; Dolev, Shlomi; Nolte, Tina
We present a self-stabilizing algorithm for routing messages between arbitrary pairs of nodes in a mobile ad hoc network. Our algorithm assumes the availability of a reliable GPS service, which supplies mobile nodes with accurate information about real time and about their own geographical locations. The GPS service provides an external, shared source of consistency for mobile nodes, allowing them to label and timestamp messages, and thereby aiding in recovery from failures. Our algorithm utilizes a Virtual Infrastructure programming abstraction layer, consisting of mobile client nodes, virtual stationary timed machines called Virtual Stationary Automata (VSAs), and a local broadcast service connecting VSAs and mobile clients. VSAs are associated with predetermined regions in the plane, and are emulated in a self-stabilizing manner by the mobile nodes. VSAs are relatively stable in the face of node mobility and failure, and can be used to simplify algorithm development for mobile networks. Our routing algorithm consists of three subalgorithms: (1) a VSA-to-VSA geographical routing algorithm, (2) a mobile client location management algorithm, and (3) the main algorithm, which utilizes both location management and geographical routing. All three subalgorithms are self-stabilizing, and consequently, the entire algorithm is also self-stabilizing.
</description>
<pubDate>Wed, 28 Jan 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44516</guid>
<dc:date>2009-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>The Art of the Propagator</title>
<link>https://hdl.handle.net/1721.1/44215</link>
<description>The Art of the Propagator
Sussman, Gerald Jay; Radul, Alexey
We develop a programming model built on the idea that the basic computational elements are autonomous machines interconnected by shared cells through which they communicate. Each machine continuously examines the cells it is interested in, and adds information to some based on deductions it can make from information from the others. This model makes it easy to smoothly combine expression-oriented and constraint-based programming; it also easily accommodates implicit incremental distributed search in ordinary programs.  This work builds on the original research of Guy Lewis Steele Jr. and was developed more recently with the help of Chris Hanson.
</description>
<pubDate>Mon, 26 Jan 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/44215</guid>
<dc:date>2009-01-26T00:00:00Z</dc:date>
</item>
<item>
<title>Organic Indoor Location Discovery</title>
<link>https://hdl.handle.net/1721.1/43951</link>
<description>Organic Indoor Location Discovery
Hicks, Jamey; Curtis, Dorothy; Teller, Seth; Charrow, Ben; Ryan, Russell; Ledlie, Jonathan; Battat, Jonathan
We describe an indoor, room-level location discovery method based on spatial variations in "wifi signatures," i.e., MAC addresses and signal strengths of existing wireless access points. The principal novelty of our system is its organic nature; it builds signal strength maps from the natural mobility and lightweight contributions of ordinary users, rather than dedicated effort by a team of site surveyors. Whenever a user's personal device observes an unrecognized signature, a GUI solicits the user's location. The resulting location-tagged signature or "bind" is then shared with other clients through a common database, enabling devices subsequently arriving there to discover location with no further user contribution.  &#13;
&#13;
Realizing a working system deployment required three novel elements: (1) a human-computer interface for indicating location over intervals of varying duration; (2) a client-server protocol for pre-fetching signature data for use in localization; and (3) a location-estimation algorithm incorporating highly variable signature data. We describe an experimental deployment of our method in a nine-story building with more than 1,400 distinct spaces served by more than 200 wireless access points. At the conclusion of the deployment, users could correctly localize to within 10 meters 92 percent of the time.
</description>
<pubDate>Tue, 30 Dec 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43951</guid>
<dc:date>2008-12-30T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient Auctions of One Good in Limited Supply</title>
<link>https://hdl.handle.net/1721.1/43947</link>
<description>Resilient Auctions of One Good in Limited Supply
Micali, Silvio; Chen, Jing
We present various resilient auction mechanisms for a good in limited supply. Our mechanisms achieve both player-knowledge and aggregated player-knowledge benchmarks.
</description>
<pubDate>Wed, 17 Dec 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43947</guid>
<dc:date>2008-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient  Provision of a Public and/or Private Good,  or: Resilient Auctions of One Good in Unlimited Supply</title>
<link>https://hdl.handle.net/1721.1/43946</link>
<description>Resilient  Provision of a Public and/or Private Good,  or: Resilient Auctions of One Good in Unlimited Supply
Chen, Jing; Micali, Silvio
We present two resilient mechanisms: the first for the provision of a public good, and the second for the provision of a private good. Both mechanisms adopt a knowledge-based benchmark.
</description>
<pubDate>Tue, 02 Dec 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43946</guid>
<dc:date>2008-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient Provision of a Public Good</title>
<link>https://hdl.handle.net/1721.1/43716</link>
<description>Resilient Provision of a Public Good
Micali, Silvio; Chen, Jing
We present two resilient mechanisms for the provision of a public good.  Both mechanisms adopt a knowledge-based benchmark.
</description>
<pubDate>Tue, 02 Dec 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43716</guid>
<dc:date>2008-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient Knowledge-Based  Mechanisms  For Truly Combinatorial Auctions (And Implementation in Surviving Strategies)</title>
<link>https://hdl.handle.net/1721.1/43715</link>
<description>Resilient Knowledge-Based  Mechanisms  For Truly Combinatorial Auctions (And Implementation in Surviving Strategies)
Micali, Silvio; Chen, Jing
We put forward a new mechanism achieving a high benchmark for revenue (and for the sum of revenue and efficiency) in truly combinatorial auctions. Notably, our mechanism guarantees its performance (1) in a very adversarial collusion model; (2) for any profile of strategies surviving the iterated elimination of dominated strategies; and (3) by leveraging the knowledge that the players have about each other (in a non-Bayesian setting). Our mechanism is also computationally efficient, and preserves the players' privacy to an unusual extent.
</description>
<pubDate>Wed, 08 Oct 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43715</guid>
<dc:date>2008-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematics of the Neural Response</title>
<link>https://hdl.handle.net/1721.1/43713</link>
<description>Mathematics of the Neural Response
Caponnetto, Andrea; Poggio, Tomaso; Bouvrie, Jake; Rosasco, Lorenzo; Smale, Steve
We propose a natural image representation, the neural response, motivated by the neuroscience of the visual cortex. The inner product defined by the neural response leads to a similarity measure between functions which we call the derived kernel. Based on a hierarchical architecture, we give a recursive definition of the neural response and associated derived kernel. The derived kernel can be used in a variety of application domains such as classification of images, strings of text and genomics data.
</description>
<pubDate>Wed, 26 Nov 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43713</guid>
<dc:date>2008-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Digital Circuits for Probabilistic Inference</title>
<link>https://hdl.handle.net/1721.1/43712</link>
<description>Stochastic Digital Circuits for Probabilistic Inference
Tenenbaum, Joshua B.; Jonas, Eric M.; Mansinghka, Vikash K.
We introduce combinational stochastic logic, an abstraction that generalizes deterministic digital circuit design (based on Boolean logic gates) to the probabilistic setting. We show how this logic can be combined with techniques from contemporary digital design to generate stateless and stateful circuits for exact and approximate sampling from a range of probability distributions. We focus on Markov chain Monte Carlo algorithms for Markov random fields, using massively parallel circuits. We implement these circuits on commodity reconfigurable logic and estimate the resulting performance in time, space and price. Using our approach, these simple and general algorithms could be affordably run for thousands of iterations on models with hundreds of thousands of variables in real time.
</description>
<pubDate>Mon, 24 Nov 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43712</guid>
<dc:date>2008-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Computational Security in Long-Lived Systems, Version 2</title>
<link>https://hdl.handle.net/1721.1/43711</link>
<description>Modeling Computational Security in Long-Lived Systems, Version 2
Lynch, Nancy; Pereira, Olivier; Kaynar, Dilsun; Cheung, Ling; Canetti, Ran
For many cryptographic protocols, security relies on the assumption that adversarial entities have limited computational power. This type of security degrades progressively over the lifetime of a protocol. However, some cryptographic services, such as timestamping services or digital archives, are long-lived in nature; they are expected to be secure and operational for a very long time (i.e., super-polynomial). In such cases, security cannot be guaranteed in the traditional sense: a computationally secure protocol may become insecure if the attacker has a super-polynomial number of interactions with the protocol. This paper proposes a new paradigm for the analysis of long-lived security protocols. We allow entities to be active for a potentially unbounded amount of real time, provided they perform only a polynomial amount of work per unit of real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded. We propose a new notion of long-term implementation, which is an adaptation of computational indistinguishability to the long-lived setting. We show that long-term implementation is preserved under polynomial parallel composition and exponential sequential composition. We illustrate the use of this new paradigm by analyzing some security properties of the long-lived timestamping protocol of Haber and Kamat.
</description>
<pubDate>Sat, 22 Nov 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43711</guid>
<dc:date>2008-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient Mechanisms For Truly Combinatorial Auctions</title>
<link>https://hdl.handle.net/1721.1/43709</link>
<description>Resilient Mechanisms For Truly Combinatorial Auctions
Micali, Silvio; Valiant, Paul
Dominant-strategy truthfulness is traditionally considered the best possible solution concept in mechanism design, as it enables one to predict with confidence which strategies independent players will actually choose. Yet, as with any other form of equilibrium, it too can be extremely vulnerable to collusion. The problem of collusion is particularly evident for unrestricted combinatorial auctions, arguably the hardest type of auctions. We thus investigate how much revenue can be guaranteed, in unrestricted combinatorial auctions, by dominant-strategy-truthful mechanisms that are collusion-resilient in a very strong sense, and obtain almost matching upper and lower bounds.
</description>
<pubDate>Thu, 13 Nov 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43709</guid>
<dc:date>2008-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>MOOS-IvP Autonomy Tools Users Manual</title>
<link>https://hdl.handle.net/1721.1/43708</link>
<description>MOOS-IvP Autonomy Tools Users Manual
Benjamin, Michael R.
This document describes seven common MOOS-IvP autonomy tools. The uHelmScope application provides a run-time scoping window into the state of an active IvP Helm executing its mission. The pMarineViewer application is a geo-based GUI tool for rendering marine vehicles and certain autonomy properties in their operational area. The uXMS application is a terminal-based tool for live scoping on a MOOSDB process. The uTermCommand application is a terminal-based tool for poking the MOOSDB with a set of variable-value pairs pre-defined in a MOOS file, selectable with tab-completion of aliases from the command line. The pEchoVar application echoes an observed write to a variable with a new write of the same value to a different variable name. The uProcessWatch application monitors the presence or absence of a set of MOOS processes and summarizes their collective status in a single MOOS variable. The uPokeDB application pokes a MOOSDB from the command line with one or more variable-value pairs, without any pre-existing configuration of a MOOS file.
</description>
<pubDate>Tue, 11 Nov 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43708</guid>
<dc:date>2008-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Energy Scalability of On-Chip Interconnection Networks in Multicore Architectures</title>
<link>https://hdl.handle.net/1721.1/43707</link>
<description>Energy Scalability of On-Chip Interconnection Networks in Multicore Architectures
Agarwal, Anant; Psota, James; Eastep, Jonathan; Konstantakopoulos, Theodoros
On-chip interconnection networks (OCNs) such as point-to-point networks and buses form the communication backbone in systems-on-a-chip, multicore processors, and tiled processors. OCNs can consume significant portions of a chip's energy budget, so analyzing their energy consumption early in the design cycle becomes important for architectural design decisions. Although numerous studies have examined OCN implementation and performance, few have examined energy. This paper develops an analytical framework for energy estimation in OCNs and presents results based on both analytical models of communication patterns and real network traces from applications running on a tiled multicore processor. Our analytical framework supports arbitrary OCN topologies under arbitrary communication patterns while accounting for wire length, switch energy, and network contention. It is the first to incorporate the effects of communication locality and network contention, and use real traces extensively. This paper compares the energy of point-to-point networks against buses under varying degrees of communication locality. The results indicate that, for 16 or more processors, a one-dimensional and a two-dimensional point-to-point network provide 66% and 82% energy savings, respectively, over a bus assuming that processors communicate with equal likelihood. The energy savings increase for patterns which exhibit locality. For the two-dimensional point-to-point OCN of the Raw tiled microprocessor, contention contributes a maximum of just 23% of the OCN energy, using estimated values for channel, switch control logic, and switch queue buffer energy of 34.5pJ, 17pJ, and 12pJ, respectively. Our results show that the energy-delay product per message decreases with increasing processor message injection rate.
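The bus-versus-mesh comparison above can be illustrated with a back-of-the-envelope model (a sketch only: it counts wire length per message and ignores the switch, queue, and contention energies that the paper's calibrated framework includes; the function names and unit costs are invented for illustration):

```python
from itertools import product

def mean_bus_energy(k, wire_unit=1.0):
    # A shared bus must drive a wire that spans all k*k tiles
    # for every message, regardless of where it is going.
    return wire_unit * (k * k)

def mean_mesh_energy(k, wire_unit=1.0):
    # 2-D point-to-point mesh: energy grows with the Manhattan hop
    # count, averaged over uniformly chosen source/destination tiles.
    tiles = list(product(range(k), range(k)))
    hops = [abs(sx - dx) + abs(sy - dy)
            for (sx, sy) in tiles for (dx, dy) in tiles]
    return wire_unit * sum(hops) / len(hops)
```

For a 4x4 array this toy model already favors the mesh by a wide margin under uniform traffic, in the same direction as the paper's measured savings, and the gap grows further for traffic patterns with locality.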
</description>
<pubDate>Tue, 11 Nov 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/43707</guid>
<dc:date>2008-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Recursively invoking Linnaeus: A Taxonomy for Naming Systems</title>
<link>https://hdl.handle.net/1721.1/42898</link>
<description>Recursively invoking Linnaeus: A Taxonomy for Naming Systems
Sollins, Karen R.
Naming is a central element of distributed and network system design, and appropriate design choices are therefore critical. This paper explores a taxonomy of naming systems, and the associated engineering tradeoffs, as an aid to the namespace designer. The three orthogonal components of the taxonomy are the characteristics of the namespace itself, name assignment, and name resolution. Within each of these, we explore a number of distinct characteristics. The position of this paper is that the engineering design of naming systems should be informed by these possibilities and the tradeoffs they represent. The paper includes a review of a sampling of naming system designs that reflect different choices within the taxonomy, with discussion of why those choices were made.
</description>
<pubDate>Fri, 01 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42898</guid>
<dc:date>2002-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>One Video Stream to Serve Diverse Receivers</title>
<link>https://hdl.handle.net/1721.1/42897</link>
<description>One Video Stream to Serve Diverse Receivers
Woo, Grace; Katabi, Dina; Chachulski, Szymon
The fundamental problem of wireless video multicast is to scalably serve multiple receivers which may have very different channel characteristics. Ideally, one would like to broadcast a single stream that allows each receiver to benefit from all correctly received bits to improve its video quality. We introduce Digital Rain, a new approach to wireless video multicast that adapts to channel characteristics without any need for receiver feedback or variable codec rates. Users that capture more packets or have fewer bit errors naturally see higher video quality. Digital Rain departs from current approaches in two ways: 1) It allows a receiver to exploit video packets that may contain bit errors; 2) It builds on the theory of compressed sensing to develop robust video encoding and decoding algorithms that degrade smoothly with bit errors and packet loss. Implementation results from an indoor wireless testbed show that Digital Rain significantly improves the received video quality and the number of supported receivers.
</description>
<pubDate>Sat, 18 Oct 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42897</guid>
<dc:date>2008-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Kernel Methods Using the Balancing Principle</title>
<link>https://hdl.handle.net/1721.1/42896</link>
<description>Adaptive Kernel Methods Using the Balancing Principle
Rosasco, Lorenzo; Pereverzyev, Sergei; De Vito, Ernesto
The choice of regularization parameter is a fundamental problem in supervised learning, since the performance of most algorithms crucially depends on the choice of one or more such parameters. In particular, a main theoretical issue concerns the amount of prior knowledge about the problem needed to suitably choose the regularization parameter and obtain learning rates. In this paper we present a strategy, the balancing principle, for choosing the regularization parameter without knowledge of the regularity of the target function. Such a choice adaptively achieves the best error rate. Our main result applies to regularization algorithms in reproducing kernel Hilbert spaces with the square loss, though we also study how a similar principle can be used in other situations. As a straightforward corollary we can immediately derive adaptive parameter choices for various recently studied kernel methods. Numerical experiments with the proposed parameter choice rules are also presented.
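A loose sketch of the balancing idea (a Lepskii-type selection over a geometric grid; the variance proxy sigma(lambda) = C/sqrt(n*lambda), the constants, and the use of kernel ridge regression are assumptions for illustration, not the paper's precise rule):

```python
import numpy as np

def ridge_estimators(K, y, lambdas):
    # Kernel ridge solutions evaluated on the training inputs,
    # one estimator per candidate regularization parameter.
    n = len(y)
    return [K @ np.linalg.solve(K + lam * n * np.eye(n), y) for lam in lambdas]

def balancing_choice(fs, lambdas, n, kappa=1.0, C=1.0):
    # lambdas must be sorted in decreasing order. Keep decreasing lambda
    # while each new estimator stays within the (assumed) variance bound
    # of every estimator computed at larger lambdas; stop at the first
    # violation, where bias and variance are taken to balance.
    sigma = lambda lam: C / np.sqrt(n * lam)
    chosen = 0
    for i in range(len(lambdas)):
        ok = all(kappa * (sigma(lambdas[i]) + sigma(lambdas[j]))
                 >= np.linalg.norm(fs[i] - fs[j]) / np.sqrt(n)
                 for j in range(i))
        if not ok:
            break
        chosen = i
    return lambdas[chosen]
```

The appeal of the rule is that it never needs the unknown smoothness of the target: it only compares computable estimators against a computable variance bound.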
</description>
<pubDate>Thu, 16 Oct 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42896</guid>
<dc:date>2008-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Modular Generation and Customization</title>
<link>https://hdl.handle.net/1721.1/42895</link>
<description>Modular Generation and Customization
Edwards, Jonathan
Modularity and flexibility can conflict in multi-language systems. For example, the templates commonly used to generate web pages must be manually updated when the database schema changes. Modularity can be improved by generating web pages automatically from the database schema, but it is hard for such a generator to produce the same variety of outputs that are easily achieved by ad hoc edits to a template. Ideally, such ad hoc edits would be abstracted into transformations that compose with the generator, offering both modularity and flexibility. However, common customizations cannot be abstracted using the standard techniques of textual identifiers and ordinal positions. These difficulties are distilled into a challenge problem to evaluate potential solutions. A solution is proposed based on field trees, a new data model for software artifacts that provides persistent identifiers and unshifting positions within sequences. But using field trees with conventional programming languages and development environments requires more effort than the ad hoc editing they seek to supplant. Field trees are therefore extended into differential trees, which integrate artifacts and their transformations into a unified representation.
</description>
<pubDate>Fri, 10 Oct 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42895</guid>
<dc:date>2008-10-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Case for a Factored Operating System (fos)</title>
<link>https://hdl.handle.net/1721.1/42894</link>
<description>The Case for a Factored Operating System (fos)
Agarwal, Anant; Wentzlaff, David
The next decade will afford us computer chips with 1,000 - 10,000 cores on a single piece of silicon. Contemporary operating systems have been designed to operate on a single core or small number of cores and hence are not well suited to manage and provide operating system services at such large scale. Managing 10,000 cores is so fundamentally different from managing two cores that the traditional evolutionary approach of operating system optimization will cease to work. The fundamental design of operating systems and operating system data structures must be rethought. This work begins by documenting the scalability problems of contemporary operating systems. These studies are used to motivate the design of a factored operating system (fos). fos is a new operating system targeting 1000+ core multicore systems where space sharing replaces traditional time sharing to increase scalability. fos is built as a collection of Internet inspired services. Each operating system service is factored into a fleet of communicating servers which in aggregate implement a system service. These servers are designed much in the way that distributed Internet services are designed, but instead of providing high level Internet services, these servers provide traditional kernel services and manage traditional kernel data structures in a factored, spatially distributed manner. The servers are bound to distinct processing cores and by doing so do not fight with end user applications for implicit resources such as TLBs and caches. Also, spatial distribution of these OS services facilitates locality as many operations only need to communicate with the nearest server for a given service.
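The fleet idea can be caricatured in a few lines (a schematic only: real fos servers run on dedicated cores and communicate by messages; the toy name-service operations here are invented for illustration):

```python
class Server:
    # One member of a fleet; it owns a shard of a kernel data
    # structure (here: a toy name service mapping names to addresses).
    def __init__(self, shard_id):
        self.shard_id = shard_id
        self.table = {}
    def handle(self, msg):
        op, name, *rest = msg
        if op == "bind":
            self.table[name] = rest[0]
            return "ok"
        return self.table.get(name, "unbound")

class Fleet:
    # Requests are routed by hashing the key, so each server touches
    # only its own spatially local state: no locks shared across members,
    # mirroring fos's space sharing in place of time sharing.
    def __init__(self, n):
        self.servers = [Server(i) for i in range(n)]
    def request(self, msg):
        shard = hash(msg[1]) % len(self.servers)
        return self.servers[shard].handle(msg)
```

In fos proper each such server would be pinned to a core, so it keeps its TLB and cache working set to itself rather than competing with applications.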
</description>
<pubDate>Wed, 08 Oct 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42894</guid>
<dc:date>2008-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>New Resiliency in  Truly Combinatorial Auctions  (and Implementation in Surviving Strategies)</title>
<link>https://hdl.handle.net/1721.1/42893</link>
<description>New Resiliency in  Truly Combinatorial Auctions  (and Implementation in Surviving Strategies)
Chen, Jing; Micali, Silvio
Following Micali and Valiant [MV07.a], a mechanism is resilient if it achieves its objective without any problem of (1) equilibrium selection and (2) player collusion. To advance resilient mechanism design, we put forward: a new meaningful benchmark for the COMBINED social welfare-revenue performance of any mechanism in truly combinatorial auctions; a NEW notion of implementation, much more general than the ones used so far, which we believe to be of independent interest; and a new RESILIENT mechanism that, by leveraging the knowledge that the players have about each other, guarantees at least one half of our benchmark under a very general collusion model.
</description>
<pubDate>Wed, 08 Oct 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42893</guid>
<dc:date>2008-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>ZigZag Decoding: Combating Hidden Terminals In Wireless Networks</title>
<link>https://hdl.handle.net/1721.1/42842</link>
<description>ZigZag Decoding: Combating Hidden Terminals In Wireless Networks
Katabi, Dina; Gollakota, Shyamnath
This paper presents ZigZag, an 802.11 receiver design that combats hidden terminals. ZigZag's core contribution is a new form of interference cancellation that exploits asynchrony across successive collisions. Specifically, 802.11 retransmissions, in the case of hidden terminals, cause successive collisions. These collisions have different interference-free stretches at their start, which ZigZag exploits to bootstrap its decoding. ZigZag makes no changes to the 802.11 MAC and introduces no overhead when there are no collisions. But, when senders collide, ZigZag attains the same throughput as if the colliding packets were a priori scheduled in separate time slots. We build a prototype of ZigZag in GNU Radio. In a testbed of 14 USRP nodes, ZigZag reduces the average packet loss rate at hidden terminals from 72.6% to about 0.7%.
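The bootstrap step can be sketched at the symbol level (a toy model in which integer symbols simply add on collision; the real system works on 802.11 sample streams with channel estimation, so the function names and the noiseless channel are assumptions):

```python
import numpy as np

def collide(p1, p2, delta):
    # Toy channel: packet p2 starts delta symbols after p1 and
    # overlapping symbols simply add (equal-length packets, delta > 0).
    c = np.zeros(delta + len(p2), dtype=int)
    c[:len(p1)] += p1
    c[delta:] += p2
    return c

def zigzag_decode(c1, c2, d1, d2, n):
    # Two collisions of the same packet pair with different offsets,
    # d1 > d2. Alternate between them, each time decoding the chunk
    # whose interference is either absent or already known.
    p1, p2 = np.zeros(n, dtype=int), np.zeros(n, dtype=int)
    k1 = k2 = 0                      # decoded prefix lengths
    while n > k1 or n > k2:
        # c1[i] = p1[i] + p2[i-d1]: p1[i] is decodable while the
        # interfering p2 symbol does not exist yet or is already known.
        while n > k1 and k2 > k1 - d1:
            p1[k1] = c1[k1] - (p2[k1 - d1] if k1 >= d1 else 0)
            k1 += 1
        # c2[j+d2] = p1[j+d2] + p2[j]: the symmetric step on collision 2.
        while n > k2 and (k1 > k2 + d2 or k2 + d2 >= n):
            p2[k2] = c2[k2 + d2] - (p1[k2 + d2] if n > k2 + d2 else 0)
            k2 += 1
    return p1, p2
```

With offsets 3 and 1, each pass across the two collisions frees a few more symbols of each packet; this alternation is the zigzag of the title.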
</description>
<pubDate>Wed, 01 Oct 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42842</guid>
<dc:date>2008-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Refactoring Sequential Java Code for Concurrency via Concurrent Libraries</title>
<link>https://hdl.handle.net/1721.1/42841</link>
<description>Refactoring Sequential Java Code for Concurrency via Concurrent Libraries
Ernst, Michael D.; Marrero, John; Dig, Danny
Parallelizing existing sequential programs to run efficiently on multicores is hard. The Java 5 package java.util.concurrent (j.u.c.) supports writing concurrent programs: much of the complexity of writing thread-safe and scalable programs is hidden in the library.  To use this package, programmers still need to reengineer existing code. This is tedious because it requires changing many lines of code, is error-prone because programmers can use the wrong APIs, and is omission-prone because programmers can miss opportunities to use the enhanced APIs.  This paper presents our tool, CONCURRENCER, which enables programmers to refactor sequential code into parallel code that uses j.u.c. concurrent utilities. CONCURRENCER does not require any program annotations, although the transformations are very involved: they span multiple program statements and use custom program analysis.  A find-and-replace tool cannot perform such transformations.  Empirical evaluation shows that CONCURRENCER refactors code effectively: CONCURRENCER correctly identifies and applies transformations that some open-source developers overlooked, and the converted code exhibits good speedup.
</description>
<pubDate>Tue, 30 Sep 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42841</guid>
<dc:date>2008-09-30T00:00:00Z</dc:date>
</item>
<item>
<title>Rank Priors for Continuous Non-Linear Dimensionality Reduction</title>
<link>https://hdl.handle.net/1721.1/42840</link>
<description>Rank Priors for Continuous Non-Linear Dimensionality Reduction
Stiefelhagen, Rainer; Darrell, Trevor; Urtasun, Raquel; Geiger, Andreas
Non-linear dimensionality reduction methods are powerful techniques to deal with high-dimensional datasets. However, they often are susceptible to local minima and perform poorly when initialized far from the global optimum, even when the intrinsic dimensionality is known a priori. In this work we introduce a prior over the dimensionality of the latent space, and simultaneously optimize both the latent space and its intrinsic dimensionality. Ad-hoc initialization schemes are unnecessary with our approach; we initialize the latent space to the observation space and automatically infer the latent dimensionality using an optimization scheme that drops dimensions in a continuous fashion. We report results applying our prior to various tasks involving probabilistic non-linear dimensionality reduction, and show that our method can outperform graph-based dimensionality reduction techniques as well as previously suggested ad-hoc initialization strategies.
</description>
<pubDate>Fri, 26 Sep 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42840</guid>
<dc:date>2008-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Combinatorial Optimization with Risk</title>
<link>https://hdl.handle.net/1721.1/42837</link>
<description>Stochastic Combinatorial Optimization with Risk
Nikolova, Evdokia
We consider general combinatorial optimization problems that can be formulated as minimizing the weight w^T x of a feasible solution x over an arbitrary feasible set. For these problems we describe a broad class of corresponding stochastic problems where the weight vector w has independent random components, unknown at the time of solution. A natural and important objective which incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic weight has a small tail or a small linear combination of mean and standard deviation. Our models can be equivalently reformulated as deterministic nonconvex programs for which no efficient algorithms are known. In this paper, we make progress on these hard problems.  Our results are several efficient general-purpose approximation schemes. They use as a black box the (exact or approximate) solution to the underlying deterministic combinatorial problem and thus immediately apply to arbitrary combinatorial problems. For example, from an available δ-approximation algorithm for the deterministic problem, we construct a δ(1 + ε)-approximation algorithm that invokes the deterministic algorithm only a number of times logarithmic in the input size and polynomial in 1/ε, for any desired accuracy level ε &gt; 0. The algorithms are based on a geometric analysis of the curvature and approximability of the nonlinear level sets of the objective functions.
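The black-box reduction can be illustrated with a toy scalarization scheme (a sketch only: the explicit list of feasible (mean, variance) pairs stands in for a real combinatorial oracle such as shortest path, and the fixed multiplier grid is an invented stand-in for the paper's level-set analysis):

```python
from math import sqrt

def approx_min_mean_std(solutions, c=1.0, tmax=8.0, steps=64):
    # Approximately minimize mu(x) + c*sqrt(var(x)) by repeatedly calling
    # a deterministic linear oracle with scalarized weights mu + t*var,
    # then keeping the solution best under the true nonlinear objective.
    def oracle(t):
        # The black-box deterministic solver: here a trivial argmin over
        # an explicit feasible set of (mean, variance) pairs.
        return min(solutions, key=lambda s: s[0] + t * s[1])
    best = None
    for i in range(steps + 1):
        t = tmax * i / steps
        mu, var = oracle(t)
        obj = mu + c * sqrt(var)       # evaluate the true objective
        if best is None or best > obj:
            best = obj
    return best
```

Each oracle call is an ordinary deterministic optimization; the paper's schemes need only logarithmically many such calls, whereas this sketch just sweeps a fixed grid of multipliers.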
</description>
<pubDate>Sat, 13 Sep 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42837</guid>
<dc:date>2008-09-13T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Creation of SQL Injection and Cross-Site Scripting Attacks</title>
<link>https://hdl.handle.net/1721.1/42836</link>
<description>Automatic Creation of SQL Injection and Cross-Site Scripting Attacks
Kiezun, Adam; Guo, Philip J.; Jayaraman, Karthick; Ernst, Michael D.
We present a technique for finding security vulnerabilities in Web applications. SQL Injection (SQLI) and cross-site scripting (XSS) attacks are widespread forms of attack in which the attacker crafts the input to the application to access or modify user data and execute malicious code. In the most serious attacks (called second-order, or persistent, XSS), an attacker can corrupt a database so as to cause subsequent users to execute malicious code. This paper presents an automatic technique for creating inputs that expose SQLI and XSS vulnerabilities. The technique generates sample inputs, symbolically tracks taints through execution (including through database accesses), and mutates the inputs to produce concrete exploits. Ours is the first analysis of which we are aware that precisely addresses second-order XSS attacks. Our technique creates real attack vectors, has few false positives, incurs no runtime overhead for the deployed application, works without requiring modification of application code, and handles dynamic programming-language constructs. We implemented the technique for PHP, in a tool, Ardilla. We evaluated Ardilla on five PHP applications and found 68 previously unknown vulnerabilities (23 SQLI, 33 first-order XSS, and 12 second-order XSS).
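The taint-tracking ingredient can be miniaturized as follows (a sketch only: Ardilla works on PHP with symbolic execution and input mutation; the Tainted class, sanitize step, and sink check here are invented illustrations of dynamic taint propagation):

```python
class Tainted(str):
    # A user-controlled string; concatenation keeps the taint mark.
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def sanitize(s):
    # Hypothetical escaping step; returns an untainted plain str.
    return str(s).replace("'", "''")

def execute_sql(query):
    # Sink: refuse queries that still carry taint all the way here.
    if isinstance(query, Tainted):
        raise ValueError("potential SQL injection: tainted query at sink")
    return "executed"
```

Because Tainted subclasses str, its reflected __radd__ takes precedence when a plain literal is concatenated with tainted input, so the mark survives ordinary string building until a sanitizer strips it.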
</description>
<pubDate>Wed, 10 Sep 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42836</guid>
<dc:date>2008-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>How do programs become more concurrent? A story of program transformations</title>
<link>https://hdl.handle.net/1721.1/42832</link>
<description>How do programs become more concurrent? A story of program transformations
Dig, Danny; Marrero, John; Ernst, Michael D.
For several decades, programmers have relied on Moore's Law to improve the performance of their software applications. From now on, programmers need to program the multi-cores if they want to deliver efficient code. In the multi-core era, a major maintenance task will be to make sequential programs more concurrent. What are the most common transformations to retrofit concurrency into sequential programs? We studied the source code of 5 open-source Java projects. We analyzed qualitatively and quantitatively the change patterns that developers have used in order to retrofit concurrency. We found that these transformations belong to four categories: transformations that improve the latency, the throughput, the scalability, or the correctness of the applications. In addition, we report on our experience of parallelizing one of our own programs. Our findings can educate software developers on how to parallelize sequential programs, and can provide hints for tool vendors about what transformations are worth automating.
</description>
<pubDate>Fri, 05 Sep 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42832</guid>
<dc:date>2008-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Style Translation for Human Motion (Supplemental Material)</title>
<link>https://hdl.handle.net/1721.1/42004</link>
<description>Style Translation for Human Motion (Supplemental Material)
Hsu, Eugene; Pulli, Kari; Popovic, Jovan
Style translation is the process of transforming an input motion into a new style while preserving its original content. This problem is motivated by the needs of interactive applications, which require rapid processing of captured performances. Our solution learns to translate by analyzing differences between performances of the same content in input and output styles. It relies on a novel correspondence algorithm to align motions, and a linear time-invariant model to represent stylistic differences. Once the model is estimated with system identification, our system is capable of translating streaming input with simple linear operations at each frame.
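The per-frame translation can be sketched as a causal linear recurrence (the particular state-space form and matrix names are assumptions for illustration; the paper estimates its linear time-invariant model with system identification from aligned motion pairs):

```python
import numpy as np

class LTITranslator:
    # y[t] = A x[t] + B x[t-1] + C y[t-1]: a causal linear time-invariant
    # map from input-style pose vectors x to output-style pose vectors y,
    # costing only a few matrix multiplies per streaming frame.
    def __init__(self, A, B, C):
        self.A, self.B, self.C = A, B, C
        self.prev_x = None
        self.prev_y = None
    def step(self, x):
        if self.prev_x is None:
            y = self.A @ x             # first frame: no history yet
        else:
            y = self.A @ x + self.B @ self.prev_x + self.C @ self.prev_y
        self.prev_x, self.prev_y = x, y
        return y
```

The constant per-frame cost is what makes the approach viable for streaming input in interactive applications.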
</description>
<pubDate>Mon, 01 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42004</guid>
<dc:date>2005-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Simulation of Stylized Human Locomotion</title>
<link>https://hdl.handle.net/1721.1/42003</link>
<description>Interactive Simulation of Stylized Human Locomotion
Silva, Marco da; Popovic, Jovan; Abe, Yeuhi
Animating natural human motion in dynamic environments is difficult because of complex geometric and physical interactions. Simulation provides an automatic solution to parts of this problem, but it needs control systems to produce lifelike motions. This paper describes the systematic computation of controllers that can reproduce a range of locomotion styles in interactive simulations. Given a reference motion that describes the desired style, a derived control system can reproduce that style in simulation and in new environments. Because it produces high-quality motions that are both geometrically and physically consistent with simulated surroundings, interactive animation systems could begin to use this approach alongside more established kinematic methods.
</description>
<pubDate>Fri, 01 Aug 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/42003</guid>
<dc:date>2008-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mini-Robot Group User's Guide Part 1: The 11/45 System</title>
<link>https://hdl.handle.net/1721.1/41999</link>
<description>Mini-Robot Group User's Guide Part 1: The 11/45 System
Billmers, Meyer A.
This USER'S GUIDE is in two parts. Part 1 describes the facilities of the mini-robot group 11/45 and the software available to persons using those facilities. It is intended for those writing their own programs to be run on the 11/45 system.
A.I. Laboratory Working Papers are produced for internal circulation, and may contain information that is, for example, too preliminary or too detailed for formal publication. Although some will be given a limited external distribution, it is not intended that they should be considered papers to which reference can be made in the literature.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41999</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Message Passing Instead of the GOTO Construct</title>
<link>https://hdl.handle.net/1721.1/41998</link>
<description>Using Message Passing Instead of the GOTO Construct
Hewitt, Carl
This paper advocates a programming methodology using message passing. Efficient programs are derived for fast exponentiation, merging ordered sequences, and path existence determination in a directed graph. The problems have been proposed by John Reynolds as interesting ones to investigate because they illustrate significant issues in programming. The methodology advocated here is directed toward the production of programs that are intended to execute efficiently in a computing environment with many processors. The absence of the GOTO construct does not seem to be constricting in any respect in the development of efficient programs using the programming methodology advocated here.
This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
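Hewitt's fast-exponentiation example can be rendered in a message-passing spirit in Python, with continuations playing the role of the customers that receive results (a paraphrase of the style, not the paper's actor notation):

```python
def power(base, n, k=lambda v: v):
    # Compute base**n by repeated squaring. Control never 'jumps':
    # each step sends the remaining work back to power, and the rest
    # of the computation travels along as the continuation k.
    if n == 0:
        return k(1)
    if n % 2 == 0:
        return power(base * base, n // 2, k)
    # odd n: base**n = base * (base*base)**(n // 2)
    return power(base * base, n // 2, lambda v: k(base * v))
```

The loop structure that a GOTO-based program would express with jumps is carried entirely by which message is sent next, yet the program still takes only logarithmically many steps in n.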
</description>
<pubDate>Sat, 01 Apr 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41998</guid>
<dc:date>1978-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Detection of Bent Fingers in Lead Bonding Frames</title>
<link>https://hdl.handle.net/1721.1/41997</link>
<description>Computer Detection of Bent Fingers in Lead Bonding Frames
Mitnick, Walter L.
In the production of logic circuits in dual inline packages, various tedious assembly line tasks are performed by human operators using microscopes or television enlargements. One boring and difficult task is the detection of bent fingers in lead bonding frames to which integrated circuit chips are subsequently bonded. Bent fingers can cause stresses which may eventually lead to the failure of circuits. This paper discusses the inspection problem and presents a computerized bent finger detection method which could be adapted to free human operators from this task. More immediately, it presents a method of examining an object and determining whether or not it is in focus based solely on inspection of the object's digitized light intensity profiles.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
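The focus test on intensity profiles can be sketched with a simple sharpness score (an illustration only: the score and the 3-tap blur standing in for defocus are assumptions, not the report's exact procedure):

```python
def focus_measure(profile):
    # Sharpness score: energy of the first differences of a digitized
    # light intensity profile. Defocus smears edges, lowering the score.
    return sum((b - a) ** 2 for a, b in zip(profile, profile[1:]))

def blur(profile):
    # Crude defocus model: 3-tap moving average with edge replication.
    p = [profile[0]] + list(profile) + [profile[-1]]
    return [(p[i] + p[i + 1] + p[i + 2]) / 3 for i in range(len(profile))]
```

Comparing the score of a captured profile against the score after an artificial blur gives a reference-free indication of whether the object was in focus when imaged.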
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41997</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Fundamental Eel Equations</title>
<link>https://hdl.handle.net/1721.1/41996</link>
<description>The Fundamental Eel Equations
Horn, Berthold K.P.
Details of the kinematics, statics, and dynamics of a particularly simple form of locomotory system are developed to demonstrate the importance of understanding the behavior of the mechanical system interposed between the commands to the actuators and the generation of displacements in manipulation and locomotion systems, both natural and artificial.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Mon, 01 Dec 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41996</guid>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Intersection Problem</title>
<link>https://hdl.handle.net/1721.1/41995</link>
<description>The Intersection Problem
Fahlman, Scott E.
This paper is intended as a supplement to AI MEMO 331, "A System for Representing and Using Real-World Knowledge". It is an attempt to redefine and clarify what I now believe the central theme of the research to be. Briefly, I will present the following points:&#13;
1. The operation of set-intersection, performed upon large pre-existing sets, plays a pivotal role in the processes of intelligence.&#13;
2. Von Neumann machines intersect large sets very slowly. Attempts to avoid or speed up these intersections have obscured and distorted the other, non-intersection AI problems.&#13;
3. The parallel hardware system described in the earlier memo can be viewed as a conceptual tool for thinking about a world in which set-intersection of this sort is cheap. It thus divides many AI problems by factoring out all elements that arise solely due to set-intersection.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sat, 01 Nov 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41995</guid>
<dc:date>1975-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>One System for Two Tasks: A Commonsense Algorithm Memory that Solves Problems and Comprehends Language</title>
<link>https://hdl.handle.net/1721.1/41994</link>
<description>One System for Two Tasks: A Commonsense Algorithm Memory that Solves Problems and Comprehends Language
Rieger, Chuck
Plan synthesis and language comprehension, or more generally, the act of discovering how one perception relates to others, are two sides of the same coin, because they both rely on a knowledge of cause and effect - algorithmic knowledge about how to do things and how things work. I will describe a new theory of representation for commonsense algorithmic world knowledge, then show how this knowledge can be organized into larger memory structures, as it has been in a LISP implementation of the theory. The large-scale organization of the memory is based on structures called bypassable causal selection networks. A system of such networks serves to embed thousands of small commonsense algorithm patterns into a larger fabric which is directly usable by both a plan synthesizer and a language comprehender. Because these bypassable networks can adapt to context, so will the plan synthesizer and language comprehender. I will propose that the model is an approximation to the way humans organize and use algorithmic knowledge, and as such, that it suggests approaches not only to problem solving and language comprehension, but also to learning. I'll describe the commonsense algorithm representation, show how the system synthesizes plans using this knowledge, and trace through the process of language comprehension, illustrating how it threads its way through these algorithmic structures.
This is the edited text of the "Computers and Thought Lecture" delivered to the 4th International Conference on Artificial Intelligence, held in Tbilisi, Georgia, USSR, September 1975.&#13;
Work reported herein was conducted partly at the University of Maryland, under support of a University Research Board grant, and partly at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-75-c-0643.
</description>
<pubDate>Sat, 01 Nov 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41994</guid>
<dc:date>1975-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Solving The Findspace Problem, or How to Find Out Where Things Aren't ....</title>
<link>https://hdl.handle.net/1721.1/41993</link>
<description>On Solving The Findspace Problem, or How to Find Out Where Things Aren't ....
Pfister, Gregory F.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0006.
</description>
<pubDate>Thu, 29 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41993</guid>
<dc:date>1973-03-29T00:00:00Z</dc:date>
</item>
<item>
<title>Garbage Collection in a Very Large Address Space</title>
<link>https://hdl.handle.net/1721.1/41992</link>
<description>Garbage Collection in a Very Large Address Space
Bishop, Peter B.
The address space is broken into areas that can be garbage collected separately. An area is analogous to a file on current systems. Each process has a local computation area for its stack and temporary storage that is roughly analogous to a job core image. A mechanism is introduced for maintaining lists of inter-area links, the key to separate garbage collection. This mechanism is designed to be placed in hardware and does not create much overhead. It could be used in a practical computer system that uses the same address space for all users for the life of the system. It is necessary for the hardware to implement a reference count scheme that is adequate for handling stack frames. The hardware also facilitates implementation of protection by capabilities without the use of unique codes. This is due to elimination of dangling references. Areas can be deleted without creating dangling references.
This research was done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology and was supported by the Office of Naval Research under contract number N00014-75-C-0522.
</description>
<pubDate>Mon, 01 Sep 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41992</guid>
<dc:date>1975-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assigning Hierarchical Descriptions to Visual Assemblies of Blocks with Occlusion</title>
<link>https://hdl.handle.net/1721.1/41991</link>
<description>Assigning Hierarchical Descriptions to Visual Assemblies of Blocks with Occlusion
Dunlavey, Michael R.
This memo describes a program for parsing simple two-dimensional piles of blocks into plausible nested subassemblies. Each subassembly must be one of a few types known to the program, such as stack, tower, or arch. Each subassembly has the overall shape of a single block, allowing it to behave as part of another subassembly. Occlusion is represented by an area of the image plane whose contents cannot be seen. Heuristic aspects of the program are concerned with 1) ambiguity among competing subassemblies due to sloppiness of the placement of the blocks, 2) ambiguity due to uncertain measurements of blocks which are partially occluded, and 3) total ambiguity as to the contents of the occluded region.&#13;
Choice among competing subassemblies is accomplished by first making a topological description of the network of conflicts among subassemblies, then considering only the simplest competing subset. If this does not clearly indicate a winner, the system can make an in-depth comparison of the internal structures of the last two competing subassemblies.&#13;
Uncertainty as to measurements of blocks is handled by creation of a disjunction of more certain blocks, each of which participates in the parsing process. If this disjunction results in a pair of competing subassemblies, only one is used, the other being hidden as an alternate to the first, so that the choice of which will be accepted can be deferred. This is a deferrable choice because the alternate subassemblies are so closely similar that the parsing process does not depend on choosing one of them.&#13;
Uncertainty due to occlusion is handled by allowing a potential subassembly to use the occluded area as a "wild card", meaning that if the subassembly can be completed by creating a block which intersects the occluded area, it is so completed. Such an imaginary block may later be consolidated with a real one, or it may remain imaginary.&#13;
The reason for studying this problem is to become acquainted with the program and data structure needed to assign a nested structural description to a complicated visual assembly in which occlusion makes the data incomplete. The extension to 3-dimensional descriptions should be straightforward.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Wed, 01 Oct 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41991</guid>
<dc:date>1975-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Data Flow Computation to the Shaded Image Problem</title>
<link>https://hdl.handle.net/1721.1/41990</link>
<description>Application of Data Flow Computation to the Shaded Image Problem
Strat, Thomas M.
This paper presents a method of producing shaded images of terrain at an extremely fast rate by exploiting parallelism. The architecture of the Data Flow Computer is explained along with an appropriate "program" to compute the images. It is shown how shaded images of terrain can be computed in less than one-tenth of a second using a moderate-sized Data Flow Computer.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41990</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>What is Delaying the Manipulator Revolution?</title>
<link>https://hdl.handle.net/1721.1/41989</link>
<description>What is Delaying the Manipulator Revolution?
Horn, Berthold K.P.
Despite two decades of work on mechanical manipulators and their associated controls, we do not see wide-spread application of these devices to many of the tasks to which they seem so obviously suited. Somehow, a variety of interacting causes has conspired to prevent them from fulfilling their much talked about potential. In part, this appears to be the result of a research effort that was too small, too fragmented, and too discontinuous in time.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under ONR contract N00014-77-C-0389.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41989</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchy in Knowledge Representations</title>
<link>https://hdl.handle.net/1721.1/41988</link>
<description>Hierarchy in Knowledge Representations
Doyle, Jon
This paper discusses a number of problems faced in communicating expertise and common sense to a computer, and the approaches taken by several current knowledge representation languages towards solving these problems. The main topic discussed is hierarchy. The importance of hierarchy is almost universally recognized. Hierarchy forms the backbone of many existing representation languages. We discuss several technical problems raised in constructing hierarchical and almost-hierarchical systems, presenting them as criteria and open problems.
This research was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643.
</description>
<pubDate>Tue, 01 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41988</guid>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of a Three Degree of Freedom Kinematic Chain</title>
<link>https://hdl.handle.net/1721.1/41987</link>
<description>Dynamics of a Three Degree of Freedom Kinematic Chain
Horn, Berthold K.P.
In order to be able to design a control system for high-speed control of mechanical manipulators, it is necessary to understand properly their dynamics. Here we present an analysis of a detailed model of a three-link device which may be viewed as either a "leg" in a locomotory system, or the first three degrees of freedom of an "arm" providing for its gross motions. The equations of motion are shown to be non-trivial, yet manageable.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sat, 01 Oct 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41987</guid>
<dc:date>1977-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wumpus Protocol Analysis</title>
<link>https://hdl.handle.net/1721.1/41986</link>
<description>Wumpus Protocol Analysis
White, Barbara Y.
The goal of this research was to assist in the creation of a new, improved Wumpus advisor by taking protocols of ten people learning to play Wumpus with a human coach. It was hoped that by observing these subjects learn Wumpus from a human coach, insights would be gained into how the computer coach could be modified or extended. In particular, attention was paid to the representations subjects used, the goals they pursued, and the problems they had, as well as to the teaching methods used by the human versus the computer coach.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41986</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vision Review</title>
<link>https://hdl.handle.net/1721.1/41985</link>
<description>Vision Review
Horn, Berthold K.P.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41985</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rational Arithmetic For Mini-Computers</title>
<link>https://hdl.handle.net/1721.1/41984</link>
<description>Rational Arithmetic For Mini-Computers
Horn, Berthold K.P.
A representation for numbers using two computer words is discussed, where the value represented is the ratio of the corresponding integers. This allows for better dynamic range and relative accuracy than single-precision fixed point, yet is less costly than floating point arithmetic. The scheme is easy to implement and particularly well suited for mini-computer applications that call for a great deal of numerical computation. The techniques described have been used to implement a mathematical function subroutine package for a mini-computer as well as a number of applications programs in the machine vision and machine manipulation area.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41984</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>AMORD: A Deductive Procedure System</title>
<link>https://hdl.handle.net/1721.1/41983</link>
<description>AMORD: A Deductive Procedure System
Sussman, Gerald Jay; Steele, Guy L. Jr.; Rich, Charles; Doyle, Jon; de Kleer, Johan
We have implemented an interpreter for a rule-based system, AMORD, based on a non-chronological control structure and a system of automatically maintained data-dependencies. The purpose of this paper is tutorial. We wish to illustrate:&#13;
(1) The discipline of explicit control and dependencies,&#13;
(2) How to use AMORD, and&#13;
(3) One way to implement the mechanisms provided by AMORD.&#13;
This paper is organized into sections. The first section is a short "reference manual" describing the major features of AMORD. Next, we present some examples which illustrate the style of expression encouraged by AMORD. This style makes control information explicit in a rule-manipulable form, and depends on an understanding of the use of non-chronological justifications for program beliefs as a means for determining the current set of beliefs. The third section is a brief description of the Truth Maintenance System employed by AMORD for maintaining these justifications and program beliefs. The fourth section presents a completely annotated interpreter for AMORD, written in SCHEME.
This research was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41983</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Method, Based on Plans, for Understanding How a Loop Implements a Computation</title>
<link>https://hdl.handle.net/1721.1/41982</link>
<description>A Method, Based on Plans, for Understanding How a Loop Implements a Computation
Waters, Richard C.
The plan method analyzes the structure of a program. The plan which results from applying the method represents this structure by specifying how the parts of the program interact. This paper demonstrates the utility of the plan method by showing how a plan for a loop can be used to help prove the correctness of a loop. The plan does this by providing a convenient description of what the loop does. This paper also shows how a plan for a loop can be developed based on the code for the loop without the assistance of any commentary. This is possible primarily because most loops are built up in stereotyped ways according to a few fundamental plan types. An experiment is presented which supports the claim that a small number of plan types cover a large percentage of actual cases.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Fri, 01 Jul 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41982</guid>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A History Keeping Debugging System for PLASMA</title>
<link>https://hdl.handle.net/1721.1/41981</link>
<description>A History Keeping Debugging System for PLASMA
Morrison, Jerry Howard
PLASMA (for PLAnner-like System Modeled on Actors) is a message-passing computer language based on actor semantics. Since every event in the system is the receipt of a message actor by a target actor, a complete history of a computation can be kept by recording these events. The facility to search through and examine such a history, combined with the facility to pre-set breakpoints or stopping points, and the ability to restore side effects, provides a powerful way to debug programs written in PLASMA. The kinds of history-manipulation and breakpoint setting commands needed, and the ways they can be used, particularly on recursive programs without side effects, are presented.
Artificial Intelligence Laboratory Massachusetts Institute of Technology Working papers are informal papers intended for internal use. This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided by the Office of Naval Research under contract N00014-75-C-0522.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41981</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting topographic features from elevation data using contour lines</title>
<link>https://hdl.handle.net/1721.1/41980</link>
<description>Extracting topographic features from elevation data using contour lines
Bruss, Anna R.
This paper describes a method for finding such topographical features as ridges and valleys in a given terrain. Contour lines are used to obtain the desired result.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41980</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Theory of Animation</title>
<link>https://hdl.handle.net/1721.1/41979</link>
<description>A Computational Theory of Animation
Kahn, Kenneth M.
A system is proposed capable of generating narrative computer animation in response to a simple script. The major problem addressed is how to imbed into the system some of the knowledge that animators use when creating animation. Infinitely many animated films can fulfill a single script. The system is faced with the problem of how to make a good one by making decisions in very under-constrained situations. This paper is a total revision of AI Working Paper 119.
The author of this work is supported by an IBM Fellowship. The research described herein is being conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program.
</description>
<pubDate>Fri, 01 Apr 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41979</guid>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Semantic Component of PAL: The Personal Assistant Language Understanding Program</title>
<link>https://hdl.handle.net/1721.1/41978</link>
<description>The Semantic Component of PAL: The Personal Assistant Language Understanding Program
Bullwinkle, Candace
This paper summarizes the design and implementation of the "semantics" module of a natural language understanding system for the personal assistant domain. This module includes mappings to deep frames, noun phrase referencing and discourse analysis.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-75-C-0643.
</description>
<pubDate>Tue, 01 Mar 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41978</guid>
<dc:date>1977-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Birthday Party Frame System</title>
<link>https://hdl.handle.net/1721.1/41977</link>
<description>A Birthday Party Frame System
Clemenson, Gregory D.
This paper is an experimental investigation of the utility of the MIT-AI frames system. Using this system, a birthday party planning system was written, representing as frames the basic decisions that comprise such a plan. The planning problem is presented to the user in a way conforming to his natural planning procedures. The system is able to check the consistency of the plan parts, finally produces a completed plan for the party, and can supply the user with some valuable summaries, such as a shopping list.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41977</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>List Processing in Real Time on a Serial Computer</title>
<link>https://hdl.handle.net/1721.1/41976</link>
<description>List Processing in Real Time on a Serial Computer
Baker, Henry G. Jr.
A real-time list processing system is one in which the time required by each elementary list operation (CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical list processing systems such as LISP do not have this property because a call to CONS may invoke the garbage collector which requires time proportional to the number of accessible cells to finish. The space requirement of a classical LISP system with N accessible cells under equilibrium conditions is (1.5+μ)N or (1+μ)N, depending upon whether a stack is required for the garbage collector, where μ&gt;0 is typically less than 2.&#13;
A list processing system is presented which:&#13;
1) is real-time--i.e. T(CONS) is bounded by a constant independent of the number of cells in use;&#13;
2) requires space (2+2μ)N, i.e. not more than twice that of a classical system;&#13;
3) runs on a serial computer without a time-sharing clock;&#13;
4) handles directed cycles in the data structures;&#13;
5) is fast--the average time for each operation is about the same as with normal garbage collection;&#13;
6) compacts--minimizes the working set;&#13;
7) keeps the free pool in one contiguous block--objects of nonuniform size pose no problem;&#13;
8) uses one phase incremental collection--no separate mark, sweep, relocate phases;&#13;
9) requires no garbage collector stack;&#13;
10) requires no "mark bits", per se;&#13;
11) is simple--suitable for microcoded implementation.&#13;
Extensions of the system to handle a user program stack, compact list representation ("CDR-coding"), arrays of non-uniform size, and hash linking are discussed. CDR-coding is shown to reduce memory requirements for N LISP cells to ≈(1+μ)N. Our system is also compared with another approach to the real-time storage management problem, reference counting, and reference counting is shown to be neither competitive with our system when speed of allocation is critical, nor compatible, in the sense that a system with both forms of garbage collection is worse than our pure one.
Key Words and Phrases: real-time, compacting, garbage collection, list processing, virtual memory, file or database management, storage management, storage allocation, LISP, CDR-coding, reference counting.&#13;
CR Categories: 3.50, 3.60, 3.73, 3.80, 4.13, 4.32, 4.33, 4.35, 4.49&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</description>
<pubDate>Fri, 01 Apr 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41976</guid>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shallow Binding in LISP 1.5</title>
<link>https://hdl.handle.net/1721.1/41975</link>
<description>Shallow Binding in LISP 1.5
Baker, Henry G. Jr.
Shallow binding is a scheme which allows the value of a variable to be accessed in a bounded amount of computation. An elegant model for shallow binding in LISP 1.5 is presented in which context-switching is an environment structure transformation called "re-rooting". Re-rooting is completely general and reversible, and is optional in the sense that a LISP 1.5 interpreter will operate correctly whether or not re-rooting is invoked on every context change. Since re-rooting leaves (ASSOC X A) invariant, for all variables X and all environments A, the programmer can have access to a re-rooting primitive, (SHALLOW), which gives him dynamic control over whether accesses are shallow or deep, and which affects only the speed of execution of a program, not its semantics. So long as re-rooting is an indivisible operation, multiple processes can be active in the same environment structure. The re-rooting scheme is compared to a cache scheme for shallow binding and the two are found to be compatible. Finally, the concept of re-rooting is shown not to depend upon LISP's choice of dynamic instead of lexical binding for free variables; hence it can be used in an Algol interpreter, for example.
Key Words and Phrases: LISP 1.5, environment structures, FUNARGs, shallow and deep binding, multiprogramming, cache.&#13;
CR Categories: 4.13, 4.22, 4.32&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41975</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cryptology and Data Communications</title>
<link>https://hdl.handle.net/1721.1/41974</link>
<description>Cryptology and Data Communications
Waters, Richard C.
This paper is divided into two parts. The first part deals with cryptosystems and cryptanalysis. It surveys the basic information about cryptosystems and then addresses two specific questions. Are cryptosystems such as LUCIFER, which are based on the ideas of Feistel and Shannon, secure for all practical purposes? Is the proposed NBS standard cryptosystem secure for all practical purposes? This paper argues that the answer to the first question is "they might well be" and that the answer to the second is "no."&#13;
The second part of this paper considers how a cryptosystem can be used to provide security of data transmission in a computer environment. It discusses the two basic aspects of security: secrecy and authentication. It then describes and discusses a specific proposal by Kent of a set of protocols designed to provide security through encryption. Finally, an alternate proposal is given in order to explore some of the other design choices which could have been made.
Research reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-75-C-0643.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41974</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laws for Communicating Parallel Processes</title>
<link>https://hdl.handle.net/1721.1/41973</link>
<description>Laws for Communicating Parallel Processes
Baker, Henry; Hewitt, Carl
This paper presents some "laws" that must be satisfied by computations involving communicating parallel processes. The laws take the form of stating restrictions on the histories of computations that are physically realizable. The laws are intended to characterize aspects of parallel computations that are independent of the number of physical processors that are used in the computation.
DRAFT COPY ONLY&#13;
Working Papers are informal papers intended for internal use. This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided in part by the Office of Naval Research of the Department of Defense under contract N00014-75-C-0522.
</description>
<pubDate>Mon, 01 Nov 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41973</guid>
<dc:date>1976-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolving Parallel Programs</title>
<link>https://hdl.handle.net/1721.1/41972</link>
<description>Evolving Parallel Programs
Hewitt, Carl
Message passing is directed toward the production of programs that are intended to execute efficiently in a computing environment with a large number of processors. The paradigm attempts to address the computational issues of state change and communication directly with appropriate primitives. Efficient programs are evolved for fast factorial and path existence determination in a directed graph.&#13;
This paper is a contribution to the continuing debate on programming methodology. It advocates that simple initial implementations of programs should be constructed and then the implementations should be evolved to meet their partial specifications where it is anticipated that the partial specifications will themselves evolve with time.&#13;
The programming methodology used in this paper is intended for use with an actor machine which consists of a large number of processors connected by a high bandwidth network. We evolve implementations for factorial and for the path existence problem that execute in the logarithm of the amount of time required on a conventional machine. The implementation (with no redundant exploration) of the path existence problem evolved in this paper is more efficient than any implementation that can be programmed in a dialect of pure LISP that allows the arguments to a function to be evaluated in parallel. This is evidence that applicative programming in languages like pure LISP can be less efficient in some practical applications. The efficiency of such applicative languages is important because many computer scientists are proposing to use them on future generation parallel machines whose architectures exploit ultra large scale integration.
This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41972</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Position of the Sun</title>
<link>https://hdl.handle.net/1721.1/41971</link>
<description>The Position of the Sun
Horn, Berthold K.P.
The appearance of a surface depends dramatically on how it is illuminated. In order to interpret properly satellite and aerial imagery, it is necessary to know the position of the sun in the sky. This is particularly important if this interpretation is to be done in an automated fashion. Techniques using relatively straightforward methods are presented here for calculating the position of the sun with more than enough accuracy.&#13;
Caution: Do not use this technique for navigational purposes. Correction terms have been omitted; as a result, the ephemeris data calculated may be in error by about one minute of arc, an amount which is of no significance for the application of this data in image analysis.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-75-C-0643.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41971</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reporter: An Intelligent Noticer</title>
<link>https://hdl.handle.net/1721.1/41970</link>
<description>Reporter: An Intelligent Noticer
Rosenberg, Steven
Some researchers, notably Schank and Abelson (1975), have argued for the existence of large numbers of scripts as a representation for complex events. This paper adopts a different viewpoint. I consider complex events to have no fixed definition. Instead they are defined by a set of target components. At any given time an arbitrarily complex description which contains the target components can be generated from semantic memory. This description provides evidence for a complex event containing the target components. It can be as complex or as simple as the task demands.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. It was supported in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Tue, 15 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41970</guid>
<dc:date>1977-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>The Incremental Garbage Collection of Processes</title>
<link>https://hdl.handle.net/1721.1/41969</link>
<description>The Incremental Garbage Collection of Processes
Hewitt, Carl; Baker, Henry G. Jr.
This paper investigates some problems associated with an argument evaluation order that we call "future" order, which is different from both call-by-name and call-by-value. In call-by-future, each formal parameter of a function is bound to a separate process (called a "future") dedicated to the evaluation of the corresponding argument. This mechanism allows the fully parallel evaluation of arguments to a function, and has been shown to augment the expressive power of a language.&#13;
We discuss an approach to a problem that arises in this context: futures which were thought to be relevant when they were created become irrelevant through being ignored in the body of the expression where they were bound. The problem of irrelevant processes also appears in multiprocessing problem-solving systems which start several processors working on the same problem but with different methods, and return with the solution which finishes first. This parallel method strategy has the drawback that the processes which are investigating the losing methods must be identified, stopped, and re-assigned to more useful tasks. &#13;
The solution we propose is that of garbage collection. We propose that the goal structure of the solution plan be explicitly represented in memory as part of the graph memory (like Lisp's heap) so that a garbage collection algorithm can discover which processes are performing useful work, and which can be recycled for a new task. &#13;
An incremental algorithm for the unified garbage collection of storage and processes is described.
Key Words and Phrases: garbage collection, multiprocessing systems, processor scheduling, "lazy" evaluation, "eager" evaluation.&#13;
CR Categories: 3.60, 3.80, 4.13, 4.22, 4.32.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.&#13;
This paper was presented at the AI*PL Conference at Rochester, N.Y. in August, 1977.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41969</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plan Verification in a Programmer's Apprentice</title>
<link>https://hdl.handle.net/1721.1/41968</link>
<description>Plan Verification in a Programmer's Apprentice
Shrobe, Howard Elliot
Brief Statement of the Problem:&#13;
An interactive programming environment called the Programmer's Apprentice is described. Intended for use by the expert programmer in the process of program design and maintenance, the apprentice will be capable of understanding, explaining and reasoning about the behavior of real-world LISP programs with side effects on complex data-structures. We view programs as engineered devices whose analysis must be carried out at many levels of abstraction. This leads to a set of logical dependencies between modules which explains how and why modules interact to achieve an overall intention. Such a network of dependencies is a teleological structure which we call a plan; the process of elucidating such a plan structure and showing that it is coherent and that it achieves its overall intended behavior we call plan verification.&#13;
This approach to program verification is sharply contrasted with the traditional Floyd-Hoare systems which overly restrict themselves to surface features of the programming language. More similar in philosophy is the evolving methodology of languages like CLU or ALPHARD which stress conceptual layering.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under the Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41968</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plan Recognition in a Programmer's Apprentice</title>
<link>https://hdl.handle.net/1721.1/41967</link>
<description>Plan Recognition in a Programmer's Apprentice
Rich, Charles
Brief Statement of the Problem: &#13;
Stated most generally, the proposed research is concerned with understanding and representing the teleological structure of engineered devices. More specifically, I propose to study the teleological structure of computer programs written in LISP which perform a wide range of non-numerical computations. The major theoretical goal of the research is to further develop a formal representation for teleological structure, called plans, which will facilitate both the abstract description of particular programs, and the compilation of a library of programming expertise in the domain of non-numerical computation. Adequacy of the theory will be demonstrated by implementing a system (to eventually become part of a LISP Programmer's Apprentice) which will be able to recognize various plans in LISP programs written by human programmers and thereby generate cogent explanations of how the programs work, including the detection of some programming errors.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under the Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41967</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory of Plans for Electronic Circuits</title>
<link>https://hdl.handle.net/1721.1/41966</link>
<description>A Theory of Plans for Electronic Circuits
de Kleer, Johan
A plan for a device assigns purposes to each of the more primitive components and explains how these components interact to achieve the desired behavior of the composite device. Such an information structure is critically important in analyzing, designing or troubleshooting devices. The first goal of this research is to develop a theory of plans for electronic circuits which can be used for these purposes. The second goal is the construction of a system which can automatically recognize a plan for a circuit from a geometrical representation of the circuit's schematic diagram.&#13;
Recognition is a process which recaptures the plan the designer originally had in mind. A theory of schemata will be introduced in which recognition is viewed as the identification of an instance of a schema in the library with the particular circuit being recognized. This process is guided by topological and geometric evidence extracted from the circuit schematic. Causal reasoning, using the technique of propagation of constraints, provides further evidence. One important use of causal reasoning is the confirmation of tentative instantiations based on topological and geometric evidence alone.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Fri, 01 Apr 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41966</guid>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping Sentences to Case Frames</title>
<link>https://hdl.handle.net/1721.1/41965</link>
<description>Mapping Sentences to Case Frames
Levin, Beth
This paper describes a range of phenomena that a case frame system should be able to handle and proposes generalizations to capture this behavior which are formulated as a set of production-like rules. These rules allow the possible surface orders of cases found in English declarative sentences to be generated from a case frame. This is important for the implementation of a case frame builder described here which requires the ability to determine what cases in a case frame can appear in a grammatical role. The appendix contains a detailed survey of some English verbs which illustrates the types of mapping found in English.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-75-C-0643.
</description>
<pubDate>Tue, 01 Mar 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41965</guid>
<dc:date>1977-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on the Optimal Allocation of Spaces in MACLISP</title>
<link>https://hdl.handle.net/1721.1/41964</link>
<description>A Note on the Optimal Allocation of Spaces in MACLISP
Baker, Henry G. Jr.
This note describes a method for allocating storage among the various spaces in the MACLISP Implementation of LISP. The optimal strategy which minimizes garbage collector effort allocates free storage among the various spaces in such a way that they all run out at the same time. In an equilibrium situation, this corresponds to allocating free storage to the spaces in proportion to their usage. &#13;
Methods are investigated by which the rates of usage can be inferred, and a gc-daemon interrupt handler is developed which implements an approximately optimal strategy in MACLISP. Finally, the sensitivity of this method to rapidly varying differential rates of cell usage is discussed.
Key Words and Phrases: garbage collection, list processing, virtual memory, storage management, storage allocation, LISP.&#13;
CR Categories: 3.50, 3.60, 3.73, 3.80, 4.13, 4.22, 4.32, 4.33, 4.35, 4.49&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</description>
<pubDate>Wed, 16 Mar 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41964</guid>
<dc:date>1977-03-16T00:00:00Z</dc:date>
</item>
<item>
<title>PSUDOC - A Simple Diagnostic Program</title>
<link>https://hdl.handle.net/1721.1/41963</link>
<description>PSUDOC - A Simple Diagnostic Program
Lozano-Perez, Tomas
This paper describes PSUDOC, a very simple LISP program to carry out some medical diagnosis tasks. The program's domain is a subset of clinical medicine characterized by patients presenting with edema and/or hematuria. The program's goal is to go from the presenting symptoms to a hypothesis of the underlying disease state. The program uses a variation of simple tree searching strategies called ETS.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41963</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laws for Communicating Parallel Processes</title>
<link>https://hdl.handle.net/1721.1/41962</link>
<description>Laws for Communicating Parallel Processes
Baker, Henry; Hewitt, Carl
This paper presents some laws that must be satisfied by computations involving communicating parallel processes. The laws are stated in the context of the actor theory, a model for distributed parallel computation, and take the form of stating plausible restrictions on the histories of parallel computations to make them physically realizable. The laws are justified by appeal to physical intuition and are to be regarded as falsifiable assertions about the kinds of computations that occur in nature rather than as proven theorems in mathematics. The laws are used to analyze the mechanisms by which multiple processes can communicate to work effectively together to solve difficult problems.&#13;
Since the causal relations among the events in a parallel computation do not specify a total order on events, the actor model generalizes the notion of computation from a sequence of states to a partial order of events. The interpretation of unordered events in this partial order is that they proceed concurrently. The utility of partial orders is demonstrated by using them to express our laws for distributed computation.
Key Words and Phrases: parallel processes, parallel or asynchronous computations, partial orders of events, Actor theory.&#13;
CR Categories: 5.21, 5.24, 5.26.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</description>
<pubDate>Tue, 10 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41962</guid>
<dc:date>1977-05-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of Dependency Relationships in the Control of Reasoning</title>
<link>https://hdl.handle.net/1721.1/41961</link>
<description>The Use of Dependency Relationships in the Control of Reasoning
Doyle, Jon
Several recent problem-solving programs have indicated improved methods for controlling program actions. Some of these methods operate by analyzing the time-independent antecedent-consequent dependency relationships between the components of knowledge about the problem for solution. This paper is a revised version of a thesis proposal which indicates how a general system of automatically maintained dependency relationships can be used to effect many forms of control on reasoning in an antecedent reasoning framework.
Research reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-75-C-0643.
</description>
<pubDate>Mon, 01 Nov 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41961</guid>
<dc:date>1976-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reasoning By Analogy: A Progress Report</title>
<link>https://hdl.handle.net/1721.1/41960</link>
<description>Reasoning By Analogy: A Progress Report
Brown, Richard
Rather.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643. The views expressed are necessarily (and perhaps only) those of the author.
</description>
<pubDate>Fri, 01 Oct 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41960</guid>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Computational Theory to Psychology and Neurophysiology -- a case study from vision</title>
<link>https://hdl.handle.net/1721.1/41959</link>
<description>From Computational Theory to Psychology and Neurophysiology -- a case study from vision
Marr, David
The CNS needs to be understood at four nearly independent levels of description: (1) that at which the nature of a computation is expressed; (2) that at which the algorithms that implement a computation are characterised; (3) that at which an algorithm is committed to particular mechanisms; and (4) that at which the mechanisms are realised in hardware. In general, the nature of a computation is determined by the problem to be solved, the mechanisms that are used depend upon the available hardware, and the particular algorithms chosen depend on the problem and on the available mechanisms. Examples are given of theories at each level from current research in vision, and a brief review of the immediate prospects for the field is given.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sun, 01 Aug 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41959</guid>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discourse Structure</title>
<link>https://hdl.handle.net/1721.1/41958</link>
<description>Discourse Structure
Rosenberg, Steven T.
An essential step in understanding connected discourse is the ability to link the meanings of successive sentences together. Given a growing database to which new sentence meanings must be linked, which out of many possible inference chains will succeed? To which items already in a database is a new item relevant? To assure easy understandability of text, the amount of processing time spent on unsuccessful linkage attempts must be reduced. This paper develops a preliminary theory of discourse structure. Several newspaper articles were examined in the light of this theory. Two examples were worked out in detail to explore how a hypothetical discourse understander might use the model of discourse structure to represent knowledge gained from processing text.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. It was supported in part by the National Science Foundation under grant C40708X and in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.&#13;
The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the National Science Foundation or the United States Government.
</description>
<pubDate>Tue, 17 Aug 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41958</guid>
<dc:date>1976-08-17T00:00:00Z</dc:date>
</item>
<item>
<title>Digital Control of a Six-Axis Manipulator</title>
<link>https://hdl.handle.net/1721.1/41957</link>
<description>Digital Control of a Six-Axis Manipulator
Blanchard, David C.
This paper describes a scheme for providing low-level control of a multi-link serial manipulator. The goal was to achieve adaptive behavior without making assumptions about the environment.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sun, 01 Aug 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41957</guid>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Representation and Use of Semantic Categories: A Survey and Prospectus</title>
<link>https://hdl.handle.net/1721.1/41956</link>
<description>On the Representation and Use of Semantic Categories: A Survey and Prospectus
Schatz, Bruce R.
This paper is intended as a brief introduction to several issues concerning semantic categories. These are the everyday, factual groupings of world knowledge according to some similarity in characteristics. Some psychological data concerning the structure, formation, and use of categories is surveyed. Then several psychological models (set-theoretic and network) are considered. Various artificial intelligence representations (concerning the symbol mapping and recognition problems) dealing with similar issues are also reviewed. It is argued that these data and representations approach semantic categories at too abstract a level, and a set of guidelines which may be helpful in constructing a microworld is given.
This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-75-C-0643.
</description>
<pubDate>Sat, 01 May 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41956</guid>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hand Eye Coordination</title>
<link>https://hdl.handle.net/1721.1/41955</link>
<description>Hand Eye Coordination
Speckert, Glen
This paper describes a simple method of converting visual coordinates to arm coordinates which does not require knowledge of the position of the camera(s). Comparisons are made to other methods, and two-camera, three-dimensional extensions are discussed. The single camera method for converting points on a tabletop is used by Marc Raibert and Glen Speckert in a working hand-eye system which recognizes objects and picks them up under visual guidance. This was implemented on the MIT Micro-Automation PDP 11/45 using a low speed vidicon and a Scheinman arm.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Thu, 01 Jul 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41955</guid>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two Simple Algorithms For Displaying Orthographic Projections of Surfaces</title>
<link>https://hdl.handle.net/1721.1/41954</link>
<description>Two Simple Algorithms For Displaying Orthographic Projections of Surfaces
Woodham, Robert J.
Two simple algorithms are described for displaying orthographic projections of surfaces. The first, called RELIEF-PLOT, produces a three-dimensional plot of a surface z = f(x,y). The second, called SHADED-IMAGE, adds information about surface reflectivity and source illumination to produce a grey level image of a surface z = f(x,y).&#13;
Both algorithms demonstrate how a systematic profile expansion can be used to do hidden surface elimination essentially for free.
Work reported herein was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract number N00014-75-C-0643.
</description>
<pubDate>Sun, 01 Aug 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41954</guid>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structured Planning and Debugging: A Linguistic Approach to Problem Solving</title>
<link>https://hdl.handle.net/1721.1/41953</link>
<description>Structured Planning and Debugging: A Linguistic Approach to Problem Solving
Miller, Mark L.; Goldstein, Ira P.
A structured approach to planning and debugging is obtained by using an Augmented Transition Network (ATN) to model the problem solving process. This proves to be a perspicuous representation for planning concepts including techniques of identification, decomposition and reformulation. It also provides an elegant theory of debugging, in which bugs are identified as errors in transitions between states in the ATN. Examples from the Blocks World and elementary graphics programming problems are used to illustrate the theory.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. It was supported in part by the National Science Foundation under grant C40708X and in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643. &#13;
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the National Science Foundation or the United States Government.
</description>
<pubDate>Tue, 08 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41953</guid>
<dc:date>1976-06-08T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic-Evaluation as an Aid to Program Synthesis</title>
<link>https://hdl.handle.net/1721.1/41952</link>
<description>Symbolic-Evaluation as an Aid to Program Synthesis
Yonezawa, Akinori
Symbolic-evaluation is the process which abstractly evaluates an actor program and checks to see whether the program fulfills its contract (specification). In this paper, a formalism based on the conceptual representation is proposed as a specification language and a proof system for programs which may include change of behavior (side-effects). The relation between algebraic specifications and the specifications based on the conceptual representation is discussed and the limitation of the current algebraic specifications is pointed out. The proposed formalism can deal with problems of side-effects which have been beyond the scope of Floyd-Hoare proof rules. Symbolic-evaluation is carried out with explicit use of the notion of situation (local state of an actor system). Uses of situational tags in assertions make it possible to state relations holding between objects in different situations. As an illustrative example, an impure actor which behaves like a queue is extensively examined. The verification of a procedure which deals with the queue-actors and the correctness of its implementations are demonstrated by the symbolic-evaluation. Furthermore, how the symbolic-evaluation serves as an aid to program synthesis is illustrated using two different implementations of the queue-actor.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.
</description>
<pubDate>Thu, 01 Apr 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41952</guid>
<dc:date>1976-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>CGOL - an Alternative External Representation For LISP users</title>
<link>https://hdl.handle.net/1721.1/41951</link>
<description>CGOL - an Alternative External Representation For LISP users
Pratt, Vaughan R.
Advantages of the standard external representation of LISP include its simple definition, its economical implementation and its convenient extensibility. These advantages have been gained by trading off syntactic variety for the rigidity of parenthesized prefix notation. This paper describes an approach to increasing the available notational variety in LISP without compromising the above advantages of the standard notation. A primary advantage of the availability of such variety is the extent to which documentation can be incorporated into the code itself, decreasing the chance of mismatches between code and documentation. The approach differs from that of MLISP, which attempts to be a self-contained language rather than a notation available immediately on demand to the ordinary LISP user. A striking feature of a MACLISP implementation of this approach, the CGOL notation, is that any LISP user, at any time, without any prior preparation, and without significant compromise of storage or speed, can in mid-stream change to the CGOL notation merely by typing (CGOL) at the LISP he is presently using, even if he has already loaded and begun running his LISP program. Another striking feature is the possibility of notational transparency; a LISP user may ask LISP to read a file without needing to know the notation(s) used within that file.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Mon, 01 Mar 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41951</guid>
<dc:date>1976-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Actor-Based Computer Animation Language</title>
<link>https://hdl.handle.net/1721.1/41950</link>
<description>An Actor-Based Computer Animation Language
Kahn, Kenneth M.
This paper reproduces an appendix of a doctoral thesis proposal that describes a language based on actor semantics designed especially for animation. The system described herein is built upon MacLisp and is also compatible with Lisp-Logo. The system was implemented to serve two functions: to provide a base system for the knowledge-based animation system which is described in Working Paper 119 (or Logo WP 47) and to experiment with various extensions of Logo to improve its value as an educational tool.
This work was supported in part by the National Science Foundation under grant number GJ-1049 and conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program. Reproduction of this document in whole or in part is permitted for any purpose of the United States Government.
</description>
<pubDate>Sun, 01 Feb 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41950</guid>
<dc:date>1976-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Knowledge-Based Computer Animation System</title>
<link>https://hdl.handle.net/1721.1/41949</link>
<description>A Knowledge-Based Computer Animation System
Kahn, Kenneth M.
This paper reproduces part of a doctoral thesis proposal describing the design of a system capable of generating animated drawings in response to a simple story. The representation and interaction of the various sources of the knowledge necessary to accomplish this are discussed. The appropriateness of an actor formalism for representing the concurrent processes and knowledge of the system is touched upon here and discussed further in Working Paper 120 (or Logo WP 48) "An Actor-Based Computer Animation Language". Finally, the role of the system as an example of a visible intelligent system in education is discussed.
This work was supported in part by the National Science Foundation under grant number GJ-1049 and conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program. Reproduction of this document in whole or in part is permitted for any purpose of the United States Government.
</description>
<pubDate>Sun, 01 Feb 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41949</guid>
<dc:date>1976-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge Driven Recognition of the Human Body</title>
<link>https://hdl.handle.net/1721.1/41948</link>
<description>Knowledge Driven Recognition of the Human Body
Speckert, Glen
This paper shows how a good internal model of the subject viewed aids in the visual recognition and following of key parts. The role of knowledge driven top-down tools and methods is shown by recognizing a series of human figures drawn from Eadweard Muybridge's collection of 1887. Knowledge of the subject's structure and actions is used to find the head, shoulder, elbow, hip, knees, and ankles of the subject.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41948</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mini-Robot Group User's Guide Part 2: Access From ITS</title>
<link>https://hdl.handle.net/1721.1/41947</link>
<description>Mini-Robot Group User's Guide Part 2: Access From ITS
Billmers, Meyer A.
Part 2 of the MINI-ROBOT USER'S GUIDE describes those devices attached to the mini-robot system which may be accessed from ITS, and describes the appropriate software for accessing them. Specifically, the photowriter, photoscanner, vidicon, and Scheinman arm are documented.
A.I. Laboratory Working Papers are produced for internal circulation, and may contain information that is, for example, too preliminary or too detailed for formal publication. Although some will be given a limited external distribution, it is not intended that they should be considered papers to which reference can be made in the literature.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41947</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guided Time Warping for Motion Editing</title>
<link>https://hdl.handle.net/1721.1/41946</link>
<description>Guided Time Warping for Motion Editing
Hsu, Eugene; Silva, Marco da; Popovic, Jovan
Time warping allows users to modify timing without affecting poses. It has many applications in animation systems for motion editing, such as refining motions to meet new timing constraints or modifying the acting of animated characters. However, time warping typically requires many manual adjustments to achieve the desired results. We present a technique which simplifies this process by allowing time warps to be guided by a provided reference motion. Given few timing constraints, it computes a warp that both satisfies these constraints and maximizes local timing similarities to the reference. The algorithm is fast enough to incorporate into standard animation workflows. We apply the technique to two common tasks: preserving the natural timing of motions under new time constraints and modifying the timing of motions for stylistic effects.
</description>
<pubDate>Wed, 01 Aug 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41946</guid>
<dc:date>2007-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Style Translation for Human Motion</title>
<link>https://hdl.handle.net/1721.1/41945</link>
<description>Style Translation for Human Motion
Hsu, Eugene; Pulli, Kari; Popovic, Jovan
Style translation is the process of transforming an input motion into a new style while preserving its original content. This problem is motivated by the needs of interactive applications, which require rapid processing of captured performances. Our solution learns to translate by analyzing differences between performances of the same content in input and output styles. It relies on a novel correspondence algorithm to align motions, and a linear time-invariant model to represent stylistic differences. Once the model is estimated with system identification, our system is capable of translating streaming input with simple linear operations at each frame.
</description>
<pubDate>Mon, 01 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41945</guid>
<dc:date>2005-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Example-Based Control of Human Motion</title>
<link>https://hdl.handle.net/1721.1/41944</link>
<description>Example-Based Control of Human Motion
Hsu, Eugene; Gentry, Sommer; Popovic, Jovan
In human motion control applications, the mapping between a control specification and an appropriate target motion often defies an explicit encoding. We present a method that allows such a mapping to be defined by example, given that the control specification is recorded motion. Our method begins by building a database of semantically meaningful instances of the mapping, each of which is represented by synchronized segments of control and target motion. A dynamic programming algorithm can then be used to interpret an input control specification in terms of mapping instances. This interpretation induces a sequence of target segments from the database, which is concatenated to create the appropriate target motion. We evaluate our method on two examples of indirect control. In the first, we synthesize a walking human character that follows a sampled trajectory. In the second, we generate a synthetic partner for a dancer whose motion is acquired through motion capture.
</description>
<pubDate>Thu, 01 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41944</guid>
<dc:date>2004-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on Perturbation Results for Learning Empirical Operators</title>
<link>https://hdl.handle.net/1721.1/41940</link>
<description>A Note on Perturbation Results for Learning Empirical Operators
De Vito, Ernesto; Belkin, Mikhail; Rosasco, Lorenzo
A large number of learning algorithms (for example, spectral clustering, kernel Principal Components Analysis, and many manifold methods) are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. For the analysis of such algorithms, it is therefore an important problem to be able to assess the quality of such approximations. The contribution of our paper is two-fold:  1. We use a technique based on a concentration inequality for Hilbert spaces to provide new, much simplified proofs for a number of results in spectral approximation.  2. Using these methods we provide several new results for estimating spectral properties of the graph Laplacian operator, extending and strengthening results from [26].
</description>
<pubDate>Tue, 19 Aug 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41940</guid>
<dc:date>2008-08-19T00:00:00Z</dc:date>
</item>
<item>
<title>A Stored Picture Hacking Facility</title>
<link>https://hdl.handle.net/1721.1/41939</link>
<description>A Stored Picture Hacking Facility
Markowitz, Sidney
A short description of LISP functions that have been written for use with the stored picture facility. These functions allow one to display an image of a stored scene on the 340 scope, and produce graphs and histograms of intensity functions of portions of the scene.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41939</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transductive Ranking on Graphs</title>
<link>https://hdl.handle.net/1721.1/41938</link>
<description>Transductive Ranking on Graphs
Agarwal, Shivani
In ranking, one is given examples of order relationships among objects, and the goal is to learn from these examples a real-valued ranking function that induces a ranking or ordering over the object space. We consider the problem of learning such a ranking function in a transductive, graph-based setting, where the object space is finite and is represented as a graph in which vertices correspond to objects and edges encode similarities between objects. Building on recent developments in regularization theory for graphs and corresponding Laplacian-based learning methods, we develop an algorithmic framework for learning ranking functions on graphs. We derive generalization bounds for our algorithms in transductive models similar to those used to study other transductive learning problems, and give experimental evidence of the potential benefits of our framework.
</description>
<pubDate>Thu, 07 Aug 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41938</guid>
<dc:date>2008-08-07T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Envelope MDPs for Relational Equivalence-based Planning</title>
<link>https://hdl.handle.net/1721.1/41920</link>
<description>Adaptive Envelope MDPs for Relational Equivalence-based Planning
Gardiol, Natalia H.; Kaelbling, Leslie Pack
We describe a method to use structured representations of the environment's dynamics to constrain and speed up the planning process. Given a problem domain described in a probabilistic logical description language, we develop an anytime technique that incrementally improves on an initial, partial policy. This partial solution is found by first reducing the number of predicates needed to represent a relaxed version of the problem to a minimum, and then dynamically partitioning the action space into a set of equivalence classes with respect to this minimal representation. Our approach uses the envelope MDP framework, which creates a Markov decision process out of a subset of the full state space as determined by the initial partial solution. This strategy permits an agent to begin acting within a restricted part of the full state space and to expand its envelope judiciously as resources permit.
</description>
<pubDate>Tue, 29 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41920</guid>
<dc:date>2008-07-29T00:00:00Z</dc:date>
</item>
<item>
<title>Cognitive Cliches</title>
<link>https://hdl.handle.net/1721.1/41893</link>
<description>Cognitive Cliches
Chapman, David
This paper is an exploration of a wide class of mental structures called cognitive cliches that support intermediate methods: methods that are moderately general purpose, in that a few of them will probably be applicable to any given task, and efficient, but not individually particularly powerful. These structures are useful in representation, learning, and reasoning of various sorts. Together they form a general theory of special cases.&#13;
A cognitive cliche is a pattern that is commonly found in representations and, when recognized, can be exploited by applying the intermediate methods attached to it. The flavor of the idea is perhaps best conveyed by some examples: TRANSITIVITY, CROSS PRODUCTS, SUCCESSIVE APPROXIMATION, CONTAINMENT, ENABLEMENT, PATHS, RESOURCES, and PROPAGATION are all cognitive cliches.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41893</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision</title>
<link>https://hdl.handle.net/1721.1/41892</link>
<description>Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision
Levin, Anat; Freeman, William; Durand, Fredo
Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.
</description>
<pubDate>Mon, 28 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41892</guid>
<dc:date>2008-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Event Order Abstraction for Parametric Real-Time System Verification</title>
<link>https://hdl.handle.net/1721.1/41891</link>
<description>Event Order Abstraction for Parametric Real-Time System Verification
Umeno, Shinya
We present a new abstraction technique, event order abstraction (EOA), for parametric safety verification of real-time systems in which "correct orderings of events" needed for system correctness are maintained by timing constraints on the systems' behavior. By using EOA, one can separate the task of verifying a real-time system into two parts: 1. Safety property verification of the system given that only correct event orderings occur; and 2. Derivation of timing parameter constraints for correct orderings of events in the system. The user first identifies a candidate set of bad event orders. Then, by using ordinary untimed model-checking, the user examines whether a discretized system model in which all timing constraints are abstracted away satisfies a desirable safety property under the assumption that the identified bad event orders occur in no system execution. The user uses counterexamples obtained from the model-checker to identify additional bad event orders, and repeats the process until the model-checking succeeds. In this step, the user obtains a sufficient set of bad event orders that must be excluded by timing synthesis for system correctness. Next, the algorithm presented in the paper automatically derives a set of timing parameter constraints under which the system does not exhibit the identified bad event orderings. From this step combined with the untimed model-checking step, the user obtains a sufficient set of timing parameter constraints under which the system executes correctly with respect to a given safety property. We illustrate the use of EOA with a train-gate example inspired by the general railroad crossing problem. We also summarize three other case studies: a biphase mark protocol, the IEEE 1394 root contention protocol, and the Fischer mutual exclusion algorithm.
</description>
<pubDate>Mon, 28 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41891</guid>
<dc:date>2008-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>An $\Omega(n \log n)$ Lower Bound on the Cost of Mutual Exclusion</title>
<link>https://hdl.handle.net/1721.1/41890</link>
<description>An $\Omega(n \log n)$ Lower Bound on the Cost of Mutual Exclusion
Fan, Rui; Lynch, Nancy
We prove an $\Omega(n \log n)$ lower bound on the number of non-busywaiting memory accesses by any deterministic algorithm solving $n$-process mutual exclusion that communicates via shared registers. The cost of the algorithm is measured in the \emph{state change} cost model, a variation of the cache coherent model. Our bound is tight in this model. We introduce a novel information theoretic proof technique. We first establish a lower bound on the information needed by processes to solve mutual exclusion. Then we relate the amount of information processes can acquire through shared memory accesses to the cost they incur. We believe our proof technique is flexible and intuitive, and may be applied to a variety of other problems and system models.
</description>
<pubDate>Sun, 23 Jul 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41890</guid>
<dc:date>2006-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Elastic-Net Regularization in Learning Theory</title>
<link>https://hdl.handle.net/1721.1/41889</link>
<description>Elastic-Net Regularization in Learning Theory
De Mol, Christine; Rosasco, Lorenzo; De Vito, Ernesto
Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie ["Regularization and variable selection via the elastic net" J. R. Stat. Soc. Ser. B, 67(2):301-320, 2005] for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression where we allow the response variable to be vector-valued and we consider prediction functions which are linear combinations of elements (features) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular "elastic-net representation" of the regression function such that, as the number of data increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which is different from the optimization procedure originally proposed in "Regularization and variable selection via the elastic net".
</description>
<pubDate>Thu, 24 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41889</guid>
<dc:date>2008-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>A Projected Subgradient Method for Scalable Multi-Task Learning</title>
<link>https://hdl.handle.net/1721.1/41888</link>
<description>A Projected Subgradient Method for Scalable Multi-Task Learning
Quattoni, Ariadna; Carreras, Xavier; Collins, Michael; Darrell, Trevor
Recent approaches to multi-task learning have investigated the use of a variety of matrix norm regularization schemes for promoting feature sharing across tasks. In essence, these approaches aim at extending the l1 framework for sparse single task approximation to the multi-task setting. In this paper we focus on the computational complexity of training a jointly regularized model and propose an optimization algorithm whose complexity is linear in the number of training examples and O(n log n) in n, the number of parameters of the joint model. Our algorithm is based on casting jointly regularized loss minimization as a convex constrained optimization problem, for which we develop an efficient projected gradient algorithm. The main contribution of this paper is the derivation of a gradient projection method with l1,∞ constraints that can be performed efficiently and which has provable convergence rates.
</description>
<pubDate>Wed, 23 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41888</guid>
<dc:date>2008-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Composable Probabilistic Inference with Blaise</title>
<link>https://hdl.handle.net/1721.1/41887</link>
<description>Composable Probabilistic Inference with Blaise
Bonawitz, Keith A
Probabilistic inference provides a unified, systematic framework for specifying and solving a broad class of problems. Recent work has demonstrated the great value of probabilistic models defined over complex, structured domains. However, our ability to imagine probabilistic models has far outstripped our ability to programmatically manipulate them and to effectively implement inference, limiting the complexity of the problems that we can solve in practice. This thesis presents Blaise, a novel framework for composable probabilistic modeling and inference, designed to address these limitations. Blaise has three components: * The Blaise State-Density-Kernel (SDK) graphical modeling language that generalizes factor graphs by: (1) explicitly representing inference algorithms (and their locality) using a new type of graph node, (2) representing hierarchical composition and repeated substructures in the state space, the interest distribution, and the inference procedure, and (3) permitting the structure of the model to change during algorithm execution. * A suite of SDK graph transformations that may be used to extend a model (e.g. to construct a mixture model from a model of a mixture component), or to make inference more effective (e.g. by automatically constructing a parallel tempered version of an algorithm or by exploiting conjugacy in a model). * The Blaise Virtual Machine, a runtime environment that can efficiently execute the stochastic automata represented by Blaise SDK graphs. Blaise encourages the construction of sophisticated models by composing simpler models, allowing the designer to implement and verify small portions of the model and inference method, and to reuse model components from one task to another. Blaise decouples the implementation of the inference algorithm from the specification of the interest distribution, even in cases (such as Gibbs sampling) where the shape of the interest distribution guides the inference.
This gives modelers the freedom to explore alternate models without slow, error-prone reimplementation. The compositional nature of Blaise enables novel reinterpretations of advanced Monte Carlo inference techniques (such as parallel tempering) as simple transformations of Blaise SDK graphs. In this thesis, I describe each of the components of the Blaise modeling framework, and validate the framework by highlighting a variety of contemporary sophisticated models that have been developed by the Blaise user community. I also present several surprising findings stemming from the Blaise modeling framework, including that an Infinite Relational Model can be built using exactly the same inference methods as a simple mixture model, that constructing a parallel tempered inference algorithm should be a point-and-click/one-line-of-code operation, and that Markov chain Monte Carlo for probabilistic models with complicated long-distance dependencies, such as a stochastic version of Scheme, can be managed using standard Blaise mechanisms.
</description>
<pubDate>Wed, 23 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41887</guid>
<dc:date>2008-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>A Distributed Building Evacuation System</title>
<link>https://hdl.handle.net/1721.1/41879</link>
<description>A Distributed Building Evacuation System
Qumsiyeh, Dany M.
This thesis investigates the feasibility of a smart building evacuation system, capable of guiding occupants along safe paths to exits and responding to changing threats. Inspired by developments in amorphous computing, the design presented is scalable to large networks, robust to hardware and communication failure, and based on simple low-cost components. A simulation and hardware prototype demonstrate that this distributed building evacuation system is both feasible and cost effective.
</description>
<pubDate>Mon, 14 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41879</guid>
<dc:date>2008-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge Benchmarks in Adversarial Mechanism Design (Part I) and Implementation in Surviving Strategies (Part I)</title>
<link>https://hdl.handle.net/1721.1/41878</link>
<description>Knowledge Benchmarks in Adversarial Mechanism Design (Part I) and Implementation in Surviving Strategies (Part I)
Chen, Jing; Micali, Silvio
We put forward new benchmarks and solution concepts for Adversarial Mechanism Design, as defined by [MV07.a], and we exemplify them in the case of truly combinatorial auctions. We benchmark the combined performance (the sum of the auction's efficiency and revenue) of a truly combinatorial auction against a very relevant but private knowledge of the players: essentially, the maximum revenue that the best informed player could guarantee if he were the seller. (I.e., by offering each other player a subset of the goods for a take-it-or-leave-it price.) We achieve this natural benchmark within a factor of 2, by means of a new and probabilistic auction mechanism, in KNOWINGLY SURVIVING STRATEGIES. That is, the above performance of our mechanism is guaranteed in any rational play, independent of any possible beliefs of the players. Indeed, our performance guarantee holds for any possible choice of strategies, so long as each player chooses a strategy among those surviving iterated elimination of knowingly dominated strategies. Our mechanism is extremely robust. Namely, its performance guarantees hold even if all but one of the players collude (together or in separate groups) in any possible but reasonable way. Essentially, the only restriction for the collective utility function of a collusive subset S of the players is the following: the collective utility increases when one member of S is allocated a subset of the goods "individually better" for him and/or his "individual price" is smaller, while the allocations and prices of all other members of S stay the same. Our results improve on the yet unpublished ones of [MV07.b]. The second part of this paper, dealing with a more aggressive benchmark (essentially, the maximum welfare privately known to the players), is forthcoming.
</description>
<pubDate>Tue, 01 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41878</guid>
<dc:date>2008-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge Benchmarks in Adversarial Mechanism Design and Implementation in Surviving Strategies (Part I)</title>
<link>https://hdl.handle.net/1721.1/41877</link>
<description>Knowledge Benchmarks in Adversarial Mechanism Design and Implementation in Surviving Strategies (Part I)
Chen, Jing; Micali, Silvio
We put forward new benchmarks and solution concepts for Adversarial Mechanism Design, as defined by [MV07.a], and we exemplify them in the case of truly combinatorial auctions. We benchmark the combined performance (the sum of the auction's efficiency and revenue) of a truly combinatorial auction against a very relevant but private knowledge of the players: essentially, the maximum revenue that the best informed player could guarantee if he were the seller. (I.e., by offering each other player a subset of the goods for a take-it-or-leave-it price.) We achieve this natural benchmark within a factor of 2, by means of a new and probabilistic auction mechanism, in surviving strategies. That is, the above performance of our mechanism is guaranteed in any rational play, independent of any possible beliefs of the players. Indeed, our performance guarantee holds for any possible choice of strategies, so long as each player chooses a strategy among those surviving iterated elimination of dominated strategies. Our mechanism is extremely robust. Namely, its performance guarantees hold even if all but one of the players collude (together or in separate groups) in any possible but reasonable way. Essentially, the only restriction for the collective utility function of a collusive subset S of the players is the following: the collective utility increases when one member of S is allocated a subset of the goods "individually better" for him and/or his "individual price" is smaller, while the allocations and prices of all other members of S stay the same. Our results improve on the yet unpublished ones of [MV07.b]. The second part of this paper, dealing with a more aggressive benchmark (essentially, the maximum welfare privately known to the players), is forthcoming.
</description>
<pubDate>Sun, 01 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41877</guid>
<dc:date>2008-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Player Knowledge in Combinatorial Auctions (and Implementation in Surviving Strategies)</title>
<link>https://hdl.handle.net/1721.1/41875</link>
<description>Leveraging Player Knowledge in Combinatorial Auctions (and Implementation in Surviving Strategies)
Chen, Jing; Micali, Silvio
None
</description>
<pubDate>Tue, 17 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41875</guid>
<dc:date>2008-06-17T00:00:00Z</dc:date>
</item>
<item>
<title>Flexible MIPS Soft Processor Architecture</title>
<link>https://hdl.handle.net/1721.1/41874</link>
<description>Flexible MIPS Soft Processor Architecture
Carli, Roberto
The flexible MIPS soft processor architecture borrows selected technologies from high-performance computing to deliver a modular, highly customizable CPU targeted towards FPGA implementations for embedded systems; the objective is to provide a more flexible architectural alternative to coprocessor-based solutions. The processor performs out-of-order execution on parallel functional units, it delivers in-order instruction commit and it is compatible with the MIPS-1 Instruction Set Architecture. Amongst many available options, the user can introduce custom instructions and matching functional units; modify existing units; change the pipelining depth within functional units to any fixed or variable value; customize instruction definitions in terms of operands, control signals and register file interaction; insert multiple redundant functional units for improved performance. The flexibility provided by the architecture allows the user to expand the processor functionality to implement instructions of coprocessor-level complexity through additional functional units. The processor design was implemented and simulated on two FPGA platforms, tested on multiple applications, and compared to three commercially available soft processor solutions in terms of features, area, clock frequency and benchmark performance.
</description>
<pubDate>Mon, 16 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41874</guid>
<dc:date>2008-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting and Tolerating Byzantine Faults in Database Systems</title>
<link>https://hdl.handle.net/1721.1/41873</link>
<description>Detecting and Tolerating Byzantine Faults in Database Systems
Vandiver, Benjamin Mead
This thesis describes the design, implementation, and evaluation of a replication scheme to handle Byzantine faults in transaction processing database systems. The scheme compares answers from queries and updates on multiple replicas, which are off-the-shelf database systems, to provide a single database that is Byzantine fault tolerant. The scheme works when the replicas are homogeneous, but it also allows heterogeneous replication in which replicas come from different vendors. Heterogeneous replicas reduce the impact of bugs and security compromises because they are implemented independently and are thus less likely to suffer correlated failures. A final component of the scheme is a repair mechanism that can correct the state of a faulty replica, ensuring the longevity of the scheme. The main challenge in designing a replication scheme for transaction processing systems is ensuring that the replicas' state does not diverge while allowing a high degree of concurrency. We have developed two novel concurrency control protocols, commit barrier scheduling (CBS) and snapshot epoch scheduling (SES), that provide strong consistency and good performance. The two protocols provide different types of consistency: CBS provides single-copy serializability and SES provides single-copy snapshot isolation. We have implemented both protocols in the context of a replicated SQL database. Our implementation has been tested with production versions of several commercial and open source databases as replicas. Our experiments show that a configuration that can tolerate one faulty replica has only a modest performance overhead (about 10-20% for the TPC-C benchmark). Our implementation successfully masks several Byzantine faults observed in practice and we have used it to find a new bug in MySQL.
</description>
<pubDate>Mon, 30 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41873</guid>
<dc:date>2008-06-30T00:00:00Z</dc:date>
</item>
<item>
<title>Revenue in Truly Combinatorial Auctions and Adversarial Mechanism Design</title>
<link>https://hdl.handle.net/1721.1/41872</link>
<description>Revenue in Truly Combinatorial Auctions and Adversarial Mechanism Design
Micali, Silvio; Valiant, Paul
Little is known about generating revenue in UNRESTRICTED combinatorial auctions. (In particular, the VCG mechanism has no revenue guarantees.) In this paper we determine how much revenue can be guaranteed in such auctions. Our analysis holds both in the standard model, when all players are independent and rational, as well as in a most adversarial model, where some players may bid collusively or even totally irrationally.
</description>
<pubDate>Fri, 02 Nov 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41872</guid>
<dc:date>2007-11-02T00:00:00Z</dc:date>
</item>
<item>
<title>Safe Open-Nested Transactions Through Ownership</title>
<link>https://hdl.handle.net/1721.1/41871</link>
<description>Safe Open-Nested Transactions Through Ownership
Agrawal, Kunal; Lee, I-Ting Angelina; Sukha, Jim
Researchers in transactional memory (TM) have proposed open nesting as a methodology for increasing the concurrency of a program. The idea is to ignore certain "low-level" memory operations of an open-nested transaction when detecting conflicts for its parent transaction, and instead perform abstract concurrency control for the "high-level" operation that the nested transaction represents. To support this methodology, TM systems use an open-nested commit mechanism that commits all changes performed by an open-nested transaction directly to memory, thereby avoiding low-level conflicts. Unfortunately, because the TM runtime is unaware of the different levels of memory, an unconstrained use of open-nested commits can lead to anomalous program behavior. In this paper, we describe a framework of ownership-aware transactional memory which incorporates the notion of modules into the TM system and requires that transactions and data be associated with specific transactional modules or Xmodules. We propose a new ownership-aware commit mechanism, a hybrid between an open-nested and closed-nested commit, which commits a piece of data differently depending on whether the current Xmodule owns the data or not. Moreover, we give a set of precise constraints on interactions and sharing of data among the Xmodules based on familiar notions of abstraction. We prove that ownership-aware TM has clean memory-level semantics and can guarantee serializability by modules, which is an adaptation of multilevel serializability from databases to TM. In addition, we describe how a programmer can specify Xmodules and ownership in a Java-like language. Our type system can enforce most of the constraints required by ownership-aware TM statically, and can enforce the remaining constraints dynamically. Finally, we prove that if transactions in the process of aborting obey restrictions on their memory footprint, the OAT model is free from semantic deadlock.
</description>
<pubDate>Wed, 20 Feb 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41871</guid>
<dc:date>2008-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>Matching Sets of Features for Efficient Retrieval and Recognition</title>
<link>https://hdl.handle.net/1721.1/41864</link>
<description>Matching Sets of Features for Efficient Retrieval and Recognition
Grauman, Kristen Lorraine
In numerous domains it is useful to represent a single example by the collection of local features or parts that comprise it. In computer vision in particular, local image features are a powerful way to describe images of objects and scenes. Their stability under variable image conditions is critical for success in a wide range of recognition and retrieval applications. However, many conventional similarity measures and machine learning algorithms assume vector inputs. Comparing and learning from images represented by sets of local features is therefore challenging, since each set may vary in cardinality and its elements lack a meaningful ordering. In this thesis I present computationally efficient techniques to handle comparisons, learning, and indexing with examples represented by sets of features. The primary goal of this research is to design and demonstrate algorithms that can effectively accommodate this useful representation in a way that scales with both the representation size as well as the number of images available for indexing or learning. I introduce the pyramid match algorithm, which efficiently forms an implicit partial matching between two sets of feature vectors. The matching has a linear time complexity, naturally forms a Mercer kernel, and is robust to clutter or outlier features, a critical advantage for handling images with variable backgrounds, occlusions, and viewpoint changes. I provide bounds on the expected error relative to the optimal partial matching. For very large databases, even extremely efficient pairwise comparisons may not offer adequately responsive query times.
I show how to perform sub-linear time retrievals under the matching measure with randomized hashing techniques, even when input sets have varying numbers of features. My results are focused on several important vision tasks, including applications to content-based image retrieval, discriminative classification for object recognition, kernel regression, and unsupervised learning of categories. I show how the dramatic increase in performance enables accurate and flexible image comparisons to be made on large-scale data sets, and removes the need to artificially limit the number of local descriptions used per image when learning visual categories.
</description>
<pubDate>Fri, 11 Aug 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41864</guid>
<dc:date>2006-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Fast concurrent object classification and localization</title>
<link>https://hdl.handle.net/1721.1/41862</link>
<description>Fast concurrent object classification and localization
Yeh, Tom; Lee, John J.; Darrell, Trevor
Object localization and classification are important problems in computer vision. However, in many applications, exhaustive search over all class labels and image locations is computationally prohibitive. While several methods have been proposed to make either classification or localization more efficient, few have dealt with both tasks simultaneously. This paper proposes an efficient method for concurrent object localization and classification based on a data-dependent multi-class branch-and-bound formalism. Existing bag-of-features classification schemes, which can be expressed as weighted combinations of feature counts, can be readily adapted to our method. We present experimental results that demonstrate the merit of our algorithm in terms of classification accuracy, localization accuracy, and speed, compared to baseline approaches including exhaustive search, the ISM method, and single-class branch and bound.
</description>
<pubDate>Tue, 10 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41862</guid>
<dc:date>2008-06-10T00:00:00Z</dc:date>
</item>
<item>
<title>Agent Organization in the Knowledge Plane</title>
<link>https://hdl.handle.net/1721.1/41861</link>
<description>Agent Organization in the Knowledge Plane
Li, Ji
In designing and building a network like the Internet, we continue to face the problems of scale and distribution. With the dramatic expansion in scale and heterogeneity of the Internet, network management has become an increasingly difficult task. Furthermore, network applications often need to maintain efficient organization among the participants by collecting information from the underlying networks. Such individual information collection activities lead to duplicate efforts and contention for network resources. The Knowledge Plane (KP) is a new common construct that provides knowledge and expertise to meet the functional, policy and scaling requirements of network management, as well as to create synergy and exploit commonality among many network applications. To achieve these goals, we face many challenging problems, including widely distributed data collection, efficient processing of that data, wide availability of the expertise, etc. In this thesis, to provide better support for network management and large-scale network applications, I propose a knowledge plane architecture that consists of a network knowledge plane (NetKP) at the network layer, and on top of it, multiple specialized KPs (spec-KPs). The NetKP organizes agents to provide valuable knowledge and facilities about the Internet to the spec-KPs. Each spec-KP is specialized in its own area of interest. In both the NetKP and the spec-KPs, agents are organized into regions based on different sets of constraints. I focus on two key design issues in the NetKP: (1) a region-based architecture for agent organization, in which I design an efficient and non-intrusive organization among regions that combines network topology and a distributed hash table; (2) request and knowledge dissemination, in which I design a robust and efficient broadcast and aggregation mechanism using a tree structure among regions.
In the spec-KPs, I build two examples: experiment management on the PlanetLab testbed and distributed intrusion detection on the DETER testbed. The experimental results suggest that a common approach, driven by the design principles of the Internet and more specialized constraints, can yield productive organization for network management and applications.
</description>
<pubDate>Wed, 11 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41861</guid>
<dc:date>2008-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>Non-Metrical Navigation Through Visual Path Control</title>
<link>https://hdl.handle.net/1721.1/41860</link>
<description>Non-Metrical Navigation Through Visual Path Control
Huang, Albert S.; Teller, Seth
We describe a new method for wide-area, non-metrical robot navigation which enables useful, purposeful motion indoors. Our method has two phases: a training phase, in which a human user directs a wheeled robot with an attached camera through an environment while occasionally supplying textual place names; and a navigation phase in which the user specifies goal place names (again as text), and the robot issues low-level motion control in order to move to the specified place. We show that differences in the visual-field locations and scales of features matched across training and navigation can be used to construct a simple and robust control rule that guides the robot onto and along the training motion path. Our method uses an omnidirectional camera, requires approximate intrinsic and extrinsic camera calibration, and is capable of effective motion control within an extended, minimally-prepared building environment floor plan. We give results for deployment within a single building floor with 7 rooms, 6 corridor segments, and 15 distinct place names.
</description>
<pubDate>Fri, 06 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41860</guid>
<dc:date>2008-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>On a model of visual cortex: learning invariance and selectivity</title>
<link>https://hdl.handle.net/1721.1/41858</link>
<description>On a model of visual cortex: learning invariance and selectivity
Caponnetto, Andrea; Poggio, Tomaso; Smale, Steve
In this paper we present a class of algorithms for similarity learning on spaces of images. The general framework that we introduce is motivated by some well-known hierarchical pre-processing architectures for object recognition which have been developed during the last decade, and which have been in some cases inspired by functional models of the ventral stream of the visual cortex. These architectures are characterized by the construction of a hierarchy of “local” feature representations of the visual stimulus. We show that our framework includes some well-known techniques, and that it is suitable for the analysis of dynamic visual stimuli, presenting a quantitative error analysis in this setting.
</description>
<pubDate>Fri, 04 Apr 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41858</guid>
<dc:date>2008-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>The SoftPHY Abstraction: from Packets to Symbols in Wireless Network Design</title>
<link>https://hdl.handle.net/1721.1/41857</link>
<description>The SoftPHY Abstraction: from Packets to Symbols in Wireless Network Design
Jamieson, Kyle
At ever-increasing rates, we are using wireless systems to communicate with others and retrieve content of interest to us. Current wireless technologies such as WiFi or Zigbee use forward error correction to drive bit error rates down when there are few interfering transmissions. However, as more of us use wireless networks to retrieve increasingly rich content, interference increases in unpredictable ways. This results in errored bits, degraded throughput, and eventually, an unusable network. We observe that this is the result of higher layers working at the packet granularity, whereas they would benefit from a shift in perspective from whole packets to individual symbols. From real-world experiments on a 31-node testbed of Zigbee and software-defined radios, we find that often, not all of the bits in corrupted packets share fate. Thus, today's wireless protocols retransmit packets where only a small number of the constituent bits in a packet are in error, wasting network resources. In this dissertation, we will describe a physical layer that passes information about its confidence in each decoded symbol up to higher layers. These SoftPHY hints have many applications, one of which, more efficient link-layer retransmissions, we will describe in detail. PP-ARQ is a link-layer reliable retransmission protocol that allows a receiver to compactly encode a request for retransmission of only the bits in a packet that are likely in error. Our experimental results show that PP-ARQ increases aggregate network throughput by a factor of approximately 2x under various conditions. Finally, we will place our contributions in the context of related work and discuss other uses of SoftPHY throughout the wireless networking stack.
</description>
<pubDate>Tue, 03 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41857</guid>
<dc:date>2008-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Ignorable Information in Multi-Agent Scenarios</title>
<link>https://hdl.handle.net/1721.1/41530</link>
<description>Ignorable Information in Multi-Agent Scenarios
Milch, Brian; Koller, Daphne
In some multi-agent scenarios, identifying observations that an agent can safely ignore reduces exponentially the size of the agent's strategy space and hence the time required to find a Nash equilibrium. We consider games represented using the multi-agent influence diagram (MAID) framework of Koller and Milch [2001], and analyze the extent to which information edges can be eliminated. We define a notion of a safe edge removal transformation, where all equilibria in the reduced model are also equilibria in the original model. We show that existing edge removal algorithms for influence diagrams are safe, but limited, in that they do not detect certain cases where edges can be removed safely. We describe an algorithm that produces the "minimal" safe reduction, which removes as many edges as possible while still preserving safety. Finally, we note that both the existing edge removal algorithms and our new one can eliminate equilibria where agents coordinate their actions by conditioning on irrelevant information. Surprisingly, in some games these "lost" equilibria can be preferred by all agents in the game.
</description>
<pubDate>Mon, 12 May 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41530</guid>
<dc:date>2008-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Perfect Implementation of Normal-Form Mechanisms</title>
<link>https://hdl.handle.net/1721.1/41527</link>
<description>Perfect Implementation of Normal-Form Mechanisms
Izmalkov, Sergei; Lepinski, Matt; Micali, Silvio
Privacy and trust affect our strategic thinking, yet they have not been precisely modeled in mechanism design. In settings of incomplete information, traditional implementations of a normal-form mechanism ---by disregarding the players' privacy, or assuming trust in a mediator--- may not be realistic and fail to reach the mechanism's objectives. We thus investigate implementations of a new type. We put forward the notion of a perfect implementation of a normal-form mechanism M: in essence, an extensive-form mechanism exactly preserving all strategic properties of M, WITHOUT relying on a trusted mediator or violating the privacy of the players. We prove that ANY normal-form mechanism can be perfectly implemented by a PUBLIC mediator using envelopes and an envelope-randomizing device (i.e., the same tools used for running fair lotteries or tallying secret votes). Differently from a trusted mediator, a public one only performs prescribed public actions, so that everyone can verify that he is acting properly, and never learns any information that should remain private.
</description>
<pubDate>Thu, 01 Mar 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41527</guid>
<dc:date>2007-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gesture in Automatic Discourse Processing</title>
<link>https://hdl.handle.net/1721.1/41526</link>
<description>Gesture in Automatic Discourse Processing
Eisenstein, Jacob
Computers cannot fully understand spoken language without access to the wide range of modalities that accompany speech. This thesis addresses the particularly expressive modality of hand gesture, and focuses on building structured statistical models at the intersection of speech, vision, and meaning. My approach is distinguished in two key respects. First, gestural patterns are leveraged to discover parallel structures in the meaning of the associated speech. This differs from prior work that attempted to interpret individual gestures directly, an approach that was prone to a lack of generality across speakers. Second, I present novel, structured statistical models for multimodal language processing, which enable learning about gesture in its linguistic context, rather than in the abstract. These ideas find successful application in a variety of language processing tasks: resolving ambiguous noun phrases, segmenting speech into topics, and producing keyframe summaries of spoken language. In all three cases, the addition of gestural features -- extracted automatically from video -- yields significantly improved performance over a state-of-the-art text-only alternative. This marks the first demonstration that hand gesture improves automatic discourse processing.
</description>
<pubDate>Wed, 07 May 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41526</guid>
<dc:date>2008-05-07T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Object Recognition and Image Retrieval for Large-Scale Applications</title>
<link>https://hdl.handle.net/1721.1/41519</link>
<description>Efficient Object Recognition and Image Retrieval for Large-Scale Applications
Lee, John J.
Algorithms for recognition and retrieval tasks generally call for both speed and accuracy. When scaling up to very large applications, however, we encounter additional significant requirements: adaptability and scalability. In many real-world systems, large numbers of images are constantly added to the database, requiring the algorithm to quickly tune itself to recent trends so it can serve queries more effectively. Moreover, the systems need to be able to meet the demands of simultaneous queries from many users. In this thesis, I describe two new algorithms intended to meet these requirements and give an extensive experimental evaluation for both. The first algorithm constructs an adaptive vocabulary forest, which is an efficient image-database model that grows and shrinks as needed while adapting its structure to tune itself to recent trends. The second algorithm is a method for efficiently performing classification tasks by comparing query images to only a fixed number of training examples, regardless of the size of the image database. These two methods can be combined to create a fast, adaptable, and scalable vision system suitable for large-scale applications. I also introduce LIBPMK, a fast implementation of common computer vision processing pipelines such as that of the pyramid match kernel. This implementation was used to build several successful interactive applications as well as batch experiments for research settings. This implementation, in addition to the two new algorithms introduced by this thesis, is a step toward meeting the speed, adaptability, and scalability requirements of practical large-scale vision systems.
</description>
<pubDate>Tue, 06 May 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41519</guid>
<dc:date>2008-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>New-Age Cryptography</title>
<link>https://hdl.handle.net/1721.1/41518</link>
<description>New-Age Cryptography
Pass, Rafael; Vaikuntanathan, Vinod
We introduce new and general complexity theoretic hardness assumptions. These assumptions abstract out concrete properties of a random oracle and are significantly stronger than traditional cryptographic hardness assumptions; however, assuming their validity we can resolve a number of longstanding open problems in cryptography.
</description>
<pubDate>Wed, 16 Apr 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41518</guid>
<dc:date>2008-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Transferring Nonlinear Representations using Gaussian Processes with a Shared Latent Space</title>
<link>https://hdl.handle.net/1721.1/41517</link>
<description>Transferring Nonlinear Representations using Gaussian Processes with a Shared Latent Space
Urtasun, Raquel; Quattoni, Ariadna; Lawrence, Neil; Darrell, Trevor
When a series of problems are related, representations derived from learning earlier tasks may be useful in solving later problems. In this paper we propose a novel approach to transfer learning with low-dimensional, non-linear latent spaces. We show how such representations can be jointly learned across multiple tasks in a Gaussian Process framework. When transferred to new tasks with relatively few training examples, learning can be faster and/or more accurate. Experiments on digit recognition and newsgroup classification tasks show significantly improved performance when compared to baseline performance with a representation derived from a semi-supervised learning approach or with a discriminative approach that uses only the target data.
</description>
<pubDate>Fri, 11 Apr 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41517</guid>
<dc:date>2008-04-11T00:00:00Z</dc:date>
</item>
<item>
<title>Random-World Semantics and Syntactic Independence for Expressive Languages</title>
<link>https://hdl.handle.net/1721.1/41516</link>
<description>Random-World Semantics and Syntactic Independence for Expressive Languages
McAllester, David; Milch, Brian; Goodman, Noah D.
We consider three desiderata for a language combining logic and probability: logical expressivity, random-world semantics, and the existence of a useful syntactic condition for probabilistic independence. Achieving these three desiderata simultaneously is nontrivial. Expressivity can be achieved by using a formalism similar to a programming language, but standard approaches to combining programming languages with probabilities sacrifice random-world semantics. Naive approaches to restoring random-world semantics undermine syntactic independence criteria. Our main result is a syntactic independence criterion that holds for a broad class of highly expressive logics under random-world semantics. We explore various examples including Bayesian networks, probabilistic context-free grammars, and an example from Mendelian genetics. Our independence criterion supports a case-factor inference technique that reproduces both variable elimination for BNs and the inside algorithm for PCFGs.
</description>
<pubDate>Sat, 03 May 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41516</guid>
<dc:date>2008-05-03T00:00:00Z</dc:date>
</item>
<item>
<title>Generalization of the MV Mechanism</title>
<link>https://hdl.handle.net/1721.1/41515</link>
<description>Generalization of the MV Mechanism
Chen, Jing
Micali and Valiant proposed a mechanism for combinatorial auctions that is dominant-strategy truthful, guarantees reasonably high revenue, and is very resilient against collusion. Their mechanism, however, uses as a subroutine the VCG mechanism, which is not polynomial time. We propose a modification of their mechanism that is efficient, while retaining their collusion resilience and a good fraction of their revenue, if given as a subroutine an efficient approximation of the VCG mechanism.
</description>
<pubDate>Thu, 01 May 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41515</guid>
<dc:date>2008-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Block Heavy Hitters</title>
<link>https://hdl.handle.net/1721.1/41514</link>
<description>Block Heavy Hitters
Andoni, Alexandr; Ba, Khanh Do; Indyk, Piotr
We study a natural generalization of the heavy hitters problem in the streaming context. We term this generalization *block heavy hitters* and define it as follows. We are to stream over a matrix $A$, and report all *rows* that are heavy, where a row is heavy if its ell_1-norm is at least a phi fraction of the ell_1 norm of the entire matrix $A$. In comparison, in the standard heavy hitters problem, we are required to report the matrix *entries* that are heavy. As is common in streaming, we solve the problem approximately: we return all rows with weight at least phi, but also possibly some other rows that have weight no less than (1-eps)phi. To solve the block heavy hitters problem, we show how to construct a linear sketch of A from which we can recover the heavy rows of A. The block heavy hitters problem has already found applications for other streaming problems. In particular, it is a crucial building block in a streaming algorithm that constructs a small-size sketch for the Ulam metric, a metric on non-repetitive strings under the edit (Levenshtein) distance.
</description>
<pubDate>Fri, 02 May 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41514</guid>
<dc:date>2008-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding camera trade-offs through a Bayesian analysis of light field projections</title>
<link>https://hdl.handle.net/1721.1/41513</link>
<description>Understanding camera trade-offs through a Bayesian analysis of light field projections
Levin, Anat; Freeman, William T.; Durand, Fredo
Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.
</description>
<pubDate>Wed, 16 Apr 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41513</guid>
<dc:date>2008-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Shadows and Cracks</title>
<link>https://hdl.handle.net/1721.1/41512</link>
<description>Shadows and Cracks
Dowson, Mark; Waltz, David
The VIRGIN program will interpret pictures of crack- and shadow-free scenes by labelling them according to the Clowes/Huffman formalism. This paper indicates methods of extending the program to include cracks and shadows and shows that such an extension makes available heuristics which allow the program to be less simple-minded.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense, and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</description>
<pubDate>Tue, 01 Jun 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41512</guid>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Injection Molding at the MIT Artificial Intelligence Lab</title>
<link>https://hdl.handle.net/1721.1/41511</link>
<description>Injection Molding at the MIT Artificial Intelligence Lab
Binnard, Michael
This paper describes the injection molding equipment at the MIT Artificial Intelligence Lab and how to use it. Topics covered include mold design, insert molding, safety, and material properties.
</description>
<pubDate>Thu, 23 Feb 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41511</guid>
<dc:date>1995-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Capture It, Name It, Own it: How to capture re-occurring patterns, name them and turn them into reusable functions via Emacs kbd-macros</title>
<link>https://hdl.handle.net/1721.1/41510</link>
<description>Capture It, Name It, Own it: How to capture re-occurring patterns, name them and turn them into reusable functions via Emacs kbd-macros
Kozlowski, Stefan N.
The purpose of this talk is not to teach you about Emacs or Emacs kbd-macros, though we will use both as examples. I can teach you everything there is to know about Emacs and kbd-macros in 5 minutes. There are literally only about six commands which govern the majority of the Emacs kbd-macro universe but just knowing the commands is not going to help you much. To borrow an analogy from the introductory 6.001 lecture, I can teach you all the rules of chess in ten minutes but that does not mean that you will be a good chess player in ten minutes. The purpose of this talk is to get you to think about many of the methods and processes we perform each day in our jobs. Hopefully, such an examination will make you realize that we often repeat the same processes over and over. If we can isolate a repeated process, we can often capture it and transform it into a reusable function.&#13;
Today we will be looking at capturing such processes via Emacs kbd-macros, though you should be aware that many of these methods can also be applied to UNIX, other operating systems, editors and languages. The reason we will be examining this topic via Emacs kbd-macros is that it is the easiest and most user-friendly way to approach the subject. We are going to start by looking at very simple examples and progress in complexity. I have written these macros and use some of them on a daily basis. Hopefully, some of these examples will directly correlate to duties you perform each day at work and you will be able to use some of them.
(**Note: This text was delivered as a lecture to the AI Lab Support Staff and still appears as such.**)&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-89-J-3202 and by the National Science Foundation under grant number MIP-9001651.
</description>
<pubDate>Fri, 01 May 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41510</guid>
<dc:date>1992-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tomorrow's Surgery: Micromotors and Microrobots</title>
<link>https://hdl.handle.net/1721.1/41509</link>
<description>Tomorrow's Surgery: Micromotors and Microrobots
Flynn, Anita M.; Udayakumar, K. R.; Barrett, David S.
Surgical procedures have changed radically over the last few years due to the arrival of new technology. What will technology bring us in the future?&#13;
This paper examines a few of the forces whose timing is causing new ideas to congeal from the fields of artificial intelligence, robotics, micromachining and smart materials.&#13;
Intelligent systems for autonomous mobile robots can now enable simple insect-level behaviors in small amounts of silicon. These software breakthroughs, coupled with new techniques for microfabricating miniature sensors and actuators from both silicon and ferroelectric families of materials, offer glimpses of a future where robots will be small, cheap and potentially useful to surgeons.&#13;
In this paper we relate our recent efforts to fabricate piezoelectric micromotors in an effort to develop actuator technologies where the brawn matches the scale of the brain. We discuss our experiments with thin-film ferroelectric motors 2 mm in diameter and larger 8 mm versions machined from bulk ceramic, and sketch possible applications in the surgical field.
</description>
<pubDate>Wed, 01 Jul 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41509</guid>
<dc:date>1992-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI Lab Faculty</title>
<link>https://hdl.handle.net/1721.1/41508</link>
<description>AI Lab Faculty
Torrance, Mark C.
This document is meant to introduce new graduate students in the MIT AI Lab to the faculty members of the laboratory and their research interests. Each entry consists of the faculty member's picture, if available, some information on how to reach them, their responses to a few survey questions, and a few paragraphs excerpted from the AI Lab President's Report, as edited by Patrick Winston.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41508</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A User's Guide to the AI Lab: Getting Started at Tech Square</title>
<link>https://hdl.handle.net/1721.1/41507</link>
<description>A User's Guide to the AI Lab: Getting Started at Tech Square
Hofmeister, Scott; Ruecker, Lukas
</description>
<pubDate>Sun, 18 Aug 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41507</guid>
<dc:date>1991-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Fine Grained Robotics</title>
<link>https://hdl.handle.net/1721.1/41506</link>
<description>Fine Grained Robotics
Flynn, Anita M.; Barrett, David S.
Fine grained robotics is the idea of solving problems utilizing multitudes of very simple machines in place of one large complex entity. Organized in the proper way, simple machines and simple behaviors can lead to emergent solutions. Just as ants and termites perform useful work and build communal structures, gnat robots can solve problems in new ways. This notion of collective intelligence, married with technologies for mass-producing small robots very cheaply will blaze new avenues in all aspects of everyday life. Building gnat robots involves not only inventing the components from which to put together systems but also developing the technologies to produce the components.&#13;
This paper analyzes prototype microrobotic systems, specifically calculating torque and power requirements for three locomotion alternatives (flying, walking and swimming) for small robots. With target specifications for motors for these systems, we then review technology options and bottlenecks and sort through the tree of possibilities to pick an appropriate path along which we plan to proceed.
</description>
<pubDate>Fri, 01 Feb 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41506</guid>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Evolution of Society</title>
<link>https://hdl.handle.net/1721.1/41505</link>
<description>The Evolution of Society
Inman, Jeff
We re-examine the evolutionary stability of the tit-for-tat (tft) strategy in the context of the iterated prisoner's dilemma, as introduced by Axelrod and Hamilton. This environment involves a mixture of populations of "organisms" which interact with each other according to the rules of the prisoner's dilemma, from game theory. The tft strategy is nice, retaliatory and forgiving, and these properties contributed to the success of the strategy in the earlier experiments. However, it turns out that the property of being nice represents a weakness, when competing with an insular strategy, but the reverse is also true, which means that tft is not an evolutionarily stable strategy. In fact, insular strategies prove to be better at resisting incursion. Finally, we consider the implications of this result, in terms of naturally occurring societies.
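The experimental setting can be reproduced in a few lines. The payoff values below are the standard ones from Axelrod's tournaments (T=5, R=3, P=1, S=0), an assumption not stated in this abstract, and "always defect" stands in here for an insular strategy.

```python
# Iterated prisoner's dilemma with the standard Axelrod payoffs.
# 'C' = cooperate, 'D' = defect; PAYOFF maps (my move, their move)
# to (my score, their score).

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    # Nice (opens with C), retaliatory and forgiving (copies last move).
    return opp_hist[-1] if opp_hist else 'C'

def always_defect(my_hist, opp_hist):
    return 'D'                      # stand-in for an insular strategy

def play(s1, s2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# tft's niceness costs it exactly one sucker payoff in the first round,
# after which both sides defect:
print(play(tit_for_tat, always_defect))  # (9, 14) over 10 rounds
```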
</description>
<pubDate>Mon, 05 Aug 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41505</guid>
<dc:date>1991-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Correction of Force Errors for Flexible Manipulators in Quasi-Static Conditions</title>
<link>https://hdl.handle.net/1721.1/41504</link>
<description>Correction of Force Errors for Flexible Manipulators in Quasi-Static Conditions
Bicchi, Antonio; Melchiorri, Claudio
This paper deals with the problem of controlling the interactions of flexible manipulators with their environment. For executing a force control task, a manipulator with intrinsic (mechanical) compliance has some advantages over the rigid manipulators commonly employed in position control tasks. In particular, stability margins of the force control loop are increased, and robustness to uncertainties in the model of the environment is improved for compliant arms. On the other hand, the deformations of the arm under the applied load give rise to errors, that ultimately reflect in force control errors. This paper addresses the problem of evaluating these errors, and of compensating for them with suitable joint angle corrections. A solution to this problem is proposed in the simplifying assumptions that an accurate model of the arm flexibility is known, and that quasi-static corrections are of interest.
</description>
<pubDate>Sat, 01 Dec 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41504</guid>
<dc:date>1990-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Experiment in Knowledge Acquisition for Software Requirements</title>
<link>https://hdl.handle.net/1721.1/41503</link>
<description>An Experiment in Knowledge Acquisition for Software Requirements
Lefelhocz, Paul M.
The Requirements Apprentice (RA) is a demonstration system that assists a human analyst in the requirements-acquisition phase of the software-development process. By applying the RA to another example it has been possible to show some of the range of applicability of the RA. The same disambiguation, formalization, and contradiction-resolution techniques are useful in the air traffic control and library database domains and some clichés are shared between them. In addition, the need for an extension to the RA is seen: summarization of contradictions could be improved.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41503</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extending 2-D Smoothed Local Symmetries to 3-D</title>
<link>https://hdl.handle.net/1721.1/41502</link>
<description>Extending 2-D Smoothed Local Symmetries to 3-D
Braunegg, David J.
3-D Smoothed Local Symmetries (3-D SLS's) are presented as a representation for three-dimensional shapes. 3-D SLS's make explicit the perceptually salient features of 3-D objects and are especially suited to representing man-made objects. The definition of the 3-D SLS is given as a natural extension of the 2-D Smoothed Local Symmetry (2-D SLS). Analytic descriptions of the 3-D SLS are derived for objects composed of planar and spherical patches. Results of an implementation of the 3-D SLS are presented, along with suggestions for further research.
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41502</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Program Design Assistant</title>
<link>https://hdl.handle.net/1721.1/41501</link>
<description>A Program Design Assistant
Tan, Yang Meng
The DA will be a design assistant which can assist the programmer in low-level design. The input language of the DA is a cliché-based program description language that allows the specification and high-level design of commonly-written programs to be described concisely. The DA language is high-level in the sense that programmers need not bother with detailed design. The DA will provide automatic low-level design assistance to the programmer in selecting appropriate algorithms and data structures. It will also detect inconsistencies and incompleteness in program descriptions.&#13;
A key related issue in this research is the representation of programming knowledge in a design assistant. The knowledge needed to automate low-level design and the knowledge in specific programming clichés have to be represented explicitly to facilitate reuse.
</description>
<pubDate>Thu, 01 Jun 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41501</guid>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principles of Knowledge Representation and Reasoning in the FRAPPE System</title>
<link>https://hdl.handle.net/1721.1/41500</link>
<description>Principles of Knowledge Representation and Reasoning in the FRAPPE System
Feldman, Yishai A.; Rich, Charles
The purpose of this paper is to elucidate the following four important architectural principles of knowledge representation and reasoning with the example of an implemented system: limited reasoning, truth maintenance, hybrid architecture, and many sorted logic.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41500</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decision Representation Language (DRL) and Its Support Environment</title>
<link>https://hdl.handle.net/1721.1/41499</link>
<description>Decision Representation Language (DRL) and Its Support Environment
Lee, Jintae
In this report, I describe a language, called Decision Representation Language (DRL), for representing the qualitative aspects of decision making processes such as the alternatives being evaluated, goals to satisfy, and the arguments evaluating the alternatives. Once a decision process is represented in this language, the system can provide a set of services that support people making the decision. These services, together with the interface such as the object and the different presentation formats, form the support environment for using the language. I describe the services that have so far been identified to be useful — the management of dependency, plausibility, viewpoints, and precedents. I also discuss how this work on DRL is related to other studies on decision making.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41499</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Don't Loop, Iterate</title>
<link>https://hdl.handle.net/1721.1/41498</link>
<description>Don't Loop, Iterate
Amsterdam, Jonathan
I describe an iteration macro for Common Lisp that is clear, efficient, extensible, and in excellent taste.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41498</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The GSL Cookbook</title>
<link>https://hdl.handle.net/1721.1/41497</link>
<description>The GSL Cookbook
Braunegg, David J.
This cookbook contains recipes prepared for the GSL (Graduate Student Lunch) at the Massachusetts Institute of Technology Artificial Intelligence Laboratory.
</description>
<pubDate>Wed, 01 Mar 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41497</guid>
<dc:date>1989-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining the Limits of Automated Program Recognition</title>
<link>https://hdl.handle.net/1721.1/41496</link>
<description>Determining the Limits of Automated Program Recognition
Wills, Linda M.
Program recognition is a program understanding technique in which stereotypic computational structures are identified in a program. From this identification and the known relationships between the structures, a hierarchical description of the program's design is recovered. The feasibility of this technique for small programs has been shown by several researchers. However, it seems unlikely that the existing program recognition systems will scale up to realistic, full-sized programs without some guidance (e.g., from a person using the recognition system as an assistant). One reason is that there are limits to what can be recovered by a purely code-driven approach. Some of the information about the program that is useful to know for common software engineering tasks, particularly maintenance, is missing from the code. Another reason guidance must be provided is to reduce the cost of recognition. To determine what guidance is appropriate, therefore, we must know what information is recoverable from the code and where the complexity of program recognition lies. I propose to study the limits of program recognition, both empirically and analytically. First, I will build an experimental system that performs recognition on realistic programs on the order of thousands of lines. This will allow me to characterize the information that can be recovered by this code-driven technique. Second, I will formally analyze the complexity of the recognition process. This will help determine how guidance can be applied most profitably to improve the efficiency of program recognition.
This working paper was submitted as a Ph.D. thesis proposal.
</description>
<pubDate>Thu, 01 Jun 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41496</guid>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating vision modules with coupled MRFs</title>
<link>https://hdl.handle.net/1721.1/41495</link>
<description>Integrating vision modules with coupled MRFs
Poggio, Tomaso
I outline a project for integrating several early visual modalities based on coupled Markov Random Fields models of the physical processes underlying image formation, such as depth, albedo and orientation of surfaces. The key ideas are:&#13;
a) to use as input data estimates of the various processes and their discontinuities, computed by several different algorithms.&#13;
b) to implement with MRFs the physical and geometrical constraints of local "continuity" of the processes and of their discontinuities. Processes are coupled to each other: the most common form of coupling is a veto — one process vetoing another — as in the case of discontinuities and the associated continuous field.
A. I. Laboratory Working Papers are produced for internal circulation and contain proteins, lipids, cholesterol, polysorbate-80, and other compounds unsuitable for external exposure. It is not intended that material in this paper be applied externally; it is intended for internal consumption only. Serving suggestion: add taco sauce (not included).
</description>
<pubDate>Sun, 01 Dec 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41495</guid>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Construction and Refinement of Justified Causal Models Through Variable-Level Explanation and Perception, and Experimenting</title>
<link>https://hdl.handle.net/1721.1/41494</link>
<description>Construction and Refinement of Justified Causal Models Through Variable-Level Explanation and Perception, and Experimenting
Doyle, Richard J.
The competence being investigated is causal modelling, whereby the behavior of a physical system is understood through the creation of an explanation or description of the underlying causal relations.&#13;
After developing a model of causality, I show how the causal modelling competence can arise from a combination of inductive and deductive inference employing knowledge of the general form of causal relations and of the kinds of causal mechanisms that exist in a domain.&#13;
The hypotheses generated by the causal modelling system range from purely empirical to more and more strongly justified. Hypotheses are justified by explanations derived from the domain theory and by perceptions which instantiate those explanations. Hypotheses never can be proven because the domain theory is neither complete nor consistent. Causal models which turn out to be inconsistent may be repairable by increasing the resolution of explanation and/or perception.&#13;
During the causal modelling process, many hypotheses may be partially justified and even leading hypotheses may have only minimal justification. An experiment design capability is proposed whereby the next observation can be deliberately arranged to distinguish several hypotheses or to make particular hypotheses more justified. Experimenting is seen as the active gathering of greater justification for fewer and fewer hypotheses.
</description>
<pubDate>Sun, 01 Dec 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41494</guid>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Further Evidence Against the Recovery Theory of Vision</title>
<link>https://hdl.handle.net/1721.1/41493</link>
<description>Further Evidence Against the Recovery Theory of Vision
Marill, Thomas
The problem of three-dimensional vision is generally formulated as the problem of recovering the three-dimensional scene that caused the image.&#13;
We have previously presented a certain line-drawing and shown that it has the following property: the three-dimensional object we see when we look at this line-drawing does not have the line-drawing as its image. It would therefore be impossible for the seen object to be the cause of the image. Such an occurrence constitutes a counterexample to the theory that vision recovers the scene that caused the image.&#13;
Here we show that such a counterexample is not an isolated case, but is the rule rather than the exception. Thus, as a general matter, the three-dimensional scenes we see when we look at line-drawings do not have these drawings as their image. This represents further evidence against the recovery theory.
</description>
<pubDate>Wed, 01 Feb 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41493</guid>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transcendence, Facticity,  and Modes of Non-Being</title>
<link>https://hdl.handle.net/1721.1/41492</link>
<description>Transcendence, Facticity,  and Modes of Non-Being
Donald, B. Randall; Canny, J. Francis
Research in artificial intelligence has yet to satisfactorily address the primordial fissure between human consciousness and the material order. How is this split reconciled in terms of human reality? By what duality is Bad Faith possible? We show that the answer is quite subtle, and of particular relevance to certain classical A.I. problems in introspection and intensional belief structure. A principled approach to bad faith and the consciousness of the other is suggested. We present ideas for an implementation in the domain of chemical engineering.
A.I. Laboratory working papers are produced for internal circulation, and may contain information that is, for example, too preliminary, too detailed, or too silly for formal publication. This paper handsomely satisfies all three criteria. While it is destined to become a landmark in its genre, readers are cautioned against making reference to this paper in the literature, as the authors would like to rejoin society with a clean slate. This paper could not have been produced without the assistance of many brilliant but unstable individuals who could not be reached for comment, and whose names have been suppressed pending determination of competence.
</description>
<pubDate>Sat, 01 Mar 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41492</guid>
<dc:date>1986-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vision Utilities</title>
<link>https://hdl.handle.net/1721.1/41491</link>
<description>Vision Utilities
Voorhees, Harry
This paper documents a collection of Lisp utilities which I have written while doing vision programming on a Symbolics Lisp machine. Many of these functions are useful both as interactive commands invoked from the Lisp Listener and as "building blocks" for constructing larger programs. Utilities documented here include functions for loading, storing, and displaying images, for creating synthetic images, for convolving and processing arrays, for making histograms, and for plotting data.
</description>
<pubDate>Sun, 01 Dec 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41491</guid>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Counterexample to the Theory that Vision Recovers Three-Dimensional Scenes</title>
<link>https://hdl.handle.net/1721.1/41490</link>
<description>A Counterexample to the Theory that Vision Recovers Three-Dimensional Scenes
Marill, Thomas
The problem of three-dimensional vision is generally formulated as the problem of recovering the three-dimensional scene that caused the image. Here we present a certain line-drawing and show that it has the following property: the three-dimensional object we see when we look at this line-drawing does not have the line-drawing as its image. It would therefore be impossible for the seen object to be the cause of the image. Such an occurrence constitutes a counterexample to the theory that vision recovers the scene that caused the image.
</description>
<pubDate>Tue, 01 Nov 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41490</guid>
<dc:date>1988-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Test Programming by Program Composition and Symbolic Simulation</title>
<link>https://hdl.handle.net/1721.1/41489</link>
<description>Test Programming by Program Composition and Symbolic Simulation
Shirley, Mark H.
Classical test generation techniques rely on search through gate-level circuit descriptions, which results in long runtimes. In some instances, classical techniques cannot be used because they would take longer than the lifetime of the product to generate tests which are needed when the first devices come off the assembly line. Despite these difficulties, human experts often succeed in writing test programs for very complex circuits. How can we account for their success?&#13;
We take a knowledge engineering approach to this problem by trying to capture in a program techniques gleaned from working with experienced test programmers. From these talks, we conjecture that expert test programming performance relies in part on two aspects of human problem solving.&#13;
First, the experts remember many cliched solutions to test programming problems. The difficulty lies in formalizing the notion of a cliche for this domain. For test programming, we propose that cliches contain goal to subgoal expansions, fragments of test program code, and constraints describing how program fragments fit together. We present an algorithm which uses testing cliches to generate test programs. Second, experts can simulate a circuit at various levels of abstraction and recognize patterns of activity in the circuit which are useful for solving test problems. We argue that symbolic simulation coupled with recognition of which simulated events solve our goals is an effective planning strategy in certain cases. We present a second algorithm which simulates circuit behavior on symbolic inputs at roughly the register transfer level and generates fragments of test programs suitable for use by our first algorithm.
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41489</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Program Recognition: A Proposal</title>
<link>https://hdl.handle.net/1721.1/41488</link>
<description>Automated Program Recognition: A Proposal
Zelinka, Linda M.
The key to understanding a program is recognizing familiar algorithmic fragments and data structures in it. Automating this recognition process will make it easier to perform many tasks which require program understanding, e.g., maintenance, modification, and debugging. This paper proposes a recognition system, called the Recognizer, which automatically identifies occurrences of stereotyped computational fragments and data structures in programs. The Recognizer is able to identify these familiar fragments and structures even though they may be expressed in a wide range of syntactic forms. It does so systematically and efficiently by using a parsing technique. Two important advances have made this possible. The first is a language-independent graphical representation for programs and programming structures which canonicalizes many syntactic features of programs. The second is an efficient graph parsing algorithm.
</description>
<pubDate>Sun, 01 Dec 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41488</guid>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to do Research At the MIT AI Lab</title>
<link>https://hdl.handle.net/1721.1/41487</link>
<description>How to do Research At the MIT AI Lab
Chapman, David
This document presumptuously purports to explain how to do research. We give heuristics that may be useful in picking up specific skills needed for research (reading, writing, programming) and for understanding and enjoying the process itself (methodology, topic and advisor selection, and emotional factors).
</description>
<pubDate>Sat, 01 Oct 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41487</guid>
<dc:date>1988-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Jordan Form of (i+j over j) over Z[subscript p]</title>
<link>https://hdl.handle.net/1721.1/41486</link>
<description>Jordan Form of (i+j over j) over Z[subscript p]
Strauss, Nicholas
The Jordan form over the field Z_p of J^p_{p^n} is diagonal for p &gt; 3, with characteristic polynomial ϕ(x) = x^3 - 1, for p prime and n a natural number. These matrices have dimension p^n x p^n, with entries (i+j over j). I prove these results with the method of generating functions.
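One consequence of the claim is directly checkable: if the Jordan form is diagonal with eigenvalues satisfying x^3 = 1, then M^3 = I (mod p). A quick numerical check of this consequence for n = 1 (the p = 2 and p = 3 cases also happen to satisfy it, even though diagonality is only claimed for p &gt; 3):

```python
from math import comb

# Numerical check (not a proof): the p x p matrix M[i][j] = C(i+j, j)
# over Z_p should satisfy M^3 = I (mod p), consistent with the claim
# that its minimal/characteristic structure is governed by x^3 - 1.

def binom_matrix(size, p):
    return [[comb(i + j, j) % p for j in range(size)] for i in range(size)]

def matmul_mod(A, B, p):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def is_identity(A):
    return all(A[i][j] == (1 if i == j else 0)
               for i in range(len(A)) for j in range(len(A)))

for p in (2, 3, 5):
    M = binom_matrix(p, p)                       # n = 1, so size p^1 = p
    M3 = matmul_mod(matmul_mod(M, M, p), M, p)
    print(p, is_identity(M3))                    # True for each p
```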
</description>
<pubDate>Mon, 01 Jul 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41486</guid>
<dc:date>1985-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>IDEME: A DBMS of Methods</title>
<link>https://hdl.handle.net/1721.1/41485</link>
<description>IDEME: A DBMS of Methods
Lee, Jintae
In this paper, an intelligent database management system (DBMS) called IDEME is presented. IDEME is a program that takes as input a task specification and finds a set of methods potentially relevant to solving that task. It does so by matching the task specification to the methods in its database at multiple levels of abstraction. After isolating potentially useful methods, IDEME ranks them by how relevant they might be to the task. Starting from the most relevant method, it checks whether the method's operational demands, i.e. the conditions that must hold for the method to be applicable, are met by the present task. If so, it presents the algorithm of the method relativized to the present task; otherwise, it goes on to the next method. In this paper, the focus will be on the representation scheme that is used by IDEME to represent methods as well as tasks.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41485</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Writing and Representation</title>
<link>https://hdl.handle.net/1721.1/41484</link>
<description>Writing and Representation
Agre, Philip E.
This paper collects several notes I've written over the last year in an attempt to work through my dissatisfactions with the ideas about representation I was taught in school. Among these ideas are the notion of a 'world model'; the notion of representations having 'content' independent of the identity, location, attitudes, or activities of any agent; and the notion that a representation is the sort of thing you might implement with datastructures and pointers. Here I begin developing an alternative view of representation whose prototype is a set of instructions written in English on a sheet of paper you're holding in your hand while pursuing some ordinarily complicated concrete project in the everyday world. Figuring out what the markings on this paper are talking about is a fresh problem in every next setting, and solving this problem takes work. Several detailed stories about representation use in everyday activities—such as assembling a sofa from a kit, being taught to fold origami cranes, following stories across pages of a newspaper, filling a photocopier with toner, and keeping count when running laps—illustrate this view. Finally, I address the seeming tension between the necessity of interpreting one's representations in every next setting and the idea that everyday life is fundamentally routine.
</description>
<pubDate>Thu, 01 Sep 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41484</guid>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Principle-Based Translator</title>
<link>https://hdl.handle.net/1721.1/41483</link>
<description>Toward a Principle-Based Translator
Dorr, Bonnie J.
A principle-based computational model of natural language translation consists of two components: (1) a module which makes use of a set of principles and parameters to transform the source language into an annotated surface form that can be easily converted into a "base" syntactic structure; and (2) a module which makes use of the same set of principles, but a different set of parameter values, to transform the "base" syntactic structure into the target language surface structure. This proposed scheme of language translation is an improvement over existing schemes since it is based on interactions between principles and parameters rather than on complex interactions between language-specific rules as found in older schemes.&#13;
The background for research of the problem includes: an examination of existing schemes of computerized language translation and an analysis of their shortcomings. Construction of the proposed scheme requires a preliminary investigation of the common "universal" principles and parametric variations across different languages within the framework of current linguistic theory.&#13;
The work to be done includes: construction of a module which uses linguistic principles and source language parameter values to parse and output the corresponding annotated surface structures of source language sentences; creation of procedures which handle the transformation of an annotated surface structure into a "base" syntactic structure; and development of a special purpose generation scheme which converts a "base" syntactic structure into a surface form in the target language.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41483</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Use YTEX</title>
<link>https://hdl.handle.net/1721.1/41482</link>
<description>How to Use YTEX
Brotsky, Daniel
YTEX—pronounced why-TEX or oops-TEX—is a TEX macro package. YTEX provides both an easy-to-use interface for TEX novices and a powerful macro-creation library for TEX programmers. It is this two-tier structure that makes YTEX more useful to a diverse TEX user community than other macro packages such as Plain or LaTEX.&#13;
This paper contains YTEX instructions intended for novice users. It summarizes the facilities provided by YTEX and concludes with a table of useful commands.&#13;
The version of YTEX documented here is release 2.0.
Work on YTEX was supported by a desire to avoid doing real work, like research.
</description>
<pubDate>Mon, 09 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41482</guid>
<dc:date>1986-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Support for Obviously Synchronizable Series Expressions in Pascal</title>
<link>https://hdl.handle.net/1721.1/41481</link>
<description>Support for Obviously Synchronizable Series Expressions in Pascal
Orwant, Jonathan L.
Obviously synchronizable series expressions enable programmers to write algorithms as straightforward compositions of functions rather than as less comprehensible loops while retaining the significantly higher efficiency of loops. A macro package supporting these expressions in Lisp has been in use since December of 1987.&#13;
However, the theory behind obviously synchronizable series expressions is not restricted to Lisp; in fact, it is applicable to any programming language. Because many people view packages designed in Lisp as dependent on the qualities which make Lisp different from other languages, it was decided to support the macro package in the all-purpose language Pascal. This paper discusses its implementation.
</description>
<pubDate>Tue, 01 Nov 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41481</guid>
<dc:date>1988-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Puma/Cougar Implementor's Guide</title>
<link>https://hdl.handle.net/1721.1/41480</link>
<description>Puma/Cougar Implementor's Guide
Jones, Joe L.; O'Donnell, Patrick A.
This document is intended to be a guide to assist a programmer in modifying or extending the Lisp Puma system, the Puma PDP-11 system, or the Cougar PDP-11 system. It consists mostly of short descriptions or hints, and is not intended to be a polished manual. The reader is expected to be familiar with the use of the Puma system, as described in "Using the PUMA System," and the Lisp flavor system, as described in the Lisp Machine Manual.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41480</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using the PUMA System</title>
<link>https://hdl.handle.net/1721.1/41479</link>
<description>Using the PUMA System
Jones, Joe L.; O'Donnell, Patrick A.
This document describes the operation of the Lisp Machine interface to the Unimation Puma 600 Robot Arm. The interface evolved from a system described in an earlier paper, and much is the same. However, the underlying interface between the Lisp Machine and the Puma has changed and some enhancements have been made. VAL has been replaced with a PDP-11/23, communicating with the Lisp Machine over the Chaosnet.&#13;
The purpose of this document is to provide instruction and information in the programming of the Puma arm from the Lisp Machine. The network protocol is not described here, nor are the internals of the implementation. These details are provided in separate documentation.&#13;
The reader will find in this paper both a tutorial section and a reference section. The tutorial will lead the reader through a sample session using the Puma by directly calling the primitive operations, and will provide an introduction to programming using the primitives. The reference section provides an overview of the network protocol and describes all of the primitive operations provided.&#13;
Please note that this document corresponds to the version of the Puma system in use on 11 March, 1985. The system is still undergoing development and enhancement, and there may be additional features, if you are running a newer system. The authors welcome reports of errors, inaccuracies, or suggestions for clarification or improvement in either the documentation or the code for the Puma system. Please send electronic mail to BUG-PUMA@MIT-OZ.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41479</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing the State Behavior of Programs</title>
<link>https://hdl.handle.net/1721.1/41478</link>
<description>Analyzing the State Behavior of Programs
Bawden, Alan
It is generally agreed that the unrestricted use of state can make a program hard to understand, hard to compile, and hard to execute, and that these difficulties increase in the presence of parallel hardware. This problem has led some to suggest that constructs that allow state should be banished from programming languages. But state is also a very useful phenomenon: some tasks are extremely difficult to accomplish without it, and sometimes the most perspicuous expression of an algorithm is one that makes use of state. Instead of outlawing state, we should be trying to understand it, so that we can make better use of it.&#13;
I propose a way of modeling systems in which the phenomenon of state occurs. I propose that systems that exhibit state-like behavior are those systems that must rely on their own nonlocal structure in order to function correctly, and I make this notion of nonlocal structure precise. This characterization offers some new insights into why state seems to cause the problems that it does. I propose to construct a compiler that takes advantage of these insights to achieve some of the benefits normally associated with purely functional programming systems.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41478</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Richer Language for Describing Software Errors</title>
<link>https://hdl.handle.net/1721.1/41477</link>
<description>Toward a Richer Language for Describing Software Errors
Levitin, Samuel M.
Several approaches to the meaning and uses of errors in software development are discussed. An experiment involving a strong type-checking language, CLU, is described, and the results discussed in terms of the state of the art language for bug description. This method of bug description is found to be lacking sufficient detail to model the progress of software through its entire lifetime. A new method of bug description is proposed, which can describe bug types encountered not only in the current experiment but also in previous experiments. It is expected that this method is robust enough to be independent of the various factors of a software project that influence the realms in which bugs will occur.
</description>
<pubDate>Wed, 01 May 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41477</guid>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Proposal for Research With the Goal of Formulating a Computational Theory of Rational Action</title>
<link>https://hdl.handle.net/1721.1/41476</link>
<description>A Proposal for Research With the Goal of Formulating a Computational Theory of Rational Action
Batali, John
A theory of rational action can be used to determine the right action to perform in a situation. I will develop a theory of rational action in which an agent has access to an explicit theory of rationality. The agent makes use of this theory when it chooses its actions, including the actions involved in determining how to apply the theory. The Intentional states of the agent are realized in states and processes of its physical body. The body of the agent is a computational entity whose operations are under the control of a program. The agent has full access to that program and controls its actions by manipulating that program. I will illustrate the theory by implementing a system which simulates the actions a rational agent takes in various situations.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41476</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Flow Graph Matching for Automated Program Recognition</title>
<link>https://hdl.handle.net/1721.1/41475</link>
<description>Parallel Flow Graph Matching for Automated Program Recognition
Ritto, Patrick M.
A flow graph matching algorithm has been implemented on the Connection Machine which employs parallel techniques to allow efficient subgraph matching. By constructing many different matchings in parallel, the algorithm is able to perform subgraph matching in polynomial time in the size of the graphs. The automated program recognition system can use this algorithm to help make a more efficient flow graph parser. The process of automated program recognition involves recognizing familiar data structures and algorithmic fragments (called clichés) in a program so that a hierarchical description of the program can be constructed. The recognition is done by representing the program as a flow graph and parsing it with a graph grammar which encodes the clichés. In order to find clichés in the midst of unfamiliar code, it is necessary to run the parser on all possible subgraphs of the graph, thus starting the parser an exponential number of times. This is too inefficient for practical use on large programs, so this algorithm has been implemented to allow the matchings to be performed in polynomial time.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41475</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exceptional Situations in Lisp</title>
<link>https://hdl.handle.net/1721.1/41474</link>
<description>Exceptional Situations in Lisp
Pitman, Kent M.
Frequently, it is convenient to describe a program in terms of the normal situations in which it will be used, even if such a description does not describe its complete behavior in all circumstances. This paper surveys the issues surrounding the description of program behavior in exceptional situations.
</description>
<pubDate>Fri, 01 Feb 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41474</guid>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Structures of Everyday Life</title>
<link>https://hdl.handle.net/1721.1/41473</link>
<description>The Structures of Everyday Life
Agre, Philip E.
This note descends from a talk I gave at the AI Lab's Revolving Seminar series in November 1984. I offer it as an informal introduction to some work I've been doing over the last year on common sense reasoning. Four themes wander in and out.&#13;
1) Computation provides an observation vocabulary for introspection. With a little work, you can learn to exhume your models of everyday activities. This method can provide empirical grounding for computational theories of the central systems of mind.&#13;
2) The central systems of mind arise in each of us as a rational response to the impediments to living posed by the laws of computation. One of these laws is that all search problems (theorem proving for example) are intractable. Another is that no one model of anything is good enough for all tasks. Reasoning from these laws can provide theoretical grounding for computational theories of the central systems of mind.&#13;
3) Mental models tend to form mathematical lattices under the relation variously called subsumption or generalization. Your mind puts a lot of effort into maintaining this lattice because it has so many important properties. One of these is that the more abstract models provide a normalized decomposition of world-situations that greatly constrains the search for useful analogies.&#13;
4) I have been using these ideas in building a computational theory of routines, the frequently repeated and phenomenologically automatic rituals of which most of daily life is made. I describe this theory briefly.
</description>
<pubDate>Fri, 01 Feb 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41473</guid>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Partial Mechanical Design Compiler</title>
<link>https://hdl.handle.net/1721.1/41472</link>
<description>A Partial Mechanical Design Compiler
Ward, Allen C.
I have implemented a simple "mechanical design compiler", that is, a program which can convert high-level descriptions of a mechanical design into detail descriptions. (Human interaction is sometimes required.) The program operates in the domain of power transmission equipment composed of discrete, purchasable components. I describe a semantic theory which assigns meanings to the high-level descriptions, and a set of operations on statements in a "specification language" which perform some of the reasoning required by the "compilation" process.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41472</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tradeoffs in Designing a Parallel Architecture for the Apiary</title>
<link>https://hdl.handle.net/1721.1/41221</link>
<description>Tradeoffs in Designing a Parallel Architecture for the Apiary
Manning, Carl R.
The Apiary is an abstract computer architecture designed for performing computation based on the idea of message passing between dynamic computational objects called actors. An apiary connotes a community of worker bees busily working together; similarly, the Apiary architecture is made of many workers (processing elements) computing together. The Apiary architecture is designed to exploit the concurrency inherent in the actor model of computation by processing the messages to many different actors in parallel. This paper explores the nature of actor computations and how the Apiary performs computation with actors to give the reader some background before looking at some of the tradeoffs which must be made to design special purpose hardware for the Apiary.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41221</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Mobile Robot Project</title>
<link>https://hdl.handle.net/1721.1/41220</link>
<description>A Mobile Robot Project
Brooks, Rodney A.
We are building a mobile robot which will roam around the AI lab observing and later perhaps doing. Our approach to building the robot and its controlling software differs from that used in many other projects in a number of ways. (1) We model the world as three dimensional rather than two. (2) We build no special environment for our robot and insist that it must operate in the same real world that we inhabit. (3) In order to adequately deal with uncertainty of perception and control we build relational maps rather than maps embedded in a coordinate system, and we maintain explicit models of all uncertainties. (4) We explicitly monitor the computational performance of the components of the control system, in order to refine the design of a real time control system for mobile robots based on a special purpose distributed computation engine. (5) We use vision as our primary sense and relegate acoustic sensors to local obstacle detection. (6) We use a new architecture for an intelligent system designed to provide integration of many early vision processes, and robust real-time performance even in cases of sensory overload, failure of certain early vision processes to deliver much information in particular situations, and computation module failure.
</description>
<pubDate>Fri, 01 Feb 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41220</guid>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spurious Behaviors in Qualitative Prediction</title>
<link>https://hdl.handle.net/1721.1/41219</link>
<description>Spurious Behaviors in Qualitative Prediction
Hall, Robert J.
I examine the scope and causes of the spurious behavior problem in two widely different approaches to qualitative prediction, Sacks' PLR and Kuipers' QSIM. QSIM's proliferation of spurious behaviors and PLR's limited applicability and problematic extensibility lead me to propose a third, intermediate approach to qualitative prediction called the Phase Space Geometry approach. This has the potential advantages of predicting far fewer spurious behaviors than QSIM-like approaches and being directly applicable to nonlinear systems of all orders.
This paper was originally an Area Exam report, so may seem somewhat sketchy and incomplete.
</description>
<pubDate>Tue, 01 Mar 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41219</guid>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Associative Learning of Standard Regularizing Operators in Early Vision</title>
<link>https://hdl.handle.net/1721.1/41218</link>
<description>Associative Learning of Standard Regularizing Operators in Early Vision
Poggio, Tomaso; Hurlbert, Anya
Standard regularization methods can be used to solve satisfactorily several problems in early vision, including edge detection, surface reconstruction, the computation of motion and the recovery of color. In this paper, we suggest (a) that quadratic variational principles corresponding to standard regularization methods are equivalent to a linear regularizing operator acting on the data and (b) that this operator can be synthesized through associative learning. The synthesis of the regularizing operator involves the computation of the pseudoinverse of the data. The pseudoinverse can be computed by iterative methods that can be implemented in analog networks. Possible implications for biological visual systems are also discussed.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41218</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Intensional and Extensional Representations in Simulation</title>
<link>https://hdl.handle.net/1721.1/41217</link>
<description>The Role of Intensional and Extensional Representations in Simulation
Brotsky, Daniel
I review three systems which do simulation in different domains. I observe the following commonality in the representations underlying the simulations:&#13;
• The representations used for individuals tend to be domain-dependent. These representations are highly structured, concentrating in one place all the information concerning any particular individual. I call these representations intensional because two such representations are considered equal if their forms are identical.&#13;
• With important exceptions, the representations used for classes of individuals tend to be domain-independent. These representations are unstructured sets of predications involving the characteristics of class members. I call these representations extensional because two such representations are considered equal if the classes they specify are identical.&#13;
I draw out various ramifications of this dichotomy, and speculate as to its cause. In conclusion, I suggest research into the process of debugging extensional class representations and the development of intensional ones.
This paper was prepared as the author's area examination.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41217</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Novice's Guide to the UNIX at the AI Laboratory Version 1.0</title>
<link>https://hdl.handle.net/1721.1/41216</link>
<description>The Novice's Guide to the UNIX at the AI Laboratory Version 1.0
Highleyman, Liz A.
This is a manual for complete beginners. It requires little knowledge of the MIT computer systems, and assumes no knowledge of the UNIX operating system. This guide will show you how to log onto the AI Lab's SUN system using a SUN III or similar workstation or a non-dedicated terminal. Many of the techniques described will be applicable to other computers running UNIX. You will learn how to use various operating system and network features, send and receive electronic mail, create and edit files using GNU EMACS, process text using YTEX, and print your files.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41216</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The EIGHT Manual: A System for Geometric Modelling and Three-Dimensional Graphics on the Lisp Machine</title>
<link>https://hdl.handle.net/1721.1/41215</link>
<description>The EIGHT Manual: A System for Geometric Modelling and Three-Dimensional Graphics on the Lisp Machine
Donald, Bruce R.
We describe a simple geometric modelling system called Eight which supports interactive creation, editing, and display of three-dimensional polyhedral solids. Perspective views of a polyhedral environment may be generated, and hidden surfaces removed. Eight proved useful for creating world models, and as an underlying system for modelling object interaction in robotics research and applications. It is documented here in order to make the facility available to other members of the Artificial Intelligence Laboratory.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41215</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>BUILD -- A System Construction Tool</title>
<link>https://hdl.handle.net/1721.1/41214</link>
<description>BUILD -- A System Construction Tool
Robbins, Richard E.
BUILD is a proposed tool for constructing systems from existing modules. BUILD system descriptions are composed of module declarations and assertions of how modules refer to each other. An extensible library of information about module types and module interaction types is maintained. The library contains information that allows BUILD to derive construction dependencies from the module declarations and referencing patterns enumerated in system descriptions. BUILD will support facilities not adequately provided by existing tools, including automatic derivation of system descriptions, patching of systems, and incorporation of information about how modules change (e.g. the ability to differentiate between the effect of adding a function definition and the effect of adding a comment).
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41214</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Proposal For An Intelligent Debugging Assistant</title>
<link>https://hdl.handle.net/1721.1/41213</link>
<description>A Proposal For An Intelligent Debugging Assistant
Kuper, Ron I.
There are many ways to find bugs in programs. For example, observed input and output values can be compared to predicted values. An execution trace can be examined to locate errors in control flow. The utility of these and other strategies depends on the quality of the specifications available. The Debugging Assistant chooses the most appropriate debugging strategy based on the specification information available and the context of the bug. Particular attention has been given to applying techniques from the domain of hardware troubleshooting to the domain of software debugging. This has revealed two important differences between the two domains: (1) Unlike circuits, programs rarely come with complete specifications of their behavior, and (2) Unlike circuits, the cost of probing inputs and outputs of programs is low.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41213</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Routing Thoughts</title>
<link>https://hdl.handle.net/1721.1/41212</link>
<description>Routing Thoughts
Poggio, Tomaso A
In a parallel machine with many thousands of processors the routing of information between processors is a key task, which turns out to require as much hardware and perhaps more sophistication than local computing itself. There are at least two basic engineering solutions to the routing problem: one, followed by most research projects, is of the "packet switching" type, which behaves like a mail service, with data carrying addresses to route the packet through the system. The other, more similar to a traditional telephone system, has connections made and broken (or enabled and disabled) as required for exchanging information. These solutions, based on silicon technology and digital electronics, may be quite different from the routing solutions used by the prototypical parallel machine — the brain.&#13;
This paper asks questions concerning routing information in parallel machines with an eye to biological wetware. It is divided into four disconnected parts that do not contain finished results but consist of suggestions for future speculations:&#13;
1) How to make Infinity Small.&#13;
2) Routers and Brains&#13;
3) Classifying Parallel Machines&#13;
4) The Problem of Remapping
This working paper has been brought to you by the modern wonders of microcassette dictating equipment, through which Professor Poggio can now cough up working papers while doing something else more important.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41212</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>TEMPEST -- A Template Editor for Structured Text</title>
<link>https://hdl.handle.net/1721.1/41211</link>
<description>TEMPEST -- A Template Editor for Structured Text
Sterpe, Peter
This paper proposes an editing tool named TEMPEST (TEMPlate Editor for Structured Text) whose goal is to extend a text editing environment by using templates to incorporate into it some knowledge of the structure of the text that is being edited. TEMPEST's functionality is focused on the structural aspects of text editing that are not well supported by typical text editors. In addition, it uses a text-based approach which affords a wide range of applicability. A scenario is given to illustrate its use.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41211</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Improvement by Automatic Redistribution of Intermediate Results</title>
<link>https://hdl.handle.net/1721.1/41210</link>
<description>Program Improvement by Automatic Redistribution of Intermediate Results
Hall, Robert J.
The problem of automatically improving the performance of computer programs has many facets. A common source of program inefficiency is the use of abstraction techniques in program design: general tools used in a specific context often do unnecessary or redundant work. Examples include needless copy operations, redundant subexpressions, multiple traversals of the same datastructure and maintenance of overly complex data invariants. I propose to focus on one broadly applicable way of improving a program's performance: redistributing intermediate results so that computation can be avoided. I hope to demonstrate that this is a basic principle of optimization from which many of the current approaches to optimization may be derived. I propose to implement a system that automatically finds and exploits opportunities for redistribution in a given program. In addition to the program source, the system will accept an explanation of correctness and purpose of the code.&#13;
Beyond the specific task of program improvement, I anticipate that the research will contribute to our understanding of the design and explanatory structure of programs. Major results will include (1) definition and manipulation of representation of correctness and purpose of a program's implementation, and (2) definition, construction, and use of a representation of a program's dynamic behavior.
This paper was originally a Ph.D. thesis proposal.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41210</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chapter and Verse Program Description</title>
<link>https://hdl.handle.net/1721.1/41209</link>
<description>Chapter and Verse Program Description
Turrisi, Elizabeth K.
The design of a program is rarely a straightforward mapping from the problem solution to the code. More frequently, fragments of high level concepts are distributed over one or more modules such that it is hard to identify the fragments which belong to one particular concept. These mappings have to be untangled and described in order to give a complete picture of how the program implements the ideas.&#13;
The Chapter and Verse method of program description emphasizes the high level concepts which underlie a program, and the relationship between these concepts and the low level structure of program code. The organization of the description is similar to that of a textbook. The Chapter and Verse description aids in the use, modification, and evaluation of computer programs by promoting a full understanding of the programs.
</description>
<pubDate>Fri, 01 Jun 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41209</guid>
<dc:date>1984-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Switching Between Discrete and Continuous Models To Predict Genetic Activity</title>
<link>https://hdl.handle.net/1721.1/41208</link>
<description>Switching Between Discrete and Continuous Models To Predict Genetic Activity
Weld, Daniel S.
Molecular biologists use a variety of models when they predict the behavior of genetic systems. A discrete model of the behavior of individual macromolecular elements forms the foundation for their theory of each system. Yet a continuous model of the aggregate properties of the system is necessary for many predictive tasks.&#13;
I propose to build a computer program, called PEPTIDE, which can predict the behavior of moderately complex genetic systems by performing qualitative simulation on the discrete model, generating a continuous model from the discrete model through aggregation, and applying limit analysis to the continuous model. PEPTIDE's initial knowledge of a specific system will be represented with a discrete model which distinguishes between macromolecule structure and function and which uses five atomic processes as its functional primitives. Qualitative Process (QP) theory [Forbus 83] provides the representation for the continuous model.&#13;
Whenever a system has multiple models of a domain, the decision of which model to use at a given time becomes a critically important issue. Knowledge of the relative significance of differing element concentrations and the behavior of process structure cycles will allow PEPTIDE to determine when to switch reasoning modes.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41208</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Introduction to Using the Window System</title>
<link>https://hdl.handle.net/1721.1/41207</link>
<description>Introduction to Using the Window System
Weinreb, Daniel; Moon, David A.
This document is a draft copy of a portion of the Lisp Machine window system manual. It is being published in this form now to make it available, since the complete window system manual is unlikely to be finished in the near future. The information in this document is accurate as of system 67, but is not guaranteed to remain 100% accurate. Understanding some portions of this document may depend on background information which is not contained in any published documentation.&#13;
This paper is a portion of a document which will explain how a programmer may make use of and extend the facilities in the Lisp machine known collectively as the Window System.
</description>
<pubDate>Thu, 14 Oct 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41207</guid>
<dc:date>1982-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical Shape from Shading and Occluding Contours in a Single View</title>
<link>https://hdl.handle.net/1721.1/41206</link>
<description>Numerical Shape from Shading and Occluding Contours in a Single View
Ikeuchi, Katsushi
An iterative method of using occluding boundary information is proposed to compute surface slope from shading.&#13;
We use a stereographic space rather than the more commonly used gradient space in order to express occluding boundary information. Further, we use "average" smoothness constraints rather than the more obvious "closed loop" smoothness constraints. We develop alternate constraints from the definition of surface smoothness, since the closed loop constraints do not work in the stereographic space. We solve the image irradiance equation iteratively using a Gauss-Seidel method applied to the constraints and boundary information. Numerical experiments show that the method is effective. Finally, we analyze SEM (Scanning Electron Microscope) pictures using this method. Other applications are also proposed.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research under Office of Naval Research contract N00014-77-C-0389.&#13;
Fig. 2-A and Fig. 26 are used from "Magnification" by David Scharf with the permission of the author.
</description>
<pubDate>Thu, 01 Nov 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41206</guid>
<dc:date>1979-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Semantic Description from Drawings of Scenes with Shadows</title>
<link>https://hdl.handle.net/1721.1/41205</link>
<description>Generating Semantic Description from Drawings of Scenes with Shadows
Waltz, David L.
The research reported here concerns the principles used to automatically generate three-dimensional representations from line drawings of scenes. The computer programs involved look at scenes which consist of polyhedra and which may contain shadows and various kinds of coincidentally aligned scene features. Each generated description includes information about edge shape (convex, concave, occluding, shadow, etc.), about decomposition of the scene into bodies, about the type of illumination for each region (illuminated, projected shadow, or oriented away from the light source), and about the spatial orientation of regions. The methods used are based on the labeling schemes of Huffman and Clowes; this research provides a considerable extension to their work and also gives theoretical explanation to the heuristic scene analysis work of Guzman, Winston, and others.
This report reproduces a thesis of the same title submitted to the Department of Electrical Engineering, Massachusetts Institute of Technology, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, September 1972.
</description>
<pubDate>Wed, 01 Nov 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41205</guid>
<dc:date>1972-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Heterarchical Program for Recognition of Polyhedra</title>
<link>https://hdl.handle.net/1721.1/41204</link>
<description>A Heterarchical Program for Recognition of Polyhedra
Shirai, Yoshiaki
Recognition of polyhedra by a heterarchical program is presented. The program is based on the strategy of recognizing objects step by step, at each time making use of the previous results. At each stage, the most obvious and simple assumption is made and the assumption is tested. To find a line segment, a range of search is proposed. Once a line segment is found, more of the line is determined by tracking along it. Whenever a new fact is found, the program tries to reinterpret the scene taking the obtained information into consideration. Results of the experiment using an image dissector are satisfactory for scenes containing a few blocks and wedges. Some limitations of the present program and proposals for future development are described.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Reproduction of this document, in whole or in part, is permitted for any purpose of the United States Government.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41204</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Planning System for Robot Construction Tasks</title>
<link>https://hdl.handle.net/1721.1/41203</link>
<description>A Planning System for Robot Construction Tasks
Fahlman, Scott E.
This paper describes BUILD, a computer program which generates plans for building specified structures out of simple objects such as toy blocks. A powerful heuristic control structure enables BUILD to use a number of sophisticated construction techniques in its plans. Among these are the incorporation of pre-existing structure into the final design, pre-assembly of movable sub-structures on the table, and the use of extra blocks as temporary supports and counterweights in the course of construction.&#13;
BUILD does its planning in a modeled 3-space in which blocks of various shapes and sizes can be represented in any orientation and location. The modeling system can maintain several world models at once, and contains modules for displaying states, testing them for inter-object contact and collision, and for checking the stability of complex structures involving frictional forces.&#13;
Various alternative approaches are discussed, and suggestions are included for the extension of BUILD-like systems to other domains. Also discussed are the merits of BUILD's implementation language, CONNIVER, for this type of problem solving.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.&#13;
This report reproduces a thesis of the same title submitted to the Department of Electrical Engineering, Massachusetts Institute of Technology, in partial fulfillment of the requirements for the degree of Bachelor of Science and Master of Science, June 1973.
</description>
<pubDate>Tue, 01 May 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41203</guid>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning is Just a Way of Avoiding Figuring Out What To Do Next</title>
<link>https://hdl.handle.net/1721.1/41202</link>
<description>Planning is Just a Way of Avoiding Figuring Out What To Do Next
Brooks, Rodney A.
The idea of planning and plan execution is just an intuition-based decomposition. There is no reason it has to be that way. Most likely in the long term, real empirical evidence from systems we know to be built that way (from designing them like that) will determine whether it's a very good idea or not. Any particular planner is simply an abstraction barrier. Below that level we get a choice of whether to slot in another planner or to place a program which does the right thing. Why stop there? Maybe we can go up the hierarchy and eliminate the planners there too. To do this we must move from a state-based way of reasoning to a process-based way of acting.
</description>
<pubDate>Tue, 01 Sep 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41202</guid>
<dc:date>1987-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>CL1 Manual</title>
<link>https://hdl.handle.net/1721.1/41201</link>
<description>CL1 Manual
Bawden, Alan
CL1 is a prototyping language for programming a Connection Machine. It supports a model of the Connection Machine as a collection of tiny conventional machines (process elements), each with its own independent program counter.
</description>
<pubDate>Thu, 01 Sep 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41201</guid>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Cooperative Networks</title>
<link>https://hdl.handle.net/1721.1/41200</link>
<description>Design of Cooperative Networks
Marroquin, J. L.
In this paper we analyse several approaches to the design of Cooperative Algorithms for solving a general problem: that of computing the values of some property over a spatial domain, when these values are constrained (but not uniquely determined) by some observations, and by some a priori knowledge about the nature of the solution (smoothness, for example).&#13;
Specifically, we discuss the use of variational techniques, stochastic approximation methods for global optimization, and linear threshold networks. Finally, we present a new approach, based on the interconnection of Winner-take-all networks, for which it is possible to establish precise convergence results, including bounds on the rate of convergence.
</description>
<pubDate>Fri, 01 Jul 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41200</guid>
<dc:date>1983-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>MIT Mobile Robots - What's Next?</title>
<link>https://hdl.handle.net/1721.1/41199</link>
<description>MIT Mobile Robots - What's Next?
Flynn, Anita M.; Brooks, Rodney A.
The MIT Mobile Robot Project began in January of 1985 with the objective of building machines that could operate autonomously and robustly in dynamically changing environments. We now have four working robots, each progressively more intelligent and sophisticated. All incorporate some rather novel ideas about how to build a control system that can adequately deal with complex environments. The project has also contributed some innovative and creative technical solutions in terms of putting together sensors, actuators, power supplies and processing power into whole systems that actually work. From our experiences over the past two and a half years, we have gained insight into the real issues and problems and what the goals should be for future robotics research. This paper gives our perspectives on mobile robotics: our objectives, experiences, mistakes and future plans.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41199</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Differential Operators for Edge Detection</title>
<link>https://hdl.handle.net/1721.1/41198</link>
<description>Differential Operators for Edge Detection
Torre, V.; Poggio, Tomaso A.
We present several results characterizing two differential operators used for edge detection: the Laplacian and the second directional derivative along the gradient. In particular, (a) we give conditions for coincidence of the zeros of the two operators, and (b) we show that the second derivative along the gradient has the same zeros as the normal curvature in the gradient direction.&#13;
Biological implications are also discussed. An experiment is suggested to test which of the two operators may be used by the human visual system.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41198</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formalizing Reusable Software Components</title>
<link>https://hdl.handle.net/1721.1/41197</link>
<description>Formalizing Reusable Software Components
Rich, Charles; Waters, Richard C.
There has been a long-standing desire in computer science for a way of collecting and using libraries of standard software components. Unfortunately, there has been only limited success in actually doing this. We believe that the lack of success stems not from any resistance to the idea, nor from any lack of trying, but rather from the difficulty of choosing an appropriate formalism for representing components. In this paper we define five desiderata for a good formalization of reusable software components and discuss many of the formalisms which have been used for representing components in light of these desiderata. We then briefly describe a formalism we are developing — the Plan Calculus — which seeks to satisfy these desiderata by combining the best features of prior formalisms.
This paper has been accepted by the ITT Workshop on Reusability in Programming, Newport RI, September 7-9, 1983.
</description>
<pubDate>Fri, 01 Jul 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41197</guid>
<dc:date>1983-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Merging Illustrations and Printing on Big Paper</title>
<link>https://hdl.handle.net/1721.1/41196</link>
<description>Merging Illustrations and Printing on Big Paper
Roylance, Gerald
A how-to guide for some of the printing utilities in the AI Lab. Describes how TEX files are processed and how some illustrations may be merged into the final copy. Also describes how to use TEX to print on 8.5x14 (legal) and 11x17 size paper.
</description>
<pubDate>Wed, 01 Jul 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41196</guid>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Inclusion</title>
<link>https://hdl.handle.net/1721.1/41195</link>
<description>Virtual Inclusion
Chapman, David; Agre, Philip E.
Several recent knowledge-representation schemes have used virtual copies for storage efficiency. Virtual copies are confusing. In the course of trying to understand, implement, and use Jon Doyle's SDL virtual copy mechanism, we encountered difficulties that led us to define an extension of virtual copies we call virtual inclusion. Virtual inclusion has interesting similarities to the environment structures maintained by a program in a block-structured language. It eliminates the clumsy typed part mechanism of SDL, and handles properly a proposed test of sophisticated virtual copy schemes.
</description>
<pubDate>Thu, 01 Sep 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41195</guid>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Naive Problem Solving and Naive Mathematics</title>
<link>https://hdl.handle.net/1721.1/41194</link>
<description>Naive Problem Solving and Naive Mathematics
Chapman, David
AI problem solvers have almost always been given a complete and correct axiomatization of their problem domain and of the operators available to change it. Here I discuss a paradigm for problem solving in which the problem solver initially is given only a list of available operators, with no indication as to the structure of the world or the behavior of the operators. Thus, to begin it is "blind" and can only stagger about in the world tripping over things until it begins to understand what is going on. Eventually it will learn enough to solve problems in the world as well as if the world had been explained to it initially. I call this paradigm naive problem solving. The difficulty of adequately formalizing all but the most constrained domains makes naive problem solving desirable.&#13;
I have implemented a naive problem solver that learns to stack blocks and to use an elevator. It learns by finding instances of "naive mathematical cliches" which are common mental models that are likely to be useful in any domain.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41194</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The New Idiot's Guide to OZ</title>
<link>https://hdl.handle.net/1721.1/41193</link>
<description>The New Idiot's Guide to OZ
Highleyman, Liz A.
This is a manual for complete beginners. It assumes no knowledge of the MIT computer systems. This guide will teach you how to log onto the computer called OZ, a DEC PDP-20 computer running the TWENEX (TOPS-20) operating system. You will learn how to use various operating system features, send and receive electronic mail, create and edit files using EMACS, process text using YTEX, and print out your files. This manual has a companion on-line directory on OZ, called &lt;LIZ.GUIDE&gt;, which contains sample programs and examples to use in conjunction with this guide.
</description>
<pubDate>Mon, 01 Feb 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41193</guid>
<dc:date>1988-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfacing to the Programmer's Apprentice</title>
<link>https://hdl.handle.net/1721.1/41192</link>
<description>Interfacing to the Programmer's Apprentice
Pitman, Kent
In this paper, we discuss the design of a user interface to the Knowledge Based Editor (KBE), a prototype implementation of the Programmer's Apprentice. Although internally quite sophisticated, the KBE hides most of its internal mechanisms from the user, presenting a simplified model of its behavior which is flexible and easy to use. Examples are presented to illustrate the decisions which have led from high-level design principles such as "integration with existing tools" and "simplicity of user model" to a working implementation which is true to those principles.
This paper has been submitted to SoftFair, an IEEE/NBS/SIGSOFT co-sponsored conference on software development tools, techniques, and alternatives, which will be held at the Hyatt Regency Crystal City, Arlington, VA., July 26-28, 1983.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41192</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing Change for Common-Sense Physical Reasoning</title>
<link>https://hdl.handle.net/1721.1/41191</link>
<description>Representing Change for Common-Sense Physical Reasoning
Doyle, Richard J.
Change pervades every moment of our lives. Much of our success in dealing with a constantly changing world is based in common-sense physical reasoning about processes and physical systems. Processes are the way quantities interact over time. Physical systems can be described as a set of quantities and the processes that operate on them. Representations for causality, time, and quantity are needed to fully characterize change in this domain. Several ideas for these representations are examined and synthesized in this paper towards the goal of constructing a framework to support understanding of, reasoning about, and learning how things work.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41191</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Condor Programmer's Manual - Version II</title>
<link>https://hdl.handle.net/1721.1/41190</link>
<description>The Condor Programmer's Manual - Version II
Narasimhan, Sundar; Siegel, David M.
This is the CONDOR programmer's manual, which describes the hardware and software that form the basis of the real-time computational architecture built originally for the Utah-MIT hand. The architecture has been used successfully to control the hand and the MIT-Serial Link Direct Drive Arm in the past. A number of such systems are being built to address the computational needs of other robotics research efforts in and around the lab. This manual, which is intended primarily for programmers/users of the CONDOR system, represents our effort at documenting the system so that it can be a generally useful research tool.
</description>
<pubDate>Wed, 01 Jul 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41190</guid>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Connection Machine RAM Chip</title>
<link>https://hdl.handle.net/1721.1/41189</link>
<description>The Connection Machine RAM Chip
Flynn, Anita M.
This document describes the three-transistor NMOS dynamic RAM circuit used in the Connection Machine. It was designed and implemented by Brewster Kahle, with the assistance of Jim Cherry, Danny Hillis and Tom Knight. Prototypes were fabricated through the ARPA MOSIS facility, using both four and three micron design rules. Jim Li and I tested both runs this fall. They work. This document describes how.
</description>
<pubDate>Mon, 03 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41189</guid>
<dc:date>1983-01-03T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of Manipulators with Less Than One Degree of Freedom</title>
<link>https://hdl.handle.net/1721.1/41188</link>
<description>Dynamics of Manipulators with Less Than One Degree of Freedom
Hillis, D.
We have developed an efficient Lagrangian formulation of manipulators with small numbers of degrees of freedom. The efficiency derives from the lack of velocities, accelerations, and generalized forces. The number of additions and multiplications remains constant, independent of the number of joints, as long as the number of joints remains less than one. While this is a restricted class of manipulators, we believe that it is important to understand it fully before studying more complex systems. Manipulators with less than one degree of freedom are by far the most common manipulators used by industry. We have also noticed that many of the multiple-degree-of-freedom manipulators in our laboratory tend to be used in a zero-degree-of-freedom mode. With this formulation of the dynamics it should be possible in principle to compute the Lagrangian dynamics of manipulators with less than one degree of freedom in real time.
Acknowledgments. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. My thanks to Marvin Minsky, Phil Agre, and David Chapman for pointing out relevant trends in current robotics research. A.I. Laboratory Working Papers are produced for internal circulation, and may contain information that is, for example, too preliminary or too detailed for formal publication. It is not intended that they should be considered papers to which reference can be made in the literature.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41188</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Interaction Between Truth Maintenance, Equality, and Pattern-Directed Invocation: Issues of Completeness and Efficiency</title>
<link>https://hdl.handle.net/1721.1/41187</link>
<description>The Interaction Between Truth Maintenance, Equality, and Pattern-Directed Invocation: Issues of Completeness and Efficiency
Feldman, Yishai A.; Rich, Charles
We have implemented a reasoning system, called BREAD, which includes truth maintenance, equality, and pattern-directed invocation. This paper reports on the solution of two technical problems arising out of the interaction between these mechanisms. The first result is an algorithm which ensures the completeness of pattern-directed invocation with respect to equality. The second result is an algorithm which reduces a class of redundant proofs.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41187</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Empirical Study of Program Modification Histories</title>
<link>https://hdl.handle.net/1721.1/41186</link>
<description>An Empirical Study of Program Modification Histories
Zelinka, Linda M.
Large programs undergo many changes before they run in a satisfactory manner. For many large programs, modification histories are kept which record every change that is made to the program. By studying these records, patterns of program evolution can be identified. This paper describes a taxonomy of types of changes which was developed by studying several such histories. In addition, it discusses a possible application of this classification in an interactive tool for the updating of user documentation.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41186</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>What to Read: A Biased Guide to AI Literacy for the Beginner</title>
<link>https://hdl.handle.net/1721.1/41185</link>
<description>What to Read: A Biased Guide to AI Literacy for the Beginner
Agre, Philip E.
This note tries to provide a quick guide to AI literacy for the beginning AI hacker and for the experienced AI hacker or two whose scholarship isn't what it should be. Most will recognize it as the same old list of classic papers, give or take a few that I feel to be under- or over-rated. It is not guaranteed to be thorough or balanced or anything like that.
Acknowledgements. It was Ken Forbus' idea, and he, Howie Shrobe, Dan Weld, and John Batali read various drafts. Dan Huttenlocher and Tom Knight helped with the speech recognition section. The science fiction section was prepared with the aid of my SF/AI editorial board, consisting of Carl Feynman and David Wallace, and of the ArpaNet SF-Lovers community. Even so, all responsibility rests with me.
</description>
<pubDate>Wed, 01 Nov 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41185</guid>
<dc:date>1972-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gnat Robots (And How They Will Change Robotics)</title>
<link>https://hdl.handle.net/1721.1/41184</link>
<description>Gnat Robots (And How They Will Change Robotics)
Flynn, A. M.
A new concept in mobile robots is proposed, namely that of a gnat-sized autonomous robot with on-board sensors, brains, actuators and power supplies, all fabricated on a single piece of silicon. Recent breakthroughs in computer architectures for intelligent robots, sensor integration algorithms and micromachining techniques for building on-chip micromotors, combined with the ever decreasing size of integrated logic, sensors and power circuitry have led to the possibility of a new generation of mobile robots which will vastly change the way we think about robotics.&#13;
Forget about today's first generation robots: costly, bulky machines with parts acquired from many different vendors. What will appear will be cheap, mass produced, slimmed down, integrated robots that need no maintenance, no spare parts, and no special care. The cost advantages of these robots will create new worlds of applications.&#13;
Gnat robots will offer a new approach in using automation technology. We will begin to think in terms of massive parallelism: using millions of simple, cheap, gnat robots in place of one large complicated robot. Furthermore, disposable robots will even become realistic.&#13;
This paper outlines how to build gnat robots. It discusses the technology thrusts that will be required for developing such machines and sets forth some strategies for design. A close look is taken at the tradeoffs involved in choosing components of the system: locomotion options, power sources, types of sensors and architectures for intelligence.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41184</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Talking to the Puma</title>
<link>https://hdl.handle.net/1721.1/41183</link>
<description>Talking to the Puma
Sobalvarro, Patrick G.
The AI Lab's Unimation Puma 600 is a general-purpose industrial robot arm that has been interfaced to a Lisp Machine for use in robotics projects at the lab. It has been fitted with a force-sensing wrist. The Puma is capable of moving payloads of up to 5 pounds at up to 1 meter per second, with positioning accuracy to within a millimeter.&#13;
This paper is a primer on the control of the Puma from a Lisp Machine. The current Lisp Machine interface is preliminary; the Lisp Machine communicates with the Puma over a serial line in Unimation's VAL language. The interface will probably change over the next year; however, the commands documented in this paper will probably remain much the same.
</description>
<pubDate>Wed, 01 Sep 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41183</guid>
<dc:date>1982-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Program Description</title>
<link>https://hdl.handle.net/1721.1/41182</link>
<description>Automated Program Description
Cyphers, D. Scott
The Programmer's apprentice (PA) is an automated program development tool. The PA depends upon a library of common algorithms (cliches) as the source of its knowledge about programming. The PA uses these cliches to understand how a program is implemented. This knowledge may also be used to explain to a user of the PA how the program is implemented.&#13;
The problem with any explanation or description is knowing how much information to present, and how much information to hide. A set of simple heuristics for doing this can be used with the cliche representation of a program to produce reasonable descriptions of parts of programs. The system described combines "canned" phrases corresponding to cliche parts to form explanations. The process is fast and appears to be easily extensible to future versions of the PA and other domains.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41182</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>ACE: A Cliché-based Program Structure Editor</title>
<link>https://hdl.handle.net/1721.1/41181</link>
<description>ACE: A Cliché-based Program Structure Editor
Tan, Yang Meng
ACE extends the syntax-directed paradigm of program editing by adding support for programming clichés. A programming cliché is a standard algorithmic fragment. ACE supports the rapid construction of programs through the combination of clichés selected from a cliché library.&#13;
ACE is also innovative in the way it supports the basic structure editor operations. Instead of being based directly on the grammar for a programming language, ACE is based on a modified grammar which is designed to facilitate editing. Uniformity of the user interface is achieved by encoding the modified grammar as a set of clichés.&#13;
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41181</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Getting Started Computing at the AI Lab</title>
<link>https://hdl.handle.net/1721.1/41180</link>
<description>Getting Started Computing at the AI Lab
Stacy, Christopher C.
This document describes the computing facilities at the M.I.T. Artificial Intelligence Laboratory, and explains how to get started using them. It is intended as an orientation document for newcomers to the lab, and will be updated by the author from time to time.
</description>
<pubDate>Tue, 07 Sep 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41180</guid>
<dc:date>1982-09-07T00:00:00Z</dc:date>
</item>
<item>
<title>TRIG: An Interactive Robotic Teach System</title>
<link>https://hdl.handle.net/1721.1/41179</link>
<description>TRIG: An Interactive Robotic Teach System
McLaughlin, James R.
Currently, it is difficult for a non-programmer to generate a complex sensor-based robotic program. Most robot programming methods either generate only very simple programs or are useful only to programmers. This paper presents an interactive teach system that will allow non-programmers to create a program for a six degree of freedom mechanical robot. In addition to conventional guiding capabilities, the teach system will allow the user to create complex programs containing sensor-based moves (move until touch), loops, and branches.
</description>
<pubDate>Tue, 01 Jun 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41179</guid>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery Systems: From AM to CYRANO</title>
<link>https://hdl.handle.net/1721.1/41178</link>
<description>Discovery Systems: From AM to CYRANO
Haase, Ken
The emergence in 1976 of Doug Lenat's mathematical discovery program AM [Len76] [Len82a] was met with surprise and controversy; AM's performance seemed to bring the dream of super-intelligent machines to our doorstep, with amazingly simple methods to boot. However, the seeming promise of AM was not borne out: no generation of automated super-mathematicians appeared. Lenat's subsequent attempts (with his work on the Eurisko program) to explain and alleviate AM's problems were something of a novelty in Artificial Intelligence research; AI projects are usually 'let lie' after a brief moment in the limelight with a handful of examples. Lenat's work on Eurisko revealed certain constraints on the design of discovery programs; in particular, Lenat discovered that a close coupling of representation syntax and semantics is necessary for a discovery program to prosper in a given domain. After Eurisko, my own work on the discovery program Cyrano has revealed more constraints on discovery processes in general; in particular, work on Cyrano has revealed a requirement of 'closure' in concept formation: the concepts generated by a discovery program's concept formation component must be usable as inputs to that same concept formation component. Beginning with a theoretical analysis of AM's actual performance, this paper presents a theory of discovery and goes on to present the implementation of an experiment, the CYRANO program, based on this theory. (This article is a preliminary version of an invited paper for the First International Symposium on Artificial Intelligence and Expert Systems, to be held in Berlin on May 18-22, 1987.)
</description>
<pubDate>Sun, 01 Mar 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41178</guid>
<dc:date>1987-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Code Generation in the Programmer's Apprentice</title>
<link>https://hdl.handle.net/1721.1/41177</link>
<description>Code Generation in the Programmer's Apprentice
Handsaker, Robert E.
The Programmer's Apprentice is a highly interactive program development tool. The user interface to the system relies on program text which is generated from an internal plan representation. The programs generated need to be easy for a programmer to read and understand. This paper describes a design for a code generation module which can be tailored to produce code which reflects the stylistic preferences of individual programmers.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41177</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aspects of the Rover Problem</title>
<link>https://hdl.handle.net/1721.1/41176</link>
<description>Aspects of the Rover Problem
Doyle, Richard J.
The basic task of a rover is to move about autonomously in an unknown environment. A working rover must have the following three subsystems which interact in various ways: 1) locomotion--the ability to move, 2) perception--the ability to determine the three-dimensional structure of the environment, and 3) navigation--the ability to negotiate the environment. This paper will elucidate the nature of the problem in these areas and survey approaches to solving them while paying attention to real-world issues.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41176</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge-Based Schematics Drafting: Aesthetic Configuration as a Design Task</title>
<link>https://hdl.handle.net/1721.1/41175</link>
<description>Knowledge-Based Schematics Drafting: Aesthetic Configuration as a Design Task
Valdes-Perez, Raul E.
Depicting an electrical circuit by a schematic is a tedious task that is a good candidate for automation. Programs that draft schematics with the usual algorithmic approach do not fully exploit knowledge of circuit function, relying mainly on the circuit topology. The extra-topological circuit characteristics are what an engineer uses to understand a schematic; human drafters take these characteristics into account when drawing a schematic.&#13;
This document presents a knowledge base and an architecture for drafting arithmetic digital circuits having a single theme. The relevance and limitations of this architecture and knowledge base for other types of circuit are explored.&#13;
It is argued that the task of schematics drafting is one of aesthetic design. The effect of aesthetic criteria on the program architecture is discussed. The circuit layout constraint language, the program's search regimen, and the backtracking scheme are highlighted and explained in detail.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41175</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hidden Cues in Random Line Stereograms</title>
<link>https://hdl.handle.net/1721.1/41174</link>
<description>Hidden Cues in Random Line Stereograms
Nishihara, H. K.; Poggio, Tomaso A.
Successful fusion of random-line stereograms with breaks in the vernier acuity range has been interpreted to suggest that the interpolation process underlying hyperacuity is parallel and preliminary to stereomatching. In this paper (a) we demonstrate with computer experiments that vernier cues are not needed to solve the stereomatching problem posed by these stereograms and (b) we provide psychophysical evidence that human stereopsis probably does not use vernier cues alone to achieve fusion of these random-line stereograms.
</description>
<pubDate>Thu, 01 Apr 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41174</guid>
<dc:date>1982-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Primer for TEX Users</title>
<link>https://hdl.handle.net/1721.1/41173</link>
<description>A Primer for TEX Users
Jones, Judi
TEX is our latest text formatter. It is designed specifically for technical text (e.g., mathematics), and produces much higher quality output than other formatters previously available. Donald Knuth designed TEX at Stanford and published a manual, TEX and METAFONT: New Directions in Typesetting, with "Everything you need to know about TEX." The original people who used TEX here set up their own macro files, but now Daniel Brotsky has developed a standardized macro package which does the types of formatting usually desired. This macro package will be referred to as TBase in this document.&#13;
The aim of this memo is to help you create your first TEX file, explain the basic commands for formatting (showing some examples), and clarify possible areas of confusion, giving pointers to the more technical documentation available for the advanced user. It is advisable for someone planning to use TEX to get copies of: INFO;TBASE INFO, NTEXLB;TBASE ORDER, NTEXLB;SAMPLE PRESS, NTEXLB;SAMPLE TEX and a copy of Knuth's manual. This document tries not to duplicate information already explained in the materials just mentioned - only to clarify some areas and set the information forth in an easily digestible manner.
</description>
<pubDate>Mon, 01 Mar 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41173</guid>
<dc:date>1982-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Critical Analysis of Programming in Societies of Behaviors</title>
<link>https://hdl.handle.net/1721.1/41172</link>
<description>Critical Analysis of Programming in Societies of Behaviors
Cudhea, Peter
Programming in societies of behavior-agents is emerging as a promising method for creating mobile robot control systems that are responsive both to internal priorities for action and to external world constraints. It is essentially a new approach to finding modularities in real-time control systems in which module boundaries are sought not between separate information processing functions, but between separate task-achieving units. Task achieving units for complex behaviors are created by merging together the task-achieving units from simpler component behaviors into societies with competing and cooperating parts. This paper surveys the areas of agreement and disagreement in four approaches to programming with societies of behaviors. By analyzing where the systems differ, both on what constitutes a task-achieving unit and on how to merge such units together, this paper hopes to lay the groundwork for future work on controlling robust mobile robots using this approach.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41172</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the Second Workshop on Distributed AI</title>
<link>https://hdl.handle.net/1721.1/41171</link>
<description>Report on the Second Workshop on Distributed AI
Davis, Randall
On June 24, 1981 twenty-five participants from organizations around the country gathered in MIT's Endicott House for the Second Annual Workshop on Distributed AI. The three-day workshop was designed as an informal meeting, centered mainly around brief research reports presented by each group, along with an invited talk. In keeping with the spirit of the meeting, this report was prepared as a distributed document, with each speaker contributing a summary of his remarks.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41171</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Guide to ITS Operations: Useful Spells and Incantations</title>
<link>https://hdl.handle.net/1721.1/41170</link>
<description>A Guide to ITS Operations: Useful Spells and Incantations
Stacy, Christopher C.
It is said that it is not wise to dabble in the Arts without care and caution, for the spell is at once subtle and dangerous: Look herein! For if you read carefully and closely, you can incant a Word of Magic, and the system might be revived.&#13;
This working paper describes crash recovery procedures for a DEC KA-10 computer running ITS, the Incompatible Timesharing System. It is intended for people not intimately familiar with the system internals who need to handle emergency operation of the system when a system maintainer is not available.
</description>
<pubDate>Wed, 27 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41170</guid>
<dc:date>1982-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>A Requirements Analyst's Apprentice: A Proposal</title>
<link>https://hdl.handle.net/1721.1/41169</link>
<description>A Requirements Analyst's Apprentice: A Proposal
Reubenstein, Howard
The Requirements Analyst's Apprentice (RAAP) partially automates the modeling process involved in creating a software requirement. It uses knowledge of the specific domain and general experience regarding software requirements to guide decisions made in the construction of a requirement. RAAP assists the analyst by maintaining consistency, detecting redundancy of description, and analyzing completeness relative to a known body of requirements experience. RAAP is a tool to be used by an analyst in his dealings with the customer. It helps him translate the customer's informal ideas into a requirements knowledge base. RAAP will have the ability to present its internal representation of the requirement in document form. Document-based requirements analysis is the state of the art. A computer-based, knowledge-based analysis system can provide improvement in quality, efficiency and maintainability over document-based requirements analysis and thus advance the state of the art towards automatic programming. RAAP takes a new approach to automating software development by concentrating on the modeling process involved in system construction (as opposed to the model translation process). By supporting the intelligent creation of perspicuous models, it is hoped that flaws will become self-revealing and the quality of software can be improved. Assistance is provided for the creation of "correct" models and for the analysis of the implications of modeling decisions.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41169</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Assq Chip and Its Progeny</title>
<link>https://hdl.handle.net/1721.1/41168</link>
<description>The Assq Chip and Its Progeny
Agre, Philip E.
The Assq Chip lives on the memory bus of the Scheme-81 chip of Sussman et al. and serves as a utility for the computation of a number of functions concerned with the maintenance of linear tables and lists. Motivated by a desire to apply the design methodology implicit in Scheme-81, it was designed in about two months, has a very simple architecture and layout, and is primarily machine-generated. The chip and the design process are described and evaluated in the context of a proposal to construct a Scheme-to-silicon compiler that automates the design methodology used in the Assq Chip.
</description>
<pubDate>Fri, 01 Jan 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41168</guid>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Understanding through Cliché Recognition</title>
<link>https://hdl.handle.net/1721.1/41167</link>
<description>Program Understanding through Cliché Recognition
Brotsky, Daniel
We propose research into automatic program understanding via recognition of common data structures and algorithms (clichés). Our goals are two-fold: first, to develop a theory of program structure which makes such recognition tractable; and second, to produce a program (named Inspector) which, given a Lisp program and a library of clichés, will construct a hierarchical decomposition of the program in terms of the clichés it uses.&#13;
Our approach involves assuming constraints on the possible decompositions of programs according to the teleological relations between their parts. Programs are analyzed by translating them into a language-independent form and then parsing this representation in accordance with a context-free web grammar induced by the library of clichés. Decompositions produced by this analysis will in general be partial, since most programs will not be made up entirely of clichés.&#13;
This work is motivated by the belief that identification of the clichés used in a program, together with knowledge of their properties, provides a sufficient basis for understanding large parts of that program's behavior. Inspector will become one component of a system of programs known as a programmer's apprentice, in which Inspector's output will be used to assist a programmer with program synthesis, debugging, and maintenance.
</description>
<pubDate>Tue, 01 Dec 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41167</guid>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Readable Layout of Unbalanced N-ary Trees</title>
<link>https://hdl.handle.net/1721.1/41166</link>
<description>Readable Layout of Unbalanced N-ary Trees
Solo, David M.
The automatic layout of unbalanced n-ary tree structures is a problem of subjectively meshing two independent goals: clarity and space efficiency. This paper presents a minimal set of subjective aesthetics which ensures highly readable structures, without overly restricting flexibility in the layout of the tree. This flexibility underlies the algorithm's ability to produce readable trees with greater uniformity of node density throughout the display than achieved by previous algorithms, an especially useful characteristic where nodes are labelled with text.
</description>
<pubDate>Fri, 01 Aug 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41166</guid>
<dc:date>1986-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming Cliches and Cliche Extraction</title>
<link>https://hdl.handle.net/1721.1/41165</link>
<description>Programming Cliches and Cliche Extraction
Cyphers, D. Scott
The programmer's apprentice (PA) is an automated program development tool. The PA depends upon a library of common algorithms (cliches) as the source of its knowledge about programming. The PA can be made more usable if programmers not familiar with its implementation can add programming knowledge to the PA's library. This paper describes cliches and a technique for adding them to the library.&#13;
Because cliches often do not correspond to complete code, the library can not simply be a collection of programs. Instead, a plan representation is used. The approach taken for adding knowledge to the library is one of cliche extraction. A program containing a particular cliche is converted to its plan. The plan is pruned, with the results of the pruned plan being displayed in a code-like form. Eventually, only the cliche remains. The cliche is then added to the library.
This paper is a revision of an earlier Bachelor's thesis.
</description>
<pubDate>Mon, 01 Feb 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41165</guid>
<dc:date>1982-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing Constraint Systems with Omega</title>
<link>https://hdl.handle.net/1721.1/41164</link>
<description>Representing Constraint Systems with Omega
Koton, Phyllis A.
This paper considers two constraint systems, that of Steele and Sussman, and Alan Borning's Thinglab. Some functional difficulties in these systems are discussed. A representation of constraint systems using the description system Omega is presented which is free of these difficulties.
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41164</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Primer for the Act-1 Language</title>
<link>https://hdl.handle.net/1721.1/41163</link>
<description>A Primer for the Act-1 Language
Theriault, Daniel G.
This document is intended to describe the current design for the computer programming language Act-1. It describes the Actor computational model, which Act-1 was designed to support. A perspective is provided from which to view the language, with respect to existing computer language systems and to the computer system and environment under development for support of the language. The language is informally introduced in a tutorial fashion and demonstrated through examples. A programming strategy for the language is described, further illustrating its use.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41163</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Disciplined Use of Simplifying Assumptions</title>
<link>https://hdl.handle.net/1721.1/41162</link>
<description>The Disciplined Use of Simplifying Assumptions
Rich, Charles; Waters, Richard C.
Simplifying assumptions — everyone uses them but no one's programming tool explicitly supports them. In programming, as in other kinds of engineering design, simplifying assumptions are an important method for dealing with complexity. Given a complex programming problem, expert programmers typically choose simplifying assumptions which, though false, allow them to arrive rapidly at a program which addresses the important features of the problem without being distracted by all of its details. The simplifying assumptions are then incrementally retracted with corresponding modifications to the initial program. This methodology is particularly applicable to rapid prototyping because the main questions of interest can often be answered using only the initial program.&#13;
Simplifying assumptions can easily be misused. In order to use them effectively two key issues must be addressed. First, simplifying assumptions should be chosen which simplify the design problems significantly without changing the essential character of the program which needs to be implemented. Second, the designer must keep track of all the assumptions he is making so that he can later retract them in an orderly manner. By explicitly dealing with these issues, a programming assistant system could directly support the use of simplifying assumptions as a disciplined part of the software development process.
Submitted to the ACM SIGSOFT Second Software Engineering Symposium: Workshop on Rapid Prototyping. Columbia, Maryland, April 19-21, 1982.
</description>
<pubDate>Tue, 01 Dec 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41162</guid>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Presentation Based User Interfaces</title>
<link>https://hdl.handle.net/1721.1/41161</link>
<description>Presentation Based User Interfaces
Ciccarelli, Eugene C.
This research will develop a methodology for designing user interfaces for general-purpose interactive systems. The central concept is the presentation, a structured pictorial or text object conveying information about some abstract object to the user. The methodology models a user interface as a shared communication medium, user and system communicating to each other by manipulating presentations.&#13;
The methodology stresses relations between presentations, especially presentations of the system itself; presentation manipulation by the user; presentation recognition by the system; and how properties of these establish a spectrum of interface styles.&#13;
The methodology suggests a general system base providing mechanisms to support construction of user interfaces. As part of an argument that such a base is feasible and valuable, and to demonstrate the domain independence of the methodology, three test systems will be implemented.
</description>
<pubDate>Wed, 01 Jul 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41161</guid>
<dc:date>1981-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal For a Study of Commonsense Physical Reasoning</title>
<link>https://hdl.handle.net/1721.1/41160</link>
<description>Proposal For a Study of Commonsense Physical Reasoning
Forbus, Kenneth D.
Our common sense views of physics are the first coin in our intellectual capital; understanding precisely what they contain could be very important both for understanding ourselves and for making machines more like us. This proposal describes a domain that has been designed for studying reasoning about constrained motion and describes my theories about performing such reasoning. The issues examined include qualitative reasoning about shape and physical processes, as well as ways of using knowledge about motion other than "envisioning". Being a proposal, the treatment of these issues is necessarily cursory and incomplete.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.
</description>
<pubDate>Wed, 01 Jul 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41160</guid>
<dc:date>1981-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>GROK Doc: An Image Display Tool</title>
<link>https://hdl.handle.net/1721.1/41159</link>
<description>GROK Doc: An Image Display Tool
Little, Jim
The image display tool GROK provides a facility for displaying images on the black-and-white screen of a Symbolics 3600 monitor. It allows display of images and their manipulation through a special window it manages. Images become objects in that window, and are handled by a variety of routines accessible by mouse selection from window menus. GROK is an outgrowth of two programs: Keith Nishihara's GREY*, which provided the concept of an image manipulation and display program for black-and-white screens, and Margaret Fleck's GREYCROK, which formed the nucleus from which GROK mutated. Many of the functions in GROK are lifted directly from GREYCROK.
</description>
<pubDate>Mon, 14 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41159</guid>
<dc:date>1986-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Logo Turtle Graphics for the Lisp Machine</title>
<link>https://hdl.handle.net/1721.1/41158</link>
<description>Logo Turtle Graphics for the Lisp Machine
Lieberman, Henry
This paper is a manual for an implementation of Logo graphics primitives in Lisp on the MIT Lisp Machine. The graphics system provides:&#13;
Simple line drawing and erasing using "turtle geometry"&#13;
Flexible relative and absolute coordinate systems, scaling&#13;
Floating point coordinates&#13;
Drawing points, circles, boxes, text&#13;
Automatically filling closed curves with patterns&#13;
Saving and restoring pictures rapidly as arrays of points&#13;
Drawing on color displays, creating new colors&#13;
Three dimensional perspective drawing, two-color stereo display
</description>
<pubDate>Tue, 05 May 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41158</guid>
<dc:date>1981-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>A Step Towards Automatic Documentation</title>
<link>https://hdl.handle.net/1721.1/41157</link>
<description>A Step Towards Automatic Documentation
Frank, Claude
This paper describes a system which automatically generates program documentation. Starting with a plan generated by analyzing the program, the system computes several kinds of summary information about the program. The most notable are: a summary of the cliched computations performed by the loops in the program, and a summary of the types and uses of the arguments to the program. Based on this information, a few English sentences are produced describing each function analysed.
*Visiting Scientist on leave from Schlumberger-Doll Research.&#13;
The views and conclusions contained in this paper are those of the author, and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Department of Defense, or the United States Government.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41157</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Guardians for Concurrent Systems</title>
<link>https://hdl.handle.net/1721.1/41156</link>
<description>Guardians for Concurrent Systems
Hewitt, Carl; Attardi, Giuseppe
In this paper we survey the current state of the art on fundamental aspects of concurrent systems. We discuss the notion of concurrency and discuss a model of computation which unifies the lambda calculus model and the sequential stored program model. We develop the notion of a guardian as a module that regulates the use of shared resources by scheduling their access, providing protection, and implementing recovery from hardware failures. A shared checking account is an example of the kind of resource that needs a guardian. We introduce the notions of a customer and a transaction manager for a request and illustrate how to use them to implement arbitrary scheduling policies for a guardian. A proof methodology is presented for proving properties of guardians, such as a guarantee of service for all requests received.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41156</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the Workshop on Distributed AI</title>
<link>https://hdl.handle.net/1721.1/41155</link>
<description>Report on the Workshop on Distributed AI
Davis, Randall
On June 9-11, 22 people gathered at Endicott House for the first workshop on the newly emerging topic of Distributed AI. They came with a wide range of views on the topic, and indeed a wide range of views of what precisely the topic was.&#13;
In keeping with the spirit of the workshop, this report describing it was prepared in a distributed fashion. Each of the speakers contributed a summary of his comments. Sessions during the workshop included both descriptions of work done or in progress, and group discussions focused on a range of topics. The report reflects the organization, with nine short articles describing research efforts, and four summarizing the informal comments used as the foci for the group discussions.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41155</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Proposal for Sniffer: a System that Understands Bugs</title>
<link>https://hdl.handle.net/1721.1/41154</link>
<description>A Proposal for Sniffer: a System that Understands Bugs
Shapiro, Daniel G.
This paper proposes an interactive debugging aid that exhibits a deep understanding of a narrow class of bugs. This system, called Sniffer, will be able to find and identify errors, and explain them in terms which are relevant to the programmer. Sniffer is knowledgeable about side-effects. It is capable of citing the data which was in effect at the time an error became manifest.&#13;
The debugging knowledge in Sniffer is organized as a collection of independent experts which know about particular errors. The experts (sniffers) perform their function by applying a feature recognition process to the text for the program, and to the events which took place during the execution of the code. No deductive machinery is involved. The experts are supported by two systems; the cliche finder which identifies small portions of algorithms from a plan for the code, and the time rover which provides complete access to all program states that ever existed.&#13;
Sniffer is embedded in a run-time debugging aid. The user of the system interacts with the debugger to focus attention onto a manageable subset of the code, and then submits a complaint to the sniffer system that describes the behavior which was desired. Sniffer outputs a detailed report about any error which is discovered.
</description>
<pubDate>Tue, 01 Jul 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41154</guid>
<dc:date>1980-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Synthesis of Language Ideas for AI Control Structures</title>
<link>https://hdl.handle.net/1721.1/41153</link>
<description>A Synthesis of Language Ideas for AI Control Structures
Kornfeld, William A.
Two well known programming methodologies for artificial intelligence research are compared, the so-called pattern-directed invocation languages and the object-oriented languages. The features and limitations of both approaches are discussed. We show that pattern-directed invocation is a more general formalism, but entails a serious loss of efficiency. We then go on to demonstrate that a language for artificial intelligence research can be created that contains the best features of both approaches.
</description>
<pubDate>Tue, 01 Jul 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41153</guid>
<dc:date>1980-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Global Time in Actor Computations</title>
<link>https://hdl.handle.net/1721.1/41152</link>
<description>Global Time in Actor Computations
Clinger, Will
This research was supported by a National Science Foundation Graduate Fellowship in mathematics.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41152</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolutionary Programming with the Aid of A Programmers' Apprentice</title>
<link>https://hdl.handle.net/1721.1/41151</link>
<description>Evolutionary Programming with the Aid of A Programmers' Apprentice
Hewitt, Carl
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41151</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Better Definition of Transactions</title>
<link>https://hdl.handle.net/1721.1/41150</link>
<description>Towards a Better Definition of Transactions
Kerns, Barbara S.
This paper builds on a technical report written by Carl Hewitt and Henry Baker called "Actors and Continuous Functionals". What is called a "goal-oriented activity" in that paper will be referred to in this paper as a "transaction". The word "transaction" brings to mind an object closer in function to what we wish to present than does the word "activity".&#13;
This memo, therefore, presents the definitions of a reply and a transaction as given in Hewitt and Baker's paper and points out some discrepancies in their definitions. That is, the properties of transactions and replies as they were defined did not correspond with our intuitions, and thus the definitions should be changed. The issues of what should constitute a transaction are discussed, and a new definition is presented which eliminates the discrepancies caused by the original definitions. Some properties of the newly defined transactions are discussed, and it is shown that the results of Hewitt and Baker's paper still hold given the new definitions.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41150</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preliminary Design of the APIARY for VLSI Support of Knowledge-Based Systems</title>
<link>https://hdl.handle.net/1721.1/41149</link>
<description>Preliminary Design of the APIARY for VLSI Support of Knowledge-Based Systems
Hewitt, Carl
Knowledge-based applications will require vastly increased computational resources to achieve their goals. We are working on the development of a VLSI Message Passing Architecture to meet this need. As a first step we present the preliminary design of the APIARY system in this paper. The APIARY is currently in an early stage of implementation at the MIT Artificial Intelligence Laboratory.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41149</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building English Explanations from Function Descriptions</title>
<link>https://hdl.handle.net/1721.1/41148</link>
<description>Building English Explanations from Function Descriptions
Roberts, Bruce
An explanatory component is an important ingredient in any complex AI system. A simple generative scheme to build descriptive phrases from Lisp function calls can produce respectable explanations if explanation generators capitalize on the function decomposition reflected in Lisp programs.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research under Office of Naval Research contract N00014-75-C-0389.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41148</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Security and Modularity in Message Passing</title>
<link>https://hdl.handle.net/1721.1/41147</link>
<description>Security and Modularity in Message Passing
Hewitt, Carl; Attardi, Giuseppe; Lieberman, Henry
This paper addresses theoretical issues involved in the implementation of security and modularity in concurrent systems. It explicates the theory behind a mechanism for safely delegating messages to shared handlers in order to increase the modularity of concurrent systems. Our mechanism has the property that the actions caused by delegated messages are atomic. That is, the handling of a message delegated by a client actor appears to be indivisible to other users of the actor. Our mechanism for delegating communications is a generalization, suitable for use in concurrent systems, of the sub-class mechanism of SIMULA. Our mechanism has the benefit that it easily lends itself to the implementation of efficient, flexible access control mechanisms in distributed systems. It is a generalization of the protection mechanisms provided by capability-based systems, access control lists, and the access control mechanisms provided by PDP-10 SIMULA.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under contract N00014-75-C-0522.
</description>
<pubDate>Thu, 01 Feb 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41147</guid>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent Systems Need Both Sequences And Serializers</title>
<link>https://hdl.handle.net/1721.1/41146</link>
<description>Concurrent Systems Need Both Sequences And Serializers
Hewitt, Carl
Contemporary concurrent programming languages fall roughly into two classes. Languages in the first class support the notion of a sequence of values and some kind of pipelining operation over the sequence of values. Languages in the second class support the notion of transactions and some way to serialize transactions. In terms of the actor model of computation this distinction corresponds to the difference between serialized and unserialized actors. In this paper the utility of modeling both serialized and unserialized actors in a coherent formalism is demonstrated.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under contract N00014-75-C-0522.
</description>
<pubDate>Thu, 01 Feb 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41146</guid>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The XPRT Description System</title>
<link>https://hdl.handle.net/1721.1/41145</link>
<description>The XPRT Description System
Steels, Luc
This paper introduces a frame-based description language and studies methods for reasoning about problems using knowledge expressed in the language.&#13;
The system is based on the metaphor of a society of communicating experts and incorporates within this framework most of the currently known AI techniques, such as pattern-directed invocation, explicit control of reasoning, propagation of constraints, dependency recording, context mechanisms, message passing, conflict resolution, default reasoning, etc.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The author was sponsored by the Institute of International Education on an ITT-fellowship.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41145</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Examples of Conceptual Grammar</title>
<link>https://hdl.handle.net/1721.1/41144</link>
<description>Some Examples of Conceptual Grammar
Steels, Luc
This paper gives some examples of the conceptual grammar approach to the representation of linguistic knowledge.&#13;
First we give a short overview of the language we use to represent knowledge. Then we discuss an example that deals with the expression of verbal parameters (such as voice and aspect) in English verbal groups. Finally we discuss an example of a formal language.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The author was sponsored by the Institute of International Education on an ITT-fellowship.
</description>
<pubDate>Fri, 01 Dec 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41144</guid>
<dc:date>1978-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Introducing Conceptual Grammar</title>
<link>https://hdl.handle.net/1721.1/41143</link>
<description>Introducing Conceptual Grammar
Steels, Luc
This paper contains an informal and sketchy overview of a new way of thinking about linguistics and linguistic processing known as conceptual grammar.&#13;
Some ideas are presented on what kind of knowledge is involved in a natural language, how this knowledge is organized and represented and how it is activated and acquired.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The author was sponsored by the Institute of International Education on an ITT-fellowship.
</description>
<pubDate>Wed, 01 Nov 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41143</guid>
<dc:date>1978-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Is a Knowledge Representation System Like a Piano?</title>
<link>https://hdl.handle.net/1721.1/41142</link>
<description>How Is a Knowledge Representation System Like a Piano?
Smith, Brian Cantwell
In the summer of 1978 a decision was made to devote a special issue of the SIGART newsletter to the subject of knowledge representation research. To assist in ascertaining the current state of people's thinking on this topic, the editors (Ron Brachman and myself) decided to circulate an informal questionnaire among the representation community. What was originally planned as a simple list of questions eventually developed into the current document, and we have decided to issue it as a report on its own merits. The questionnaire is offered here as a potential aid both for understanding knowledge representation research, and for analysing the philosophical foundations on which that research is based. &#13;
The questionnaire consists of two parts. Part I focuses first on specific details, but moves gradually towards more abstract and theoretical questions regarding assumptions about what knowledge representation is; about the role played by the computational metaphor; about the relationships among model, theory, and program; etc. In Part II, in a more speculative vein, we set forth for consideration nine hypotheses about various open issues in representation research.
The research reported here was supported by National Institutes of Health Grant No. 1 P41 RR 01096-02 from the Division of Research Resources, and was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology.
</description>
<pubDate>Wed, 01 Nov 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41142</guid>
<dc:date>1978-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stepping Motor Control System</title>
<link>https://hdl.handle.net/1721.1/41141</link>
<description>Stepping Motor Control System
Larson, Noble G.
This paper describes a hardware system designed to facilitate position and velocity control of a group of eight stepping motors using a PDP-11. The system includes motor driver cards and other interface cards in addition to a special digital control module. The motors can be driven at speeds up to 3000 rpm. Position feedback is provided by shaft encoders, but tachometers are not used.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-77-C-0389.
</description>
<pubDate>Thu, 01 Feb 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41141</guid>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specifying and Proving Properties of Guardians for Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/41140</link>
<description>Specifying and Proving Properties of Guardians for Distributed Systems
Hewitt, Carl; Attardi, Giuseppe; Lieberman, Henry
In a distributed system where many processors are connected by a network and communicate using message passing, many users can be allowed to access the same facilities. A public utility is usually an expensive or limited resource whose use has to be regulated. A guardian is an abstraction that can be used to regulate the use of resources by scheduling their access, providing protection, and implementing recovery from hardware failures. We present a language construct called a primitive serializer which can be used to express efficient implementations of guardians in a modular fashion. We have developed a proof methodology for proving strong properties of network utilities, e.g., that the utility is guaranteed to respond to each request which it is sent. This proof methodology is illustrated by proving properties of a guardian which manages two hardcopy printing devices.
This report describes research conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-c-0522.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41140</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Looking in the Shadows</title>
<link>https://hdl.handle.net/1721.1/41139</link>
<description>Looking in the Shadows
Woodham, Robert J.; Horn, Berthold K.P.
The registration of an image with a model of the surface being imaged is an important prerequisite to many image understanding tasks. Once registration is achieved, new image analysis techniques can be explored. One approach is to compare the real image with an image synthesized from the surface model. But accurate comparison requires an accurate synthetic image.&#13;
More realistic synthetic images can be obtained once shadow information is included. Accurate shadow regions can be determined when a hidden-surface algorithm is applied to the surface model in order to calculate which surface elements can be seen from the light source. We illustrate this technique using LANDSAT imagery registered with digital terrain models. Once shadow information is included, the effect of sky illumination and atmospheric haze can be measured.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sat, 01 May 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41139</guid>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Story Understanding: the Beginning of a Consensus</title>
<link>https://hdl.handle.net/1721.1/41138</link>
<description>Story Understanding: the Beginning of a Consensus
McDonald, David D.
This paper is written for an Area Examination on the three papers: "A Framed PAINTING: The Representation of a Common Sense Knowledge Fragment" by Eugene Charniak, "Reporter: An Intelligent Noticer" by Steve Rosenberg, and "Using Plans to Understand Natural Language" by Robert Wilensky. Surprisingly, these papers share a common view of what it means to understand a story. The first part of this paper reviews the previous notions of "understanding", showing the progression to today's consensus. The content of the consensus and how the individual papers fit within it is then described. Finally, unsolved problems not adequately dealt with by any of the approaches are presented briefly.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41138</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control, Multiple Description, and Purpose in the Visual Perception of Complex Scenes: A Progress Report</title>
<link>https://hdl.handle.net/1721.1/41137</link>
<description>Control, Multiple Description, and Purpose in the Visual Perception of Complex Scenes: A Progress Report
Dunlavey, Michael R.
This memo describes a vision program for recognizing simple furniture comprising assemblies of blocks, in which the same item may be composed in diverse ways. As such, it is concerned with three theoretical issues: perceptual processing, suppression of unwanted detail, and the segregation and interconnection of information.&#13;
The program's perceptual processing relies on an elaborate, redundant, alterable model of the scene rather than on any clever process structure. This approach aids the interpretation of incomplete, ambiguous portions of the scene as well as simplifies the program. The model is capable of quantitative as well as qualitative alteration, by a constraint-propagation system and a system of frame-shift demons.&#13;
The hierarchical nature of the scene - assemblies of assemblies of blocks - is reflected as hierarchy in the model. Each assembly is represented as having an external aspect, by which it relates to surrounding assemblies, and an internal aspect, listing the parts and relationships composing it. This imposes a natural suppression of detail.&#13;
In addition to the vertical layering of the model there are horizontal subdivisions adapted for different computational purposes. There is a 2D section representing the image, a 3D section representing the shape, and a stability section representing the physical forces and moments acting upon each unit. Each of the sections can be used through any of several indirect reference frames corresponding to different spatial viewpoints. Many computations on the model, such as stability analysis, spatial relationships, and visual matching, are greatly simplified by first selecting the proper spatial viewpoints.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Fri, 01 Aug 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41137</guid>
<dc:date>1975-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis by Propagation of Constraints in Elementary Geometry Problem Solving</title>
<link>https://hdl.handle.net/1721.1/41136</link>
<description>Analysis by Propagation of Constraints in Elementary Geometry Problem Solving
Doyle, Jon
This paper describes GEL, a new geometry theorem prover. GEL is the result of an attempt to transfer the problem solving abilities of the EL electronic circuit analysis program of Sussman and Stallman to the domain of geometric diagrams. Like its ancestor, GEL is based on the concepts of "one-step local deductions" and "macro-elements." The performance of this program raises a number of questions about the efficacy of the approach to geometry theorem proving embodied in GEL, and also illustrates problems relating to algebraic simplification in geometric reasoning.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41136</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transparency</title>
<link>https://hdl.handle.net/1721.1/41135</link>
<description>Transparency
Stefanescu, Dan
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Tue, 01 Jul 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41135</guid>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Tracking of Real World Objects</title>
<link>https://hdl.handle.net/1721.1/41134</link>
<description>Visual Tracking of Real World Objects
Speckert, Glen
This paper describes the progress made towards tracking an object visually using a PIN diode attached to a dual mirror deflection system which enables the PIN diode to "optically point" to any position in two-space. A helium neon laser equipped with a similar mirror deflection system was used to point at the object being tracked. Actual objects tracked include a hand, a bouncing ping pong ball, and a white center on a black target attached to a moving metronome.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Tue, 01 Jul 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41134</guid>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Frame-Based Knowledge Representation</title>
<link>https://hdl.handle.net/1721.1/41133</link>
<description>Frame-Based Knowledge Representation
Steels, Luc
The paper introduces a language for representing knowledge in a declarative form. With this language it is possible to define knowledge about a certain domain by introducing a number of concepts and by specifying their interrelations.&#13;
The paper is meant to be an informal introduction to the language. We present the available constructs, describe their meaning and present a number of examples.&#13;
In other papers (currently in preparation) we will give a formal semantics of the language, introduce the inference theory and discuss a possible procedural embedding.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The author was sponsored by the Institute of International Education on an ITT-fellowship.
</description>
<pubDate>Sun, 01 Oct 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41133</guid>
<dc:date>1978-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>How People Execute Handwriting</title>
<link>https://hdl.handle.net/1721.1/41132</link>
<description>How People Execute Handwriting
Hollerbach, John
Handwriting is shown to be composed mainly of cup-shaped strokes lasting approximately 200 msec. The strokes are based on a hexagonal pattern, with quantized slopes and lengths. Each side of the hexagon is produced by a 40 msec acceleration burst. Smooth writing is produced by merging and rounding these bursts.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643-0003.
</description>
<pubDate>Tue, 01 Jul 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41132</guid>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Presupposition in Lexical Analysis and Discourse</title>
<link>https://hdl.handle.net/1721.1/41131</link>
<description>Presupposition in Lexical Analysis and Discourse
Bullwinkle, Candace L.
Recent research in linguistic analysis of presuppositions has provided numerous indications of the role of presupposition in lexical analysis. Others have argued there is no distinction between the meaning and the presupposition of a word. In this paper I discuss both which presuppositions are related to lexical analysis and what happens to these presuppositions in discourse. Finally, I comment on how this knowledge could be made available to a natural language understanding program.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0003.
</description>
<pubDate>Tue, 01 Jul 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41131</guid>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Preliminary Report on a Program for Generating Natural Language</title>
<link>https://hdl.handle.net/1721.1/41130</link>
<description>A Preliminary Report on a Program for Generating Natural Language
McDonald, David
A program framework has been designed in which the linguistic facts and heuristics necessary for generating fluent natural language can be encoded. The linguistic data is represented in annotated procedures and data structures which are designed to make English translations of already formulated messages given in a primary program's internal representation. The messages must include the program's intentions in saying them, in order to adequately specify the grammatical operations required for a translation.&#13;
The pertinent questions in this research have been: what structure does natural language have that allows it to encode multifaceted messages; and how must that structure be taken into account in the design of a generation facility for a computer program?&#13;
This paper describes the control and data structures of the design and and their motivation. It is a condensation of my Master's Thesis &lt;1&gt;, to which the reader is refered for further information. Work is presently underway on implementing the design in LISP and developing a grammar for use in one or more of the domains given below.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0003.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41130</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bargaining Between Goals</title>
<link>https://hdl.handle.net/1721.1/41129</link>
<description>Bargaining Between Goals
Goldstein, Ira P.
Bargaining is a process used to modify conflicting demands on an expendable resource so that a satisfactory allocation can be made. In this paper, I consider the design of a bargaining system to handle the problem of scheduling an individual's weekly activities and appointments. The bargaining system is based on the powerful reasoning strategy of producing a simplified linear plan by considering the various constraints independently and then debugging the resulting conflicts.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0003.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41129</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Meta-evaluation of Actors with Side-effects</title>
<link>https://hdl.handle.net/1721.1/41128</link>
<description>Meta-evaluation of Actors with Side-effects
Yonezawa, Akinori
Meta-evaluation is a process which symbolically evaluates an actor and checks to see whether the actor fulfills its contract (specification). A formalism for writing contracts for actors with side-effects which allow sharing of data is presented. Typical examples of actors with side-effects are the cell, actor counterparts of the LISP functions rplaca and rplacd, and procedures whose computation depends upon their input history. Meta-evaluation of actors with side-effects is carried out by using situational tags which denote a situation (the local state of an actor system at the moment of the transmission of messages). It is illustrated how the situational tags are used for proving the termination of the activation of actors.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N000-14-74-C-0643.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41128</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Meta-evaluation of Actors with Side-effects</title>
<link>https://hdl.handle.net/1721.1/41127</link>
<description>Meta-evaluation of Actors with Side-effects
Yonezawa, Akinori
Meta-evaluation is a process which symbolically evaluates an actor and checks to see whether the actor fulfills its contract (specification). A formalism for writing contracts for actors with side-effects is presented. Meta-evaluation of actors with side-effects is carried out by using situational tags which denote a situation (the local state of an actor system at the moment of the transmission of messages). It is also illustrated how the situational tags are used for proving the termination of the activation of actors.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0004.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41127</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Application of Linear Systems Analysis to Image Processing. Some Notes.</title>
<link>https://hdl.handle.net/1721.1/41126</link>
<description>The Application of Linear Systems Analysis to Image Processing. Some Notes.
Horn, Berthold K.P.; Sjoberg, Robert W.
The Fourier transform is a convenient tool for analyzing the performance of an image-forming system, but must be treated with caution. One of its major uses is turning convolutions into products. It is also used to transform a problem that is more naturally thought of in terms of frequency than time or space. We define the point-spread function and modulation transfer function in a two-dimensional linear system as analogues of the one-dimensional impulse response and its Fourier transform, the frequency response, respectively. For many imaging devices, the point-spread function is rotationally symmetric. Useful transforms are developed for the special cases of a "pill box," a Gaussian blob, and an inverse scatter function.&#13;
Fourier methods are appropriate in the analysis of a defocused imaging system. We define a focus function as a weighted sum of high frequency terms in the spectrum of the system. This function will be a maximum when the image is in focus, and we can hill-climb on it to determine the best focus. We compare this function against two others, the sum of squares of intensities and the sum of squares of first differences, and show it to be superior.&#13;
Another use of the Fourier transform is in optimal filtering, that is, filtering to separate additive noise from a desired signal. We discuss the theory for the two-dimensional case, which is actually easier than for a single dimension since causality is not an issue. We show how to construct a linear, shift-invariant filter for imaging systems given only the input power spectrum and the cross-power spectrum of input versus desired output.&#13;
Finally, we present two ways to calculate the line-spread function given the point-spread function.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41126</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinematics, Statics, and Dynamics of Two-D Manipulators</title>
<link>https://hdl.handle.net/1721.1/41125</link>
<description>Kinematics, Statics, and Dynamics of Two-D Manipulators
Horn, Berthold K.P.
In order to get some feeling for the kinematics, statics, and dynamics of manipulators, it is useful to separate the problem of visualizing linkages in three-space from the basic mechanics. The general-purpose two-dimensional manipulator is analyzed in this paper in order to gain a basic understanding of the issues without the complications of three-dimensional geometry.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41125</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Notes Relating to the Design of a High Quality Image Sensor</title>
<link>https://hdl.handle.net/1721.1/41124</link>
<description>Notes Relating to the Design of a High Quality Image Sensor
Horn, Berthold K.P.
Some of the information that was used in arriving at a design for a high quality image input device is documented. The device uses a PIN photo-diode directly coupled to an FET-input op-amp as the sensor and two moving-iron galvanometer-driven mirrors as the deflection system. The disadvantages of a system like this are its long random access time (about 4 milliseconds) and the long settling time of the diode-amplifier system (about 1 millisecond). In almost all other respects such a sensor is superior to other known image sensors. Pictures taken with this device have shown that some of the difficulties experienced in image analysis can be directly traced to the low quality of images read in through vidicons and image dissectors.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41124</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Facts of Light</title>
<link>https://hdl.handle.net/1721.1/41123</link>
<description>The Facts of Light
Horn, Berthold K.P.
This is a random collection of facts about radiant and luminous energy. Some of this information may be useful in the design of photo-diode image sensors, in the set-up of lighting for television microscopes and the understanding of the characteristics of photographic image output devices. A definition of the units of measurement and the properties of lambertian surfaces is included.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41123</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing the Semantics of Natural Language as Constraint Expressions</title>
<link>https://hdl.handle.net/1721.1/41122</link>
<description>Representing the Semantics of Natural Language as Constraint Expressions
Grossman, Richard W.
The issue of how to represent the "meaning" of an utterance is central to the problem of computer understanding of natural language. Rather than relying on ad-hoc structures or forcing the complexities of natural language into mathematically elegant but computationally cumbersome representations (such as first-order logic), this paper presents a novel representation which has many desirable computational and logical properties. It is proposed to use this representation to structure the "world knowledge" of a natural-language understanding system.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41122</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ideas About Management of LISP Data Bases</title>
<link>https://hdl.handle.net/1721.1/41121</link>
<description>Ideas About Management of LISP Data Bases
Sandewall, Erik
The trend toward larger data bases in A.I. programs makes it desirable to provide program support for the activity of building and maintaining LISP data bases. Many techniques can be drawn from present and proposed systems for supporting program maintenance, but there are also a variety of additional problems and possibilities. Most importantly, a system for supporting data base development needs a formal description of the user's data base. The description must at least partly be contributed by the user. The paper discusses the operation of such a support system, and describes some ideas that have been useful in a prototype system.
Work reported herein was conducted partly at Uppsala University, Sweden, with support from the Swedish Board of Technical Development, and partly at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41121</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Issues for a Dynamic Vision System</title>
<link>https://hdl.handle.net/1721.1/41120</link>
<description>Some Issues for a Dynamic Vision System
Lavin, Mark A.
This paper is a thesis-proposal-proposal: a discussion of some issues which seem relevant to the problem of dealing with visual scenes undergoing change. The problem area is broadly stated, some relevant points are noted, and a possible scenario for a thesis is discussed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41120</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Evolution of Procedural Knowledge</title>
<link>https://hdl.handle.net/1721.1/41119</link>
<description>The Evolution of Procedural Knowledge
Miller, Mark L.
A focus on planning and debugging procedures underlies the enhanced proficiency of recent programs which solve problems and acquire new skills. By describing complex procedures as constituents of evolutionary sequences of families of simpler procedures, we can augment our understanding of how they were written and how they accomplish their goals, as well as improving our ability to debug them. To the extent that properties of such descriptions are task independent, we ought to be able to create a computational analogue for genetic epistemology, a theory of procedural ontogeny. Since such a theory ought to be relevant to the teaching of procedures and modelling of the learner, it is proposed that an educational application system be implemented, to help to clarify these ideas. The system would provide assistance to students solving geometry construction problems.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Thu, 16 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41119</guid>
<dc:date>1975-01-16T00:00:00Z</dc:date>
</item>
<item>
<title>Protection and Synchronization in Actor Systems</title>
<link>https://hdl.handle.net/1721.1/41118</link>
<description>Protection and Synchronization in Actor Systems
Hewitt, Carl
This paper presents a unified method [called ENCASING] for dealing with the closely related issues of synchronization and protection in actor systems [Hewitt et al. 1973a, 1973b, 1974a; Greif and Hewitt 1975]. Actors are a semantic concept in which no active process is ever allowed to treat anything as an object. Instead a polite request must be extended to accomplish what the activator [process] desires. Actors enable us to define effective and efficient protection schemes. Vulnerable actors can be protected before being passed out by ENCASING their behavior in a guardian which applies appropriate checks before invoking the protected actor. Protected actors can be freely passed out since they work only for actors which have the authority to use them where authority can be decided by an arbitrary procedure. Synchronization can be viewed as a [time-variant] kind of protection in which access is only allowed to the encased actor when it is safe to do so.
</description>
<pubDate>Fri, 01 Nov 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41118</guid>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding LISP Programs: Towards a Programmer's Apprentice</title>
<link>https://hdl.handle.net/1721.1/41117</link>
<description>Understanding LISP Programs: Towards a Programmer's Apprentice
Rich, Charles; Shrobe, Howard E.
Several attempts have been made to produce tools which will help the programmer of complex computer systems. A new approach is proposed which integrates the programmer's intentions, the program code, and the comments, by relating them to a knowledge base of programming techniques. Our research will extend the work of Sussman, Goldstein, and Hewitt on program description and annotation. A prototype system will be implemented which answers questions and detects bugs in simple LISP programs.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41117</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Actor Semantics of PLANNER-73</title>
<link>https://hdl.handle.net/1721.1/41116</link>
<description>Actor Semantics of PLANNER-73
Greif, Irene; Hewitt, Carl
Work on PLANNER-73 and actors has led to the development of a basis for semantics of programming languages. Its value in describing programs with side-effects, parallelism, and synchronization is discussed. Formal definitions are written and explained for sequences, cells, and a simple synchronization primitive. In addition there is discussion of the implications of actor semantics for the controversy over elimination of side-effects.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</description>
<pubDate>Fri, 01 Nov 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41116</guid>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>CONS</title>
<link>https://hdl.handle.net/1721.1/41115</link>
<description>CONS
Knight, Thomas
DRAFT: Comments and corrections, technical or typographical, are solicited.&#13;
This work was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Fri, 01 Nov 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41115</guid>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The LISP Machine</title>
<link>https://hdl.handle.net/1721.1/41114</link>
<description>The LISP Machine
Greenblatt, Richard
This work was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Fri, 01 Nov 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41114</guid>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>FED, the Font "EDitor" and Font Formats</title>
<link>https://hdl.handle.net/1721.1/41113</link>
<description>FED, the Font "EDitor" and Font Formats
Cohen, Joseph D.; Jarvis, J. Pitts
This memo describes FED, a program used for compiling and inspecting fonts; AST font format, a text format which can be used to create and edit fonts; and KST font format, the binary format used by SCRIMP, TJ6, and PUB.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Tue, 01 Oct 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41113</guid>
<dc:date>1974-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>MAPPER Information</title>
<link>https://hdl.handle.net/1721.1/41112</link>
<description>MAPPER Information
Taenzer, David
This working paper describes a program on the Mini-Robot PDP-11 which is used for looking at picture files created by the VIDIN program. It may be used by ITS vision programmers to examine Vidicon picture files before sending them over to ITS.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Sun, 01 Sep 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41112</guid>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conversations Between Programs</title>
<link>https://hdl.handle.net/1721.1/41111</link>
<description>Conversations Between Programs
McDonald, David D.
This paper discusses the problem of getting a computer to speak, generating natural language that is appropriate to the situation and is what it wants to say. It describes, at a general level, a program which will embody a theory of how the various types of available information are used in the linguistic process as well as the possible packaging for some of that information and the experimental situation in which the program will be developed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Sun, 01 Sep 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41111</guid>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wait-and-See Strategies for Parsing Natural Language</title>
<link>https://hdl.handle.net/1721.1/41110</link>
<description>Wait-and-See Strategies for Parsing Natural Language
Marcus, Mitchell P.
The intent of this paper is to convey one idea central to the structure of a natural language parser currently under development, the notion of wait-and-see strategies. This notion will hopefully allow the recognition of the structure of natural language input by a process that is deterministic and "backupless", that can have strong expectations but still be immediately responsive to the actual structure of the input. The notion is also discussed as a paradigm for recognition processes in general.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Thu, 01 Aug 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41110</guid>
<dc:date>1974-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of a Network with a Given System Function</title>
<link>https://hdl.handle.net/1721.1/41109</link>
<description>Synthesis of a Network with a Given System Function
Sussman, Gerald Jay
I have just completed teaching two sections of 6.011 (Elementary Network Theory). One of the topics covered was synthesis of active filters by the "method of unilateral 2-ports". The explanation of this technique by the lecturer, John Kassakian, is of interest to those of us studying problem solving and the evolution of expertise. The evolution of the method of unilateral 2-ports seems to fit beautifully into the paradigm of synthesis of the solution to a problem by debugging of an almost-right plan. Of course, skill is acquired by incorporating the results of debugging, as we expect.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41109</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Another Approach to English</title>
<link>https://hdl.handle.net/1721.1/41108</link>
<description>Another Approach to English
Brooks, Martin
A new approach to building descriptions of English is outlined and programs implementing the ideas for sentence-sized fragments are demonstrated.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41108</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>XGP Font Catalog</title>
<link>https://hdl.handle.net/1721.1/41107</link>
<description>XGP Font Catalog
Knight, Thomas
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Fri, 24 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41107</guid>
<dc:date>1974-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>Advice on the Fast-paced World of Electronics</title>
<link>https://hdl.handle.net/1721.1/41106</link>
<description>Advice on the Fast-paced World of Electronics
McDermott, Drew
This paper is a reprint of a sketch of an electronic-circuit-designing program, submitted as a Ph.D. proposal. It describes the electronic design problem with respect to the classic trade-off between expertise and generality. The essence of the proposal is to approach the electronics domain indirectly, by writing an "advice-taking" program (in McCarthy's sense) which can be told about electronics, including heuristic knowledge about the use of specific electronics expertise. The core of this advice taker is a deductive program capable of deducing what its strategies should be.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41106</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grey Scale Display Slave</title>
<link>https://hdl.handle.net/1721.1/41105</link>
<description>Grey Scale Display Slave
Beeler, Michael
The programs SNAP and ZSLAVE are components of a new grey scale display system. The object is to produce photographs, from a computer display, which have grey scale resolution comparable to that of the visual input devices and the vision data at the A.I. Lab.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41105</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinematics of the MIT-AI-VICARM Manipulator</title>
<link>https://hdl.handle.net/1721.1/41104</link>
<description>Kinematics of the MIT-AI-VICARM Manipulator
Horn, Berthold K.P.; Inoue, Hirochika
This paper describes the basic geometry of the electric manipulator designed for the Artificial Intelligence Laboratory by Victor Scheinman while on leave from Stanford University. The procedure for finding a set of joint angles that will place the terminal device in a desired position and orientation is developed in detail. This is one of the basic primitives that an arm controller should have. The orientation is specified in terms of Euler angles. Typically eight sets of joint angles will produce the same terminal device position and orientation.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-70-A-0362-0005.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41104</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>X-Y Table User's Manual</title>
<link>https://hdl.handle.net/1721.1/41103</link>
<description>X-Y Table User's Manual
Larson, Noble
This working paper describes the mini-robot group's X-Y table and associated hardware.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41103</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Projects in Automatic Programming</title>
<link>https://hdl.handle.net/1721.1/41102</link>
<description>Some Projects in Automatic Programming
Goldstein, Ira; Sussman, Gerald Jay
This paper proposes three research topics within the general framework of Automatic Programming. The projects are designing (1) a student programmer, (2) a robot programmer and (3) a physicist's helper. The purpose of these projects is both to explore fundamental ideas regarding the nature of programming as well as to propose practical applications of AI research. The reason for offering this discussion as a Working Paper is to suggest possible research topics which members of the laboratory may be interested in pursuing.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Mon, 01 Apr 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41102</guid>
<dc:date>1974-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Application of Line-labeling and other Scene-analysis Techniques to the Problem of Hidden-line Removal</title>
<link>https://hdl.handle.net/1721.1/41101</link>
<description>An Application of Line-labeling and other Scene-analysis Techniques to the Problem of Hidden-line Removal
Lavin, Mark A.
The problem of producing hidden-line drawings of scenes composed of opaque polyhedra is considered. The use of Huffman labeling is suggested as a method of simplifying the task and increasing its intuitive appeal. The relation between the hidden-line problem and scene recognition is considered. Finally, an extension to the hidden-line processor, allowing dynamic viewing of changing scenes, is suggested. That process can be made far more efficient through the use of Change-Driven Processing, where computations on unchanging inputs are not repeated.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41101</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence Approaches to Medical Diagnosis</title>
<link>https://hdl.handle.net/1721.1/41100</link>
<description>Artificial Intelligence Approaches to Medical Diagnosis
Rubin, Andee
The differential diagnosis of hematuria, blood in the urine, is studied from the point of view of identifying crucial structures and processes in medical diagnosis. The thesis attempts to fit the problem of medical diagnosis into the framework of other A.I. problems and paradigms and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, plausibility and the structure of hypotheses within the world of kidney disease.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41100</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mini-Robot Group User's Guide</title>
<link>https://hdl.handle.net/1721.1/41099</link>
<description>Mini-Robot Group User's Guide
Billmers, Meyer A.
This working paper describes the facilities of the mini-robot group and the software available to persons using those facilities.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41099</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Hypothesis-Driven Recognition System for the Blocks World</title>
<link>https://hdl.handle.net/1721.1/41098</link>
<description>An Hypothesis-Driven Recognition System for the Blocks World
Kuipers, Benjamin J.
This paper presents a visual recognition program in which the recognition process is driven by hypotheses about the object being recognized. The hypothesis suggests which features to examine next, refines its predictions based on observed information, and selects a new hypothesis when observations contradict its predictions. After presenting the program, the paper identifies and discusses a number of theoretical issues raised by this work.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41098</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge About Interfacing Descriptions</title>
<link>https://hdl.handle.net/1721.1/41097</link>
<description>Knowledge About Interfacing Descriptions
Dunlavey, Michael R.
This paper concentrates on interactions between knowledge stated in diverse representations. It proposes a vision program that classifies any complicated object as an elaborated instance of a simple one it already understands. The resulting global-local connections facilitate evaluation of overall properties, such as visual shape and ability to support other objects.&#13;
Flexibility is achieved through simultaneous use of multiple equivalent representations. These are coordinated via interfacing rules for giving hints, constraining choices, and filling in missing detail, making use of the great redundancy in most visual scenes.&#13;
An important feature of the system consists of domain-dependent rules for guiding the flow of control and choosing hypotheses.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41097</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Knowledge, Causal Reasoning, and the Localization of Failures</title>
<link>https://hdl.handle.net/1721.1/41096</link>
<description>Qualitative Knowledge, Causal Reasoning, and the Localization of Failures
Brown, Allen L.
A research program is proposed, the goal of which is a computer system that embodies the knowledge and methodology of a competent radio repairman.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41096</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Video Ergo Scio</title>
<link>https://hdl.handle.net/1721.1/41095</link>
<description>Video Ergo Scio
Marr, David; Hewitt, Carl
An approach to vision research is described that combines ideas about low-level processing with more abstract notions about the representation of knowledge in intelligent systems. A particular problem, the representation of knowledge about the three-dimensional world, is discussed: the outline of a solution is given, and an experimental world of simple mechanical assemblies is described, in which the solution may be implemented and tested. A tentative summary is given of the knowledge that is required for operating in this world, and a research project is proposed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Thu, 01 Nov 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41095</guid>
<dc:date>1973-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>GT40 Utility Programs and the LISP Display Slave</title>
<link>https://hdl.handle.net/1721.1/41094</link>
<description>GT40 Utility Programs and the LISP Display Slave
Beeler, Michael; Cohen, Joseph D.; White, John L.
This memo describes two GT40 programs: URUG, an octal micro-debugger; and VT07, a Datapoint simulator and general display package. There is also a description of the MIT AI LISP display slave, and how it uses VT07 as a remote graphics slave.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41094</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Multi-Scale Generalization of the HoG and HMAX Image Descriptors for Object Detection</title>
<link>https://hdl.handle.net/1721.1/41093</link>
<description>A Multi-Scale Generalization of the HoG and HMAX Image Descriptors for Object Detection
Bileschi, Stanley M
Recently, several powerful image features have been proposed which can be described as spatial histograms of oriented energy. For instance, the HoG, HMAX C1, SIFT, and Shape Context features all represent an input image using a discrete set of bins which accumulate evidence for oriented structures over a spatial region and a range of orientations. In this work, we generalize these techniques to allow for a foveated input image, rather than a rectilinear raster. It will be shown that improved object detection accuracy can be achieved via inputting a spectrum of image measurements, from sharp, fine-scale image sampling within a small spatial region within the target to coarse-scale sampling of a wide field of view around the target. Several alternative feature generation algorithms are proposed and tested which suitably make use of foveated image inputs. In the experiments we show that features generated from the foveated input format produce detectors of greater accuracy, as measured for four object types from commonly available data-sets. Finally, a flexible algorithm for generating features is described and tested which is independent of input topology and uses ICA to learn appropriate filters.
</description>
<pubDate>Wed, 09 Apr 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41093</guid>
<dc:date>2008-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Functions and Frames in the Learning of Structures</title>
<link>https://hdl.handle.net/1721.1/41092</link>
<description>Functions and Frames in the Learning of Structures
Freiling, Michael J.
This paper discusses methods for enhancing the learning abilities of the Winston program, first by representing functional properties of the objects considered, and secondly by embedding individual models in a hierarchically organized system to provide for economy of recognition. An example is presented illustrating the use of these methods.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41092</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Hypothesis-Frame System for Recognition Problems</title>
<link>https://hdl.handle.net/1721.1/41091</link>
<description>A Hypothesis-Frame System for Recognition Problems
Fahlman, Scott E.
This paper proposes a new approach to a broad class of recognition problems ranging from medical diagnosis to vision. The features of this approach include a top-down hypothesize-and-test style and the use of a great deal of high-level knowledge about the subject. This knowledge is packaged into small groups of related facts and procedures called frames.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41091</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Circular Scan</title>
<link>https://hdl.handle.net/1721.1/41090</link>
<description>Circular Scan
Winston, Patrick H.; Lerman, Jerome B.
Previous feature point detectors have been local in their support and have been universally designed for objects without appreciable texture. We have invented (or perhaps reinvented) a scheme using correlation between concentric or osculating circles which shows some promise of being a first step into the texture domain.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Wed, 01 Mar 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41090</guid>
<dc:date>1972-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Aspects of Medical Diagnosis</title>
<link>https://hdl.handle.net/1721.1/41089</link>
<description>Some Aspects of Medical Diagnosis
Sussman, Gerald J.
Since mid July Steve Pauker, Jerome Kassirer, and I (Gerald Jay Sussman) have been observing the diagnostic process of expert physicians with the goal of abstracting the underlying procedures being followed. One purpose of this position paper is to summarize our preliminary conclusions. I will attempt to pinpoint those aspects of the process we feel we understand, and where we are confused or unsure. I will also attempt to indicate some possible theoretical underpinnings of our ideas. Finally, I will propose what I consider to be a coherent research protocol for the development of these ideas.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41089</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Aspects of the Computation Performed by Visual Cortex in the Cat, With a Note on a Function of Lateral Inhibition</title>
<link>https://hdl.handle.net/1721.1/41088</link>
<description>Quantitative Aspects of the Computation Performed by Visual Cortex in the Cat, With a Note on a Function of Lateral Inhibition
Marr, David; Pettigrew, J. D.
A quantitative summary is given of the computation that is performed by visual cortex in the cat. Part of this computation seems to be achieved using a sample-and-average technique; some quantitative features of this technique are briefly set out.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Working Papers are informal papers intended for internal use.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41088</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A scenario of Planning and Debugging in Electronic Circuit Design</title>
<link>https://hdl.handle.net/1721.1/41087</link>
<description>A scenario of Planning and Debugging in Electronic Circuit Design
Sussman, Gerald J.
The purpose of this short document is to exhibit how a HACKER-like top-down planning and debugging system can be applied to the problem of the design and debugging of simple analog electronic circuits. I believe, and I hope to establish, that this kind of processing goes on at all levels of the problem-solving process--from specific, concrete applications, like Electronic Design, through abstract piecing together and debugging of problem-solving strategies.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Working Papers are informal papers intended for internal use.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41087</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Active Knowledge</title>
<link>https://hdl.handle.net/1721.1/41086</link>
<description>Active Knowledge
Freuder, Eugene C.
A progress report on the work described in Vision Flashes 33 and 43 on recognition of real objects. Emphasis is on the "active" use of knowledge in directing the flow of visual processing.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Mon, 01 Oct 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41086</guid>
<dc:date>1973-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracking Wires on Printed Circuit Boards</title>
<link>https://hdl.handle.net/1721.1/41085</link>
<description>Tracking Wires on Printed Circuit Boards
Finin, Tim
This working paper describes a collection of LISP programs written to examine the backs of printed circuit boards. These programs find and trace the conductive wires plated on the insulating material. The "pads", or solder connections between these plated wires and leads from components on the front of the board, are also recognized and located by these programs.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Mon, 01 Oct 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41085</guid>
<dc:date>1973-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>ZigZag Decoding: Combating Hidden Terminals in Wireless Networks</title>
<link>https://hdl.handle.net/1721.1/41084</link>
<description>ZigZag Decoding: Combating Hidden Terminals in Wireless Networks
Katabi, Dina; Gollakota, Shyamnath
This paper presents ZigZag, an 802.11 receiver that combats hidden terminals. ZigZag exploits 802.11 retransmissions which, in the case of hidden terminals, cause successive collisions. Due to asynchrony, these collisions have different interference-free stretches at their start, which ZigZag uses to bootstrap its decoding.  ZigZag makes no changes to the 802.11 MAC and introduces no overhead when there are no collisions. But, when senders collide, ZigZag attains the same throughput as if the colliding packets were a priori scheduled in separate time slots. We build a prototype of ZigZag in GNU Radio. In a testbed of 14 USRP nodes, ZigZag reduces the average packet loss rate at hidden terminals from 82.3% to about 0.7%.
</description>
<pubDate>Tue, 08 Apr 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41084</guid>
<dc:date>2008-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Components on a Circuit Board</title>
<link>https://hdl.handle.net/1721.1/41083</link>
<description>Finding Components on a Circuit Board
Lozano-Perez, Tomas
This paper describes a set of programs written in LISP that recognize resistors on circuit boards. The approach leans heavily on a thorough examination of the features found in representative intensity arrays and on representing the important points procedurally. The programs attempt to exploit evidence as it is gathered. The issues of hypothesis formation and change are considered. This paper represents a continuation of research described in an S.B. thesis of the same title submitted at M.I.T. in June 1973.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Sat, 01 Sep 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41083</guid>
<dc:date>1973-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Does Vision Need a Special-purpose Language?</title>
<link>https://hdl.handle.net/1721.1/41082</link>
<description>Does Vision Need a Special-purpose Language?
Fahlman, Scott E.
This paper briefly discusses the following questions: What are the benefits of special-purpose languages? When is a field ready for such a language? Are any parts of our current vision research ready?
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Sat, 01 Sep 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41082</guid>
<dc:date>1973-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The TRACK Program Package</title>
<link>https://hdl.handle.net/1721.1/41081</link>
<description>The TRACK Program Package
Lerman, Jerome B.; Woodham, Robert J.
A collection of LISP functions has been written to provide vidisector users with the following three line-oriented vision primitives:&#13;
(i) given an initial point and an estimated initial direction, track a line in that direction until the line terminates.&#13;
(ii) given two points, verify the existence of a line joining those two points.&#13;
(iii) given the location of a vertex, find suspect directions for possible lines emanating from that vertex.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Wed, 01 Aug 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41081</guid>
<dc:date>1973-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structured Descriptions</title>
<link>https://hdl.handle.net/1721.1/41080</link>
<description>Structured Descriptions
Gabriel, Richard P.
A descriptive formalism along with a philosophy for its use and expansion are presented wherein descriptions are of a highly structured nature. This descriptive system and the method of recognition are extended to the rudiments of a general system of machine vision.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Wed, 01 Aug 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41080</guid>
<dc:date>1973-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchy in Descriptions</title>
<link>https://hdl.handle.net/1721.1/41079</link>
<description>Hierarchy in Descriptions
Dunlavey, Michael R.
Organization of knowledge requires the flexible use of hierarchy in descriptions. This memo attempts to catalog the issues related to recognizing and executing such descriptions, drawing examples primarily from the blocks world.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Tue, 01 May 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41079</guid>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Package of LISP Functions for Making Movies and Demos</title>
<link>https://hdl.handle.net/1721.1/41078</link>
<description>A Package of LISP Functions for Making Movies and Demos
Lerman, Jerome B.
A collection of functions has been written to allow LISP users to record display calls in a disk file. This file can be UREAD into a small LISP to reproduce the display effects of the program without doing the required computations. Such a file can be regarded as a 'movie' or 'demo' file and can easily be used with the KODAK movie camera to produce a hard copy.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41078</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Suggestions and Advice</title>
<link>https://hdl.handle.net/1721.1/41077</link>
<description>Suggestions and Advice
Freuder, Eugene C.
Results of scene analysis, as they are achieved, direct and advise the flow of subsequent processing.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Thu, 01 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41077</guid>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanical Arm Control</title>
<link>https://hdl.handle.net/1721.1/41076</link>
<description>Mechanical Arm Control
Waters, Richard C.
This paper discusses three main problems associated with the control of the motion of a mechanical arm.&#13;
1) Transformation between different coordinate systems used to describe the state of the arm.&#13;
2) Calculation of detailed trajectories for the arm to follow when moving from point A to B.&#13;
3) Calculation of the forces that must be applied to the joints of the arm to make it move along a specified path.&#13;
Each of the above problems is amenable to exact solution; however, the resulting equations are, in general, too complex to be used in a real-time application. Throughout this paper we investigate methods for getting approximate solutions to these equations.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Mon, 19 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41076</guid>
<dc:date>1973-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>The Gloss of Glossy Things</title>
<link>https://hdl.handle.net/1721.1/41075</link>
<description>The Gloss of Glossy Things
Lavin, Mark A.
This paper discusses the visual phenomenon of gloss. It is shown that the perception of this phenomenon derives from two effects: (1) that the image reflected by a glossy surface lies in a different plane from the surface, and (2) that the highlights in a glossy scene are abnormally bright. The perception of gloss seems to arise as a side effect of depth perception and lightness judgment.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Thu, 01 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41075</guid>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Review of Human Vision Facts</title>
<link>https://hdl.handle.net/1721.1/41074</link>
<description>Review of Human Vision Facts
Ankcorn, John; Horn, Berthold K.P.; Winston, Patrick H.
This note is a collection of well known interesting facts about human vision. All parameters are approximate. Some may be wrong. There are sections on retina physiology, eye optics, light adaptation, psychological curios, color and eyeball movement.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Tue, 20 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41074</guid>
<dc:date>1973-03-20T00:00:00Z</dc:date>
</item>
<item>
<title>Description of Visual Texture by Computers</title>
<link>https://hdl.handle.net/1721.1/41073</link>
<description>Description of Visual Texture by Computers
Gaschnig, John Gary
Some general properties of textures are discussed for a restricted class of textures. A program is described which inputs a scene using a vidisector camera, discerns the texture elements, calculates values for a set of descriptive features for each texture element, and displays the distribution of each feature. The results of the experiments indicate that the descriptive method used may be useful in characterizing more complex textures. This is essentially the content of a Bachelor's thesis completed in June, 1972.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Fri, 09 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41073</guid>
<dc:date>1973-03-09T00:00:00Z</dc:date>
</item>
<item>
<title>Climber: A Vertex-Finder</title>
<link>https://hdl.handle.net/1721.1/41072</link>
<description>Climber: A Vertex-Finder
Slesinger, Steve
A LISP program has been written which returns the location of a vertex in a suspected region, as well as an indication of the certainty of success.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Thu, 01 Feb 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41072</guid>
<dc:date>1973-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Projective Approach to Object Description</title>
<link>https://hdl.handle.net/1721.1/41071</link>
<description>The Projective Approach to Object Description
Hollerbach, John M.
A methodology is presented for generating descriptions of objects from line drawings. Using projection of planes, objects in a scene can be parsed and described at the same time. The descriptions are hierarchical, and lend themselves well to approximation. Possible application to curved objects is discussed.
This paper reproduces a thesis proposal of the same title submitted to the EE Department for the M.S. degree.&#13;
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Fri, 15 Dec 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41071</guid>
<dc:date>1972-12-15T00:00:00Z</dc:date>
</item>
<item>
<title>LIBPMK: A Pyramid Match Toolkit</title>
<link>https://hdl.handle.net/1721.1/41070</link>
<description>LIBPMK: A Pyramid Match Toolkit
Lee, John J.
LIBPMK is a C++ implementation of Grauman and Darrell's pyramid match algorithm. This toolkit provides a flexible framework with which developers can quickly match sets of image features and run experiments. LIBPMK provides functionality for $k$-means and hierarchical clustering, dealing with data sets too large to fit in memory, building multi-resolution histograms, quickly performing pyramid matches, and training and testing support vector machines (SVMs). This report provides a tutorial on how to use the LIBPMK code, and gives the specifications of the LIBPMK API.
</description>
<pubDate>Mon, 07 Apr 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41070</guid>
<dc:date>2008-04-07T00:00:00Z</dc:date>
</item>
<item>
<title>DDD: Density Distribution Determination</title>
<link>https://hdl.handle.net/1721.1/41069</link>
<description>DDD: Density Distribution Determination
Horn, Berthold K.P.
This paper presents a solution to the problem of determining the distribution of an absorbing substance inside a non-opaque non-scattering body from images or ray samplings. It simultaneously solves the problem of determining the distribution of emitting substance in a transparent non-scattering medium. The relation to more common vision problems is discussed.
This is largely a cleaned up version of a solution found some time ago when two other related problems were of interest. The first is the special situation when the density can have only two values, which has been solved for special cases by J. Kloustad. The other is the problem of shape determination from silhouettes, that is, when the density is infinite in a simple region.&#13;
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision Flashes are informal papers intended for internal use.
</description>
<pubDate>Thu, 08 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41069</guid>
<dc:date>1973-03-08T00:00:00Z</dc:date>
</item>
<item>
<title>VISHEM: A bag of "robotics" formulae</title>
<link>https://hdl.handle.net/1721.1/41068</link>
<description>VISHEM: A bag of "robotics" formulae
Horn, Berthold K.P.
Here collected you will find a number of methods for solving certain kinds of "algebraic" problems found in vision and manipulation programs for our AMF arm and our TVC eye. They are collected here to avoid the need to regenerate them when needed and because I wanted to get rid of a large number of loose sheets of paper in my desk. Documented are various methods hidden in a number of old robotics and vision programs. Some are due to Tom Binford and Bill Gosper.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Working papers are informal papers intended for internal use.
</description>
<pubDate>Fri, 01 Dec 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41068</guid>
<dc:date>1972-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition of Real Objects</title>
<link>https://hdl.handle.net/1721.1/41067</link>
<description>Recognition of Real Objects
Freuder, Eugene C.
High level semantic knowledge will be employed in the development of a machine vision program flexible enough to deal with a class of "everyday objects" in varied environments.&#13;
This report is in the nature of a thesis proposal for future work.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</description>
<pubDate>Sun, 01 Oct 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41067</guid>
<dc:date>1972-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Feedback in a Coordinated Hand-Eye System</title>
<link>https://hdl.handle.net/1721.1/41066</link>
<description>Visual Feedback in a Coordinated Hand-Eye System
Woodham, Robert J.
A system is proposed for the development of new techniques for the control and monitoring of a mechanical arm-hand. The use of visual feedback is seen to provide new interactive capabilities in a machine hand-eye system. The proposed system explores the use of visual feedback in such operations as the pouring and stirring of liquids, the location of objects for grasping, and the simple rote learning of new arm motions.
This paper reproduces a thesis proposal of the same title submitted to the Dept. of Electrical Engineering for the degree of Master of Science.&#13;
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</description>
<pubDate>Tue, 01 Aug 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41066</guid>
<dc:date>1972-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Approach to Three-Dimensional Decomposition and Description of Polyhedra</title>
<link>https://hdl.handle.net/1721.1/41065</link>
<description>An Approach to Three-Dimensional Decomposition and Description of Polyhedra
Hollerbach, John M.
This paper presents a description methodology for trihedral planar solids that, as in Roberts' approach, decomposes an object into simpler components. The present approach, however, is more sophisticated and results in a more natural description. Hidden vertices are located in the process of description generation. Also, it is shown how the 3-D coordinates of the vertices can be obtained from the 2-D coordinates.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</description>
<pubDate>Sat, 01 Jul 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41065</guid>
<dc:date>1972-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Summary of Selected Vision Topics</title>
<link>https://hdl.handle.net/1721.1/41064</link>
<description>Summary of Selected Vision Topics
Winston, Patrick H.
This is an introduction to some of the MIT AI vision work of the last few years. The topics discussed are 1) Waltz's work on line drawing semantics, 2) heterarchy, 3) the ancient learning business and 4) copying scenes. All topics are discussed in more detail elsewhere in working papers or theses.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Working papers are informal papers intended for internal use.
</description>
<pubDate>Sat, 01 Jul 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41064</guid>
<dc:date>1972-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shedding Light on Shadows</title>
<link>https://hdl.handle.net/1721.1/41063</link>
<description>Shedding Light on Shadows
Waltz, David L.
This paper describes methods which allow a program to analyze and interpret a variety of scenes made up of polyhedra with trihedral vertices. Scenes may contain shadows, accidental edge alignments, and some missing lines. This work is based on ideas proposed initially by Huffman and Clowes; I have added methods which enable the program to use a number of facts about the physical world to constrain the possible interpretations of a line drawing, and have also introduced a far richer set of descriptions than previous programs have used.
This paper replaces Vision Flash 21.&#13;
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41063</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Program to Output Stored Pictures</title>
<link>https://hdl.handle.net/1721.1/41062</link>
<description>A Program to Output Stored Pictures
Woodham, Robert J.
A program called LPTSEE has been written for use with the MIT vision system. LPTSEE makes use of the overprint capability of the line printer to allow the user to output a stored picture image.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41062</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using the Vidisector and the Store Picture Facility</title>
<link>https://hdl.handle.net/1721.1/41061</link>
<description>Using the Vidisector and the Store Picture Facility
Lerman, Jerome B.
The stored picture facility (FAKETV) allows LISP users, and to some extent machine language users, to access a library of stored images rather than live vidisector scenes. The vidisector functions in LISP have been slightly restructured so that input from stored images or live images can be handled with no changes to the user's program. The procedure for creating stored images is also described.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41061</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Vision Potpourri</title>
<link>https://hdl.handle.net/1721.1/41060</link>
<description>A Vision Potpourri
Finin, Tim
This paper discusses some recent changes and additions to the vision system. Among the additions are the ability to use visual feedback when trying to accurately position an object and the ability to use the arm as a sensory device. Also discussed are some ideas and a description of preliminary work on a particular sort of high level three-dimensional reasoning.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.&#13;
Vision flashes are informal papers intended for internal use.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41060</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Position Extraction Using Stereo Eye Systems with a Relative Rotational Motion Capability</title>
<link>https://hdl.handle.net/1721.1/41059</link>
<description>Visual Position Extraction Using Stereo Eye Systems with a Relative Rotational Motion Capability
Corwin, Daniel W.
This paper discusses the problem of context-free position estimation using a stereo vision system with movable eyes. Exact and approximate equations are developed linking position to measurable quantities of the image space, and an algorithm is outlined in rough form. An estimate of errors and resolution limits is provided.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0003.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41059</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Scenes With Shadows</title>
<link>https://hdl.handle.net/1721.1/41058</link>
<description>Understanding Scenes With Shadows
Waltz, David L.
The basic problem of this research is to find methods which will enable a program to construct a three dimensional interpretation from the line drawing of a scene, where the scene may have shadows and various degeneracies. These methods differ from those used in earlier related programs in that they use region information extensively, and include formalisms for eye and lighting position. The eventual result of this research will be a program which should be able to successfully treat scenes with far fewer restrictions than present programs will tolerate.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense, and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</description>
<pubDate>Mon, 01 Nov 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41058</guid>
<dc:date>1971-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Progress in Extending the VIRGIN Program</title>
<link>https://hdl.handle.net/1721.1/41053</link>
<description>Progress in Extending the VIRGIN Program
Dowson, Mark
The VIRGIN program will interpret pictures of simple scenes. This paper describes a program, SINNER, which will deal with pictures which contain cracks and shadows. In addition to handling pictures of this richer world, SINNER employs heuristics which use knowledge about the structure of the three dimensional world to reduce the number of interpretations of some pictures and to augment the efficiency of the parsing process.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</description>
<pubDate>Wed, 01 Sep 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41053</guid>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding the Skeleton of a Brick*</title>
<link>https://hdl.handle.net/1721.1/41052</link>
<description>Finding the Skeleton of a Brick*
Finin, Tim
TC-SKELETON's duty is to help find the dimensions of brick shaped objects by searching for sets of three complete edges, one for each dimension. The program was originally written by Patrick Winston, and then was refined and improved by Tim Finin.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense, and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0005.&#13;
Reproduction of this document, in whole or in part, is permitted for any purpose of the United States Government.&#13;
This memo was first issued in August 1971 as A.I. Vision Flash 19.
</description>
<pubDate>Thu, 01 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41052</guid>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The FINDSPACE Problem</title>
<link>https://hdl.handle.net/1721.1/41051</link>
<description>The FINDSPACE Problem
Sussman, Gerald Jay
The FINDSPACE problem is that of establishing a volume in space where an object of specified dimensions will fit. The problem seems to have two subproblems: the hypothesis generation problem of finding a likely spot to try, and the verification problem of testing that spot for occupancy by other objects. This paper treats primarily the verification problem.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense, and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</description>
<pubDate>Tue, 03 Aug 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41051</guid>
<dc:date>1971-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Resolving Visual Ambiguity with a Probe</title>
<link>https://hdl.handle.net/1721.1/41050</link>
<description>Resolving Visual Ambiguity with a Probe
Gaschnig, John
The eye-hand robot at the Artificial Intelligence Laboratory now possesses the ability to occasionally copy simple configurations of blocks, using spare parts about whose presence it knows. One problem with which it cannot cope well is that of ambiguous scenes. This paper studies two types of ambiguity present in some scenes, occlusion and illusion, and proposes some ideas about effectively resolving the ambiguities through the use of the hand as an information detection device to work in conjunction with the eye.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense, and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</description>
<pubDate>Thu, 01 Jul 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41050</guid>
<dc:date>1971-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Binford-Horn LINEFINDER</title>
<link>https://hdl.handle.net/1721.1/41049</link>
<description>The Binford-Horn LINEFINDER
Horn, Berthold K.P.
This paper briefly describes the processing performed in the course of producing a line drawing from vidisector information.
</description>
<pubDate>Thu, 01 Jul 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41049</guid>
<dc:date>1971-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wandering About the Top of the Robot</title>
<link>https://hdl.handle.net/1721.1/41048</link>
<description>Wandering About the Top of the Robot
Winston, Patrick H.
Part I of this paper describes some of the new functions in the system. The discussion is seasoned here and there with parenthetical code fragments that may be ignored by readers unfamiliar with PLANNER.&#13;
Part II discusses the scenario evoked in a simple sample copy effort, and Part III provides some technical notes helpful to those who wish to use the system.
</description>
<pubDate>Thu, 01 Jul 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41048</guid>
<dc:date>1971-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>What Corners Look Like</title>
<link>https://hdl.handle.net/1721.1/41047</link>
<description>What Corners Look Like
Dowson, Mark; Waltz, David
An algorithm is presented which provides a way of telling what a given trihedral corner will look like if viewed from a particular angle. The resulting picture is a junction of two or more lines each labelled according to Huffman's convention. Possible extensions of the algorithm are discussed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</description>
<pubDate>Tue, 01 Jun 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41047</guid>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Two Problems in Analyzing Scenes</title>
<link>https://hdl.handle.net/1721.1/41046</link>
<description>Two Problems in Analyzing Scenes
Finin, Tim
This paper is based on a B.S. thesis supervised by Patrick Winston. It deals with some previously unexplored problems in the analysis of visual scenes. The scenes consist of two dimensional line drawings of simple objects such as blocks and wedges. The problems have come out of the work that Patrick Winston has done, and in discussing them I will be assuming the environment of his system. The first problem asks the questions "When is an object standing? When is it lying?" In the course of answering this question a method is developed for determining the relative true dimensions of an object from its two dimensional oblique projection. The second problem develops methods for discovering when an object is in front of another in situations where previous methods have failed.
Work reported herein was conducted at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</description>
<pubDate>Tue, 01 Jun 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41046</guid>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Circular Array Sensors</title>
<link>https://hdl.handle.net/1721.1/41045</link>
<description>Applications of Circular Array Sensors
Trawick, Charles D.
The applications of the Reticon RO-64 annular photo-diode array to the tasks of optical tracking of special targets, direct optical focusing, and automatic printed circuit board inspection were studied. In order to facilitate this work, a digital camera unit incorporating the array was designed and constructed.&#13;
Of the three applications investigated, the tracking task proved to be the most successful, since multiple targets were tracked in real time using the array. In the focusing application, the digital approach was found to be too slow for real-time use, and suggestions were made for the analog implementation of a focusing algorithm using the array. The printed circuit board inspection algorithm detected errors successfully, but the inefficiency of image acquisition with the array is a serious drawback, leading to the conclusion that linear arrays of similar design would provide faster and less expensive inspection.&#13;
Thus the annular geometry is best suited to the one-time sampling of points on a circle in an image, as in the case of the tracking and focusing tasks. The focusing task suffers mainly from the amount of computation required to achieve focus, and from its competition with more established indirect focusing techniques.
Submitted to the Department of Electrical Engineering and Computer Science on January 18, 1980 in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering and Computer Science.
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41045</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Suggestions for Genetic A.I.</title>
<link>https://hdl.handle.net/1721.1/41044</link>
<description>Suggestions for Genetic A.I.
Drescher, Gary L.
This paper presents suggestions for "Genetic A.I.": an attempt to model the genesis of intelligence in human infants, particularly as described by Piaget's theory of the Sensorimotor period. The paper includes a synopsis of Sensorimotor intelligence, followed by preliminary suggestions for a mechanism (the "Schema mechanism") for its development, and a hypothetical Scenario which partially reinterprets Sensorimotor development in terms of that mechanism.&#13;
The Schema mechanism focuses on Piaget's concept of the competition and evolution of mental "schemas." The schema is modelled here as an assertion that one partial state of the mechanism's world-representation is transformable to another via a given action, taken when the schema is "activated". A proposed process of "correlation" allows a schema's assertion to be extended or revised in response to empirically-observed effects of the schema's activation. Correlation uses the formation and activation of schemas to propose and test hypotheses, in contrast with the passive tabulation characteristic of associationist mechanisms. Further features are proposed to enable schemas to become coordinated into composite structures, "compound actions", which can be used by other schemas; and to synthesize new "items" (state-elements) when existing ones prove inadequate to model the world.&#13;
The Scenario outlines how the Schema mechanism might begin to make its way through the progression of Sensorimotor stages; development culminating in Piaget's third stage is discussed. This development includes learning about the visual and tactile effects of eye and hand motions, e.g., learning how to look directly at an object, or to move a hand into view; and the organization of that knowledge to designate the tactile properties of "visual objects", and vice versa, e.g., knowing how to touch an object which is seen, paving the way to a sensory-modality-invariant representation of objects and space.&#13;
The Schema mechanism attempts to "learn from scratch", without built-in expertise or built-in structure in its learning domains. In the past there has been little success among AI programs of this genre. But many such attempts have suffered from mechanisms which were trivial in that they placed the full burden of acquiring and structuring knowledge on one or two simple tricks, whereas, I claim, the present effort shows a willingness to incorporate a multiplicity of elements in a complicated mechanism. In addition, the Schema mechanism benefits from its orientation around a nontrivial theory of development. Piaget gives a comprehensive account of the infant's evolution of primitive problem-solving and domain-specific (chiefly object-manipulation) knowledge; this account is used here as a roadmap that describes the proper course for the mechanism to follow. Thus, there is a nontrivial (or at least nonarbitrary) sequence of target abilities to use as a framework for evaluating and revising the mechanism's performance.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41044</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formalizing the Expertise of the Assembly Language Programmer</title>
<link>https://hdl.handle.net/1721.1/41043</link>
<description>Formalizing the Expertise of the Assembly Language Programmer
Duffey, Roger DuWayne II
A novel compiler strategy for generating high quality code is described. The quality of the code results from reimplementing the program in the target language using knowledge of the program's behavior. The research is a first step towards formalizing the expertise of the assembly language programmer. The ultimate goal is to formalize code generation and implementation techniques in the same way that parsing techniques have been formalized. An experimental code generator based on the reimplementation strategy will be constructed. The code generator will provide a framework for analyzing the costs, applicability, and effectiveness of various implementation techniques. Several common code generation problems will be studied. Code written by experienced programmers and code generated by a conventional optimizing compiler will provide standards of comparison.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41043</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Operating the Lisp Machine</title>
<link>https://hdl.handle.net/1721.1/41042</link>
<description>Operating the Lisp Machine
Moon, David A.; Wechsler, Allan C.
This document is a draft copy of a portion of the Lisp Machine window system manual. It is being published in this form now to make it available, since the complete window system manual is unlikely to be finished in the near future. The information in this document is accurate as of system 67, but is not guaranteed to remain 100% accurate.&#13;
This document explains how to use the Lisp Machine from a non-programmer's point of view. It explains the general characteristics of the user interface, particularly the window system and the program-control commands. This document is intended to tell you everything you need to know to sit down at a Lisp machine and run programs, but does not deal with the writing of programs. Many arcane commands and user-interface features are also documented herein, although the beginning user can safely ignore them.
</description>
<pubDate>Wed, 01 Apr 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41042</guid>
<dc:date>1981-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of Thread Memory in Amnesic Aphasia and Concept Learning.(note 0)</title>
<link>https://hdl.handle.net/1721.1/41041</link>
<description>The Use of Thread Memory in Amnesic Aphasia and Concept Learning.(note 0)
Vaina, Lucia M.; Greenblatt, Richard D.
We propose a new type of semantic memory, called thread memory. The primitives of this memory are threads, defined as keyed, multilink, loop-free chains which link semantic nodes. All links run from superordinate categories to subordinate categories. This is the opposite of the usual tree structure, in which brother nodes share the structure above their common ancestors. The most valuable feature of thread memory is its capacity to learn. A program that learns concepts using children's primer books as data was written by R. Greenblatt and runs on the LISP-MACHINE at the MIT-AI Laboratory. We have taken thread memory as a working hypothesis for exploring the mechanisms of naming deficits in aphasia and possible approaches to rehabilitation.
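As a rough illustration of the proposed structure (class name and API are hypothetical, not from the memo), a thread can be sketched as an ordered chain running from general to specific categories:

```python
# Hypothetical sketch of thread memory: each thread is a loop-free chain
# of semantic nodes running from superordinate to subordinate categories.
class ThreadMemory:
    def __init__(self):
        self.threads = []  # each thread: a list of nodes, general -> specific

    def add_thread(self, chain):
        assert len(set(chain)) == len(chain), "threads must be loop-free"
        self.threads.append(list(chain))

    def superordinates(self, node):
        """Collect every category above `node` on any thread containing it."""
        found = set()
        for t in self.threads:
            if node in t:
                found.update(t[:t.index(node)])
        return found

mem = ThreadMemory()
mem.add_thread(["thing", "animal", "dog", "collie"])
mem.add_thread(["thing", "animal", "dog", "terrier"])
print(sorted(mem.superordinates("dog")))  # ['animal', 'thing']
```

In this toy version the two threads repeat their superordinate structure rather than pointing at a shared ancestor node, which is one way to read the contrast with the usual tree structure.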
</description>
<pubDate>Wed, 05 Sep 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41041</guid>
<dc:date>1979-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Lisp Machine Choice Facilities</title>
<link>https://hdl.handle.net/1721.1/41040</link>
<description>Lisp Machine Choice Facilities
Moon, David A.
This document is a draft copy of a portion of the Lisp Machine window system manual. It is being published in this form now to make it available, since the complete window system manual is unlikely to be finished in the near future. The information in this document is accurate as of system 70, but is not guaranteed to remain 100% accurate. Understanding some portions of this document may depend on background information which is not contained in any published documentation.&#13;
The window system contains several facilities to allow the user to make choices. These all work by displaying some arrangement of choices in a window; by pointing to one with the mouse the user can select it. This document explains what the various facilities are, how to use them, and how to customize them for your own purposes.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41040</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conceptual Phrases and Deterministic English Parsing</title>
<link>https://hdl.handle.net/1721.1/41039</link>
<description>Conceptual Phrases and Deterministic English Parsing
Dill, David
The grammar of many of the lower-level constituents of grammatical structures in English has not been an area of exciting new linguistic discovery, in contrast with the study of clause-level constituents. The syntax of these conceptual phrases, as they are termed here, seems to be somewhat ad hoc, which presents problems for their specification for the purpose of computer understanding of natural language.&#13;
This report concludes that their irregular behavior stems from a closer relationship between syntax and semantics than is found in other English constructs. Conceptual phrases all correspond to objects in a single, tightly constrained semantic class, and as a result, semantic knowledge about them can be used to 'optimize' the process of communicating them.&#13;
The unique nature of conceptual phrases is exploited to provide a combined syntactic and semantic description for them, consisting of syntactically augmented frames, that is much simpler than individual syntactic or semantic descriptions. An example representation for numbers is given, along with an analysis of some problems that occur when a practical implementation is attempted.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Wed, 01 Aug 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41039</guid>
<dc:date>1979-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exact Reproduction of Colored Images</title>
<link>https://hdl.handle.net/1721.1/41038</link>
<description>Exact Reproduction of Colored Images
Horn, Berthold K.P.
The problem of producing a colored image from a colored original is analyzed. Conditions are determined for the production of an image, in which the colors cannot be distinguished from those in the original by a human observer. If the final image is produced by superposition of controlled amounts of colored lights, only a simple linear transform need be applied to the outputs of the image sensors to produce the control inputs required for the image generators. In systems which depend instead on control of the concentration or fractional area covered by colored dyes, a more difficult computation is called for. This calculation may for practical purposes be expressed in table look-up form.&#13;
The conditions for exact reproduction of colored images should prove useful in the design and analysis of image processing systems whose final output is intended for human viewing. Judging by the design of many existing systems, these rules are not generally known or adhered to. Modern computational techniques make it practical to tackle this problem now. Adherence to the design constraints developed here is of particular importance where colors are to be judged when the original is not directly accessible to the observer as, for example, when it is on another planet.
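For the additive (superposition-of-lights) case, the claimed linear relationship can be sketched as follows; the 3x3 matrix here is an arbitrary illustrative sensor model, not data from the paper:

```python
import numpy as np

# Assumed 3x3 matrix: column j gives the sensor response to generator primary j.
M = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.85, 0.10],
              [0.00, 0.10, 0.90]])
T = np.linalg.inv(M)  # the "simple linear transform" applied to sensor outputs

sensor = np.array([0.4, 0.5, 0.2])  # measured tristimulus values of the original
controls = T @ sensor               # control inputs for the image generators
reproduced = M @ controls           # light the superposition then produces
print(np.allclose(reproduced, sensor))  # True: the reproduction matches
```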
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41038</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Steps Toward a Psycholinguistic Model of Language Production</title>
<link>https://hdl.handle.net/1721.1/41037</link>
<description>Steps Toward a Psycholinguistic Model of Language Production
McDonald, David D.
This paper discusses what it would mean to have a psychological model of the language production process: what such a model would have to account for, what it would use as evidence. It outlines and motivates one particular model including: presumptions about the input to the process, a characterization of language production as a process of selection under constraint, and the principal stipulations of the model. This paper is an introduction, which is largely nontechnical and uses only simple examples. A detailed presentation of the architecture of the model, its grammar, and its interface to the speaker will be forthcoming in other papers.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41037</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating a Semantic Network in LMS</title>
<link>https://hdl.handle.net/1721.1/41036</link>
<description>Simulating a Semantic Network in LMS
Koton, Phyllis A.
A semantic network is a collection of nodes and the links between them. The nodes represent concepts, functions and entities, and the links represent relationships between various nodes. Any semantic network must be supplied with a language of conventions for representing knowledge as nodes and links in the network, so that storage and retrieval of knowledge can be carried out efficiently.&#13;
This thesis examines two approaches to the problem of representing real-world knowledge in a computer: one designed for use on serial computers, the other designed to run on a parallel network machine. The two formalisms are shown to be nearly identical, and a simulation of the parallel language in the serial language is given.
Submitted to the Department of Electrical Engineering and Computer Science on January 1, 1980 in partial fulfillment of the requirements for the Degree of Bachelor of Science.
</description>
<pubDate>Mon, 29 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41036</guid>
<dc:date>1980-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>Logical Control Theory Applied to Mechanical Arms</title>
<link>https://hdl.handle.net/1721.1/41035</link>
<description>Logical Control Theory Applied to Mechanical Arms
Pankiewicz, Ronald Joseph
A new control algorithm based upon Logical Control Theory is developed for mechanical manipulators. The controller uses discrete tessellations of state space and a finite set of fixed torques to regulate non-rehearsed movements in real time. Varying effective inertia, coupling between degrees of freedom, and frictional, gravitational and Coriolis forces are readily handled. A logical controller was implemented on a mini-computer for the MIT Scheinman Vicarm. The controller's performance compares favorably with that of controllers designed according to existing methodologies as used, for example, in the control of present day industrial manipulators.
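A toy sketch of the tessellation idea (all values and names hypothetical, not from the thesis): state space is quantized into discrete cells, and each cell indexes one of a finite set of fixed torques:

```python
# Hypothetical sketch: quantize (position, velocity) into a grid cell,
# then look up a fixed torque for that cell.
def cell(position, velocity, grid=0.5):
    return (int(position // grid), int(velocity // grid))

def controller(position, velocity, table):
    # Unlisted cells default to zero torque in this toy version.
    return table.get(cell(position, velocity), 0.0)

table = {(0, 0): -1.0, (0, -1): 1.0}  # hand-built control table (illustrative)
print(controller(0.2, 0.1, table))    # -1.0
```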
Submitted to the Department of Electrical Engineering and Computer Science on January 19, 1979 in partial fulfillment of the requirements for the Degrees of Master of Science and Electrical Engineer.&#13;
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.&#13;
Thesis supervisor:&#13;
Berthold K. P. Horn,&#13;
Associate Professor of Electrical Engineering and Computer Science
</description>
<pubDate>Thu, 01 Feb 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41035</guid>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Last Whole XGP Font Catalog</title>
<link>https://hdl.handle.net/1721.1/41034</link>
<description>The Last Whole XGP Font Catalog
Christman, David P.; Sjoberg, Robert W.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41034</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Numerical Method for Shape-From-Shading From A Single Image</title>
<link>https://hdl.handle.net/1721.1/41033</link>
<description>A Numerical Method for Shape-From-Shading From A Single Image
Strat, Thomas M.
The shape of an object can be determined from the shading in a single image by solving a first-order, non-linear partial differential equation. The method of characteristics can be used to do this, but it suffers from a number of theoretical difficulties and implementation problems. This thesis presents an iterative relaxation algorithm for solving this equation on a grid of points. Here, repeated local computations eventually lead to a global solution.&#13;
The algorithm solves for the surface orientation at each point by employing an iterative relaxation scheme. The constraint of surface smoothness is achieved while simultaneously satisfying the constraints imposed by the equation of image illumination. The algorithm has the distinct advantage of being capable of handling any reflectance function whether analytically or empirically specified.&#13;
Included are brief overviews of some of the more important shape-from-shading algorithms in existence and a list of potential applications of this iterative approach to several image domains including scanning electron microscopy, remote sensing of topography and industrial inspection.
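The "repeated local computations" idea can be illustrated with a toy one-dimensional relaxation (an analogy only, not the thesis algorithm): local averaging with boundary values held fixed converges to a globally smooth solution.

```python
# Toy relaxation: sweep repeatedly, replacing each free value by the
# average of its neighbours; values at `fixed` indices are held constant.
def relax(values, fixed, iters=2000):
    v = list(values)
    for _ in range(iters):
        for i in range(len(v)):
            if i not in fixed and 0 < i < len(v) - 1:
                v[i] = 0.5 * (v[i - 1] + v[i + 1])
    return v

# Endpoints fixed at 0 and 1; the interior relaxes to the straight line.
result = relax([0.0, 5.0, -3.0, 7.0, 1.0], fixed={0, 4})
print([round(x, 3) for x in result])  # [0.0, 0.25, 0.5, 0.75, 1.0]
```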
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643.&#13;
Thesis Supervisor: Berthold K. P. Horn&#13;
Title: Associate Professor of Electrical Engineering and Computer Science
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41033</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Worms of Ganymedes - Hazards of Image "Restoration"</title>
<link>https://hdl.handle.net/1721.1/41032</link>
<description>Worms of Ganymedes - Hazards of Image "Restoration"
Horn, Berthold K.P.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/41032</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cognitive Security for Personal Devices</title>
<link>https://hdl.handle.net/1721.1/40810</link>
<description>Cognitive Security for Personal Devices
Greenstadt, Rachel; Beal, Jacob
Humans should be able to think of computers as extensions of their body, as craftsmen do with their tools. Current security models, however, are too unlike those used in human minds---for example, computers authenticate users by challenging them to repeat a secret rather than by continually observing the many subtle cues offered by their appearance and behavior. We propose three lines of research that can be combined to produce cognitive security on computers and other personal devices: imprinting and continuously deployed multi-modal biometrics, self-protection through virtualization and trusted computing, and adjustably autonomous security.
</description>
<pubDate>Mon, 17 Mar 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40810</guid>
<dc:date>2008-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>A Fair Power Domain for Actor Computations</title>
<link>https://hdl.handle.net/1721.1/40809</link>
<description>A Fair Power Domain for Actor Computations
Clinger, Will
Actor-based languages feature extreme concurrency, allow side effects, and specify a form of fairness which permits unbounded nondeterminism. This makes it difficult to provide a satisfactory mathematical foundation for the semantics.&#13;
Due to the high degree of parallelism, an oracle semantics would be intractable. A weakest precondition semantics is out of the question because of the possibility of unbounded nondeterminism. The most attractive approach, fixed point semantics using power domains, has not been helpful because the available power domain constructions, although very general, seemed to deal inadequately with fairness.&#13;
By taking advantage of the relatively complex structure of the actor computation domain C, however, a power domain P(C) can be defined which is similar to Smyth's weak power domain but richer. Actor systems, which are collections of mutually recursive primitive actors with side effects, may be assigned meanings as least fixed points of their associated continuous functions acting on this power domain. Given a denotation A ∈ P(C), the set of possible complete computations of the actor system it represents is the set of least upper bounds of a certain set of "fair" chains in A, and this set of chains is definable within A itself without recourse to oracles or an auxiliary interpretive semantics.&#13;
It should be emphasized that this power domain construction is not nearly as generally applicable as those of Plotkin [Pl] and Smyth [Sm], which can be used with any complete partial order. Fairness seems to require that the domain from which the power domain is to be constructed contain sufficient operational information.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Office of Naval Research of the Department of Defense under Contract N00014-75-C-0522.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40809</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trajectory Analysis and Semantic Region Modeling Using A Nonparametric Bayesian Model</title>
<link>https://hdl.handle.net/1721.1/40808</link>
<description>Trajectory Analysis and Semantic Region Modeling Using A Nonparametric Bayesian Model
Grimson, Eric; Wang, Xiaogang; Ng, Gee-Wah; Ma, Keng Teck
We propose a novel nonparametric Bayesian model, Dual Hierarchical Dirichlet Processes (Dual-HDP), for unsupervised trajectory analysis and semantic region modeling in surveillance settings. In our approach, trajectories are treated as documents and observations of an object on a trajectory are treated as words in a document. Trajectories are clustered into different activities, and abnormal trajectories are detected as samples with low likelihoods. Semantic regions, i.e., intersections of paths commonly taken by objects and related to activities in the scene, are also modeled. Dual-HDP advances the existing Hierarchical Dirichlet Processes (HDP) language model: HDP only clusters co-occurring words from documents into topics and automatically decides the number of topics, whereas Dual-HDP co-clusters both words and documents, learning both the number of word topics and the number of document clusters from data. Under our problem settings, HDP only clusters observations of objects, while Dual-HDP clusters both observations and trajectories. Experiments are evaluated on two data sets: radar tracks collected from a maritime port and visual tracks collected from a parking lot.
</description>
<pubDate>Tue, 24 Jun 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40808</guid>
<dc:date>2008-06-24T00:00:00Z</dc:date>
</item>
<item>
<title>Two-stage Optimization Approach to Robust Model Predictive Control with a Joint Chance Constraint</title>
<link>https://hdl.handle.net/1721.1/40804</link>
<description>Two-stage Optimization Approach to Robust Model Predictive Control with a Joint Chance Constraint
Ono, Masahiro; Williams, Brian C.
When controlling dynamic systems such as mobile robots in uncertain environments, there is a trade-off between risk and reward. For example, a race car can turn a corner faster by taking a more challenging path. This paper proposes a new approach to planning a control sequence with a guaranteed risk bound. Given a stochastic dynamic model, the problem is to find a control sequence that optimizes a performance metric while satisfying chance constraints, i.e., constraints on the upper bound of the probability of failure. We propose a two-stage optimization approach, with the upper stage optimizing the risk allocation and the lower stage calculating the optimal control sequence that maximizes the reward. In general, the upper-stage problem is non-convex and hard to solve. We develop a new iterative algorithm for this stage that efficiently computes the risk allocation with a small penalty to optimality. The algorithm is implemented and tested on the autonomous underwater vehicle (AUV) depth planning problem, demonstrating substantial improvement in computation cost and suboptimality compared to prior art.
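One common way to set up such a decomposition (a hedged sketch via Boole's inequality, not the paper's algorithm) is to split the joint bound Delta into per-constraint risks delta_i with sum(delta_i) = Delta, and let the upper stage shift risk toward the binding constraints:

```python
# Sketch: decompose a joint chance constraint P(any failure) <= Delta into
# individual bounds delta_i with sum(delta_i) = Delta (union bound), then
# reallocate a fraction alpha of the risk on inactive steps to active ones.
def reallocate(deltas, active, alpha=0.5):
    freed = sum(alpha * d for i, d in enumerate(deltas) if i not in active)
    new = [d if i in active else (1 - alpha) * d for i, d in enumerate(deltas)]
    share = freed / len(active)
    return [d + share if i in active else d for i, d in enumerate(new)]

Delta = 0.1
deltas = [Delta / 4] * 4                  # start from a uniform allocation
deltas = reallocate(deltas, active={1, 2})
print(round(sum(deltas), 10))             # 0.1 -- total risk bound preserved
```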
</description>
<pubDate>Thu, 06 Mar 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40804</guid>
<dc:date>2008-03-06T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Motion Planning Algorithm for Stochastic Dynamic Systems with Constraints on Probability of Failure</title>
<link>https://hdl.handle.net/1721.1/40803</link>
<description>Efficient Motion Planning Algorithm for Stochastic Dynamic Systems with Constraints on Probability of Failure
Ono, Masahiro; Williams, Brian C.
When controlling dynamic systems such as mobile robots in uncertain environments, there is a trade-off between risk and reward. For example, a race car can turn a corner faster by taking a more challenging path. This paper proposes a new approach to planning a control sequence with a guaranteed risk bound. Given a stochastic dynamic model, the problem is to find a control sequence that optimizes a performance metric while satisfying chance constraints, i.e., constraints on the upper bound of the probability of failure. We propose a two-stage optimization approach, with the upper stage optimizing the risk allocation and the lower stage calculating the optimal control sequence that maximizes the reward. In general, the upper-stage problem is non-convex and hard to solve. We develop a new iterative algorithm for this stage that efficiently computes the risk allocation with a small penalty to optimality. The algorithm is implemented and tested on the autonomous underwater vehicle (AUV) depth planning problem, demonstrating substantial improvement in computation cost and suboptimality compared to prior art.
</description>
<pubDate>Thu, 06 Mar 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40803</guid>
<dc:date>2008-03-06T00:00:00Z</dc:date>
</item>
<item>
<title>The J%JOIN Package</title>
<link>https://hdl.handle.net/1721.1/40802</link>
<description>The J%JOIN Package
Griffith, Arnold K.
The J%JOIN program creates links between the elements of a set of line segments on the basis of their geometric proximity. According to the value of the third argument, (T or NIL), the program will either place a set of links in an array, suitable for use by the program P%PURPOSE, or will return a set of "re-adjusted" line segments with the property that lines apparently converging on a common vertex are assigned identical end points at the appropriate ends. Twelve geometric parameters are used to control the joining procedure.&#13;
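The endpoint re-adjustment can be illustrated with a small sketch (hypothetical Python, not the original program; the single `eps` threshold stands in for the memo's twelve geometric parameters):

```python
# Sketch: endpoints of different segments that fall within `eps` of each
# other are treated as converging on a common vertex and snapped to the
# cluster centroid, so joined lines share identical end points.
def join_segments(segments, eps=1.0):
    pts = [p for seg in segments for p in seg]
    clusters = []
    for p in pts:
        for c in clusters:
            if all(abs(p[0] - q[0]) <= eps and abs(p[1] - q[1]) <= eps for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    snap = {}
    for c in clusters:
        cx = sum(p[0] for p in c) / len(c)
        cy = sum(p[1] for p in c) / len(c)
        for p in c:
            snap[p] = (cx, cy)
    return [(snap[a], snap[b]) for a, b in segments]

joined = join_segments([((0, 0), (10, 0.2)), ((10.3, 0), (20, 5))])
print(joined[0][1] == joined[1][0])  # True: near-coincident ends now meet
```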
Starred sections (*) are for reference only; J%JOIN may be successfully used by someone familiar with only the unstarred sections of this memo.
Work reported herein was supported by the Artificial Intelligence Laboratory, an M.I.T. research program sponsored by the Advanced Research Projects Agency of the Department of Defense under office of Naval Research contract number N00014-70-A-0362-0002.
</description>
<pubDate>Fri, 02 Apr 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40802</guid>
<dc:date>1971-04-02T00:00:00Z</dc:date>
</item>
<item>
<title>The Line Proposer P%PROPOSE1, and Additional Notes on "F%FEATUREPOINTS" and "GVERIFY1"</title>
<link>https://hdl.handle.net/1721.1/40801</link>
<description>The Line Proposer P%PROPOSE1, and Additional Notes on "F%FEATUREPOINTS" and "GVERIFY1"
Griffith, Arnold K.
The line proposer P%PROPOSE1 is described in the first part of this memo. It makes use of links provided by the J%JOIN program, in proposing possibly missing lines in a line drawing of simple plane-faced objects. The remainder of this paper updates the descriptions of "F%FEATUREPOINTS" and "GVERIFY1" given in flashes #3 and #2 respectively.
Work reported herein was supported by the Artificial Intelligence Laboratory, an M.I.T. research program sponsored by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract number N00014-70-A-0362-0002.
</description>
<pubDate>Fri, 02 Apr 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40801</guid>
<dc:date>1971-04-02T00:00:00Z</dc:date>
</item>
<item>
<title>What's What</title>
<link>https://hdl.handle.net/1721.1/40800</link>
<description>What's What
Winston, Patrick H.
An outline of the modules used in the copy demonstration, the reasons for doing robotics, and some possible directions for further work.
</description>
<pubDate>Mon, 01 Mar 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40800</guid>
<dc:date>1971-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heterarchy in the M.I.T. Robot</title>
<link>https://hdl.handle.net/1721.1/40799</link>
<description>Heterarchy in the M.I.T. Robot
Winston, Patrick H.
Work reported herein was conducted at the Artificial Intelligence Laboratory, an M.I.T. research program supported by the Advanced Research Projects Agency of the Department of Defense and was monitored by the Office of Naval Research under Contract Number N00014-70-A-0362-0002.
</description>
<pubDate>Mon, 01 Mar 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40799</guid>
<dc:date>1971-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Use .VSCAN</title>
<link>https://hdl.handle.net/1721.1/40798</link>
<description>How to Use .VSCAN
Griffith, Arnold K.
</description>
<pubDate>Mon, 01 Mar 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40798</guid>
<dc:date>1971-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transfer learning for image classification with sparse prototype representations</title>
<link>https://hdl.handle.net/1721.1/40797</link>
<description>Transfer learning for image classification with sparse prototype representations
Quattoni, Ariadna; Collins, Michael; Darrell, Trevor
To learn a new visual category from few examples, prior knowledge from unlabeled data as well as previous related categories may be useful. We develop a new method for transfer learning which exploits available unlabeled data and an arbitrary kernel function; we form a representation based on kernel distances to a large set of unlabeled data points. To transfer knowledge from previous related problems, we observe that a category might be learnable using only a small subset of reference prototypes. Related problems may share a significant number of relevant prototypes; we find such a reduced representation by performing a joint loss minimization over the training sets of related problems with a shared regularization penalty that minimizes the total number of prototypes involved in the approximation. This optimization problem can be formulated as a linear program that can be solved efficiently. We conduct experiments on a news-topic prediction task where the goal is to predict whether an image belongs to a particular news topic. Our results show that when only a few examples are available for training a target topic, leveraging knowledge learnt from other topics can significantly improve performance.
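The representation step can be sketched as follows (a minimal illustration; the prototypes, query point, and `gamma` are made-up values, and the paper's joint sparse optimization is not shown):

```python
import math

# Each image is re-represented by its kernel values to a set of
# unlabeled reference points ("prototypes").
def rbf(x, z, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def prototype_features(x, prototypes, gamma=0.5):
    return [rbf(x, p, gamma) for p in prototypes]

prototypes = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]  # unlabeled data points
feats = prototype_features((0.5, 0.0), prototypes)
print(feats)  # the nearest prototype yields the largest feature value
```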
</description>
<pubDate>Mon, 03 Mar 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40797</guid>
<dc:date>2008-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Grammatical Models for Object Recognition</title>
<link>https://hdl.handle.net/1721.1/40288</link>
<description>Learning Grammatical Models for Object Recognition
Aycinena, Meg; Kaelbling, Leslie Pack; Lozano-Perez, Tomas
Many object recognition systems are limited by their inability to share common parts or structure among related object classes. This capability is desirable because it allows information about parts and relationships in one object class to be generalized to other classes for which it is relevant. With this goal in mind, we have designed a representation and recognition framework that captures structural variability and shared part structure within and among object classes. The framework uses probabilistic geometric grammars (PGGs) to represent object classes recursively in terms of their parts, thereby exploiting the hierarchical and substitutive structure inherent to many types of objects. To incorporate geometric and appearance information, we extend traditional probabilistic context-free grammars to represent distributions over the relative geometric characteristics of object parts as well as the appearance of primitive parts. We describe an efficient dynamic programming algorithm for object categorization and localization in images given a PGG model. We also develop an EM algorithm to estimate the parameters of a grammar structure from training data, and a search-based structure learning approach that finds a compact grammar to explain the image data while sharing substructure among classes. Finally, we describe a set of experiments that demonstrate empirically that the system provides a performance benefit.
</description>
<pubDate>Mon, 25 Feb 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40288</guid>
<dc:date>2008-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Transport-Level Characteristics of Spam</title>
<link>https://hdl.handle.net/1721.1/40287</link>
<description>Exploiting Transport-Level Characteristics of Spam
Beverly, Robert; Sollins, Karen
In the arms race to secure electronic mail users and servers from unsolicited messages (spam), the most successful solutions employ techniques that are difficult for spammers to circumvent. This research investigates the transport-layer characteristics of email in order to provide a novel and robust defense against spam. We find that spam SMTP flows exhibit TCP behavior consistent with traffic competing for link access, large round trip times and resource-constrained hosts. Thus, SMTP flow characteristics provide sufficient statistical power to differentiate between spam and legitimate mail (ham). We build "SpamFlow" to learn and exploit these differences. Using machine learning feature selection we identify the most discriminatory flow properties and effect greater than 90% spam classification accuracy without content or reputation analysis. SpamFlow correctly identifies 78% of the false negatives generated by a popular content filtering application, demonstrating the power in combining SpamFlow with existing techniques. Finally, we argue that SpamFlow is not easily subvertible due to economic and practical constraints inherent in sourcing spam.
</description>
<pubDate>Fri, 15 Feb 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40287</guid>
<dc:date>2008-02-15T00:00:00Z</dc:date>
</item>
<item>
<title>Unsupervised Distributed Feature Selection for Multi-view Object Recognition</title>
<link>https://hdl.handle.net/1721.1/40286</link>
<description>Unsupervised Distributed Feature Selection for Multi-view Object Recognition
Christoudias, C. Mario; Urtasun, Raquel; Darrell, Trevor
Object recognition accuracy can be improved when information from multiple views is integrated, but information in each view can often be highly redundant. We consider the problem of distributed object recognition or indexing from multiple cameras, where the computational power available at each camera sensor is limited and communication between sensors is prohibitively expensive. In this scenario, it is desirable to avoid sending redundant visual features from multiple views, but traditional supervised feature selection approaches are inapplicable as the class label is unknown at the camera. In this paper we propose an unsupervised multi-view feature selection algorithm based on a distributed compression approach. With our method, a Gaussian Process model of the joint view statistics is used at the receiver to obtain a joint encoding of the views without directly sharing information across encoders. We demonstrate our approach on recognition and indexing tasks with multi-view image databases and show that our method compares favorably to an independent encoding of the features from each camera.
</description>
<pubDate>Sun, 17 Feb 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40286</guid>
<dc:date>2008-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Making Medical Records More Resilient</title>
<link>https://hdl.handle.net/1721.1/40285</link>
<description>Making Medical Records More Resilient
Rudin, Robert
Hurricane Katrina showed that the current methods for handling medical records are minimally resilient to large-scale disasters. This research presents a preliminary model for measuring the resilience of medical records systems against public policy goals and uses the model to illuminate the current state of medical record resilience. From this analysis, three recommendations for making medical records more resilient are presented: 1) Federal and state governments should use the preliminary resilience model introduced here as the basis for compliance requirements for electronic medical record technical architectures. 2) Regional Health Information Organizations (RHIOs) should consider offering disaster management services to healthcare organizations; this will help RHIOs create sustainable business models. 3) Storage companies should consider developing distributed storage solutions based on Distributed Hash Table (DHT) technology for medical record storage. Distributed storage would alleviate public concerns over the privacy of centralized medical record storage. Empirical evidence demonstrating the performance of DHT technology is presented using a prototype medical record system.
</description>
<pubDate>Sun, 17 Feb 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40285</guid>
<dc:date>2008-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Wicked Problems and Gnarly Results: Reflecting on Design and Evaluation Methods for Idiosyncratic Personal Information Management Tasks</title>
<link>https://hdl.handle.net/1721.1/40281</link>
<description>Wicked Problems and Gnarly Results: Reflecting on Design and Evaluation Methods for Idiosyncratic Personal Information Management Tasks
Bernstein, Michael; Van Kleek, Max; Khushraj, Deepali; Nayak, Rajeev; Liu, Curtis; schraefel, mc; Karger, David R.
This paper is a case study of an artifact design and evaluation process; it is a reflection on how right thinking about design methods may at times result in sub-optimal results. Our goal has been to assess our decision-making process throughout the design and evaluation stages for a software prototype in order to consider where design methodology may need to be tuned to be more sensitive to the domain of practice, in this case software evaluation in personal information management. In particular, we reflect on design methods around (1) scale of prototype, (2) prototyping and design process, (3) study design, and (4) study population.
</description>
<pubDate>Sun, 10 Feb 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40281</guid>
<dc:date>2008-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Bugs In Dynamic Web Applications</title>
<link>https://hdl.handle.net/1721.1/40249</link>
<description>Finding Bugs In Dynamic Web Applications
Artzi, Shay; Kiezun, Adam; Dolby, Julian; Tip, Frank; Dig, Danny; Paradkar, Amit; Ernst, Michael D.
Web script crashes and malformed dynamically-generated web pages are common errors, and they seriously impact the usability of web applications. Current tools for web-page validation cannot handle the dynamically-generated pages that are ubiquitous on today's Internet. In this work, we apply a dynamic test generation technique, based on combined concrete and symbolic execution, to the domain of dynamic web applications. The technique generates tests automatically and minimizes the bug-inducing inputs to reduce duplication and to make the bug reports small and easy to understand and fix. We implemented the technique in Apollo, an automated tool that found dozens of bugs in real PHP applications. Apollo generates test inputs for the web application, monitors the application for crashes, and validates that the output conforms to the HTML specification. This paper presents Apollo's algorithms and implementation, and an experimental evaluation that revealed a total of 214 bugs in 4 open-source PHP web applications.
</description>
<pubDate>Wed, 06 Feb 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40249</guid>
<dc:date>2008-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>WaveScript: A Case-Study in Applying a Distributed Stream-Processing Language</title>
<link>https://hdl.handle.net/1721.1/40095</link>
<description>WaveScript: A Case-Study in Applying a Distributed Stream-Processing Language
Newton, Ryan; Girod, Lewis; Craig, Michael; Madden, Sam; Morrisett, Greg
Applications that combine live data streams with embedded, parallel, and distributed processing are becoming more commonplace. WaveScript is a domain-specific language that brings high-level, type-safe, garbage-collected programming to these domains. This is made possible by three primary implementation techniques. First, we employ a novel evaluation strategy that uses a combination of interpretation and reification to partially evaluate programs into stream dataflow graphs. Second, we use profile-driven compilation to enable many optimizations that are normally only available in the synchronous (rather than asynchronous) dataflow domain. Finally, we incorporate an extensible system for rewrite rules to capture algebraic properties in specific domains (such as signal processing). We have used our language to build and deploy a sensor network for the acoustic localization of wild animals, in particular the Yellow-Bellied marmot. We evaluate WaveScript's performance on this application, showing that it yields good performance on both embedded and desktop-class machines, including distributed execution and substantial parallel speedups. Our language allowed us to implement the application rapidly, while outperforming a previous C implementation by over 35%, using fewer than half the lines of code. We evaluate the contribution of our optimizations to this success.
</description>
<pubDate>Thu, 31 Jan 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40095</guid>
<dc:date>2008-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Cabernet: A Content Delivery Network for Moving Vehicles</title>
<link>https://hdl.handle.net/1721.1/40094</link>
<description>Cabernet: A Content Delivery Network for Moving Vehicles
Eriksson, Jakob; Balakrishnan, Hari; Madden, Sam
This paper describes the design, implementation, and evaluation of Cabernet, a system to deliver data to and from moving vehicles using open 802.11 (WiFi) access points encountered opportunistically during travel. Network connectivity in Cabernet is both fleeting (access points are typically within range for a few seconds) and intermittent (because the access points don't provide continuous coverage), and suffers from high packet loss rates over the wireless channel. On the positive side, in the absence of losses, achievable data rates over WiFi can reach many megabits per second. Unfortunately, current protocols don't establish end-to-end connectivity fast enough, don't cope well with intermittent connectivity, and don't handle high packet loss rates well enough to achieve this potential throughput. Cabernet incorporates two new techniques to improve data delivery throughput: QuickWifi, a streamlined client-side process to establish end-to-end connectivity quickly, reducing the mean time to establish connectivity from 12.9 seconds to less than 366 ms; and CTP, a transport protocol that distinguishes congestion on the wired portion of the path from losses over the wireless link to reliably and efficiently deliver data to nodes in cars. We have deployed the system on a fleet of 10 taxis, each running several hours per day in the Boston area. Our experiments show that CTP improves throughput by a factor of 2x over TCP and that QuickWifi increases the number of connections by a factor of 4x over unoptimized approaches. Thus, Cabernet is perhaps the first practical system capable of delivering data to moving vehicles over existing short-range WiFi radios, with a mean transfer capacity of approximately 38 megabytes/hour per car, or a mean rate of 87 kbit/s.
</description>
<pubDate>Thu, 17 Jan 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40094</guid>
<dc:date>2008-01-17T00:00:00Z</dc:date>
</item>
<item>
<title>Exact Algorithms for the Canadian Traveller Problem on Paths and Trees</title>
<link>https://hdl.handle.net/1721.1/40093</link>
<description>Exact Algorithms for the Canadian Traveller Problem on Paths and Trees
Karger, David; Nikolova, Evdokia
The Canadian Traveller problem is a stochastic shortest paths problem in which one learns the cost of an edge only when arriving at one of its endpoints. The goal is to find an adaptive policy (adjusting as one learns more edge lengths) that minimizes the expected cost of travel. The problem is known to be #P hard. Since there has been no significant progress on approximation algorithms for several decades, we have chosen to seek out special cases for which exact solutions exist, in the hope of demonstrating techniques that could lead to further progress. Applying techniques from the theory of Markov Decision Processes, we give an exact solution for graphs of parallel (undirected) paths from source to destination with random two-valued edge costs. We also offer a partial generalization to traversing perfect binary trees.
</description>
<pubDate>Mon, 28 Jan 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40093</guid>
<dc:date>2008-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Simulation of Human Motion Data using Short-Horizon Model-Predictive Control</title>
<link>https://hdl.handle.net/1721.1/40091</link>
<description>Simulation of Human Motion Data using Short-Horizon Model-Predictive Control
Silva, Marco da; Abe, Yeuhi; Popovic, Jovan
Many data-driven animation techniques are capable of producing high quality motions of human characters. Few techniques, however, are capable of generating motions that are consistent with physically simulated environments. Physically simulated characters, in contrast, are automatically consistent with the environment, but their motions are often unnatural because they are difficult to control. We present a model-predictive controller that yields natural motions by guiding simulated humans toward real motion data. During simulation, the predictive component of the controller solves a quadratic program to compute the forces for a short window of time into the future. These forces are then applied by a low-gain proportional-derivative component, which makes minor adjustments until the next planning cycle. The controller is fast enough for interactive systems such as games and training simulations. It requires no precomputation and little manual tuning. The controller is resilient to mismatches between the character dynamics and the input motion, which allows it to track motion capture data even where the real dynamics are not known precisely. The same principled formulation can generate natural walks, runs, and jumps in a number of different physically simulated surroundings.
</description>
<pubDate>Tue, 15 Jan 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40091</guid>
<dc:date>2008-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>Theories in Practice: Easy-to-Write Specifications that Catch Bugs</title>
<link>https://hdl.handle.net/1721.1/40090</link>
<description>Theories in Practice: Easy-to-Write Specifications that Catch Bugs
Saff, David; Boshernitsan, Marat; Ernst, Michael D.
Automated testing during development helps ensure that software works according to the test suite. Traditional test suites verify a few well-picked scenarios or example inputs. However, such example-based testing does not uncover errors in legal inputs that the test writer overlooked. We propose theory-based testing as an adjunct to example-based testing. A theory generalizes a (possibly infinite) set of example-based tests. A theory is an assertion that should be true for any data, and it can be exercised by human-chosen data or by automatic data generation. A theory is expressed in an ordinary programming language, it is easy for developers to use (often even easier than example-based testing), and it serves as a lightweight form of specification. Six case studies demonstrate the utility of theories that generalize existing tests to prevent bugs, clarify intentions, and reveal design problems.
</description>
<pubDate>Mon, 14 Jan 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40090</guid>
<dc:date>2008-01-14T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse recovery using sparse matrices</title>
<link>https://hdl.handle.net/1721.1/40089</link>
<description>Sparse recovery using sparse matrices
Berinde, Radu; Indyk, Piotr
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x* such that Ax=Ax*, and ||x*||_1 is minimal. It is known that this approach ``works'' if A is a random *dense* matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and *very sparse*. We show that, both in theory and in practice, sparse matrices are essentially as ``good'' as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
</description>
<pubDate>Thu, 10 Jan 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/40089</guid>
<dc:date>2008-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>Relational Envelope-based Planning</title>
<link>https://hdl.handle.net/1721.1/39838</link>
<description>Relational Envelope-based Planning
Gardiol, Natalia Hernandez
This thesis proposes a synthesis of logic and probability for solving stochastic sequential decision-making problems. We address two main questions: How can we take advantage of logical structure to speed up planning in a principled way? And, how can probability inform the production of a more robust, yet still compact, policy? We can take as inspiration a mobile robot acting in the world: it is faced with a varied amount of sensory data and uncertainty in its action outcomes. Or, consider a logistics planning system: it must deliver a large number of objects to the right place at the right time. Many interesting sequential decision-making domains involve large state spaces, large stochastic action sets, and time pressure to act. In this work, we show how structured representations of the environment's dynamics can constrain and speed up the planning process. We start with a problem domain described in a probabilistic logical description language. Our technique is based on, first, identifying the most parsimonious representation that permits solution of the described problem. Next, we take advantage of the structured problem description to dynamically partition the action space into a set of equivalence classes with respect to this minimal representation. The partitioned action space results in fewer distinct actions. This technique can yield significant gains in planning efficiency. Next, we develop an anytime technique to elaborate on this initial plan. Our approach uses the envelope MDP framework, which creates a Markov decision process out of a subset of the possible state space. This strategy lets an agent begin acting quickly within a restricted part of the full state space, as informed by the original plan, and to judiciously expand its envelope as resources permit. Finally, we show how the representation space itself can be elaborated within the anytime framework. This approach balances the need to respond to time pressure and to produce the most robust policies possible. We present experimental results in some synthetic planning domains and in a simulated military logistics domain.
</description>
<pubDate>Mon, 31 Dec 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39838</guid>
<dc:date>2007-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Views on Vision</title>
<link>https://hdl.handle.net/1721.1/39837</link>
<description>Views on Vision
Freuder, Eugene C.
</description>
<pubDate>Mon, 01 Feb 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39837</guid>
<dc:date>1971-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Object Partition Problem</title>
<link>https://hdl.handle.net/1721.1/39836</link>
<description>The Object Partition Problem
Freuder, Eugene C.
</description>
<pubDate>Mon, 01 Feb 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39836</guid>
<dc:date>1971-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Feature Point Generation Programs</title>
<link>https://hdl.handle.net/1721.1/39835</link>
<description>Feature Point Generation Programs
Griffith, Arnold K.
The programs in this set extract, from a raster of intensity values over some scene, a set of points which are adjudged to lie along the boundaries of objects in the scene. Intensities may be obtained directly from the new vidissector, or from a previously created file of intensity values.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39835</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Line Verifier GVERIFY1</title>
<link>https://hdl.handle.net/1721.1/39834</link>
<description>The Line Verifier GVERIFY1
Griffith, Arnold K.
A line verifier is presented which, given the co-ordinates of the end points of the hypothesized line, returns a (possibly) more accurate version of the end points, together with an estimate of the probability that there is a line in the region between the two end points given. No estimate is given as to the actual extent of the line: the increased accuracy of the returned end points lies in the accuracy of the slope and intercept of the line through them.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39834</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning complex cell invariance from natural videos: A plausibility proof</title>
<link>https://hdl.handle.net/1721.1/39833</link>
<description>Learning complex cell invariance from natural videos: A plausibility proof
Masquelier, Timothee; Serre, Thomas; Thorpe, Simon; Poggio, Tomaso
One of the most striking features of the cortex is its ability to wire itself. Understanding how the visual cortex wires up through development and how visual experience refines connections into adulthood is a key question for neuroscience. While computational models of the visual cortex are becoming increasingly detailed, the question of how such architecture could self-organize through visual experience is often overlooked. Here we focus on the class of hierarchical feedforward models of the ventral stream of the visual cortex, which extend the classical simple-to-complex cells model by Hubel and Wiesel (1962) to extra-striate areas, and have been shown to account for a host of experimental data. Such models assume two functional classes of simple and complex cells with specific predictions about their respective wiring and resulting functionalities. In these networks, the issue of learning, especially for complex cells, is perhaps the least well understood. In fact, in most of these models, the connectivity between simple and complex cells is not learned but rather hard-wired. Several algorithms have been proposed for learning invariances at the complex cell level based on a trace rule to exploit the temporal continuity of sequences of natural images, but very few can learn from natural cluttered image sequences. Here we propose a new variant of the trace rule that only reinforces the synapses between the most active cells, and therefore can handle cluttered environments. The algorithm has so far been developed and tested at the level of V1-like simple and complex cells: we verified that Gabor-like simple cell selectivity could emerge from competitive Hebbian learning. In addition, we show how the modified trace rule allows the subsequent complex cells to learn to selectively pool over simple cells with the same preferred orientation but slightly different positions, thus increasing their tolerance to the precise position of the stimulus within their receptive fields.
</description>
<pubDate>Wed, 26 Dec 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39833</guid>
<dc:date>2007-12-26T00:00:00Z</dc:date>
</item>
<item>
<title>Report on the Probabilistic Language Scheme</title>
<link>https://hdl.handle.net/1721.1/39831</link>
<description>Report on the Probabilistic Language Scheme
Radul, Alexey
Reasoning with probabilistic models is a widespread and successful technique in areas ranging from computer vision, to natural language processing, to bioinformatics. Currently, these reasoning systems are either coded from scratch in general-purpose languages or use formalisms such as Bayesian networks that have limited expressive power. In both cases, the resulting systems are difficult to modify, maintain, compose, and interoperate with. This work presents Probabilistic Scheme, an embedding of probabilistic computation into Scheme. This gives programmers an expressive language for implementing modular probabilistic models that integrate naturally with the rest of Scheme.
</description>
<pubDate>Mon, 22 Oct 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39831</guid>
<dc:date>2007-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Team MIT Urban Challenge Technical Report</title>
<link>https://hdl.handle.net/1721.1/39822</link>
<description>Team MIT Urban Challenge Technical Report
Leonard, John; Barrett, David; How, Jonathan; Teller, Seth; Antone, Matt; Campbell, Stefan; Epstein, Alex; Fiore, Gaston; Fletcher, Luke; Frazzoli, Emilio; Huang, Albert; Jones, Troy; Koch, Olivier; Kuwata, Yoshiaki; Mahelona, Keoni; Moore, David; Moyer, Katy; Olson, Edwin; Peters, Steven; Sanders, Chris; Teo, Justin; Walter, Matthew
This technical report describes Team MIT’s approach to the DARPA Urban Challenge. We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprising an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. These innovations are being incorporated in two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure-pursuit control and our local frame perception strategy, obstacle avoidance using kinodynamic RRT path planning, U-turns, and precedence evaluation amongst other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios.
</description>
<pubDate>Fri, 14 Dec 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39822</guid>
<dc:date>2007-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>The L%LINES Package</title>
<link>https://hdl.handle.net/1721.1/39814</link>
<description>The L%LINES Package
Griffith, Arnold K.
The program (L%LINES X Y) takes feature point output from the FP%FPOINTS program (q.v.) for horizontal and vertical scans (X and Y respectively); and outputs a list consisting of two lists of line segments, represented in an obvious manner, obtained from the respective arguments. "Feature points" are points on the field of view which seem to lie along some edge in the scene. The line segments output by L%LINES are obtained by examining a set of feature points for straight chains of points.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39814</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Information Flow as Network Flow Capacity</title>
<link>https://hdl.handle.net/1721.1/39812</link>
<description>Quantitative Information Flow as Network Flow Capacity
McCamant, Stephen; Ernst, Michael D.
We present a new technique for determining how much information about a program's secret inputs is revealed by its public outputs. In contrast to previous techniques based on reachability from secret inputs (tainting), it achieves a more precise quantitative result by computing a maximum flow of information between the inputs and outputs. The technique uses static control-flow regions to soundly account for implicit flows via branches and pointer operations, but operates dynamically by observing one or more program executions and giving numeric flow bounds specific to them (e.g., "17 bits"). The maximum flow in a network also gives a minimum cut (a set of edges that separate the secret input from the output), which can be used to efficiently check that the same policy is satisfied on future executions. We performed case studies on 5 real C, C++, and Objective C programs, 3 of which had more than 250K lines of code. The tool checked multiple security policies, including one that was violated by a previously unknown bug.
</description>
<pubDate>Mon, 10 Dec 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39812</guid>
<dc:date>2007-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>Verifiably Secure Devices</title>
<link>https://hdl.handle.net/1721.1/39659</link>
<description>Verifiably Secure Devices
Lepinski, Matt; Micali, Silvio; Izmalkov, Sergei
We put forward the notion of a verifiably secure device, in essence a stronger notion of secure computation, and achieve it in the ballot-box model. Verifiably secure devices (1) provide a perfect solution to the problem of achieving correlated equilibrium, an important and extensively investigated problem at the intersection of game theory, cryptography, and efficient algorithms; and (2) enable the secure evaluation of multiple interdependent functions.
</description>
<pubDate>Wed, 05 Dec 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39659</guid>
<dc:date>2007-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping Stream Programs into the Compressed Domain</title>
<link>https://hdl.handle.net/1721.1/39651</link>
<description>Mapping Stream Programs into the Compressed Domain
Thies, William; Hall, Steven; Amarasinghe, Saman
Due to the high data rates involved in audio, video, and signal processing applications, it is imperative to compress the data to decrease the amount of storage used. Unfortunately, this implies that any program operating on the data needs to be wrapped by a decompression and re-compression stage. Re-compression can incur significant computational overhead, while decompression swamps the application with the original volume of data. In this paper, we present a program transformation that greatly accelerates the processing of compressible data. Given a program that operates on uncompressed data, we output an equivalent program that operates directly on the compressed format. Our transformation applies to stream programs, a restricted but useful class of applications with regular communication and computation patterns. Our formulation is based on LZ77, a lossless compression algorithm that is utilized by ZIP and fully encapsulates common formats such as Apple Animation, Microsoft RLE, and Targa. We implemented a simple subset of our techniques in the StreamIt compiler, which emits executable plugins for two popular video editing tools: MEncoder and Blender. For common operations such as color adjustment and video compositing, mapping into the compressed domain offers a speedup roughly proportional to the overall compression ratio. For our benchmark suite of 12 videos in Apple Animation format, speedups range from 1.1x to 471x, with a median of 15x.
</description>
<pubDate>Fri, 30 Nov 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39651</guid>
<dc:date>2007-11-30T00:00:00Z</dc:date>
</item>
<item>
<title>ReCrash: Making Crashes Reproducible</title>
<link>https://hdl.handle.net/1721.1/39639</link>
<description>ReCrash: Making Crashes Reproducible
Kim, Sunghun; Artzi, Shay; Ernst, Michael D.
It is difficult to fix a problem without being able to reproduce it. However, reproducing a problem is often difficult and time-consuming. This paper proposes a novel algorithm, ReCrash, that generates multiple unit tests that reproduce a given program crash. ReCrash dynamically tracks method calls during every execution of the target program. If the program crashes, ReCrash saves information about the relevant method calls and uses the saved information to create unit tests reproducing the crash. We present reCrashJ, an implementation of ReCrash for Java. reCrashJ reproduced real crashes from javac, SVNKit, Eclipse JDT, and BST. reCrashJ is efficient, incurring 13%-64% performance overhead. If this overhead is unacceptable, then reCrashJ has another mode that has negligible overhead until a crash occurs and 0%-1.7% overhead until a second crash, at which point the test cases are generated.
</description>
<pubDate>Tue, 20 Nov 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39639</guid>
<dc:date>2007-11-20T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Feature Selection In Actor-Critic Algorithms</title>
<link>https://hdl.handle.net/1721.1/39427</link>
<description>Towards Feature Selection In Actor-Critic Algorithms
Rohanimanesh, Khashayar; Roy, Nicholas; Tedrake, Russ
Choosing features for the critic in actor-critic algorithms with function approximation is known to be a challenge. Too few critic features can lead to degeneracy of the actor gradient, and too many features may lead to slower convergence of the learner. In this paper, we show that a well-studied class of actor policies satisfy the known requirements for convergence when the actor features are selected carefully. We demonstrate that two popular representations for value methods - the barycentric interpolators and the graph Laplacian proto-value functions - can be used to represent the actor in order to satisfy these conditions. A consequence of this work is a generalization of the proto-value function methods to the continuous action actor-critic domain. Finally, we analyze the performance of this approach using a simulation of a torque-limited inverted pendulum.
</description>
<pubDate>Thu, 01 Nov 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39427</guid>
<dc:date>2007-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transfering Nonlinear Representations using Gaussian Processes with a Shared Latent Space</title>
<link>https://hdl.handle.net/1721.1/39426</link>
<description>Transfering Nonlinear Representations using Gaussian Processes with a Shared Latent Space
Urtasun, Raquel; Quattoni, Ariadna; Darrell, Trevor
When a series of problems are related, representations derived from learning earlier tasks may be useful in solving later problems. In this paper we propose a novel approach to transfer learning with low-dimensional, non-linear latent spaces. We show how such representations can be jointly learned across multiple tasks in a discriminative probabilistic regression framework. When transferred to new tasks with relatively few training examples, learning can be faster and/or more accurate. Experiments on a digit recognition task show significantly improved performance when compared to baseline performance with the original feature representation or with a representation derived from a semi-supervised learning approach.
</description>
<pubDate>Tue, 06 Nov 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39426</guid>
<dc:date>2007-11-06T00:00:00Z</dc:date>
</item>
<item>
<title>Collusion-Resilient Revenue In Combinatorial Auctions</title>
<link>https://hdl.handle.net/1721.1/39420</link>
<description>Collusion-Resilient Revenue In Combinatorial Auctions
Valiant, Paul; Micali, Silvio
In auctions of a single good, the second-price mechanism achieves, in dominant strategies, a revenue benchmark that is naturally high and resilient to any possible collusion. We show how to achieve, to the maximum extent possible, the same properties in combinatorial auctions.
</description>
<pubDate>Fri, 02 Nov 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39420</guid>
<dc:date>2007-11-02T00:00:00Z</dc:date>
</item>
<item>
<title>Set Interfaces for Generalized Typestate and Data Structure Consistency Verification</title>
<link>https://hdl.handle.net/1721.1/39419</link>
<description>Set Interfaces for Generalized Typestate and Data Structure Consistency Verification
Lam, Patrick; Zee, Karen; Kuncak, Viktor; Rinard, Martin
Typestate systems allow the type of an object to change during its lifetime in the computation. Unlike standard type systems, they can enforce safety properties that depend on changing object states. We present a new, generalized formulation of typestate that models the typestate of an object through membership in abstract sets. This abstract set formulation enables developers to reason about cardinalities of sets, and in particular to state and verify the condition that certain sets are empty. We support hierarchical typestate classifications by specifying subset and disjointness properties over the typestate sets. We present our formulation of typestate in the context of the Hob program specification and verification framework. The Hob framework allows the combination of typestate analysis with powerful independently developed analyses such as shape analyses or theorem-proving techniques. We implemented our analysis and annotated several programs (75-2500 lines of code) with set specifications. Our implementation includes several optimizations that improve the scalability of the analysis and a novel loop invariant inference algorithm that eliminates the need to specify loop invariants. We present experimental data demonstrating the effectiveness of our techniques.
</description>
<pubDate>Wed, 31 Oct 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39419</guid>
<dc:date>2007-10-31T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Self-Healing Gradients</title>
<link>https://hdl.handle.net/1721.1/39418</link>
<description>Fast Self-Healing Gradients
Beal, Jacob; Bachrach, Jonathan; Vickery, Dan; Tobenkin, Mark
We present CRF-Gradient, a self-healing gradient algorithm that provably reconfigures in O(diameter) time. Self-healing gradients are a frequently used building block for distributed self-healing systems, but previous algorithms either have a healing rate limited by the shortest link in the network or must rebuild invalid regions from scratch. We have verified CRF-Gradient in simulation and on a network of Mica2 motes. Our approach can also be generalized and applied to create other self-healing calculations, such as cumulative probability fields.
</description>
<pubDate>Sat, 01 Mar 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/39418</guid>
<dc:date>2008-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pluggable type-checking for custom type qualifiers in Java</title>
<link>https://hdl.handle.net/1721.1/38878</link>
<description>Pluggable type-checking for custom type qualifiers in Java
Papi, Matthew M.; Ali, Mahmood; Correa Jr., Telmo Luis; Perkins, Jeff H.; Ernst, Michael D.
We have created a framework for adding custom type qualifiers to the Java language in a backward-compatible way. The type system designer defines the qualifiers and creates a compiler plug-in that enforces their semantics. Programmers can write the type qualifiers in their programs and be informed of errors or assured that the program is free of those errors. The system builds on existing Java tools and APIs. In order to evaluate our framework, we have written four type-checkers using the framework: for a non-null type system that can detect and prevent null pointer errors; for an interned type system that can detect and prevent equality-checking errors; for a reference immutability type system, Javari, that can detect and prevent mutation errors; and for a reference and object immutability type system, IGJ, that can detect and prevent even more mutation errors. We have conducted case studies using each checker to find real errors in existing software. These case studies demonstrate that the checkers and the framework are practical and useful.
</description>
<pubDate>Mon, 17 Sep 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38878</guid>
<dc:date>2007-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>MIXIT: The Network Meets the Wireless Channel</title>
<link>https://hdl.handle.net/1721.1/38871</link>
<description>MIXIT: The Network Meets the Wireless Channel
Katti, Sachin; Katabi, Dina
The traditional contract between the network and the lower layers states that the network does routing and the lower layers deliver correct packets. In a wireless network, however, different nodes may hear most bits in a transmission, yet none of them receives the whole packet uncorrupted. The current approach imposes fate sharing on the bits, dropping a whole packet because of a few incorrect bits. In contrast, this paper proposes MIXIT, a new architecture that performs opportunistic routing on groups of correctly received symbols. We show, using simulations driven by software radio measurements, that MIXIT provides a 4x throughput improvement over state-of-the-art opportunistic routing.
</description>
<pubDate>Tue, 04 Sep 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38871</guid>
<dc:date>2007-09-04T00:00:00Z</dc:date>
</item>
<item>
<title>Factors Affecting the Adoption of Faculty-Developed Academic Software: A Study of Five iCampus Projects</title>
<link>https://hdl.handle.net/1721.1/38487</link>
<description>Factors Affecting the Adoption of Faculty-Developed Academic Software: A Study of Five iCampus Projects
Ehrmann, Stephen C.; Gilbert, Steven W.; McMartin, Flora; Abelson, Harold; Long, Philip D.
Instruction in higher education must adapt more rapidly to changes in workforce needs, global issues, advances in disciplines, and resource constraints. The pace of such improvement depends on the speed with which new ideas and materials are adopted across institutions. In 1999 Microsoft pledged $25 million and staff support for iCampus, a seven-year MIT project to develop pioneering uses of educational technology. The TLT Group studied five iCampus projects in order to identify factors affecting institutionalization and widespread dissemination. Among the factors impeding adoption: lack of rewards and support for faculty to adopt innovations; faculty isolation; and a lack of attention to adoption issues among projects selected for funding. The study made recommendations for universities, foundations, government agencies and corporations: 1) continue making education more authentic, active, collaborative, and feedback-rich; 2) create demand to adopt ideas and materials from other sources by encouraging all faculty members to improve and document learning in their programs, year after year; 3) nurture coalitions for instructional improvement, across and within institutions; 4) create more effective higher education–corporate alliances; and 5) improve institutional services to support faculty in educational design, software development, assessment methods, formative evaluation, and/or in sharing ideas with others who teach comparable courses.
</description>
<pubDate>Sat, 20 Oct 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38487</guid>
<dc:date>2007-10-20T00:00:00Z</dc:date>
</item>
<item>
<title>World Wide Web Without Walls</title>
<link>https://hdl.handle.net/1721.1/38485</link>
<description>World Wide Web Without Walls
Brodsky, Micah Z. (Micah Zev); Krohn, Maxwell; Morris, Robert; Walfish, Michael; Yip, Alexander
Today's Web is built upon a particular symbiotic relationship between sites and users: the sites invest capital to create and market a set of features, and users gain access to the sites often in exchange for their data (e.g., photos, personal information, creative musings, etc.). This paper imagines a very different Web ecosystem, in which users retain control of their data and developers can justify their existence without hoarding user data.
</description>
<pubDate>Fri, 24 Aug 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38485</guid>
<dc:date>2007-08-24T00:00:00Z</dc:date>
</item>
<item>
<title>Constraint and Restoring Force</title>
<link>https://hdl.handle.net/1721.1/38484</link>
<description>Constraint and Restoring Force
Beal, Jacob; Bachrach, Jonathan; Tobenkin, Mark
Long-lived sensor network applications must be able to self-repair and adapt to changing demands. We introduce a new approach for doing so: Constraint and Restoring Force. CRF is a physics-inspired framework for computing scalar fields across a sensor network with occasional changes. We illustrate CRF's usefulness by applying it to gradients, a common building block for sensor network systems. The resulting algorithm, CRF-Gradient, determines locally when to self-repair and when to stop and save energy. CRF-Gradient is self-stabilizing, converges in O(diameter) time, and has been verified experimentally in simulation and on a network of Mica2 motes. Finally we show how CRF can be applied to other algorithms as well, such as the calculation of probability fields.
</description>
<pubDate>Fri, 24 Aug 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38484</guid>
<dc:date>2007-08-24T00:00:00Z</dc:date>
</item>
<item>
<title>Learning by Learning To Communicate</title>
<link>https://hdl.handle.net/1721.1/38483</link>
<description>Learning by Learning To Communicate
Beal, Jacob
Human intelligence is a product of cooperation among many different specialists. Much of this cooperation must be learned, but we do not yet have a mechanism that explains how this might happen for the "high-level" agile cooperation that permeates our daily lives. I propose that the various specialists learn to cooperate by learning to communicate, basing this proposal on the phenomenon of "communication bootstrapping," in which shared experiences form a basis for agreement on a system of signals. In this dissertation, I lay out a roadmap for investigating this hypothesis, identifying problems that must be overcome in order to understand the capabilities of communication bootstrapping and in order to test whether it is exploited by human intelligence. I then demonstrate progress along the course of investigation laid out in my roadmap:
* I establish a measure of "developmental cost" that allows me to eliminate many possible designs.
* I develop a method of engineering devices for use in models of intelligence, including characterizing their behavior under a wide variety of conditions and compensating for their misbehavior using "failure simplification."
* I develop mechanisms that reliably produce communication bootstrapping such that it can be used to connect specialists in an engineered system.
* I construct a demonstration system including a simulated world and a pair of observers that learn world dynamics via communication bootstrapping.
PhD thesis
</description>
<pubDate>Thu, 23 Aug 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38483</guid>
<dc:date>2007-08-23T00:00:00Z</dc:date>
</item>
<item>
<title>Factors Affecting the Adoption of Faculty-Developed Academic Software: A Study of Five iCampus Projects</title>
<link>https://hdl.handle.net/1721.1/38482</link>
<description>Factors Affecting the Adoption of Faculty-Developed Academic Software: A Study of Five iCampus Projects
Ehrmann, Stephen C.; Gilbert, Steven W.; McMartin, Flora
Initiated in 1999, iCampus is a research collaboration between Microsoft Research and MIT whose goal is to "create and demonstrate technologies with the potential for revolutionary change throughout the university curriculum." The program was made possible by a $25 million research grant from Microsoft to MIT, and involves extensive collaboration between MIT and Microsoft staff. This assessment study by the TLT Group addresses the question: "In light of the experience of iCampus, especially those projects selected by MIT and Microsoft for close study, what can be learned about priorities for educational technology initiatives in the future and about how the spread of such innovations can be more effectively supported?" The major conclusions are that the five projects studied improved important elements of an MIT education by making learning more authentic, active, collaborative, and feedback-rich. Nevertheless, wider adoption beyond MIT was extremely difficult to achieve, largely due to structural issues in universities that make it difficult for educational technology to spread beyond the initial innovators, even to other departments within the same institution. The report includes recommendations for universities, external sponsors, and MIT in particular, about steps to take to achieve more effective dissemination.
</description>
<pubDate>Mon, 20 Aug 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38482</guid>
<dc:date>2007-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Secure Services from Untrusted Developers</title>
<link>https://hdl.handle.net/1721.1/38453</link>
<description>Toward Secure Services from Untrusted Developers
Brodsky, Micah Z. (Micah Zev); Efstathopoulos, Petros; Kaashoek, Frans; Kohler, Eddie; Krohn, Maxwell; Mazieres, David; Morris, Robert; VanDeBogart, Steve; Yip, Alexander
We present a secure service prototype built from untrusted, contributed code. The service manages private data for a variety of different users, and user programs frequently require access to other users' private data. However, aside from covert timing channels, no part of the service can corrupt private data or leak it between users or outside the system without permission from the data's owners. Instead, owners may choose to reveal their data in a controlled manner. This application model is demonstrated by Muenster, a job search website that protects both the integrity and secrecy of each user's data. In spite of running untrusted code, Muenster and other services can prevent overt leaks because the untrusted modules are constrained by the operating system to follow pre-specified security policies, which are nevertheless flexible enough for programmers to do useful work. We build Muenster atop Asbestos, a recently described operating system based on a form of decentralized information flow control.
</description>
<pubDate>Mon, 06 Aug 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38453</guid>
<dc:date>2007-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>Perfect Implementation of Normal-Form Mechanisms</title>
<link>https://hdl.handle.net/1721.1/38208</link>
<description>Perfect Implementation of Normal-Form Mechanisms
Izmalkov, Sergei; Lepinski, Matt; Micali, Silvio
Privacy and trust affect our strategic thinking, yet they have not been precisely modeled in mechanism design. In settings of incomplete information, traditional implementations of a normal-form mechanism (which disregard the players' privacy or assume trust in a mediator) may not be realistic and may fail to reach the mechanism's objectives. We thus investigate implementations of a new type. We put forward the notion of a perfect implementation of a normal-form mechanism M: in essence, an extensive-form mechanism exactly preserving all strategic properties of M, without relying on a trusted party or violating the privacy of the players. We prove that any normal-form mechanism can be perfectly implemented via envelopes and an envelope-randomizing device (i.e., the same tools used for running fair lotteries or tallying secret votes).
</description>
<pubDate>Sat, 01 Jan 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38208</guid>
<dc:date>2005-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Agent Organization and Request Propagation in the Knowledge Plane</title>
<link>https://hdl.handle.net/1721.1/38207</link>
<description>Agent Organization and Request Propagation in the Knowledge Plane
Li, Ji
In designing and building a network like the Internet, we continue to face the problems of scale and distribution. In particular, network management has become an increasingly difficult task, and network applications often need to maintain efficient connectivity graphs for various purposes. The knowledge plane was proposed as a new construct to improve network management and applications. In this proposal, I describe an application-independent mechanism to support the construction of application-specific connectivity graphs. Specifically, I propose to build a network knowledge plane and multiple sub-planes for different areas of network services. The network knowledge plane provides valuable knowledge about the Internet to the sub-planes, and each sub-plane constructs its own connectivity graph using network knowledge and knowledge in its own specific area. I focus on two key design issues: (1) a region-based architecture for agent organization; (2) knowledge dissemination and request propagation. Network management and applications benefit from the underlying network knowledge plane and sub-planes. To demonstrate the effectiveness of this mechanism, I conduct case studies in network management and security.
</description>
<pubDate>Thu, 26 Jul 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38207</guid>
<dc:date>2007-07-26T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous Space-Time Semantics Allow Adaptive Program Execution</title>
<link>https://hdl.handle.net/1721.1/38206</link>
<description>Continuous Space-Time Semantics Allow Adaptive Program Execution
Bachrach, Jonathan; Beal, Jacob; Fujiwara, Takeshi
A spatial computer is a collection of devices filling space whose ability to interact is strongly dependent on their proximity. Previously, we showed that programming such a computer as a continuous space can allow self-scaling across computers with different device distributions and can increase robustness against device failure. We have extended these ideas to time, allowing self-scaling across computers with different communication and execution rates. We used a network of 24 Mica2 Motes to demonstrate that a program exploiting these ideas shows minimal difference in behavior as the time between program steps ranges from 100 ms to 300 ms and on a configuration with mixed rates.
</description>
<pubDate>Sun, 01 Jul 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/38206</guid>
<dc:date>2007-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Dirichlet Process-Based Models For Discovery of Cross-species Mammalian Gene Expression</title>
<link>https://hdl.handle.net/1721.1/37817</link>
<description>Hierarchical Dirichlet Process-Based Models For Discovery of Cross-species Mammalian Gene Expression
Gerber, Georg K.; Dowell, Robin D.; Jaakkola, Tommi S.; Gifford, David K.
An important research problem in computational biology is the identification of expression programs, sets of co-activated genes orchestrating physiological processes, and the characterization of the functional breadth of these programs. The use of mammalian expression data compendia for discovery of such programs presents several challenges, including: 1) cellular inhomogeneity within samples, 2) genetic and environmental variation across samples, and 3) uncertainty in the numbers of programs and sample populations. We developed GeneProgram, a new unsupervised computational framework that uses expression data to simultaneously organize genes into overlapping programs and tissues into groups to produce maps of inter-species expression programs, which are sorted by generality scores that exploit the automatically learned groupings. Our method addresses each of the above challenges by using a probabilistic model that: 1) allocates mRNA to different expression programs that may be shared across tissues, 2) is hierarchical, treating each tissue as a sample from a population of related tissues, and 3) uses Dirichlet Processes, a non-parametric Bayesian method that provides prior distributions over numbers of sets while penalizing model complexity. Using real gene expression data, we show that GeneProgram outperforms several popular expression analysis methods in recovering biologically interpretable gene sets. From a large compendium of mouse and human expression data, GeneProgram discovers 19 tissue groups and 100 expression programs active in mammalian tissues. Our method automatically constructs a comprehensive, body-wide map of expression programs and characterizes their functional generality. This map can be used for guiding future biological experiments, such as discovery of genes for new drug targets that exhibit minimal "cross-talk" with unintended organs, or genes that maintain general physiological responses that go awry in disease states. Further, our method is general, and can be applied readily to novel compendia of biological data.
</description>
<pubDate>Fri, 06 Jul 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37817</guid>
<dc:date>2007-07-06T00:00:00Z</dc:date>
</item>
<item>
<title>Using The Barton Libraries Dataset As An RDF benchmark</title>
<link>https://hdl.handle.net/1721.1/37816</link>
<description>Using The Barton Libraries Dataset As An RDF benchmark
Abadi, Daniel J.; Marcus, Adam; Madden, Samuel R.; Hollenbach, Kate
This report describes the Barton Libraries RDF dataset and Longwell query benchmark that we use for our recent VLDB paper on "Scalable Semantic Web Data Management Using Vertical Partitioning."
</description>
<pubDate>Fri, 06 Jul 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37816</guid>
<dc:date>2007-07-06T00:00:00Z</dc:date>
</item>
<item>
<title>Table 2 (Supplemental): Complete data for all 100 expression programs discovered by GeneProgram from the Novartis Gene Atlas v2</title>
<link>https://hdl.handle.net/1721.1/37603</link>
<description>Table 2 (Supplemental): Complete data for all 100 expression programs discovered by GeneProgram from the Novartis Gene Atlas v2
Gerber, Georg K.; Dowell, Robin D.; Jaakkola, Tommi S.; Gifford, David K.
Table 2 (Supplemental): Complete data for all 100 recurrent expression programs (EPs) discovered by GeneProgram. Each EP has two identifying rows, a list of meta-genes, and a list of significantly enriched GO categories. The first identifying row has three columns: (1) the EP identifier (an arbitrarily assigned number), (2) the number of meta-genes in the EP, and (3) the percentage of samples the EP occurs in. The second identifying row lists all tissues that use the EP (h_ = human tissue, m_ = mouse tissue). Numbers in parentheses next to each tissue indicate the degree to which the tissue uses the EP. After the identifying rows, the set of meta-genes in the EP is listed. Each meta-gene has eight columns: (1) the human RefSeq identifier, (2) the mouse RefSeq identifier, (3) the empirical mean expression level, (4) the empirical mean occurrence percentage, (5) the human gene name, (6) the human Swiss-Prot description, (7) the mouse gene name, and (8) the mouse Swiss-Prot description. Following the meta-genes are lists of significant GO categories (the first list uses human annotations, and the second uses mouse annotations). The columns for each line in this list are: (1) GO term, (2) enrichment p-value, (3) number of genes in the EP in the category / total genes in the EP with some GO category, (4) category description, and (5) total number of genes in the category that are also in the dataset analyzed.
</description>
<pubDate>Mon, 25 Jun 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37603</guid>
<dc:date>2007-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>Table 1 (Supplemental): Summary of expression programs discovered by GeneProgram from Novartis Tissue Atlas v2 data</title>
<link>https://hdl.handle.net/1721.1/37602</link>
<description>Table 1 (Supplemental): Summary of expression programs discovered by GeneProgram from Novartis Tissue Atlas v2 data
Gerber, Georg K.; Dowell, Robin D.; Jaakkola, Tommi S.; Gifford, David K.
Table 1 (Supplemental): Summary of  recurrent expression programs (EPs) discovered by GeneProgram.  The columns are: (1) the EP identifier (an arbitrarily assigned number), (2) the number of genes in the EP, (3) the number of tissues in the EP, (4) the species using the EP (i.e., one or more tissues from the species uses the EP, H = human, M = mouse), (5) the generality score (GS), (6) the top three tissues using the EP (numbers in parentheses = usage percentages), (7)-(9) the GO category name, GO term, and associated p-value for the most abundant significantly enriched category (i.e., the significant category with the most genes overlapping with the EP's genes).
</description>
<pubDate>Mon, 25 Jun 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37602</guid>
<dc:date>2007-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>Stateful Anycast for DDoS Mitigation</title>
<link>https://hdl.handle.net/1721.1/37601</link>
<description>Stateful Anycast for DDoS Mitigation
Hansen, Richard E.
Distributed denial-of-service (DDoS) attacks can easily cripple victim hosts or networks, yet effective defenses remain elusive. Normal anycast can be used to force the diffusion of attack traffic over a group of several hosts to increase the difficulty of saturating resources at or near any one of the hosts. However, because a packet sent to the anycast group may be delivered to any member, anycast does not support protocols that require a group member to maintain state (such as TCP). This makes anycast impractical for most applications of interest. This document describes the design of Stateful Anycast, a conceptual anycast-like network service based on IP anycast. Stateful Anycast is designed to support stateful sessions without losing anycast's ability to defend against DDoS attacks. Stateful Anycast employs a set of anycasted proxies to direct packets to the proper stateholder. These proxies provide DDoS protection by dropping a session's packets upon group member request. Stateful Anycast is incrementally deployable and can scale to support many groups.
MEng thesis
</description>
<pubDate>Thu, 21 Jun 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37601</guid>
<dc:date>2007-06-21T00:00:00Z</dc:date>
</item>
<item>
<title>Information Accountability</title>
<link>https://hdl.handle.net/1721.1/37600</link>
<description>Information Accountability
Weitzner, Daniel J.; Abelson, Harold; Berners-Lee, Tim; Feigenbaum, Joan; Hendler, James; Sussman, Gerald Jay
Ease of information flow is both the boon and the bane of large-scale, decentralized systems like the World Wide Web.  For all the benefits and opportunities brought by the information revolution, with that same revolution have come the challenges of inappropriate use. Such excesses and abuses in the use of information are most commonly viewed through the lens of information security. This paper argues that debates over online privacy, copyright, and information policy questions have been overly dominated by the access restriction perspective. Our alternative is to design systems that are oriented toward information accountability and appropriate use, rather than information security and access restriction.  Our goal is to extend the Web architecture to support transparency and accountability.
</description>
<pubDate>Wed, 13 Jun 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37600</guid>
<dc:date>2007-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>The Psychophysiology of Risk Processing and Decision Making at a Regional Stock Exchange</title>
<link>https://hdl.handle.net/1721.1/37599</link>
<description>The Psychophysiology of Risk Processing and Decision Making at a Regional Stock Exchange
Perry, John C.
A longstanding controversy in philosophy is whether decision-making is governed by reason or emotion. I study the role of physiological responses in the decision-making process within the realm of financial markets, where both the environment and the decisions (trades) are measurable. In an experiment performed on a regional stock exchange, my collaborators and I record six different types of physiological signals, namely skin conductance/galvanic skin response (SCR/GSR), blood volume pulse (BVP), electrocardiogram (ECG), electroencephalogram (EEG), electromyogram (EMG), and temperature (Temp), of monetarily motivated professionals making high-pressure decisions. From these signals I estimate underlying physiological features, such as heart rate, changes in body temperature, and amplitude of SCR, which are proxies for affect. Simultaneously, we record real-time market information which the specialists process and which serves as the basis for their decisions, as well as recording their decisions and outcomes. In a sample of eight market-makers, I find statistically significant differences in mean skin conductance response and cardiovascular variables during transient market events relative to no-market-event control intervals. In addition, I find a strong relationship between trading decisions and physiological responses. Using regression, I demonstrate that heart rate variability can statistically significantly improve predictions of trading decisions, although not by much.
PhD thesis
</description>
<pubDate>Tue, 12 Jun 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37599</guid>
<dc:date>2007-06-12T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of Posynomial MOSFET Models Using Genetic Algorithms and Visualization</title>
<link>https://hdl.handle.net/1721.1/37597</link>
<description>An Analysis of Posynomial MOSFET Models Using Genetic Algorithms and Visualization
Salameh, Lynne Rafik
Analog designers are interested in optimization tools which automate the process of circuit sizing. Geometric programming, which uses posynomial models of MOSFET parameters, represents one such tool. Genetic algorithms have been used to evolve posynomial models for geometric programs, with a reasonable mean error when modeling MOSFET parameters. By visualizing MOSFET data using two dimensional plots, this thesis investigates the behavior of various MOSFET small and large signal parameters and consequently proposes a lower bound on the maximum error, which a posynomial cannot improve upon. It then investigates various error metrics which can be used to balance the mean and maximum errors generated by posynomial MOSFET models. Finally, the thesis uses empirical data to verify the existence of the lower bound, and compares the maximum error from various parameters modeled by the genetic algorithm and by monomial fitting. It concludes that posynomial MOSFET models suffer from inherent inaccuracies. Additionally, although genetic algorithms improve on the maximum model error, the improvement, in general, does not vastly surpass results obtained through monomial fitting, which is a less computationally intensive method. Genetic algorithms are hence best used when modeling partially convex MOSFET parameters, such as r0.
MEng thesis
</description>
<pubDate>Tue, 05 Jun 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37597</guid>
<dc:date>2007-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>CAPRI: A Common Architecture for Distributed Probabilistic Internet Fault Diagnosis</title>
<link>https://hdl.handle.net/1721.1/37595</link>
<description>CAPRI: A Common Architecture for Distributed Probabilistic Internet Fault Diagnosis
Lee, George J.
This thesis presents a new approach to root cause localization and fault diagnosis in the Internet based on a Common Architecture for Probabilistic Reasoning in the Internet (CAPRI) in which distributed, heterogeneous diagnostic agents efficiently conduct diagnostic tests and communicate observations, beliefs, and knowledge to probabilistically infer the cause of network failures.  Unlike previous systems that can only diagnose a limited set of network component failures using a limited set of diagnostic tests, CAPRI provides a common, extensible architecture for distributed diagnosis that allows experts to improve the system by adding new diagnostic tests and new dependency knowledge. To support distributed diagnosis using new tests and knowledge, CAPRI must overcome several challenges including the extensible representation and communication of diagnostic information, the description of diagnostic agent capabilities, and efficient distributed inference.  Furthermore, the architecture must scale to support diagnosis of a large number of failures using many diagnostic agents.  To address these challenges, this thesis presents a probabilistic approach to diagnosis based on an extensible, distributed component ontology to support the definition of new classes of components and diagnostic tests; a service description language for describing new diagnostic capabilities in terms of their inputs and outputs; and a message processing procedure for dynamically incorporating new information from other agents, selecting diagnostic actions, and inferring a diagnosis using Bayesian inference and belief propagation. To demonstrate the ability of CAPRI to support distributed diagnosis of real-world failures, I implemented and deployed a prototype network of agents on Planetlab for diagnosing HTTP connection failures.  
Approximately 10,000 user agents and 40 distributed regional and specialist agents on Planetlab collect information from over 10,000 users and diagnose over 140,000 failures using a wide range of active and passive tests, including DNS lookup tests, connectivity probes, Rockettrace measurements, and user connection histories.  I show how to improve accuracy and cost by learning new dependency knowledge and introducing new diagnostic agents.  I also show that agents can manage the cost of diagnosing many similar failures by aggregating related requests and caching observations and beliefs.
PhD thesis
</description>
<pubDate>Mon, 04 Jun 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37595</guid>
<dc:date>2007-06-04T00:00:00Z</dc:date>
</item>
<item>
<title>Amorphous Computing</title>
<link>https://hdl.handle.net/1721.1/37591</link>
<description>Amorphous Computing
Abelson, Harold; Beal, Jacob; Sussman, Gerald Jay
The goal of amorphous computing is to identify organizational principles and create programming technologies for obtaining intentional, pre-specified behavior from the cooperation of myriad unreliable parts that are arranged in unknown, irregular, and time-varying ways.  The heightened relevance of amorphous computing today stems from the emergence of new technologies that could serve as substrates for information processing systems of immense power at unprecedentedly low cost, if only we could master the challenge of programming them.  This document is a review of amorphous computing.
</description>
<pubDate>Mon, 01 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37591</guid>
<dc:date>2007-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local Geometry of Multiattribute Tradeoff Preferences</title>
<link>https://hdl.handle.net/1721.1/37590</link>
<description>Local Geometry of Multiattribute Tradeoff Preferences
McGeachie, Michael
Existing preference reasoning systems have been successful in simple domains. Broader success requires more natural and more expressive preference representations.  This thesis develops a representation of logical preferences that combines numerical tradeoff ratios between partial outcome descriptions with qualitative preference information. We argue our system is unique among preference reasoning systems; previous work has focused on qualitative or quantitative preferences, tradeoffs, exceptions and generalizations, or utility independence, but none have combined all of these expressions under a unified methodology. We present new techniques for representing and giving meaning to quantitative tradeoff statements between different outcomes.  The tradeoffs we consider can be multi-attribute tradeoffs relating more than one attribute at a time; they can refer to discrete or continuous domains, be conditional or unconditional, and quantified or qualitative.  We present related methods of representing judgments of attribute importance.  We then build upon a methodology for representing arbitrary qualitative ceteris paribus preferences, or preferences ``other things being equal,'' as presented in MD04. Tradeoff preferences in our representation are interpreted as constraints on the partial derivatives of the utility function. For example, a decision maker could state that ``Color is five times as important as price, availability, and time,'' a sentiment one might express in the context of repainting a home, and this is interpreted as indicating that utility increases in the positive color direction five times faster than utility increases in the positive price direction.  We show that these representations generalize both the economic notion of marginal rates of substitution and previous representations of preferences in AI.
PhD thesis
</description>
<pubDate>Thu, 01 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37590</guid>
<dc:date>2007-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>TIARA:  Trust Management, Intrusion-tolerance, Accountability, and Reconstitution Architecture</title>
<link>https://hdl.handle.net/1721.1/37589</link>
<description>TIARA:  Trust Management, Intrusion-tolerance, Accountability, and Reconstitution Architecture
Shrobe, Howard; Knight, Thomas; DeHon, Andre
The last 20 years have led to unprecedented improvements in chip density and system performance, fueled mainly by Moore's Law.  During the same period, system and application software have bloated, leading to unmanageable complexity, vulnerability to attack, rigidity, and lack of robustness and accountability. These problems arise from the fact that all key elements of the computational environment, from hardware through system software and middleware to application code, regard the world as consisting of unconstrained ``raw seething bits''.  No element of the entire stack is responsible for enforcing over-arching conventions of memory structuring or access control. Outsiders may easily penetrate the system by exploiting vulnerabilities (e.g. buffer overflows) arising from this lack of basic constraints. Attacks are not easily contained, whether they originate from the clever outsider who penetrates the defenses or from the insider who exploits existing privileges.  Finally, because there are no facilities for tracing the provenance of data, even when an attack is detected, it is difficult if not impossible to tell which data are traceable to the attack and what data may still be trusted. We have abundant computational resources allowing us to fix these critical problems using a combination of hardware, system software, and programming language technology. In this report, we describe the TIARA project, which is using these resources to design a new computer system that is less vulnerable, more tolerant of intrusions, capable of recovery from attacks, and accountable for its actions.  TIARA provides these capabilities without significant impact on overall system performance.  It achieves these goals through the judicious use of a modest amount of extra, but reasonably general-purpose, hardware that is dedicated to tracking the provenance of data at a very fine-grained level, to enforcing access control policies, and to constructing a coherent object-oriented model of memory.  
This hardware runs in parallel with the main data-paths of the system and operates on a set of extra bits tagging each word with data-type, bounds, access control, and provenance information. Operations that violate the intended invariants are trapped, while normal results are tagged with information derived from the tags of the input operands. This hardware level provides fine-grained support for a series of software layers that enable a variety of comprehensive access control policies, self-adaptive computing, and fine-grained recovery processing.  The first of these software layers establishes a consistent object-oriented level of computing, while higher layers establish wrappers that may not be bypassed, access controls, and data provenance tracking.  At the highest level we create the ``plan level'' of computing, in which code is executed in parallel with an abstract model (or executable specification) of the system that checks whether the code behaves as intended.
</description>
<pubDate>Wed, 30 May 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37589</guid>
<dc:date>2007-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond the Bits: Cooperative Packet Recovery Using Physical Layer Information</title>
<link>https://hdl.handle.net/1721.1/37587</link>
<description>Beyond the Bits: Cooperative Packet Recovery Using Physical Layer Information
Woo, Grace Rusi; Kheradpour, Pouya; Katabi, Dina
Wireless networks can suffer from high packet loss rates.  This paper shows that the loss rate can be significantly reduced by exposing information readily available at the physical layer. We make the physical layer convey an estimate of its confidence that a particular bit is ``0'' or ``1'' to the higher layers.  When used with cooperative design, this information dramatically improves the throughput of the wireless network. Access points that hear the same transmission combine their information to correct bits in a packet with minimal overhead. Similarly, a receiver may combine multiple erroneous transmissions to recover a correct packet.  We analytically prove that our approach minimizes the errors in packet recovery.  We also experimentally demonstrate its benefits using a testbed of GNU software radios. The results show that our approach can reduce loss rate by up to 10x in comparison with the current approach, and significantly outperforms prior cooperation proposals.
PhD thesis
</description>
<pubDate>Tue, 29 May 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37587</guid>
<dc:date>2007-05-29T00:00:00Z</dc:date>
</item>
<item>
<title>The Creation of OpenCourseWare at MIT</title>
<link>https://hdl.handle.net/1721.1/37585</link>
<description>The Creation of OpenCourseWare at MIT
Abelson, Harold
This paper traces the genesis of the MIT OpenCourseWare project from its initial strategic precursors in 1999 and 2000, through its launch in 2001, and its subsequent evolution.  The story told here illuminates the interplay among institutional leadership, strategic planning, and university culture in launching major educational technology enterprises.  It also shows how initiatives can evolve in unexpected ways, and can even surpass their initial goals.  The paper concludes with an overview of challenges facing OpenCourseWare as it moves from the end of its production ramp-up towards sustainability.
</description>
<pubDate>Sat, 19 May 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37585</guid>
<dc:date>2007-05-19T00:00:00Z</dc:date>
</item>
<item>
<title>Developmental Cost for Models of Intelligence</title>
<link>https://hdl.handle.net/1721.1/37336</link>
<description>Developmental Cost for Models of Intelligence
Beal, Jacob
We can evaluate models of natural intelligence, as well as their individual components, by using a model of hardware and development costs, ignoring almost all the details of biology.  The basic argument is that neither the gross anatomy of the brain nor the behavior of individual cells nor the behavior of the whole poses sufficient constraint on the algorithms that might run within the brain, but that the process of engineering an intelligence under this cost model poses similar challenges to those faced by a human growing from a single cell to an adult.  This will allow us to explore architectural ideas freely, yet retain confidence that when a system works, the principles allowing it to work are likely to be similar to those that allow human intelligence to work.
</description>
<pubDate>Tue, 15 May 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37336</guid>
<dc:date>2007-05-15T00:00:00Z</dc:date>
</item>
<item>
<title>Notes on Regularized Least Squares</title>
<link>https://hdl.handle.net/1721.1/37318</link>
<description>Notes on Regularized Least Squares
Rifkin, Ryan M.; Lippert, Ross A.
This is a collection of information about regularized least squares (RLS). The facts here are not ``new results'', but we have not seen them usefully collected together before. A key goal of this work is to demonstrate that with RLS, we get certain things ``for free'': if we can solve a single supervised RLS problem, we can search for a good regularization parameter lambda at essentially no additional cost. The discussion in this paper applies to ``dense'' regularized least squares, where we work with matrix factorizations of the data or kernel matrix. It is also possible to work with iterative methods such as conjugate gradient, and this is frequently the method of choice for large data sets in high dimensions with very few nonzero dimensions per point, such as text classification tasks. The results discussed here do not apply to iterative methods, which have different design tradeoffs. We present the results in greater detail than strictly necessary, erring on the side of showing our work. We hope that this will be useful to people trying to learn more about linear algebra manipulations in the machine learning context.
</description>
<pubDate>Tue, 01 May 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37318</guid>
<dc:date>2007-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tiny images</title>
<link>https://hdl.handle.net/1721.1/37291</link>
<description>Tiny images
Torralba, Antonio; Fergus, Rob; Freeman, William T.
The human visual system is remarkably tolerant to degradations in image resolution: in a scene recognition task, human performance is similar whether $32 \times 32$ color images or multi-megapixel images are used. With small images, even object recognition and segmentation are performed robustly by the visual system, despite the object being unrecognizable in isolation. Motivated by these observations, we explore the space of 32x32 images using a database of 10^8 32x32 color images gathered from the Internet using image search engines. Each image is loosely labeled with one of the 70,399 non-abstract nouns in English, as listed in the WordNet lexical database. Hence the image database represents a dense sampling of all object categories and scenes. With this dataset, we use nearest neighbor methods to perform object recognition across the 10^8 images.
</description>
<pubDate>Mon, 23 Apr 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37291</guid>
<dc:date>2007-04-23T00:00:00Z</dc:date>
</item>
<item>
<title>Principles for Engineered Emergence (slides)</title>
<link>https://hdl.handle.net/1721.1/37152</link>
<description>Principles for Engineered Emergence (slides)
Beal, Jacob
It is difficult to establish engineering control over the behavior of aggregates of unreliable devices with complicated interaction patterns.  I take a linguistic view of this problem, searching for mechanisms that simplify the composition and abstraction of complicated behaviors.  From my work on various problems of aggregate control in cognitive architectures and spatial computing, I have noticed common themes in the mechanisms that solve them.  From these, I extract four principles which seem to help in engineering robust aggregate behavior---self-scaling, sparseness, gradual degradation, and failure simplification---and give examples of how they can be exploited.
</description>
<pubDate>Thu, 12 Apr 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37152</guid>
<dc:date>2007-04-12T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Adaptive Systems for Information Survivability: PMOP and AWDRAT</title>
<link>https://hdl.handle.net/1721.1/37151</link>
<description>Self-Adaptive Systems for Information Survivability: PMOP and AWDRAT
Shrobe, Howard; Laddaga, Robert; Balzer, Robert; Goldman, Neil; Wile, Dave; Tallis, Marcelo; Hollebeek, Tim; Egyed, Alexander
Information systems form the backbones of the critical infrastructures of modern societies. Unfortunately, these systems are highly vulnerable to attacks that can result in enormous damage. Furthermore, traditional approaches to information security have not provided all the protections necessary to defeat and recover from a concerted attack; in particular, they are largely irrelevant to the problem of defending against attacks launched by insiders. This paper describes two related systems, PMOP and AWDRAT, that were developed during the DARPA Self-Regenerative Systems program. PMOP defends against insider attacks, while AWDRAT is intended to detect compromises to software systems. Both rely on self-monitoring, diagnosis, and self-adaptation. We describe both systems and show the results of experiments with each.
</description>
<pubDate>Tue, 10 Apr 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37151</guid>
<dc:date>2007-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>A Few Days of A Robot's Life in the Human's World: Toward Incremental Individual Recognition</title>
<link>https://hdl.handle.net/1721.1/37144</link>
<description>A Few Days of A Robot's Life in the Human's World: Toward Incremental Individual Recognition
Aryananda, Lijin
This thesis presents an integrated framework and implementation for Mertz, an expressive robotic creature for exploring the task of face recognition through natural interaction in an incremental and unsupervised fashion.  The goal of this thesis is to advance toward a framework which would allow robots to incrementally ``get to know'' a set of familiar individuals in a natural and extendable way.  This thesis is motivated by the increasingly popular goal of integrating robots in the home.  In order to be effective in human-centric tasks, the robots must be able not only to recognize each family member, but also to learn about the roles of various people in the household. In this thesis, we focus on two particular limitations of the current technology.  First, most face recognition research concentrates on the supervised classification problem.  Currently, one of the biggest problems in face recognition is how to generalize the system to be able to recognize new test data that vary from the training data.  Thus, until this problem is solved completely, the existing supervised approaches may require multiple manual introduction and labelling sessions to include training data with enough variations. Second, there is typically a large gap between research prototypes and commercial products, largely due to lack of robustness and scalability in different environmental settings. In this thesis, we propose an unsupervised approach which would allow for a more adaptive system which can incrementally update the training set with more recent data or new individuals over time. Moreover, it gives the robots a more natural social recognition mechanism: learning not only to recognize each person's appearance, but also to remember some relevant contextual information that the robot observed during previous interaction sessions. 
Therefore, this thesis focuses on integrating an unsupervised and incremental face recognition system within a physical robot which interfaces directly with humans through natural social interaction.  The robot autonomously detects, tracks, and segments face images during these interactions and automatically generates a training set for its face recognition system.  Moreover, in order to motivate robust solutions and address scalability issues, we chose to put the robot, Mertz, in unstructured public environments to interact with naive passersby, instead of with only the researchers within the laboratory environment. While an unsupervised and incremental face recognition system is a crucial element toward our target goal, it is only a part of the story.  A face recognition system typically receives either pre-recorded face images or streaming video from a static camera. As illustrated by an ACLU review of a commercial face recognition installation, a security application which interfaces with the latter is already very challenging.  In this case, our target goal is a robot that can recognize people in a home setting. The interface between robots and humans is even more dynamic: both the robots and the humans move around. We present the robot implementation and its unsupervised incremental face recognition framework.  We describe an algorithm for clustering local features extracted from a large set of automatically generated face data.  We demonstrate the robot's capabilities and limitations in a series of experiments at a public lobby. In a final experiment, the robot interacted with a few hundred individuals over an eight-day period and generated a training set of over a hundred thousand face images. We evaluate the clustering algorithm's performance across a range of parameters on this automatically generated training data and also on the Honda-UCSD video face database. Lastly, we present some recognition results using the self-labelled clusters.
PhD thesis
</description>
<pubDate>Tue, 03 Apr 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/37144</guid>
<dc:date>2007-04-03T00:00:00Z</dc:date>
</item>
<item>
<title>Discriminative Gaussian Process Latent Variable Model for Classification</title>
<link>https://hdl.handle.net/1721.1/36901</link>
<description>Discriminative Gaussian Process Latent Variable Model for Classification
Urtasun, Raquel; Darrell, Trevor
Supervised learning is difficult with high dimensional input spaces and very small training sets, but accurate classification may be possible if the data lie on a low-dimensional manifold.  Gaussian Process Latent Variable Models can discover low dimensional manifolds given only a small number of examples, but learn a latent space without regard for class labels.  Existing methods for discriminative manifold learning (e.g., LDA, GDA) do constrain the class distribution in the latent space, but are generally deterministic and may not generalize well with limited training data.  We introduce a method for Gaussian Process Classification using latent variable models trained with discriminative priors over the latent space, which can learn a discriminative latent space from a small training set.
</description>
<pubDate>Wed, 28 Mar 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36901</guid>
<dc:date>2007-03-28T00:00:00Z</dc:date>
</item>
<item>
<title>Combined Static and Dynamic Mutability Analysis</title>
<link>https://hdl.handle.net/1721.1/36880</link>
<description>Combined Static and Dynamic Mutability Analysis
Artzi, Shay; Kiezun, Adam; Glasser, David; Ernst, Michael D.
Knowing which method parameters may be mutated during a method's execution is useful for many software engineering tasks. We present an approach to discovering parameter immutability, in which several lightweight, scalable analyses are combined in stages, with each stage refining the overall result. The resulting analysis is scalable and combines the strengths of its component analyses. As one of the component analyses, we present a novel, dynamic mutability analysis and show how its results can be improved by random input generation. Experimental results on programs of up to 185 kLOC show that, compared to previous approaches, our approach increases both scalability and overall accuracy.
</description>
<pubDate>Fri, 23 Mar 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36880</guid>
<dc:date>2007-03-23T00:00:00Z</dc:date>
</item>
<item>
<title>Phonetic Classification Using Hierarchical, Feed-forward, Spectro-temporal Patch-based Architectures</title>
<link>https://hdl.handle.net/1721.1/36865</link>
<description>Phonetic Classification Using Hierarchical, Feed-forward, Spectro-temporal Patch-based Architectures
Rifkin, Ryan; Bouvrie, Jake; Schutte, Ken; Chikkerur, Sharat; Kouh, Minjoon; Ezzat, Tony; Poggio, Tomaso
A preliminary set of experiments is described in which a biologically-inspired computer vision system (Serre, Wolf et al. 2005; Serre 2006; Serre, Oliva et al. 2006; Serre, Wolf et al. 2006) designed for visual object recognition was applied to the task of phonetic classification. During learning, the system processed 2-D wideband magnitude spectrograms directly as images, producing a set of 2-D spectro-temporal patch dictionaries at different spectro-temporal positions, orientations, scales, and of varying complexity. During testing, features were computed by comparing the stored patches with patches from novel spectrograms. Classification was performed using a regularized least squares classifier (Rifkin, Yeo et al. 2003; Rifkin, Schutte et al. 2007) trained on the features computed by the system. On a 20-class TIMIT vowel classification task, the model features achieved a best result of 58.74% error, compared to 48.57% error using state-of-the-art MFCC-based features trained using the same classifier. This suggests that hierarchical, feed-forward, spectro-temporal patch-based architectures may be useful for phonetic analysis.
</description>
<pubDate>Wed, 21 Mar 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36865</guid>
<dc:date>2007-03-21T00:00:00Z</dc:date>
</item>
<item>
<title>Object and Reference Immutability using Java Generics</title>
<link>https://hdl.handle.net/1721.1/36850</link>
<description>Object and Reference Immutability using Java Generics
Zibin, Yoav; Potanin, Alex; Artzi, Shay; Kiezun, Adam; Ernst, Michael D.
A compiler-checked immutability guarantee provides useful documentation, facilitates reasoning, and enables optimizations. This paper presents Immutability Generic Java (IGJ), a novel language extension that expresses immutability without changing Java's syntax by building upon Java's generics and annotation mechanisms. In IGJ, each class has one additional generic parameter that is Immutable, Mutable, or ReadOnly. IGJ guarantees both reference immutability (only mutable references can mutate an object) and object immutability (an immutable reference points to an immutable object). IGJ is the first proposal for enforcing object immutability, and its reference immutability is more expressive than previous work. IGJ also permits covariant changes of generic arguments in a type-safe manner, e.g., a readonly list of integers is a subtype of a readonly list of numbers. IGJ extends Java's type system with a few simple rules. We formalize this type system and prove it sound. Our IGJ compiler works by type-erasure and generates byte-code that can be executed on any JVM without runtime penalty.
</description>
<pubDate>Fri, 16 Mar 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36850</guid>
<dc:date>2007-03-16T00:00:00Z</dc:date>
</item>
<item>
<title>Building Spatial Computers</title>
<link>https://hdl.handle.net/1721.1/36840</link>
<description>Building Spatial Computers
Bachrach, Jonathan; Beal, Jacob
Programmability is a major challenge in spatial computing, an aggregate control problem found in domains such as sensor networks, swarm robotics, and modular robotics.  We address this challenge with a model of a spatial computer, the Proto Abstract Machine (PAM), and a distributed operating system, ProtoKernel, which implements PAM approximately.  ProtoKernel has been demonstrated on platforms in three spatial computing domains: sensor networks, swarm robotics, and modular robotics.
</description>
<pubDate>Wed, 14 Mar 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36840</guid>
<dc:date>2007-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory of Object Recognition: Computations and Circuits in the Feedforward Path of the Ventral Stream in Primate Visual Cortex</title>
<link>https://hdl.handle.net/1721.1/36407</link>
<description>A Theory of Object Recognition: Computations and Circuits in the Feedforward Path of the Ventral Stream in Primate Visual Cortex
Serre, T.; Kouh, M.; Cadieu, C.; Knoblich, U.; Kreiman, G.; Poggio, Tomaso A
We describe a quantitative theory to account for the computations performed by the feedforward path of the ventral stream of visual cortex and the local circuits implementing them. We show that a model instantiating the theory is capable of performing recognition on datasets of complex images at the level of human observers in rapid categorization tasks. We also show that the theory is consistent with (and in some cases has predicted) several properties of neurons in V1, V4, IT and PFC. The theory seems sufficiently comprehensive, detailed and satisfactory to represent an interesting challenge for physiologists and modelers: either disprove its basic features or propose alternative theories of equivalent scope. The theory suggests a number of open questions for visual physiology and psychophysics.
</description>
<pubDate>Mon, 19 Dec 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36407</guid>
<dc:date>2005-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Method Selection and Dispatching of Contingent, Temporally Flexible Plans</title>
<link>https://hdl.handle.net/1721.1/36372</link>
<description>Distributed Method Selection and Dispatching of Contingent, Temporally Flexible Plans
Block, Stephen
Many applications of autonomous agents require groups to work in tight coordination. To be dependable, these groups must plan, carry out and adapt their activities in a way that is robust to failure and to uncertainty. Previous work developed contingent, temporally flexible plans. These plans provide robustness to uncertain activity durations, through flexible timing constraints, and robustness to plan failure, through alternate approaches to achieving a task. Robust execution of contingent, temporally flexible plans consists of two phases. First, in the plan extraction phase, the executive chooses between the functionally redundant methods in the plan to select an execution sequence that satisfies the temporal bounds in the plan. Second, in the plan execution phase, the executive dispatches the plan, using the temporal flexibility to schedule activities dynamically. Previous contingent plan execution systems use a centralized architecture in which a single agent conducts planning for the entire group. This can result in a communication bottleneck at the time when plan activities are passed to the other agents for execution, and state information is returned. Likewise, a computation bottleneck may also occur because a single agent conducts all processing. This thesis introduces a robust, distributed executive for temporally flexible plans, called Distributed-Kirk, or D-Kirk. To execute a plan, D-Kirk first distributes the plan between the participating agents, by creating a hierarchical ad-hoc network and by mapping the plan onto this hierarchy. Second, the plan is reformulated using a distributed, parallel algorithm into a form amenable to fast dispatching. Finally, the plan is dispatched in a distributed fashion. We then extend the D-Kirk distributed executive to handle contingent plans.
Contingent plans are encoded as Temporal Plan Networks (TPNs), which use a non-deterministic choice operator to compose temporally flexible plan fragments into a nested hierarchy of contingencies. A temporally consistent plan is extracted from the TPN using a distributed, parallel algorithm that exploits the structure of the TPN. At all stages of D-Kirk, the communication load is spread over all agents, thus eliminating the communication bottleneck. In particular, D-Kirk reduces the peak communication complexity of the plan execution phase by a factor of O(A/e'), where e' is the number of edges per node in the dispatchable plan, determined by the branching factor of the input plan, and A is the number of agents involved in executing the plan. In addition, the distributed algorithms employed by D-Kirk reduce the computational load on each agent and provide opportunities for parallel processing, thus increasing efficiency. In particular, D-Kirk reduces the average computational complexity of plan dispatching from O(eN^3) in the centralized case, to typical values of O(eN^2) per node and O(eN^3/A) per agent in the distributed case, where N is the number of nodes in the plan and e is the number of edges per node in the input plan. Both of the above results were confirmed empirically using a C++ implementation of D-Kirk on a set of parameterized input plans. The D-Kirk implementation was also tested in a realistic application where it was used to control a pair of robotic manipulators involved in a cooperative assembly task.
SM thesis
</description>
<pubDate>Mon, 05 Mar 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36372</guid>
<dc:date>2007-03-05T00:00:00Z</dc:date>
</item>
<item>
<title>Sensitive Manipulation</title>
<link>https://hdl.handle.net/1721.1/36371</link>
<description>Sensitive Manipulation
Torres-Jara, Eduardo
This thesis presents an effective alternative to the traditional approach to robotic manipulation. In our approach, manipulation is mainly guided by tactile feedback as opposed to vision. The motivation comes from the fact that manipulating an object implies coming in contact with it; consequently, directly sensing physical contact seems more important than vision for controlling the interaction of the object and the robot. In this work, the traditional approach of a highly precise arm and vision system controlled by a model-based architecture is replaced by one that uses a low mechanical impedance arm with dense tactile sensing and exploration capabilities run by a behavior-based architecture. The robot OBRERO has been built to implement this approach. New tactile sensing technology has been developed and mounted on the robot's hand. These sensors are biologically inspired and present more adequate features for manipulation than those of state-of-the-art tactile sensors. The robot's limb was built with compliant actuators, which present low mechanical impedance, to make the interaction between the robot and the environment safer than that of a traditional high-stiffness arm. A new actuator was created to fit within the hand's size constraints. The reduced precision of OBRERO's limb is compensated by the capability of exploration given by the tactile sensors, actuators, and the software architecture. The success of this approach is shown by picking up objects in an unmodelled environment. This task, simple for humans, has been a challenge for robots. The robot can deal with new, unmodelled objects. OBRERO can come gently into contact, explore, lift, and place an object in a different location. It can also detect slippage and external forces acting on an object while it is held. Each one of these steps is done using tactile feedback. This task can be done with very light objects, with no fixtures, and on slippery surfaces.
PhD thesis
</description>
<pubDate>Fri, 02 Mar 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36371</guid>
<dc:date>2007-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>Trading Structure for Randomness in Wireless Opportunistic Routing</title>
<link>https://hdl.handle.net/1721.1/36345</link>
<description>Trading Structure for Randomness in Wireless Opportunistic Routing
Chachulski, Szymon; Jennings, Michael; Katti, Sachin; Katabi, Dina
Opportunistic routing is a recent technique that achieves high throughput in the face of lossy wireless links. The current opportunistic routing protocol, ExOR, ties the MAC to routing, imposing a strict schedule on routers' access to the medium. Although the scheduler delivers opportunistic gains, it misses some of the inherent features of the 802.11 MAC. For example, it prevents spatial reuse and thus may underutilize the wireless medium. It also eliminates the layering abstraction, making the protocol less amenable to extensions for alternate traffic types such as multicast. This paper presents MORE, a MAC-independent opportunistic routing protocol. MORE randomly mixes packets before forwarding them. This randomness ensures that routers that hear the same transmission do not forward the same packets. Thus, MORE needs no special scheduler to coordinate routers and can run directly on top of 802.11. Experimental results from a 20-node wireless testbed show that MORE's average unicast throughput is 20% higher than ExOR's, and the gains rise to 50% over ExOR when there is a chance of spatial reuse. For multicast, MORE's gains increase with the number of destinations, and are 35-200% greater than ExOR's.
</description>
<pubDate>Fri, 23 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36345</guid>
<dc:date>2007-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Information Slicing: Anonymity Using Unreliable Overlays</title>
<link>https://hdl.handle.net/1721.1/36344</link>
<description>Information Slicing: Anonymity Using Unreliable Overlays
Katti, Sachin; Cohen, Jeffrey; Katabi, Dina
This paper proposes a new approach to anonymous communication called information slicing. Typically, anonymizers use onion routing, where a message is encrypted in layers with the public keys of the nodes along the path. Instead, our approach scrambles the message, divides it into pieces, and sends the pieces along disjoint paths. We show that information slicing addresses message confidentiality as well as source and destination anonymity. Surprisingly, it does not need any public key cryptography. Further, our approach naturally addresses the problem of node failures. These characteristics make it a good fit for use over dynamic peer-to-peer overlays. We evaluate the anonymity of information slicing via analysis and simulations.  Our prototype implementation on PlanetLab shows that it achieves higher throughput than onion routing and effectively copes with node churn.
</description>
<pubDate>Fri, 23 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36344</guid>
<dc:date>2007-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Embracing Wireless Interference: Analog Network Coding</title>
<link>https://hdl.handle.net/1721.1/36343</link>
<description>Embracing Wireless Interference: Analog Network Coding
Katti, Sachin; Gollakota, Shyamnath; Katabi, Dina
Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it codes signals, not bits. So, what if wireless routers forward signals instead of packets? Theoretically, we prove that such an approach doubles the capacity of the canonical relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding.
</description>
<pubDate>Fri, 23 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/36343</guid>
<dc:date>2007-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Using Task-Structured Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol</title>
<link>https://hdl.handle.net/1721.1/35918</link>
<description>Using Task-Structured Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol
Canetti, Ran; Cheung, Ling; Kaynar, Dilsun; Liskov, Moses; Lynch, Nancy; Pereira, Olivier; Segala, Roberto
The Probabilistic I/O Automata framework of Lynch, Segala and Vaandrager provides tools for precisely specifying protocols and reasoning about their correctness using multiple levels of abstraction, based on implementation relationships between these levels. We enhance this framework to allow analyzing protocols that use cryptographic primitives. This requires resolving and reconciling issues such as nondeterministic behavior and scheduling, randomness, resource-bounded computation, and computational hardness assumptions. The enhanced framework allows for more rigorous and systematic analysis of cryptographic protocols. To demonstrate the use of this framework, we present an example analysis that we have done for an Oblivious Transfer protocol.
</description>
<pubDate>Fri, 16 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35918</guid>
<dc:date>2007-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic shaping and decomposition of reward functions</title>
<link>https://hdl.handle.net/1721.1/35890</link>
<description>Automatic shaping and decomposition of reward functions
Marthi, Bhaskara
This paper investigates the problem of automatically learning how to restructure the reward function of a Markov decision process so as to speed up reinforcement learning.  We begin by describing a method that learns a shaped reward function given a set of state and temporal abstractions.  Next, we consider decomposition of the per-timestep reward in multieffector problems, in which the overall agent can be decomposed into multiple units that are concurrently carrying out various tasks.  We show by example that to find a good reward decomposition, it is often necessary to first shape the rewards appropriately.  We then give a function approximation algorithm for solving both problems together.  Standard reinforcement learning algorithms can be augmented with our methods, and we show experimentally that in each case, significantly faster learning results.
</description>
<pubDate>Tue, 13 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35890</guid>
<dc:date>2007-02-13T00:00:00Z</dc:date>
</item>
<item>
<title>PPR: Partial Packet Recovery for Wireless Networks</title>
<link>https://hdl.handle.net/1721.1/35889</link>
<description>PPR: Partial Packet Recovery for Wireless Networks
Jamieson, Kyle; Balakrishnan, Hari
Bit errors occur over wireless channels when the signal isn't strong enough to overcome the effects of interference and noise.  Current wireless protocols may use forward error correction (FEC) to correct some (small) number of bit errors, but generally retransmit the whole packet if the FEC is insufficient.  We observe that current wireless mesh network protocols retransmit a number of packets and that most of these retransmissions end up sending bits that have already been received multiple times, wasting network capacity.  To overcome this inefficiency, we develop, implement, and evaluate a partial packet recovery (PPR) system. PPR incorporates three new ideas: (1) SoftPHY, an expanded physical layer (PHY) interface that provides hints to the higher layers about how ``close'' the actual received symbol was to the one decoded; (2) a postamble scheme to recover data even when a packet's preamble is corrupted and not decodable at the receiver; and (3) PP-ARQ, an asynchronous link-layer retransmission protocol that allows a receiver to compactly encode and request retransmission of only those portions of a packet that are likely in error. Our experimental results from a 27-node 802.15.4 testbed that includes Telos motes with 2.4 GHz Chipcon radios and GNU Radio nodes implementing the Zigbee standard (802.15.4) show that PPR increases the frame delivery rate by a factor of 2x under moderate load, and 7x under heavy load when many links have marginal quality.
</description>
<pubDate>Fri, 02 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35889</guid>
<dc:date>2007-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>HQ Replication: Properties and Optimizations</title>
<link>https://hdl.handle.net/1721.1/35888</link>
<description>HQ Replication: Properties and Optimizations
Cowling, James; Myers, Daniel; Liskov, Barbara; Rodrigues, Rodrigo; Shrira, Liuba
There are currently two approaches to providing Byzantine-fault-tolerant state machine replication: a replica-based approach, e.g., BFT, that uses communication between replicas to agree on a proposed ordering of requests, and a quorum-based approach, such as Q/U, in which clients contact replicas directly to optimistically execute operations. Both approaches have shortcomings: the quadratic cost of inter-replica communication is unnecessary when there is no contention, and Q/U requires a large number of replicas and performs poorly under contention. We present HQ, a hybrid Byzantine-fault-tolerant state machine replication protocol that overcomes these problems. HQ employs a lightweight quorum-based protocol when there is no contention, but uses BFT to resolve contention when it arises.  Furthermore, HQ uses only 3f+1 replicas to tolerate f faults, providing optimal resilience to node failures. We implemented a prototype of HQ, and we compare its performance to BFT and Q/U analytically and experimentally. Additionally, in this work we use a new implementation of BFT designed to scale as the number of faults increases.  Our results show that both HQ and our new implementation of BFT scale as f increases; additionally, our hybrid approach of using BFT to handle contention works well.
</description>
<pubDate>Mon, 12 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35888</guid>
<dc:date>2007-02-12T00:00:00Z</dc:date>
</item>
<item>
<title>Phonetic Classification Using Hierarchical, Feed-forward, Spectro-temporal Patch-based Architectures</title>
<link>https://hdl.handle.net/1721.1/35835</link>
<description>Phonetic Classification Using Hierarchical, Feed-forward, Spectro-temporal Patch-based Architectures
Rifkin, Ryan; Bouvrie, Jake; Schutte, Ken; Chikkerur, Sharat; Kouh, Minjoon; Ezzat, Tony; Poggio, Tomaso
A preliminary set of experiments is described in which a biologically-inspired computer vision system (Serre, Wolf et al. 2005; Serre 2006; Serre, Oliva et al. 2006; Serre, Wolf et al. 2006) designed for visual object recognition was applied to the task of phonetic classification. During learning, the system processed 2-D wideband magnitude spectrograms directly as images, producing a set of 2-D spectro-temporal patch dictionaries at different spectro-temporal positions, orientations, scales, and of varying complexity. During testing, features were computed by comparing the stored patches with patches from novel spectrograms. Classification was performed using a regularized least squares classifier (Rifkin, Yeo et al. 2003; Rifkin, Schutte et al. 2007) trained on the features computed by the system. On a 20-class TIMIT vowel classification task, the model features achieved a best result of 58.74% error, compared to 48.57% error using state-of-the-art MFCC-based features trained with the same classifier. This suggests that hierarchical, feed-forward, spectro-temporal patch-based architectures may be useful for phonetic analysis.
</description>
<pubDate>Thu, 01 Feb 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35835</guid>
<dc:date>2007-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explorations in Low-Cost Compliant Robotics</title>
<link>https://hdl.handle.net/1721.1/35821</link>
<description>Explorations in Low-Cost Compliant Robotics
Kumpf, Adam
This thesis presents the findings of exploratory research in low-cost compliant robotics.  The most heavily leveraged trade-off is that of mechanical precision for computational power, with the hope that the price of future computation will continue to fall exponentially while the expected price of precision mechanical parts will remain relatively constant.  The most novel contribution of this research is the Torsionally Compliant Elastomer Joint (TCEJ) which allows for compliance and sensing in a very small package while using extremely inexpensive components.  Computational modeling of hysteresis, signal compression, and backlash are also explored to compensate for the non-idealities often found in cheap mechanical parts.  Three proof-of-concept systems are described along with a set of experiments used to test their capabilities.  Finally, future work is proposed that will likely shape the next generation of low-cost compliant robotics.
MEng thesis
</description>
<pubDate>Tue, 30 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35821</guid>
<dc:date>2007-01-30T00:00:00Z</dc:date>
</item>
<item>
<title>Online Active Learning in Practice</title>
<link>https://hdl.handle.net/1721.1/35784</link>
<description>Online Active Learning in Practice
Monteleoni, Claire; Kaariainen, Matti
We compare the practical performance of several recently proposed algorithms for active learning in the online setting.  We consider two algorithms (and their combined variants) that are strongly online, in that they do not store any previously labeled examples, and for which formal guarantees have recently been proven under various assumptions.  We perform an empirical evaluation on optical character recognition (OCR) data, an application that we argue to be appropriately served by online active learning.  We compare the performance between the algorithm variants and show significant reductions in label-complexity over random sampling.
</description>
<pubDate>Tue, 23 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35784</guid>
<dc:date>2007-01-23T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Manipulation in Human Environments</title>
<link>https://hdl.handle.net/1721.1/35727</link>
<description>Robot Manipulation in Human Environments
Edsinger, Aaron
Human environments present special challenges for robot manipulation. They are often dynamic, difficult to predict, and beyond the control of a robot engineer. Fortunately, many characteristics of these settings can be used to a robot's advantage. Human environments are typically populated by people, and a robot can rely on the guidance and assistance of a human collaborator. Everyday objects exhibit common, task-relevant features that reduce the cognitive load required for the object's use. Many tasks can be achieved through the detection and control of these sparse perceptual features. And finally, a robot is more than a passive observer of the world. It can use its body to reduce its perceptual uncertainty about the world. In this thesis we present advances in robot manipulation that address the unique challenges of human environments. We describe the design of a humanoid robot named Domo, develop methods that allow Domo to assist a person in everyday tasks, and discuss general strategies for building robots that work alongside people in their homes and workplaces.
PhD thesis
</description>
<pubDate>Tue, 16 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35727</guid>
<dc:date>2007-01-16T00:00:00Z</dc:date>
</item>
<item>
<title>Scale Control Processor Test-Chip</title>
<link>https://hdl.handle.net/1721.1/35724</link>
<description>Scale Control Processor Test-Chip
Batten, Christopher; Krashinsky, Ronny; Asanovic, Krste
We are investigating vector-thread architectures which provide competitive performance and efficiency across a broad class of application domains. Vector-thread architectures unify data-level, thread-level, and instruction-level parallelism, providing new ways of parallelizing codes that are difficult to vectorize or that incur excessive synchronization costs when multithreaded. To illustrate these ideas we have developed the Scale processor, which is an example of a vector-thread architecture designed for low-power and high-performance embedded systems. The prototype includes a single-issue 32-bit RISC control processor, a vector-thread unit which supports up to 128 virtual processor threads and can execute up to 16 instructions per cycle, and a 32 KB shared primary cache. Since the Scale Vector-Thread Processor is a large and complex design (especially for an academic project), we first designed and fabricated the Scale Test Chip (STC1). STC1 includes a simplified version of the Scale control processor, 8 KB of RAM, a host interface, and a custom clock generator.  STC1 helped mitigate the risk involved in fabricating the full Scale chip in several ways. First, we were able to establish and test our CAD toolflow. Our toolflow included several custom tools which had not previously been used in any tapeouts. Second, we were able to better characterize our target package and process. For example, STC1 enabled us to better correlate the static timing numbers from our CAD tools with actual silicon and also to characterize the expected rise/fall times of our external signal pins. Finally, STC1 allowed us to test our custom clock generator. We used our experiences with STC1 to help us implement the Scale vector-thread processor. Scale was taped out on October 15, 2006 and it is currently being fabricated through MOSIS. This report discusses the fabrication of STC1 and presents power and performance results.
</description>
<pubDate>Fri, 12 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35724</guid>
<dc:date>2007-01-12T00:00:00Z</dc:date>
</item>
<item>
<title>Latent-Dynamic Discriminative Models for Continuous Gesture Recognition</title>
<link>https://hdl.handle.net/1721.1/35276</link>
<description>Latent-Dynamic Discriminative Models for Continuous Gesture Recognition
Morency, Louis-Philippe; Quattoni, Ariadna; Darrell, Trevor
Many problems in vision involve the prediction of a class label for each frame in an unsegmented sequence. In this paper we develop a discriminative framework for simultaneous sequence segmentation and labeling which can capture both intrinsic and extrinsic class dynamics. Our approach incorporates hidden state variables which model the sub-structure of a class sequence and learn the dynamics between class labels. Each class label has a disjoint set of associated hidden states, which enables efficient training and inference in our model. We evaluated our method on the task of recognizing human gestures from unsegmented video streams and performed experiments on three different datasets of head and eye gestures. Our results demonstrate that our model for visual gesture recognition outperforms models based on Support Vector Machines, Hidden Markov Models, and Conditional Random Fields.
</description>
<pubDate>Sun, 07 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35276</guid>
<dc:date>2007-01-07T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifier-Free Boolean Algebra with Presburger Arithmetic is NP-Complete</title>
<link>https://hdl.handle.net/1721.1/35258</link>
<description>Quantifier-Free Boolean Algebra with Presburger Arithmetic is NP-Complete
Kuncak, Viktor
Boolean Algebra with Presburger Arithmetic (BAPA) combines 1) Boolean algebras of sets of uninterpreted elements (BA) and 2) Presburger arithmetic operations (PA).  BAPA can express the relationship between integer variables and cardinalities of unbounded finite sets and can be used to express verification conditions in verification of data structure consistency properties. In this report I consider the Quantifier-Free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). Previous algorithms for QFBAPA had non-deterministic exponential time complexity.  In this report I show that QFBAPA is in NP, and is therefore NP-complete.  My result yields an algorithm for checking satisfiability of QFBAPA formulas by converting them to polynomially sized formulas of quantifier-free Presburger arithmetic.  I expect this algorithm to substantially extend the range of QFBAPA problems whose satisfiability can be checked in practice.
</description>
<pubDate>Mon, 01 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/35258</guid>
<dc:date>2007-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bounded CCA2-Secure Non-Malleable Encryption</title>
<link>https://hdl.handle.net/1721.1/34968</link>
<description>Bounded CCA2-Secure Non-Malleable Encryption
Pass, Rafael; Shelat, Abhi; Vaikuntanathan, Vinod
Under an adaptive chosen ciphertext attack (CCA2), the security of an encryption scheme must hold against adversaries that have access to a decryption oracle.   We consider a weakening of CCA2 security, wherein security need only hold against adversaries making an a-priori bounded number of queries to the decryption oracle. Concerning this notion, which we call bounded-CCA2 security, we show the following two results.  (1) Bounded-CCA2 secure non-malleable encryption schemes exist if and only if semantically-secure (IND-CPA-secure) encryption schemes exist. (As far as we know, bounded-CCA2 non-malleability is the strongest notion of security known to be satisfiable assuming only the existence of semantically-secure encryption schemes.)  (2) In contrast to CCA2 security, bounded-CCA2 security alone does not imply non-malleability.  In particular, if there exists an encryption scheme that is bounded-CCA2 secure, then there exists another encryption scheme which remains bounded-CCA2 secure, but is malleable under a simple chosen-plaintext attack.
</description>
<pubDate>Thu, 14 Dec 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34968</guid>
<dc:date>2006-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Memoization Attacks and Copy Protection in Partitioned Applications</title>
<link>https://hdl.handle.net/1721.1/34954</link>
<description>Memoization Attacks and Copy Protection in Partitioned Applications
O'Donnell, Charles W.; Suh, G. Edward; van Dijk, Marten; Devadas, Srinivas
Application source code protection is a major concern for software architects today. Secure platforms have been proposed that protect the secrecy of application algorithms and enforce copy protection assurances. Unfortunately, these capabilities incur a sizeable performance overhead. Partitioning an application into secure and insecure regions can help diminish these overheads but invalidates guarantees of code secrecy and copy protection. This work examines one of the problems of partitioning an application into public and private regions: the ability of an adversary to recreate those private regions. To our knowledge, it is the first to analyze this problem when considering application operation as a whole. Looking at the fundamentals of the issue, we analyze one of the simplest attacks possible, a ``Memoization Attack.'' We implement an efficient Memoization Attack and discuss necessary techniques that limit storage and computation consumption. Experimentation reveals that certain classes of real-world applications are vulnerable to Memoization Attacks. To protect against such an attack, we propose a set of indicator tests that enable an application designer to identify susceptible application code regions.
</description>
<pubDate>Fri, 08 Dec 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34954</guid>
<dc:date>2006-12-08T00:00:00Z</dc:date>
</item>
<item>
<title>Distributed Area Search with a Team of Robots</title>
<link>https://hdl.handle.net/1721.1/34943</link>
<description>Distributed Area Search with a Team of Robots
Tzanov, Velin K.
The main goal of this thesis is to demonstrate the applicability of the distributed systems paradigm to robotic systems. This goal is accomplished by presenting two solutions to the Distributed Area Search problem: organizing a team of robots to collaborate in the task of searching through an area. The first solution is designed for unreliable robots equipped with a reliable GPS-style localization system. This solution demonstrates the efficiency and fault-tolerance of this type of distributed robotic system, as well as its applicability to the real world. We present a theoretically near-optimal algorithm for solving Distributed Area Search under this setting, and we also present an implementation of our algorithm on an actual system consisting of twelve robots. The second solution is designed for a completely autonomous system, without the aid of any centralized subsystem. It demonstrates how a distributed robotic system can solve a problem that is practically unsolvable for a single-robot system.
MEng thesis
</description>
<pubDate>Tue, 05 Dec 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34943</guid>
<dc:date>2006-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Materialization Strategies in a Column-Oriented DBMS</title>
<link>https://hdl.handle.net/1721.1/34929</link>
<description>Materialization Strategies in a Column-Oriented DBMS
Abadi, Daniel J.; Myers, Daniel S.; DeWitt, David J.; Madden, Samuel R.
There has been renewed interest in column-oriented database architectures in recent years. For read-mostly query workloads such as those found in data warehouse and decision support applications, ``column-stores'' have been shown to perform particularly well relative to ``row-stores.'' In order for column-stores to be readily adopted as a replacement for row-stores, however, they must present the same interface to client applications as do row-stores, which implies that they must output row-store-style tuples. Thus, the input columns stored on disk must be converted to rows at some point in the query plan, but the optimal point at which to do the conversion is not obvious. This problem can be considered as the opposite of the projection problem in row-store systems: while row-stores need to determine where in query plans to place projection operators to make tuples narrower, column-stores need to determine when to combine single-column projections into wider tuples. This paper describes a variety of strategies for tuple construction and intermediate result representations and provides a systematic evaluation of these strategies.
</description>
<pubDate>Mon, 27 Nov 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34929</guid>
<dc:date>2006-11-27T00:00:00Z</dc:date>
</item>
<item>
<title>Scoop: An Adaptive Indexing Scheme for Stored Data in Sensor Networks</title>
<link>https://hdl.handle.net/1721.1/34916</link>
<description>Scoop: An Adaptive Indexing Scheme for Stored Data in Sensor Networks
Gil, Thomer M.; Madden, Samuel
In this paper, we present the design of Scoop, a system for indexing and querying stored data in sensor networks. Scoop works by collecting statistics about the rate of queries and distribution of sensor readings over a sensor network, and uses those statistics to build an index that tells nodes where in the network to store their readings. Using this index, a user's queries over that stored data can be answered efficiently, without flooding those queries throughout the network. This approach offers a substantial advantage over other solutions that either store all data externally on a basestation (requiring every reading to be collected from all nodes), or that store all data locally on the node that produced it (requiring queries to be flooded throughout the network). Our results, in fact, show that Scoop offers a factor of four improvement over existing techniques in a real implementation on a 64-node mote-based sensor network. These results also show that Scoop is able to efficiently adapt to changes in the distribution and rates of data and queries.
</description>
<pubDate>Mon, 27 Nov 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34916</guid>
<dc:date>2006-11-27T00:00:00Z</dc:date>
</item>
<item>
<title>Context-based Visual Feedback Recognition</title>
<link>https://hdl.handle.net/1721.1/34893</link>
<description>Context-based Visual Feedback Recognition
Morency, Louis-Philippe
During face-to-face conversation, people use visual feedback (e.g., head and eye gestures) to communicate relevant information and to synchronize rhythm between participants. When recognizing visual feedback, people often rely on more than their visual perception. For instance, knowledge about the current topic and from previous utterances helps guide the recognition of nonverbal cues. The goal of this thesis is to augment computer interfaces with the ability to perceive visual feedback gestures and to enable the exploitation of contextual information from the current interaction state to improve visual feedback recognition. We introduce the concept of visual feedback anticipation, where contextual knowledge from an interactive system (e.g., the last spoken utterance from the robot or system events from the GUI interface) is analyzed online to anticipate visual feedback from a human participant and improve visual feedback recognition. Our multi-modal framework for context-based visual feedback recognition was successfully tested on conversational and non-embodied interfaces for head and eye gesture recognition. We also introduce the Frame-based Hidden-state Conditional Random Field (FHCRF) model, a new discriminative model for visual gesture recognition which can model the sub-structure of a gesture sequence, learn the dynamics between gesture labels, and be directly applied to label unsegmented sequences. The FHCRF model outperforms previous approaches (i.e., HMM, SVM, and CRF) for visual gesture recognition and can efficiently learn relevant contextual information necessary for visual feedback anticipation. A real-time visual feedback recognition library for interactive interfaces (called Watson) was developed to recognize head gaze, head gestures, and eye gaze using the images from a monocular or stereo camera and the context information from the interactive system.
Watson was downloaded by more than 70 researchers around the world and was successfully used by MERL, USC, NTT, the MIT Media Lab, and many other research groups.
PhD thesis
</description>
<pubDate>Wed, 15 Nov 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34893</guid>
<dc:date>2006-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Information-Flow Tracking for C and Related Languages</title>
<link>https://hdl.handle.net/1721.1/34892</link>
<description>Quantitative Information-Flow Tracking for C and Related Languages
McCamant, Stephen; Ernst, Michael D.
We present a new approach for tracking programs' use of data through arbitrary calculations, to determine how much information about secret inputs is revealed by public outputs.  Using a fine-grained dynamic bit-tracking analysis, the technique measures the information revealed during a particular execution.  The technique accounts for indirect flows, e.g., via branches and pointer operations.  Two kinds of untrusted annotation improve the precision of the analysis.  An implementation of the technique based on dynamic binary translation is demonstrated on real C, C++, and Objective C programs of up to half a million lines of code.  In case studies, the tool checked multiple security policies, including one that was violated by a previously unknown bug.
</description>
<pubDate>Fri, 17 Nov 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34892</guid>
<dc:date>2006-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>A Fast Approximation of the Bilateral Filter using a Signal Processing Approach</title>
<link>https://hdl.handle.net/1721.1/34876</link>
<description>A Fast Approximation of the Bilateral Filter using a Signal Processing Approach
Paris, Sylvain; Durand, Fredo
The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity.  This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and achieving important acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2-megapixel image using our acceleration technique in less than a second, and have the result be visually similar to the exact computation that takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
</description>
<pubDate>Thu, 09 Nov 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34876</guid>
<dc:date>2006-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>On the Adaptive Real-Time Detection of Fast-Propagating Network Worms</title>
<link>https://hdl.handle.net/1721.1/34875</link>
<description>On the Adaptive Real-Time Detection of Fast-Propagating Network Worms
Jung, Jaeyeon; Milito, Rodolfo A.; Paxson, Vern
We present two lightweight worm detection algorithms that offer significant advantages over fixed-threshold methods. The first algorithm, RBS (rate-based sequential hypothesis testing), aims at the large class of worms that attempt to propagate quickly, thus exhibiting abnormal levels of the rate at which hosts initiate connections to new destinations. The foundation of RBS derives from the theory of sequential hypothesis testing, the use of which for detecting randomly scanning hosts was first introduced by our previous work with the TRW (Threshold Random Walk) scan detection algorithm. The sequential hypothesis testing methodology enables engineering the detectors to meet false positive and false negative targets, rather than triggering when fixed thresholds are crossed. In this sense, the detectors that we introduce are truly adaptive. We then introduce RBS+TRW, an algorithm that combines the fan-out rate (RBS) and the probability of failure (TRW) of connections to new destinations. RBS+TRW provides a unified framework that at one end acts as pure RBS and at the other end as pure TRW, and extends RBS's power in detecting worms that scan randomly selected IP addresses.
</description>
<pubDate>Fri, 10 Nov 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34875</guid>
<dc:date>2006-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>On Using First-Order Theorem Provers in the Jahob Data Structure Verification System</title>
<link>https://hdl.handle.net/1721.1/34874</link>
<description>On Using First-Order Theorem Provers in the Jahob Data Structure Verification System
Bouillaguet, Charles; Kuncak, Viktor; Wies, Thomas; Zee, Karen; Rinard, Martin
This paper presents our integration of efficient resolution-based theorem provers into the Jahob data structure verification system.  Our experimental results show that this approach enables Jahob to automatically verify the correctness of a range of complex dynamically instantiable data structures, including data structures such as hash tables and search trees, without the need for interactive theorem proving or techniques tailored to individual data structures.  Our primary technical results include: (1) a translation from higher-order logic to first-order logic that enables the application of resolution-based theorem provers and (2) a proof that eliminating type (sort) information in formulas is both sound and complete, even in the presence of a generic equality operator.  Our experimental results show that the elimination of type information dramatically decreases the time required to prove the resulting formulas.  These techniques enabled us to verify complex correctness properties of Java programs such as a mutable set implemented as an imperative linked list, a finite map implemented as a functional ordered tree, a hash table with a mutable array, and a simple library system example that uses these container data structures.  Our system verifies (in a matter of minutes) that data structure operations correctly update the finite map, that they preserve data structure invariants (such as ordering of elements, membership in appropriate hash table buckets, or relationships between sets and relations), and that there are no run-time errors such as null dereferences or array out-of-bounds accesses.
</description>
<pubDate>Thu, 09 Nov 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34874</guid>
<dc:date>2006-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>Analogical Retrieval via Intermediate Features: The Goldilocks Hypothesis</title>
<link>https://hdl.handle.net/1721.1/34635</link>
<description>Analogical Retrieval via Intermediate Features: The Goldilocks Hypothesis
Finlayson, Mark Alan; Winston, Patrick Henry
Analogical reasoning has been implicated in many important cognitive processes, such as learning, categorization, planning, and understanding natural language. Therefore, to obtain a full understanding of these processes, we must come to a better understanding of how people reason by analogy. Analogical reasoning is thought to occur in at least three stages: retrieval of a source description from memory upon presentation of a target description, mapping of the source description to the target description, and transfer of relationships from source description to target description. Here we examine the first stage, the retrieval of relevant sources from long-term memory for their use in analogical reasoning. Specifically we ask: what can people retrieve from long-term memory, and how do they do it? Psychological experiments show that subjects display two sorts of retrieval patterns when reasoning by analogy: a novice pattern and an expert pattern. Novice-like subjects are more likely to recall superficially similar descriptions that are not helpful for reasoning by analogy. Conversely, expert-like subjects are more likely to recall structurally related descriptions that are useful for further analogical reasoning. Previous computational models of the retrieval stage have only attempted to model novice-like retrieval. We introduce a computational model that can demonstrate both novice-like and expert-like retrieval with the same mechanism. The parameter of the model that is varied to produce these two types of retrieval is the average size of the features used to identify matches in memory.
We find, in agreement with an intuition from the work of Ullman and co-workers regarding the use of features in visual classification (Ullman, Vidal-Naquet, &amp; Sali, 2002), that features of an intermediate size are most useful for analogical retrieval. We conducted two computational experiments on our own dataset of fourteen formally described stories, which showed that our model gives the strongest analogical retrieval, and is most expert-like, when it uses features that are on average of intermediate size. We conducted a third computational experiment on the Karla the Hawk dataset, which showed a modest effect consistent with our predictions. Because our model and Ullman's work both rely on intermediate-sized features to perform recognition-like tasks, we take both as supporting what we call the Goldilocks hypothesis: that on average the features that are maximally useful for recognition are neither too small nor too large, neither too simple nor too complex, but rather are in the middle, of intermediate size and complexity.
</description>
<pubDate>Tue, 07 Nov 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34635</guid>
<dc:date>2006-11-07T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Atomic Data through Indirect Learning in Dynamic Network</title>
<link>https://hdl.handle.net/1721.1/34249</link>
<description>Implementing Atomic Data through Indirect Learning in Dynamic Network
Konwar, K.; Musial, P.M.; Nicolau, N.C.; Shvartsman., A.A.
Developing middleware services for dynamic distributed systems, e.g., ad-hoc networks, is a challenging task given that such services must deal with communicating devices that may join and leave the system, and fail or experience arbitrary delays. Algorithms developed for static settings are often not usable in dynamic settings because they rely on (logical) all-to-all connectivity or assume underlying routing protocols, which may be unfeasible in highly dynamic settings. This paper explores the indirect learning approach to information dissemination within a dynamic distributed data service. The indirect learning scheme is used to improve the liveness of the atomic read/write object service in the settings with uncertain connectivity. The service is formally proved to be correct, i.e., the atomicity of the objects is guaranteed in all executions. Conditional analysis of the performance of the new service is presented. This analysis has the potential of being generalized to other similar dynamic algorithms. Under the assumption that the network is connected, and assuming reasonable timing conditions, the bounds on the duration of the read/write operations of the new service are calculated. Finally, the paper proposes a deployment strategy where indirect learning leads to an improvement in communication costs relative to a previous solution.
</description>
<pubDate>Thu, 12 Oct 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34249</guid>
<dc:date>2006-10-12T00:00:00Z</dc:date>
</item>
<item>
<title>Programming a Sensor Network as an Amorphous Medium</title>
<link>https://hdl.handle.net/1721.1/34223</link>
<description>Programming a Sensor Network as an Amorphous Medium
Bachrach, Jonathan; Beal, Jacob
In many sensor network applications, the network is deployed to approximate a physical space. The network itself is not of interest: rather, we are interested in measuring the properties of the space it fills, and in establishing control over the behavior of that space. The spatial nature of sensor network applications means that many can be expressed naturally and succinctly in terms of the global behavior of an amorphous medium---a continuous computational material filling the space of interest. Although we cannot construct such a material, we can approximate it using a sensor network. Using this amorphous medium abstraction separates sensor network problems into two largely independent domains. Above the abstraction barrier we are concerned with long-range coordination and concise description of applications, while below the barrier we are concerned with fast, efficient, and robust communication between neighboring devices. We apply the amorphous medium abstraction with Proto, a high-level language for programming sensor/actuator networks. Existing applications, such as target tracking and threat avoidance, can be expressed in only a few lines of Proto code. The applications are then compiled for execution on a kernel that approximates an amorphous medium. Programs written using our Proto implementation have been verified in simulation on over ten thousand nodes, as well as on a network of Berkeley Motes.
</description>
<pubDate>Thu, 01 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34223</guid>
<dc:date>2006-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design of a Relational Engine</title>
<link>https://hdl.handle.net/1721.1/34218</link>
<description>The Design of a Relational Engine
Torlak, Emina; Jackson, Daniel
The key design challenges in the construction of a SAT-based relational engine are described, and novel techniques are proposed to address them.  An efficient engine must have a mechanism for specifying partial solutions, an effective symmetry detection and breaking scheme, and an economical translation from relational to boolean logic.  These desiderata are addressed with three new techniques: a symmetry detection algorithm that works in the presence of partial solutions, a sparse-matrix representation of relations, and a compact representation of boolean formulas inspired by boolean expression diagrams and reduced boolean circuits.  The presented techniques have been implemented and evaluated, with promising results.
</description>
<pubDate>Fri, 29 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34218</guid>
<dc:date>2006-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptation for Regularization Operators in Learning Theory</title>
<link>https://hdl.handle.net/1721.1/34217</link>
<description>Adaptation for Regularization Operators in Learning Theory
Caponnetto, Andrea; Yao, Yuan
We consider learning algorithms induced by regularization methods in the regression setting.  We show that previously obtained error bounds for these algorithms, using a priori choices of the regularization parameter, can be attained using a suitable a posteriori choice based on validation.  In particular, these results prove adaptation of the rate of convergence of the estimators to the minimax rate induced by the "effective dimension" of the problem.  We also show universal consistency for this class of methods.
</description>
<pubDate>Sun, 10 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34217</guid>
<dc:date>2006-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Rates for Regularization Operators in Learning Theory</title>
<link>https://hdl.handle.net/1721.1/34216</link>
<description>Optimal Rates for Regularization Operators in Learning Theory
Caponnetto, Andrea
We develop some new error bounds for learning algorithms induced by regularization methods in the regression setting.  The "hardness" of the problem is characterized in terms of the parameters r and s, the first related to the "complexity" of the target function, the second connected to the effective dimension of the marginal probability measure over the input space.  We show, extending previous results, that by a suitable choice of the regularization parameter as a function of the number of available examples, it is possible to attain the optimal minimax rates of convergence for the expected squared loss of the estimators, over the family of priors fulfilling the constraint r + s &gt; 1/2.  The setting considers both labelled and unlabelled examples, the latter being crucial for the optimality results on the priors in the range r &lt; 1/2.
</description>
<pubDate>Sun, 10 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34216</guid>
<dc:date>2006-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Ubiquitous Memory Introspection (Preliminary Manuscript)</title>
<link>https://hdl.handle.net/1721.1/34013</link>
<description>Ubiquitous Memory Introspection (Preliminary Manuscript)
Zhao, Qin; Rabbah, Rodric; Amarasinghe, Saman; Rudolph, Larry; Wong, Weng-Fai
Modern memory systems play a critical role in the performance of applications, but a detailed understanding of the application behavior in the memory system is not trivial to attain. It requires time-consuming simulations of the memory hierarchy using long traces, and often detailed modeling. It is increasingly possible to access hardware performance counters to measure events in the memory system, but the measurements remain coarse-grained, better suited for performance summaries than providing instruction-level feedback. The availability of a low-cost, online, and accurate methodology for deriving fine-grained memory behavior profiles can prove extremely useful for runtime analysis and optimization of programs. This paper presents a new methodology for Ubiquitous Memory Introspection (UMI). It is an online and lightweight mini-simulation methodology that focuses on simulating short memory access traces recorded from frequently executed code regions. The simulations are fast and can provide profiling results at varying granularities, down to that of a single instruction or address. UMI naturally complements runtime optimization techniques and enables new opportunities for memory-specific optimizations. In this paper, we present a prototype implementation of a runtime system implementing UMI. The prototype is readily deployed on commodity processors, requires no user intervention, and can operate with stripped binaries and legacy software. The prototype operates with an average runtime overhead of 20%, but this slowdown is only 6% slower than a state-of-the-art binary instrumentation tool.  We used 32 benchmarks, including the full suite of SPEC2000 benchmarks, for our evaluation. We show that the mini-simulation results accurately reflect the cache performance of two existing memory systems, an Intel Pentium 4 and an AMD Athlon MP (K7) processor.
We also demonstrate that low-level profiling information from the online simulation can serve to identify high-miss-rate load instructions with a 77% rate of accuracy compared to full offline simulations that required days to complete. The online profiling results are used at runtime to implement a simple software prefetching strategy that achieves a speedup greater than 60% in the best case.
</description>
<pubDate>Mon, 25 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34013</guid>
<dc:date>2006-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>RingScalar: A Complexity-Effective Out-of-Order Superscalar Microarchitecture</title>
<link>https://hdl.handle.net/1721.1/34012</link>
<description>RingScalar: A Complexity-Effective Out-of-Order Superscalar Microarchitecture
Tseng, Jessica H.; Asanovic, Krste
RingScalar is a complexity-effective microarchitecture for out-of-order superscalar processors that reduces the area, latency, and power of all major structures in the instruction flow.  The design divides an N-way superscalar into N columns connected in a unidirectional ring, where each column contains a portion of the instruction window, a bank of the register file, and an ALU.  The design exploits the fact that most decoded instructions are waiting on just one operand to use only a single tag per issue window entry, and to restrict instruction wakeup and value bypass to only communicate with the neighboring column.  Detailed simulations of four-issue single-threaded machines running SPECint2000 show that RingScalar has IPC only 13% lower than an idealized superscalar, while providing large reductions in area, power, and circuit latency.
</description>
<pubDate>Mon, 18 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/34012</guid>
<dc:date>2006-09-18T00:00:00Z</dc:date>
</item>
<item>
<title>Combined static and dynamic mutability analysis</title>
<link>https://hdl.handle.net/1721.1/33968</link>
<description>Combined static and dynamic mutability analysis
Artzi, Shay; Ernst, Michael D.; Glasser, David; Kiezun, Adam
Knowing which method parameters may be mutated during a method's execution is useful for many software engineering tasks.  We present an approach to discovering parameter immutability, in which several lightweight, scalable analyses are combined in stages, with each stage refining the overall result.  The resulting analysis is scalable and combines the strengths of its component analyses.  As one of the component analyses, we present a novel dynamic mutability analysis and show how its results can be improved by random input generation.  Experimental results on programs of up to 185 kLOC demonstrate that, compared to previous approaches, our approach increases both scalability and overall accuracy.
</description>
<pubDate>Sun, 17 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33968</guid>
<dc:date>2006-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Monotonic Counters and Count-Limited Objects using a TPM without a Trusted OS (Extended Version)</title>
<link>https://hdl.handle.net/1721.1/33966</link>
<description>Virtual Monotonic Counters and Count-Limited Objects using a TPM without a Trusted OS (Extended Version)
Sarmenta, Luis F. G.; van Dijk, Marten; O'Donnell, Charles W.; Rhodes, Jonathan; Devadas, Srinivas
A trusted monotonic counter is a valuable primitive that enables a wide variety of highly scalable offline and decentralized applications that would otherwise be prone to replay attacks, including offline payment, e-wallets, virtual trusted storage, and digital rights management (DRM). In this paper, we show how one can implement a very large number of virtual monotonic counters on an untrusted machine with a Trusted Platform Module (TPM) or similar device, without relying on a trusted OS.  We first present a log-based scheme that can be implemented with the current version of the TPM (1.2) and used in certain applications. We then show how the addition of a few simple features to the TPM makes it possible to implement a hash-tree-based scheme that not only offers improved performance and scalability compared to the log-based scheme, but also makes it possible to implement count-limited objects (or ``clobs'' for short) -- i.e., encrypted keys, data, and other objects that can only be used when an associated virtual monotonic counter is within a certain range. Such count-limited objects include n-time-use keys, n-out-of-m data blobs, n-copy migratable objects, and other variants, which have many potential uses in digital rights management (DRM), digital cash, digital voting, itinerant computing, and other application areas.
</description>
<pubDate>Mon, 11 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33966</guid>
<dc:date>2006-09-11T00:00:00Z</dc:date>
</item>
<item>
<title>Refactoring for parameterizing Java classes</title>
<link>https://hdl.handle.net/1721.1/33965</link>
<description>Refactoring for parameterizing Java classes
Kiezun, Adam; Ernst, Michael D.; Tip, Frank; Fuhrer, Robert M.
Type safety and expressiveness of many existing Java libraries and their client applications would improve if the libraries were upgraded to define generic classes.  Efficient and accurate tools exist to assist client applications in using generic libraries, but so far the libraries themselves must be parameterized manually, which is a tedious, time-consuming, and error-prone task.  We present a type-constraint-based algorithm for converting non-generic libraries to add type parameters.  The algorithm handles the full Java language and preserves backward compatibility, thus making it safe for existing clients.  Among other features, it is capable of inferring wildcard types and introducing type parameters for mutually dependent classes.  We have implemented the algorithm as a fully automatic refactoring in Eclipse.  We evaluated our work in two ways.  First, our tool parameterized code that was lacking type parameters.  We contacted the developers of several of these applications, and in all cases where we received a response, they confirmed that the resulting parameterizations were correct and useful.  Second, to better quantify its effectiveness, our tool parameterized classes from already-generic libraries, and we compared the results to those that were created by the libraries' authors.  Our tool performed the refactoring accurately -- in 87% of cases the results were as good as those created manually by a human expert, in 9% of cases the tool results were better, and in 4% of cases the tool results were worse.
</description>
<pubDate>Tue, 05 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33965</guid>
<dc:date>2006-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Task-Structured Probabilistic I/O Automata</title>
<link>https://hdl.handle.net/1721.1/33964</link>
<description>Task-Structured Probabilistic I/O Automata
Canetti, Ran; Cheung, Ling; Kaynar, Dilsun; Liskov, Moses; Lynch, Nancy; Pereira, Olivier; Segala, Roberto
Modeling frameworks such as Probabilistic I/O Automata (PIOA) and Markov Decision Processes permit both probabilistic and nondeterministic choices.  In order to use such frameworks to express claims about probabilities of events, one needs mechanisms for resolving nondeterministic choices.  For PIOAs, nondeterministic choices have traditionally been resolved by schedulers that have perfect information about the past execution.  However, such schedulers are too powerful for certain settings, such as cryptographic protocol analysis, where information must sometimes be hidden.  Here, we propose a new, less powerful nondeterminism-resolution mechanism for PIOAs, consisting of tasks and local schedulers.  Tasks are equivalence classes of system actions that are scheduled by oblivious, global task sequences.  Local schedulers resolve nondeterminism within system components, based on local information only.  The resulting task-PIOA framework yields simple notions of external behavior and implementation, and supports simple compositionality results.  We also define a new kind of simulation relation, and show it to be sound for proving implementation.  We illustrate the potential of the task-PIOA framework by outlining its use in verifying an Oblivious Transfer protocol.
</description>
<pubDate>Tue, 05 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33964</guid>
<dc:date>2006-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Javari: Adding Reference Immutability to Java</title>
<link>https://hdl.handle.net/1721.1/33963</link>
<description>Javari: Adding Reference Immutability to Java
Tschantz, Matthew S.
This paper describes a programming language, Javari, that is capable of expressing and enforcing immutability constraints.  The specific constraint expressed is that the abstract state of the object to which an immutable reference refers cannot be modified using that reference.  The abstract state is (part of) the transitively reachable state: that is, the state of the object and all state reachable from it by following references.  The type system permits explicitly excluding fields from the abstract state of an object.  For a statically type-safe language, the type system guarantees reference immutability.  The type system distinguishes the notions of assignability and mutability; integrates with Java's generic types and with multi-dimensional arrays; provides a mutability polymorphism approach to avoiding code duplication; and has type-safe support for reflection and serialization.  This paper describes a core calculus including formal type rules for the language.  Additionally, this paper describes a type inference algorithm that can be used to convert existing Java programs to Javari.  Experimental results from a prototype implementation of the algorithm are presented.
MEng thesis
</description>
<pubDate>Tue, 05 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33963</guid>
<dc:date>2006-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Random Lens Imaging</title>
<link>https://hdl.handle.net/1721.1/33962</link>
<description>Random Lens Imaging
Fergus, Rob; Torralba, Antonio; Freeman, William T.
We call a random lens one for which the function relating the input light ray to the output sensor location is pseudo-random. Imaging systems with random lenses can expand the space of possible camera designs, allowing new trade-offs in optical design and potentially adding new imaging capabilities. Machine learning methods are critical for both camera calibration and image reconstruction from the sensor data. We develop the theory and compare two different methods for calibration and reconstruction: an MAP approach, and basis pursuit from compressive sensing. We show proof-of-concept experimental results from a random lens made from a multi-faceted mirror, showing successful calibration and image reconstruction. We illustrate the potential for super-resolution and 3D imaging.
</description>
<pubDate>Sat, 02 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33962</guid>
<dc:date>2006-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>Finding the needles in the haystack: Generating legal test inputs for object-oriented programs</title>
<link>https://hdl.handle.net/1721.1/33959</link>
<description>Finding the needles in the haystack: Generating legal test inputs for object-oriented programs
Artzi, Shay; Ernst, Michael D.; Kiezun, Adam; Pacheco, Carlos; Perkins, Jeff H.
A test input for an object-oriented program typically consists of a sequence of method calls that use the API defined by the program under test. Generating legal test inputs can be challenging because, for some programs, the set of legal method sequences is much smaller than the set of all possible sequences; without a formal specification of legal sequences, an input generator is bound to produce mostly illegal sequences.  We propose a scalable technique that combines dynamic analysis with random testing to help an input generator create legal test inputs without a formal specification, even for programs in which most sequences are illegal. The technique uses an example execution of the program to infer a model of legal call sequences, and uses the model to guide a random input generator towards legal but behaviorally-diverse sequences.  We have implemented our technique for Java, in a tool called Palulu, and evaluated its effectiveness in creating legal inputs for real programs. Our experimental results indicate that the technique is effective and scalable. Our preliminary evaluation indicates that the technique can quickly generate legal sequences for complex inputs: in a case study, Palulu created legal test inputs in seconds for a set of complex classes, for which it took an expert thirty minutes to generate a single legal input.
</description>
<pubDate>Thu, 31 Aug 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33959</guid>
<dc:date>2006-08-31T00:00:00Z</dc:date>
</item>
<item>
<title>Learning with Online Constraints: Shifting Concepts and Active Learning</title>
<link>https://hdl.handle.net/1721.1/33958</link>
<description>Learning with Online Constraints: Shifting Concepts and Active Learning
Monteleoni, Claire E.
Many practical problems such as forecasting, real-time decision making, streaming data applications, and resource-constrained learning can be modeled as learning with online constraints.  This thesis is concerned with analyzing and designing algorithms for learning under the following online constraints: 1) The algorithm has only sequential, or one-at-a-time, access to data.  2) The time and space complexity of the algorithm must not scale with the number of observations.  We analyze learning with online constraints in a variety of settings, including active learning.  The active learning model is applicable to any domain in which unlabeled data is easy to come by and there exists a (potentially difficult or expensive) mechanism by which to attain labels.  First, we analyze a supervised learning framework in which no statistical assumptions are made about the sequence of observations, and algorithms are evaluated based on their regret, i.e. their relative prediction loss with respect to the hindsight-optimal algorithm in a comparator class.  We derive a lower bound on regret for a class of online learning algorithms designed to track shifting concepts in this framework.  We apply an algorithm we provided in previous work, that avoids this lower bound, to an energy-management problem in wireless networks, and demonstrate this application in a network simulation.  Second, we analyze a supervised learning framework in which the observations are assumed to be iid, and algorithms are compared by the number of prediction mistakes made in reaching a target generalization error.
We provide a lower bound on mistakes for Perceptron, a standard online learning algorithm, for this framework.  We introduce a modification to Perceptron and show that it avoids this lower bound, and in fact attains the optimal mistake-complexity for this setting.  Third, we motivate and analyze an online active learning framework.  The observations are assumed to be iid, and algorithms are judged by the number of label queries to reach a target generalization error.  Our lower bound applies to the active learning setting as well, as a lower bound on labels for Perceptron paired with any active learning rule.  We provide a new online active learning algorithm that avoids the lower bound, and we upper bound its label-complexity.  The upper bound is optimal and also bounds the algorithm's total errors (labeled and unlabeled).  We analyze the algorithm further, yielding a label-complexity bound under relaxed assumptions.  Using optical character recognition data, we empirically compare the new algorithm to an online active learning algorithm with data-dependent performance guarantees, as well as to the combined variants of these two algorithms.
PhD thesis
</description>
<pubDate>Fri, 01 Sep 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33958</guid>
<dc:date>2006-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting the Risk and Trajectory of Intensive Care Patients Using Survival Models</title>
<link>https://hdl.handle.net/1721.1/33957</link>
<description>Predicting the Risk and Trajectory of Intensive Care Patients Using Survival Models
Hug, Caleb W.
Using artificial intelligence to assist physicians in patient care has received sustained interest over the past several decades.  Recently, with automated systems at most bedsides, the amount of patient information collected continues to increase, providing specific impetus for intelligent systems that can interpret this information.  In fact, the large set of sensors and test results, often measured repeatedly over long periods of time, makes it challenging for caregivers to quickly utilize all of the data for optimal patient treatment.  This research focuses on predicting the survival of ICU patients throughout their stay.  Unlike traditional static mortality models, this survival prediction is explored as an indicator of patient state and trajectory.  Using survival analysis techniques and machine learning, models are constructed that predict individual patient survival probabilities at fixed intervals in the future.  These models seek to help physicians interpret the large amount of data available in order to provide optimal patient care.  We find that the survival predictions from our models are comparable to survival predictions using the SAPS score, but are available throughout the patient's ICU course instead of only at 24 hours after admission.  Additionally, we demonstrate effective prediction of patient mortality over fixed windows in the future.
SM thesis
</description>
<pubDate>Wed, 30 Aug 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33957</guid>
<dc:date>2006-08-30T00:00:00Z</dc:date>
</item>
<item>
<title>Anthills Built to Order: Automating Construction with Artificial Swarms</title>
<link>https://hdl.handle.net/1721.1/33791</link>
<description>Anthills Built to Order: Automating Construction with Artificial Swarms
Werfel, Justin
Social insects build large, complex structures, which emerge through the collective actions of many simple agents acting with no centralized control or preplanning.  These natural systems motivate investigating the use of artificial swarms to automate construction or fabrication.  The goal is to be able to take an unspecified number of simple robots and a supply of building material, give the system a high-level specification for any arbitrary structure desired, and have a guarantee that it will produce that structure without further intervention.  In this thesis I describe such a distributed system for automating construction, in which autonomous mobile robots collectively build user-specified structures from square building blocks.  The approach preserves many desirable features of the natural systems, such as considerable parallelism and robustness to factors like robot loss and variable order or timing of actions.  Further, unlike insect colonies, it can build particular desired structures according to a high-level design provided by the user.  Robots in this system act without explicit communication or cooperation, instead using the partially completed structure to coordinate their actions.  This mechanism is analogous to that of stigmergy used by social insects, in which insects take actions that affect the environment, and the environmental state influences further actions.  I introduce a framework of "extended stigmergy" in which building blocks are allowed to store, process or communicate information.  Increasing the capabilities of the building material (rather than of the robots) in this way increases the availability of nonlocal structure information.  Benefits include significant improvements in construction speed and in ability to take advantage of the parallelism of the swarm.  This dissertation describes system design and control rules for decentralized teams of robots that provably build arbitrary solid structures in two dimensions.
I present a hardware prototype, and discuss extensions to more general structures, including those built with multiple block types and in three dimensions.
PhD thesis
</description>
<pubDate>Fri, 12 May 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33791</guid>
<dc:date>2006-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient Network Coding In the Presence of Byzantine Adversaries</title>
<link>https://hdl.handle.net/1721.1/33790</link>
<description>Resilient Network Coding In the Presence of Byzantine Adversaries
Jaggi, Sidharth; Langberg, Michael; Katti, Sachin; Ho, Tracy; Katabi, Dina; Medard, Muriel
Network coding substantially increases network throughput. But since it involves mixing of information inside the network, a single corrupted packet generated by a malicious node can end up contaminating all the information reaching a destination, preventing decoding. This paper introduces the first distributed polynomial-time rate-optimal network codes that work in the presence of Byzantine nodes. We present algorithms that target adversaries with different attacking capabilities. When the adversary can eavesdrop on all links and jam Z links, our first algorithm achieves a rate of C-2Z, where C is the network capacity. In contrast, when the adversary has limited snooping capabilities, we provide algorithms that achieve the higher rate of C-Z.
</description>
<pubDate>Sat, 05 Aug 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33790</guid>
<dc:date>2006-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Human Document Classification Using Bags of Words</title>
<link>https://hdl.handle.net/1721.1/33789</link>
<description>Human Document Classification Using Bags of Words
Wolf, Florian; Poggio, Tomaso; Sinha, Pawan
Humans are remarkably adept at classifying text documents into categories.  For instance, while reading a news story, we are rapidly able to assess whether it belongs to the domain of finance, politics or sports.  Automating this task would have applications for content-based search or filtering of digital documents.  To this end, it is interesting to investigate the nature of information humans use to classify documents.  Here we report experimental results suggesting that this information might, in fact, be quite simple.  Using a paradigm of progressive revealing, we determined classification performance as a function of number of words.  We found that subjects are able to achieve similar classification accuracy with or without syntactic information across a range of passage sizes.  These results have implications for models of human text-understanding and also allow us to estimate what level of performance we can expect, in principle, from a system without requiring a prior step of complex natural language processing.
</description>
<pubDate>Wed, 09 Aug 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33789</guid>
<dc:date>2006-08-09T00:00:00Z</dc:date>
</item>
<item>
<title>Dealers, Insiders and Bandits: Learning and its Effects on Market Outcomes</title>
<link>https://hdl.handle.net/1721.1/33235</link>
<description>Dealers, Insiders and Bandits: Learning and its Effects on Market Outcomes
Das, Sanmay
This thesis seeks to contribute to the understanding of markets populated by boundedly rational agents who learn from experience. Bounded rationality and learning have both been the focus of much research in computer science, economics and finance theory. However, we are at a critical stage in defining the direction of future research in these areas. It is now clear that realistic learning problems faced by agents in market environments are often too hard to solve in a classically rational fashion. At the same time, the greatly increased computational power available today allows us to develop and analyze richer market models and to evaluate different learning procedures and algorithms within these models. The danger is that the ease with which complex markets can be simulated could lead to a plethora of models that attempt to explain every known fact about different markets. The first two chapters of this thesis define a principled approach to studying learning in rich models of market environments, and the rest of the thesis provides a proof of concept by demonstrating the applicability of this approach in modeling settings drawn from two different broad domains, financial market microstructure and search theory. In the domain of market microstructure, this thesis extends two important models from the theoretical finance literature. The third chapter introduces an algorithm for setting prices in dealer markets based on the model of Glosten and Milgrom (1985), and produces predictions about the behavior of prices in securities markets. In some cases, these results confirm economic intuitions in a significantly more complex setting (like the existence of a local profit maximum for a monopolistic market-maker) and in others they can be used to provide quantitative guesses for variables such as rates of convergence to efficient market conditions following price jumps that provide insider information. 
The fourth chapter studies the problem faced by a trader with insider information in Kyle's (1985) model. I show how the insider trading problem can be usefully analyzed from the perspective of reinforcement learning when some important market parameters are unknown, and that the equilibrium behavior of an insider who knows these parameters can be learned by one who does not, but also that the time scale of convergence to the equilibrium behavior may be impractical, and agents with limited time horizons may be better off using approximate algorithms that do not converge to equilibrium behavior. The fifth and sixth chapters relate to search problems. Chapter 5 introduces models for a class of problems in which there is a search "season" prior to hiring or matching, like academic job markets. It solves for expected values in many cases, and studies the difference between a "high information" process where applicants are immediately told when they have been rejected and a "low information" process where employers do not send any signal when they reject an applicant. The most important intuition to emerge from the results is that the relative benefit of the high information process is much greater when applicants do not know their own "attractiveness," which implies that search markets might be able to eliminate inefficiencies effectively by providing good information, and we do not always have to think about redesigning markets as a whole. Chapter 6 studies two-sided search explicitly and introduces a new class of multi-agent learning problems, two-sided bandit problems, that capture the learning and decision problems of agents in matching markets in which agents must learn their preferences. It also empirically studies outcomes under different periodwise matching mechanisms and shows that some basic intuitions about the asymptotic stability of matchings are preserved in the model. 
For example, when agents are matched in each period using the Gale-Shapley algorithm, asymptotic outcomes are always stable, while a matching mechanism that induces a stopping problem for some agents leads to the lowest probabilities of stability. By contributing to the state of the art in modeling different domains using computational techniques, this thesis demonstrates the success of the approach to modeling complex economic and social systems that is prescribed in the first two chapters.
PhD thesis
</description>
<pubDate>Wed, 12 Jul 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33235</guid>
<dc:date>2006-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>Iterative Collaborative Ranking of Customers and Providers</title>
<link>https://hdl.handle.net/1721.1/33234</link>
<description>Iterative Collaborative Ranking of  Customers and Providers
Teow, Loo Nin; Katabi, Dina
This paper introduces a new application: predicting the Internet provider-customer market. We cast the problem in the collaborative filtering framework, where we use current and past customer-provider relationships to compute for each Internet customer a ranking of potential future service providers. Furthermore, for each Internet service provider (ISP), we rank potential future customers. We develop a novel iterative ranking algorithm that draws inspiration from several sources, including collaborative filtering, webpage ranking, and kernel methods. Further analysis of our algorithm shows that it can be formulated in terms of an affine eigenvalue problem. Experiments on the actual Internet customer-provider data show promising results.
</description>
<pubDate>Tue, 04 Jul 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33234</guid>
<dc:date>2006-07-04T00:00:00Z</dc:date>
</item>
<item>
<title>MORE: A Network Coding Approach to Opportunistic Routing</title>
<link>https://hdl.handle.net/1721.1/33230</link>
<description>MORE: A Network Coding Approach to Opportunistic Routing
Chachulski, Szymon; Jennings, Michael; Katti, Sachin; Katabi, Dina
Opportunistic routing has the potential to substantially increase wireless network throughput. Prior work on opportunistic routing, however, requires tight node coordination. Different nodes in a network must have knowledge of which packets other nodes have received. Furthermore, the nodes have to agree on which nodes should transmit which packets. Such coordination becomes fragile in dense or large networks.  This paper introduces MORE, a new opportunistic routing protocol that avoids node coordination. Our design is rooted in the theory of network coding.  Routers code packets going to the same destination and forward the coded versions. The destination decodes and recovers the original packets. This approach needs no coordination and provably maximizes network throughput. We have implemented our design and evaluated it in a 25-node testbed. Our results show that MORE provides an average throughput increase of 60% and a maximum of 10-fold, demonstrating that the theoretical gains promised by network coding are realizable in practice.
</description>
<pubDate>Fri, 30 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33230</guid>
<dc:date>2006-06-30T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Execution of Bipedal Walking Tasks From Biomechanical Principles</title>
<link>https://hdl.handle.net/1721.1/33229</link>
<description>Robust Execution of Bipedal Walking Tasks From Biomechanical Principles
Hofmann, Andreas
Effective use of robots in unstructured environments requires that they have sufficient autonomy and agility to execute task-level commands successfully.  A challenging example of such a robot is a bipedal walking machine.  Such a robot should be able to walk to a particular location within a particular time, while observing foot placement constraints, and avoiding a fall, if this is physically possible.  Although stable walking machines have been built, the problem of task-level control, where the tasks have stringent state-space and temporal requirements, and where significant disturbances may occur, has not been studied extensively.  This thesis addresses this problem through three objectives.  The first is to devise a plan specification where task requirements are expressed in a qualitative form that provides for execution flexibility.  The second is to develop a task-level executive that accepts such a plan, and outputs a sequence of control actions that result in successful plan execution.  The third is to provide this executive with disturbance handling ability.  Development of such an executive is challenging because the biped is highly nonlinear and has limited actuation due to its limited base of support.  We address these challenges with three key innovations.  To address the nonlinearity, we develop a dynamic virtual model controller to linearize the biped, and thus, provide an abstracted biped that is easier to control.  The controller is model-based, but uses a sliding control technique to compensate for model inaccuracy.  To address the under-actuation, our system generates flow tubes, which define valid operating regions in the abstracted biped.  The flow tubes represent sets of state trajectories that take into account dynamic limitations due to under-actuation, and also satisfy plan requirements.
The executive keeps trajectories in the flow tubes by adjusting a small number of control parameters for key state variables in the abstracted biped, such as center of mass.  Additionally, our system uses a novel strategy that employs angular momentum to enhance translational controllability of the system's center of mass.  We evaluate our approach using a high-fidelity biped simulation.  Tests include walking with foot-placement constraints, kicking a soccer ball, and disturbance recovery.
PhD thesis
</description>
<pubDate>Fri, 28 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33229</guid>
<dc:date>2006-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Was the Patient Cured? Understanding Semantic Categories and Their Relationships in Patient Records</title>
<link>https://hdl.handle.net/1721.1/33223</link>
<description>Was the Patient Cured? Understanding Semantic Categories and Their Relationships in Patient Records
Sibanda, Tawanda Carleton
In this thesis, we detail an approach to extracting key information in medical discharge summaries. Starting with a narrative patient report, we first identify and remove information that compromises privacy (de-identification); next we recognize words and phrases in the text belonging to semantic categories of interest to doctors (semantic category recognition).  For disease and symptoms, we determine whether the problem is present, absent, uncertain, or associated with somebody else (assertion classification). Finally, we classify the semantic relationships existing between our categories (semantic relationship classification).  Our approach utilizes a series of statistical models that rely heavily on local lexical and syntactic context, and achieve competitive results compared to more complex NLP solutions. We conclude the thesis by presenting the design for the Category and Relationship Extractor (CaRE). CaRE combines our solutions to de-identification, semantic category recognition, assertion classification, and semantic relationship classification into a single application that facilitates the easy extraction of semantic information from medical text.
MEng thesis
</description>
<pubDate>Wed, 28 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33223</guid>
<dc:date>2006-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>Using Task-Structured Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol</title>
<link>https://hdl.handle.net/1721.1/33217</link>
<description>Using Task-Structured Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol
Canetti, Ran; Cheung, Ling; Kaynar, Dilsun; Liskov, Moses; Lynch, Nancy; Pereira, Olivier; Segala, Roberto
The Probabilistic I/O Automata framework of Lynch, Segala and Vaandrager provides tools for precisely specifying protocols and reasoning about their correctness using multiple levels of abstraction, based on implementation relationships between these levels. We enhance this framework to allow analyzing protocols that use cryptographic primitives. This requires resolving and reconciling issues such as nondeterministic behavior and scheduling, randomness, resource-bounded computation, and computational hardness assumptions.  The enhanced framework allows for more rigorous and systematic analysis of cryptographic protocols. To demonstrate the use of this framework, we present an example analysis that we have done for an Oblivious Transfer protocol.
</description>
<pubDate>Tue, 20 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33217</guid>
<dc:date>2006-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Using Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol</title>
<link>https://hdl.handle.net/1721.1/33154</link>
<description>Using Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol
Canetti, Ran; Cheung, Ling; Kaynar, Dilsun; Liskov, Moses; Lynch, Nancy; Pereira, Olivier; Segala, Roberto
We demonstrate how to carry out cryptographic security analysis of distributed protocols within the Probabilistic I/O Automata framework of Lynch, Segala, and Vaandrager. This framework provides tools for arguing rigorously about the concurrency and scheduling aspects of protocols, and about protocols presented at different levels of abstraction. Consequently, it can help in making cryptographic analysis more precise and less susceptible to errors.  We concentrate on a relatively simple two-party Oblivious Transfer protocol, in the presence of a semi-honest adversary (essentially, an eavesdropper). For the underlying cryptographic notion of security, we use a version of Canetti's Universally Composable security.  In spite of the relative simplicity of the example, the exercise is quite nontrivial. It requires taking many fundamental issues into account, including nondeterministic behavior, scheduling, resource-bounded computation, and computational hardness assumptions for cryptographic primitives.
</description>
<pubDate>Mon, 19 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33154</guid>
<dc:date>2006-06-19T00:00:00Z</dc:date>
</item>
<item>
<title>Approximate Correspondences in High Dimensions</title>
<link>https://hdl.handle.net/1721.1/33002</link>
<description>Approximate Correspondences in High Dimensions
Grauman, Kristen; Darrell, Trevor
Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors.  The matching similarity is computed in linear time and forms a Mercer kernel.  We also show how the matching itself (a correspondence field) may be extracted for a small increase in computational cost. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel.
</description>
<pubDate>Thu, 15 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33002</guid>
<dc:date>2006-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>New Techniques for Geographic Routing</title>
<link>https://hdl.handle.net/1721.1/33000</link>
<description>New Techniques for Geographic Routing
Leong, Ben
As wireless sensor networks continue to grow in size, we are faced with the prospect of emerging wireless networks with hundreds or thousands of nodes. Geographic routing algorithms are a promising alternative to traditional ad hoc routing algorithms in this new domain for point-to-point routing, but deployments of such algorithms are currently uncommon because of some practical difficulties.  This dissertation explores techniques that address two major issues in the deployment of geographic routing algorithms: (i) the costs associated with distributed planarization and (ii) the unavailability of location information.  We present and evaluate two new algorithms for geographic routing: Greedy Distributed Spanning Tree Routing (GDSTR) and Greedy Embedding Spring Coordinates (GSpring).  Unlike previous geographic routing algorithms which require the planarization of the network connectivity graph, GDSTR switches to routing on a spanning tree instead of a planar graph when packets end up at dead ends during greedy forwarding. To choose a direction on the tree that is most likely to make progress towards the destination, each GDSTR node maintains a summary of the area covered by the subtree below each of its tree neighbors using convex hulls. This distributed data structure is called a hull tree. GDSTR not only requires an order of magnitude less bandwidth to maintain these hull trees than CLDP, the only distributed planarization algorithm that is known to work with practical radio networks, it often achieves better routing performance than previous planarization-based geographic routing algorithms.  GSpring is a new virtual coordinate assignment algorithm that derives good coordinates for geographic routing when location information is not available. Starting from a set of initial coordinates for a set of elected perimeter nodes, GSpring uses a modified spring relaxation algorithm to incrementally adjust virtual coordinates to increase the convexity of voids in the virtual routing topology. 
This reduces the probability that packets will end up in dead ends during greedy forwarding, and improves the routing performance of existing geographic routing algorithms.  The coordinates derived by GSpring yield comparable routing performance to that for actual physical coordinates and significantly better performance than that for NoGeo, the best existing algorithm for deriving virtual coordinates for geographic routing. Furthermore, GSpring is the first known algorithm that is able to derive coordinates that achieve better geographic routing performance than actual physical coordinates for networks with obstacles.
PhD thesis
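The hull-tree idea above admits a compact sketch: each tree node summarizes its subtree's coverage as a 2-D convex hull, and a forwarding node checks whether the destination could lie under a neighbor's subtree. This is our own minimal illustration (function names and the point-in-hull test are ours, not GDSTR's implementation):

```python
# Minimal hull-tree primitives: convex hull (Andrew's monotone chain)
# and a point-in-convex-hull test. Illustrative only.

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise hull

def contains(hull, p):
    # p lies in a counter-clockwise convex hull iff it is left of (or on) every edge
    n = len(hull)
    if n < 3:
        return p in hull
    return all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))
```

A node that summarizes its subtree with `convex_hull` would forward toward a neighbor whose hull `contains` the destination; a false positive costs only extra exploration, never correctness.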
</description>
<pubDate>Wed, 14 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/33000</guid>
<dc:date>2006-06-14T00:00:00Z</dc:date>
</item>
<item>
<title>Schematic Querying of Large Tracking Databases</title>
<link>https://hdl.handle.net/1721.1/32999</link>
<description>Schematic Querying of Large Tracking Databases
Dalley, Gerald; Izo, Tomas
In dealing with long-term tracking databases with wide-area coverage, an important problem is in formulating an intuitive and fast query system for analysis. In such a query system, a user who is not a computer vision researcher should be able to readily specify a novel query to the system and obtain the desired results. Furthermore, these queries should be able to not only search out individual actors (e.g. "find all white cars") but also find interactions amongst multiple actors (e.g. "find all drag racing activities in the city"). Informally, we have found that people often use sketches when describing activities and interactions. In this paper, we demonstrate a preliminary system that automatically interprets schematic drawings of activities. The system transforms the schematics into executable code that searches a tracking database. Through our query optimization, these queries tend to take orders of magnitude less time to execute than equivalent queries running on a partially-optimized SQL database.
</description>
<pubDate>Mon, 12 Jun 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32999</guid>
<dc:date>2006-06-12T00:00:00Z</dc:date>
</item>
<item>
<title>Infrastructure for Engineered Emergence on Sensor/Actuator Networks</title>
<link>https://hdl.handle.net/1721.1/32988</link>
<description>Infrastructure for Engineered Emergence on Sensor/Actuator Networks
Beal, Jacob; Bachrach, Jonathan
The ability to control emergent phenomena depends on decomposing them into aspects susceptible to independent engineering. For spatial self-managing systems, the amorphous-medium abstraction lets you separate the system's specification from its implementation.
</description>
<pubDate>Wed, 01 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32988</guid>
<dc:date>2006-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>CogSci to AI: It's the Brainware, Stupid!</title>
<link>https://hdl.handle.net/1721.1/32987</link>
<description>CogSci to AI: It's the Brainware, Stupid!
Beal, Jacob; Sussman, Gerald
Current modularization techniques fail when applied to hard AI problems. But cognitive science shows that the mind has modules specialized for particular functions. Unlike current engineered modules, the modules of the mind learn to communicate with each other as a child matures. Kirby's ideas on language evolution, combined with constraints derived from neuroanatomy, yield a new mechanism for integrating modules into a system: a communications bootstrapping system in which two agents build a shared vocabulary capturing information common to their mutual experience, including cross-module knowledge about the world.
</description>
<pubDate>Wed, 01 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32987</guid>
<dc:date>2006-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Amorphous Medium Language</title>
<link>https://hdl.handle.net/1721.1/32986</link>
<description>Amorphous Medium Language
Beal, Jacob
Programming reliable behavior on a large mesh network composed of unreliable parts is difficult. Amorphous Medium Language addresses this problem by abstracting robustness and networking issues away from the programmer via a language of geometric primitives and homeostasis maintenance. AML is designed to operate on a high-diameter network composed of thousands to billions of nodes, and does not assume coordinate, naming, or routing services. Computational processes are distributed through geometric regions of the space approximated by the network and specify behavior in terms of homeostasis conditions and actions to be taken when homeostasis is violated. AML programs are compiled for local execution using previously developed amorphous computing primitives, which provide robustness against ongoing failures and joins and localize the impact of changes in topology. I show some examples of how AML allows complex robust behavior to be expressed in simple programs, and some preliminary results from simulation.
</description>
<pubDate>Fri, 01 Jul 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32986</guid>
<dc:date>2005-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming an Amorphous Computational Medium</title>
<link>https://hdl.handle.net/1721.1/32985</link>
<description>Programming an Amorphous Computational Medium
Beal, Jacob
Amorphous computing considers the problem of controlling millions of spatially distributed unreliable devices which communicate only with nearby neighbors. To program such a system, we need a high-level description language for desired global behaviors, and a system to compile such descriptions into locally executing code which robustly creates and maintains the desired global behavior. I survey existing amorphous computing primitives and give desiderata for a language describing computation on an amorphous computer. I then bring these together in Amorphous Medium Language, which computes on an amorphous computer as though it were a space-filling computational medium.
</description>
<pubDate>Wed, 01 Sep 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32985</guid>
<dc:date>2004-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>What the Assassin's Guild Taught Me About Distributed Computing</title>
<link>https://hdl.handle.net/1721.1/32984</link>
<description>What the Assassin's Guild Taught Me About Distributed Computing
Beal, Jacob
Distributed computing and live-action roleplaying share many of the same fundamental problems, as live-action roleplaying games commonly include simulations carried out by their players. Games run by the MIT Assassin's Guild are particularly illustrative of distributed computing issues due to their large scope and high complexity. I discuss three distributed computing issues addressed by Assassin's Guild game design---information hiding, error correction, and liveness/consistency tradeoffs---and the relevance of the solutions used by game writers to current problems in distributed computing.
</description>
<pubDate>Sat, 27 May 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32984</guid>
<dc:date>2006-05-27T00:00:00Z</dc:date>
</item>
<item>
<title>First Class Copy &amp; Paste</title>
<link>https://hdl.handle.net/1721.1/32980</link>
<description>First Class Copy &amp; Paste
Edwards, Jonathan
The Subtext project seeks to make programming fundamentally easier by altering the nature of programming languages and tools. This paper defines an operational semantics for an essential subset of the Subtext language. It also presents a fresh approach to the problems of mutable state, I/O, and concurrency. Inclusions reify copy &amp; paste edits into persistent relationships that propagate changes from their source into their destination. Inclusions formulate a programming language in which there is no distinction between a program's representation and its execution. Like spreadsheets, programs are live executions within a persistent runtime, and programming is direct manipulation of these executions via a graphical user interface. There is no need to encode programs into source text. Mutation of state is effected by the computation of hypothetical recursive variants of the state, which can then be lifted into new versions of the state. Transactional concurrency is based upon queued single-threaded execution. Speculative execution of queued hypotheticals provides concurrency as a semantically transparent implementation optimization.
</description>
<pubDate>Mon, 22 May 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32980</guid>
<dc:date>2006-05-22T00:00:00Z</dc:date>
</item>
<item>
<title>Learning using the Born Rule</title>
<link>https://hdl.handle.net/1721.1/32978</link>
<description>Learning using the Born Rule
Wolf, Lior
In Quantum Mechanics the transition from a deterministic description to a probabilistic one is done using a simple rule termed the Born rule. This rule states that the probability of an outcome ($a$) given a state ($\Psi$) is the square of their inner product ($(a^\top\Psi)^2$). In this paper, we unravel a new probabilistic justification for popular algebraic algorithms, based on the Born rule. These algorithms include two-class and multiple-class spectral clustering, and algorithms based on Euclidean distances.
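As a concrete reading of the rule above, the squared inner products can be turned into a normalized distribution over a finite set of outcomes; this is our own toy sketch (names ours), not the paper's code:

```python
def born_probabilities(outcomes, psi):
    # Born rule: P(a | psi) is proportional to the squared inner product (a^T psi)^2.
    # outcomes: list of outcome vectors a; psi: state vector Psi (plain Python lists).
    raw = [sum(ai * pi for ai, pi in zip(a, psi)) ** 2 for a in outcomes]
    total = sum(raw)
    return [r / total for r in raw]  # normalize over the listed outcomes
```

For an orthonormal set of outcomes and a unit-norm state, the squared inner products already sum to one, so the final normalization is a no-op.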
</description>
<pubDate>Tue, 16 May 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32978</guid>
<dc:date>2006-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>A Machine-Checked Safety Proof for a CISC-Compatible SFI Technique</title>
<link>https://hdl.handle.net/1721.1/32546</link>
<description>A Machine-Checked Safety Proof for a CISC-Compatible SFI Technique
McCamant, Stephen
Executing untrusted code while preserving security requires that the code be prevented from modifying memory or executing instructions except as explicitly allowed.  Software-based fault isolation (SFI) or "sandboxing" enforces such a policy by rewriting code at the instruction level.  In previous work, we developed a new SFI technique that is applicable to CISC architectures such as the Intel IA-32, based on enforcing additional alignment constraints to avoid difficulties with variable-length instructions.  This report describes a machine-checked proof we developed to increase our confidence in the safety provided by the technique.  The proof, constructed for a simplified model of the technique using the ACL2 theorem proving environment, certifies that if the code rewriting has been checked to have been performed correctly, the resulting program cannot perform a dangerous operation when run.  We describe the high-level structure of the proof, then give the intermediate lemmas with interspersed commentary, and finally evaluate the process of the proof's construction.
</description>
<pubDate>Thu, 11 May 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32546</guid>
<dc:date>2006-05-11T00:00:00Z</dc:date>
</item>
<item>
<title>Learning a Dictionary of Shape-Components in Visual Cortex: Comparison with Neurons, Humans and Machines</title>
<link>https://hdl.handle.net/1721.1/32544</link>
<description>Learning a Dictionary of Shape-Components in Visual Cortex: Comparison with Neurons, Humans and Machines
Serre, Thomas
In this thesis, I describe a quantitative model that accounts for the circuits and computations of the feedforward path of the ventral stream of visual cortex.  This model is consistent with a general theory of visual processing that extends the hierarchical model of (Hubel &amp; Wiesel, 1959) from primary to extrastriate visual areas. It attempts to explain the first few hundred milliseconds of visual processing and "immediate recognition". One of the key elements in the approach is the learning of a generic dictionary of shape-components from V2 to IT, which provides an invariant representation to task-specific categorization circuits in higher brain areas. This vocabulary of shape-tuned units is learned in an unsupervised manner from natural images, and constitutes a large and redundant set of image features with different complexities and invariances.  This theory significantly extends an earlier approach by (Riesenhuber &amp; Poggio, 1999) and builds upon several existing neurobiological models and conceptual proposals. First, I present evidence to show that the model can duplicate the tuning properties of neurons in various brain areas (e.g., V1, V4 and IT). In particular, the model agrees with data from V4 about the response of neurons to combinations of simple two-bar stimuli (Reynolds et al., 1999) (within the receptive field of the S2 units), and some of the C2 units in the model show a tuning for boundary conformations which is consistent with recordings from V4 (Pasupathy &amp; Connor, 2001). Second, I show that not only can the model duplicate the tuning properties of neurons in various brain areas when probed with artificial stimuli, but it can also handle the recognition of objects in the real world, to the extent of competing with the best computer vision systems. Third, I describe a comparison between the performance of the model and the performance of human observers in a rapid animal vs. non-animal recognition task for which recognition is fast and cortical back-projections are likely to be inactive.  Results indicate that the model predicts human performance extremely well when the delay between the stimulus and the mask is about 50 ms.  This suggests that cortical back-projections may not play a significant role when the time interval is in this range, and the model may therefore provide a satisfactory description of the feedforward path. Taken together, the evidence suggests that we may have the skeleton of a successful theory of visual cortex.  In addition, this may be the first time that a neurobiological model, faithful to the physiology and the anatomy of visual cortex, not only competes with some of the best computer vision systems, thus providing a realistic alternative to engineered artificial vision systems, but also achieves performance close to that of humans in a categorization task involving complex natural images.
PhD thesis
</description>
<pubDate>Tue, 25 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32544</guid>
<dc:date>2006-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Abstraction Layers for Scalable Microfluidic Biocomputers (Extended Version)</title>
<link>https://hdl.handle.net/1721.1/32543</link>
<description>Abstraction Layers for Scalable Microfluidic Biocomputers (Extended Version)
Thies, William; Urbanski, John Paul; Thorsen, Todd; Amarasinghe, Saman
Microfluidic devices are emerging as an attractive technology for automatically orchestrating the reactions needed in a biological computer.  Thousands of microfluidic primitives have already been integrated on a single chip, and recent trends indicate that the hardware complexity is increasing at rates comparable to Moore's Law.  As in the case of silicon, it will be critical to develop abstraction layers--such as programming languages and Instruction Set Architectures (ISAs)--that decouple software development from changes in the underlying device technology. Towards this end, this paper presents BioStream, a portable language for describing biology protocols, and the Fluidic ISA, a stable interface for microfluidic chip designers.  A novel algorithm translates microfluidic mixing operations from the BioStream layer to the Fluidic ISA.  To demonstrate the benefits of these abstraction layers, we build two microfluidic chips that can both execute BioStream code despite significant differences at the device level.  We consider this to be an important step towards building scalable biocomputers.
</description>
<pubDate>Fri, 05 May 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32543</guid>
<dc:date>2006-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Supplement to "Distributed Quota Enforcement for Spam Control"</title>
<link>https://hdl.handle.net/1721.1/32542</link>
<description>Supplement to "Distributed Quota Enforcement for Spam Control"
Walfish, Michael; Zamfirescu, J.D.; Balakrishnan, Hari; Karger, David; Shenker, Scott
This report is a supplement to our paper "Distributed Quota Enforcement for Spam Control" (NSDI 2006). We assume here that the reader has read the main paper. In this report, we first analyze the enforcer nodes' key-value maps and then analyze two of the experiments from the main paper.
</description>
<pubDate>Sat, 29 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32542</guid>
<dc:date>2006-04-29T00:00:00Z</dc:date>
</item>
<item>
<title>A Combined Stochastic and Greedy Hybrid Estimation Capability for Concurrent Hybrid Models with Autonomous Mode Transitions</title>
<link>https://hdl.handle.net/1721.1/32539</link>
<description>A Combined Stochastic and Greedy Hybrid Estimation Capability for Concurrent Hybrid Models with Autonomous Mode Transitions
Blackmore, Lars; Funiak, Stanislav; Williams, Brian
Robotic and embedded systems have become increasingly pervasive in applications ranging from space probes and life support systems to robot assistants. In order to act robustly in the physical world, robotic systems must be able to detect changes in operational mode, such as faults, whose symptoms manifest themselves only in the continuous state. In such systems, the state is observed indirectly, and must therefore be estimated in a robust, memory-efficient manner from noisy observations. Probabilistic hybrid discrete/continuous models, such as Concurrent Probabilistic Hybrid Automata (CPHA), are convenient modeling tools for such systems. In CPHA, the hidden state is represented with discrete and continuous state variables that evolve probabilistically. In this paper, we present a novel method for estimating the hybrid state of CPHA that achieves robustness by balancing greedy and stochastic search. The key insight is that stochastic and greedy search methods, taken together, are often particularly effective in practice. To accomplish this, we first develop an efficient stochastic sampling approach for CPHA based on Rao-Blackwellised Particle Filtering. We then propose a strategy for mixing stochastic and greedy search. The resulting method is able to handle three particularly challenging aspects of real-world systems, namely that they 1) exhibit autonomous mode transitions, 2) consist of a large collection of concurrently operating components, and 3) are non-linear. Autonomous mode transitions, that is, discrete transitions that depend on the continuous state, are particularly challenging to address, since they couple the discrete and continuous state evolution tightly.
In this paper we extend the class of autonomous mode transitions that can be handled to arbitrary piecewise polynomial transition distributions. We perform an empirical comparison of the greedy and stochastic approaches to hybrid estimation, and then demonstrate the robustness of the mixed method incorporated with our HME (Hybrid Mode Estimation) capability. We show that this robustness comes at only a small performance penalty.
</description>
<pubDate>Fri, 28 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32539</guid>
<dc:date>2006-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>A Probabilistic Particle Control Approach to Optimal, Robust Predictive Control</title>
<link>https://hdl.handle.net/1721.1/32538</link>
<description>A Probabilistic Particle Control Approach to Optimal, Robust Predictive Control
Blackmore, Lars
Autonomous vehicles need to be able to plan trajectories to a specified goal that avoid obstacles, and are robust to the inherent uncertainty in the problem. This uncertainty arises due to uncertain state estimation, disturbances and modeling errors. Previous solutions to the robust path planning problem used a finite horizon optimal stochastic control approach. This approach finds the optimal path subject to chance constraints, which ensure that the probability of collision with obstacles is below a given threshold. This approach is limited to problems where all uncertain distributions are Gaussian, and typically results in highly conservative plans. In many cases, however, the Gaussian assumption is invalid; for example in the case of localization, the belief state about a vehicle's position can consist of highly non-Gaussian, even multimodal, distributions. In this paper we present a novel method for finite horizon stochastic control of dynamic systems subject to chance constraints. The method approximates the distribution of the system state using a finite number of particles. By expressing these particles in terms of the control variables, we are able to approximate the original stochastic control problem as a deterministic one; furthermore, the approximation becomes exact as the number of particles tends to infinity. For a general class of chance-constrained problems with linear system dynamics, we show that the approximate problem can be solved using efficient Mixed-Integer Linear Programming techniques. We apply the new method to aircraft control in turbulence, and show simulation results that demonstrate the efficacy of the approach.
</description>
<pubDate>Fri, 28 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32538</guid>
<dc:date>2006-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Coordinating Agile Systems through the Model-based Execution of Temporal Plans</title>
<link>https://hdl.handle.net/1721.1/32537</link>
<description>Coordinating Agile Systems through the Model-based Execution of Temporal Plans
Leaute, Thomas
Agile autonomous systems are emerging, such as unmanned aerial vehicles (UAVs), that must robustly perform tightly coordinated time-critical missions; for example, military surveillance or search-and-rescue scenarios. In the space domain, execution of temporally flexible plans has provided an enabler for achieving the desired coordination and robustness, in the context of space probes and planetary rovers, modeled as discrete systems. We address the challenge of extending plan execution to systems with continuous dynamics, such as air vehicles and robot manipulators, that are controlled indirectly through the setting of continuous state variables. Systems with continuous dynamics are more challenging than discrete systems, because they require continuous, low-level control, and cannot be controlled by issuing simple sequences of discrete commands. Hence, manually controlling these systems (or plants) at a low level can become very costly, in terms of the number of human operators necessary to operate the plant. For example, in the case of a fleet of UAVs performing a search-and-rescue scenario, the traditional approach to controlling the UAVs involves providing series of close waypoints for each aircraft, which incurs a high workload for the human operators when the fleet consists of a large number of vehicles. Our solution is a novel, model-based executive, called Sulu, that takes as input a qualitative state plan, specifying the desired evolution of the state of the system. This approach elevates the interaction between the human operator and the plant to a more abstract level, where the operator is able to "coach" the plant by qualitatively specifying the tasks, or activities, the plant must perform. These activities are described in a qualitative manner, because they specify regions in the plant's state space in which the plant must be at a certain point in time.
Time constraints are also described qualitatively, in the form of flexible temporal constraints between activities in the state plan. The design of low-level control inputs in order to meet this abstract goal specification is then delegated to the autonomous controller, hence decreasing the workload per human operator. This approach also provides robustness to the executive, by giving it room to adapt to disturbances and unforeseen events, while satisfying the qualitative constraints on the plant state, specified in the qualitative state plan. Sulu reasons on a model of the plant in order to dynamically generate near-optimal control sequences to fulfill the qualitative state plan. To achieve optimality and safety, Sulu plans into the future, framing the problem as a disjunctive linear programming problem. To achieve robustness to disturbances and maintain tractability, planning is folded within a receding horizon, continuous planning and execution framework. The key to performance is a problem reduction method based on constraint pruning. We benchmark performance using multi-UAV firefighting scenarios on a real-time, hardware-in-the-loop testbed.
SM thesis
</description>
<pubDate>Fri, 28 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32537</guid>
<dc:date>2006-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting and tracking multiple interacting objects without class-specific models</title>
<link>https://hdl.handle.net/1721.1/32536</link>
<description>Detecting and tracking multiple interacting objects without class-specific models
Bose, Biswajit; Wang, Xiaogang; Grimson, Eric
We propose a framework for detecting and tracking multiple interacting objects from a single, static, uncalibrated camera. The number of objects is variable and unknown, and object-class-specific models are not available. We use background subtraction results as measurements for object detection and tracking. Given these constraints, the main challenge is to associate pixel measurements with (possibly interacting) object targets. We first track clusters of pixels, and note when they merge or split. We then build an inference graph, representing relations between the tracked clusters. Using this graph and a generic object model based on spatial connectedness and coherent motion, we label the tracked clusters as whole objects, fragments of objects or groups of interacting objects. The outputs of our algorithm are entire tracks of objects, which may include corresponding tracks from groups of objects during interactions. Experimental results on multiple video sequences are shown.
</description>
<pubDate>Tue, 25 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32536</guid>
<dc:date>2006-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Of Malicious Motes and Suspicious Sensors</title>
<link>https://hdl.handle.net/1721.1/32534</link>
<description>Of Malicious Motes and Suspicious Sensors
Gilbert, Seth; Guerraoui, Rachid; Newport, Calvin
How much damage can a malicious tiny device cause in a single-hop wireless network?  Imagine two players, Alice and Bob, who want to exchange information.  Collin, a malicious adversary, wants to prevent them from communicating.  By broadcasting at the same time as Alice or Bob, Collin can destroy their messages or overwhelm them with his own malicious data.  Being a tiny device, however, Collin can only broadcast up to B times. Given that Alice and Bob do not know B, and cannot distinguish honest from malicious messages, how long can Collin prevent them from communicating?  We show the answer to be 2B + Theta(lg|V|) communication rounds, where V is the set of values that Alice and Bob may transmit.  We prove this result to be optimal by deriving an algorithm that matches our lower bound---even in the stronger case where Alice and Bob do not start the game at the same time. We then argue that this specific 3-player game captures the general extent to which a malicious adversary can disrupt coordination in a single-hop wireless network. We support this claim by deriving---via reduction from the 3-player game---round complexity lower bounds for several classical n-player problems: 2B + Theta(lg|V|) for reliable broadcast, 2B + Omega(lg(n/k)) for leader election among k contenders, and 2B + Omega(k*lg(|V|/k)) for static k-selection.  We then consider an extension of our adversary model that also includes up to t crash failures. We study binary consensus as the archetypal problem for this environment and show a bound of 2B + Theta(t) rounds. We conclude by providing tight, or nearly tight, upper bounds for all four problems.  The new upper and lower bounds in this paper represent the first such results for a wireless network in which the adversary has the ability to disrupt communication.
</description>
<pubDate>Wed, 19 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32534</guid>
<dc:date>2006-04-19T00:00:00Z</dc:date>
</item>
<item>
<title>Revisiting Internet Addressing: Back to the Future!</title>
<link>https://hdl.handle.net/1721.1/32532</link>
<description>Revisiting Internet Addressing: Back to the Future!
Vutukuru, Mythili; Feamster, Nick; Walfish, Michael; Balakrishnan, Hari; Shenker, Scott
IP prefixes undermine three goals of Internet routing: accurate reflection of network-layer reachability, secure routing messages, and effective traffic control. This paper presents Atomic IP (AIP), a simple change to Internet addressing (which in fact reverts to how addressing once worked), that allows Internet routing to achieve these goals.
</description>
<pubDate>Fri, 14 Apr 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32532</guid>
<dc:date>2006-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>The Symmetriad: A Journey of Discovery Through the Land of the Polychora</title>
<link>https://hdl.handle.net/1721.1/32531</link>
<description>The Symmetriad: A Journey of Discovery Through the Land of the Polychora
Radul, Alexey
I devised and implemented a method for constructing regular and semiregular geometric objects in n-dimensional Euclidean space. Given a finite reflection group (a Coxeter group) G, there is a standard way to give G a group action on n-space. Reflecting a point through this group action yields an object that exhibits the symmetries specified by G.  If the point is chosen well, the object is guaranteed to be regular or semiregular, and many interesting regular and semiregular objects arise this way.  By starting with the symmetry group, I can use the group structure both to simplify the actual graphics involved with displaying the object, and to illustrate various aspects of its structure.  For example, subgroups of the symmetry group (and their cosets) correspond to substructures of the object.  Conversely, by displaying such symmetric objects and their various substructures, I find that I can elucidate the structure of the symmetry group that gives rise to them. I have written The Symmetriad, the computer system whose name this document has inherited, and used it to explore 3- and 4-dimensional symmetric objects and their symmetry groups.  The 3-dimensional objects are already well understood, but they serve to illustrate the techniques used on the 4-dimensional objects and make them more comprehensible.  Four dimensions offers a treasure trove of intriguing structures, many of which have no ready 3D analogue.  These are what I will show you here.
MEng thesis
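The reflect-a-point construction can be illustrated in two dimensions, where two mirrors generate a dihedral symmetry group. This sketch is our own toy example (the mirror choices and all names are ours, not The Symmetriad's implementation):

```python
import math

def reflect(p, n):
    # Reflect point p across the line through the origin with unit normal n:
    # p' = p - 2 (p . n) n. Rounding merges floating-point duplicates.
    d = p[0] * n[0] + p[1] * n[1]
    return (round(p[0] - 2 * d * n[0], 9), round(p[1] - 2 * d * n[1], 9))

def orbit(seed, normals):
    # Close the set {seed} under all the given reflections.
    pts, frontier = {seed}, [seed]
    while frontier:
        p = frontier.pop()
        for n in normals:
            q = reflect(p, n)
            if q not in pts:
                pts.add(q)
                frontier.append(q)
    return pts

# Mirrors across the x-axis and the line y = x generate the order-8
# symmetry group of the square; a generic seed point has an orbit of 8 points.
s = math.sin(math.pi / 4)
square_symmetries = [(0.0, 1.0), (s, -s)]
```

Subgroups correspond to substructures, as the abstract notes: closing the orbit under only one of the two mirrors yields the 2-point orbit of that single reflection.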
</description>
<pubDate>Sat, 01 Jan 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32531</guid>
<dc:date>2005-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task-Structured Probabilistic I/O Automata</title>
<link>https://hdl.handle.net/1721.1/32525</link>
<description>Task-Structured Probabilistic I/O Automata
Canetti, Ran; Cheung, Ling; Kaynar, Dilsun; Liskov, Moses; Lynch, Nancy; Pereira, Olivier; Segala, Roberto
In the Probabilistic I/O Automata (PIOA) framework, nondeterministic choices are resolved using perfect-information schedulers, which are similar to history-dependent policies for Markov decision processes (MDPs). These schedulers are too powerful in the setting of security analysis, leading to unrealistic adversarial behaviors. Therefore, we introduce in this paper a novel mechanism of task partitions for PIOAs. This allows us to define partial-information adversaries in a systematic manner, namely, via sequences of tasks. The resulting task-PIOA framework comes with simple notions of external behavior and implementation, and supports simple compositionality results. A new type of simulation relation is defined and proven sound with respect to our notion of implementation. To illustrate the potential of this framework, we summarize our verification of an Oblivious Transfer protocol, where we combine formal and computational analyses. Finally, we present an extension with extra expressive power, using local schedulers of individual components.
</description>
<pubDate>Fri, 31 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/32525</guid>
<dc:date>2006-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>Maximum Entropy Correlated Equilibria</title>
<link>https://hdl.handle.net/1721.1/31339</link>
<description>Maximum Entropy Correlated Equilibria
Ortiz, Luis E.; Schapire, Robert E.; Kakade, Sham M.
We study maximum entropy correlated equilibria in (multi-player) games and provide two gradient-based algorithms that are guaranteed to converge to such equilibria. Although we do not provide convergence rates for these algorithms, they do have strong connections to other algorithms (such as iterative scaling) which are effective heuristics for tasks such as statistical estimation.
</description>
<pubDate>Mon, 20 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31339</guid>
<dc:date>2006-03-20T00:00:00Z</dc:date>
</item>
<item>
<title>Pyramid Match Kernels: Discriminative Classification with Sets of Image Features (version 2)</title>
<link>https://hdl.handle.net/1721.1/31338</link>
<description>Pyramid Match Kernels: Discriminative Classification with Sets of Image Features (version 2)
Grauman, Kristen; Darrell, Trevor
Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering.  Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences -- generally a computationally expensive task that becomes impractical for large set sizes.  We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space.  This ``pyramid match" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter.  We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels.  We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches.  (This tech report updates MIT-CSAIL-TR-2005-017 and the paper "The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features" which appeared in the proceedings of ICCV 2005.)
</description>
<pubDate>Sat, 18 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31338</guid>
<dc:date>2006-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>Computing action equivalences for planning under time-constraints</title>
<link>https://hdl.handle.net/1721.1/31337</link>
<description>Computing action equivalences for planning under time-constraints
Gardiol, Natalia H.; Kaelbling, Leslie Pack
In order for autonomous artificial decision-makers to solve realistic tasks, they need to deal with the dual problems of searching through large state and action spaces under time pressure. We study the problem of planning in domains with many objects. Structured representations of action can help provide guidance when the number of action choices and the size of the state space are large. We show how structured representations of action effects can help us partition the action space into a smaller set of approximate equivalence classes. Then, the pared-down action space can be used to identify a useful subset of the state space in which to search for a solution. As computational resources permit, we then allow ourselves to elaborate the original solution. This kind of analysis allows us to collapse the action space and permits faster planning in much larger domains than before.
</description>
<pubDate>Mon, 20 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31337</guid>
<dc:date>2006-03-20T00:00:00Z</dc:date>
</item>
<item>
<title>DNA Binding and Games</title>
<link>https://hdl.handle.net/1721.1/31311</link>
<description>DNA Binding and Games
Perez-Breva, Luis; Ortiz, Luis E.; Yeang, Chen-Hsiang, 1969-; Jaakkola, Tommi
We propose a game-theoretic approach to learn and predict coordinate binding of multiple DNA binding regulators. The framework implements resource-constrained allocation of proteins to local neighborhoods as well as to sites themselves, and explicates coordinate and competitive binding relations among proteins with affinity to the site or region. The focus of this paper is on the mathematical foundations of the new modeling approach. We demonstrate the approach in the context of the lambda-phage switch, a well-known biological subsystem, and provide simulation results that successfully illustrate the predictions that can be derived from the model with known structure and affinities. Subsequent work will elaborate on methods for learning the affinities and game structures from available binding data.
</description>
<pubDate>Mon, 06 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31311</guid>
<dc:date>2006-03-06T00:00:00Z</dc:date>
</item>
<item>
<title>Using Task-Structured Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol</title>
<link>https://hdl.handle.net/1721.1/31310</link>
<description>Using Task-Structured Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol
Canetti, Ran; Cheung, Ling; Kaynar, Dilsun; Liskov, Moses; Lynch, Nancy; Pereira, Olivier; Segala, Roberto
The Probabilistic I/O Automata framework of Lynch, Segala and Vaandrager provides tools for precisely specifying protocols and reasoning about their correctness using multiple levels of abstraction, based on implementation relationships between these levels. We enhance this framework to allow analyzing protocols that use cryptographic primitives. This requires resolving and reconciling issues such as nondeterministic behavior and scheduling, randomness, resource-bounded computation, and computational hardness assumptions. The enhanced framework allows for more rigorous and systematic analysis of cryptographic protocols. To demonstrate the use of this framework, we present an example analysis that we have done for an Oblivious Transfer protocol.
</description>
<pubDate>Wed, 08 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31310</guid>
<dc:date>2006-03-08T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperglue: Designing High-Level Agent Communication for Distributed Applications</title>
<link>https://hdl.handle.net/1721.1/31223</link>
<description>Hyperglue: Designing High-Level Agent Communication for Distributed Applications
Peters, Stephen; Look, Gary; Quigley, Kevin; Shrobe, Howard; Gajos, Krzysztof
We are building a new communication model and discovery system which will allow agent-based intelligent spaces to interact with one another. This new infrastructure layer, called Hyperglue, coordinates agent actions at a higher level than most agent communication does, providing an interface for communication at the level of "real-world" entities such as people, places, organizations, and information sources. The resulting structure is one which allows these agent communities to interact, while preserving the privacy, privileges, and preferences of the entities they represent. In this paper we describe the rationale for Hyperglue, and present the initial design as an extension of the existing Metaglue agent framework developed at the MIT AI Lab.
</description>
<pubDate>Wed, 01 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31223</guid>
<dc:date>2006-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plan-Driven Pervasive Computing</title>
<link>https://hdl.handle.net/1721.1/31222</link>
<description>Plan-Driven Pervasive Computing
Look, Gary; Peters, Stephen; Shrobe, Howard
The goal of human-centered, pervasive computing should be to hide the details of the computing environment, allowing users to concentrate on their goals, rather than on the direct management of devices. This paper describes a system that operates at the level of goals and plans, rather than individual resources. It adaptively selects from its plan library that plan which is likely to best achieve the user's goal in view of his preferences and current resource availability. Once the plan and resources are selected, it monitors the execution of the plan, dispatching subtasks when they are ready to be executed.
</description>
<pubDate>Wed, 01 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31222</guid>
<dc:date>2006-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Amorphous Infrastructure for Language Implementation</title>
<link>https://hdl.handle.net/1721.1/31221</link>
<description>Amorphous Infrastructure for Language Implementation
Newton, Ryan; Beal, Jacob
We propose a method for the robust implementation of simple graphical automata on an amorphous computer. This infrastructure is applied to the implementation of purely functional programming languages. Specifically, it is used in conjunction with data-flow techniques to implement a toy language homologous to recurrence equations, exploiting control-flow parallelism through parallel operand evaluation. Also, data parallelism is explored in a separate implementation, in which a simple mark-up syntax enables Scheme programs to perform spatially-distributed tree-walking without modifying their semantics. This addition enables an idiomatically expressed interpreter to be trivially instrumented, producing a spatially distributed universal machine, and once again achieving control-flow parallelism in the interpreted language.
</description>
<pubDate>Tue, 10 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31221</guid>
<dc:date>2002-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>A soft touch: Compliant Tactile Sensors for Sensitive Manipulation</title>
<link>https://hdl.handle.net/1721.1/31220</link>
<description>A soft touch: Compliant Tactile Sensors for Sensitive Manipulation
Torres-Jara, Eduardo; Vasilescu, Iuliu; Coral, Raul
We present the design, analysis and construction of a biologically inspired tactile sensor. The sensor can measure normal and lateral forces, conform to the surfaces with which it comes in contact, and increase the friction of the surface for a good grasp. The sensor is built using a simple process and the applied forces are read using standard electronics. These features make the sensors ideal for mass production. We are motivated to build tactile sensors that are useful for robotic manipulation, given that current ones do not have the features that we consider necessary. The sensors presented in this paper have been designed to deal with these issues. They have been designed and implemented in the fingers of the humanoid robot Obrero.
</description>
<pubDate>Wed, 01 Mar 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31220</guid>
<dc:date>2006-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finite Horizon Control Design for Optimal Discrimination between Several Models</title>
<link>https://hdl.handle.net/1721.1/31219</link>
<description>Finite Horizon Control Design for Optimal Discrimination between Several Models
Blackmore, Lars; Williams, Brian
Multiple-model fault detection is a powerful method for detecting changes, such as faults, in dynamic systems. In many cases, the ability of such a detection scheme to distinguish between possible models for the system dynamics depends critically on the control inputs applied to the system. Prior work has therefore aimed to design control inputs in order to improve fault detection. We previously developed a new method that uses constrained finite horizon control design to create control inputs that minimize an upper bound on the probability of model selection error. This method is limited, however, to the problem of selection between two models. In this paper we describe a new method that extends this approach to handle an arbitrary number of models. By optimizing subject to hard constraints, the new method can ensure that a defined task is fulfilled, while optimally discriminating between models. This means that the discrimination power of the designed control input can be much greater than that created by other approaches, which typically design 'auxiliary' signals with limited power so that the effect on the system state is small. Furthermore, the optimization criterion, which is an upper bound on the probability of model selection error, has a more meaningful interpretation than alternative approaches that are based on information gain, for example. We demonstrate the method using an aircraft fault detection scenario and show that the new method significantly reduces the bound on the probability of error when compared to a manually generated identification sequence and a fuel-optimal sequence.
</description>
<pubDate>Tue, 28 Feb 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31219</guid>
<dc:date>2006-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Animation of Dynamic Manipulation</title>
<link>https://hdl.handle.net/1721.1/31218</link>
<description>Interactive Animation of Dynamic Manipulation
Abe, Yeuhi; Popovic, Jovan
Lifelike animation of manipulation must account for the dynamic interaction between animated characters, objects, and their environment. Failing to do so would ignore the often significant effects objects have on the motion of the character. For example, lifting a heavy object would appear identical to lifting a light one. Physical simulation handles such interaction correctly, with a principled approach that adapts easily to different circumstances, changing environments, and unexpected disturbances. Our work shows how to control lifelike animated characters so that they accomplish manipulation tasks within an interactive physical simulation. Our new multi-task control algorithm simplifies descriptions of manipulation by supporting prioritized goals in both the joint space of the character and the task space of the object. The end result is a versatile algorithm that incorporates realistic force limits and recorded motion postures to portray lifelike manipulation automatically.
</description>
<pubDate>Tue, 28 Feb 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31218</guid>
<dc:date>2006-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Control and Estimation for Cooperative Manipulator Tasks</title>
<link>https://hdl.handle.net/1721.1/31217</link>
<description>Control and Estimation for Cooperative Manipulator Tasks
Blackmore, Lars; Block, Steve
The objective of this project is to achieve reliable transfer of an object from one robotic manipulator to another. This capability is useful for a number of applications, for instance robotic assembly, or robots with multiple manipulators, such as humanoid robots. Achieving reliable object transfer poses a number of challenges for both control and estimation. As with most manipulation problems, the inverse kinematics problem must be solved so that the desired endpoint location can be specified in Cartesian coordinates, rather than in the joint space of the manipulator. An additional challenge particular to the cooperative robotics problem is that more than one manipulator may have a grasp on the same object. Manipulators that are carrying out simple position control may encounter problems when grasping the same object. Minor errors in forward kinematics can lead to large controller forces, or even unstable dynamics, as each controller tries to counteract the other to drive the perceived error to zero. On the estimation side, carrying out reliable transfer depends critically on determining the grasp state; in other words, does a particular robot have a grasp on the object, or do both have the object? The grasp state must be determined before the sequence of events in a transfer task can proceed. For example, the manipulator receiving the object cannot move away until it is certain that the manipulator passing the object has released. In many instances, having pressure sensors mounted in the hand is infeasible. For example, packaging reasons can mean that the necessary space is not available, as is the case with the JPL LEMUR hexapod.
We therefore need to infer the grasp state from the available observations, which are usually supplied by position encoders at the joints. For this project we assume that each manipulator carries out estimation independently, without joint angle observations from the other robot, but with knowledge of its own joint angles and of the commands to be issued to both robots. This is typical of a multi-agent cooperative task, and the lack of observations makes the estimation task even more challenging. This report describes the approach we use to solve this problem, which comprises an impedance controller and a hybrid estimator.
</description>
<pubDate>Tue, 28 Feb 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31217</guid>
<dc:date>2006-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Encrypted Keyword Search in a Distributed Storage System</title>
<link>https://hdl.handle.net/1721.1/31216</link>
<description>Encrypted Keyword Search in a Distributed Storage System
Artzi, Shay; Kiezun, Adam; Newport, Calvin; Schultz, David
Encrypted keyword search allows a server to perform a search over a set of encrypted documents on behalf of a client without learning the contents of the documents or the words being searched for. Designing a practical system is challenging because the privacy constraint thwarts standard indexing and ranking techniques. We present Mafdet, an encrypted keyword search system we have implemented. Our system makes the search practical even for large data sets. We evaluated Mafdet's performance on a set of queries and a large collection of documents. In these queries, Mafdet's accuracy is within 6% of Google Desktop, and the search time is on the order of seconds for document sets as large as 2.6 GB.
</description>
<pubDate>Thu, 23 Feb 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31216</guid>
<dc:date>2006-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Network Coding Made Practical</title>
<link>https://hdl.handle.net/1721.1/31212</link>
<description>Network Coding Made Practical
Katti, Sachin; Rahul, Hariharan; Hu, Wenjun; Katabi, Dina; Crowcroft, Jon
We propose a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. In contrast to prior work on network coding, which is mainly theoretical and focuses on multicast traffic, ours is practical and solves the common case of unicast traffic.  We present the first implementation of network coding in a wireless network. Our system introduces a coding layer between the IP and MAC layers. It works with UDP and TCP traffic, and hence seamlessly integrates with existing applications. We evaluate our design on a 34-node wireless testbed and show that it delivers a 3-4x increase in the throughput of wireless mesh networks.
</description>
<pubDate>Thu, 16 Feb 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31212</guid>
<dc:date>2006-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Semantic Scene Models by Trajectory Analysis</title>
<link>https://hdl.handle.net/1721.1/31208</link>
<description>Learning Semantic Scene Models by Trajectory Analysis
Wang, Xiaogang; Tieu, Kinh; Grimson, Eric
In this paper, we describe an unsupervised learning framework to segment a scene into semantic regions and to build semantic scene models from long-term observations of moving objects in the scene. First, we introduce two novel similarity measures for comparing trajectories in far-field visual surveillance. The measures simultaneously compare the spatial distribution of trajectories and other attributes, such as velocity and object size, along the trajectories. They also provide a comparison confidence measure which indicates how well the measured image-based similarity approximates true physical similarity.  We also introduce novel clustering algorithms which use both similarity and comparison confidence. Based on the proposed similarity measures and clustering methods, a framework to learn semantic scene models by trajectory analysis is developed. Trajectories are first clustered into vehicles and pedestrians, and then further grouped based on spatial and velocity distributions. Different trajectory clusters represent different activities. The geometric and statistical models of structures in the scene, such as roads, walk paths, sources and sinks, are automatically learned from the trajectory clusters. Abnormal activities are detected using the semantic scene models. The system is robust to low-level tracking errors.
</description>
<pubDate>Fri, 10 Feb 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/31208</guid>
<dc:date>2006-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Transparent Accountable Data Mining: New Strategies for Privacy Protection</title>
<link>https://hdl.handle.net/1721.1/30972</link>
<description>Transparent Accountable Data Mining: New Strategies for Privacy Protection
Weitzner, Daniel J.; Abelson, Harold; Berners-Lee, Tim; Hanson, Chris; Hendler, James; Kagal, Lalana; McGuinness, Deborah L.; Sussman, Gerald Jay; Waterman, K. Krasnow
Attempts to address issues of personal privacy in a world of computerized databases and information networks -- from security technology to data protection regulation to Fourth Amendment law jurisprudence -- typically proceed from the perspective of controlling or preventing access to information.  We argue that this perspective has become inadequate and obsolete, overtaken by the ease of sharing and copying data and of aggregating and searching across multiple databases, to reveal private information from public sources.  To replace this obsolete framework, we propose that issues of privacy protection currently viewed in terms of data access be re-conceptualized in terms of data use.  From a technology perspective, this requires supplementing legal and technical mechanisms for access control with new mechanisms for transparency and accountability of data use.  In this paper, we present a technology infrastructure -- the Policy Aware Web -- that supports transparent and accountable data use on the World Wide Web, and elements of a new legal and regulatory regime that supports privacy through provable accountability to usage rules rather than merely data access restrictions.
</description>
<pubDate>Fri, 27 Jan 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30972</guid>
<dc:date>2006-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>A Consistency Management Layer for Inter-Domain Routing</title>
<link>https://hdl.handle.net/1721.1/30971</link>
<description>A Consistency Management Layer for Inter-Domain Routing
Kushman, Nate; Katabi, Dina; Wroclawski, John
This paper proposes an isolation layer -- a shim -- between inter-domain routing and packet forwarding. The job of this layer is to coordinate between Autonomous Systems (AS's) on when and how to modify the forwarding state to ensure inter-domain routing loops do not cause forwarding loops. The benefits of a consistency layer are twofold.  First, it prevents the creation of transient inter-domain forwarding loops and the resulting packet loss, high latency, and connection failures. Second, by taking the burden of forwarding consistency off the inter-domain routing protocol, it enables inter-domain routing protocols with more complex convergence characteristics than BGP, such as protocols that optimize route selection based on performance.  We offer two possible designs for the consistency layer. We prove that both designs are free of forwarding loops and show they are easy to deploy in the current Internet.
</description>
<pubDate>Fri, 27 Jan 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30971</guid>
<dc:date>2006-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Information Theoretic Framework for Pair- and Group-wise Registration of Medical Images</title>
<link>https://hdl.handle.net/1721.1/30970</link>
<description>A Unified Information Theoretic Framework for Pair- and Group-wise Registration of Medical Images
Zollei, Lilla
The field of medical image analysis has been rapidly growing for the past two decades. Besides a significant growth in computational power, scanner performance, and storage facilities, this acceleration is partially due to an unprecedented increase in the amount of data sets accessible for researchers. Medical experts traditionally rely on manual comparisons of images, but the abundance of information now available makes this task increasingly difficult. Such a challenge prompts for more automation in processing the images. In order to carry out any sort of comparison among multiple medical images, one frequently needs to identify the proper correspondence between them. This step allows us to follow the changes that happen to anatomy throughout a time interval, to identify differences between individuals, or to acquire complementary information from different data modalities. Registration achieves such a correspondence. In this dissertation we focus on the unified analysis and characterization of statistical registration approaches. We formulate and interpret a select group of pair-wise registration methods in the context of a unified statistical and information theoretic framework. This clarifies the implicit assumptions of each method and yields a better understanding of their relative strengths and weaknesses. This guides us to a new registration algorithm that incorporates the advantages of the previously described methods. Next we extend the unified formulation with analysis of the group-wise registration algorithms that align a population as opposed to pairs of data sets. Finally, we present our group-wise registration framework, stochastic congealing. The algorithm runs in a simultaneous fashion, with every member of the population approaching the central tendency of the collection at the same time. It eliminates the need for selecting a particular reference frame a priori, resulting in a non-biased estimate of a digital template.
Our algorithm adopts an information theoretic objective function which is optimized via a gradient-based stochastic approximation process embedded in a multi-resolution setting. We demonstrate the accuracy and performance characteristics of stochastic congealing via experiments on both synthetic and real images.
PhD thesis
</description>
<pubDate>Wed, 25 Jan 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30970</guid>
<dc:date>2006-01-25T00:00:00Z</dc:date>
</item>
<item>
<title>Service Identification in TCP/IP: Well-Known versus Random Port Numbers</title>
<link>https://hdl.handle.net/1721.1/30606</link>
<description>Service Identification in TCP/IP: Well-Known versus Random Port Numbers
Masiello, Elizabeth
The sixteen-bit well-known port number is often overlooked as a network identifier in Internet communications. Its purpose at the most fundamental level is only to demultiplex flows of traffic. Several unintended uses of the port number evolved from associating services with a list of well-known port numbers. This thesis documents those unintended consequences in an effort to describe the port number's influence on Internet players from ISPs to application developers to individual users. Proposals and examples of moving away from well-known port numbers to randomly assigned ones are then presented, with analysis of impacts on the political and economic systems on which Internet communication is dependent.
SM thesis
</description>
<pubDate>Wed, 11 Jan 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30606</guid>
<dc:date>2006-01-11T00:00:00Z</dc:date>
</item>
<item>
<title>Wide-Area Egomotion Estimation from Known 3D Structure</title>
<link>https://hdl.handle.net/1721.1/30605</link>
<description>Wide-Area Egomotion Estimation from Known 3D Structure
Koch, Olivier; Teller, Seth
We describe an algorithm that takes as inputs a coarse 3D model of an environment, and a video sequence acquired within the environment, and produces as output an estimate of the camera's 6-DOF egomotion expressed in the coordinates of the 3D model. Our method has several novel aspects: it performs line-based structure-from-motion; it aligns the local line constellation to the known model; and it uses off-line visibility analysis to dramatically accelerate the alignment process. We present simulation results demonstrating the method's operation in a multi-room environment. We show that the method can estimate metric egomotion accurately and could be used for many minutes of operation and thousands of video frames.
</description>
<pubDate>Mon, 09 Jan 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30605</guid>
<dc:date>2006-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>Nuggeteer: Automatic Nugget-Based Evaluation Using Descriptions and Judgements</title>
<link>https://hdl.handle.net/1721.1/30604</link>
<description>Nuggeteer: Automatic Nugget-Based Evaluation Using Descriptions and Judgements
Marton, Gregory
TREC Definition and Relationship questions are evaluated on the basis of information nuggets that may be contained in system responses.  Human evaluators provide informal descriptions of each nugget, and judgements (assignments of nuggets to responses) for each response submitted by participants. The best present automatic evaluation for these kinds of questions is Pourpre.  Pourpre uses a stemmed unigram similarity of responses with nugget descriptions, yielding an aggregate result that is difficult to interpret, but is useful for relative comparison.  Nuggeteer, by contrast, uses both the human descriptions and the human judgements, and makes binary decisions about each response, so that the end result is as interpretable as the official score. I explore n-gram length, use of judgements, stemming, and term weighting, and provide a new algorithm quantitatively comparable to, and qualitatively better than, the state of the art.
</description>
<pubDate>Mon, 09 Jan 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30604</guid>
<dc:date>2006-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>Polylogarithmic Approximation Algorithm for Non-Uniform Multicommodity Buy-at-Bulk</title>
<link>https://hdl.handle.net/1721.1/30602</link>
<description>Polylogarithmic Approximation Algorithm for Non-Uniform Multicommodity Buy-at-Bulk
Hajiaghayi, MohammadTaghi; Kortsarz, Guy; Salavatipour, Mohammad R.
We consider the non-uniform multicommodity buy-at-bulk network design problem. In this problem we are given a graph $G(V,E)$ with two cost functions on the edges, a buy cost $b:E\longrightarrow \RR^+$ and a rent cost $r:E\longrightarrow\RR^+$, and a set of source-sink pairs $s_i,t_i\in V$ ($1\leq i\leq \alpha$), with each pair $i$ having a positive demand $\delta_i$. Our goal is to design a minimum cost network $G(V,E')$ such that for every $1\leq i\leq\alpha$, $s_i$ and $t_i$ are in the same connected component in $G(V,E')$. The total cost of $G(V,E')$ is the sum of the buy costs of the edges in $E'$, plus the sum over every edge in $E'$ of the total demand going through that edge times its rent cost. Since the costs of different edges can be different, we say that the problem is non-uniform. The first non-trivial approximation algorithm for this problem is due to Charikar and Karagiozova (STOC '05), whose algorithm has an approximation guarantee of $\exp(O(\sqrt{\log n\log\log n}))$ when all $\delta_i=1$, and $\exp(O(\sqrt{\log N\log\log N}))$ for the general demand case, where $N$ is the sum of all demands. We improve upon this result by presenting the first polylogarithmic (specifically, $O(\log^4 n)$ for unit demands and $O(\log^4 N)$ for general demands) approximation for this problem. The algorithm relies on a recent result \cite{HKS1} for the buy-at-bulk $k$-Steiner tree problem.
</description>
<pubDate>Sat, 26 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30602</guid>
<dc:date>2005-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Approximating Buy-at-Bulk k-Steiner trees</title>
<link>https://hdl.handle.net/1721.1/30601</link>
<description>Approximating Buy-at-Bulk k-Steiner trees
Hajiaghayi, MohammadTaghi; Kortsarz, Guy; Salavatipour, Mohammad R.
In the buy-at-bulk $k$-Steiner tree (or rent-or-buy $k$-Steiner tree) problem we are given a graph $G(V,E)$ with a set of terminals $T\subseteq V$ including a particular vertex $s$ called the root, and an integer $k\leq |T|$. There are two cost functions on the edges of $G$, a buy cost $b:E\longrightarrow \RR^+$ and a rent cost $r:E\longrightarrow \RR^+$. The goal is to find a subtree $H$ of $G$ rooted at $s$ with at least $k$ terminals so that the cost $\sum_{e\in H} b(e)+\sum_{t\in T-s} dist(t,s)$ is minimized, where $dist(t,s)$ is the distance from $t$ to $s$ in $H$ with respect to the $r$ cost. Our main result is an $O(\log^5 n)$-approximation for the buy-at-bulk $k$-Steiner tree problem. To achieve this we also design an approximation algorithm for the bicriteria $k$-Steiner tree problem. In the bicriteria $k$-Steiner tree problem we are given a graph $G$ with edge costs $b(e)$ and distance costs $r(e)$ over the edges, and an integer $k$. Our goal is to find a minimum cost (under the $b$-cost) $k$-Steiner tree such that the diameter under the $r$-cost is at most some given bound $D$. An $(\alpha,\beta)$-approximation finds a subgraph of diameter at most $\alpha\cdot D$ (with respect to $r$) and cost with respect to $b$ of at most $\beta\cdot opt$, where $opt$ is the minimum cost of any solution with diameter at most $D$. Marathe et al \cite{ravi} gave an $(O(\log n),O(\log n))$-approximation algorithm for the bicriteria Steiner tree problem. Their algorithm does not extend to the bicriteria $k$-Steiner tree problem. Our algorithm for the buy-at-bulk $k$-Steiner tree problem relies on an $(O(\log^2 n),O(\log^4 n))$-approximation algorithm we develop for the (shallow-light) bicriteria $k$-Steiner tree problem, which is of independent interest. Indeed, this is also one of the main tools we use to obtain the first polylogarithmic approximation algorithm for non-uniform multicommodity buy-at-bulk~\cite{HKS}.
</description>
<pubDate>Tue, 15 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30601</guid>
<dc:date>2005-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver's Gift</title>
<link>https://hdl.handle.net/1721.1/30591</link>
<description>Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver's Gift
Arsenio, Artur Miguel
The goal of this work is to build a cognitive system for the humanoid robot, Cog, that exploits human caregivers as catalysts to perceive and learn about actions, objects, scenes, people, and the robot itself. This thesis addresses a broad spectrum of machine learning problems across several categorization levels. Actions by embodied agents are used to automatically generate training data for the learning mechanisms, so that the robot develops categorization autonomously. Taking inspiration from the human brain, a framework of algorithms and methodologies was implemented to emulate different cognitive capabilities on the humanoid robot Cog. This framework is effectively applied to a collection of AI, computer vision, and signal processing problems. Cognitive capabilities of the humanoid robot are developmentally created, starting from infant-like abilities for detecting, segmenting, and recognizing percepts over multiple sensing modalities. Human caregivers provide a helping hand for communicating such information to the robot. This is done by actions that create meaningful events (by changing the world in which the robot is situated), thus inducing the "compliant perception" of objects from these human-robot interactions. Self-exploration of the world extends the robot's knowledge concerning object properties. This thesis argues for enculturating humanoid robots using infant development as a metaphor for building a humanoid robot's cognitive abilities. A human caregiver redesigns a humanoid's brain by teaching the humanoid robot as she would teach a child, using children's learning aids such as books, drawing boards, or other cognitive artifacts. Multi-modal object properties are learned using these tools and inserted into several recognition schemes, which are then applied to developmentally acquire new object representations.
The humanoid robot therefore sees the world through the caregiver's eyes. Building an artificial humanoid robot's brain, even at an infant's cognitive level, has been a long quest which still lies only in the realm of our imagination. Our efforts towards such a dimly imaginable task are developed according to two alternate and complementary views: cognitive and developmental.
</description>
<pubDate>Sun, 26 Sep 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30591</guid>
<dc:date>2004-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>Electronic Cash with Blind Deposits: How to Have No Spare Change</title>
<link>https://hdl.handle.net/1721.1/30427</link>
<description>Electronic Cash with Blind Deposits: How to Have No Spare Change
Liskov, Moses
Electronic cash schemes in which the bank authenticates many coins at once suffer from the problem that coins that are authenticated together can be linked to one another. Unfortunately, unless a user spends coins in a closely prescribed manner, different batches of coins ("wallets") will be linked together in these schemes. This is illustrated by the problem of what a customer does with the "spare change" - an unusable small amount of money left in a wallet. We propose a new protocol to be used in e-cash schemes: blind deposits. In a blind deposit, a customer returns a coin to the bank without revealing the coin. We present a secure and efficient e-cash scheme with this added feature based on that of Liskov-Micali [LM01].
</description>
<pubDate>Tue, 14 Oct 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30427</guid>
<dc:date>2003-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Trees of (Reducible) 1324-avoiding Permutations</title>
<link>https://hdl.handle.net/1721.1/30426</link>
<description>Generating Trees of (Reducible) 1324-avoiding Permutations
Marinov, Darko; Rodoicic, Rados
We consider permutations that avoid the pattern 1324. We give exact formulas for the number of reducible 1324-avoiding permutations and the number of {1324, 4132, 2413, 3241}-avoiding permutations. By studying the generating tree for all 1324-avoiding permutations, we obtain a recurrence formula for their number. A computer program provides data for the number of 1324-avoiding permutations of length up to 20.
</description>
<pubDate>Thu, 09 Oct 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30426</guid>
<dc:date>2003-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Error weighted classifier combination for multi-modal human identification</title>
<link>https://hdl.handle.net/1721.1/30590</link>
<description>Error weighted classifier combination for multi-modal human identification
Ivanov, Yuri; Serre, Thomas; Bouvrie, Jacob
In this paper we describe a technique of classifier combination used in a human identification system. The system integrates all available features from multi-modal sources within a Bayesian framework. The framework allows representing a class of popular classifier combination rules and methods within a single formalism. It relies on a “per-class” measure of confidence derived from the performance of each classifier on training data that is shown to improve performance on a synthetic data set. The method is especially relevant in autonomous surveillance settings where varying time scales and missing features are a common occurrence. We show an application of this technique to a real-world surveillance database of video and audio recordings of people collected over several weeks in an office setting.
</description>
<pubDate>Wed, 14 Dec 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30590</guid>
<dc:date>2005-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Software Upgrades for Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/30589</link>
<description>Automatic Software Upgrades for Distributed Systems
Ajmani, Sameer
Upgrading the software of long-lived, highly-available distributed systems is difficult. It is not possible to upgrade all the nodes in a system at once, since some nodes may be unavailable and halting the system for an upgrade is unacceptable. Instead, upgrades may happen gradually, and there may be long periods of time when different nodes are running different software versions and need to communicate using incompatible protocols. We present a methodology and infrastructure that address these challenges and make it possible to upgrade distributed systems automatically while limiting service disruption. Our methodology defines how to enable nodes to interoperate across versions, how to preserve the state of a system across upgrades, and how to schedule an upgrade so as to limit service disruption. The approach is modular: defining an upgrade requires understanding only the new software and the version it replaces. The upgrade infrastructure is a generic platform for distributing and installing software while enabling nodes to interoperate across versions. The infrastructure requires no access to the system source code and is transparent: node software is unaware that different versions even exist. We have implemented a prototype of the infrastructure called Upstart that intercepts socket communication using a dynamically-linked C++ library. Experiments show that Upstart has low overhead and works well for both local-area and Internet systems.
</description>
<pubDate>Wed, 30 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30589</guid>
<dc:date>2005-11-30T00:00:00Z</dc:date>
</item>
<item>
<title>Conditional Random People: Tracking Humans with CRFs and Grid Filters</title>
<link>https://hdl.handle.net/1721.1/30588</link>
<description>Conditional Random People: Tracking Humans with CRFs and Grid Filters
Taycher, Leonid; Shakhnarovich, Gregory; Demirdjian, David; Darrell, Trevor
We describe a state-space tracking approach based on a Conditional Random Field (CRF) model, where the observation potentials are \emph{learned} from data. We find functions that embed both state and observation into a space where similarity corresponds to $L_1$ distance, and define an observation potential based on distance in this space. This potential is extremely fast to compute and, in conjunction with a grid-filtering framework, can be used to reduce a continuous state estimation problem to a discrete one. We show how a state temporal prior in the grid-filter can be computed in a manner similar to a sparse HMM, resulting in real-time system performance. The resulting system is used for human pose tracking in video sequences.
</description>
<pubDate>Thu, 01 Dec 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30588</guid>
<dc:date>2005-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying Expression Fingerprints using Linguistic Information</title>
<link>https://hdl.handle.net/1721.1/30587</link>
<description>Identifying Expression Fingerprints using Linguistic Information
Uzuner, Ozlem
This thesis presents a technology to complement taxation-based policy proposals aimed at addressing the digital copyright problem. The approach presented facilitates identification of intellectual property using expression fingerprints. Copyright law protects expression of content. Recognizing literary works for copyright protection requires identification of the expression of their content. The expression fingerprints described in this thesis use a novel set of linguistic features that capture both the content presented in documents and the manner of expression used in conveying this content. These fingerprints consist of both syntactic and semantic elements of language. Examples of the syntactic elements of expression include structures of embedding and embedded verb phrases. The semantic elements of expression consist of high-level, broad semantic categories. Syntactic and semantic elements of expression enable generation of models that correctly identify books and their paraphrases 82% of the time, providing a significant (approximately 18%) improvement over models that use tfidf-weighted keywords. The performance of models built with these features is also better than models created with standard features used in stylometry (e.g., function words), which yield an accuracy of 62%. In the non-digital world, copyright holders collect revenues by controlling distribution of their works. Current approaches to the digital copyright problem attempt to provide copyright holders with the same kind of control over distribution by employing Digital Rights Management (DRM) systems. However, DRM systems also enable copyright holders to control and limit fair use, to inhibit others' speech, and to collect private information about individual users of digital works. Digital tracking technologies enable alternate solutions to the digital copyright problem; some of these solutions can protect creative incentives of copyright holders in the absence of control over distribution of works.
Expression fingerprints facilitate digital tracking even when literary works are DRM- and watermark-free, and even when they are paraphrased. As such, they enable metering the popularity of works and make practicable solutions that encourage large-scale dissemination and unrestricted use of digital works and that protect the revenues of copyright holders, for example through taxation-based revenue collection and distribution systems, without imposing limits on distribution.
</description>
<pubDate>Fri, 18 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30587</guid>
<dc:date>2005-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Accurate and Scalable Surface Representation and Reconstruction from Images</title>
<link>https://hdl.handle.net/1721.1/30586</link>
<description>Accurate and Scalable Surface Representation and Reconstruction from Images
Zeng, Gang; Paris, Sylvain; Quan, Long; Sillion, Francois
We introduce a new surface representation, the patchwork, to extend the problem of surface reconstruction from multiple images. A patchwork is the combination of several patches that are built one by one. This design potentially allows the reconstruction of an object of arbitrarily large dimensions while preserving a fine level of detail. We formally demonstrate that this strategy leads to a spatial complexity independent of the dimensions of the reconstructed object, and to a time complexity linear with respect to the object area. The former property ensures that we never run out of storage (memory) and the latter means that reconstructing an object can be done in a reasonable amount of time. In addition, we show that the patchwork representation handles open and closed surfaces equivalently, whereas most of the existing approaches are limited to a specific scenario (open or closed surface but not both). Most of the existing optimization techniques can be cast into this framework. To illustrate the possibilities offered by this approach, we propose two applications that expose how it dramatically extends a recent accurate graph-cut technique. We first revisit the popular carving techniques. This results in a well-posed reconstruction problem that still enjoys the tractability of voxel space. We also show how we can advantageously combine several image-driven criteria to achieve a finely detailed geometry by surface propagation. The above properties of the patchwork representation and reconstruction are extensively demonstrated on real image sequences.
</description>
<pubDate>Fri, 18 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30586</guid>
<dc:date>2005-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Perceptron-Based Active Learning</title>
<link>https://hdl.handle.net/1721.1/30585</link>
<description>Analysis of Perceptron-Based Active Learning
Dasgupta, Sanjoy; Kalai, Adam Tauman; Monteleoni, Claire
We start by showing that in an active learning setting, the Perceptron algorithm needs $\Omega(\frac{1}{\epsilon^2})$ labels to learn linear separators within generalization error $\epsilon$.  We then present a simple selective sampling algorithm for this problem, which combines a modification of the perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error $\epsilon$ after asking for just $\tilde{O}(d \log \frac{1}{\epsilon})$ labels. This exponential improvement over the usual sample complexity of supervised learning has previously been demonstrated only for the computationally more complex query-by-committee algorithm.
</description>
<pubDate>Thu, 17 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30585</guid>
<dc:date>2005-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>Online Learning of Non-stationary Sequences</title>
<link>https://hdl.handle.net/1721.1/30584</link>
<description>Online Learning of Non-stationary Sequences
Monteleoni, Claire; Jaakkola, Tommi
We consider an online learning scenario in which the learner can make predictions on the basis of a fixed set of experts.  We derive upper and lower relative loss bounds for a class of universal learning algorithms involving a switching dynamics over the choice of the experts.  On the basis of the performance bounds we provide the optimal a priori discretization of the switching-rate parameter that governs the switching dynamics. We demonstrate the algorithm in the context of wireless networks.
</description>
<pubDate>Thu, 17 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30584</guid>
<dc:date>2005-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>New LSH-based Algorithm for Approximate Nearest Neighbor</title>
<link>https://hdl.handle.net/1721.1/30583</link>
<description>New LSH-based Algorithm for Approximate Nearest Neighbor
Andoni, Alexandr; Indyk, Piotr
We present an algorithm for the c-approximate nearest neighbor problem in a d-dimensional Euclidean space, achieving query time of O(dn^{1/c^2+o(1)}) and space O(dn + n^{1+1/c^2+o(1)}).
</description>
<pubDate>Fri, 04 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30583</guid>
<dc:date>2005-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>On Field Constraint Analysis</title>
<link>https://hdl.handle.net/1721.1/30582</link>
<description>On Field Constraint Analysis
Wies, Thomas; Kuncak, Viktor; Lam, Patrick; Podelski, Andreas; Rinard, Martin
We introduce field constraint analysis, a new technique for verifying data structure invariants. A field constraint for a field is a formula specifying a set of objects to which the field can point. Field constraints enable the application of decidable logics to data structures which were originally beyond the scope of these logics, by verifying the backbone of the data structure and then verifying constraints on fields that cross-cut the backbone in arbitrary ways. Previously, such cross-cutting fields could only be verified when they were uniquely determined by the backbone, which significantly limited the range of analyzable data structures. Our field constraint analysis permits \emph{non-deterministic} field constraints on cross-cutting fields, which allows us to verify invariants of data structures such as skip lists. Non-deterministic field constraints also enable the verification of invariants between data structures, yielding an expressive generalization of static type declarations. The generality of our field constraints requires new techniques, which are orthogonal to the traditional use of structure simulation. We present one such technique and prove its soundness. We have implemented this technique as part of a symbolic shape analysis deployed in the context of the Hob system for verifying data structure consistency. Using this implementation we were able to verify data structures that were previously beyond the reach of similar techniques.
</description>
<pubDate>Thu, 03 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30582</guid>
<dc:date>2005-11-03T00:00:00Z</dc:date>
</item>
<item>
<title>Subcontracted Rational SFE</title>
<link>https://hdl.handle.net/1721.1/30581</link>
<description>Subcontracted Rational SFE
Lepinski, Matthew; Micali, Silvio
In their paper, "Rational Secure Computation and Ideal Mechanism Design," Izmalkov, Lepinski and Micali show that any one-shot mediated game can be simulated by the players themselves, without the help of a trusted mediator, using physical envelopes and a ballot-box. We show that communication between the players is not essential to the ILM protocol. That is, we provide a protocol for rational secure function evaluation (Rational SFE) where the players just send a set of envelopes to a referee who simply performs a sequence of publicly verifiable actions. In effect, the players can "subcontract" all of the computation to an untrusted referee. In addition to providing a communication structure that more closely matches the ideal game, our protocol also enables us to better simulate mediated games in which abort is not a dominated action.
</description>
<pubDate>Wed, 02 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30581</guid>
<dc:date>2005-11-02T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Realizing the Performance and Availability Benefits of a Global Overlay Network</title>
<link>https://hdl.handle.net/1721.1/30580</link>
<description>Towards Realizing the Performance and Availability Benefits of a Global Overlay Network
Rahul, Hariharan; Kasbekar, Mangesh; Sitaraman, Ramesh; Berger, Arthur
Prior analyses of the benefits of routing overlays are based on platforms consisting of nodes located primarily in North America, on the academic Internet, and at the edge of the network. This paper is the first global study of the benefits of overlays on the commercial Internet in terms of round trip latencies and availability, using measurements from diverse ISPs over 1100 locations (77 countries, 630 cities, and 6 continents). Our study shows that while overlays provide some improvements in North America, their benefits are especially significant for paths with Asian endpoints. Regarding practical considerations in constructing overlay routes, we show that an algorithm that randomly chooses a small number of alternate redundant paths achieves an availability of over 99.5%. We also propose and evaluate a simple predictive scheme that achieves almost optimal latency using only 2-3 paths, and show that this is achievable with surprisingly persistent routing choices.
</description>
<pubDate>Tue, 01 Nov 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30580</guid>
<dc:date>2005-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Cyclic Memory Allocation to Eliminate Memory Leaks</title>
<link>https://hdl.handle.net/1721.1/30579</link>
<description>Using Cyclic Memory Allocation to Eliminate Memory Leaks
Nguyen, Huu Hai; Rinard, Martin
We present and evaluate a new memory management technique for eliminating memory leaks in programs with dynamic memory allocation. This technique observes the execution of the program on a sequence of training inputs to find m-bounded allocation sites, which have the property that at any time during the execution of the program, the program accesses at most only the last m objects allocated at that site. The technique then transforms the program to use cyclic memory allocation at that site: it preallocates a buffer containing m objects of the type allocated at that site, with each allocation returning the next object in the buffer. At the end of the buffer the allocations wrap back around to the first object. Cyclic allocation eliminates any memory leak at the allocation site: the total amount of memory required to hold all of the objects ever allocated at the site is simply $m$ times the object size. We evaluate our technique by applying it to several widely-used open source programs. Our results show that it is able to successfully eliminate important memory leaks in these programs. A potential concern is that the estimated bounds m may be too small, causing the program to overlay live objects in memory. Our results indicate that our bounds estimation technique is quite accurate in practice, providing incorrect results for only one of the 160 m-bounded sites that it identifies. To evaluate the potential impact of overlaying live objects, we artificially reduce the bounds at $m$-bounded sites and observe the resulting behavior. The resulting overlaying of live objects often does not affect the functionality of the program at all; even when it does impair part of the functionality, the program does not fail and is still able to acceptably deliver the remaining functionality.
</description>
<pubDate>Wed, 26 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30579</guid>
<dc:date>2005-10-26T00:00:00Z</dc:date>
</item>
<item>
<title>MPEG-2 in a Stream Programming Language</title>
<link>https://hdl.handle.net/1721.1/30578</link>
<description>MPEG-2 in a Stream Programming Language
Drake, Matthew; Hoffmann, Hank; Rabbah, Rodric; Amarasinghe, Saman
Image and video codecs are prevalent in multimedia applications, ranging from embedded systems, to desktop computers, to high-end servers such as HDTV editing consoles. It is not uncommon, however, for developers to create (from scratch) and customize their codec implementations for each of the architecture targets they intend their coders and decoders to run on. This practice is time consuming and error prone, leading to code that is not malleable or portable. In this paper we describe an implementation of the MPEG-2 codec using the StreamIt programming language. StreamIt is an architecture-independent stream language that aims to improve programmer productivity, while concomitantly exposing the inherent parallelism and communication topology of the application. We describe why MPEG is a good match for the streaming programming model, and illustrate the malleability of the implementation using a simple modification to the decoder to support alternate color compression formats. StreamIt allows for modular application development, which also reduces the complexity of the debugging process since stream components can be verified independently. This in turn leads to greater programmer productivity. We implement a fully functional MPEG-2 decoder in StreamIt. The decoder was developed in eight weeks by a single student programmer who did not have any prior experience with MPEG or other video codecs. Many of the MPEG-2 components were subsequently reused to assemble a JPEG codec.
</description>
<pubDate>Sat, 22 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30578</guid>
<dc:date>2005-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Asymptotics of Gaussian Regularized Least-Squares</title>
<link>https://hdl.handle.net/1721.1/30577</link>
<description>Asymptotics of Gaussian Regularized Least-Squares
Lippert, Ross; Rifkin, Ryan
We consider regularized least-squares (RLS) with a Gaussian kernel. We prove that if we let the Gaussian bandwidth $\sigma \rightarrow \infty$ while letting the regularization parameter $\lambda \rightarrow 0$, the RLS solution tends to a polynomial whose order is controlled by the relative rates of decay of $\frac{1}{\sigma^2}$ and $\lambda$: if $\lambda = \sigma^{-(2k+1)}$, then, as $\sigma \rightarrow \infty$, the RLS solution tends to the $k$th order polynomial with minimal empirical error. We illustrate the result with an example.
</description>
<pubDate>Thu, 20 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30577</guid>
<dc:date>2005-10-20T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge Flow Analysis for Security Protocols</title>
<link>https://hdl.handle.net/1721.1/30576</link>
<description>Knowledge Flow Analysis for Security Protocols
Torlak, Emina; van Dijk, Marten; Gassend, Blaise; Jackson, Daniel; Devadas, Srinivas
Knowledge flow analysis offers a simple and flexible way to find flaws in security protocols. A protocol is described by a collection of rules constraining the propagation of knowledge amongst principals. Because this characterization corresponds closely to informal descriptions of protocols, it allows a succinct and natural formalization; because it abstracts away message ordering, and handles communications between principals and applications of cryptographic primitives uniformly, it is readily represented in a standard logic. A generic framework in the Alloy modelling language is presented, and instantiated for two standard protocols, and a new key management scheme.
</description>
<pubDate>Wed, 19 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30576</guid>
<dc:date>2005-10-19T00:00:00Z</dc:date>
</item>
<item>
<title>Towards the Prevention of Dyslexia</title>
<link>https://hdl.handle.net/1721.1/30575</link>
<description>Towards the Prevention of Dyslexia
Geiger, Gadi; Amara, Domenic G
Previous studies have shown that dyslexic individuals who supplement windowed reading practice with intensive small-scale hand-eye coordination tasks exhibit marked improvement in their reading skills. Here we examine whether similar hand-eye coordination activities, in the form of artwork performed by children in kindergarten, first and second grades, could reduce the number of students at-risk for reading problems. Our results suggest that daily hand-eye coordination activities significantly reduce the number of students at-risk. We believe that the effectiveness of these activities derives from their ability to prepare the students perceptually for reading.
</description>
<pubDate>Tue, 18 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30575</guid>
<dc:date>2005-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>Victim Migration: Dynamically Adapting Between Private and Shared CMP Caches</title>
<link>https://hdl.handle.net/1721.1/30574</link>
<description>Victim Migration: Dynamically Adapting Between Private and Shared CMP Caches
Zhang, Michael; Asanovic, Krste
Future CMPs will have more cores and greater on-chip cache capacity. The on-chip cache can either be divided into separate private L2 caches for each core, or treated as a large shared L2 cache. Private caches provide low hit latency but low capacity, while shared caches have higher hit latencies but greater capacity. Victim replication was previously introduced as a way of reducing the average hit latency of a shared cache by allowing a processor to make a replica of a primary cache victim in its local slice of the global L2 cache. Although victim replication performs well on multithreaded and single-threaded codes, it performs worse than the private scheme for multiprogrammed workloads where there is little sharing between the different programs running at the same time. In this paper, we propose victim migration, which improves on victim replication by adding an additional set of migration tags on each node which are used to implement an exclusive cache policy for replicas. When a replica has been created on a remote node, it is not also cached on the home node, but only recorded in the migration tags. This frees up space on the home node to store shared global lines or replicas for the local processor. We show that victim migration performs better than private, shared, and victim replication schemes across a range of single threaded, multithreaded, and multiprogrammed workloads, while using less area than a private cache design. Victim migration provides a reduction in average memory access latency of up to 10% over victim replication.
</description>
<pubDate>Mon, 10 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30574</guid>
<dc:date>2005-10-10T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to Trade with Insider Information</title>
<link>https://hdl.handle.net/1721.1/30573</link>
<description>Learning to Trade with Insider Information
Das, Sanmay
This paper introduces algorithms for learning how to trade using insider (superior) information in Kyle's model of financial markets. Prior results in finance theory relied on the insider having perfect knowledge of the structure and parameters of the market. I show here that it is possible to learn the equilibrium trading strategy when its form is known, even without knowledge of the parameters governing trading in the model. However, the rate of convergence to equilibrium is slow, and an approximate algorithm that does not converge to the equilibrium strategy achieves better utility when the horizon is limited. I analyze this approximate algorithm from the perspective of reinforcement learning and discuss the importance of domain knowledge in designing a successful learning algorithm.
</description>
<pubDate>Fri, 07 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30573</guid>
<dc:date>2005-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Software Upgrades for Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/30572</link>
<description>Automatic Software Upgrades for Distributed Systems
Ajmani, Sameer; Liskov, Barbara; Shrira, Liuba; Curtis, Dorothy
Upgrading the software of long-lived, highly-available distributed systems is difficult.  It is not possible to upgrade all the nodes in a system at once, since some nodes may be unavailable and halting the system for an upgrade is unacceptable.  Instead, upgrades must happen gradually, and there may be long periods of time when different nodes run different software versions and need to communicate using incompatible protocols.  We present a methodology and infrastructure that make it possible to upgrade distributed systems automatically while limiting service disruption.  We introduce new ways to reason about correctness in a multi-version system.  We also describe a prototype implementation that supports automatic upgrades with modest overhead.
</description>
<pubDate>Thu, 06 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30572</guid>
<dc:date>2005-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>Secondary Structure Prediction of All-Helical Proteins Using Hidden Markov Support Vector Machines</title>
<link>https://hdl.handle.net/1721.1/30571</link>
<description>Secondary Structure Prediction of All-Helical Proteins Using Hidden Markov Support Vector Machines
Gassend, B.; O'Donnell, C. W.; Thies, W.; Lee, A.; van Dijk, M.; Devadas, S.
Our goal is to develop a state-of-the-art predictor with an intuitive and biophysically-motivated energy model through the use of Hidden Markov Support Vector Machines (HM-SVMs), a recent innovation in the field of machine learning.  We focus on the prediction of alpha helices in proteins and show that using HM-SVMs, a simple 7-state HMM with 302 parameters can achieve a Q_alpha value of 77.6% and a SOV_alpha value of 73.4%.  We briefly describe how our method can be generalized to predicting beta strands and sheets.
</description>
<pubDate>Thu, 06 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30571</guid>
<dc:date>2005-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>Combining diagrammatic and symbolic reasoning</title>
<link>https://hdl.handle.net/1721.1/30570</link>
<description>Combining diagrammatic and symbolic reasoning
Arkoudas, Konstantine
We introduce a domain-independent framework for heterogeneous natural deduction that combines diagrammatic and sentential reasoning. The framework is presented in the form of a family of denotational proof languages (DPLs). Diagrams are represented as possibly partial descriptions of finite system states. This allows us to deal with incomplete information, which we formalize by admitting sets as attribute values. We introduce a notion of attribute interpretations that enables us to interpret first-order signatures into such system states, and develop a formal semantic framework based on Kleene's strong three-valued logic. We extend the assumption-base semantics of DPLs to accommodate diagrammatic reasoning by introducing general inference mechanisms for the valid extraction of information from diagrams and for the incorporation of sentential information into diagrams. A rigorous big-step operational semantics is given, on the basis of which we prove that our framework is sound. In addition, we specify detailed algorithms for implementing proof checkers for the resulting languages, and discuss associated efficiency issues.
</description>
<pubDate>Thu, 06 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30570</guid>
<dc:date>2005-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial and Temporal Abstractions in POMDPs Applied to Robot Navigation</title>
<link>https://hdl.handle.net/1721.1/30569</link>
<description>Spatial and Temporal Abstractions in POMDPs Applied to Robot Navigation
Theocharous, Georgios; Mahadevan, Sridhar; Kaelbling, Leslie Pack
Partially observable Markov decision processes (POMDPs) are a well-studied paradigm for programming autonomous robots, where the robot sequentially chooses actions to achieve long-term goals efficiently.  Unfortunately, for real-world robots and other similar domains, the uncertain outcomes of the actions and the fact that the true world state may not be completely observable make learning of models of the world extremely difficult, and using them algorithmically infeasible.  In this paper we show that learning POMDP models and planning with them can become significantly easier when we incorporate into our algorithms the notions of spatial and temporal abstraction.  We demonstrate the superiority of our algorithms by comparing them with previous flat approaches for large-scale robot navigation.
</description>
<pubDate>Tue, 27 Sep 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30569</guid>
<dc:date>2005-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Audio-visual Activity Analysis</title>
<link>https://hdl.handle.net/1721.1/30568</link>
<description>Automated Audio-visual Activity Analysis
Stauffer, Chris
Current computer vision techniques can effectively monitor gross activities in sparse environments.  Unfortunately, visual stimulus is often not sufficient for reliably discriminating between many types of activity.  In many cases where the visual information required for a particular task is extremely subtle or non-existent, there is often audio stimulus that is extremely salient for a particular classification or anomaly detection task.  Unfortunately, unlike visual events, independent sounds are often very ambiguous and not sufficient to define useful events themselves.  Without an effective method of learning causally-linked temporal sequences of sound events that are coupled to the visual events, these sound events are generally only useful for independent anomalous sound detection, e.g., detecting a gunshot or breaking glass.  This paper outlines a method for automatically detecting a set of audio events and visual events in a particular environment, for determining statistical anomalies, for automatically clustering these detected events into meaningful clusters, and for learning salient temporal relationships between the audio and visual events.  This results in a compact description of the different types of compound audio-visual events in an environment.
</description>
<pubDate>Tue, 20 Sep 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30568</guid>
<dc:date>2005-09-20T00:00:00Z</dc:date>
</item>
<item>
<title>LabelMe: a database and web-based tool for image annotation</title>
<link>https://hdl.handle.net/1721.1/30567</link>
<description>LabelMe: a database and web-based tool for image annotation
Russell, Bryan C.; Torralba, Antonio; Murphy, Kevin P.; Freeman, William T.
Research in object detection and recognition in cluttered scenes requires large image collections with ground truth labels.  The labels should provide information about the object classes present in each image, as well as their shape and locations, and possibly other attributes such as pose.  Such data is useful for testing, as well as for supervised learning.  This project provides a web-based annotation tool that makes it easy to annotate images, and to instantly share such annotations with the community.  This tool, plus an initial set of 10,000 images (3000 of which have been labeled), can be found at http://www.csail.mit.edu/~brussell/research/LabelMe/intro.html
</description>
<pubDate>Thu, 08 Sep 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30567</guid>
<dc:date>2005-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Using Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol</title>
<link>https://hdl.handle.net/1721.1/30566</link>
<description>Using Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol
Canetti, Ran; Cheung, Ling; Kaynar, Dilsun; Liskov, Moses; Lynch, Nancy; Olivier; Segala, Roberto
We demonstrate how to carry out cryptographic security analysis of distributed protocols within the Probabilistic I/O Automata framework of Lynch, Segala, and Vaandrager. This framework provides tools for arguing rigorously about the concurrency and scheduling aspects of protocols, and about protocols presented at different levels of abstraction. Consequently, it can help in making cryptographic analysis more precise and less susceptible to errors. We concentrate on a relatively simple two-party Oblivious Transfer protocol, in the presence of a semi-honest adversary (essentially, an eavesdropper). For the underlying cryptographic notion of security, we use a version of Canetti's Universally Composable security. In spite of the relative simplicity of the example, the exercise is quite nontrivial. It requires taking many fundamental issues into account, including nondeterministic behavior, scheduling, resource-bounded computation, and computational hardness assumptions for cryptographic primitives.
</description>
<pubDate>Fri, 19 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30566</guid>
<dc:date>2005-08-19T00:00:00Z</dc:date>
</item>
<item>
<title>Collective Choice with Uncertain Domain Models</title>
<link>https://hdl.handle.net/1721.1/30565</link>
<description>Collective Choice with Uncertain Domain Models
Richards, Whitman
When groups of individuals make choices among several alternatives, the most compelling social outcome is the Condorcet winner, namely the alternative beating all others in a pair-wise contest. Obviously the Condorcet winner cannot be overturned if one sub-group proposes another alternative it happens to favor.  However, in some cases, and especially with haphazard voting, there will be no clear unique winner, with the outcome consisting of a triple of pair-wise winners that each beat different subsets of the alternatives (i.e. a "top-cycle").  We explore the sensitivity of Condorcet winners to various perturbations in the voting process that lead to top-cycles. Surprisingly, variation in the number of votes for each alternative is much less important than consistency in a voter's view of how alternatives are related. As more and more voters' preference orderings on alternatives depart from a shared model of the domain, unique Condorcet outcomes become increasingly unlikely.
</description>
<pubDate>Tue, 16 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30565</guid>
<dc:date>2005-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Slicing the Onion: Anonymous Routing Without PKI</title>
<link>https://hdl.handle.net/1721.1/30564</link>
<description>Slicing the Onion: Anonymous Routing Without PKI
Katti, Sachin; Katabi, Dina; Puchala, Katarzyna
Recent years have witnessed many proposals for anonymous routing in overlay peer-to-peer networks. The proposed protocols either expose the receiver and the message content, or require the overlay nodes to have public-private key pairs with the public keys known to everyone. In practice, however, key distribution and management are well-known difficult problems and have crippled any widespread deployment of anonymous routing. This paper uses a combination of information slicing and source routing to provide anonymous communication in a way similar to Onion Routing but without a public key infrastructure (PKI).
</description>
<pubDate>Mon, 15 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30564</guid>
<dc:date>2005-08-15T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Stabilizing Mobile Node Location Management and Message Routing</title>
<link>https://hdl.handle.net/1721.1/30563</link>
<description>Self-Stabilizing Mobile Node Location Management and Message Routing
Dolev, Shlomi; Lahiani, Limor; Lynch, Nancy; Nolte, Tina
We present simple algorithms for achieving self-stabilizing location management and routing in mobile ad-hoc networks. While mobile clients may be susceptible to corruption and stopping failures, mobile networks are often deployed with a reliable GPS oracle, supplying frequent updates of accurate real time and location information to mobile nodes. Information from a GPS oracle provides an external, shared source of consistency for mobile nodes, allowing them to label and timestamp messages, and hence aiding in identification of, and eventual recovery from, corruption and failures. Our algorithms use a GPS oracle. Our algorithms also take advantage of the Virtual Stationary Automata programming abstraction, consisting of mobile clients, virtual timed machines called virtual stationary automata (VSAs), and a local broadcast service connecting VSAs and mobile clients. VSAs are distributed at known locations over the plane, and emulated in a self-stabilizing manner by the mobile nodes in the system. They serve as fault-tolerant building blocks that can interact with mobile clients and each other, and can simplify implementations of services in mobile networks. We implement three self-stabilizing, fault-tolerant services, each built on the prior services: (1) VSA-to-VSA geographic routing, (2) mobile client location management, and (3) mobile client end-to-end routing. We use a greedy version of the classical depth-first search algorithm to route messages between VSAs in different regions. The mobile client location management service is based on home locations: each client identifier hashes to a set of home locations, regions whose VSAs are periodically updated with the client's location. VSAs maintain this information and answer queries for client locations. Finally, the VSA-to-VSA routing and location management services are used to implement mobile client end-to-end routing.
</description>
<pubDate>Thu, 11 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30563</guid>
<dc:date>2005-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Probabilistically Checkable Proofs of Proximity</title>
<link>https://hdl.handle.net/1721.1/30562</link>
<description>Implementing Probabilistically Checkable Proofs of Proximity
Bhattacharyya, Arnab
In this paper, we describe a proof-of-concept implementation of the probabilistically checkable proof of proximity (PCPP) system described by Ben-Sasson and Sudan [BS05].  In particular, we implement a PCPP prover and verifier for Reed-Solomon codes; the prover converts an evaluation of a polynomial on a linear set into a valid PCPP, while the verifier queries the evaluation and the PCPP to check that the evaluation is close to a Reed-Solomon codeword.  We prove tight bounds on the various parameters associated with the prover and verifier and describe some interesting programmatic issues that arise during their implementation.
</description>
<pubDate>Mon, 08 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30562</guid>
<dc:date>2005-08-08T00:00:00Z</dc:date>
</item>
<item>
<title>On Algorithms and Complexity for Sets with Cardinality Constraints</title>
<link>https://hdl.handle.net/1721.1/30561</link>
<description>On Algorithms and Complexity for Sets with Cardinality Constraints
Marnette, Bruno; Kuncak, Viktor; Rinard, Martin
Typestate systems ensure many desirable properties of imperative programs, including initialization of object fields and correct use of stateful library interfaces.  Abstract sets with cardinality constraints naturally generalize typestate properties: relationships between the typestates of objects can be expressed as subset and disjointness relations on sets, and elements of sets can be represented as sets of cardinality one.  In addition, sets with cardinality constraints provide a natural language for specifying operations and invariants of data structures. Motivated by these program analysis applications, this paper presents new algorithms and new complexity results for constraints on sets and their cardinalities.  We study several classes of constraints and demonstrate a trade-off between their expressive power and their complexity. Our first result concerns a quantifier-free fragment of Boolean Algebra with Presburger Arithmetic.  We give a nondeterministic polynomial-time algorithm for reducing the satisfiability of sets with symbolic cardinalities to constraints on constant cardinalities, and give a polynomial-space algorithm for the resulting problem.  The best previously existing algorithm runs in exponential space and nondeterministic exponential time. In a quest for more efficient fragments, we identify several subclasses of sets with cardinality constraints whose satisfiability is NP-hard.  Finally, we identify a class of constraints that has polynomial-time satisfiability and entailment problems and can serve as a foundation for efficient program analysis.  We give a system of rewriting rules for enforcing certain consistency properties of these constraints and show how to extract complete information from constraints in normal form.  This result implies the soundness and completeness of our algorithms.
</description>
<pubDate>Wed, 03 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30561</guid>
<dc:date>2005-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>How to Construct a Correct and Scalable iBGP Configuration</title>
<link>https://hdl.handle.net/1721.1/30560</link>
<description>How to Construct a Correct and Scalable iBGP Configuration
Vutukuru, Mythili; Valiant, Paul; Kopparty, Swastik; Balakrishnan, Hari
The Border Gateway Protocol (BGP), the current interdomain routing protocol in the Internet, has two modes of operation: eBGP (External BGP), used to exchange routing information between autonomous systems, and iBGP (Internal BGP), used to propagate that information within an autonomous system (AS).  This paper focuses on the construction of an iBGP session configuration that guarantees two correctness properties - loop-free forwarding paths and complete visibility to all eBGP-learned best routes - while attempting to minimize the number of iBGP sessions (for scalability) and ensuring that the constructed configuration guarantees the two correctness properties even in the face of link failures and IGP path changes.  Our algorithm constructs an iBGP configuration based on route reflectors, a commonly used way to control the number of iBGP sessions.  The algorithm, BGPSep, uses the notion of a graph separator, a (small) set of nodes that partitions a graph into connected components of roughly equal sizes, recursively applies this idea to the connected components, and produces a route reflector hierarchy and the associated iBGP sessions.  We prove that BGPSep guarantees the desired correctness properties, and evaluate an implementation of the BGPSep algorithm on several real-world and simulated network topologies.  Across these topologies, we find that the number of iBGP sessions with BGPSep is a factor of 2.5 to 5 times smaller than with a "full mesh" iBGP, while guaranteeing the desired correctness properties.
</description>
<pubDate>Wed, 03 Aug 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30560</guid>
<dc:date>2005-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Proving Atomicity: An Assertional Approach</title>
<link>https://hdl.handle.net/1721.1/30559</link>
<description>Proving Atomicity: An Assertional Approach
Chockler, Gregory; Lynch, Nancy; Mitra, Sayan; Tauber, Joshua
Atomicity (or linearizability) is a commonly used consistency criterion for distributed services and objects. Although atomic object implementations are abundant, proving that algorithms achieve atomicity has turned out to be a challenging problem. In this paper, we initiate the study of systematic ways of verifying distributed implementations of atomic objects, beginning with read/write objects (registers). Our general approach is to replace the existing operational reasoning about events and partial orders with assertional reasoning about invariants and simulation relations. To this end, we define an abstract state machine that captures the atomicity property and prove correctness of the object implementations by establishing a simulation mapping between the implementation and the specification automata. We demonstrate the generality of our specification by showing that it is implemented by three different read/write register constructions (the message-passing register emulation of Attiya, Bar-Noy and Dolev, its optimized version based on real time, and the shared memory register construction of Vitanyi and Awerbuch), and by a general atomic object implementation based on Lamport's replicated state machine algorithm.
</description>
<pubDate>Fri, 22 Jul 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30559</guid>
<dc:date>2005-07-22T00:00:00Z</dc:date>
</item>
<item>
<title>Byzantine Clients Rendered Harmless</title>
<link>https://hdl.handle.net/1721.1/30558</link>
<description>Byzantine Clients Rendered Harmless
Liskov, Barbara; Rodrigues, Rodrigo
Byzantine quorum systems have been proposed that work properly even when up to f replicas fail arbitrarily. However, these systems are not so successful when confronted with Byzantine faulty clients. This paper presents novel protocols that provide atomic semantics despite Byzantine clients. Our protocols are the first to handle all problems caused by Byzantine clients. They prevent Byzantine clients from interfering with good clients: bad clients cannot prevent good clients from completing reads and writes, and they cannot cause good clients to see inconsistencies. In addition we also prevent bad clients that have been removed from operation from leaving behind more than a bounded number of writes that could be done on their behalf by a colluder. Our protocols are designed to work in an asynchronous system like the Internet and they are highly efficient. We require 3f+1 replicas, and either two or three phases to do writes; reads normally complete in one phase and require no more than two phases, no matter what the bad clients are doing. We also present strong correctness conditions for systems with Byzantine clients that limit what can be done on behalf of bad clients once they leave the system. Furthermore we prove that our protocols are both safe (they meet those conditions) and live.
</description>
<pubDate>Thu, 21 Jul 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30558</guid>
<dc:date>2005-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Boosting a Biologically Inspired Local Descriptor for Geometry-free Face and Full Multi-view 3D Object Recognition</title>
<link>https://hdl.handle.net/1721.1/30557</link>
<description>Boosting a Biologically Inspired Local Descriptor for Geometry-free Face and Full Multi-view 3D Object Recognition
Yokono, Jerry Jun; Poggio, Tomaso
Object recognition systems relying on local descriptors are increasingly used because of their perceived robustness with respect to occlusions and to global geometrical deformations.  Descriptors of this type -- based on a set of oriented Gaussian derivative filters -- are used in our recognition system.  In this paper, we explore a multi-view 3D object recognition system that does not use explicit geometrical information. The basic idea is to find discriminant features to describe an object across different views.  A boosting procedure is used to select features out of a large feature pool of local features collected from the positive training examples.  We describe experiments on face images with excellent recognition rates.
</description>
<pubDate>Thu, 07 Jul 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30557</guid>
<dc:date>2005-07-07T00:00:00Z</dc:date>
</item>
<item>
<title>Ultra-fast Object Recognition from Few Spikes</title>
<link>https://hdl.handle.net/1721.1/30556</link>
<description>Ultra-fast Object Recognition from Few Spikes
Hung, Chou; Kreiman, Gabriel; Poggio, Tomaso; DiCarlo, James J.
Understanding the complex brain computations leading to object recognition requires quantitatively characterizing the information represented in inferior temporal cortex (IT), the highest stage of the primate visual stream. A read-out technique based on a trainable classifier is used to characterize the neural coding of selectivity and invariance at the population level. The activity of very small populations of independently recorded IT neurons (~100 randomly selected cells) over very short time intervals (as small as 12.5 ms) contains surprisingly accurate and robust information about both object 'identity' and 'category', which is furthermore highly invariant to object position and scale. Significantly, selectivity and invariance are present even for novel objects, indicating that these properties arise from the intrinsic circuitry and do not require object-specific learning. Within the limits of the technique, there is no detectable difference in the latency or temporal resolution of the IT information supporting so-called 'categorization' (a.k.a. basic level) and 'identification' (a.k.a. subordinate level) tasks.  Furthermore, 'where' information, in particular information about stimulus location and scale, can also be read out from the same small population of IT neurons. These results show how it is possible to decode invariant object information rapidly, accurately and robustly from a small population in IT and provide insights into the nature of the neural code for different kinds of object-related information.
</description>
<pubDate>Wed, 06 Jul 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30556</guid>
<dc:date>2005-07-06T00:00:00Z</dc:date>
</item>
<item>
<title>Etna: a Fault-tolerant Algorithm for Atomic Mutable DHT Data</title>
<link>https://hdl.handle.net/1721.1/30555</link>
<description>Etna: a Fault-tolerant Algorithm for Atomic Mutable DHT Data
Muthitacharoen, Athicha; Gilbert, Seth; Morris, Robert
This paper presents Etna, an algorithm for atomic reads and writes of replicated data stored in a distributed hash table. Etna correctly handles dynamically changing sets of replica hosts, and is optimized for reads, writes, and reconfiguration, in that order. Etna maintains a series of replica configurations as nodes in the system change, using new sets of replicas from the pool supplied by the distributed hash table system. It uses the Paxos protocol to ensure consensus on the members of each new configuration. For simplicity and performance, Etna serializes all reads and writes through a primary during the lifetime of each configuration. As a result, Etna completes read and write operations in only a single round from the primary. Experiments in an environment with high network delays show that Etna's read latency is determined by round-trip delay in the underlying network, while write and reconfiguration latency is determined by the transmission time required to send data to each replica. Etna's write latency is about the same as that of a non-atomic replicating DHT, and Etna's read latency is about twice that of a non-atomic DHT due to Etna assembling a quorum for every read.
</description>
<pubDate>Wed, 15 Jun 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30555</guid>
<dc:date>2005-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous Virtual Mobile Nodes</title>
<link>https://hdl.handle.net/1721.1/30554</link>
<description>Autonomous Virtual Mobile Nodes
Dolev, Shlomi; Gilbert, Seth; Schiller, Elad; Shvartsman, Alex; Welch, Jennifer
This paper presents a new abstraction for virtual infrastructure in mobile ad hoc networks. An Autonomous Virtual Mobile Node (AVMN) is a robust and reliable entity that is designed to cope with the inherent difficulties caused by processors arriving, leaving, and moving according to their own agendas, as well as with failures and energy limitations. There are many types of applications that may make use of the AVMN infrastructure: tracking, supporting mobile users, or searching for energy sources. The AVMN extends the focal point abstraction in [9] and the virtual mobile node abstraction in [10]. The new abstraction is that of a virtual general-purpose computing entity, an automaton that can make autonomous on-line decisions concerning its own movement. We describe a self-stabilizing implementation of this new abstraction that is resilient to the chaotic behavior of the physical processors and provides automatic recovery from any corrupted state of the system.
</description>
<pubDate>Wed, 15 Jun 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30554</guid>
<dc:date>2005-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Test Factoring for Java</title>
<link>https://hdl.handle.net/1721.1/30553</link>
<description>Automatic Test Factoring for Java
Saff, David; Artzi, Shay; Perkins, Jeff H.; Ernst, Michael D.
Test factoring creates fast, focused unit tests from slow system-wide tests; each new unit test exercises only a subset of the functionality exercised by the system test.  Augmenting a test suite with factored unit tests should catch errors earlier in a test run. One way to factor a test is to introduce 'mock' objects.  If a test exercises a component T, which interacts with another component E (the 'environment'), the implementation of E can be replaced by a mock. The mock checks that T's calls to E are as expected, and it simulates E's behavior in response.  We introduce an automatic technique for test factoring.  Given a system test for T and E, and a record of T's and E's behavior when the system test is run, test factoring generates unit tests for T in which E is mocked.  The factored tests can isolate bugs in T from bugs in E and, if E is slow or expensive, improve test performance or cost. We have built an implementation of automatic dynamic test factoring for the Java language.  Our experimental data indicates that it can reduce the running time of a system test suite by up to an order of magnitude.
</description>
<pubDate>Wed, 08 Jun 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30553</guid>
<dc:date>2005-06-08T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear Latent Variable Models for Video Sequences</title>
<link>https://hdl.handle.net/1721.1/30552</link>
<description>Nonlinear Latent Variable Models for Video Sequences
Rahimi, Ali; Recht, Ben; Darrell, Trevor
Many high-dimensional time-varying signals can be modeled as a sequence of noisy nonlinear observations of a low-dimensional dynamical process. Given high-dimensional observations and a distribution describing the dynamical process, we present a computationally inexpensive approximate algorithm for estimating the inverse of this mapping. Once this mapping is learned, we can invert it to construct a generative model for the signals. Our algorithm can be thought of as learning a manifold of images by taking into account the dynamics underlying the low-dimensional representation of these images. It also serves as a nonlinear system identification procedure that estimates the inverse of the observation function in a nonlinear dynamical system. Our algorithm reduces to a generalized eigenvalue problem, so it does not suffer from the computational or local-minimum issues traditionally associated with nonlinear system identification, allowing us to apply it to the problem of learning generative models for video sequences.
</description>
<pubDate>Mon, 06 Jun 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30552</guid>
<dc:date>2005-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Theoretical Analysis of Geographic Routing in Social Networks</title>
<link>https://hdl.handle.net/1721.1/30551</link>
<description>Theoretical Analysis of Geographic Routing in Social Networks
Kumar, Ravi; Liben-Nowell, David; Novak, Jasmine; Raghavan, Prabhakar; Tomkins, Andrew
We introduce a formal model for geographic social networks and the notion of rank-based friendship, in which the probability that a person v is a friend of a person u is inversely proportional to the number of people w who live closer to u than v does.  We then prove our main theorem, showing that rank-based friendship is a sufficient explanation of the navigability of any geographic social network that adheres to it.
</description>
<pubDate>Fri, 03 Jun 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30551</guid>
<dc:date>2005-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>A Novel Active Contour Framework: Multi-component Level Set Evolution under Topology Control</title>
<link>https://hdl.handle.net/1721.1/30550</link>
<description>A Novel Active Contour Framework: Multi-component Level Set Evolution under Topology Control
Segonne, Florent; Pons, Jean-Philippe; Fischl, Bruce; Grimson, Eric
We present a novel framework to exert a topology control over a level set evolution. Level set methods offer several advantages over parametric active contours, in particular automated topological changes. In some applications, where some a priori knowledge of the target topology is available, topological changes may not be desirable. A method, based on the concept of simple point borrowed from digital topology, was recently proposed to achieve a strict topology preservation during a level set evolution. However, topologically constrained evolutions often generate topological barriers that lead to large geometric inconsistencies. We introduce a topologically controlled level set framework that greatly alleviates this problem. Unlike existing work, our method allows connected components to merge, split or vanish under some specific conditions that ensure that no topological defects are generated. We demonstrate the strength of our method on a wide range of numerical experiments.
</description>
<pubDate>Wed, 01 Jun 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30550</guid>
<dc:date>2005-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simultaneous Localization and Tracking in Wireless Ad-hoc Sensor Networks</title>
<link>https://hdl.handle.net/1721.1/30549</link>
<description>Simultaneous Localization and Tracking in Wireless Ad-hoc Sensor Networks
Taylor, Christopher J.
In this thesis we present LaSLAT, a sensor network algorithm that simultaneously localizes sensors, calibrates sensing hardware, and tracks unconstrained moving targets using only range measurements between the sensors and the target. LaSLAT is based on a Bayesian filter, which updates a probability distribution over the quantities of interest as measurements arrive. The algorithm is distributable, and requires only a constant amount of space with respect to the number of measurements incorporated. LaSLAT is easy to adapt to new types of hardware and new physical environments due to its use of intuitive probability distributions: one adaptation demonstrated in this thesis uses a mixture measurement model to detect and compensate for bad acoustic range measurements due to echoes. We also present results from a centralized Java implementation of LaSLAT on both two- and three-dimensional sensor networks in which ranges are obtained using the Cricket ranging system. LaSLAT is able to localize sensors to within several centimeters of their ground-truth positions while recovering a range measurement bias for each sensor and the complete trajectory of the mobile target.
</description>
<pubDate>Tue, 31 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30549</guid>
<dc:date>2005-05-31T00:00:00Z</dc:date>
</item>
<item>
<title>Empirical Effective Dimension and Optimal Rates for Regularized Least Squares Algorithm</title>
<link>https://hdl.handle.net/1721.1/30548</link>
<description>Empirical Effective Dimension and Optimal Rates for Regularized Least Squares Algorithm
Caponnetto, Andrea; Rosasco, Lorenzo; Vito, Ernesto De; Verri, Alessandro
This paper presents an approach to model selection for regularized least-squares on reproducing kernel Hilbert spaces in the semi-supervised setting.  The role of effective dimension was recently shown to be crucial in the definition of a rule for the choice of the regularization parameter, attaining asymptotically optimal performance in a minimax sense.  The main goal of the present paper is to show how the effective dimension can be replaced by an empirical counterpart while conserving optimality.  The empirical effective dimension can be computed from independent unlabelled samples.  This makes the approach particularly appealing in the semi-supervised setting.
</description>
<pubDate>Fri, 27 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30548</guid>
<dc:date>2005-05-27T00:00:00Z</dc:date>
</item>
<item>
<title>Comparing Visual Features for Morphing Based Recognition</title>
<link>https://hdl.handle.net/1721.1/30547</link>
<description>Comparing Visual Features for Morphing Based Recognition
Wu, Jia Jane
This thesis presents a method of object classification using the idea of deformable shape matching.  Three types of visual features, geometric blur, C1 and SIFT, are used to generate feature descriptors.  These feature descriptors are then used to find point correspondences between pairs of images.  Various morphable models are created from small subsets of these correspondences using thin-plate splines.  Given these morphs, a simple algorithm, least median of squares (LMEDS), is used to find the best morph.  A scoring metric, using both LMEDS and distance transform, is used to classify test images based on a nearest neighbor algorithm.  We perform the experiments on the Caltech 101 dataset [5].  To ease computation, for each test image, a shortlist is created containing 10 of the most likely candidates.  We were unable to duplicate the performance of [1] in the shortlist stage because we did not use hand-segmentation to extract objects for our training images.  However, our gain from the shortlist to correspondence stage is comparable to theirs.  In our experiments, we improved from 21% to 28% (a gain of 33%), while [1] improved from 41% to 48% (a gain of 17%).  We find that using a non-shape-based approach, C2 [14], the overall classification rate of 33.61% is higher than all of the shape-based methods tested in our experiments.
</description>
<pubDate>Wed, 25 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30547</guid>
<dc:date>2005-05-25T00:00:00Z</dc:date>
</item>
<item>
<title>Lexical Chains and Sliding Locality Windows in Content-based Text Similarity Detection</title>
<link>https://hdl.handle.net/1721.1/30546</link>
<description>Lexical Chains and Sliding Locality Windows in Content-based Text Similarity Detection
Nahnsen, Thade; Uzuner, Ozlem; Katz, Boris
We present a system to determine content similarity of documents. More specifically, our goal is to identify book chapters that are translations of the same original chapter; this task requires identification of not only the different topics in the documents but also the particular flow of these topics. We experiment with different representations employing n-grams of lexical chains and test these representations on a corpus of approximately 1000 chapters gathered from books with multiple parallel translations.  Our representations include the cosine similarity of attribute vectors of n-grams of lexical chains, the cosine similarity of tf*idf-weighted keywords, and the cosine similarity of unweighted lexical chains (unigrams of lexical chains) as well as multiplicative combinations of the similarity measures produced by these approaches. Our results identify fourgrams of unordered lexical chains as a particularly useful representation for text similarity evaluation.
</description>
<pubDate>Thu, 19 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30546</guid>
<dc:date>2005-05-19T00:00:00Z</dc:date>
</item>
<item>
<title>Some Properties of Empirical Risk Minimization over Donsker Classes</title>
<link>https://hdl.handle.net/1721.1/30545</link>
<description>Some Properties of Empirical Risk Minimization over Donsker Classes
Caponnetto, Andrea; Rakhlin, Alexander
We study properties of algorithms which minimize (or almost minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers is converging to zero in probability. Therefore, as the number of samples grows, it is becoming unlikely that adding a point (or a number of points) to the training set will result in a large jump (in L2 distance) to a new hypothesis. We also show that under some conditions the expected errors of the almost-minimizers are becoming close with a rate faster than n^{-1/2}.
</description>
<pubDate>Tue, 17 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30545</guid>
<dc:date>2005-05-17T00:00:00Z</dc:date>
</item>
<item>
<title>A Region-based Architecture for Service-Providing Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/30544</link>
<description>A Region-based Architecture for Service-Providing Distributed Systems
Singh, Neha
A service-providing system consists of hosts that provide services such as data, content, computational and memory resources, and data-based services to other entities in the system. Consumers that wish to use services describe their needs with a set of high-level objectives. In this thesis, we address the problem of locating services in a large-scale distributed system using their descriptions, rather than their addresses. We propose a network architecture that is based on the concept of dividing the service-providing hosts into Regions. A Region is a grouping of elements of the network that share a set of common characteristics and policies. Members of a region manage their interactions with other regions and their elements according to some defined rules and policies. Hosts can be divided into regions based on various properties such as their content, their commercial model, or their security characteristics, to name a few. The service provided by a region is an aggregate of the services provided by all its member hosts. The region-based architecture routes a service request through the network efficiently based on its description and on the advertisements from regions providing services. Division of hosts into a set of independent regions partitions the search space and produces a scalable structure. The architecture also does not impose any rules on the internal organization of regions, making the system flexible and dynamic.
</description>
<pubDate>Tue, 17 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30544</guid>
<dc:date>2005-05-17T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Bounds for Regularized Least-squares Algorithm with Operator-valued kernels</title>
<link>https://hdl.handle.net/1721.1/30543</link>
<description>Risk Bounds for Regularized Least-squares Algorithm with Operator-valued kernels
Vito, Ernesto De; Caponnetto, Andrea
We show that recent results in [3] on risk bounds for regularized least-squares on reproducing kernel Hilbert spaces can be straightforwardly extended to the vector-valued regression setting.  We first briefly introduce central concepts on operator-valued kernels.  Then we show how risk bounds can be expressed in terms of a generalization of effective dimension.
</description>
<pubDate>Mon, 16 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30543</guid>
<dc:date>2005-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient, Verifiable Binary Sandboxing for a CISC Architecture</title>
<link>https://hdl.handle.net/1721.1/30542</link>
<description>Efficient, Verifiable Binary Sandboxing for a CISC Architecture
McCamant, Stephen; Morrisett, Greg
Executing untrusted code while preserving security requires enforcement of memory and control-flow safety policies: untrusted code must be prevented from modifying memory or executing code except as explicitly allowed.  Software-based fault isolation (SFI) or "sandboxing" enforces those policies by rewriting the untrusted code at the level of individual instructions.  However, the original sandboxing technique of Wahbe et al. is applicable only to RISC architectures, and other previous work is either insecure or has not been described in enough detail to give confidence in its security properties.  We present a novel technique that allows sandboxing to be easily applied to a CISC architecture like the IA-32.  The technique can be verified to have been applied at load time, so that neither the rewriting tool nor the compiler needs to be trusted.  We describe a prototype implementation which provides a robust security guarantee, is scalable to programs of any size, and has low runtime overheads.  Further, we give a machine-checked proof that any program approved by the verification algorithm is guaranteed to respect the desired safety property.
</description>
<pubDate>Mon, 02 May 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30542</guid>
<dc:date>2005-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Simultaneous Localization, Calibration, and Tracking in an ad Hoc Sensor Network</title>
<link>https://hdl.handle.net/1721.1/30541</link>
<description>Simultaneous Localization, Calibration, and Tracking in an ad Hoc Sensor Network
Taylor, Christopher; Rahimi, Ali; Bachrach, Jonathan; Shrobe, Howard
We introduce Simultaneous Localization and Tracking (SLAT), the  problem of tracking a target in a sensor network while  simultaneously localizing and calibrating the nodes of the network.  Our proposed solution, LaSLAT, is a Bayesian filter providing  on-line probabilistic estimates of sensor locations and target  tracks. It does not require globally accessible beacon signals or  accurate ranging between the nodes.  When applied to a network of 27  sensor nodes, our algorithm can localize the nodes to within one or  two centimeters.
</description>
<pubDate>Tue, 26 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30541</guid>
<dc:date>2005-04-26T00:00:00Z</dc:date>
</item>
<item>
<title>Gestural Cues for Sentence Segmentation</title>
<link>https://hdl.handle.net/1721.1/30540</link>
<description>Gestural Cues for Sentence Segmentation
Eisenstein, Jacob; Davis, Randall
In human-human dialogues, face-to-face meetings are often preferred over phone conversations. One explanation is that non-verbal modalities such as gesture provide additional information, making communication more efficient and accurate. If so, computer processing of natural language could improve by attending to non-verbal modalities as well. We consider the problem of sentence segmentation, using hand-annotated gesture features to improve recognition. We find that gesture features correlate well with sentence boundaries, but that these features improve the overall performance of a language-only system only marginally. This finding is in line with previous research on this topic. We provide a regression analysis, revealing that for sentence boundary detection, the gestural features are largely redundant with the language model and pause features. This suggests that gestural features can still be useful when speech recognition is inaccurate.
</description>
<pubDate>Tue, 19 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30540</guid>
<dc:date>2005-04-19T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Rates for Regularized Least-squares Algorithm</title>
<link>https://hdl.handle.net/1721.1/30539</link>
<description>Fast Rates for Regularized Least-squares Algorithm
Caponnetto, Andrea; Vito, Ernesto De
We develop a theoretical analysis of the generalization performance of regularized least-squares on reproducing kernel Hilbert spaces for supervised learning.  We show that the concept of effective dimension of an integral operator plays a central role in the definition of a criterion for the choice of the regularization parameter as a function of the number of samples.  In fact, a minimax analysis is performed which shows the asymptotic optimality of the above-mentioned criterion.
</description>
<pubDate>Thu, 14 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30539</guid>
<dc:date>2005-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Learning From Snapshot Examples</title>
<link>https://hdl.handle.net/1721.1/30538</link>
<description>Learning From Snapshot Examples
Beal, Jacob
Examples are a powerful tool for teaching both humans and computers. In order to learn from examples, however, a student must first extract the examples from its stream of perception. Snapshot learning is a general approach to this problem, in which relevant samples of perception are used as examples.  Learning from these examples can in turn improve the judgement of the snapshot mechanism, improving the quality of future examples.  One way to implement snapshot learning is the Top-Cliff heuristic, which identifies relevant samples using a generalized notion of peaks. I apply snapshot learning with the Top-Cliff heuristic to solve a distributed learning problem and show that the resulting system learns rapidly and robustly, and can hallucinate useful examples in a perceptual stream from a teacherless system.
</description>
<pubDate>Wed, 13 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30538</guid>
<dc:date>2005-04-13T00:00:00Z</dc:date>
</item>
<item>
<title>De-Emphasis of Distracting Image Regions Using Texture Power Maps</title>
<link>https://hdl.handle.net/1721.1/30537</link>
<description>De-Emphasis of Distracting Image Regions Using Texture Power Maps
Su, Sara L.; Durand, Fredo; Agrawala, Maneesh
A major obstacle in photography is the presence of distracting elements that pull attention away from the main subject and clutter the composition. In this article, we present a new image-processing technique that reduces the salience of distracting regions. It is motivated by computational models of attention that predict that texture variation influences bottom-up attention mechanisms. Our method reduces the spatial variation of texture using power maps, high-order features describing local frequency content in an image. We show how modification of power maps results in  powerful image de-emphasis. We validate our results using a user search experiment and eye tracking data.
</description>
<pubDate>Tue, 12 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30537</guid>
<dc:date>2005-04-12T00:00:00Z</dc:date>
</item>
<item>
<title>Construction by robot swarms using extended stigmergy</title>
<link>https://hdl.handle.net/1721.1/30536</link>
<description>Construction by robot swarms using extended stigmergy
Werfel, Justin; Bar-Yam, Yaneer; Nagpal, Radhika
We describe a system in which simple, identical, autonomous robots assemble two-dimensional structures out of identical building blocks.  We show that, in a system divided in this way into mobile units and structural units, giving the blocks limited communication abilities enables robots to have sufficient global structural knowledge to rapidly build elaborate pre-designed structures.  In this way we extend the principle of stigmergy (storing information in the environment) used by social insects, by increasing the capabilities of the blocks that represent that environmental information.  As a result, arbitrary solid structures can be built using a few fixed, local behaviors, without requiring construction to be planned out in detail.
</description>
<pubDate>Fri, 08 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30536</guid>
<dc:date>2005-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Motion Coordination Using Virtual Nodes</title>
<link>https://hdl.handle.net/1721.1/30535</link>
<description>Motion Coordination Using Virtual Nodes
Lynch, Nancy; Mitra, Sayan; Nolte, Tina
We describe how a virtual node abstraction layer can be used to coordinate the motion of real mobile nodes in a region of 2-space. In particular, we consider how nodes in a mobile ad hoc network can arrange themselves along a predetermined curve in the plane, and can maintain themselves in such a configuration in the presence of changes in the underlying mobile ad hoc network, specifically, when nodes may join or leave the system or may fail. Our strategy is to allow the mobile nodes to implement a virtual layer consisting of mobile client nodes, stationary Virtual Nodes (VNs) for predetermined zones in the plane, and local broadcast communication.  The VNs coordinate among themselves to distribute the client nodes between zones based on the length of the curve through those zones, while each VN directs its zone's local client nodes to move themselves to equally spaced locations on the local portion of the target curve.
</description>
<pubDate>Wed, 06 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30535</guid>
<dc:date>2005-04-06T00:00:00Z</dc:date>
</item>
<item>
<title>On Relational Analysis of Algebraic Datatypes</title>
<link>https://hdl.handle.net/1721.1/30534</link>
<description>On Relational Analysis of Algebraic Datatypes
Kuncak, Viktor; Jackson, Daniel
We present a technique that enables the use of finite model finding to check the satisfiability of certain formulas whose intended models are infinite.  Such formulas arise when using the language of sets and relations to reason about structured values such as algebraic datatypes.  The key idea of our technique is to identify a natural syntactic class of formulas in relational logic for which reasoning about infinite structures can be reduced to reasoning about finite structures.  As a result, when a formula belongs to this class, we can use existing finite model finding tools to check whether the formula holds in the desired infinite model.
</description>
<pubDate>Tue, 05 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30534</guid>
<dc:date>2005-04-05T00:00:00Z</dc:date>
</item>
<item>
<title>Wait-free Regular Storage from Byzantine Components</title>
<link>https://hdl.handle.net/1721.1/30533</link>
<description>Wait-free Regular Storage from Byzantine Components
Abraham, Ittai; Chockler, Gregory; Keidar, Idit; Malkhi, Dahlia
We present a simple, efficient, and self-contained construction of a wait-free regular register from Byzantine storage components.  Our construction utilizes a novel building block, called 1-regular register, which can be implemented from Byzantine fault-prone components with the same round complexity as a safe register, and with only a slight increase in storage space.
</description>
<pubDate>Tue, 05 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30533</guid>
<dc:date>2005-04-05T00:00:00Z</dc:date>
</item>
<item>
<title>An Expectation Maximization Approach for Integrated Registration, Segmentation, and Intensity Correction</title>
<link>https://hdl.handle.net/1721.1/30532</link>
<description>An Expectation Maximization Approach for Integrated Registration, Segmentation, and Intensity Correction
Pohl, Kilian M.; Fisher, John; Grimson, W. Eric L.; Wells, William M.
This paper presents a statistical framework which combines the registration of an atlas with the segmentation of MR images. We use an Expectation Maximization-based algorithm to find a solution within the model, which simultaneously estimates image inhomogeneities, the anatomical labelmap, and a mapping from the atlas to the image space. An example of the approach is given for a brain structure-dependent affine mapping. The algorithm produces high-quality segmentations for brain tissues as well as their substructures. We demonstrate the approach on a set of 30 brain MR images. In addition, we show that the approach performs better than similar methods which separate the registration from the segmentation problem.
</description>
<pubDate>Fri, 01 Apr 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30532</guid>
<dc:date>2005-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Variable Selection with Dimensionality Reduction</title>
<link>https://hdl.handle.net/1721.1/30531</link>
<description>Combining Variable Selection with Dimensionality Reduction
Wolf, Lior; Bileschi, Stanley
This paper bridges the gap between variable selection methods (e.g., Pearson coefficients, KS test) and dimensionality reduction algorithms (e.g., PCA, LDA). Variable selection algorithms encounter difficulties dealing with highly correlated data, since many features are similar in quality. Dimensionality reduction algorithms tend to combine all variables and cannot select a subset of significant variables. Our approach combines both methodologies by applying variable selection followed by dimensionality reduction. This combination makes sense only when using the same utility function in both stages, which we do. The resulting algorithm benefits from complex features as variable selection algorithms do, and at the same time enjoys the benefits of dimensionality reduction.
</description>
<pubDate>Wed, 30 Mar 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30531</guid>
<dc:date>2005-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Matrix Approximation and Projective Clustering via Iterative Sampling</title>
<link>https://hdl.handle.net/1721.1/30530</link>
<description>Matrix Approximation and Projective Clustering via Iterative Sampling
Rademacher, Luis; Vempala, Santosh; Wang, Grant
We present two new results for the problem of approximating a given real m by n matrix A by a rank-k matrix D, where k &lt; min{m, n}, so as to minimize ||A-D||_F^2.  It is known that by sampling O(k/eps) rows of the matrix, one can find a low-rank approximation with additive error eps||A||_F^2.  Our first result shows that with adaptive sampling in t rounds and O(k/eps) samples in each round, the additive error drops exponentially as eps^t; the computation time is nearly linear in the number of nonzero entries. This demonstrates that multiple passes can be highly beneficial for a natural (and widely studied) algorithmic problem. Our second result is that there exists a subset of O(k^2/eps) rows such that their span contains a rank-k approximation with multiplicative (1+eps) error (i.e., the sum of squares distance has a small "core-set" whose span determines a good approximation). This existence theorem leads to a PTAS for the following projective clustering problem: Given a set of points P in R^d, and integers k, j, find a set of j subspaces F_1,...,F_j, each of dimension at most k, that minimize \sum_{p \in P} min_i d(p,F_i)^2.
</description>
<pubDate>Tue, 29 Mar 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30530</guid>
<dc:date>2005-03-29T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Object and Feature Dynamics in Probabilistic Tracking</title>
<link>https://hdl.handle.net/1721.1/30529</link>
<description>Combining Object and Feature Dynamics in Probabilistic Tracking
Taycher, Leonid; Fisher III, John W.; Darrell, Trevor
Objects can exhibit different dynamics at different scales, a property that is often exploited by visual tracking algorithms. A local dynamic model is typically used to extract image features that are then used as inputs to a system for tracking the entire object using a global dynamic model. Approximate local dynamics may be brittle---point trackers drift due to image noise and adaptive background models adapt to foreground objects that become stationary---but constraints from the global model can make them more robust. We propose a probabilistic framework for incorporating global dynamics knowledge into the local feature extraction processes. A global tracking algorithm can be formulated as a generative model and used to predict feature values that influence the observation process of the feature extractor. We combine such models in a multichain graphical model framework. We show the utility of our framework for improving feature tracking and thus shape and motion estimates in a batch factorization algorithm. We also propose an approximate filtering algorithm appropriate for online applications, and demonstrate its application to problems such as background subtraction, structure from motion, and articulated body tracking.
</description>
<pubDate>Wed, 02 Mar 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30529</guid>
<dc:date>2005-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>Receptive field structures for recognition</title>
<link>https://hdl.handle.net/1721.1/30528</link>
<description>Receptive field structures for recognition
Balas, Benjamin; Sinha, Pawan
Localized operators, like Gabor wavelets and difference-of-Gaussian filters, are considered to be useful tools for image representation. This is due to their ability to form a 'sparse code' that can serve as a basis set for high-fidelity reconstruction of natural images. However, for many visual tasks, the more appropriate criterion of representational efficacy is 'recognition', rather than 'reconstruction'. It is unclear whether simple local features provide the stability necessary to subserve robust recognition of complex objects. In this paper, we search the space of two-lobed differential operators for those that constitute a good representational code under recognition/discrimination criteria. We find that a novel operator, which we call the 'dissociated dipole', displays useful properties in this regard. We describe simple computational experiments to assess the merits of such dipoles relative to the more traditional local operators. The results suggest that non-local operators constitute a vocabulary that is stable across a range of image transformations.
</description>
<pubDate>Tue, 01 Mar 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30528</guid>
<dc:date>2005-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>File Synchronization with Vector Time Pairs</title>
<link>https://hdl.handle.net/1721.1/30527</link>
<description>File Synchronization with Vector Time Pairs
Cox, Russ; Josephson, William
Vector time pairs are a new method for tracking synchronization metadata. A vector time pair consists of two vector times: one tracking file modification history and one tracking file synchronization history. Because the vector times are maintained separately and used for different purposes, different algorithms and optimizations can be applied to each. As a result, vector time pairs impose no restriction on synchronization patterns, never falsely detect conflicts, require no space to store deletion notices, require network bandwidth proportional only to the number of files changed, and support partial synchronizations. No other current synchronization method has all these properties. Results from an implementation of vector time pairs in a new user-level file synchronizer called Tra confirm the benefits of vector time pairs.
</description>
<pubDate>Mon, 28 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30527</guid>
<dc:date>2005-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Impossibility of boosting distributed service resilience</title>
<link>https://hdl.handle.net/1721.1/30526</link>
<description>Impossibility of boosting distributed service resilience
Attie, Paul; Guerraoui, Rachid; Kouznetsov, Petr; Lynch, Nancy; Rajsbaum, Sergio
We prove two theorems saying that no distributed system in which processes coordinate using reliable registers and f-resilient services can solve the consensus problem in the presence of f+1 undetectable process stopping failures. (A service is f-resilient if it is guaranteed to operate as long as no more than f of the processes connected to it fail.) Our first theorem assumes that the given services are atomic objects, and allows any connection pattern between processes and services. In contrast, we show that it is possible to boost the resilience of systems solving problems easier than consensus: the k-set consensus problem is solvable for 2k-1 failures using 1-resilient consensus services. The first theorem and its proof generalize to the larger class of failure-oblivious services. Our second theorem allows the system to contain failure-aware services, such as failure detectors, in addition to failure-oblivious services; however, it requires that each failure-aware service be connected to all processes. Thus, f+1 process failures overall can disable all the failure-aware services. In contrast, it is possible to boost the resilience of a system solving consensus if arbitrary patterns of connectivity are allowed between processes and failure-aware services: consensus is solvable for any number of failures using only 1-resilient 2-process perfect failure detectors.
</description>
<pubDate>Fri, 25 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30526</guid>
<dc:date>2005-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering object categories in image collections</title>
<link>https://hdl.handle.net/1721.1/30525</link>
<description>Discovering object categories in image collections
Sivic, Josef; Russell, Bryan C.; Efros, Alexei A.; Zisserman, Andrew; Freeman, William T.
Given a set of images containing multiple object categories, we seek to discover those categories and their image locations without supervision. We achieve this using generative models from the statistical text literature: probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA). In text analysis these are used to discover topics in a corpus using the bag-of-words document representation. Here we discover topics as object categories, so that an image containing instances of several categories is modelled as a mixture of topics. The models are applied to images by using a visual analogue of a word, formed by vector quantizing SIFT-like region descriptors. We investigate a set of increasingly demanding scenarios, starting with image sets containing only two object categories through to sets containing multiple categories (including airplanes, cars, faces, motorbikes, spotted cats) and background clutter. The object categories sample both intra-class and scale variation, and both the categories and their approximate spatial layout are found without supervision. We also demonstrate classification of unseen images and images containing multiple objects. Performance of the proposed unsupervised method is compared to the semi-supervised approach of Fergus et al.
</description>
<pubDate>Fri, 25 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30525</guid>
<dc:date>2005-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>Improving 802.11 Range with Forward Error Correction</title>
<link>https://hdl.handle.net/1721.1/30524</link>
<description>Improving 802.11 Range with Forward Error Correction
Riemann, Reina; Winstein, Keith
The ISO/IEC 8802-11:1999(E) specification uses a 32-bit CRC for error detection and whole-packet retransmissions for recovery. In long-distance or high-interference links where the probability of a bit error is high, this strategy results in excessive losses, because any erroneous bit causes an entire packet to be discarded. By ignoring the CRC and adding redundancy to 802.11 payloads in software, we achieved substantially reduced loss rates on indoor and outdoor long-distance links and extended line-of-sight range outdoors by 70 percent.
</description>
<pubDate>Thu, 24 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30524</guid>
<dc:date>2005-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity of finding Nash equilibria in 0-1 bimatrix games</title>
<link>https://hdl.handle.net/1721.1/30523</link>
<description>Complexity of finding Nash equilibria in 0-1 bimatrix games
Abbott, Tim; Kane, Daniel; Valiant, Paul
We exhibit a polynomial reduction from the problem of finding a Nash equilibrium of a bimatrix game with rational coefficients to the problem of finding a Nash equilibrium of a bimatrix game with 0-1 coefficients.
</description>
<pubDate>Tue, 08 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30523</guid>
<dc:date>2005-02-08T00:00:00Z</dc:date>
</item>
<item>
<title>Stable Policy Routing with Provider Independence</title>
<link>https://hdl.handle.net/1721.1/30522</link>
<description>Stable Policy Routing with Provider Independence
Feamster, Nick; Johari, Ramesh; Balakrishnan, Hari
Thousands of competing autonomous systems (ASes) must cooperate with each other to provide global Internet connectivity. These ASes encode various economic, business, and performance decisions in their routing policies. The current interdomain routing system enables ASes to express policy using rankings, which determine how each router in an AS orders the different routes to a destination, and filters, which determine which routes are hidden from each neighboring AS. Since the Internet is composed of many independent, competing networks, the interdomain routing system should allow providers to set their rankings independently, and to have no constraints on allowed filters. This paper studies routing protocol stability under these constraints. We first demonstrate that certain rankings that are commonly used in practice may not ensure routing stability. We then prove that, with ranking independence and unrestricted filtering, guaranteeing that the routing system will converge to a stable path assignment essentially requires ASes to rank routes based on AS-path lengths. Finally, we discuss the implications of these results for the future of interdomain routing.
</description>
<pubDate>Tue, 08 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30522</guid>
<dc:date>2005-02-08T00:00:00Z</dc:date>
</item>
<item>
<title>Using computational models to study texture representations in the human visual system.</title>
<link>https://hdl.handle.net/1721.1/30521</link>
<description>Using computational models to study texture representations in the human visual system.
Balas, Benjamin
Traditionally, human texture perception has been studied using artificial textures made of random-dot patterns or abstract structured elements. At the same time, computer algorithms for the synthesis of natural textures have improved dramatically. The current study seeks to unify these two fields of research through a psychophysical assessment of a particular computational model, thus providing a sense of which image statistics are most vital for representing a range of natural textures. We employ Portilla and Simoncelli's 2000 model of texture synthesis for this task (a parametric model of analysis and synthesis designed to mimic computations carried out by the human visual system). We find an intriguing interaction between texture type (periodic v. structured) and image statistics (autocorrelation function and filter magnitude correlations), suggesting that different processing strategies may be employed for these two texture families under pre-attentive viewing.
</description>
<pubDate>Mon, 07 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30521</guid>
<dc:date>2005-02-07T00:00:00Z</dc:date>
</item>
<item>
<title>Functional Differential Geometry</title>
<link>https://hdl.handle.net/1721.1/30520</link>
<description>Functional Differential Geometry
Sussman, Gerald Jay; Wisdom, Jack
Differential geometry is deceptively simple. It is surprisingly easy to get the right answer with unclear and informal symbol manipulation. To address this problem we use computer programs to communicate a precise understanding of the computations in differential geometry. Expressing the methods of differential geometry in a computer language forces them to be unambiguous and computationally effective. The task of formulating a method as a computer-executable program and debugging that program is a powerful exercise in the learning process. Also, once formalized procedurally, a mathematical idea becomes a tool that can be used directly to compute results.
</description>
<pubDate>Wed, 02 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30520</guid>
<dc:date>2005-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>The Security Power of the Ballot Box</title>
<link>https://hdl.handle.net/1721.1/30519</link>
<description>The Security Power of the Ballot Box
Lepinski, Matt; Izmalkov, Sergei
We show that any function F can be securely evaluated by a protocol with ballots and a ballot box. That is, N mutually suspicious players, each possessing a secret input, can use ballots and a ballot box to jointly evaluate F on their secret inputs so that (no matter how many players may collude and deviate from their prescribed instructions, and no matter how long they compute!) each player learns exactly the output of the function, with the same privacy and correctness as if all players privately handed their secret inputs to a trusted party, who privately evaluates F and privately returns the outputs to each player. Our protocol is (1) efficient, (2) enjoys perfect privacy, (3) guarantees perfect correctness, (4) is universally composable, and (5) is collusion-free even for games with secret actions.
</description>
<pubDate>Wed, 02 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30519</guid>
<dc:date>2005-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>Determining articulator configuration in voiced stop consonants by matching time-domain patterns in pitch periods</title>
<link>https://hdl.handle.net/1721.1/30518</link>
<description>Determining articulator configuration in voiced stop consonants by matching time-domain patterns in pitch periods
Kondacs, Attila
In this thesis I will be concerned with linking the observed speech signal to the configuration of articulators. Due to the potentially rapid motion of the articulators, the speech signal can be highly non-stationary. The typical linear analysis techniques that assume quasi-stationarity may not have sufficient time-frequency resolution to determine the place of articulation. I argue that the traditional low- and high-level primitives of speech processing, frequency and phonemes, are inadequate and should be replaced by a representation with three layers: 1. short pitch-period resonances and other spatio-temporal patterns; 2. articulator configuration trajectories; 3. syllables. The patterns indicate articulator configuration trajectories (how the tongue, jaws, etc. are moving), which are interpreted as syllables and words. My patterns are an alternative to frequency. I use short time-domain features of the sound waveform, which can be extracted from each vowel pitch period pattern, to identify the positions of the articulators with high reliability. These features are important because, by capitalizing on detailed measurements within a single pitch period, the rapid articulator movements can be tracked. No linear signal processing approach can achieve the combination of sensitivity to short-term changes and measurement accuracy resulting from these nonlinear techniques. The measurements I use are neurophysiologically plausible: the auditory system could be using similar methods. I have demonstrated this approach by constructing a robust technique for categorizing the English voiced stops as the consonants B, D, or G based on the vocalic portions of their releases. The classification recognizes 93.5%, 81.8%, and 86.1% of the b, d, and g to ae transitions, with false positive rates of 2.9%, 8.7%, and 2.6% respectively.
</description>
<pubDate>Fri, 28 Jan 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30518</guid>
<dc:date>2005-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Stationary Automata for Mobile Networks</title>
<link>https://hdl.handle.net/1721.1/30517</link>
<description>Virtual Stationary Automata for Mobile Networks
Dolev, Shlomi; Gilbert, Seth; Lahiani, Limor; Lynch, Nancy; Nolte, Tina
We define a programming abstraction for mobile networks called the Virtual Stationary Automata programming layer, consisting of real mobile clients, virtual timed I/O automata called virtual stationary automata (VSAs), and a communication service connecting VSAs and client nodes. The VSAs are located at prespecified regions that tile the plane, defining a static virtual infrastructure. We present a self-stabilizing algorithm to emulate a VSA using the real mobile nodes that are currently residing in the VSA's region. We also describe several examples of applications whose implementations benefit from the simplicity obtained through use of the VSA abstraction.
</description>
<pubDate>Fri, 21 Jan 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30517</guid>
<dc:date>2005-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>Biologically-Inspired Robust Spatial Programming</title>
<link>https://hdl.handle.net/1721.1/30516</link>
<description>Biologically-Inspired Robust Spatial Programming
Beal, Jacob; Sussman, Gerald
Inspired by the robustness and flexibility of biological systems, we are developing linguistic and programming tools to allow us to program spatial systems populated by vast numbers of unreliable components interconnected in unknown, irregular, and time-varying ways. We organize our computations around geometry, making the fact that our system is made up of discrete individuals implicit. Geometry allows us to specify requirements in terms of the behavior of the space occupied by the aggregate rather than the behavior of individuals, thereby decreasing complexity. So we describe the behavior of space explicitly, abstracting away the discrete nature of the components. As an example, we present the Amorphous Medium Language, which describes behavior in terms of homeostatic maintenance of constraints on nested regions of space.
</description>
<pubDate>Tue, 18 Jan 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30516</guid>
<dc:date>2005-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>How Much of a Hypertree can be Captured by Windmills?</title>
<link>https://hdl.handle.net/1721.1/30515</link>
<description>How Much of a Hypertree can be Captured by Windmills?
Liang, Percy; Srebro, Nati
Current approximation algorithms for maximum weight {\em hypertrees} find heavy {\em windmill farms}, and are based on the fact that a constant ratio (for constant width $k$) of the weight of a $k$-hypertree can be captured by a $k$-windmill farm. However, the exact worst case ratio is not known and is only bounded to be between $1/(k+1)!$ and $1/(k+1)$. We investigate this worst case ratio by searching for weighted hypertrees that minimize the ratio of their weight that can be captured with a windmill farm. To do so, we use a novel approach in which a linear program is used to find ``bad'' inputs to a dynamic program.
</description>
<pubDate>Mon, 03 Jan 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30515</guid>
<dc:date>2005-01-03T00:00:00Z</dc:date>
</item>
<item>
<title>A Dynamic Data Structure for Checking Hyperacyclicity</title>
<link>https://hdl.handle.net/1721.1/30514</link>
<description>A Dynamic Data Structure for Checking Hyperacyclicity
Liang, Percy; Srebro, Nati
We present a dynamic data structure that keeps track of an acyclic hypergraph (equivalently, a triangulated graph) and enables verifying that adding a candidate hyperedge (clique) will not break the acyclicity of the augmented hypergraph. This is a generalization of the use of Tarjan's Union-Find data structure for maintaining acyclicity when augmenting forests, and the amortized time per operation has a similar almost-constant dependence on the size of the hypergraph. Such a data structure is useful when augmenting acyclic hypergraphs, e.g. in order to greedily construct a high-weight acyclic hypergraph. In designing this data structure, we introduce a hierarchical decomposition of acyclic hypergraphs that aids in understanding {\em hyper-connectivity}, and introduce a novel concept of a {\em hypercycle}, which is excluded from acyclic hypergraphs.
</description>
<pubDate>Mon, 03 Jan 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30514</guid>
<dc:date>2005-01-03T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Voting Machines</title>
<link>https://hdl.handle.net/1721.1/30513</link>
<description>Neural Voting Machines
Richards, Whitman; Seung, H. Sebastian
"Winner-take-all" networks typically pick as the winner the alternative with the largest excitatory input. This choice is far from optimal when there is uncertainty in the strength of the inputs, and when information is available about how alternatives may be related. In the Social Choice community, many other procedures will yield more robust winners. The Borda Count and the pair-wise Condorcet tally are among the most favored. Their implementations are simple modifications of classical recurrent networks.
</description>
<pubDate>Fri, 31 Dec 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30513</guid>
<dc:date>2004-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>A general mechanism for tuning: Gain control circuits and synapses underlie tuning of cortical neurons</title>
<link>https://hdl.handle.net/1721.1/30512</link>
<description>A general mechanism for tuning: Gain control circuits and synapses underlie tuning of cortical neurons
Kouh, Minjoon; Poggio, Tomaso
Tuning to an optimal stimulus is a widespread property of neurons in cortex. We propose that such tuning is a consequence of normalization or gain control circuits. We also present a biologically plausible neural circuitry of tuning.
</description>
<pubDate>Fri, 31 Dec 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30512</guid>
<dc:date>2004-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Methods and Experiments With Bounded Tree-width Markov Networks</title>
<link>https://hdl.handle.net/1721.1/30511</link>
<description>Methods and Experiments With Bounded Tree-width Markov Networks
Liang, Percy; Srebro, Nathan
Markov trees generalize naturally to bounded tree-width Markov networks, on which exact computations can still be done efficiently. However, learning the maximum likelihood Markov network with tree-width greater than 1 is NP-hard, so we discuss a few algorithms for approximating the optimal Markov network. We present a set of methods for training a density estimator. Each method is specified by three arguments: tree-width, model scoring metric (maximum likelihood or minimum description length), and model representation (using one joint distribution or several class-conditional distributions). On these methods, we give empirical results on density estimation and classification tasks and explore the implications of these arguments.
</description>
<pubDate>Thu, 30 Dec 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30511</guid>
<dc:date>2004-12-30T00:00:00Z</dc:date>
</item>
<item>
<title>Machine-Checkable Correctness Proofs for Intra-procedural Dataflow Analyses</title>
<link>https://hdl.handle.net/1721.1/30510</link>
<description>Machine-Checkable Correctness Proofs for Intra-procedural Dataflow Analyses
Salcianu, Alexandru; Arkoudas, Konstantine
This technical report describes our experience using the interactive theorem prover Athena for proving the correctness of abstract interpretation-based dataflow analyses. For each analysis, our methodology requires the analysis designer to formally specify the property lattice, the transfer functions, and the desired modeling relation between the concrete program states and the results computed by the analysis. The goal of the correctness proof is to prove that the desired modeling relation holds. The proof allows the analysis clients to rely on the modeling relation for their own correctness. To reduce the complexity of the proofs, we separate the proof of each dataflow analysis into two parts: a generic part, proven once, independent of any specific analysis; and several analysis-specific conditions proven in Athena.
</description>
<pubDate>Thu, 16 Dec 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30510</guid>
<dc:date>2004-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>On Decision Procedures for Set-Valued Fields</title>
<link>https://hdl.handle.net/1721.1/30509</link>
<description>On Decision Procedures for Set-Valued Fields
Kuncak, Viktor; Rinard, Martin
An important feature of object-oriented programming languages is the ability to dynamically instantiate user-defined container data structures such as lists, trees, and hash tables. Programs implement such data structures using references to dynamically allocated objects, which allows data structures to store unbounded numbers of objects, but makes reasoning about programs more difficult. Reasoning about object-oriented programs with complex data structures is simplified if data structure operations are specified in terms of abstract sets of objects associated with each data structure. For example, an insertion into a data structure in this approach becomes simply an insertion into a dynamically changing set-valued field of an object, as opposed to a manipulation of a dynamically linked structure linked to the object. In this paper we explore reasoning techniques for programs that manipulate data structures specified using set-valued abstract fields associated with container objects. We compare the expressive power and the complexity of specification languages based on 1) decidable prefix vocabulary classes of first-order logic, 2) two-variable logic with counting, and 3) Nelson-Oppen combinations of multisorted theories. Such specification logics can be used for verification of object-oriented programs with supplied invariants. Moreover, by selecting an appropriate subset of properties expressible in such a logic, the decision procedures for these logics yield automated computation of lattice operations in the abstract interpretation domain, as well as automated computation of abstract program semantics.
</description>
<pubDate>Tue, 30 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30509</guid>
<dc:date>2004-11-30T00:00:00Z</dc:date>
</item>
<item>
<title>Comparing Network Coding with Multicommodity Flow for the k-pairs Communication Problem</title>
<link>https://hdl.handle.net/1721.1/30508</link>
<description>Comparing Network Coding with Multicommodity Flow for the k-pairs Communication Problem
Harvey, Nicholas J.; Kleinberg, Robert D.; Lehman, April Rasala
Given a graph G = (V,E) and k source-sink pairs of vertices, this paper investigates the maximum rate r at which all pairs can simultaneously communicate. We view this problem from two perspectives and compare their advantages. In the multicommodity flow formulation, a solution provides dedicated bandwidth r between each source-sink pair. In the information flow formulation, a vertex can transmit a function of the information it received, thereby allowing multiple source-sink pairs to share bandwidth. For directed acyclic graphs with n vertices, we show that the rate achievable in the information flow formulation can be a multiplicative factor n larger than the rate achievable in the multicommodity flow formulation. It is well known [5] that for undirected graphs with n vertices, in the multicommodity flow formulation, the maximum rate achievable can be an O(1/log|V|) multiplicative factor smaller than the value of the sparsest cut. We extend this result to show that the maximum rate achievable in the information flow setting can be an O(1/log|V|) multiplicative factor smaller than the sparsest cut value. For directed acyclic graphs G, we define a parameter called the value of the most meager cut, which is an upper bound on the maximum rate achievable in the information flow setting. We also present an example illustrating that this upper bound is not always tight.
</description>
<pubDate>Wed, 24 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30508</guid>
<dc:date>2004-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Learning with Matrix Factorizations</title>
<link>https://hdl.handle.net/1721.1/30507</link>
<description>Learning with Matrix Factorizations
Srebro, Nathan
Matrices that can be factored into a product of two simpler matrices can serve as a useful and often natural model in the analysis of tabulated or high-dimensional data. Models based on matrix factorization (Factor Analysis, PCA) have been extensively used in statistical analysis and machine learning for over a century, with many new formulations and models suggested in recent years (Latent Semantic Indexing, Aspect Models, Probabilistic PCA, Exponential PCA, Non-Negative Matrix Factorization, and others). In this thesis we address several issues related to learning with matrix factorizations: we study the asymptotic behavior and generalization ability of existing methods, suggest new optimization methods, and present a novel maximum-margin high-dimensional matrix factorization formulation.
</description>
<pubDate>Mon, 22 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30507</guid>
<dc:date>2004-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Availability-Consistency Trade-Offs in a Fault-Tolerant Stream Processing System</title>
<link>https://hdl.handle.net/1721.1/30506</link>
<description>Availability-Consistency Trade-Offs in a Fault-Tolerant Stream Processing System
Balazinska, Magdalena; Balakrishnan, Hari; Madden, Samuel; Stonebraker, Mike
processing. In contrast to previous techniques that handle node failures, our approach also tolerates network failures and network partitions. The approach is based on a principled trade-off between consistency and availability in the face of failure that (1) ensures that all data on an input stream is processed within a specified time threshold, but (2) reduces the impact of failures by limiting, if possible, the number of results produced based on partially available input data, and (3) corrects these results when failures heal. Our approach is well-suited for applications such as environment monitoring, where high availability and "real-time" response is preferable to perfect answers. Our approach uses replication and guarantees that all processing replicas achieve state consistency, both in the absence of failures and after a failure heals. We achieve consistency in the former case by defining a data-serializing operator that ensures that the order of tuples sent to a downstream operator is the same at all the replicas. To achieve consistency after a failure heals, we develop approaches based on checkpoint/redo and undo/redo techniques. We have implemented these schemes in a prototype distributed stream processing system, and present experimental results that show that the system meets the desired availability-consistency trade-offs.
</description>
<pubDate>Mon, 22 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30506</guid>
<dc:date>2004-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Image Matching with Distributions of Local Invariant Features</title>
<link>https://hdl.handle.net/1721.1/30505</link>
<description>Efficient Image Matching with Distributions of Local Invariant Features
Grauman, Kristen; Darrell, Trevor
Sets of local features that are invariant to common image transformations are an effective representation to use when comparing images; current methods typically judge feature sets' similarity via a voting scheme (which ignores co-occurrence statistics) or by comparing histograms over a set of prototypes (which must be found by clustering). We present a method for efficiently comparing images based on their discrete distributions (bags) of distinctive local invariant features, without clustering descriptors. Similarity between images is measured with an approximation of the Earth Mover's Distance (EMD), which quickly computes the minimal-cost correspondence between two bags of features. Each image's feature distribution is mapped into a normed space with a low-distortion embedding of EMD. Examples most similar to a novel query image are retrieved in time sublinear in the number of examples via approximate nearest neighbor search in the embedded space. We also show how the feature representation may be extended to encode the distribution of geometric constraints between the invariant features appearing in each image. We evaluate our technique with scene recognition and texture classification tasks.
</description>
<pubDate>Mon, 22 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30505</guid>
<dc:date>2004-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>A new biologically motivated framework for robust object recognition</title>
<link>https://hdl.handle.net/1721.1/30504</link>
<description>A new biologically motivated framework for robust object recognition
Serre, Thomas; Wolf, Lior; Poggio, Tomaso
In this paper, we introduce a novel set of features for robust object recognition, which exhibits outstanding performance on a variety of object categories while being capable of learning from only a few training examples. Each element of this set is a complex feature obtained by combining position- and scale-tolerant edge detectors over neighboring positions and multiple orientations. Our system - motivated by a quantitative model of visual cortex - outperforms state-of-the-art systems on a variety of object image datasets from different groups. We also show that our system is able to learn from very few examples with no prior category knowledge. The success of the approach is also a suggestive plausibility proof for a class of feed-forward models of object recognition in cortex. Finally, we conjecture the existence of a universal overcomplete dictionary of features that could handle the recognition of all object categories.
</description>
<pubDate>Sun, 14 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30504</guid>
<dc:date>2004-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>Capacity Allocation in Wireless LANs</title>
<link>https://hdl.handle.net/1721.1/30503</link>
<description>Capacity Allocation in Wireless LANs
Tan, Godfrey; Guttag, John
Today's access point based wireless LANs (WLANs) are inefficient and unfair. For many traffic loads they provide far less total throughput than they should, and do a poor job allocating what throughput they do deliver. Inappropriate association of nodes to access points and of rates to flows plays a large role in these problems. We address a major root cause of this problem in this paper. Current practice ignores the distinction between flows that connect two wireless nodes via an access point and flows that connect wireless nodes to the wired infrastructure. As wireless devices and applications become more pervasive, ignoring this distinction will lead to a significant degradation in perceived performance. In this paper, we i) describe a series of examples that illustrate the impact of two-hop flows on the performance of the system, ii) provide a practical algorithm to solve the AP-assignment problem, and iii) evaluate the performance of our algorithm against other approaches. Our preliminary results show that our algorithm can increase average achieved throughput by as much as 50% for some traffic loads.
</description>
<pubDate>Fri, 12 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30503</guid>
<dc:date>2004-11-12T00:00:00Z</dc:date>
</item>
<item>
<title>Regularization Through Feature Knock Out</title>
<link>https://hdl.handle.net/1721.1/30502</link>
<description>Regularization Through Feature Knock Out
Wolf, Lior; Martin, Ian
In this paper, we present and analyze a novel regularization technique based on enhancing our dataset with corrupted copies of the original data. The motivation is that since the learning algorithm lacks information about which parts of the data are reliable, it has to produce more robust classification functions. We then demonstrate how this regularization leads to redundancy in the resulting classifiers, which is somewhat in contrast to common interpretations of the Occam's razor principle. Using this framework, we propose a simple addition to the gentle boosting algorithm which enables it to work with only a few examples. We test this new algorithm on a variety of datasets and show convincing results.
</description>
<pubDate>Fri, 12 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30502</guid>
<dc:date>2004-11-12T00:00:00Z</dc:date>
</item>
<item>
<title>Shape Representation in V4: Investigating Position-Specific Tuning for Boundary Conformation with the Standard Model of Object Recognition</title>
<link>https://hdl.handle.net/1721.1/30501</link>
<description>Shape Representation in V4: Investigating Position-Specific Tuning for Boundary Conformation with the Standard Model of Object Recognition
Cadieu, Charles; Kouh, Minjoon; Riesenhuber, Maximilian; Poggio, Tomaso
The computational processes in the intermediate stages of the ventral pathway responsible for visual object recognition are not well understood. A recent physiological study by A. Pasupathy and C. Connor in intermediate area V4 using contour stimuli proposes that a population of V4 neurons displays object-centered, position-specific curvature tuning [18]. The "standard model" of object recognition, a recently developed model [23] to account for recognition properties of IT cells (extending classical suggestions by Hubel, Wiesel and others [9, 10, 19]), is used here to model the response of the V4 cells described in [18]. Our results show that a feedforward, network-level mechanism can exhibit selectivity and invariance properties that correspond to the responses of the V4 cells described in [18]. These results suggest how object-centered, position-specific curvature tuning of V4 cells may arise from combinations of complex V1 cell responses. Furthermore, the model makes predictions about the responses of the same V4 cells studied by Pasupathy and Connor to novel gray-level patterns, such as gratings and natural images. These predictions suggest specific experiments to further explore shape representation in V4.
</description>
<pubDate>Fri, 12 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30501</guid>
<dc:date>2004-11-12T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Network Models for Zebra Finch Song Production and Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/30500</link>
<description>Neural Network Models for Zebra Finch Song Production and Reinforcement Learning
Werfel, Justin
The zebra finch is a standard experimental system for studying learning and generation of temporally extended motor patterns.  The first part of this project concerned the evaluation of simple models for the operation and structure of the network in the motor nucleus RA.  A directed excitatory chain with a global inhibitory network, for which experimental evidence exists, was found to produce waves of activity similar to those observed in RA; this similarity included one particularly important feature of the measured activity, synchrony between the onset of bursting in one neuron and the offset of bursting in another.  Other models, which were simpler and more analytically tractable, were also able to exhibit this feature, but not for parameter values quantitatively close to those observed. Another issue of interest concerns how these networks are initially learned by the bird during song acquisition.  The second part of the project concerned the analysis of exemplars of REINFORCE algorithms, a general class of algorithms for reinforcement learning in neural networks, which are on several counts more biologically plausible than standard prescriptions such as backpropagation.  The former compared favorably with backpropagation on tasks involving single input-output pairs, though a noise analysis suggested it should not perform so well.  On tasks involving trajectory learning, REINFORCE algorithms meet with some success, though the analysis that predicts their success on input-output-pair tasks fails to explain it for trajectories.
</description>
<pubDate>Tue, 09 Nov 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30500</guid>
<dc:date>2004-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>Managing the 802.11 Energy/Performance Tradeoff with Machine Learning</title>
<link>https://hdl.handle.net/1721.1/30499</link>
<description>Managing the 802.11 Energy/Performance Tradeoff with Machine Learning
Monteleoni, Claire; Balakrishnan, Hari; Feamster, Nick; Jaakkola, Tommi
This paper addresses the problem of managing the tradeoff between energy consumption and performance in wireless devices implementing the IEEE 802.11 standard. To save energy, the 802.11 specification proposes a power-saving mode (PSM), where a device can sleep to save energy, periodically waking up to receive packets from a neighbor (e.g., an access point) that may have buffered packets for the sleeping device. Previous work has shown that a fixed polling time for waking up degrades the performance of Web transfers, because network activity is bursty and time-varying. We apply a new online machine learning algorithm to this problem and show, using ns simulation and trace analysis, that it is able to adapt well to network activity. The learning process makes no assumptions about the underlying network activity being stationary or even Markov. Our learning power-saving algorithm, LPSM, guides the learning using a "loss function" that combines the increased latency from potentially sleeping too long and the wasted use of energy in waking up too soon. In our ns simulations, LPSM saved 7%-20% more energy than 802.11 in power-saving mode, with an associated increase in average latency by a factor of 1.02, and not more than 1.2. LPSM is straightforward to implement within the 802.11 PSM framework.
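[Editor's note: the sketch below is not LPSM's actual loss or update rule, only an illustration of the general shape of such an online learner: a loss that trades off overslept latency against a wasted wake-up, driving a multiplicative-weights choice among candidate sleep intervals. All constants and the candidate set are illustrative assumptions.]

```python
import math

def loss(interval, idle_gap, gamma=0.5):
    """Illustrative loss: sleeping past the next packet (interval > idle_gap)
    costs latency; waking before any traffic arrives costs a wasted wake-up."""
    latency = max(0.0, interval - idle_gap)
    wasted_wakeup = 1.0 if interval < idle_gap else 0.0
    return gamma * latency + (1 - gamma) * wasted_wakeup

def update(weights, intervals, idle_gap, eta=0.5):
    """One multiplicative-weights step over the candidate sleep intervals."""
    new = [w * math.exp(-eta * loss(t, idle_gap))
           for w, t in zip(weights, intervals)]
    total = sum(new)
    return [w / total for w in new]

intervals = [0.1, 0.5, 1.0, 2.0]   # candidate sleep durations (seconds)
weights = [0.25] * 4
for _ in range(20):                # observed idle gaps of ~1 s dominate
    weights = update(weights, intervals, 1.0)
```

After a run of 1-second idle gaps, nearly all weight concentrates on the 1.0 s interval, the one that neither oversleeps nor wakes early.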
</description>
<pubDate>Wed, 27 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30499</guid>
<dc:date>2004-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>On Spatial Conjunction as Second-Order Logic</title>
<link>https://hdl.handle.net/1721.1/30498</link>
<description>On Spatial Conjunction as Second-Order Logic
Kuncak, Viktor; Rinard, Martin
Spatial conjunction is a powerful construct for reasoning about dynamically allocated data structures, as well as concurrent, distributed and mobile computation. While researchers have identified many uses of spatial conjunction, its precise expressive power compared to traditional logical constructs was not previously known. In this paper we establish the expressive power of spatial conjunction. We construct an embedding from first-order logic with spatial conjunction into second-order logic, and, more surprisingly, an embedding from full second-order logic into first-order logic with spatial conjunction. These embeddings show that the satisfiability of formulas in first-order logic with spatial conjunction is equivalent to the satisfiability of formulas in second-order logic. These results explain the great expressive power of spatial conjunction and can be used to show that adding unrestricted spatial conjunction to a decidable logic leads to an undecidable logic. As one example, we show that adding unrestricted spatial conjunction to two-variable logic leads to undecidability. On the side of decidability, the embedding into second-order logic immediately implies the decidability of first-order logic with a form of spatial conjunction over trees. The embedding into spatial conjunction also has useful consequences: because a restricted form of spatial conjunction in two-variable logic preserves decidability, we obtain that a correspondingly restricted form of second-order quantification in two-variable logic is decidable. The resulting language generalizes the first-order theory of boolean algebra over sets and is useful in reasoning about the contents of data structures in object-oriented languages.
</description>
<pubDate>Mon, 25 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30498</guid>
<dc:date>2004-10-25T00:00:00Z</dc:date>
</item>
<item>
<title>Botz-4-Sale: Surviving Organized DDoS Attacks that Mimic Flash Crowds</title>
<link>https://hdl.handle.net/1721.1/30497</link>
<description>Botz-4-Sale: Surviving Organized DDoS Attacks that Mimic Flash Crowds
Kandula, Srikanth; Katabi, Dina; Jacob, Matthias; Berger, Arthur
Recent denial of service attacks are mounted by professionals using botnets of tens of thousands of compromised machines. To circumvent detection, attackers are increasingly moving away from pure bandwidth floods to attacks that mimic the Web browsing behavior of a large number of clients, and target expensive higher-layer resources such as CPU, database and disk bandwidth. The resulting attacks are hard to defend against using standard techniques, as the malicious requests differ from the legitimate ones in intent but not in content. We present the design and implementation of Kill-Bots, a kernel extension to protect Web servers against DDoS attacks that masquerade as flash crowds. Kill-Bots provides authentication using graphical tests but is different from other systems that use graphical tests. First, instead of authenticating clients based on whether they solve the graphical test, Kill-Bots uses the test to quickly identify the IP addresses of the attack machines. This allows it to block the malicious requests while allowing access to legitimate users who are unable or unwilling to solve graphical tests. Second, Kill-Bots sends a test and checks the client's answer without allowing unauthenticated clients access to sockets, TCBs, worker processes, etc. This protects the authentication mechanism from being DDoSed. Third, Kill-Bots combines authentication with admission control. As a result, it improves performance, regardless of whether the server overload is caused by DDoS or a true flash crowd. We have implemented Kill-Bots in the Linux kernel and evaluated it in the wide-area Internet using PlanetLab.
</description>
<pubDate>Fri, 22 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30497</guid>
<dc:date>2004-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Combining dynamic abstractions in large MDPs</title>
<link>https://hdl.handle.net/1721.1/30496</link>
<description>Combining dynamic abstractions in large MDPs
Steinkraus, Kurt; Kaelbling, Leslie Pack
One of the reasons that it is difficult to plan and act in real-world domains is that they are very large. Existing research generally deals with the large domain size using a static representation and exploiting a single type of domain structure. In this paper, we create a framework that encapsulates existing and new abstraction and approximation methods into modules, and combines arbitrary modules into a system that allows for dynamic representation changes. We show that the dynamic changes of representation allow our framework to solve larger and more interesting domains than were previously possible, and that while there are no optimality guarantees, suitable module choices gain tractability at little cost to optimality.
</description>
<pubDate>Thu, 21 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30496</guid>
<dc:date>2004-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>NIRA: A New Internet Routing Architecture</title>
<link>https://hdl.handle.net/1721.1/30495</link>
<description>NIRA: A New Internet Routing Architecture
Yang, Xiaowei
The present Internet routing system faces two challenging problems. First, unlike in the telephone system, Internet users cannot choose their wide-area Internet service providers (ISPs) separately from their local access providers. With the introduction of new technologies such as broadband residential service and fiber-to-the-home, the local ISP market is often a monopoly or a duopoly. The lack of user choice is likely to reduce competition among wide-area ISPs, limiting the incentives for wide-area ISPs to improve quality of service, reduce price, and offer new services. Second, the present routing system fails to scale effectively in the presence of real-world requirements such as multi-homing for robust and redundant Internet access. A multi-homed site increases the amount of routing state maintained globally by the Internet routing system. As the demand for multi-homing continues to rise, the amount of routing state continues to grow. This dissertation presents the design of a new Internet routing architecture (NIRA) that simultaneously addresses these two problems. NIRA gives a user the ability to choose the sequence of Internet service providers his packets traverse. It also has better scaling characteristics than today's routing system. The design of NIRA is decomposed into four modular components: route discovery, route availability discovery, route representation and packet forwarding, and provider compensation. This dissertation describes mechanisms to realize each of these components. It also makes clear those places in the design where a globally agreed mechanism is needed, and those places where alternative mechanisms can be designed and deployed locally. In particular, this dissertation describes a scalable route discovery mechanism. With this mechanism, a user only needs to know a small region of the Internet in order to select a route to reach a destination.
In addition, a novel route representation and packet forwarding scheme is designed such that a source and a destination address can uniquely represent a sequence of providers a packet traverses. Network measurement, simulation, and analytic modeling are used in combination to evaluate the design of NIRA. The evaluation suggests that NIRA is scalable.
</description>
<pubDate>Thu, 14 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30495</guid>
<dc:date>2004-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Byzantine Fault Tolerance in Long-Lived Systems</title>
<link>https://hdl.handle.net/1721.1/30494</link>
<description>Byzantine Fault Tolerance in Long-Lived Systems
Rodrigues, Rodrigo; Liskov, Barbara
This paper proposes counter-measures that can be deployed as part of a replicated system to reduce the size of W, and thus reduce the class of attacks to which the system is vulnerable. Obviously it will not be possible to withstand all attacks via this technique, in particular attacks with very small A. But we will propose techniques that can reduce W to quite a small value. In the remainder of this paper, we discuss how to lower the value of W. We begin by discussing attacks. Then we discuss some prior work in this area and why it is insufficient. The final section describes the approach we propose.
</description>
<pubDate>Fri, 13 Aug 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30494</guid>
<dc:date>2004-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>EpiChord: Parallelizing the Chord Lookup Algorithm with Reactive Routing State Management</title>
<link>https://hdl.handle.net/1721.1/30493</link>
<description>EpiChord: Parallelizing the Chord Lookup Algorithm with Reactive Routing State Management
Leong, Ben; Liskov, Barbara; Demaine, Erik D.
EpiChord is a DHT lookup algorithm that demonstrates that we can remove the O(log n)-state-per-node restriction on existing DHT topologies to achieve significantly better lookup performance and resilience, using a novel reactive routing state maintenance strategy that amortizes network maintenance costs into existing lookups, and by issuing parallel queries. Our technique allows us to design a new class of unlimited-state-per-node DHTs that is able to adapt naturally to a wide range of lookup workloads. EpiChord is able to achieve O(1)-hop lookup performance under lookup-intensive workloads, and at least O(log n)-hop lookup performance under churn-intensive workloads even in the worst case (though it is expected to perform better on average). Our reactive routing state maintenance strategy allows us to maintain large amounts of routing state with only a modest amount of bandwidth, while parallel queries serve to reduce lookup latency and allow us to avoid costly lookup timeouts. In general, EpiChord exploits the information gleaned from observing lookup traffic to improve lookup performance, and only sends network probes when necessary. Nodes populate their caches mainly by observing network traffic, and cache entries are flushed from the cache after a fixed lifetime. Our simulations show that our approach can reduce both lookup latencies and path lengths by a factor of 3 by issuing only 3 queries asynchronously in parallel per lookup. Furthermore, we show that we are able to achieve this result with minimal additional communication overhead, and that the number of messages generated per lookup is no more than that for the corresponding sequential Chord lookup algorithm over a range of lookup workloads. We also present a novel token-passing stabilization scheme that automatically detects and repairs global routing inconsistencies.
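[Editor's note: a toy model, not EpiChord's implementation, of the two ideas above: per step, query the p cached nodes closest to the key (modeling parallel queries), and fold every response's routing entries back into the local cache (reactive state maintenance). The 16-node ring and Chord-style finger tables are assumed for illustration.]

```python
def lookup(start, key, routing, p=3, n=16):
    """Iterative ring lookup: each step queries the p cached nodes closest
    to the key; their routing entries are merged into the cache, so the
    lookup traffic itself maintains routing state."""
    def dist(a, b):
        return (b - a) % n                 # clockwise distance on the ring
    cache = set(routing[start])
    best = start
    for _ in range(n):                     # bounded number of steps
        if dist(best, key) == 0:
            break
        progressed = False
        for node in sorted(cache, key=lambda v: dist(v, key))[:p]:
            cache.update(routing[node])    # learn from each "response"
            if dist(node, key) < dist(best, key):
                best, progressed = node, True
        if not progressed:
            break
    return best

# Chord-style fingers on a 16-node ring (toy topology)
routing = {i: [(i + d) % 16 for d in (1, 2, 4, 8)] for i in range(16)}
```

With `p=3` the cache grows quickly along the way, so later steps jump almost directly to the key's owner instead of halving the distance one hop at a time.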
</description>
<pubDate>Fri, 13 Aug 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30493</guid>
<dc:date>2004-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Early Sketch Processing with Application in HMM Based Sketch Recognition</title>
<link>https://hdl.handle.net/1721.1/30492</link>
<description>Early Sketch Processing with Application in HMM Based Sketch Recognition
Sezgin, Tevfik Metin; Davis, Randall
Freehand sketching is a natural and crucial part of everyday human interaction, yet is almost totally unsupported by current user interfaces. With the increasing availability of tablet notebooks and pen-based PDAs, sketch-based interaction has gained attention as a natural interaction modality. We are working to combine the flexibility and ease of use of paper and pencil with the processing power of a computer, to produce a user interface for design that feels as natural as paper, yet is considerably smarter. One of the most basic tasks in accomplishing this is converting the original digitized pen strokes in a sketch into the intended geometric objects. In this paper we describe an implemented system that combines multiple sources of knowledge to provide robust early processing for freehand sketching. We also show how this early processing system can be used as part of a fast sketch recognition system with polynomial-time segmentation and recognition algorithms.
</description>
<pubDate>Wed, 28 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30492</guid>
<dc:date>2004-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Realistic Modeling of Simple and Complex Cell Tuning in the HMAXModel, and Implications for Invariant Object Recognition in Cortex</title>
<link>https://hdl.handle.net/1721.1/30491</link>
<description>Realistic Modeling of Simple and Complex Cell Tuning in the HMAXModel, and Implications for Invariant Object Recognition in Cortex
Serre, Thomas; Riesenhuber, Maximilian
Riesenhuber &amp; Poggio recently proposed a model of object recognition in cortex which, beyond integrating general beliefs about the visual system in a quantitative framework, made testable predictions about visual processing. In particular, they showed that invariant object representation could be obtained with a selective pooling mechanism over properly chosen afferents through a MAX operation: for instance, at the complex-cell level, pooling over a group of simple cells at the same preferred orientation and position in space but at slightly different spatial frequencies would provide scale tolerance, while pooling over a group of simple cells at the same preferred orientation and spatial frequency but at slightly different positions in space would provide position tolerance. Indirect support for such mechanisms in the visual system comes from the ability of the architecture at the top level to replicate shape tuning as well as shift and size invariance properties of "view-tuned cells" (VTUs) found in inferotemporal cortex (IT), the highest area in the ventral visual stream, thought to be crucial in mediating object recognition in cortex. There is also now good physiological evidence that a MAX operation is performed at various levels along the ventral stream. However, in the original paper by Riesenhuber &amp; Poggio, tuning and pooling parameters of model units in early and intermediate areas were only qualitatively inspired by physiological data. In particular, many studies have investigated the tuning properties of simple and complex cells in primary visual cortex, V1. We show that units in the early levels of HMAX can be tuned to produce realistic simple and complex cell-like tuning, and that the earlier findings on the invariance properties of model VTUs still hold in this more realistic version of the model.
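[Editor's note: a minimal numerical sketch, not HMAX's actual units or parameters, of the MAX-pooling idea: a model complex cell that takes the maximum over simple cells tuned to neighboring positions responds almost uniformly to a stimulus anywhere inside the pooled range, which is exactly the position tolerance described above. The Gaussian tuning curve and the pooled positions are illustrative assumptions.]

```python
import math

def simple_cell(x, preferred, sigma=1.0):
    # Gaussian tuning curve of a model simple cell (illustrative)
    return math.exp(-((x - preferred) ** 2) / (2 * sigma ** 2))

def complex_cell(x, pooled_positions, sigma=1.0):
    # MAX pooling over simple cells tuned to neighboring positions
    return max(simple_cell(x, p, sigma) for p in pooled_positions)

positions = [0.0, 1.0, 2.0, 3.0]
inside = [complex_cell(x, positions) for x in (0.0, 0.5, 1.5, 2.5, 3.0)]
outside = complex_cell(10.0, positions)
```

Responses at every probed point inside the pooled range stay near the peak, while a stimulus far outside it elicits essentially nothing; pooling the same way over spatial frequency instead of position would give the scale tolerance the abstract mentions.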
</description>
<pubDate>Tue, 27 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30491</guid>
<dc:date>2004-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Distribution Volume Tracking on Privacy-Enhanced Wireless Grid</title>
<link>https://hdl.handle.net/1721.1/30490</link>
<description>Distribution Volume Tracking on Privacy-Enhanced Wireless Grid
Uzuner, Ozlem
In this paper, we discuss a wireless grid in which users are highly mobile, and form ad-hoc and sometimes short-lived connections with other devices.  As they roam through networks, the users may choose to employ privacy-enhancing technologies to address their privacy needs and benefit from the computational power of the grid for a variety of tasks, including sharing content.  The high rate of mobility of the users on the wireless grid, when combined with privacy-enhancing mechanisms and ad-hoc connections, makes it difficult to conclusively link devices and/or individuals with network activities and to hold them liable for particular downloads.  Protecting intellectual property in this scenario requires a solution that can work in the absence of knowledge about the behavior of particular individuals.  Building on previous work, we argue for a solution that ensures proper compensation to content owners without inhibiting use and dissemination of works.  Our proposal is based on digital tracking for measuring distribution volume of content and compensation of authors based on this accounting information.  The emphasis is on obtaining good estimates of the rate of popularity of works, without keeping track of activities of individuals or devices.  The contribution of this paper is a revenue protection mechanism, Distribution Volume Tracking, that does not invade the privacy of users in the wireless grid and works even in the presence of privacy-enhancing technologies they may employ.
</description>
<pubDate>Sun, 25 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30490</guid>
<dc:date>2004-07-25T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering Latent Classes in Relational Data</title>
<link>https://hdl.handle.net/1721.1/30489</link>
<description>Discovering Latent Classes in Relational Data
Kemp, Charles; Griffiths, Thomas L.; Tenenbaum, Joshua B.
We present a framework for learning abstract relational knowledge with the aim of explaining how people acquire intuitive theories of physical, biological, or social systems.  Our approach is based on a generative relational model with latent classes, and simultaneously determines the kinds of entities that exist in a domain, the number of these latent classes, and the relations between classes that are possible or likely.  This model goes beyond previous psychological models of category learning, which consider attributes associated with individual categories but not relationships between categories.  We apply this domain-general framework to two specific problems: learning the structure of kinship systems and learning causal theories.
</description>
<pubDate>Thu, 22 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30489</guid>
<dc:date>2004-07-22T00:00:00Z</dc:date>
</item>
<item>
<title>An Algorithm for Deciding BAPA: Boolean Algebra with Presburger Arithmetic</title>
<link>https://hdl.handle.net/1721.1/30488</link>
<description>An Algorithm for Deciding BAPA: Boolean Algebra with Presburger Arithmetic
Kuncak, Viktor; Nguyen, Huu Hai; Rinard, Martin
We describe an algorithm for deciding the first-order multisorted theory BAPA, which combines 1) Boolean algebras of sets of uninterpreted elements (BA) and 2) Presburger arithmetic operations (PA). BAPA can express the relationship between integer variables and cardinalities of sets, and supports arbitrary quantification over both sets and integers. Our motivation for BAPA is deciding verification conditions that arise in the static analysis of data structure consistency properties. Data structures often use an integer variable to keep track of the number of elements they store; an invariant of such a data structure is that the value of the integer variable is equal to the number of elements stored in the data structure. When the data structure content is represented by a set, the resulting constraints can be captured in BAPA. BAPA formulas with quantifier alternations arise when annotations contain quantifiers themselves, or when proving simulation relation conditions for refinement and equivalence of program fragments. Furthermore, BAPA constraints can be used to extend the techniques for proving the termination of integer programs to programs that manipulate data structures, and have applications in constraint databases. We give a formal description of a decision procedure for BAPA, which implies the decidability of the satisfiability and validity problems for BAPA. We analyze our algorithm and obtain an elementary upper bound on the running time, thereby giving the first complexity bound for BAPA. Because it works by a reduction to PA, our algorithm yields the decidability of a combination of sets of uninterpreted elements with any decidable extension of PA.
Our algorithm can also be used to yield an optimal decision procedure for BA through a reduction to PA with bounded quantifiers. We have implemented our algorithm and used it to discharge verification conditions in the Jahob system for data structure consistency checking of Java programs; our experience with the algorithm is promising.
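[Editor's note: a toy illustration, not the paper's decision procedure. A key idea in reductions from BA to PA is to replace set variables by the cardinalities of their Venn regions, turning a cardinality fact about sets into a purely arithmetic constraint; the variable names below are ours.]

```python
from itertools import product

def union_identity(k10, k01, k11):
    """With k10 = |A without B|, k01 = |B without A|, k11 = |A intersect B|,
    the BA fact |A union B| = |A| + |B| - |A intersect B| becomes a linear
    constraint over Venn-region counts, i.e. a Presburger arithmetic formula."""
    card_a = k10 + k11
    card_b = k01 + k11
    card_union = k10 + k01 + k11
    return card_union == card_a + card_b - k11

# the identity holds for all non-negative region counts we enumerate
ok = all(union_identity(*k) for k in product(range(8), repeat=3))
```

A real BAPA procedure reasons over these integer variables symbolically rather than by enumeration; the point of the sketch is only that every set-cardinality term decomposes into a sum of region counts.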
</description>
<pubDate>Mon, 19 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30488</guid>
<dc:date>2004-07-19T00:00:00Z</dc:date>
</item>
<item>
<title>Definition and Expansion of Composite Automata in IOA</title>
<link>https://hdl.handle.net/1721.1/30487</link>
<description>Definition and Expansion of Composite Automata in IOA
Tauber, Joshua A.; Garland, Stephen J.
The IOA language provides notations for defining both primitive and composite I/O automata. This note describes, both formally and with examples, the constraints on these definitions, the composability requirements for the components of a composite automaton, and the transformation of a composite automaton into an equivalent primitive automaton. Section 2 introduces four examples used throughout this note to illustrate new definitions and operations. Section 3 treats IOA programs for primitive I/O automata: it introduces notations for describing the syntactic structures that appear in these programs, and it lists syntactic and semantic conditions that these programs must satisfy to represent valid primitive I/O automata. Section 4 describes how to reformulate primitive IOA programs into an equivalent but more regular (desugared) form that is used in later definitions in this note. Section 5 treats IOA programs for composite I/O automata: it introduces notations for describing the syntactic structures that appear in these programs, describes resortings induced by them, and lists syntactic and semantic conditions that these programs must satisfy to represent valid composite I/O automata. Section 6 describes the translation of the name spaces of component automata into a unified name space for the composite automaton. Section 7 shows how to expand an IOA program for a composite automaton into an equivalent IOA program for a primitive automaton. The expansion is generated by combining syntactic structures of the desugared programs for the component automata after applying appropriate replacements of sorts and variables. Section 8 details the expansion of the composite automaton introduced in Section 2 using the desugared forms developed throughout Sections 4–6 and the techniques described in Section 7. Finally, Section 9 gives a precise definition of the resortings and substitutions used to replace sorts and variables.
</description>
<pubDate>Mon, 19 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30487</guid>
<dc:date>2004-07-19T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Removal of Nondeterminism for Code Generation in I/O Automata</title>
<link>https://hdl.handle.net/1721.1/30486</link>
<description>Systematic Removal of Nondeterminism for Code Generation in I/O Automata
Vaziri, Mandana; Tauber, Joshua A.; Tsai, Michael J.; Lynch, Nancy
The Input/Output (I/O) automaton model developed by Lynch and Tuttle models components in asynchronous concurrent systems as labeled transition systems.  IOA is a precise language for describing I/O automata and for stating their properties.  A toolset is being developed for IOA to support distributed software design and implementation.  One of the tools consists of a user-assisted code generator from IOA into an imperative programming language such as C or Java.  One aspect that distinguishes IOA programs from programs written in imperative languages is the presence of nondeterminism, which comes in the form of explicit nondeterministic statements and implicit scheduling choices made during execution.  Code generation therefore consists partially of systematically removing all forms of nondeterminism.  In this paper, we describe our approach and design for code generation.  We focus on the issue of removing implicit nondeterminism and specify a transformation on IOA programs that makes all nondeterminism explicit.  The programmer can then replace all explicit nondeterminism with deterministic statements prior to code generation.  We also describe this transformation at a semantic level, i.e., at the level of the I/O automaton mathematical model.  We show that the transformation defined at the IOA level conforms to the one at the semantic level.
</description>
<pubDate>Mon, 19 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30486</guid>
<dc:date>2004-07-19T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamically Resizable Static CMOS Logic for Fine-Grain Leakage</title>
<link>https://hdl.handle.net/1721.1/30485</link>
<description>Dynamically Resizable Static CMOS Logic for Fine-Grain Leakage
Heo, Seongmoo; Asanovic, Krste
Digital circuits often have a critical path that runs through a small subset of the component subblocks, but where the path changes dynamically during operation. Dynamically resizable static CMOS (DRCMOS) logic is proposed as a fine-grain leakage reduction technique that dynamically downsizes transistors in inactive subblocks while maintaining speed in subblocks along the current critical path. A 64-entry register free list and a 64-entry pick-two arbiter are used to evaluate DRCMOS. DRCMOS is shown to give a 50% reduction in total power for equal delay in a 70 nm technology.
</description>
<pubDate>Mon, 12 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30485</guid>
<dc:date>2004-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>A Constant-Factor Approximation Algorithm for Embedding Unweighted Graphs into Trees</title>
<link>https://hdl.handle.net/1721.1/30484</link>
<description>A Constant-Factor Approximation Algorithm for Embedding Unweighted Graphs into Trees
Badoiu, Mihai; Indyk, Piotr; Sidiropoulos, Anastasios
We present a constant-factor approximation algorithm for computing an embedding of the shortest path metric of an unweighted graph into a tree that minimizes the multiplicative distortion.
</description>
<pubDate>Mon, 05 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30484</guid>
<dc:date>2004-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Approximations of the Frequency Moments</title>
<link>https://hdl.handle.net/1721.1/30483</link>
<description>Optimal Approximations of the Frequency Moments
Indyk, Piotr; Woodruff, David
We give a one-pass, O~(m^{1-2/k})-space algorithm for estimating the k-th frequency moment of a data stream for any real k&gt;2. Together with known lower bounds, this resolves the main problem left open by Alon, Matias, Szegedy, STOC'96. Our algorithm enables deletions as well as insertions of stream elements.
</description>
<pubDate>Fri, 02 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30483</guid>
<dc:date>2004-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Contextual models for object detection using boosted random fields</title>
<link>https://hdl.handle.net/1721.1/30482</link>
<description>Contextual models for object detection using boosted random fields
Torralba, Antonio; Murphy, Kevin P.; Freeman, William T.
We seek to both detect and segment objects in images.  To exploit both local image data as well as contextual information, we introduce Boosted Random Fields (BRFs), which uses Boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs, we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in terms of accuracy and speed, by using a computational cascade. We apply our system to detect stuff and things in office and street scenes.
</description>
<pubDate>Fri, 25 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30482</guid>
<dc:date>2004-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>Middleboxes No Longer Considered Harmful</title>
<link>https://hdl.handle.net/1721.1/30481</link>
<description>Middleboxes No Longer Considered Harmful
Walfish, Michael; Stribling, Jeremy; Krohn, Maxwell; Balakrishnan, Hari; Morris, Robert; Shenker, Scott
Intermediate network elements, such as network address translators (NATs), firewalls, and transparent caches are now commonplace. The usual reaction in the network architecture community to these so-called middleboxes is a combination of scorn (because they violate important architectural principles) and dismay (because these violations make the Internet less flexible). While we acknowledge these concerns, we also recognize that middleboxes have become an Internet fact of life for important reasons. To retain their functions while eliminating their dangerous side-effects, we propose an extension to the Internet architecture, called the Delegation-Oriented Architecture (DOA), that not only allows, but also facilitates, the deployment of middleboxes. DOA involves two relatively modest changes to the current architecture: (a) a set of references that are carried in packets and serve as persistent host identifiers and (b) a way to resolve these references to delegates chosen by the referenced host.
</description>
<pubDate>Thu, 24 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30481</guid>
<dc:date>2004-06-24T00:00:00Z</dc:date>
</item>
<item>
<title>How People Re-find Information When the Web Changes</title>
<link>https://hdl.handle.net/1721.1/30480</link>
<description>How People Re-find Information When the Web Changes
Teevan, Jaime
This paper investigates how people return to information in a dynamic information environment.  For example, a person might want to return to Web content via a link encountered earlier on a Web page, only to learn that the link has since been removed.  Changes can benefit users by providing new information, but they hinder returning to previously viewed information.  The observational study presented here analyzed instances, collected via a Web search, where people expressed difficulty re-finding information because of changes to the information or its environment.  A number of interesting observations arose from this analysis, including that the path originally taken to get to the information target appeared important in its re-retrieval, whereas, surprisingly, the temporal aspects of when the information was seen before were not.  While people expressed frustration when problems arose, an explanation of why the change had occurred was often sufficient to allay that frustration, even in the absence of a solution.  The implications of these observations for systems that support re-finding in dynamic environments are discussed.
</description>
<pubDate>Fri, 18 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30480</guid>
<dc:date>2004-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Building Grounded Abstractions for Artificial Intelligence Programming</title>
<link>https://hdl.handle.net/1721.1/30479</link>
<description>Building Grounded Abstractions for Artificial Intelligence Programming
Hearn, Robert A.
Most Artificial Intelligence (AI) work can be characterized as either ``high-level'' (e.g., logical, symbolic) or ``low-level'' (e.g., connectionist networks, behavior-based robotics). Each approach suffers from particular drawbacks. High-level AI uses abstractions that often have no relation to the way real, biological brains work. Low-level AI, on the other hand, tends to lack the powerful abstractions that are needed to express complex structures and relationships. I have tried to combine the best features of both approaches, by building a set of programming abstractions defined in terms of simple, biologically plausible components. At the ``ground level'', I define a primitive, perceptron-like computational unit. I then show how more abstract computational units may be implemented in terms of the primitive units, and show the utility of the abstract units in sample networks. The new units make it possible to build networks using concepts such as long-term memories, short-term memories, and frames. As a demonstration of these abstractions, I have implemented a simulator for ``creatures'' controlled by a network of abstract units. The creatures exist in a simple 2D world, and exhibit behaviors such as catching mobile prey and sorting colored blocks into matching boxes. This program demonstrates that it is possible to build systems that can interact effectively with a dynamic physical environment, yet use symbolic representations to control aspects of their behavior.
</description>
<pubDate>Wed, 16 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30479</guid>
<dc:date>2004-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>Versatility and VersaBench: A New Metric and a Benchmark Suite for Flexible Architectures</title>
<link>https://hdl.handle.net/1721.1/30478</link>
<description>Versatility and VersaBench: A New Metric and a Benchmark Suite for Flexible Architectures
Rabbah, Rodric M.; Bratt, Ian; Asanovic, Krste; Agarwal, Anant
For the last several decades, computer architecture research has largely benefited from, and continues to be driven by ad-hoc benchmarking. Often the benchmarks are selected to represent workloads that architects believe should run on the computational platforms they design. For example, benchmark suites such as SPEC, Winstone, and MediaBench, which represent workstation, desktop and media workloads respectively, have influenced computer architecture innovation for the last decade. Recently, advances in VLSI technology have created an increasing interest within the computer architecture community to build a new kind of processor that is more flexible than extant general purpose processors. Such new processor architectures must efficiently support a broad class of applications including graphics, networking, and signal processing in addition to the traditional desktop workloads. Thus, given the new focus on flexibility demands, a new benchmark suite and new metrics are necessary to accurately reflect the goals of the architecture community. This paper thus proposes VersaBench as a new benchmark suite, and a new Versatility measure to characterize architectural flexibility, or in other words, the ability of the architecture to effectively execute a wide array of workloads. The benchmark suite is composed of applications drawn from several domains including desktop, server, stream, and bit-level processing. The Versatility measure is a single scalar metric inspired by the SPEC paradigm. It normalizes processor performance on each benchmark by that of the highest-performing machine for that application. This paper reports the measured versatility for several existing processors, as well as for some new and emerging research processors. The benchmark suite is freely distributed, and we are actively cataloging and sharing results for various reference processors.
</description>
<pubDate>Mon, 14 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30478</guid>
<dc:date>2004-06-14T00:00:00Z</dc:date>
</item>
<item>
<title>Scalar Operand Networks: Design, Implementation, and Analysis</title>
<link>https://hdl.handle.net/1721.1/30477</link>
<description>Scalar Operand Networks: Design, Implementation, and Analysis
Taylor, Michael Bedford; Lee, Walter; Amarasinghe, Saman; Agarwal, Anant
The bypass paths and multiported register files in microprocessors serve as an implicit interconnect to communicate operand values among pipeline stages and multiple ALUs. Previous superscalar designs implemented this interconnect using centralized structures that do not scale with increasing ILP demands. In search of scalability, recent microprocessor designs in industry and academia exhibit a trend toward distributed resources such as partitioned register files, banked caches, multiple independent compute pipelines, and even multiple program counters. Some of these partitioned microprocessor designs have begun to implement bypassing and operand transport using point-to-point interconnects. We call interconnects optimized for scalar data transport, whether centralized or distributed, scalar operand networks. Although these networks share many of the challenges of multiprocessor networks, such as scalability and deadlock avoidance, they have many unique requirements, including ultra-low latencies (a few cycles versus tens of cycles) and ultra-fast operation-operand matching. This paper discusses the unique properties of scalar operand networks (SONs), examines alternative ways of implementing them, and introduces the AsTrO taxonomy to distinguish between them. It discusses the design of two alternative networks in the context of the Raw microprocessor, and presents detailed timing, area and energy statistics for a real implementation. The paper also presents a 5-tuple performance model for SONs and analyzes their performance sensitivity to network properties for ILP workloads.
</description>
<pubDate>Tue, 08 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30477</guid>
<dc:date>2004-06-08T00:00:00Z</dc:date>
</item>
<item>
<title>Deionizer: A Tool for Capturing and Embedding I/O Cells</title>
<link>https://hdl.handle.net/1721.1/30476</link>
<description>Deionizer: A Tool for Capturing and Embedding I/O Cells
Taylor, Michael Bedford
In this paper, we introduce the concept of a deionizer. A deionizer is a special type of partial evaluator whose purpose is to create a new version of a program that can run without accessing a partial set of I/O resources. Although a deionizer can be used for application embedding, this short paper addresses the use of deionization for improving benchmark accuracy. The paper briefly discusses the key ideas and then explains the implementation and use of the MIT deionizer. This deionizer was used to produce the results for a recent conference paper that compares the Raw processor to a Pentium III.
</description>
<pubDate>Mon, 07 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30476</guid>
<dc:date>2004-06-07T00:00:00Z</dc:date>
</item>
<item>
<title>BioJADE: A Design and Simulation Tool for Synthetic Biological Systems</title>
<link>https://hdl.handle.net/1721.1/30475</link>
<description>BioJADE: A Design and Simulation Tool for Synthetic Biological Systems
Goler, Jonathan A.
The next generations of both biological engineering and computer engineering demand that control be exerted at the molecular level. Creating, characterizing and controlling synthetic biological systems may provide us with the ability to build cells that are capable of a plethora of activities, from computation to synthesizing nanostructures. To develop these systems, we must have a set of tools not only for synthesizing systems, but also designing and simulating them. The BioJADE project provides a comprehensive, extensible design and simulation platform for synthetic biology. BioJADE is a graphical design tool built in Java, utilizing a database back end, and supports a range of simulations using an XML communication protocol. BioJADE currently supports a library of over 100 parts with which it can compile designs into actual DNA, and then generate synthesis instructions to build the physical parts. The BioJADE project contributes several tools to Synthetic Biology. BioJADE in itself is a powerful tool for synthetic biology designers. Additionally, we developed and now make use of a centralized BioBricks repository, which enables the sharing of BioBrick components between researchers, and vastly reduces the barriers to entry for aspiring Synthetic Biologists.
</description>
<pubDate>Fri, 28 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30475</guid>
<dc:date>2004-05-28T00:00:00Z</dc:date>
</item>
<item>
<title>Data Structure Repair Using Goal-Directed Reasoning</title>
<link>https://hdl.handle.net/1721.1/30474</link>
<description>Data Structure Repair Using Goal-Directed Reasoning
Demsky, Brian; Rinard, Martin
Model-based data structure repair is a promising technique for enabling programs to continue to execute successfully in the face of otherwise fatal data structure corruption errors. Previous research in this field relied on the developer to write a specification to explicitly translate model repairs into concrete data structure repairs, raising the possibility of 1) incorrect translations causing the supposedly repaired concrete data structures to be inconsistent, and 2) repaired models with no corresponding concrete data structure representation. We present a new repair algorithm that uses goal-directed reasoning to automatically translate model repairs into concrete data structure repairs. This new repair algorithm eliminates the possibility of incorrect translations and repaired models with no corresponding representation as concrete data structures. Unlike our old algorithm, our new algorithm can also repair linked data structures such as a list or a tree.
</description>
<pubDate>Tue, 18 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30474</guid>
<dc:date>2004-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Commonsense Categorical Knowledge in a Thread Memory System</title>
<link>https://hdl.handle.net/1721.1/30473</link>
<description>Learning Commonsense Categorical Knowledge in a Thread Memory System
Stamatoiu, Oana L.
If we are to understand how we can build machines capable of broad-purpose learning and reasoning, we must first aim to build systems that can represent, acquire, and reason about the kinds of commonsense knowledge that we humans have about the world. This endeavor suggests steps such as identifying the kinds of knowledge people commonly have about the world, constructing suitable knowledge representations, and exploring the mechanisms that people use to make judgments about the everyday world. In this work, I contribute to these goals by proposing an architecture for a system that can learn commonsense knowledge about the properties and behavior of objects in the world. The architecture described here augments previous machine learning systems in four ways: (1) it relies on a seven-dimensional notion of context, built from information recently given to the system, to learn and reason about objects' properties; (2) it has multiple methods that it can use to reason about objects, so that when one method fails, it can fall back on others; (3) it illustrates the usefulness of reasoning about objects by thinking about their similarity to other, better known objects, and by inferring properties of objects from the categories that they belong to; and (4) it represents an attempt to build an autonomous learner and reasoner that sets its own goals for learning about the world and deduces new facts by reflecting on its acquired knowledge. This thesis describes this architecture, as well as a first implementation, which can learn from sentences such as ``A blue bird flew to the tree'' and ``The small bird flew to the cage'' that birds can fly. One of the main contributions of this work lies in suggesting a further set of salient ideas about how we can build broader-purpose commonsense artificial learners and reasoners.
</description>
<pubDate>Tue, 18 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30473</guid>
<dc:date>2004-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Temporal Planning with Complex Processes</title>
<link>https://hdl.handle.net/1721.1/30472</link>
<description>Generative Temporal Planning with Complex Processes
Kennell, Jonathan
Autonomous vehicles are increasingly being used in mission-critical applications, and robust methods are needed for controlling these inherently unreliable and complex systems.  This thesis advocates the use of model-based programming, which allows mission designers to program autonomous missions at the level of a coach or wing commander.  To support such a system, this thesis presents the Spock generative planner.  To generate plans, Spock must be able to piece together vehicle commands and team tactics that have a complex behavior represented by concurrent processes.  This is in contrast to traditional planners, whose operators represent simple atomic or durative actions.  Spock represents operators using the RMPL language, which describes behaviors using parallel and sequential compositions of state and activity episodes.  RMPL is useful for controlling mobile autonomous missions because it allows mission designers to quickly encode expressive activity models using object-oriented design methods and an intuitive set of activity combinators.  Spock also is significant in that it uniformly represents operators and plan-space processes in terms of Temporal Plan Networks, which support temporal flexibility for robust plan execution.  Finally, Spock is implemented as a forward progression optimal planner that walks monotonically forward through plan processes, closing any open conditions and resolving any conflicts.  This thesis describes the Spock algorithm in detail, along with example problems and test results.
</description>
<pubDate>Tue, 18 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30472</guid>
<dc:date>2004-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying the Correctness of Wide-Area Internet Routing</title>
<link>https://hdl.handle.net/1721.1/30471</link>
<description>Verifying the Correctness of Wide-Area Internet Routing
Feamster, Nick; Balakrishnan, Hari
Several studies have shown that wide-area Internet routing is fragile, with failures occurring for a variety of reasons. Routing fragility is largely due to the flexible and powerful ways in which BGP can be configured to perform various tasks, which range from implementing the policies of commercial relationships to configuring backup paths. Configuring routers in an AS is like writing a distributed program, and BGP's flexible configuration and today's relatively low-level configuration languages make the process error-prone. The primary method used by operators to determine whether their complex configurations are correct is to try them out in operation. We believe that there is a need for a systematic approach to verifying router configurations before they are deployed. This paper develops a static analysis framework for configuration checking, and uses it in the design of rcc, a ``router configuration checker''. rcc takes as input a set of router configurations and flags anomalies and errors, based on a set of well-defined correctness conditions. We have used rcc to check BGP configurations from 9 operational networks, testing nearly 700 real-world router configurations in the process. Every network we analyzed had configuration errors, some of which were potentially serious and had previously gone unnoticed. Our analysis framework and results also suggest ways in which BGP and configuration languages should be improved. rcc has also been downloaded by 30 network operators to date.
</description>
<pubDate>Mon, 17 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30471</guid>
<dc:date>2004-05-17T00:00:00Z</dc:date>
</item>
<item>
<title>A Combined Pointer and Purity Analysis for Java Programs</title>
<link>https://hdl.handle.net/1721.1/30470</link>
<description>A Combined Pointer and Purity Analysis for Java Programs
Salcianu, Alexandru; Rinard, Martin
We present a new method purity analysis for Java programs. A method is pure if it does not mutate any location that exists in the program state right before method invocation. Our analysis is built on top of a combined pointer and escape analysis for Java programs and is capable of determining that methods are pure even when the methods do heap mutation, provided that the mutation affects only objects created after the beginning of the method. Because our analysis extracts a precise representation of the region of the heap that each method may access, it is able to provide useful information even for methods with externally visible side effects. In particular, it can recognize read-only parameters (a parameter is read-only if the method does not mutate any objects transitively reachable from the parameter) and safe parameters (a parameter is safe if it is read-only and the method does not create any new externally visible paths in the heap to objects transitively reachable from the parameter). The analysis can also generate regular expressions that characterize the externally visible heap locations that the method mutates. We have implemented our analysis and used it to analyze several data structure implementations. Our results show that our analysis effectively recognizes a variety of pure methods, including pure methods that allocate and mutate complex auxiliary data structures. Even if the methods are not pure, our analysis can provide information which may enable developers to usefully bound the potential side effects of the method.
</description>
<pubDate>Mon, 17 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30470</guid>
<dc:date>2004-05-17T00:00:00Z</dc:date>
</item>
<item>
<title>Video Matching</title>
<link>https://hdl.handle.net/1721.1/30469</link>
<description>Video Matching
Sand, Peter; Teller, Seth
This paper describes a method for bringing two videos (recorded at different times) into spatiotemporal alignment, then comparing and combining corresponding pixels for applications such as background subtraction, compositing, and increasing dynamic range. We align a pair of videos by searching for frames that best match according to a robust image registration process. This process uses locally weighted regression to interpolate and extrapolate high-likelihood image correspondences, allowing new correspondences to be discovered and refined. Image regions that cannot be matched are detected and ignored, providing robustness to changes in scene content and lighting, which allows a variety of new applications.
</description>
<pubDate>Tue, 11 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30469</guid>
<dc:date>2004-05-11T00:00:00Z</dc:date>
</item>
<item>
<title>On Verifying a File System Implementation</title>
<link>https://hdl.handle.net/1721.1/30468</link>
<description>On Verifying a File System Implementation
Arkoudas, Konstantine; Zee, Karen; Kuncak, Viktor; Rinard, Martin
We present a correctness proof for a basic file system implementation. This implementation contains key elements of standard Unix file systems such as inodes and fixed-size disk blocks. We prove the implementation correct by establishing a simulation relation between the specification of the file system (which models the file system as an abstract map from file names to sequences of bytes) and its implementation (which uses fixed-size disk blocks to store the contents of the files). We used the Athena proof checker to represent and validate our proof. Our experience indicates that Athena's use of block-structured natural deduction, support for structural induction and proof abstraction, and seamless connection with high-performance automated theorem provers were essential to our ability to successfully manage a proof of this size.
</description>
<pubDate>Thu, 06 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30468</guid>
<dc:date>2004-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>Can Basic ML Techniques Illuminate Rateless Erasure Codes?</title>
<link>https://hdl.handle.net/1721.1/30467</link>
<description>Can Basic ML Techniques Illuminate Rateless Erasure Codes?
Gupta, Anjali; Krohn, Maxwell; Walfish, Michael
The recently developed rateless erasure codes are a near-optimal channel coding technique that guarantees low overhead and fast decoding. The underlying theory, and current implementations, of these codes assume that a network transmitter encodes according to a pre-specified probability distribution. In this report, we use basic Machine Learning techniques to try to understand what happens when this assumption is false. We train several classes of models using certain features that describe the empirical distribution realized at a network receiver, and we investigate whether these models can ``learn'' to predict whether a given encoding will require extra overhead. Our results are mixed.
</description>
<pubDate>Wed, 05 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30467</guid>
<dc:date>2004-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Statistical and Information Theoretic Framework for Multi-modal Image Registration</title>
<link>https://hdl.handle.net/1721.1/30466</link>
<description>A Unified Statistical and Information Theoretic Framework for Multi-modal Image Registration
Zollei, Lilla; Fisher, John; Wells, William
We formulate and interpret several multi-modal registration methods in the context of a unified statistical and information theoretic framework. A unified interpretation clarifies the implicit assumptions of each method, yielding a better understanding of their relative strengths and weaknesses. Additionally, we discuss a generative statistical model from which we derive a novel analysis tool, the "auto-information function", as a means of assessing and exploiting the common spatial dependencies inherent in multi-modal imagery. We analytically derive useful properties of the "auto-information" as well as verify them empirically on multi-modal imagery. Among the useful aspects of the "auto-information function" is that it can be computed from imaging modalities independently and it allows one to decompose the search space of registration problems.
</description>
<pubDate>Wed, 28 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30466</guid>
<dc:date>2004-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Rotation Invariant Object Recognition from One Training Example</title>
<link>https://hdl.handle.net/1721.1/30465</link>
<description>Rotation Invariant Object Recognition from One Training Example
Yokono, Jerry Jun; Poggio, Tomaso
Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. Such a descriptor--based on a set of oriented Gaussian derivative filters-- is used in our recognition system. We report here an evaluation of several techniques for orientation estimation to achieve rotation invariance of the descriptor. We also describe feature selection based on a single training image. Virtual images are generated by rotating and rescaling the image and robust features are selected. The results confirm robust performance in cluttered scenes, in the presence of partial occlusions, and when the object is embedded in different backgrounds.
</description>
<pubDate>Tue, 27 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30465</guid>
<dc:date>2004-04-27T00:00:00Z</dc:date>
</item>
<item>
<title>Light-Weight Leases for Storage-Centric Coordination</title>
<link>https://hdl.handle.net/1721.1/30464</link>
<description>Light-Weight Leases for Storage-Centric Coordination
Chockler, Gregory; Malkhi, Dahlia
We propose light-weight lease primitives to leverage fault-tolerant coordination among clients accessing a shared storage infrastructure (such as network attached disks or storage servers). In our approach, leases are implemented from the very shared data that they protect. That is, there is no global lease manager; there is a lease per data item (e.g., a file, a directory, a disk partition, etc.) or a collection thereof. Our lease primitives are useful for facilitating exclusive access to data in systems satisfying certain timeliness constraints. In addition, they can be utilized as a building block for implementing dependable services resilient to timing failures. In particular, we show a simple lease-based solution for fault-tolerant Consensus, which is a benchmark distributed coordination problem.
</description>
<pubDate>Thu, 22 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30464</guid>
<dc:date>2004-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Cascading Regularized Classifiers</title>
<link>https://hdl.handle.net/1721.1/30463</link>
<description>Cascading Regularized Classifiers
Perez-Breva, Luis
Among the various methods to combine classifiers, Boosting was originally conceived as a stratagem to cascade pairs of classifiers through their disagreement. I recover the same idea from the work of Niyogi et al. to show how to loosen the requirement of weak learnability, central to Boosting, and introduce a new cascading stratagem. The paper concludes with an empirical study of an implementation of the cascade that, under assumptions that mirror the conditions imposed by Viola and Jones in [VJ01], preserves the generalization ability of boosting.
</description>
<pubDate>Wed, 21 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30463</guid>
<dc:date>2004-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>M&amp;M: A Passive Toolkit for Measuring, Correlating, and Tracking Path Characteristics</title>
<link>https://hdl.handle.net/1721.1/30462</link>
<description>M&amp;M: A Passive Toolkit for Measuring, Correlating, and Tracking Path Characteristics
Katti, Sachin; Katabi, Dina; Kohler, Eddie; Strauss, Jacob
This paper presents M&amp;M, a passive measurement toolkit suitable for large-scale studies of Internet path characteristics. The multiQ tool uses equally-spaced mode gaps in TCP flows’ packet interarrival time distributions to detect multiple bottleneck capacities and their relative order. Unlike previous tools, multiQ can discover up to three bottlenecks from the tcpdump trace of a single flow, and can work with acknowledgment as well as data interarrivals. We also describe the mystery tool, a simple TCP loss event, packet loss, and RTT analyzer designed to work in concert with multiQ. The M&amp;M toolkit can measure simple path properties; correlate different types of measurement of the same path, producing new kinds of results; and, because M&amp;M is passive, it can use publicly-available traces to track the value of a measurement over multiple years. We validate our tools in depth using the RON overlay network [4], which provides more than 400 heterogeneous Internet paths and detailed information about their characteristics. We compare multiQ with Nettimer and Pathrate, two other capacity measurement tools, in the first wide-area, real-world validation of capacity measurement techniques. Each tool accurately discovers minimum capacities (85% of measurements are within 10% of the true value); multiQ additionally discovers multiple bottlenecks and their orderings. We also use our toolkit to perform several measurement studies using a reservoir of 375 million traced packets spanning the last two years. Among the results of these studies are that bottleneck capacity on our traced links has gone up by around an order of magnitude from 2002 to 2004, and that differences in levels of statistical multiplexing on 10 Mb/s and 100 Mb/s bottleneck links result in flows over those links having similar fair-share bandwidths.
</description>
<pubDate>Wed, 14 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30462</guid>
<dc:date>2004-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>A 1020-Node Modular Microphone Array and Beamformer for Intelligent Computing Spaces</title>
<link>https://hdl.handle.net/1721.1/30461</link>
<description>A 1020-Node Modular Microphone Array and Beamformer for Intelligent Computing Spaces
Weinstein, Eugene; Steele, Kenneth; Agarwal, Anant; Glass, James
Ubiquitous computing environments are characterized by an unbounded amount of noise and crosstalk. In these environments, traditional methods of sound capture are insufficient, and array microphones are needed in order to obtain a clean recording of desired speech. In this work, we have designed, implemented, and tested LOUD, a novel 1020-node microphone array utilizing the Raw tile parallel processor architecture for computation. To the best of our knowledge, this is currently the largest microphone array in the world. We have explored the uses of the array within ubiquitous computing scenarios by implementing an acoustic beamforming algorithm for sound source amplification in a noisy environment, and have obtained preliminary results demonstrating the efficacy of the array. Going from one to 1020 microphones, we have shown a 13.7 dB increase in peak SNR on a representative utterance, an 87.2% drop in word error rate with an interferer present, and an 89.6% drop in WER without an interferer.
</description>
<pubDate>Wed, 14 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30461</guid>
<dc:date>2004-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Contextual Influences on Saliency</title>
<link>https://hdl.handle.net/1721.1/30460</link>
<description>Contextual Influences on Saliency
Torralba, Antonio
This article describes a model for including scene/context priors in attention guidance. In the proposed scheme, visual context information can be made available early in the visual processing chain in order to modulate the saliency of image regions and to provide an efficient shortcut for object detection and recognition. The scene is represented by means of a low-dimensional global description obtained from low-level features. The global scene features are then used to predict the probability of presence of the target object in the scene, as well as its location and scale, before exploring the image.
</description>
<pubDate>Wed, 14 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30460</guid>
<dc:date>2004-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Quantitative Comparison of Reconfigurable, Tiled, and Conventional Architectures on Bit-level Computation</title>
<link>https://hdl.handle.net/1721.1/30459</link>
<description>A Quantitative Comparison of Reconfigurable, Tiled, and Conventional Architectures on Bit-level Computation
Wentzlaff, David; Agarwal, Anant
General purpose computing architectures are being called on to work on a more diverse application mix every day. This has been fueled by the need for reduced time to market and the economies of scale that are the hallmarks of software on general purpose microprocessors. As this application mix expands, application domains such as bit-level computation, which has primarily been the domain of ASICs and FPGAs, will need to be effectively handled by general purpose hardware. Examples of bit-level applications include Ethernet framing, forward error correction encoding/decoding, and efficient state machine implementation. In this paper we compare how differing computational structures such as ASICs, FPGAs, tiled architectures, and superscalar microprocessors are able to compete on bit-level communication applications. A quantitative comparison in terms of absolute performance and performance per area is presented. These results show that although modest gains (2-3x) in absolute performance can be achieved when using FPGAs versus tuned microprocessor implementations, it is the significantly larger gains (2-3 orders of magnitude) that can be achieved in performance per area that will motivate work on supporting bit-level computation in a general purpose fashion in the future.
</description>
<pubDate>Tue, 13 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30459</guid>
<dc:date>2004-04-13T00:00:00Z</dc:date>
</item>
<item>
<title>Long-Lived Rambo: Trading Knowledge for Communication</title>
<link>https://hdl.handle.net/1721.1/30458</link>
<description>Long-Lived Rambo: Trading Knowledge for Communication
Georgiou, Chryssis; Musial, Peter M.; Shvartsman, Alexander A.
Shareable data services providing consistency guarantees, such as atomicity (linearizability), make building distributed systems easier. However, combining linearizability with efficiency in practical algorithms is difficult. A reconfigurable linearizable data service, called Rambo, was developed by Lynch and Shvartsman. This service guarantees consistency under dynamic conditions involving asynchrony, message loss, node crashes, and new node arrivals. The specification of the original algorithm is given at an abstract level aimed at concise presentation and formal reasoning about correctness. The algorithm propagates information by means of gossip messages. If the service is in use for a long time, the size and the number of gossip messages may grow without bound. This paper presents a consistent data service for long-lived objects that improves on Rambo in two ways: it includes an incremental communication protocol and a leave service. The new protocol takes advantage of local knowledge, and carefully manages the size of messages by removing redundant information, while the leave service allows nodes to leave the system gracefully. The new algorithm is formally proved correct by forward simulation using levels of abstraction. An experimental implementation of the system was developed for networks-of-workstations. The paper also includes selected analytical and preliminary empirical results that illustrate the advantages of the new algorithm.
</description>
<pubDate>Mon, 12 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30458</guid>
<dc:date>2004-04-12T00:00:00Z</dc:date>
</item>
<item>
<title>On Generalized Records and Spatial Conjunction in Role Logic</title>
<link>https://hdl.handle.net/1721.1/30457</link>
<description>On Generalized Records and Spatial Conjunction in Role Logic
Kuncak, Viktor; Rinard, Martin
We have previously introduced role logic as a notation for describing properties of relational structures in shape analysis, databases, and knowledge bases. A natural fragment of role logic corresponds to two-variable logic with counting and is therefore decidable. We show how to use role logic to describe open and closed records, as well as the dual of records, inverse records. We observe that the spatial conjunction operation of separation logic naturally models record concatenation. Moreover, we show how to eliminate the spatial conjunction of formulas of quantifier depth one in first-order logic with counting. As a result, allowing spatial conjunction of formulas of quantifier depth one preserves the decidability of two-variable logic with counting. This result applies to the two-variable role logic fragment as well. The resulting logic smoothly integrates type system and predicate calculus notation, and can be viewed as a natural generalization of the notation for constraints arising in role analysis and similar shape analysis approaches.
</description>
<pubDate>Tue, 06 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30457</guid>
<dc:date>2004-04-06T00:00:00Z</dc:date>
</item>
<item>
<title>Converting Java Programs to Use Generic Libraries</title>
<link>https://hdl.handle.net/1721.1/30456</link>
<description>Converting Java Programs to Use Generic Libraries
Donovan, Alan; Kiezun, Adam; Tschantz, Matthew S.; Ernst, Michael D.
Java 1.5 will include a type system (called JSR-14) that supports parametric polymorphism, or generic classes. This will bring many benefits to Java programmers, not least because current Java practice makes heavy use of logically-generic classes, including container classes. Translation of Java source code into semantically equivalent JSR-14 source code requires two steps: parameterization (adding type parameters to class definitions) and instantiation (adding the type arguments at each use of a parameterized class). Parameterization need be done only once for a class, whereas instantiation must be performed for each client, of which there are potentially many more. Therefore, this work focuses on the instantiation problem. We present a technique to determine sound and precise JSR-14 types at each use of a class for which a generic type specification is available. Our approach uses a precise and context-sensitive pointer analysis to determine possible types at allocation sites, and a set-constraint-based analysis (that incorporates guarded, or conditional, constraints) to choose consistent types for both allocation and declaration sites. The technique handles all features of the JSR-14 type system, notably the raw types that provide backward compatibility. We have implemented our analysis in a tool that automatically inserts type parameters into Java code, and we report its performance when applied to a number of real-world Java programs.
</description>
<pubDate>Tue, 30 Mar 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30456</guid>
<dc:date>2004-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Problems Caused by Component Upgrades</title>
<link>https://hdl.handle.net/1721.1/30455</link>
<description>Predicting Problems Caused by Component Upgrades
McCamant, Stephen; Ernst, Michael D.
This report presents a new, automatic technique to assess whether replacing a component of a software system by a purportedly compatible component may change the behavior of the system. The technique operates before integrating the new component into the system or running system tests, permitting quicker and cheaper identification of problems. It takes into account the system’s use of the component, because a particular component upgrade may be desirable in one context but undesirable in another. No formal specifications are required, permitting detection of problems due either to errors in the component or to errors in the system. Both external and internal behaviors can be compared, enabling detection of problems that are not immediately reflected in the output. The technique generates an operational abstraction for the old component in the context of the system, and one for the new component in the context of its test suite. An operational abstraction is a set of program properties that generalizes over observed run-time behavior. Modeling a system as divided into modules, and taking into account the control and data flow between the modules, we formulate a logical condition to guarantee that the system’s behavior is preserved across a component replacement. If automated logical comparison indicates that the new component does not make all the guarantees that the old one did, then the upgrade may affect system behavior and should not be performed without further scrutiny. We describe a practical implementation of the technique, incorporating enhancements to handle nonlocal state, non-determinism, and missing test suites, and to distinguish old from new incompatibilities. We evaluate the implementation in case studies using real-world systems, including the Linux C library and 48 Unix programs.
Our implementation identified real incompatibilities among versions of the C library that affected some of the programs, and it approved the upgrades for other programs that were unaffected by the changes. This report is a revision of the first author’s Master’s thesis, submitted January 2004.
</description>
<pubDate>Tue, 30 Mar 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30455</guid>
<dc:date>2004-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of sets of oriented and non-oriented receptive fields as local descriptors</title>
<link>https://hdl.handle.net/1721.1/30454</link>
<description>Evaluation of sets of oriented and non-oriented receptive fields as local descriptors
Yokono, Jerry Jun; Poggio, Tomaso
Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. We propose a performance criterion for a local descriptor based on the tradeoff between selectivity and invariance. In this paper, we evaluate several local descriptors with respect to selectivity and invariance. The descriptors that we evaluated are Gaussian derivatives up to the third order, gray image patches, and Laplacian-based descriptors with either three-scale or single-scale filters. We compare selectivity and invariance to several affine changes such as rotation, scale, brightness, and viewpoint. Comparisons have been made keeping the dimensionality of the descriptors roughly constant. The overall results indicate a good performance by the descriptor based on a set of oriented Gaussian filters. It is interesting that oriented receptive fields similar to the Gaussian derivatives, as well as receptive fields similar to the Laplacian, are found in primate visual cortex.
</description>
<pubDate>Wed, 24 Mar 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30454</guid>
<dc:date>2004-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Unroll Factors Using Nearest Neighbors</title>
<link>https://hdl.handle.net/1721.1/30453</link>
<description>Predicting Unroll Factors Using Nearest Neighbors
Stephenson, Mark; Amarasinghe, Saman
In order to deliver the promise of Moore’s Law to the end user, compilers must make decisions that are intimately tied to a specific target architecture. As engineers add architectural features to increase performance, systems become harder to model, and thus, it becomes harder for a compiler to make effective decisions. Machine-learning techniques may be able to help compiler writers model modern architectures. Because learning techniques can effectively make sense of high dimensional spaces, they can be a valuable tool for clarifying and discerning complex decision boundaries. In our work we focus on loop unrolling, a well-known optimization for exposing instruction level parallelism. Using the Open Research Compiler as a testbed, we demonstrate how one can use supervised learning techniques to model the appropriateness of loop unrolling. We use more than 1,100 loops, drawn from 46 benchmarks, to train a simple learning algorithm to recognize when loop unrolling is advantageous. The resulting classifier can predict with 88% accuracy whether a novel loop (i.e., one that was not in the training set) benefits from loop unrolling. Furthermore, we can predict the optimal or nearly optimal unroll factor 74% of the time. We evaluate the ramifications of these prediction accuracies using the Open Research Compiler (ORC) and the Itanium 2 architecture. The learned classifier yields a 6% speedup (over ORC’s unrolling heuristic) for SPEC benchmarks, and a 7% speedup on the remainder of our benchmarks. Because the learning techniques we employ run very quickly, we were able to exhaustively determine the four most salient loop characteristics for determining when unrolling is beneficial.
</description>
<pubDate>Mon, 22 Mar 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30453</guid>
<dc:date>2004-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>REED: Robust, Efficient Filtering and Event Detection in Sensor Networks</title>
<link>https://hdl.handle.net/1721.1/30452</link>
<description>REED: Robust, Efficient Filtering and Event Detection in Sensor Networks
Abadi, Daniel J.; Madden, Samuel R.
This paper presents an algorithm for handling many types of filters in sensor networks that cannot be expressed using a simple predicate. Specifically, the action of the filter may be predicated on sensor-produced data where an entire table of sensor-data/result-value pairs is needed to resolve the filter. We describe and evaluate three algorithms that can perform these filters by taking advantage of distributed join techniques from databases. Our join-based algorithms are capable of running in very limited amounts of RAM, can distribute the storage burden over groups of nodes, and are tolerant to dropped packets and node failures. REED is thus suitable for a wide range of event-detection applications that traditional sensor network database and data collection systems cannot be used to implement.
</description>
<pubDate>Mon, 22 Mar 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30452</guid>
<dc:date>2004-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>Face processing in humans is compatible with a simple shape-based model of vision</title>
<link>https://hdl.handle.net/1721.1/30451</link>
<description>Face processing in humans is compatible with a simple shape-based model of vision
Riesenhuber; Jarudi; Gilad; Sinha
Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Tso et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ “featurally” are much easier to distinguish when inverted than those that differ “configurally” (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects’ expectations, there is no difference between “featurally” and “configurally” transformed faces in terms of the inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.
</description>
<pubDate>Fri, 05 Mar 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30451</guid>
<dc:date>2004-03-05T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Mobile Nodes for Mobile Ad Hoc Networks</title>
<link>https://hdl.handle.net/1721.1/30450</link>
<description>Virtual Mobile Nodes for Mobile Ad Hoc Networks
Dolev, Shlomi; Gilbert, Seth; Lynch, Nancy A.; Schiller, Elad; Shvarstman, Alex A.; Welch, Jennifer
One of the most significant challenges introduced by mobile networks is the difficulty in coping with the unpredictable movement of mobile nodes. If, instead, the mobile nodes could be programmed to travel through the world in a predictable and useful manner, the task of designing algorithms for mobile networks would be significantly simplified. Alas, users of mobile devices in the real world are not amenable to following instructions as to where their devices may travel. While real mobile nodes may be disinclined to move as desired, we propose executing algorithms on virtual mobile nodes that move in a predetermined, predictable manner through the real world. In this paper, we define the Virtual Mobile Node Abstraction, and present selected algorithms that take advantage of virtual mobile nodes to simply and efficiently perform complicated tasks in highly dynamic, unpredictable mobile ad hoc networks. We then present the Mobile Point Emulator, a new algorithm that implements robust virtual mobile nodes. This algorithm replicates the virtual node at a constantly changing set of real nodes, choosing new replicas as the real nodes move in and out of the path of the virtual node. We claim that the Mobile Point algorithm correctly implements a virtual mobile node, and that it is robust as long as the virtual node travels through well-populated areas of the network.
</description>
<pubDate>Thu, 26 Feb 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30450</guid>
<dc:date>2004-02-26T00:00:00Z</dc:date>
</item>
<item>
<title>GeoQuorums: Implementing Atomic Memory in Mobile Ad Hoc Networks</title>
<link>https://hdl.handle.net/1721.1/30449</link>
<description>GeoQuorums: Implementing Atomic Memory in Mobile Ad Hoc Networks
Dolev, Shlomi; Gilbert, Seth; Lynch, Nancy A.; Shvartsman, Alex A.; Welch, Jennifer L.
We present a new approach, the GeoQuorums approach, for implementing atomic read/write shared memory in mobile ad hoc networks. Our approach is based on associating abstract atomic objects with certain geographic locations. We assume the existence of focal points, geographic areas that are normally “populated” by mobile nodes. For example, a focal point may be a road junction, a scenic observation point, or a water resource in the desert. Mobile nodes that happen to populate a focal point participate in implementing a shared atomic object, using a replicated state machine approach. These objects, which we call focal point objects, are then used to implement atomic read/write operations on a virtual shared object, using our new GeoQuorums algorithm. The GeoQuorums algorithm uses a quorum-based strategy in which each quorum consists of a set of focal point objects. The quorums are used to maintain the consistency of the shared memory and to tolerate limited failures of the focal point objects, caused by depopulation of the corresponding geographic areas. We present a mechanism for changing the set of quorums on the fly, thus improving efficiency. Overall, the new GeoQuorums algorithm efficiently implements read and write operations in a highly dynamic, mobile network.
</description>
<pubDate>Wed, 25 Feb 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30449</guid>
<dc:date>2004-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>MultiChord: A Resilient Namespace Management Protocol</title>
<link>https://hdl.handle.net/1721.1/30448</link>
<description>MultiChord: A Resilient Namespace Management Protocol
Lynch, Nancy; Stoica, Ion
MultiChord is a new variant of the Chord namespace management algorithm [7] that includes lightweight mechanisms for accommodating a limited rate of change, specifically, process joins and failures. This paper describes the algorithm formally and evaluates its performance, using both simulation and analysis. Our main result is that lookups are provably correct (that is, each lookup returns results that are consistent with a hypothetical ideal system that differs from the actual system only in entries corresponding to recent joins and failures) in the presence of a limited rate of change. In particular, if the number of joins and failures that occur during a given time interval in a given region of the system is bounded, then all lookups are correct. A second result is a guaranteed upper bound for the latency of a lookup operation in the absence of any other lookups in the system. Finally, we establish a relationship between the deterministic assumptions of bounded joins and failures and the probabilistic assumptions that are often used to model large-scale networks. In particular, we derive a lower bound for the mean time between two violations of the deterministic assumptions in a steady-state system where joins and failures are modeled by Poisson processes.
</description>
<pubDate>Thu, 19 Feb 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30448</guid>
<dc:date>2004-02-19T00:00:00Z</dc:date>
</item>
<item>
<title>New Architectural Models for Visibly Controllable Computing: The Relevance of Dynamic Object Oriented Architectures and Plan Based Computing Models</title>
<link>https://hdl.handle.net/1721.1/30447</link>
<description>New Architectural Models for Visibly Controllable Computing: The Relevance of Dynamic Object Oriented Architectures and Plan Based Computing Models
Shrobe, Howard; Laddaga, Robert
Traditionally, we've focused on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.
</description>
<pubDate>Mon, 09 Feb 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30447</guid>
<dc:date>2004-02-09T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Availability and Security Through Failure-Oblivious Computing</title>
<link>https://hdl.handle.net/1721.1/30446</link>
<description>Enhancing Availability and Security Through Failure-Oblivious Computing
Rinard, Martin; Cadar, Cristian; Dumitran, Daniel; Roy, Daniel M.; Beebee, William S., Jr.
We present a new technique, failure-oblivious computing, that enables programs to continue to execute through memory errors without memory corruption. Our safe compiler for C inserts checks that dynamically detect invalid memory accesses. Instead of terminating the execution or throwing an exception, the generated code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution. We have applied failure-oblivious computing to a set of widely-used programs that are part of the Linux-based open-source interactive computing environment. Our results show that our techniques 1) make these programs invulnerable to known security attacks that exploit memory errors, and 2) enable the programs to continue to operate successfully to service legitimate requests and satisfy the needs of their users even after attacks trigger their memory errors.
</description>
<pubDate>Fri, 06 Feb 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30446</guid>
<dc:date>2004-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Visual Hulls: Example-Based 3D Shape Estimation from a Single Silhouette</title>
<link>https://hdl.handle.net/1721.1/30445</link>
<description>Virtual Visual Hulls: Example-Based 3D Shape Estimation from a Single Silhouette
Grauman, Kristen; Shakhnarovich, Gregory; Darrell, Trevor
Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard, and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a “virtual visual hull”, an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer a novel single-view input silhouette's virtual visual hull, we search for 3D shapes in the database which are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
</description>
<pubDate>Wed, 28 Jan 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30445</guid>
<dc:date>2004-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Selecting Relevant Genes with a Spectral Approach</title>
<link>https://hdl.handle.net/1721.1/30444</link>
<description>Selecting Relevant Genes with a Spectral Approach
Wolf, Lior; Shashua, Amnon; Mukherjee, Sayan
Array technologies have made it possible to record simultaneously the expression pattern of thousands of genes. A fundamental problem in the analysis of gene expression data is the identification of highly relevant genes that either discriminate between phenotypic labels or are important with respect to the cellular process studied in the experiment: for example, cell cycle or heat shock in yeast experiments, chemical or genetic perturbations of mammalian cell lines, and genes involved in class discovery for human tumors. In this paper we focus on the task of unsupervised gene selection. The problem of selecting a small subset of genes is particularly challenging as the datasets involved are typically characterized by a very small sample size (on the order of a few tens of tissue samples) and by a very large feature space, as the number of genes tends to be in the high thousands. We propose a model-independent approach which scores candidate gene selections using spectral properties of the candidate affinity matrix. The algorithm is very straightforward to implement, yet contains a number of remarkable properties which guarantee consistent sparse selections. To illustrate the value of our approach we applied our algorithm on five different datasets. The first consists of time course data from four well-studied hematopoietic cell lines (HL-60, Jurkat, NB4, and U937). The other four datasets include three well-studied treatment outcomes (large cell lymphoma, childhood medulloblastomas, breast tumors) and one unpublished dataset (lymph status). We compared our approach both with other unsupervised methods (SOM, PCA, GS) and with supervised methods (SNR, RMB, RFE). The results clearly show that our approach considerably outperforms all the other unsupervised approaches in our study, is competitive with supervised methods, and in some cases even outperforms supervised approaches.
</description>
<pubDate>Tue, 27 Jan 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30444</guid>
<dc:date>2004-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Bounds for Mixture Density Estimation</title>
<link>https://hdl.handle.net/1721.1/30443</link>
<description>Risk Bounds for Mixture Density Estimation
Rakhlin, Alexander; Panchenko, Dmitry; Mukherjee, Sayan
In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Estimator (MLE) and the greedy procedure described by Li and Barron. Approximation and estimation bounds are given for these methods. We extend and improve upon the estimation results of Li and Barron, and in particular prove an $O(\frac{1}{\sqrt{n}})$ bound on the estimation error which does not depend on the number of densities in the estimated combination.
</description>
<pubDate>Tue, 27 Jan 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30443</guid>
<dc:date>2004-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>On the difficulty of feature-based attentional modulations in visual object recognition: A modeling study.</title>
<link>https://hdl.handle.net/1721.1/30442</link>
<description>On the difficulty of feature-based attentional modulations in visual object recognition: A modeling study.
Schneider, Robert; Riesenhuber, Maximilian
Numerous psychophysical experiments have shown an important role for attentional modulations in vision. Behaviorally, allocation of attention can improve performance in object detection and recognition tasks. At the neural level, attention increases firing rates of neurons in visual cortex whose preferred stimulus is currently attended to. However, it is not yet known how these two phenomena are linked, i.e., how the visual system could be "tuned" in a task-dependent fashion to improve task performance. To answer this question, we performed simulations with the HMAX model of object recognition in cortex [45]. We modulated firing rates of model neurons in accordance with experimental results about effects of feature-based attention on single neurons and measured changes in the model's performance in a variety of object recognition tasks. It turned out that recognition performance could only be improved under very limited circumstances, and that attentional influences on the process of object recognition per se tend to display a lack of specificity or raise false alarm rates. These observations lead us to postulate a new role for the observed attention-related neural response modulations.
</description>
<pubDate>Wed, 14 Jan 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30442</guid>
<dc:date>2004-01-14T00:00:00Z</dc:date>
</item>
<item>
<title>On Modular Pluggable Analyses Using Set Interfaces</title>
<link>https://hdl.handle.net/1721.1/30441</link>
<description>On Modular Pluggable Analyses Using Set Interfaces
Lam, Patrick; Kuncak, Viktor; Rinard, Martin
We present a technique that enables the focused application of multiple analyses to different modules in the same program. Our research has two goals: 1) to address the scalability limitations of precise analyses by focusing the analysis on only those parts of the program that are relevant to the properties that the analysis is designed to verify, and 2) to enable the application of specialized analyses that verify properties of specific classes of data structures to programs that simultaneously manipulate several different kinds of data structures. In our approach, each module encapsulates a data structure and uses membership in abstract sets to characterize how objects participate in its data structure. Each analysis verifies that the implementation of the module 1) preserves important internal data structure representation invariants and 2) conforms to a specification that uses formulas in a set algebra to characterize the effects of operations on the data structure. The analyses use the common set abstraction to 1) characterize how objects participate in multiple data structures and 2) enable the inter-analysis communication required to verify properties that depend on multiple modules analyzed by different analyses. We characterize the key soundness property that an analysis plugin must satisfy to successfully participate in our system and present several analysis plugins that satisfy this property: a flag plugin that analyzes modules in which abstract set membership is determined by a flag field in each object, and a graph types plugin that analyzes modules in which abstract set membership is determined by reachability properties of objects stored in tree-like data structures.
</description>
<pubDate>Thu, 18 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30441</guid>
<dc:date>2003-12-18T00:00:00Z</dc:date>
</item>
<item>
<title>Rosebud: A Scalable Byzantine-Fault-Tolerant Storage Architecture</title>
<link>https://hdl.handle.net/1721.1/30440</link>
<description>Rosebud: A Scalable Byzantine-Fault-Tolerant Storage Architecture
Rodrigues, Rodrigo; Liskov, Barbara
This paper presents Rosebud, a new Byzantine-fault-tolerant storage architecture designed to be highly scalable and deployable in the wide area. To support massive amounts of data, we need to partition the data among the nodes. To support long-lived operation, we need to allow the set of nodes in the system to change. To our knowledge, we are the first to present a complete design and a running implementation of Byzantine-fault-tolerant storage algorithms for a large-scale, dynamic membership. We deployed Rosebud in a wide-area testbed and ran experiments to evaluate its performance, and our experiments show that it performs well. We show that our storage algorithms perform equivalently to highly optimized replication algorithms in the wide area. We also show that performance degradation is minor when the system reconfigures.
</description>
<pubDate>Wed, 17 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30440</guid>
<dc:date>2003-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>RamboNodes for the Metropolitan Ad Hoc Network</title>
<link>https://hdl.handle.net/1721.1/30439</link>
<description>RamboNodes for the Metropolitan Ad Hoc Network
Beal, Jacob; Gilbert, Seth
We present an algorithm to store data robustly in a large, geographically distributed network by means of localized regions of data storage that move in response to changing conditions. For example, data might migrate away from failures or toward regions of high demand. The PersistentNode algorithm provides this service robustly, but with limited safety guarantees. We use the RAMBO framework to transform PersistentNode into RamboNode, an algorithm that guarantees atomic consistency in exchange for increased cost and decreased liveness. In addition, a half-life analysis of RamboNode shows that it is robust against continuous low-rate failures. Finally, we provide experimental simulations for the algorithm on 2000 nodes, demonstrating how it services requests and examining how it responds to failures.
</description>
<pubDate>Wed, 17 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30439</guid>
<dc:date>2003-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Contour Matching Using Approximate Earth Mover's Distance</title>
<link>https://hdl.handle.net/1721.1/30438</link>
<description>Fast Contour Matching Using Approximate Earth Mover's Distance
Grauman, Kristen; Darrell, Trevor
Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost of matching features from one shape to the features of the other often reveals how similar the two shapes are.  However, due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases.  We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the Earth Mover's Distance (EMD) into a normed space.  Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search.  We demonstrate our shape matching method on databases of 10,000 images of human figures and 60,000 images of handwritten digits.
</description>
<pubDate>Fri, 05 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30438</guid>
<dc:date>2003-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Mobilized ad-hoc networks:  A reinforcement learning approach</title>
<link>https://hdl.handle.net/1721.1/30437</link>
<description>Mobilized ad-hoc networks:  A reinforcement learning approach
Chang, Yu-Han; Ho, Tracey; Kaelbling, Leslie Pack
Research in mobile ad-hoc networks has focused on situations in which nodes have no control over their movements.  We investigate an important but overlooked domain in which nodes do have control over their movements.  Reinforcement learning methods can be used to control both packet routing decisions and node mobility, dramatically improving the connectivity of the network.  We first motivate the problem by presenting theoretical bounds for the connectivity improvement of partially mobile networks and then present superior empirical results under a variety of different scenarios in which the mobile nodes in our ad-hoc network are embedded with adaptive routing policies and learned movement policies.
</description>
<pubDate>Thu, 04 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30437</guid>
<dc:date>2003-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Component based recognition of objects in an office environment</title>
<link>https://hdl.handle.net/1721.1/30436</link>
<description>Component based recognition of objects in an office environment
Morgenstern, Christian; Heisele, Bernd
We present a component-based approach for recognizing objects under large pose changes. From a set of training images of a given object we extract a large number of components which are clustered based on the similarity of their image features and their locations within the object image. The cluster centers build an initial set of component templates from which we select a subset for the final recognizer. In experiments we evaluate different sizes and types of components and three standard techniques for component selection. The component classifiers are finally compared to global classifiers on a database of four objects.
</description>
<pubDate>Fri, 28 Nov 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30436</guid>
<dc:date>2003-11-28T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Longest Increasing and Common Subsequences in Streaming Data</title>
<link>https://hdl.handle.net/1721.1/30435</link>
<description>Finding Longest Increasing and Common Subsequences in Streaming Data
Liben-Nowell, David; Vee, Erik; Zhu, An
In this paper, we present algorithms and lower bounds for the Longest Increasing Subsequence (LIS) and Longest Common Subsequence (LCS) problems in the data streaming model.
</description>
<pubDate>Wed, 26 Nov 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30435</guid>
<dc:date>2003-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>The Satisfiability Threshold of Random 3-SAT Is at Least 3.52</title>
<link>https://hdl.handle.net/1721.1/30434</link>
<description>The Satisfiability Threshold of Random 3-SAT Is at Least 3.52
Hajiaghayi, MohammadTaghi; Sorkin, Gregory B.
We prove that a random 3-SAT instance with clause-to-variable density less than 3.52 is satisfiable with high probability. The proof comes through an algorithm which selects (and sets) a variable depending on its degree and that of its complement.
</description>
<pubDate>Thu, 20 Nov 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30434</guid>
<dc:date>2003-11-20T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Specification-Assisted Error Localization and Correction</title>
<link>https://hdl.handle.net/1721.1/30433</link>
<description>Efficient Specification-Assisted Error Localization and Correction
Demsky, Brian; Cadar, Cristian; Roy, Daniel; Rinard, Martin
We present a new error localization tool, Archie, that accepts a specification of key data structure consistency constraints, then generates an algorithm that checks if the data structures satisfy the constraints. We also present a set of specification analyses and optimizations that (for our benchmark software system) improve the performance of the generated checking algorithm by over a factor of 3,900 as compared with the initial interpreted implementation, enabling Archie to efficiently support interactive debugging. We evaluate Archie's effectiveness by observing the actions of two developer populations (one using Archie, the other using standard error localization techniques) as they attempted to localize and correct three errors in a benchmark software system. With Archie, the developers were able to localize each error in less than 10 minutes and correct each error in (usually much) less than 20 minutes. Without Archie, the developers were, with one exception, unable to locate each error after more than an hour of effort. These results illustrate Archie's potential to substantially improve current error localization and correction techniques.
</description>
<pubDate>Thu, 13 Nov 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30433</guid>
<dc:date>2003-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Internet Routing on Topology-Independent Node Identities</title>
<link>https://hdl.handle.net/1721.1/30432</link>
<description>Scalable Internet Routing on Topology-Independent Node Identities
Ford, Bryan
Unmanaged Internet Protocol (UIP) is a fully self-organizing network-layer protocol that implements scalable identity-based routing. In contrast with address-based routing protocols, which depend for scalability on centralized hierarchical address management, UIP nodes use a flat namespace of cryptographic node identifiers. Node identities can be created locally on demand and remain stable across network changes. Unlike location-independent name services, the UIP routing protocol can stitch together many conventional address-based networks with disjoint or discontinuous address domains, providing connectivity between any pair of participating nodes even when no underlying network provides direct connectivity. The UIP routing protocol works on networks with arbitrary topologies and global traffic patterns, and requires only O(log N) storage per node for routing state, enabling even small, ubiquitous edge devices to act as ad-hoc self-configuring routers. The protocol rapidly recovers from network partitions, bringing every node up-to-date in a multicast-based chain reaction of O(log N) depth. Simulation results indicate that UIP finds routes that are on average within 2x the length of the best possible route.
</description>
<pubDate>Fri, 31 Oct 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30432</guid>
<dc:date>2003-10-31T00:00:00Z</dc:date>
</item>
<item>
<title>Evolving Robocode Tank Fighters</title>
<link>https://hdl.handle.net/1721.1/30431</link>
<description>Evolving Robocode Tank Fighters
Eisenstein, Jacob
In this paper, I describe the application of genetic programming to evolve a controller for a robotic tank in a simulated environment. The purpose is to explore how genetic techniques can best be applied to produce controllers based on subsumption and behavior-oriented languages such as REX.  As part of my implementation, I developed TableRex, a modification of REX that can be expressed on a fixed-length genome.  Using a fixed subsumption architecture of TableRex modules, I evolved robots that beat some of the most competitive hand-coded adversaries.
</description>
<pubDate>Tue, 28 Oct 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30431</guid>
<dc:date>2003-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>On Role Logic</title>
<link>https://hdl.handle.net/1721.1/30430</link>
<description>On Role Logic
Kuncak, Viktor; Rinard, Martin
We present role logic, a notation for describing properties of relational structures in shape analysis, databases, and knowledge bases.  We construct role logic using the ideas of de Bruijn's notation for lambda calculus, an encoding of first-order logic in lambda calculus, and a simple rule for implicit arguments of unary and binary predicates.  The unrestricted version of role logic has the expressive power of first-order logic with transitive closure.  Using a syntactic restriction on role logic formulas, we identify a natural fragment RL^2 of role logic.  We show that the RL^2 fragment has the same expressive power as two-variable logic with counting C^2 and is therefore decidable.  We present a translation of an imperative language into the decidable fragment RL^2, which allows compositional verification of programs that manipulate relational structures.  In addition, we show how RL^2 encodes boolean shape analysis constraints and an expressive description logic.
</description>
<pubDate>Fri, 24 Oct 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30430</guid>
<dc:date>2003-10-24T00:00:00Z</dc:date>
</item>
<item>
<title>A Stream Algorithm for the SVD</title>
<link>https://hdl.handle.net/1721.1/30429</link>
<description>A Stream Algorithm for the SVD
Strumpen, Volker; Hoffmann, Henry; Agarwal, Anant
We present a stream algorithm for the Singular Value Decomposition (SVD) of an M x N matrix A. Our algorithm trades speed of numerical convergence for parallelism, and derives from a one-sided, cyclic-by-rows Hestenes SVD. Experimental results show that we can create O(M) parallelism, at the expense of increasing the computational work by less than a factor of about 2. Our algorithm qualifies as a stream algorithm in that it requires no more than a small, bounded amount of local storage per processor, and its compute efficiency approaches an optimal 100% asymptotically for large numbers of processors and appropriate problem sizes.
</description>
<pubDate>Wed, 22 Oct 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30429</guid>
<dc:date>2003-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Updatable Zero-Knowledge Sets</title>
<link>https://hdl.handle.net/1721.1/30428</link>
<description>Updatable Zero-Knowledge Sets
Liskov, Moses; Micali, Silvio
We build on the work of Micali, Rabin, and Kilian [4] to introduce zero-knowledge sets and databases that may be updated in a desirable way. In particular, in order to make an update the owner of the set must publish a commitment to the update, and update the commitment to the set. The update should take time independent of the size of the set. In addition, the update should not leak which key was added (or removed), or what data is associated with that key. Furthermore, our update will be transparent in that those already possessing a proof of a particular key being present or absent should be able to update their proofs to obtain a valid proof relative to the updated set, except if their proof is relative to the element that was changed.
</description>
<pubDate>Tue, 14 Oct 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30428</guid>
<dc:date>2003-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Correctness Proof for a Byzantine-Fault-Tolerant Read/Write Atomic Memory with Dynamic Replica Membership</title>
<link>https://hdl.handle.net/1721.1/30425</link>
<description>A Correctness Proof for a Byzantine-Fault-Tolerant Read/Write Atomic Memory with Dynamic Replica Membership
Rodrigues, Rodrigo; Liskov, Barbara
We prove correctness of a Byzantine-fault-tolerant replication algorithm for a read/write atomic memory that supports a dynamic replica set.
</description>
<pubDate>Thu, 25 Sep 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30425</guid>
<dc:date>2003-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating shape representation in area V4 with HMAX: Orientation and Grating selectivities</title>
<link>https://hdl.handle.net/1721.1/30424</link>
<description>Investigating shape representation in area V4 with HMAX: Orientation and Grating selectivities
Kouh, Minjoon; Riesenhuber, Maximilian
The question of how shape is represented is of central interest to understanding visual processing in cortex. While tuning properties of the cells in the early part of the ventral visual stream, thought to be responsible for object recognition in the primate, are comparatively well understood, several different theories have been proposed regarding tuning in higher visual areas, such as V4.  We used the model of object recognition in cortex presented by Riesenhuber and Poggio (1999), where more complex shape tuning in higher layers is the result of combining afferent inputs tuned to simpler features, and compared the tuning properties of model units in intermediate layers to those of V4 neurons from the literature.  In particular, we investigated the issue of shape representation in visual areas V1 and V4 using oriented bars and various types of gratings (polar, hyperbolic, and Cartesian), as used in several physiology experiments.  Our computational model was able to reproduce several physiological findings, such as the broadening distribution of the orientation bandwidths and the emergence of a bias toward non-Cartesian stimuli.  Interestingly, the simulation results suggest that some V4 neurons receive input from afferents with spatially separated receptive fields, leading to experimentally testable predictions.  However, the simulations also show that the stimulus set of Cartesian and non-Cartesian gratings is not sufficiently complex to probe shape tuning in higher areas, necessitating the use of more complex stimulus sets.
</description>
<pubDate>Mon, 08 Sep 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30424</guid>
<dc:date>2003-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Vector Parallelism in Software Pipelined Loops</title>
<link>https://hdl.handle.net/1721.1/30423</link>
<description>Exploiting Vector Parallelism in Software Pipelined Loops
Larsen, Sam; Rabbah, Rodric; Amarasinghe, Saman
An emerging trend in processor design is the incorporation of short vector instructions into the ISA.  In fact, vector extensions have appeared in most general-purpose microprocessors.  To utilize these instructions, traditional vectorization technology can be used to identify and exploit data parallelism. In contrast, efficient use of a processor's scalar resources is typically achieved through ILP techniques such as software pipelining.  In order to attain the best performance, it is necessary to utilize both sets of resources.  This paper presents a novel approach for exploiting vector parallelism in a software pipelined loop.  At its core is a method for judiciously partitioning operations between vector and scalar resources.  The proposed algorithm (i) lowers the burden on the scalar resources by offloading computation to the vector functional units, and (ii) partially (or fully) inhibits the optimizations when full vectorization will decrease performance.  This results in better resource usage and allows for software pipelining with shorter initiation intervals.  Although our techniques complement statically scheduled machines most naturally, we believe they are applicable to any architecture that tightly integrates support for ILP and data parallelism. An important aspect of the proposed methodology is its ability to manage explicit communication of operands between vector and scalar instructions.  Our methodology also allows for a natural handling of misaligned vector memory operations.  For architectures that provide hardware support for misaligned references, software pipelining effectively hides the latency of these potentially expensive instructions.  When explicit alignment is required in software, our algorithm accounts for these extra costs and vectorizes only when it is profitable.  Finally, our heuristic can take advantage of alignment information where it is available. We evaluate our methodology using several DSP and SPEC FP benchmarks.
Compared to software pipelining, our approach is able to achieve an average speedup of 1.30x and 1.18x for the two benchmark sets, respectively.
</description>
<pubDate>Fri, 03 Jun 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30423</guid>
<dc:date>2005-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Input/Output Automata: A Formal Model for Dynamic Systems</title>
<link>https://hdl.handle.net/1721.1/30422</link>
<description>Dynamic Input/Output Automata: A Formal Model for Dynamic Systems
Attie, Paul C.; Lynch, Nancy A.
We present a mathematical state-machine model, the Dynamic I/O Automaton (DIOA) model, for defining and analyzing dynamic systems of interacting components. The systems we consider are dynamic in two senses: (1) components can be created and destroyed as computation proceeds, and (2) the events in which the components may participate may change. The new model admits a notion of external system behavior, based on sets of traces. It also features a parallel composition operator for dynamic systems, which respects external behavior, and a notion of simulation from one dynamic system to another, which can be used to prove that one system implements the other.
</description>
<pubDate>Sat, 26 Jul 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30422</guid>
<dc:date>2003-07-26T00:00:00Z</dc:date>
</item>
<item>
<title>On Our Experience with Modular Pluggable Analyses</title>
<link>https://hdl.handle.net/1721.1/30421</link>
<description>On Our Experience with Modular Pluggable Analyses
Lam, Patrick; Kuncak, Viktor; Rinard, Martin
We present a technique that enables the focused application of multiple analyses to different modules in the same program. In our approach, each module encapsulates one or more data structures and uses membership in abstract sets to characterize how objects participate in data structures. Each analysis verifies that the implementation of the module 1) preserves important internal data structure consistency properties and 2) correctly implements an interface that uses formulas in a set algebra to characterize the effects of operations on the encapsulated data structures. Collectively, the analyses use the set algebra to 1) characterize how objects participate in multiple data structures and 2) enable the inter-analysis communication required to verify properties that depend on multiple modules analyzed by different analyses. We have implemented our system and deployed three pluggable analyses into it: a flag analysis for modules in which abstract set membership is determined by a flag field in each object, a plugin for modules that encapsulate linked data structures such as lists and trees, and an array plugin in which abstract set membership is determined by membership in an array. Our experimental results indicate that our approach makes it possible to effectively combine multiple analyses to verify properties that involve objects shared by multiple modules, with each analysis analyzing only those modules for which it is appropriate.
</description>
<pubDate>Mon, 04 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30421</guid>
<dc:date>2004-10-04T00:00:00Z</dc:date>
</item>
<item>
<title>Pyramid Match Kernels: Discriminative Classification with Sets of Image Features</title>
<link>https://hdl.handle.net/1721.1/30420</link>
<description>Pyramid Match Kernels: Discriminative Classification with Sets of Image Features
Grauman, Kristen; Darrell, Trevor
Discriminative learning is challenging when examples are sets of local image features, and the sets vary in cardinality and lack any sort of meaningful ordering.  Kernel-based classification methods can learn complex decision boundaries, but a kernel similarity measure for unordered set inputs must somehow solve for correspondences, generally a computationally expensive task that becomes impractical for large set sizes.  We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space.  This ``pyramid match" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter.  We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels.  We demonstrate our algorithm on object recognition tasks and show it to be dramatically faster than current approaches.
</description>
<pubDate>Thu, 17 Mar 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30420</guid>
<dc:date>2005-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Conformational Search with Constraint Satisfaction</title>
<link>https://hdl.handle.net/1721.1/30419</link>
<description>Systematic Conformational Search with Constraint Satisfaction
Tucker-Kellogg, Lisa
Throughout biological, chemical, and pharmaceutical research, conformational searches are used to explore the possible three-dimensional configurations of molecules.  This thesis describes a new systematic method for conformational search, including an application of the method to determining the structure of a peptide via solid-state NMR spectroscopy.  A separate portion of the thesis is about protein-DNA binding, with a three-dimensional macromolecular structure determined by x-ray crystallography. The search method in this thesis enumerates all conformations of a molecule (at a given level of torsion angle resolution) that satisfy a set of local geometric constraints, such as constraints derived from NMR experiments.  Systematic searches, historically used for small molecules, generally now use some form of divide-and-conquer for application to larger molecules.  Our method can achieve a significant improvement in runtime by making some major and counter-intuitive modifications to traditional divide-and-conquer: (1) OmniMerge divides a polymer into many alternative pairs of subchains and searches all the pairs, instead of simply cutting in half and searching two subchains.  Although the extra searches may appear wasteful, the bottleneck stage of the overall search, which is to re-connect the conformations of the largest subchains, can be greatly accelerated by the availability of alternative pairs of subchains. (2) Propagation of disqualified conformations across overlapping subchains can disqualify infeasible conformations very rapidly, which further offsets the cost of searching the extra subchains of OmniMerge. (3) The search may be run in two stages, once at low resolution using a side-effect of OmniMerge to determine an optimal partitioning of the molecule into efficient subchains; then again at high resolution while making use of the precomputed subchains. (4) An A* function prioritizes each subchain based on estimated future search costs.  Subchains with sufficiently low priority can be omitted from the search, which improves efficiency. A common theme of these four ideas is to make good choices about how to break the large search problem into lower-dimensional subproblems. In addition, the search method uses heuristic local searches within the overall systematic framework, to maintain the systematic guarantee while providing the empirical efficiency of stochastic search. These novel algorithms were implemented, and the effectiveness of each innovation is demonstrated on a highly constrained peptide with 40 degrees of freedom.
</description>
<pubDate>Fri, 01 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30419</guid>
<dc:date>2004-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Software Upgrades for Distributed Systems (PhD thesis)</title>
<link>https://hdl.handle.net/1721.1/30418</link>
<description>Automatic Software Upgrades for Distributed Systems (PhD thesis)
Ajmani, Sameer
Upgrading the software of long-lived, highly-available distributed systems is difficult.  It is not possible to upgrade all the nodes in a system at once, since some nodes may be unavailable and halting the system for an upgrade is unacceptable.  Instead, upgrades may happen gradually, and there may be long periods of time when different nodes are running different software versions and need to communicate using incompatible protocols.  We present a methodology and infrastructure that address these challenges and make it possible to upgrade distributed systems automatically while limiting service disruption. Our methodology defines how to enable nodes to interoperate across versions, how to preserve the state of a system across upgrades, and how to schedule an upgrade so as to limit service disruption.  The approach is modular: defining an upgrade requires understanding only the new software and the version it replaces. The upgrade infrastructure is a generic platform for distributing and installing software while enabling nodes to interoperate across versions.  The infrastructure requires no access to the system source code and is transparent: node software is unaware that different versions even exist.  We have implemented a prototype of the infrastructure called Upstart that intercepts socket communication using a dynamically-linked C++ library.  Experiments show that Upstart has low overhead and works well for both local-area and Internet systems.
</description>
<pubDate>Thu, 06 Oct 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30418</guid>
<dc:date>2005-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>Selectivity of Local Field Potentials in Macaque Inferior Temporal Cortex</title>
<link>https://hdl.handle.net/1721.1/30417</link>
<description>Selectivity of Local Field Potentials in Macaque Inferior Temporal Cortex
Kreiman, Gabriel; Hung, Chou; Poggio, Tomaso; DiCarlo, James
While single neurons in inferior temporal (IT) cortex show differential responses to distinct complex stimuli, little is known about the responses of populations of neurons in IT.  We recorded single electrode data, including multi-unit activity (MUA) and local field potentials (LFP), from 618 sites in the inferior temporal cortex of macaque monkeys while the animals passively viewed 78 different pictures of complex stimuli. The LFPs were obtained by low-pass filtering the extracellular electrophysiological signal with a corner frequency of 300 Hz. As reported previously, we observed that spike counts from MUA showed selectivity for some of the pictures.  Strikingly, the LFP data, which are thought to constitute an average over large numbers of neurons, also showed significantly selective responses.  The LFP responses were less selective than the MUA responses, both in the proportion of selective sites and in the selectivity of each site. We observed little overlap between the selectivity of MUA and LFP recordings from the same electrode.  To assess the spatial organization of selective responses, we compared the selectivity of nearby sites recorded along the same penetration and sites recorded from different penetrations.  We observed that MUA selectivity was correlated on spatial scales up to 800 μm, while the LFP selectivity was correlated over a larger spatial extent, with significant correlations between sites separated by several mm.  Our data support the idea that there is some topographical arrangement to the organization of selectivity in inferior temporal cortex and that this organization may be relevant for the representation of object identity in IT.
</description>
<pubDate>Tue, 21 Sep 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30417</guid>
<dc:date>2004-09-21T00:00:00Z</dc:date>
</item>
<item>
<title>The Interval Programming Model for Multi-objective Decision Making</title>
<link>https://hdl.handle.net/1721.1/30416</link>
<description>The Interval Programming Model for Multi-objective Decision Making
Benjamin, Michael R.
The interval programming model (IvP) is a mathematical programming model for representing and solving multi-objective optimization problems.  The central characteristic of the model is the use of piecewise linearly defined objective functions and a solution method that searches through the combination space of pieces rather than through the actual decision space. The piecewise functions typically represent an approximation of some underlying function, but this concession is balanced on the positive side by relative freedom from function form assumptions as well as the assurance of global optimality. In this paper the model and solution algorithms are described, and the applicability of IvP to certain applications is discussed.
</description>
<pubDate>Mon, 27 Sep 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30416</guid>
<dc:date>2004-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>The Quorum Deployment Problem</title>
<link>https://hdl.handle.net/1721.1/30415</link>
<description>The Quorum Deployment Problem
Gilbert, Seth; Malewicz, Grzegorz
Quorum systems are commonly used to maintain the consistency of replicated data in a distributed system. Much research has been devoted to developing quorum systems with good theoretical properties, such as fault tolerance and high availability. However, even given a theoretically good quorum system, it is not obvious how to efficiently deploy such a system in a real network. This paper introduces a new combinatorial optimization problem, the Quorum Deployment Problem, and studies its complexity. We demonstrate that it is NP-hard to approximate the Quorum Deployment Problem within any factor of n^ε, where n is the number of nodes in the distributed network and ε &gt; 0. The problem is NP-hard in even the simplest possible distributed network: a one-dimensional line with metric cost. We begin to study algorithms for variants of the problem. Some variants can be solved optimally in polynomial time, and some NP-hard variants can be approximated to within a constant factor.
</description>
<pubDate>Fri, 29 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30415</guid>
<dc:date>2004-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Eclat: Automatic Generation and Classification of Test Inputs</title>
<link>https://hdl.handle.net/1721.1/30414</link>
<description>Eclat: Automatic Generation and Classification of Test Inputs
Pacheco, Carlos; Ernst, Michael D.
This paper describes a technique that helps a test engineer select, from a large set of randomly generated test inputs, a small subset likely to reveal faults in the software under test. The technique takes a program or software component, plus a set of normal executions -- say, from an existing test suite, or from observations of the software running properly. The technique works by extracting an operational model of the software's operation, and comparing each input's operational pattern of execution against the model. Test inputs whose operational pattern is suggestive of a fault are further reduced by selecting only one input per such pattern. The result is a small portion of the original inputs, deemed most likely to reveal faults. Thus, our technique can also be seen as an error-detection technique. We have implemented these ideas in the Eclat tool, designed for unit testing of Java classes. Eclat generates a large number of inputs and uses our technique to select only a few of them as fault-revealing. The inputs that it selects are an order of magnitude more likely to reveal faults than non-selected inputs.
</description>
<pubDate>Thu, 14 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30414</guid>
<dc:date>2004-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>The Architecture of MAITA: A Tool for Monitoring, Analysis, and Interpretation</title>
<link>https://hdl.handle.net/1721.1/30413</link>
<description>The Architecture of MAITA: A Tool for Monitoring, Analysis, and Interpretation
Doyle, Jon; Kohane, Isaac; Long, William; Szolovits, Peter
This report describes the aims, functions, and organization of the MAITA system for knowledge-based construction, adaptation, and control of networks of monitoring processes.
</description>
<pubDate>Tue, 18 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30413</guid>
<dc:date>2004-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Asynchronous Distributed Systems Using the IOA Toolkit</title>
<link>https://hdl.handle.net/1721.1/30412</link>
<description>Implementing Asynchronous Distributed Systems Using the IOA Toolkit
Georgiou, Chryssis; Mavrommatis, Panayiotis P.; Tauber, Joshua A.
This document is a report about the capabilities and performance of the IOA Toolkit, and in particular the tools that provide support for implementing and running distributed systems (checker, composer, code generator). The Toolkit compiles distributed systems specified in IOA into Java classes, which run on a network of workstations and communicate using the Message Passing Interface (MPI). In order to test the toolkit, several distributed algorithms were implemented, ranging from simple algorithms such as LCR leader election in a ring network to more complex algorithms such as the GHS algorithm for computing the minimum spanning tree in an arbitrary graph. All of our experiments completed successfully, and several runtime measurements were made.
</description>
<pubDate>Wed, 06 Oct 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30412</guid>
<dc:date>2004-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive identification of alternative events conserved in human and mouse</title>
<link>https://hdl.handle.net/1721.1/30411</link>
<description>Predictive identification of alternative events conserved in human and mouse
Yeo, Gene; Van Nostrand, Eric; Holste, Dirk; Poggio, Tomaso; Burge, Christopher
Alternative pre-messenger RNA splicing affects a majority of human genes and plays important roles in development and disease.  Alternative splicing (AS) events conserved since the divergence of human and mouse are likely of primary biological importance, but relatively few such events are known.  Here we describe sequence features that distinguish exons subject to evolutionarily conserved AS, which we call 'alternative-conserved exons' (ACEs), from other orthologous human/mouse exons, and integrate these features into an exon classification algorithm, ACEScan.  Genome-wide analysis of annotated orthologous human-mouse exon pairs identified ~2,000 predicted ACEs.  Alternative splicing was verified in both human and mouse tissues using an RT-PCR-sequencing protocol for 21 of 30 (70%) predicted ACEs tested, supporting the validity of a majority of ACEScan predictions.  By contrast, AS was observed in mouse tissues for only 2 of 15 (13%) tested exons which had EST or cDNA evidence of AS in human but were not predicted ACEs, and was never observed for eleven negative control exons in human or mouse tissues.  Predicted ACEs were much more likely to preserve reading frame, and less likely to disrupt protein domains, than other AS events, and were enriched in genes expressed in the brain and in genes involved in transcriptional regulation, RNA processing and development.  Our results also imply that the vast majority of AS events represented in the human EST databases are not conserved in mouse, and therefore may represent aberrant, disease- or allele-specific, or highly lineage-restricted splicing events.
</description>
<pubDate>Thu, 30 Sep 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30411</guid>
<dc:date>2004-09-30T00:00:00Z</dc:date>
</item>
<item>
<title>A Reliable Broadcast Scheme for Sensor Networks</title>
<link>https://hdl.handle.net/1721.1/30410</link>
<description>A Reliable Broadcast Scheme for Sensor Networks
Livadas, Carolos; Lynch, Nancy A.
In this short technical report, we present a simple yet effective reliable broadcast protocol for sensor networks. This protocol disseminates packets throughout the sensor network by flooding and recovers from losses resulting from collisions by having hosts retransmit packets whenever they notice that their neighbors have fallen behind. Such retransmissions serve to flood the appropriate packets throughout the regions of the sensor network that did not receive the given packets as a result of prior flooding attempts.
</description>
<pubDate>Mon, 11 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30410</guid>
<dc:date>2003-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>On The Boolean Algebra of Shape Analysis Constraints</title>
<link>https://hdl.handle.net/1721.1/30409</link>
<description>On The Boolean Algebra of Shape Analysis Constraints
Kuncak, Viktor; Rinard, Martin
Shape analysis is a promising technique for statically verifying and extracting properties of programs that manipulate complex data structures. We introduce a new characterization of constraints that arise in parametric shape analysis based on manipulation of three-valued structures as dataflow facts. We identify an interesting syntactic class of first-order logic formulas that captures the meaning of three-valued structures under concretization. This class is broader than previously introduced classes, allowing for greater flexibility in the formulation of shape analysis constraints in program annotations and internal analysis representations. Three-valued structures can be viewed as one possible normal form of the formulas in our class. Moreover, we characterize the meaning of three-valued structures under "tight concretization". We show that the seemingly minor change from concretization to tight concretization increases the expressive power of three-valued structures in such a way that the resulting constraints are closed under all boolean operations. We call the resulting constraints boolean shape analysis constraints. The main technical contribution of this paper is a natural syntactic characterization of boolean shape analysis constraints as arbitrary boolean combinations of first-order sentences of a certain form, and an algorithm for transforming such boolean combinations into the normal form that corresponds directly to three-valued structures. Our result holds in the presence of arbitrary shape analysis instrumentation predicates. The result enables the reduction (without any approximation) of the entailment and the equivalence of shape analysis constraints to the satisfiability of shape analysis constraints. When the satisfiability of the constraints is decidable, our result implies that the entailment and the equivalence of the constraints are also decidable, which enables the use of constraints in a compositional shape analysis with predictable behavior.
</description>
<pubDate>Fri, 22 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30409</guid>
<dc:date>2003-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>Permutation Tests for Classification</title>
<link>https://hdl.handle.net/1721.1/30408</link>
<description>Permutation Tests for Classification
Mukherjee, Sayan; Golland, Polina; Panchenko, Dmitry
We introduce and explore an approach to estimating statistical significance of classification accuracy, which is particularly useful in scientific applications of machine learning where high dimensionality of the data and the small number of training examples render most standard convergence bounds too loose to yield a meaningful guarantee of the generalization ability of the classifier. Instead, we estimate statistical significance of the observed classification accuracy, or the likelihood of observing such accuracy by chance due to spurious correlations of the high-dimensional data patterns with the class labels in the given training set. We adopt permutation testing, a non-parametric technique previously developed in classical statistics for hypothesis testing in the generative setting (i.e., comparing two probability distributions). We demonstrate the method on real examples from neuroimaging studies and DNA microarray analysis and suggest a theoretical analysis of the procedure that relates the asymptotic behavior of the test to the existing convergence bounds.
</description>
<pubDate>Thu, 28 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30408</guid>
<dc:date>2003-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>The Theory of Timed I/O Automata</title>
<link>https://hdl.handle.net/1721.1/30407</link>
<description>The Theory of Timed I/O Automata
Kaynar, Dilsun K.; Lynch, Nancy; Segala, Roberto; Vaandrager, Frits
This monograph presents the Timed Input/Output Automaton (TIOA) modeling framework, a basic mathematical framework to support description and analysis of timed systems.
</description>
<pubDate>Wed, 02 Mar 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30407</guid>
<dc:date>2005-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>Fluorescence Assay for Polymerase Arrival Rates</title>
<link>https://hdl.handle.net/1721.1/30406</link>
<description>Fluorescence Assay for Polymerase Arrival Rates
Che, Austin
Engineering complex synthetic biological systems will require modular design, assembly, and characterization strategies. The RNA polymerase arrival rate (PAR) is defined to be the rate at which RNA polymerases arrive at a specified location on the DNA. Designing and characterizing biological modules in terms of RNA polymerase arrival rates provides many advantages in the construction and modeling of biological systems. PARMESAN is an in vitro method for measuring polymerase arrival rates using pyrrolo-dC, a fluorescent DNA base that can substitute for cytosine. Pyrrolo-dC shows a detectable fluorescence difference when in single-stranded versus double-stranded DNA. During transcription, RNA polymerase separates the two strands of DNA, leading to a change in the fluorescence of pyrrolo-dC. By incorporating pyrrolo-dC at specific locations in the DNA, fluorescence changes can be taken as a direct measurement of the polymerase arrival rate.
</description>
<pubDate>Sun, 31 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30406</guid>
<dc:date>2003-08-31T00:00:00Z</dc:date>
</item>
<item>
<title>Marriage, Honesty, and Stability</title>
<link>https://hdl.handle.net/1721.1/30405</link>
<description>Marriage, Honesty, and Stability
Immorlica, Nicole; Mahdian, Mohammad
Many centralized two-sided markets form a matching between participants by running a stable marriage algorithm. It is a well-known fact that no matching mechanism based on a stable marriage algorithm can guarantee truthfulness as a dominant strategy for participants. However, as we show in this paper, in a probabilistic setting where the preference lists of one side of the market are composed of only a constant (independent of the size of the market) number of entries, each drawn from an arbitrary distribution, the number of participants that have more than one stable partner is vanishingly small. This proves (and generalizes) a conjecture of Roth and Peranson [23]. As a corollary of this result, we show that, with high probability, the truthful strategy is the best response for a given player when the other players are truthful. We also analyze equilibria of the deferred acceptance stable marriage game. We show that the game with complete information has an equilibrium in which a (1-o(1)) fraction of the strategies are truthful in expectation. In the more realistic setting of a game of incomplete information, we show that the set of truthful strategies forms a (1+o(1))-approximate Bayesian-Nash equilibrium. Our results have implications in many practical settings and were inspired by the work of Roth and Peranson [23] on the National Residency Matching Program.
</description>
<pubDate>Mon, 28 Jul 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30405</guid>
<dc:date>2003-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Near-Optimal Distributed Failure Circumscription</title>
<link>https://hdl.handle.net/1721.1/30404</link>
<description>Near-Optimal Distributed Failure Circumscription
Beal, Jacob
Small failures should disrupt only a small part of a network. One way to ensure this is by marking the surrounding area as untrustworthy --- circumscribing the failure. This can be done with a distributed algorithm using hierarchical clustering and neighbor relations, and the resulting circumscription is near-optimal for convex failures.
</description>
<pubDate>Mon, 11 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30404</guid>
<dc:date>2003-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>The Theory of Timed I/O Automata</title>
<link>https://hdl.handle.net/1721.1/30403</link>
<description>The Theory of Timed I/O Automata
Kaynar, Dilsun K.; Lynch, Nancy; Segala, Roberto; Vaandrager, Frits
Revised version -- November 23, 2004. This paper presents the Timed Input/Output Automaton (TIOA) modeling framework, a basic mathematical framework to support description and analysis of timed systems.
</description>
<pubDate>Wed, 27 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30403</guid>
<dc:date>2003-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Selecting, Refining, and Evaluating Properties for Program Analysis</title>
<link>https://hdl.handle.net/1721.1/30402</link>
<description>Selecting, Refining, and Evaluating Properties for Program Analysis
Dodoo, Nii; Lin, Lee; Ernst, Michael D.
This research proposes and evaluates techniques for selecting predicates for conditional program properties -- that is, implications such as p ⇒ q whose consequent must be true whenever the predicate is true. Conditional properties are prevalent in recursive data structures, which behave differently in their base and recursive cases, in programs that contain branches, in programs that fail only on some inputs, and in many other situations. The experimental context of the research is dynamic detection of likely program invariants, but the ideas are applicable to other domains. Trying every possible predicate for conditional properties is computationally infeasible and yields too many undesirable properties. This paper compares four policies for selecting predicates: procedure return analysis, code conditionals, clustering, and random selection. It also shows how to improve predicates via iterated analysis. An experimental evaluation demonstrates that the techniques improve performance on two tasks: statically proving the absence of run-time errors with a theorem-prover, and separating faulty from correct executions of erroneous programs.
</description>
<pubDate>Mon, 21 Jul 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30402</guid>
<dc:date>2003-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Learning object segmentation from video data</title>
<link>https://hdl.handle.net/1721.1/30401</link>
<description>Learning object segmentation from video data
Ross, Michael G.; Kaelbling, Leslie Pack
This memo describes the initial results of a project to create a self-supervised algorithm for learning object segmentation from video data. Developmental psychology and computational experience have demonstrated that the motion segmentation of objects is a simpler, more primitive process than the detection of object boundaries by static image cues. Therefore, motion information provides a plausible supervision signal for learning the static boundary detection task and for evaluating performance on a test set. A video camera and previously developed background subtraction algorithms can automatically produce a large database of motion-segmented images for minimal cost. The purpose of this work is to use the information in such a database to learn how to detect the object boundaries in novel images using static information, such as color, texture, and shape. This work was funded in part by the Office of Naval Research contract #N00014-00-1-0298, in part by the Singapore-MIT Alliance agreement of 11/6/98, and in part by a National Science Foundation Graduate Student Fellowship.
</description>
<pubDate>Mon, 08 Sep 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30401</guid>
<dc:date>2003-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Representation and Detection of Shapes in Images</title>
<link>https://hdl.handle.net/1721.1/30400</link>
<description>Representation and Detection of Shapes in Images
Felzenszwalb, Pedro F.
We present a set of techniques that can be used to represent and detect shapes in images.  Our methods revolve around a particular shape representation based on the description of objects using triangulated polygons.  This representation is similar to the medial axis transform and has important properties from a computational perspective.  The first problem we consider is the detection of non-rigid objects in images using deformable models.  We present an efficient algorithm to solve this problem in a wide range of situations, and show examples in both natural and medical images.  We also consider the problem of learning an accurate non-rigid shape model for a class of objects from examples.  We show how to learn good models while constraining them to the form required by the detection algorithm.  Finally, we consider the problem of low-level image segmentation and grouping.  We describe a stochastic grammar that generates arbitrary triangulated polygons while capturing Gestalt principles of shape regularity.  This grammar is used as a prior model over random shapes in a low-level algorithm that detects objects in images.
</description>
<pubDate>Fri, 08 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30400</guid>
<dc:date>2003-08-08T00:00:00Z</dc:date>
</item>
<item>
<title>Sharing visual features for multiclass and multiview object detection</title>
<link>https://hdl.handle.net/1721.1/30399</link>
<description>Sharing visual features for multiclass and multiview object detection
Torralba, Antonio; Murphy, Kevin P.; Freeman, William T.
We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. The features selected jointly are closer to edges and generic features typical of many natural structures, instead of specific object parts. Those generic features generalize better and considerably reduce the computational cost of an algorithm for multi-class object detection.
</description>
<pubDate>Wed, 14 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30399</guid>
<dc:date>2004-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Dissociated Dipoles: Image representation via non-local comparisons</title>
<link>https://hdl.handle.net/1721.1/30398</link>
<description>Dissociated Dipoles: Image representation via non-local comparisons
Balas, Benjamin J.; Sinha, Pawan
A fundamental question in visual neuroscience is how to represent image structure. The most common representational schemes rely on differential operators that compare adjacent image regions. While well-suited to encoding local relationships, such operators have significant drawbacks. Specifically, each filter's span is confounded with the size of its sub-fields, making it difficult to compare small regions across large distances. We find that such long-distance comparisons are more tolerant to common image transformations than purely local ones, suggesting they may provide a useful vocabulary for image encoding. We introduce the "Dissociated Dipole," or "Sticks" operator, for encoding non-local image relationships. This operator de-couples filter span from sub-field size, enabling parametric movement between edge- and region-based representation modes. We report on the perceptual plausibility of the operator, and the computational advantages of non-local encoding. Our results suggest that non-local encoding may be an effective scheme for representing image structure.
</description>
<pubDate>Wed, 13 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30398</guid>
<dc:date>2003-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Direction Estimation of Pedestrian from Images</title>
<link>https://hdl.handle.net/1721.1/30397</link>
<description>Direction Estimation of Pedestrian from Images
Shimizu, Hiroaki; Poggio, Tomaso
The capability of estimating the walking direction of people would be useful in many applications, such as those involving autonomous cars and robots. We introduce an approach for estimating the walking direction of people from images, based on learning the correct classification of a still image using SVMs. We find that the performance of the system can be improved by classifying each image of a walking sequence and combining the outputs of the classifier. Experiments were performed to evaluate our system and to estimate the trade-off between the number of images in a walking sequence and performance.
</description>
<pubDate>Wed, 27 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30397</guid>
<dc:date>2003-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Secure Program Execution Via Dynamic Information Flow Tracking</title>
<link>https://hdl.handle.net/1721.1/30396</link>
<description>Secure Program Execution Via Dynamic Information Flow Tracking
Suh, G. Edward; Lee, Jaewook; Zhang, David; Devadas, Srinivas
We present a simple architectural mechanism called dynamic information flow tracking that can significantly improve the security of computing systems with negligible performance overhead. Dynamic information flow tracking protects programs against malicious software attacks by identifying spurious information flows from untrusted I/O and restricting the usage of the spurious information. Every security attack to take control of a program needs to transfer the program's control to malevolent code. In our approach, the operating system identifies a set of input channels as spurious, and the processor tracks all information flows from those inputs. A broad range of attacks are effectively defeated by checking the use of the spurious values as instructions and pointers. Our protection is transparent to users and application programmers; the executables can be used without any modification. Also, our scheme only incurs, on average, a memory overhead of 1.4% and a performance overhead of 1.1%.
</description>
<pubDate>Mon, 21 Jul 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/30396</guid>
<dc:date>2003-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>New Algorithms for Load Balancing in Peer-to-Peer Systems</title>
<link>https://hdl.handle.net/1721.1/29831</link>
<description>New Algorithms for Load Balancing in Peer-to-Peer Systems
Karger, David; Ruhl, Matthias
Load balancing is a critical issue for the efficient operation of peer-to-peer networks. We give new protocols for several scenarios, whose provable performance guarantees are within a constant factor of optimal.

First, we give an improved version of consistent hashing, a scheme used for item to node assignments in the Chord system. In its original form, it required every network node to operate O(log n) virtual nodes to achieve a balanced load, causing a corresponding increase in space and bandwidth usage. Our protocol eliminates the necessity of virtual nodes while maintaining a balanced load. Improving on related protocols, our scheme allows for the deletion of nodes and admits a simpler analysis, since the assignments do not depend on the history of the network.

We then analyze a simple protocol for load sharing by movements of data from higher loaded to lower loaded nodes. This protocol can be extended to preserve the ordering of data items. As an application, we use the last protocol to give an efficient implementation of a distributed data structure for range searches on ordered data.
</description>
<pubDate>Wed, 16 Jul 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/29831</guid>
<dc:date>2003-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Compact Representations for Fast Nonrigid Registration of Medical Images</title>
<link>https://hdl.handle.net/1721.1/29830</link>
<description>Compact Representations for Fast Nonrigid Registration of Medical Images
Timoner, Samson
We develop efficient techniques for the non-rigid registration of medical
images by using representations that adapt to the anatomy found in such
images.

 Images of anatomical structures typically have uniform intensity interiors
and smooth boundaries. We create methods to represent such regions
compactly using tetrahedra.  Unlike voxel-based representations, tetrahedra
can accurately describe the expected smooth surfaces of medical
objects. Furthermore, the interior of such objects can be represented using
a small number of tetrahedra. Rather than describing a medical object using
tens of thousands of voxels, our representations generally contain only a few
thousand elements.

Tetrahedra facilitate the creation of efficient non-rigid registration
algorithms based on finite element methods (FEM).  We create a fast,
FEM-based method to non-rigidly register segmented anatomical structures
from two subjects. Using our compact tetrahedral representations, this
method generally requires less than one minute of processing time on a desktop
PC.

We also create a novel method for the non-rigid registration of gray scale
images. To facilitate a fast method, we create a tetrahedral representation
of a displacement field that automatically adapts to both the anatomy in an
image and to the displacement field.  The resulting algorithm has a
computational cost that is dominated by the number of nodes in the mesh
(about 10,000), rather than the number of voxels in an image (nearly
10,000,000). For many non-rigid registration problems, we can find a
transformation from one image to another in five minutes. This speed is
important as it allows use of the algorithm during surgery.

We apply our algorithms to find correlations between the shape of
anatomical structures and the presence of schizophrenia. We show that a
study based on our representations outperforms studies based on other
representations. We also use the results of our non-rigid registration
algorithm as the basis of a segmentation algorithm. That algorithm also
outperforms other methods in our tests, producing smoother segmentations
and more accurately reproducing manual segmentations.
</description>
<pubDate>Fri, 04 Jul 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/29830</guid>
<dc:date>2003-07-04T00:00:00Z</dc:date>
</item>
<item>
<title>On the Max-Flow Min-Cut Ratio for Directed Multicommodity Flows</title>
<link>https://hdl.handle.net/1721.1/29829</link>
<description>On the Max-Flow Min-Cut Ratio for Directed Multicommodity Flows
Hajiaghayi, MohammadTaghi; Leighton, F. Thomson
We give a pure combinatorial problem whose solution determines the max-flow
min-cut ratio for directed multicommodity flows. In addition, this
combinatorial problem has applications in improving the approximation factor of the greedy algorithm for the maximum edge-disjoint path problem. More
precisely, our upper bound improves the approximation factor for this
problem to O(n^{3/4}). Finally, we demonstrate how, even for very simple
graphs, the aforementioned ratio might be very large.
</description>
<pubDate>Sat, 05 Jul 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/29829</guid>
<dc:date>2003-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>Trajectory and Force Control of a Direct Drive Arm</title>
<link>https://hdl.handle.net/1721.1/7344</link>
<description>Trajectory and Force Control of a Direct Drive Arm
An, Chae Hun
Using the MIT Serial Link Direct Drive Arm as the main experimental device, various issues in trajectory and force control of manipulators were studied in this thesis. Since accurate modeling is important for any controller, issues of estimating the dynamic model of a manipulator and its load were addressed first. Practical and effective algorithms were developed from the Newton-Euler equations to estimate the inertial parameters of manipulator rigid-body loads and links. Load estimation was implemented both on a PUMA 600 robot and on the MIT Serial Link Direct Drive Arm. With the link estimation algorithm, the inertial parameters of the direct drive arm were obtained. For both load and link estimation results, the estimated parameters are good models of the actual system for control purposes, since torques and forces can be predicted accurately from these estimated parameters. The estimated model of the direct drive arm was then used to evaluate trajectory-following performance by feedforward and computed torque control algorithms. The experimental evaluations showed that dynamic compensation can greatly improve trajectory-following accuracy. Various stability issues of force control were studied next. It was determined that there are two types of instability in force control. Dynamic instability, present in all of the previous force control algorithms discussed in this thesis, is caused by the interaction of a manipulator with a stiff environment. Kinematic instability is present only in the hybrid control algorithm of Raibert and Craig, and is caused by the interaction of the inertia matrix with the Jacobian inverse coordinate transformation in the feedback path. Several methods were suggested and demonstrated experimentally to solve these stability problems.
The results of the stability analyses were then incorporated in implementing a stable force/position controller on the direct drive arm by the modified resolved acceleration method, using both joint torque and wrist force sensor feedback.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7344</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interaction and Intelligent Behavior</title>
<link>https://hdl.handle.net/1721.1/7343</link>
<description>Interaction and Intelligent Behavior
Mataric, Maja J.
We introduce basic behaviors as primitives for control and learning in situated, embodied agents interacting in complex domains. We propose methods for selecting, formally specifying, algorithmically implementing, empirically evaluating, and combining behaviors from a basic set. We also introduce a general methodology for automatically constructing higher-level behaviors by learning to select from this set. Based on a formulation of reinforcement learning using conditions, behaviors, and shaped reinforcement, our approach makes behavior selection learnable in noisy, uncertain environments with stochastic dynamics. All described ideas are validated with groups of up to 20 mobile robots performing safe-wandering, following, aggregation, dispersion, homing, flocking, foraging, and learning to forage.
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7343</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric Aspects of Visual Object Recognition</title>
<link>https://hdl.handle.net/1721.1/7342</link>
<description>Geometric Aspects of Visual Object Recognition
Breuel, Thomas M.
This thesis presents three important results in visual object recognition based on shape. (1) A new algorithm (RAST: Recognition by Adaptive Subdivisions of Transformation space) is presented that has lower average-case complexity than any known recognition algorithm. (2) It is shown, both theoretically and empirically, that representing 3D objects as collections of 2D views (the "View-Based Approximation") is feasible and affects the reliability of 3D recognition systems no more than other commonly made approximations. (3) The problem of recognition in cluttered scenes is considered from a Bayesian perspective; the commonly used "bounded-error measure" is demonstrated to correspond to an independence assumption. It is shown that by modeling the statistical properties of real scenes better, objects can be recognized more reliably.
</description>
<pubDate>Fri, 01 May 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7342</guid>
<dc:date>1992-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity of Human Language Comprehension</title>
<link>https://hdl.handle.net/1721.1/7341</link>
<description>Complexity of Human Language Comprehension
Ristad, Eric Sven
The goal of this article is to reveal the computational structure of modern principles-and-parameters (Chomskian) linguistic theories: what computational problems do these informal theories pose, and what is the underlying structure of those computations? To do this, I analyze the computational complexity of human language comprehension: what linguistic representation is assigned to a given sound? This problem is factored into smaller, interrelated (but independently statable) problems. For example, in order to understand a given sound, the listener must assign a phonetic form to the sound; determine the morphemes that compose the words in the sound; and calculate the linguistic antecedent of every pronoun in the utterance. I prove that these and other subproblems are all NP-hard, and that language comprehension is itself PSPACE-hard.
</description>
<pubDate>Thu, 01 Dec 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7341</guid>
<dc:date>1988-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Perception of Subjective Surfaces</title>
<link>https://hdl.handle.net/1721.1/7340</link>
<description>The Perception of Subjective Surfaces
Brady, Michael; Grimson, W. Eric L.
It is proposed that subjective contours are an artifact of the perception of natural three-dimensional surfaces. A recent theory of surface interpolation implies that "subjective surfaces" are constructed in the visual system by interpolation between three-dimensional values arising from interpretation of a variety of surface cues. We show that subjective surfaces can take any form, including singly and doubly curved surfaces, as well as the commonly discussed fronto-parallel planes. In addition, it is necessary in the context of computational vision to make explicit the discontinuities, both in depth and in surface orientation, in the surfaces constructed by interpolation. Examples of subjective surfaces and subjective contours are demonstrated. The role played by figure completion and enhanced brightness contrast in the determination of subjective surfaces is discussed. All considerations of surface perception apply equally to subjective surfaces.
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7340</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrical Design: A Problem for Artificial Intelligence Research</title>
<link>https://hdl.handle.net/1721.1/7339</link>
<description>Electrical Design: A Problem for Artificial Intelligence Research
Sussman, Gerald Jay
This report outlines the problem of intelligent failure recovery in a problem-solver for electrical design. We want our problem solver to learn as much as it can from its mistakes. Thus we cast the engineering design process in terms of Problem Solving by Debugging Almost-Right Plans, a paradigm for automatic problem solving based on the belief that creation and removal of "bugs" is an unavoidable part of the process of solving a complex problem. The process of localization and removal of bugs called for by the PSBDARP theory requires an approach to engineering analysis in which every result has a justification which describes the exact set of assumptions it depends upon. We have developed a program based on Analysis by Propagation of Constraints which can explain the basis of its deductions. In addition to being useful to a PSBDARP designer, these justifications are used in Dependency-Directed Backtracking to limit the combinatorial search in the analysis routines. Although the research we will describe is explicitly about electrical circuits, we believe that similar principles and methods are employed by other kinds of engineers, including computer programmers.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7339</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>11SIM Reference Manual</title>
<link>https://hdl.handle.net/1721.1/7338</link>
<description>11SIM Reference Manual
Eastlake, Donald
A program that simulates a Digital Equipment  Corporation PDP-11 computer and many of its  peripherals on the AI Laboratory Time Sharing  System (ITS) is described from a user's  reference point of view. This simulator has a  built in DDT-like command level which  provides the user with the normal range of  DDT facilities but also with several special  debugging features built into the simulator.  The DDT command language was  implemented by Richard M. Stallman while  the simulator was written by the author of this  memo.
</description>
<pubDate>Wed, 01 Dec 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7338</guid>
<dc:date>1971-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty Propagation in Model-Based Recognition</title>
<link>https://hdl.handle.net/1721.1/7337</link>
<description>Uncertainty Propagation in Model-Based Recognition
Jacobs, D.W.; Alter, T.D.
Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.
</description>
<pubDate>Wed, 01 Feb 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7337</guid>
<dc:date>1995-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting the Parallelism Exposed by Partial Evaluation</title>
<link>https://hdl.handle.net/1721.1/7336</link>
<description>Exploiting the Parallelism Exposed by Partial Evaluation
Surati, Rajeev
We describe the key role played by partial evaluation in the Supercomputing Toolkit, a parallel computing system for scientific applications that effectively exploits the vast amount of parallelism exposed by partial evaluation. The Supercomputing Toolkit parallel processor and its associated partial evaluation-based compiler have been used extensively by scientists at MIT, and have made possible recent results in astrophysics showing that the motion of the planets in our solar system is chaotically unstable.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7336</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causes and Effects of Chaos</title>
<link>https://hdl.handle.net/1721.1/7335</link>
<description>Causes and Effects of Chaos
Bradley, Elizabeth
Most of the recent literature on chaos and  nonlinear dynamics is written either for  popular science magazine readers or for  advanced mathematicians. This paper gives a  broad introduction to this interesting and  rapidly growing field at a level that is between  the two. The graphical and analytical tools  used in the literature are explained and  demonstrated, the rudiments of the current  theory are outlined and that theory is  discussed in the context of several examples:  an electronic circuit, a chemical reaction and a  system of satellites in the solar system.
</description>
<pubDate>Sat, 01 Dec 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7335</guid>
<dc:date>1990-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convergence Rates of Approximation by Translates</title>
<link>https://hdl.handle.net/1721.1/7316</link>
<description>Convergence Rates of Approximation by Translates
Girosi, Federico; Anzellotti, Gabriele
In this paper we consider the problem of approximating a function belonging to some function space Φ by a linear combination of n translates of a given function G. Using a lemma by Jones (1990) and Barron (1991), we show that it is possible to define function spaces and functions G for which the rate of convergence to zero of the error is O(1/n) in any number of dimensions. The apparent avoidance of the "curse of dimensionality" is due to the fact that these function spaces are more and more constrained as the dimension increases. Examples include spaces of the Sobolev type, in which the number of weak derivatives is required to be larger than the number of dimensions. We give results both for approximation in the L2 norm and in the L∞ norm. The interesting feature of these results is that, thanks to the constructive nature of Jones' and Barron's lemma, an iterative procedure is defined that can achieve this rate.
</description>
<pubDate>Sun, 01 Mar 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7316</guid>
<dc:date>1992-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Object Detection in Images by Components</title>
<link>https://hdl.handle.net/1721.1/7293</link>
<description>Object Detection in Images by Components
Mohan, Anuj
In this paper we present a component-based person detection system that is capable of detecting frontal, rear and near-side views of people, as well as partially occluded persons, in cluttered scenes. The framework that is described here for people is easily applied to other objects as well. The motivation for developing a component-based approach is twofold: first, to enhance the performance of person detection systems on frontal and rear views of people, and second, to develop a framework that directly addresses the problem of detecting people who are partially occluded or whose body parts blend in with the background. The data classification is handled by several support vector machine classifiers arranged in two layers. This architecture is known as Adaptive Combination of Classifiers (ACC). The system performs very well and is capable of detecting people even when all components of a person are not found. The performance of the system is significantly better than a full-body person detector designed along similar lines. This suggests that the improved performance is due to the component-based approach and the ACC data classification structure.
</description>
<pubDate>Wed, 11 Aug 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7293</guid>
<dc:date>1999-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Pose Determination of a Grasped Object Using Limited Sensing</title>
<link>https://hdl.handle.net/1721.1/7292</link>
<description>Pose Determination of a Grasped Object Using Limited Sensing
Siegel, David M.
This report explores methods for determining  the pose of a grasped object using only  limited sensor information. The problem of  pose determination is to find the position of  an object relative to the hand. The information  is useful when grasped objects are being  manipulated. The problem is hard because of  the large space of grasp configurations and  the large amount of uncertainty inherent in  dexterous hand control. By studying limited  sensing approaches, the problem's inherent  constraints can be better understood. This  understanding helps to show how additional  sensor data can be used to make recognition  methods more effective and robust.
</description>
<pubDate>Wed, 01 May 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7292</guid>
<dc:date>1991-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on Support Vector Machines Degeneracy</title>
<link>https://hdl.handle.net/1721.1/7291</link>
<description>A Note on Support Vector Machines Degeneracy
Rifkin, Ryan; Pontil, Massimiliano; Verri, Alessandro
When training Support Vector Machines (SVMs) over non-separable data sets, one sets the threshold $b$ using any dual cost coefficient that is strictly between the bounds of $0$ and $C$. We show that there exist SVM training problems with dual optimal solutions with all coefficients at bounds, but that all such problems are degenerate in the sense that the "optimal separating hyperplane" is given by ${\bf w} = {\bf 0}$, and the resulting (degenerate) SVM will classify all future points identically (to the class that supplies more training data). We also derive necessary and sufficient conditions on the input data for this to occur. Finally, we show that an SVM training problem can always be made degenerate by the addition of a single data point belonging to a certain unbounded polyhedron, which we characterize in terms of its extreme points and rays.
</description>
<pubDate>Wed, 11 Aug 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7291</guid>
<dc:date>1999-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Support Vector Machines: Training and Applications</title>
<link>https://hdl.handle.net/1721.1/7290</link>
<description>Support Vector Machines: Training and Applications
Osuna, Edgar; Freund, Robert; Girosi, Federico
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&amp;T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints, in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory, and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems, together with the evaluation of optimality conditions that establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained applying SVMs to the problem of detecting frontal human faces in real images.
</description>
<pubDate>Sat, 01 Mar 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7290</guid>
<dc:date>1997-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Equivalence Between Sparse Approximation and Support Vector Machines</title>
<link>https://hdl.handle.net/1721.1/7289</link>
<description>An Equivalence Between Sparse Approximation and Support Vector Machines
Girosi, Federico
In the first part of this paper we show a similarity between the principle of Structural Risk Minimization (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined in (Chen, Donoho and Saunders, 1995) and (Olshausen and Field, 1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation, which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&amp;T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem.
</description>
<pubDate>Thu, 01 May 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7289</guid>
<dc:date>1997-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Kineticist's Workbench: Combining Symbolic and Numerical Methods in the Simulation of Chemical Reaction Mechanisms</title>
<link>https://hdl.handle.net/1721.1/7288</link>
<description>The Kineticist's Workbench: Combining Symbolic and Numerical Methods in the Simulation of Chemical Reaction Mechanisms
Eisenberg, Michael A.
The Kineticist's Workbench is a program that simulates chemical reaction mechanisms by predicting, generating, and interpreting numerical data. Prior to simulation, it analyzes a given mechanism to predict that mechanism's behavior; it then simulates the mechanism numerically; and afterward, it interprets and summarizes the data it has generated. In performing these tasks, the Workbench uses a variety of techniques: graph-theoretic algorithms (for analyzing mechanisms), traditional numerical simulation methods, and algorithms that examine simulation results and reinterpret them in qualitative terms. The Workbench thus serves as a prototype for a new class of scientific computational tools---tools that provide symbiotic collaborations between qualitative and quantitative methods.
</description>
<pubDate>Wed, 01 May 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7288</guid>
<dc:date>1991-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Nonparametric Approach to Pricing and Hedging Derivative Securities via Learning Networks</title>
<link>https://hdl.handle.net/1721.1/7287</link>
<description>A Nonparametric Approach to Pricing and Hedging Derivative Securities via Learning Networks
Hutchinson, James M.; Lo, Andrew; Poggio, Tomaso
We propose a nonparametric method for  estimating derivative financial asset pricing  formulae using learning networks. To  demonstrate feasibility, we first simulate  Black-Scholes option prices and show that  learning networks can recover the Black-Scholes formula from a two-year training set  of daily options prices, and that the resulting  network formula can be used successfully to  both price and delta-hedge options out-of-sample. For comparison, we estimate  models using four popular methods: ordinary  least squares, radial basis functions,  multilayer perceptrons, and projection pursuit.  To illustrate practical relevance, we also apply  our approach to S&amp;P 500 futures options data  from 1987 to 1991.
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7287</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Architectural Models for Visibly Controllable Computing: The Relevance of Dynamic Object Oriented Architectures and Plan Based Computing Models</title>
<link>https://hdl.handle.net/1721.1/7286</link>
<description>New Architectural Models for Visibly Controllable Computing: The Relevance of Dynamic Object Oriented Architectures and Plan Based Computing Models
Shrobe, Howard; Laddaga, Robert
Traditionally, we've focused on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only one million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.
</description>
<pubDate>Mon, 09 Feb 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7286</guid>
<dc:date>2004-02-09T00:00:00Z</dc:date>
</item>
<item>
<title>Rotation Invariant Object Recognition from One Training Example</title>
<link>https://hdl.handle.net/1721.1/7285</link>
<description>Rotation Invariant Object Recognition from One Training Example
Yokono, Jerry Jun; Poggio, Tomaso
Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. Such a descriptor--based on a set of oriented Gaussian derivative filters-- is used in our recognition system. We report here an evaluation of several techniques for orientation estimation to achieve rotation invariance of the descriptor. We also describe feature selection based on a single training image. Virtual images are generated by rotating and rescaling the image and robust features are selected. The results confirm robust performance in cluttered scenes, in the presence of partial occlusions, and when the object is embedded in different backgrounds.
</description>
<pubDate>Tue, 27 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7285</guid>
<dc:date>2004-04-27T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of sets of oriented and non-oriented receptive fields as local descriptors</title>
<link>https://hdl.handle.net/1721.1/7284</link>
<description>Evaluation of sets of oriented and non-oriented receptive fields as local descriptors
Yokono, Jerry Jun; Poggio, Tomaso
Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. We propose a performance criterion for a local descriptor based on the tradeoff between selectivity and invariance. In this paper, we evaluate several local descriptors with respect to selectivity and invariance. The descriptors that we evaluated are Gaussian derivatives up to the third order, gray image patches, and Laplacian-based descriptors with either three scales or one scale filters. We compare selectivity and invariance to several affine changes such as rotation, scale, brightness, and viewpoint. Comparisons have been made keeping the dimensionality of the descriptors roughly constant. The overall results indicate a good performance by the descriptor based on a set of oriented Gaussian filters. It is interesting that oriented receptive fields similar to the Gaussian derivatives as well as receptive fields similar to the Laplacian are found in primate visual cortex.
</description>
<pubDate>Wed, 24 Mar 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7284</guid>
<dc:date>2004-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>Face processing in humans is compatible with a simple shape-based model of vision</title>
<link>https://hdl.handle.net/1721.1/7283</link>
<description>Face processing in humans is compatible with a simple shape-based model of vision
Riesenhuber; Jarudi; Gilad; Sinha
Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Tso et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ "featurally" are much easier to distinguish when inverted than those that differ "configurally" (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects' expectations, there is no difference between "featurally" and "configurally" transformed faces in terms of inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.
</description>
<pubDate>Fri, 05 Mar 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7283</guid>
<dc:date>2004-03-05T00:00:00Z</dc:date>
</item>
<item>
<title>Selecting Relevant Genes with a Spectral Approach</title>
<link>https://hdl.handle.net/1721.1/7282</link>
<description>Selecting Relevant Genes with a Spectral Approach
Wolf, Lior; Shashua, Amnon; Mukherjee, Sayan
Array technologies have made it possible to record simultaneously the expression patterns of thousands of genes. A fundamental problem in the analysis of gene expression data is the identification of highly relevant genes that either discriminate between phenotypic labels or are important with respect to the cellular process studied in the experiment: for example, cell cycle or heat shock in yeast experiments, chemical or genetic perturbations of mammalian cell lines, and genes involved in class discovery for human tumors. In this paper we focus on the task of unsupervised gene selection. The problem of selecting a small subset of genes is particularly challenging, as the datasets involved are typically characterized by a very small sample size, on the order of a few tens of tissue samples, and by a very large feature space, as the number of genes tends to be in the high thousands. We propose a model-independent approach which scores candidate gene selections using spectral properties of the candidate affinity matrix. The algorithm is very straightforward to implement, yet has a number of remarkable properties which guarantee consistent sparse selections. To illustrate the value of our approach we applied our algorithm to five different datasets. The first consists of time-course data from four well-studied hematopoietic cell lines (HL-60, Jurkat, NB4, and U937). The other four datasets include three well-studied treatment outcomes (large cell lymphoma, childhood medulloblastomas, breast tumors) and one unpublished dataset (lymph status). We compared our approach both with other unsupervised methods (SOM, PCA, GS) and with supervised methods (SNR, RMB, RFE). The results clearly show that our approach considerably outperforms all the other unsupervised approaches in our study, is competitive with supervised methods, and in some cases even outperforms supervised approaches.
</description>
<pubDate>Tue, 27 Jan 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7282</guid>
<dc:date>2004-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Bounds for Mixture Density Estimation</title>
<link>https://hdl.handle.net/1721.1/7281</link>
<description>Risk Bounds for Mixture Density Estimation
Rakhlin, Alexander; Panchenko, Dmitry; Mukherjee, Sayan
In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Procedure (MLE) and the greedy procedure described by Li and Barron. Approximation and estimation bounds are given for these methods. We extend and improve upon the estimation results of Li and Barron, and in particular prove an $O(\frac{1}{\sqrt{n}})$ bound on the estimation error which does not depend on the number of densities in the estimated combination.
</description>
<pubDate>Tue, 27 Jan 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7281</guid>
<dc:date>2004-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>On the difficulty of feature-based attentional modulations in visual object recognition: A modeling study.</title>
<link>https://hdl.handle.net/1721.1/7280</link>
<description>On the difficulty of feature-based attentional modulations in visual object recognition: A modeling study.
Schneider, Robert; Riesenhuber, Maximilian
Numerous psychophysical experiments have shown an important role for attentional modulations in vision. Behaviorally, allocation of attention can improve performance in object detection and recognition tasks. At the neural level, attention increases firing rates of neurons in visual cortex whose preferred stimulus is currently attended to. However, it is not yet known how these two phenomena are linked, i.e., how the visual system could be "tuned" in a task-dependent fashion to improve task performance. To answer this question, we performed simulations with the HMAX model of object recognition in cortex [45]. We modulated firing rates of model neurons in accordance with experimental  results about effects of feature-based attention on single neurons and measured changes in the model's performance in a variety of object recognition tasks. It turned out that recognition performance could only be improved under very limited circumstances and that attentional influences on the process of object  recognition per se tend to display a lack of specificity or raise false alarm rates. These observations lead us to postulate a new role for the observed attention-related neural response modulations.
</description>
<pubDate>Wed, 14 Jan 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7280</guid>
<dc:date>2004-01-14T00:00:00Z</dc:date>
</item>
<item>
<title>Component based recognition of objects in an office environment</title>
<link>https://hdl.handle.net/1721.1/7279</link>
<description>Component based recognition of objects in an office environment
Morgenstern, Christian; Heisele, Bernd
We present a component-based approach for recognizing objects under large pose changes. From a set of training images of a given object we extract a large number of components which are clustered based on the similarity of their image features and their locations within the object image. The cluster centers build an initial set of component templates from which we select a subset for the final recognizer. In experiments we evaluate different sizes and types of components and three standard techniques for component selection. The component classifiers are finally compared to global classifiers on a database of four objects.
</description>
<pubDate>Fri, 28 Nov 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7279</guid>
<dc:date>2003-11-28T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating shape representation in area V4 with HMAX: Orientation and Grating selectivities</title>
<link>https://hdl.handle.net/1721.1/7278</link>
<description>Investigating shape representation in area V4 with HMAX: Orientation and Grating selectivities
Kouh, Minjoon; Riesenhuber, Maximilian
The question of how shape is represented is of central interest to understanding visual processing in cortex. While the tuning properties of cells in the early part of the ventral visual stream, thought to be responsible for object recognition in the primate, are comparatively well understood, several different theories have been proposed regarding tuning in higher visual areas, such as V4. We used the model of object recognition in cortex presented by Riesenhuber and Poggio (1999), in which more complex shape tuning in higher layers is the result of combining afferent inputs tuned to simpler features, and compared the tuning properties of model units in intermediate layers to those of V4 neurons from the literature. In particular, we investigated the issue of shape representation in visual areas V1 and V4 using oriented bars and various types of gratings (polar, hyperbolic, and Cartesian), as used in several physiology experiments. Our computational model was able to reproduce several physiological findings, such as the broadening distribution of orientation bandwidths and the emergence of a bias toward non-Cartesian stimuli. Interestingly, the simulation results suggest that some V4 neurons receive input from afferents with spatially separated receptive fields, leading to experimentally testable predictions. However, the simulations also show that the stimulus set of Cartesian and non-Cartesian gratings is not sufficiently complex to probe shape tuning in higher areas, necessitating the use of more complex stimulus sets.
</description>
<pubDate>Mon, 08 Sep 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7278</guid>
<dc:date>2003-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Direction Estimation of Pedestrian from Images</title>
<link>https://hdl.handle.net/1721.1/7277</link>
<description>Direction Estimation of Pedestrian from Images
Shimizu, Hiroaki; Poggio, Tomaso
The capability of estimating the walking direction of people would be useful in many applications, such as those involving autonomous cars and robots. We introduce an approach for estimating the walking direction of people from images, based on learning the correct classification of a still image using SVMs. We find that the performance of the system can be improved by classifying each image of a walking sequence and combining the outputs of the classifier. Experiments were performed to evaluate our system and estimate the trade-off between the number of images in walking sequences and performance.
</description>
<pubDate>Wed, 27 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7277</guid>
<dc:date>2003-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Dissociated Dipoles: Image representation via non-local comparisons</title>
<link>https://hdl.handle.net/1721.1/7276</link>
<description>Dissociated Dipoles: Image representation via non-local comparisons
Balas, Benjamin J.; Sinha, Pawan
A fundamental question in visual neuroscience is how to represent image structure. The most common representational schemes rely on differential operators that compare adjacent image regions. While well-suited to encoding local relationships, such operators have significant drawbacks. Specifically, each filter's span is confounded with the size of its sub-fields, making it difficult to compare small regions across large distances. We find that such long-distance comparisons are more tolerant to common image transformations than purely local ones, suggesting they may provide a useful vocabulary for image encoding. We introduce the "Dissociated Dipole," or "Sticks" operator, for encoding non-local image relationships. This operator de-couples filter span from sub-field size, enabling parametric movement between edge- and region-based representation modes. We report on the perceptual plausibility of the operator and the computational advantages of non-local encoding. Our results suggest that non-local encoding may be an effective scheme for representing image structure.
</description>
<pubDate>Wed, 13 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7276</guid>
<dc:date>2003-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Perceptual Evaluation of Video-Realistic Speech</title>
<link>https://hdl.handle.net/1721.1/7275</link>
<description>Perceptual Evaluation of Video-Realistic Speech
Geiger, Gadi; Ezzat, Tony; Poggio, Tomaso
With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe here our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual-speech animation system, called Mary 101. Two types of experiments were performed: a) distinguishing visually between real and synthetic image-sequences of the same utterances ("Turing tests") and b) gauging visual speech recognition by comparing lip-reading performance on the real and synthetic image-sequences of the same utterances ("Intelligibility tests"). Subjects who were presented randomly with either real or synthetic image-sequences could not tell the synthetic from the real sequences above chance level. The same subjects, when asked to lip-read the utterances from the same image-sequences, recognized speech from real image-sequences significantly better than from synthetic ones. However, performance for both real and synthetic sequences was at levels suggested in the literature on lip-reading. We conclude from the two experiments that the animation of Mary 101 is adequate for providing a percept of a talking head. However, additional effort is required to improve the animation for lip-reading purposes such as rehabilitation and language learning. In addition, these two tasks could be considered as explicit and implicit perceptual discrimination tasks. In the explicit task (a), each stimulus is classified directly as a synthetic or real image-sequence by detecting a possible difference between the synthetic and the real image-sequences. The implicit perceptual discrimination task (b) consists of a comparison between visual recognition of speech in real and synthetic image-sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method for discriminating between synthetic and real image-sequences than explicit perceptual discrimination.
</description>
<pubDate>Fri, 28 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7275</guid>
<dc:date>2003-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Relative Contributions of Internal and External Features to Face Recognition</title>
<link>https://hdl.handle.net/1721.1/7274</link>
<description>Relative Contributions of Internal and External Features to Face Recognition
Jarudi, Izzat N.; Sinha, Pawan
The central challenge in face recognition lies in  understanding the role different facial features play in  our judgments of identity. Notable in this regard are the  relative contributions of the internal (eyes, nose and  mouth) and external (hair and jaw-line) features. Past  studies that have investigated this issue have typically  used high-resolution images or good-quality line  drawings as facial stimuli. The results obtained are  therefore most relevant for understanding the  identification of faces at close range. However, given  that real-world viewing conditions are rarely optimal, it  is also important to know how image degradations,  such as loss of resolution caused by large viewing  distances, influence our ability to use internal and  external features. Here, we report experiments  designed to address this issue. Our data characterize  how the relative contributions of internal and external  features change as a function of image resolution.  While we replicated results of previous studies that  have shown internal features of familiar faces to be  more useful for recognition than external features at  high resolution, we found that the two feature sets  reverse in importance as resolution decreases. These  results suggest that the visual system uses a highly  non-linear cue-fusion strategy in combining internal  and external features along the dimension of image  resolution and that the configural cues that relate the  two feature sets play an important role in judgments of  facial identity.
</description>
<pubDate>Sat, 01 Mar 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7274</guid>
<dc:date>2003-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exact Solution of the Nonlinear Dynamics of Recurrent Neural Mechanisms for Direction Selectivity</title>
<link>https://hdl.handle.net/1721.1/7273</link>
<description>Exact Solution of the Nonlinear Dynamics of Recurrent Neural Mechanisms for Direction Selectivity
Giese, M.A.; Xie, X.
Different theoretical models have tried to investigate the feasibility of recurrent neural mechanisms for achieving direction selectivity in the visual cortex. The mathematical analysis of such models has so far been restricted to the case of purely linear networks. We present an exact analytical solution of the nonlinear dynamics of a class of direction-selective recurrent neural models with threshold nonlinearity. Our mathematical analysis shows that such networks have form-stable, stimulus-locked traveling pulse solutions that are appropriate for modeling the responses of direction-selective cortical neurons. Our analysis also shows that the stability of such solutions can break down, giving rise to a different class of solutions ("lurching activity waves") that are characterized by a specific spatio-temporal periodicity. These solutions cannot arise in models for direction selectivity with purely linear spatio-temporal filtering.
</description>
<pubDate>Thu, 01 Aug 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7273</guid>
<dc:date>2002-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biologically Plausible Neural Model for the Recognition of Biological Motion and Actions</title>
<link>https://hdl.handle.net/1721.1/7272</link>
<description>Biologically Plausible Neural Model for the Recognition of Biological Motion and Actions
Giese, Martin Alexander; Poggio, Tomaso
The visual recognition of complex movements  and actions is crucial for communication and  survival in many species. Remarkable  sensitivity and robustness of biological  motion perception have been demonstrated in  psychophysical experiments. In recent years,  neurons and cortical areas involved in action  recognition have been identified in  neurophysiological and imaging studies.  However, the detailed neural mechanisms  that underlie the recognition of such complex  movement patterns remain largely unknown.  This paper reviews the experimental results  and summarizes them in terms of a  biologically plausible neural model. The  model is based on the key assumption that  action recognition is based on learned  prototypical patterns and exploits information  from the ventral and the dorsal pathway. The  model makes specific predictions that  motivate new experiments.
</description>
<pubDate>Thu, 01 Aug 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7272</guid>
<dc:date>2002-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Stock Order Flows and Learning Market-Making from Data</title>
<link>https://hdl.handle.net/1721.1/7271</link>
<description>Modeling Stock Order Flows and Learning Market-Making from Data
Kim, Adlar J.; Shelton, Christian R.
Stock markets employ specialized traders, market-makers, whose role is to provide liquidity and volume to the market by constantly supplying both bids and offers. In this paper, we demonstrate a novel method for modeling the market as a dynamic system, together with a reinforcement learning algorithm that learns profitable market-making strategies when run on this model. We model the order flow, the sequence of buys and sells for a particular stock, as an Input-Output Hidden Markov Model fit to historical data. When combined with the dynamics of the order book, this creates a highly non-linear and difficult dynamic system. Our reinforcement learning algorithm, based on likelihood ratios, is run on this partially-observable environment. We demonstrate learning results for two separate real stocks.
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7271</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Categorization in IT and PFC: Model and Experiments</title>
<link>https://hdl.handle.net/1721.1/7270</link>
<description>Categorization in IT and PFC: Model and Experiments
Knoblich, Ulf; Freedman, David J.; Riesenhuber, Maximilian
In a recent experiment, Freedman et al.  recorded from inferotemporal (IT) and prefrontal cortices (PFC) of monkeys  performing a "cat/dog" categorization task (Freedman 2001 and  Freedman, Riesenhuber, Poggio, Miller 2001). In this paper we analyze the tuning properties of view-tuned  units in our HMAX model of object recognition in cortex (Riesenhuber  1999) using the same paradigm and stimuli  as in the experiment. We then compare the simulation results to the monkey  inferotemporal neuron population data. We find that view-tuned  model IT units that were trained without any explicit category  information can show category-related tuning as observed in the  experiment. This suggests that the tuning properties of experimental IT  neurons might primarily be shaped by bottom-up stimulus-space  statistics, with little influence of top-down task-specific  information. The population of experimental PFC neurons, on the other hand,  shows tuning properties that cannot be explained just by stimulus  tuning. These analyses are compatible with a model of object recognition  in cortex (Riesenhuber 2000)  in which a population of shape-tuned  neurons provides a general basis for neurons tuned to  different recognition tasks.
</description>
<pubDate>Thu, 18 Apr 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7270</guid>
<dc:date>2002-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Stimulus Simplification and Object Representation: A Modeling Study</title>
<link>https://hdl.handle.net/1721.1/7269</link>
<description>Stimulus Simplification and Object Representation: A Modeling Study
Knoblich, Ulf; Riesenhuber, Maximilian
Tsunoda et al. (2001) recently studied the  nature of object representation in monkey  inferotemporal cortex using a combination of  optical imaging and extracellular recordings.  In particular, they examined IT neuron  responses to complex natural objects and  "simplified" versions thereof. In that study, in  42% of the cases, optical imaging revealed a  decrease in the number of activation patches  in IT as stimuli were "simplified". However, in  58% of the cases, "simplification" of the  stimuli actually led to the appearance of  additional activation patches in IT. Based on  these results, the authors propose a scheme  in which an object is represented by  combinations of active and inactive columns  coding for individual features.  We examine the patterns of activation caused  by the same stimuli as used by Tsunoda et al.  in our model of object recognition in cortex  (Riesenhuber 99). We find that object-tuned  units can show a pattern of appearance and  disappearance of features identical to the  experiment. Thus, the data of Tsunoda et al.  appear to be in quantitative agreement with a  simple object-based representation in which  an object's identity is coded by its similarities  to reference objects. Moreover, the agreement  of simulations and experiment suggests that  the simplification procedure used by Tsunoda  (2001) is not necessarily an accurate method  to determine neuronal tuning.
</description>
<pubDate>Fri, 15 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7269</guid>
<dc:date>2002-03-15T00:00:00Z</dc:date>
</item>
<item>
<title>Bagging Regularizes</title>
<link>https://hdl.handle.net/1721.1/7268</link>
<description>Bagging Regularizes
Poggio, Tomaso; Rifkin, Ryan; Mukherjee, Sayan; Rakhlin, Alex
Intuitively, we expect that averaging --- or  bagging --- different regressors with low correlation should  smooth their behavior and be somewhat similar to regularization. In this  note we make this intuition precise. Using an almost classical  definition of stability, we prove that a certain form of averaging  provides generalization bounds with a rate of convergence of the  same order as Tikhonov regularization --- similar to fashionable RKHS-based learning algorithms.
</description>
<pubDate>Fri, 01 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7268</guid>
<dc:date>2002-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Global Depth Perception from Familiar Scene Structure</title>
<link>https://hdl.handle.net/1721.1/7267</link>
<description>Global Depth Perception from Familiar Scene Structure
Torralba, Antonio; Oliva, Aude
In the absence of cues for absolute depth measurement, such as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges, and junctions may provide a 3D model of the scene, but it will not inform us about the actual "size" of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, this is computationally complex due to the difficulty of the object recognition process. Here we propose a source of information for absolute depth estimation that does not rely on specific objects: we introduce a procedure for absolute depth estimation based on the recognition of the whole scene. The shape of the space of the scene and the structures present in the scene are strongly related to the scale of observation. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene, and therefore its absolute mean depth. We illustrate the usefulness of computing the mean depth of the scene with applications to scene recognition and object detection.
</description>
<pubDate>Sat, 01 Dec 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7267</guid>
<dc:date>2001-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Role of color in face recognition</title>
<link>https://hdl.handle.net/1721.1/7266</link>
<description>Role of color in face recognition
Yip, Andrew; Sinha, Pawan
One of the key challenges in face perception lies in determining the contribution of different cues to face identification. In this study, we focus on the role of color cues. Although color appears to be a salient attribute of faces, past research has suggested that it confers little recognition advantage for identifying people. Here we report experimental results suggesting that color cues do play a role in face recognition and their contribution becomes evident when shape cues are degraded. Under such conditions, recognition performance with color images is significantly better than that with grayscale images. Our experimental results also indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.
</description>
<pubDate>Thu, 13 Dec 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7266</guid>
<dc:date>2001-12-13T00:00:00Z</dc:date>
</item>
<item>
<title>Generalization over contrast and mirror reversal, but not figure-ground reversal, in an "edge-based" model</title>
<link>https://hdl.handle.net/1721.1/7265</link>
<description>Generalization over contrast and mirror reversal, but not figure-ground reversal, in an "edge-based" model
Riesenhuber, Maximilian
Baylis &amp; Driver (Nature Neuroscience, 2001) have recently presented data on the response of neurons in macaque inferotemporal cortex (IT) to various stimulus transformations. They report that neurons can generalize over contrast and mirror reversal, but not over figure-ground reversal. This finding is taken to demonstrate that "the selectivity of IT neurons is not determined simply by the distinctive contours in a display, contrary to simple edge-based models of shape recognition", citing our recently presented model of object recognition in cortex (Riesenhuber &amp; Poggio, Nature Neuroscience, 1999). In this memo, I show that the main effects of the experiment can be obtained by performing the appropriate simulations in our simple feedforward model. This suggests that, for IT cell tuning, the contributions of the explicit edge-assignment processes postulated by Baylis &amp; Driver (2001) might be smaller than expected.
</description>
<pubDate>Mon, 10 Dec 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7265</guid>
<dc:date>2001-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-Based Approach to Estimation of Morphable Model Parameters</title>
<link>https://hdl.handle.net/1721.1/7264</link>
<description>Learning-Based Approach to Estimation of Morphable Model Parameters
Kumar, Vinay; Poggio, Tomaso
We describe the key role played by partial evaluation in the Supercomputing Toolkit, a parallel computing system for scientific applications that effectively exploits the vast amount of parallelism exposed by partial evaluation. The Supercomputing Toolkit parallel processor and its associated partial evaluation-based compiler have been used extensively by scientists at MIT, and have made possible recent results in astrophysics showing that the motion of the planets in our solar system is chaotically unstable.
</description>
<pubDate>Fri, 01 Sep 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7264</guid>
<dc:date>2000-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Speech Synthesis by Morphing Visemes</title>
<link>https://hdl.handle.net/1721.1/7263</link>
<description>Visual Speech Synthesis by Morphing Visemes
Ezzat, Tony; Poggio, Tomaso
We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, which are a small set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject which is specifically designed to elicit one instantiation of each viseme. Using optical flow methods, correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images may be generated. A complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer is exploited to determine which viseme transitions to use, and the rate at which the morphing process should occur. In this manner, we are able to synchronize the visual speech stream with the audio speech stream, and hence give the impression of a photorealistic talking face.
</description>
<pubDate>Sat, 01 May 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7263</guid>
<dc:date>1999-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the V(subscript gamma) Dimension for Regression in Reproducing Kernel Hilbert Spaces</title>
<link>https://hdl.handle.net/1721.1/7262</link>
<description>On the V(subscript gamma) Dimension for Regression in Reproducing Kernel Hilbert Spaces
Evgeniou, Theodoros; Pontil, Massimiliano
This paper presents a computation of the $V_\gamma$ dimension for regression in bounded subspaces of Reproducing Kernel Hilbert Spaces (RKHS) for the Support Vector Machine (SVM) regression $\epsilon$-insensitive loss function, and general $L_p$ loss functions. Finiteness of the $V_\gamma$ dimension is shown, which also proves uniform convergence in probability for regression machines in RKHS subspaces that use the $L_\epsilon$ or general $L_p$ loss functions. This paper presents a novel proof of this result also for the case that a bias is added to the functions in the RKHS.
</description>
<pubDate>Sat, 01 May 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7262</guid>
<dc:date>1999-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Framework for Regularization Networks and Support Vector Machines</title>
<link>https://hdl.handle.net/1721.1/7261</link>
<description>A Unified Framework for Regularization Networks and Support Vector Machines
Evgeniou, Theodoros; Pontil, Massimiliano; Poggio, Tomaso
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples -- in particular the regression problem of approximating a multivariate function from sparse data. We present both formulations in a unified framework, namely in the context of Vapnik's theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics.
</description>
<pubDate>Mon, 01 Mar 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7261</guid>
<dc:date>1999-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multivariate Density Estimation: An SVM Approach</title>
<link>https://hdl.handle.net/1721.1/7260</link>
<description>Multivariate Density Estimation: An SVM Approach
Mukherjee, Sayan; Vapnik, Vladimir
We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. The algorithm is based upon a Support Vector Machine (SVM) approach to solving inverse operator problems. The algorithm is implemented and tested on simulated data from different distributions and different dimensionalities, gaussians and laplacians in $R^2$ and $R^{12}$. A comparison in performance is made with Gaussian Mixture Models (GMMs). Our algorithm does as well or better than the GMMs for the simulations tested and has the added advantage of being automated with respect to parameters.
</description>
<pubDate>Thu, 01 Apr 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7260</guid>
<dc:date>1999-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Noise Model of Support Vector Machine Regression</title>
<link>https://hdl.handle.net/1721.1/7259</link>
<description>On the Noise Model of Support Vector Machine Regression
Pontil, Massimiliano; Mukherjee, Sayan; Girosi, Federico
Support Vector Machine Regression (SVMR) is a regression technique which has been recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error), but by a different loss function called Vapnik's $\epsilon$-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper the use of Vapnik's loss function is shown to be equivalent to a model of additive Gaussian noise, where the variance and mean of the Gaussian are random variables. The probability distributions for the variance and mean will be stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function, but to a much broader class of loss functions.
</description>
<pubDate>Thu, 01 Oct 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7259</guid>
<dc:date>1998-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Regression to Classification in Support Vector Machines</title>
<link>https://hdl.handle.net/1721.1/7258</link>
<description>From Regression to Classification in Support Vector Machines
Pontil, Massimiliano; Rifkin, Ryan; Evgeniou, Theodoros
We study the relation between support vector machines (SVMs) for regression (SVMR) and SVMs for classification (SVMC). We show that for a given SVMC solution there exists an SVMR solution which is equivalent for a certain choice of the parameters. In particular, our result is that for $\epsilon$ sufficiently close to one, the optimal hyperplane and threshold for the SVMC problem with regularization parameter $C_c$ are equal to $(1-\epsilon)^{-1}$ times the optimal hyperplane and threshold for SVMR with regularization parameter $C_r = (1-\epsilon)C_c$. A direct consequence of this result is that SVMC can be seen as a special case of SVMR.
</description>
<pubDate>Sun, 01 Nov 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7258</guid>
<dc:date>1998-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating Dependency Structure as a Hidden Variable</title>
<link>https://hdl.handle.net/1721.1/7257</link>
<description>Estimating Dependency Structure as a Hidden Variable
Meila, Marina; Jordan, Michael I.; Morris, Quaid
This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. We also show that the single tree classifier acts like an implicit feature selector, thus making the classification performance insensitive to irrelevant attributes. Experimental results demonstrate the excellent performance of the new model both in density estimation and in classification.
</description>
<pubDate>Tue, 01 Sep 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7257</guid>
<dc:date>1998-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse Correlation Kernel Analysis and Reconstruction</title>
<link>https://hdl.handle.net/1721.1/7256</link>
<description>Sparse Correlation Kernel Analysis and Reconstruction
Papageorgiou, Constantine P.; Girosi, Federico; Poggio, Tomaso
This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class- specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare this to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric that is used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account high order structure in images.
</description>
<pubDate>Fri, 01 May 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7256</guid>
<dc:date>1998-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Notes on PCA, Regularization, Sparsity and Support Vector Machines</title>
<link>https://hdl.handle.net/1721.1/7255</link>
<description>Notes on PCA, Regularization, Sparsity and Support Vector Machines
Poggio, Tomaso; Girosi, Federico
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines (Vapnik, 1982) technique and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which are about regression, and the different problem of pattern classification.
</description>
<pubDate>Fri, 01 May 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7255</guid>
<dc:date>1998-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Invariances in Inferotemporal Cell Tuning</title>
<link>https://hdl.handle.net/1721.1/7254</link>
<description>Modeling Invariances in Inferotemporal Cell Tuning
Riesenhuber, Maximilian; Poggio, Tomaso
In macaque inferotemporal cortex (IT), neurons have been found to respond selectively to complex shapes while showing broad tuning ("invariance") with respect to stimulus transformations such as translation and scale changes and a limited tuning to rotation in depth. Training monkeys with novel, paperclip-like objects, Logothetis et al. could investigate whether these invariance properties are due to experience with exhaustively many transformed instances of an object or if there are mechanisms that allow the cells to show response invariance also to previously unseen instances of that object. They found object-selective cells in anterior IT which exhibited limited invariance to various transformations after training with single object views. While previous models accounted for the tuning of the cells for rotations in depth and for their selectivity to a specific object relative to a population of distractor objects, the model described here attempts to explain in a biologically plausible way the additional properties of translation and size invariance. Using the same stimuli as in the experiment, we find that model IT neurons exhibit invariance properties which closely parallel those of real neurons. Simulations show that the model is capable of unsupervised learning of view-tuned neurons. The model also allows us to make experimentally testable predictions regarding novel stimulus transformations and combinations of stimuli.
</description>
<pubDate>Sun, 01 Mar 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7254</guid>
<dc:date>1998-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Models for Co-occurrence Data</title>
<link>https://hdl.handle.net/1721.1/7253</link>
<description>Statistical Models for Co-occurrence Data
Hofmann, Thomas; Puzicha, Jan
Modeling and predicting co-occurrences of events is a fundamental problem of unsupervised learning. In this contribution we develop a statistical framework for analyzing co-occurrence data in a general setting where elementary observations are joint occurrences of pairs of abstract objects from two finite sets. The main challenge for statistical models in this context is to overcome the inherent data sparseness and to estimate the probabilities for pairs which were rarely observed or even unobserved in a given sample set. Moreover, it is often of considerable interest to extract grouping structure or to find a hierarchical data organization. A novel family of mixture models is proposed which explain the observed data by a finite number of shared aspects or clusters. This provides a common framework for statistical inference and structure discovery and also includes several recently proposed models as special cases. Adopting the maximum likelihood principle, EM algorithms are derived to fit the model parameters. We develop improved versions of EM which largely avoid overfitting problems and overcome the inherent locality of EM--based optimization. Among the broad variety of possible applications, e.g., in information retrieval, natural language processing, data mining, and computer vision, we have chosen document retrieval, the statistical analysis of noun/adjective co-occurrence and the unsupervised segmentation of textured images to test and evaluate the proposed algorithms.
</description>
<pubDate>Sun, 01 Feb 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7253</guid>
<dc:date>1998-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Slow and Smooth: A Bayesian Theory for the Combination of Local Motion Signals in Human Vision</title>
<link>https://hdl.handle.net/1721.1/7252</link>
<description>Slow and Smooth: A Bayesian Theory for the Combination of Local Motion Signals in Human Vision
Weiss, Yair; Adelson, Edward H.
In order to estimate the motion of an object, the visual system needs to combine multiple local measurements, each of which carries some degree of ambiguity. We present a model of motion perception whereby measurements from different image regions are combined according to a Bayesian estimator --- the estimated motion maximizes the posterior probability assuming a prior favoring slow and smooth velocities. In reviewing a large number of previously published phenomena we find that the Bayesian estimator predicts a wide range of psychophysical results. This suggests that the seemingly complex set of illusions arise from a single computational strategy that is optimal under reasonable assumptions.
</description>
<pubDate>Sun, 01 Feb 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7252</guid>
<dc:date>1998-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Degeneracy of Linear Reconstruction from Three Views: Linear Line Complex and Applications</title>
<link>https://hdl.handle.net/1721.1/7251</link>
<description>On Degeneracy of Linear Reconstruction from Three Views: Linear Line Complex and Applications
Stein, Gideon P.; Shashua, Amnon
This paper investigates the linear degeneracies of projective structure estimation from point and line features across three views. We show that the rank of the linear system of equations for recovering the trilinear tensor of three views reduces to 23 (instead of 26) in the case when the scene is a Linear Line Complex (set of lines in space intersecting at a common line) and is 21 when the scene is planar. The LLC situation is only linearly degenerate, and we show that one can obtain a unique solution when the admissibility constraints of the tensor are accounted for. The line configuration described by an LLC, rather than being some obscure case, is in fact quite typical. It includes, as a particular example, the case of a camera moving down a hallway in an office environment or down an urban street. Furthermore, an LLC situation may occur as an artifact such as in direct estimation from spatio-temporal derivatives of image brightness. Therefore, an investigation into degeneracies and their remedy is important also in practice.
</description>
<pubDate>Mon, 01 Dec 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7251</guid>
<dc:date>1997-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse Representations of Multiple Signals</title>
<link>https://hdl.handle.net/1721.1/7250</link>
<description>Sparse Representations of Multiple Signals
Evgeniou, Theodoros; Poggio, Tomaso
We discuss the problem of finding sparse representations of a class of signals. We formalize the problem and prove it is NP-complete both in the case of a single signal and that of multiple ones. Next we develop a simple approximation method to the problem and we show experimental results using artificially generated signals. Furthermore, we use our approximation method to find sparse representations of classes of real signals, specifically of images of pedestrians. We discuss the relation between our formulation of the sparsity problem and the problem of finding representations of objects that are compact and appropriate for detection and classification.
</description>
<pubDate>Mon, 01 Sep 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7250</guid>
<dc:date>1997-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Belief Propagation and Revision in Networks with Loops</title>
<link>https://hdl.handle.net/1721.1/7249</link>
<description>Belief Propagation and Revision in Networks with Loops
Weiss, Yair
Local belief propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the optimal beliefs for singly connected networks. Recently, a number of researchers have empirically demonstrated good performance of these same algorithms on networks with loops, but a theoretical understanding of this performance has yet to be achieved. Here we lay the foundation for an understanding of belief propagation in networks with loops. For networks with a single loop, we derive an analytical relationship between the steady state beliefs in the loopy network and the true posterior probability. Using this relationship we show a category of networks for which the MAP estimate obtained by belief update and by belief revision can be proven to be optimal (although the beliefs will be incorrect). We show how nodes can use local information in the messages they receive in order to correct the steady state beliefs. Furthermore we prove that for all networks with a single loop, the MAP estimate obtained by belief revision at convergence is guaranteed to give the globally optimal sequence of states. The result is independent of the length of the cycle and the size of the state space. For networks with multiple loops, we introduce the concept of a "balanced network" and show simulation results.
</description>
<pubDate>Sat, 01 Nov 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7249</guid>
<dc:date>1997-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Recognition and Categorization on the Basis of Similarities to Multiple Class Prototypes</title>
<link>https://hdl.handle.net/1721.1/7248</link>
<description>Visual Recognition and Categorization on the Basis of Similarities to Multiple Class Prototypes
Edelman, Shimon; Duvdevani-Bar, Sharon
To recognize a previously seen object, the visual system must overcome the variability in the object's appearance caused by factors such as illumination and pose. Developments in computer vision suggest that it may be possible to counter the influence of these factors, by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Daily life situations, however, typically require categorization, rather than recognition, of objects. Due to the open-ended character both of natural kinds and of artificial categories, categorization cannot rely on interpolation between stored examples. Nonetheless, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes appears to be computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
</description>
<pubDate>Mon, 01 Sep 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7248</guid>
<dc:date>1997-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Segmentation without Classification in a Model of the Primary Visual Cortex</title>
<link>https://hdl.handle.net/1721.1/7247</link>
<description>Visual Segmentation without Classification in a Model of the Primary Visual Cortex
Li, Zhaoping
Stimuli outside classical receptive fields significantly influence the neurons' activities in primary visual cortex. We propose that such contextual influences are used to segment regions by detecting the breakdown of homogeneity or translation invariance in the input, thus computing global region boundaries using local interactions. This is implemented in a biologically based model of V1, and demonstrated in examples of texture segmentation and figure-ground segregation. By contrast with traditional approaches, segmentation occurs without classification or comparison of features within or between regions and is performed by exactly the same neural circuit responsible for the dual problem of the grouping and enhancement of contours.
</description>
<pubDate>Fri, 01 Aug 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7247</guid>
<dc:date>1997-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Properties of Support Vector Machines</title>
<link>https://hdl.handle.net/1721.1/7246</link>
<description>Properties of Support Vector Machines
Pontil, Massimiliano; Verri, Alessandro
Support Vector Machines (SVMs) perform pattern recognition between two point classes by finding a decision surface determined by certain points of the training set, termed Support Vectors (SV). This surface, which in some feature space of possibly infinite dimension can be regarded as a hyperplane, is obtained from the solution of a problem of quadratic programming that depends on a regularization parameter. In this paper we study some mathematical properties of support vectors and show that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (which are SVs lying on the margin), the second proportional to the regularization parameter. For almost all values of the parameter, this enables us to predict how the decision surface varies for small parameter changes. In the special but important case of feature space of finite dimension m, we also show that there are at most m+1 margin vectors and observe that m+1 SVs are usually sufficient to fully determine the decision surface. For relatively small m this latter result leads to a consistent reduction of the SV number.
</description>
<pubDate>Fri, 01 Aug 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7246</guid>
<dc:date>1997-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating Dependency Structure as a Hidden Variable</title>
<link>https://hdl.handle.net/1721.1/7245</link>
<description>Estimating Dependency Structure as a Hidden Variable
Meila, Marina; Jordan, Michael I.; Morris, Quaid
This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors.
</description>
<pubDate>Sun, 01 Jun 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7245</guid>
<dc:date>1997-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Translation Invariance in Object Recognition, and Its Relation to Other Visual Transformations</title>
<link>https://hdl.handle.net/1721.1/7244</link>
<description>Translation Invariance in Object Recognition, and Its Relation to Other Visual Transformations
Dill, Marcus; Edelman, Shimon
Human object recognition is generally considered to tolerate changes of the stimulus position in the visual field. A number of recent studies, however, have cast doubt on the completeness of translation invariance. In a new series of experiments we tried to investigate whether positional specificity of short-term memory is a general property of visual perception. We tested same/different discrimination of computer graphics models that were displayed at the same or at different locations of the visual field, and found complete translation invariance, regardless of the similarity of the animals and irrespective of direction and size of the displacement (Exp. 1 and 2). Decisions were strongly biased towards same decisions if stimuli appeared at a constant location, while after translation subjects displayed a tendency towards different decisions. Even if the spatial order of animal limbs was randomized ("scrambled animals"), no deteriorating effect of shifts in the field of view could be detected (Exp. 3). However, if the influence of single features was reduced (Exp. 4 and 5) small but significant effects of translation could be obtained. Under conditions that do not reveal an influence of translation, rotation in depth strongly interferes with recognition (Exp. 6). Changes of stimulus size did not reduce performance (Exp. 7). Tolerance to these object transformations seems to rely on different brain mechanisms, with translation and scale invariance being achieved in principle, while rotation invariance is not.
</description>
<pubDate>Sun, 01 Jun 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7244</guid>
<dc:date>1997-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perceiving Illumination Inconsistencies in Scenes</title>
<link>https://hdl.handle.net/1721.1/7243</link>
<description>Perceiving Illumination Inconsistencies in Scenes
Ostrovsky, Yuri; Cavanagh, Patrick; Sinha, Pawan
The human visual system is adept at detecting and encoding statistical regularities in its spatio-temporal environment. Here we report an unexpected failure of this ability in the context of perceiving inconsistencies in illumination distributions across a scene. Contrary to predictions from previous studies [Enns and Rensink, 1990; Sun and Perona, 1996a, 1996b, 1997], we find that the visual system displays a remarkable lack of sensitivity to illumination inconsistencies, both in experimental stimuli and in images of real scenes. Our results allow us to draw inferences regarding how the visual system encodes illumination distributions across scenes. Specifically, they suggest that the visual system does not verify the global consistency of locally derived estimates of illumination direction.
</description>
<pubDate>Mon, 05 Nov 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7243</guid>
<dc:date>2001-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting Faces in Impoverished Images</title>
<link>https://hdl.handle.net/1721.1/7242</link>
<description>Detecting Faces in Impoverished Images
Torralba, Antonio; Sinha, Pawan
The ability to detect faces in images is of critical ecological significance. It is a pre-requisite for other important face perception tasks such as person identification, gender classification and affect analysis. Here we address the question of how the visual system classifies images into face and non-face patterns. We focus on face detection in impoverished images, which allow us to explore information thresholds required for different levels of performance. Our experimental results provide lower bounds on image resolution needed for reliable discrimination between face and non-face patterns and help characterize the nature of facial representations used by the visual system under degraded viewing conditions. Specifically, they enable an evaluation of the contribution of luminance contrast, image orientation and local context on face-detection performance.
</description>
<pubDate>Mon, 05 Nov 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7242</guid>
<dc:date>2001-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Multiclass Text Classification with the Support Vector Machine</title>
<link>https://hdl.handle.net/1721.1/7241</link>
<description>Improving Multiclass Text Classification with the Support Vector Machine
Rennie, Jason D. M.; Rifkin, Ryan
We compare Naive Bayes and Support Vector Machines on the task of multiclass text classification. Using a variety of approaches to combine the underlying binary classifiers, we find that SVMs substantially outperform Naive Bayes. We present full multiclass results on two well-known text data sets, including the lowest error to date on both data sets. We develop a new indicator of binary performance to show that the SVM's lower multiclass error is a result of its improved binary performance. Furthermore, we demonstrate and explore the surprising result that one-vs-all classification performs favorably compared to other approaches even though it has no error-correcting properties.
</description>
<pubDate>Tue, 16 Oct 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7241</guid>
<dc:date>2001-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Biologically Plausible Neural Circuits for Realization of Maximum Operations</title>
<link>https://hdl.handle.net/1721.1/7240</link>
<description>Biologically Plausible Neural Circuits for Realization of Maximum Operations
Yu, Angela J.; Giese, Martin A.; Poggio, Tomaso A.
Object recognition in the visual cortex is based on a hierarchical architecture, in which specialized brain regions along the ventral pathway extract object features of increasing levels of complexity, accompanied by greater invariance to stimulus size, position, and orientation. Recent theoretical studies postulate that a non-linear pooling function, such as the maximum (MAX) operation, could be fundamental in achieving such invariance. In this paper, we are concerned with neurally plausible mechanisms that may be involved in realizing the MAX operation. Four canonical circuits are proposed, each based on neural mechanisms that have been previously discussed in the context of cortical processing. Through simulations and mathematical analysis, we examine the relative performance and robustness of these mechanisms. We derive experimentally verifiable predictions for each circuit and discuss their respective physiological considerations.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7240</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contextual Priming for Object Detection</title>
<link>https://hdl.handle.net/1721.1/7239</link>
<description>Contextual Priming for Object Detection
Torralba, Antonio; Sinha, Pawan
There is general consensus that context can be a rich source of information about an object's identity, location and scale. In fact, the structure of many real-world scenes is governed by strong configurational rules akin to those that apply to a single object. Here we introduce a simple probabilistic framework for modeling the relationship between context and object properties based on the correlation between the statistics of low-level features across the entire scene and the objects that it contains. The resulting scheme serves as an effective procedure for object priming, context driven focus of attention and automatic scale-selection on real-world scenes.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7239</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiclass Classification of SRBCTs</title>
<link>https://hdl.handle.net/1721.1/7238</link>
<description>Multiclass Classification of SRBCTs
Yeo, Gene; Poggio, Tomaso
A novel approach to multiclass tumor classification using Artificial Neural Networks (ANNs) was introduced in a recent paper [Khan2001]. The method successfully classified and diagnosed small, round blue cell tumors (SRBCTs) of childhood into four distinct categories, neuroblastoma (NB), rhabdomyosarcoma (RMS), non-Hodgkin lymphoma (NHL) and the Ewing family of tumors (EWS), using cDNA gene expression profiles of samples that included both tumor biopsy material and cell lines. We report that using an approach similar to the one reported by Yeang et al. [Yeang2001], i.e. multiclass classification by combining outputs of binary classifiers, we achieved equal accuracy with far fewer features. We report the performance of three binary classifiers (k-nearest neighbors (kNN), weighted-voting (WV), and support vector machines (SVM)) with three feature selection techniques (Golub's Signal to Noise (SN) ratios [Golub99], Fisher scores (FSc) and Mukherjee's SVM feature selection (SVMFS)) [Sayan98].
</description>
<pubDate>Sat, 25 Aug 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7238</guid>
<dc:date>2001-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>Role of Low-level Mechanisms in Brightness Perception</title>
<link>https://hdl.handle.net/1721.1/7237</link>
<description>Role of Low-level Mechanisms in Brightness Perception
Sinha, Pawan; Torralba, Antonio
Brightness judgments are a key part of the primate brain's visual analysis of the environment. There is general consensus that the perceived brightness of an image region is based not only on its actual luminance, but also on the photometric structure of its neighborhood. However, it is unclear precisely how a region's context influences its perceived brightness. Recent research has suggested that brightness estimation may be based on a sophisticated analysis of scene layout in terms of transparency, illumination and shadows. This work has called into question the role of low-level mechanisms, such as lateral inhibition, as explanations for brightness phenomena. Here we describe experiments with displays for which low-level and high-level analyses make qualitatively different predictions, and with which we can quantitatively assess the trade-offs between low-level and high-level factors. We find that brightness percepts in these displays are governed by low-level stimulus properties, even when these percepts are inconsistent with higher-level interpretations of scene layout. These results point to the important role of low-level mechanisms in determining brightness percepts.
</description>
<pubDate>Wed, 01 Aug 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7237</guid>
<dc:date>2001-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognizing Indoor Scenes</title>
<link>https://hdl.handle.net/1721.1/7236</link>
<description>Recognizing Indoor Scenes
Torralba, Antonio; Sinha, Pawan
We propose a scheme for indoor place identification based on the recognition of global scene views. Scene views are encoded using a holistic representation that provides low-resolution spatial and spectral information. The holistic nature of the representation dispenses with the need to rely on specific objects or local landmarks and also renders it robust against variations in object configurations. We demonstrate the scheme on the problem of recognizing scenes in video sequences captured while walking through an office environment. We develop a method for distinguishing between 'diagnostic' and 'generic' views and also evaluate changes in system performance as a function of the amount of training data available and the complexity of the representation.
</description>
<pubDate>Wed, 25 Jul 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7236</guid>
<dc:date>2001-07-25T00:00:00Z</dc:date>
</item>
<item>
<title>Perceptually-based Comparison of Image Similarity Metrics</title>
<link>https://hdl.handle.net/1721.1/7235</link>
<description>Perceptually-based Comparison of Image Similarity Metrics
Russell, Richard; Sinha, Pawan
The image comparison operation, assessing how well one image matches another, forms a critical component of many image analysis systems and models of human visual processing. Two norms used commonly for this purpose are L1 and L2, which are specific instances of the Minkowski metric. However, there is often not a principled reason for selecting one norm over the other. One way to address this problem is by examining whether one metric better captures the perceptual notion of image similarity than the other. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created via vector quantization. In both conditions the subjects showed a consistent preference for images matched using the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
</description>
<pubDate>Sun, 01 Jul 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7235</guid>
<dc:date>2001-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Audiomomma Music Recommendation System</title>
<link>https://hdl.handle.net/1721.1/7234</link>
<description>The Audiomomma Music Recommendation System
Alvira, Mariano; Paris, Jim; Rifkin, Ryan
We design and implement a system that recommends musicians to listeners. The basic idea is to keep track of what artists a user listens to, to find other users with similar tastes, and to recommend other artists that these similar listeners enjoy. The system utilizes a client-server architecture, a web-based interface, and an SQL database to store and process information. We describe Audiomomma-0.3, a proof-of-concept implementation of the above ideas.
</description>
<pubDate>Sun, 01 Jul 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7234</guid>
<dc:date>2001-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Markets for Product Concepts</title>
<link>https://hdl.handle.net/1721.1/7233</link>
<description>Experimental Markets for Product Concepts
Chan, Nicholas T.; Dahan, Ely; Lo, Andrew W.; Poggio, Tomaso
Market prices are well known to efficiently collect and aggregate diverse information regarding the value of commodities and assets. The role of markets has been particularly suitable to pricing financial securities. This article provides an alternative application of the pricing mechanism to marketing research - using pseudo-securities markets to measure preferences over new product concepts. Surveys, focus groups, concept tests and conjoint studies are methods traditionally used to measure individual and aggregate preferences. Unfortunately, these methods can be biased, costly and time-consuming to conduct. The present research is motivated by the desire to efficiently measure preferences and more accurately predict new product success, based on the efficiency and incentive-compatibility of security trading markets. The article describes a novel market research method, provides insight into why the method should work, and compares the results of several trading experiments against other methodologies such as concept testing and conjoint analysis.
</description>
<pubDate>Sun, 01 Jul 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7233</guid>
<dc:date>2001-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Feature Selection for Face Detection</title>
<link>https://hdl.handle.net/1721.1/7232</link>
<description>Feature Selection for Face Detection
Serre, Thomas; Heisele, Bernd; Mukherjee, Sayan; Poggio, Tomaso
We present a new method to select features for a face detection system using Support Vector Machines (SVMs). In the first step we reduce the dimensionality of the input space by projecting the data into a subset of eigenvectors. The dimension of the subset is determined by a classification criterion based on minimizing a bound on the expected error probability of an SVM. In the second step we select features from the SVM feature space by removing those that have low contributions to the decision function of the SVM.
</description>
<pubDate>Fri, 01 Sep 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7232</guid>
<dc:date>2000-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Models of Object Recognition in Cortex: A Review</title>
<link>https://hdl.handle.net/1721.1/7231</link>
<description>Computational Models of Object Recognition in Cortex: A Review
Riesenhuber, Maximilian; Poggio, Tomaso
Understanding how biological visual systems perform object recognition is one of the ultimate goals in computational neuroscience. Among the biological models of recognition the main distinctions are between feedforward and feedback and between object-centered and view-centered. From a computational viewpoint the different recognition tasks - for instance categorization and identification - are very similar, representing different trade-offs between specificity and invariance. Thus the different tasks do not strictly require different classes of models. The focus of the review is on feedforward, view-based models that are supported by psychophysical and physiological data.
</description>
<pubDate>Mon, 07 Aug 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7231</guid>
<dc:date>2000-08-07T00:00:00Z</dc:date>
</item>
<item>
<title>People Recognition in Image Sequences by Supervised Learning</title>
<link>https://hdl.handle.net/1721.1/7230</link>
<description>People Recognition in Image Sequences by Supervised Learning
Nakajima, Chikahito; Pontil, Massimiliano; Heisele, Bernd; Poggio, Tomaso
We describe a system that learns from examples to recognize people in images taken indoors. Images of people are represented by color-based and shape-based features. Recognition is carried out through combinations of Support Vector Machine classifiers (SVMs). Different types of multiclass strategies based on SVMs are explored and compared to k-Nearest Neighbors classifiers (kNNs). The system works in real time and shows high performance rates for people recognition throughout one day.
</description>
<pubDate>Thu, 01 Jun 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7230</guid>
<dc:date>2000-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Face Detection in Still Gray Images</title>
<link>https://hdl.handle.net/1721.1/7229</link>
<description>Face Detection in Still Gray Images
Heisele, Bernd; Poggio, Tomaso; Pontil, Massimiliano
We present a trainable system for detecting frontal and near-frontal views of faces in still gray images using Support Vector Machines (SVMs). We first consider the problem of detecting the whole face pattern by a single SVM classifier. In this context we compare different types of image features, present and evaluate a new method for reducing the number of features and discuss practical issues concerning the parameterization of SVMs and the selection of training data. The second part of the paper describes a component-based method for face detection consisting of a two-level hierarchy of SVM classifiers. On the first level, component classifiers independently detect components of a face, such as the eyes, the nose, and the mouth. On the second level, a single classifier checks if the geometrical configuration of the detected components in the image matches a geometrical model of a face.
</description>
<pubDate>Mon, 01 May 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7229</guid>
<dc:date>2000-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Individual is Nothing, the Class Everything: Psychophysics and Modeling of Recognition in Object Classes</title>
<link>https://hdl.handle.net/1721.1/7222</link>
<description>The Individual is Nothing, the Class Everything: Psychophysics and Modeling of Recognition in Object Classes
Riesenhuber, Maximilian; Poggio, Tomaso
Most psychophysical studies of object recognition have focused on the recognition and representation of individual objects on which subjects had previously been explicitly trained. Correspondingly, modeling studies have often employed a 'grandmother'-type representation where the objects to be recognized were represented by individual units. However, objects in the natural world are commonly members of a class containing a number of visually similar objects, such as faces, for which physiology studies have provided support for a representation based on a sparse population code, which permits generalization from the learned exemplars to novel objects of that class. In this paper, we present results from psychophysical and modeling studies intended to investigate object recognition in natural ('continuous') object classes. In two experiments, subjects were trained to perform subordinate-level discrimination in a continuous object class - images of computer-rendered cars - created using a 3D morphing system. By comparing the recognition performance of trained and untrained subjects we could estimate the effects of viewpoint-specific training and infer properties of the object class-specific representation learned as a result of training. We then compared the experimental findings to simulations, building on our recently presented HMAX model of object recognition in cortex, to investigate the computational properties of a population-based object class representation as outlined above. We find experimental evidence, supported by modeling results, that training builds a viewpoint- and class-specific representation that supplements a pre-existing representation with lower shape discriminability but possibly greater viewpoint invariance.
</description>
<pubDate>Mon, 01 May 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7222</guid>
<dc:date>2000-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Object Perception with Random Image Structure Evolution</title>
<link>https://hdl.handle.net/1721.1/7221</link>
<description>Exploring Object Perception with Random Image Structure Evolution
Sadr, Javid; Sinha, Pawan
We have developed a technique called RISE (Random Image Structure Evolution), by which one may systematically sample continuous paths in a high-dimensional image space. A basic RISE sequence depicts the evolution of an object's image from a random field, along with the reverse sequence which depicts the transformation of this image back into randomness. The processing steps are designed to ensure that important low-level image attributes such as the frequency spectrum and luminance are held constant throughout a RISE sequence. Experiments based on the RISE paradigm can be used to address some key open issues in object perception. These include determining the neural substrates underlying object perception, the role of prior knowledge and expectation in object perception, and the developmental changes in object perception skills from infancy to adulthood.
</description>
<pubDate>Thu, 01 Mar 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7221</guid>
<dc:date>2001-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Electronic Market-Maker</title>
<link>https://hdl.handle.net/1721.1/7220</link>
<description>An Electronic Market-Maker
Chan, Nicholas Tung; Shelton, Christian
This paper presents an adaptive learning model for market-making under the reinforcement learning framework. Reinforcement learning is a learning technique in which agents aim to maximize the long-term accumulated rewards. No knowledge of the market environment, such as the order arrival or price process, is assumed. Instead, the agent learns from real-time market experience and develops explicit market-making strategies, achieving multiple objectives including the maximization of profits and the minimization of the bid-ask spread. The simulation results show initial success in applying learning techniques to the construction of market-making algorithms.
</description>
<pubDate>Tue, 17 Apr 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7220</guid>
<dc:date>2001-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>An Empirical Comparison of SNoW and SVMs for Face Detection</title>
<link>https://hdl.handle.net/1721.1/7219</link>
<description>An Empirical Comparison of SNoW and SVMs for Face Detection
Alvira, Mariano; Rifkin, Ryan
Impressive claims have been made for the performance of the SNoW algorithm on face detection tasks by Yang et al. [7]. In particular, by looking at both their results and those of Heisele et al. [3], one could infer that the SNoW system performed substantially better than an SVM-based system, even when the SVM used a polynomial kernel and the SNoW system used a particularly simplistic 'primitive' linear representation. We evaluated the two approaches in a controlled experiment, looking directly at performance on a simple, fixed-sized test set, isolating out 'infrastructure' issues related to detecting faces at various scales in large images. We found that SNoW performed about as well as linear SVMs, and substantially worse than polynomial SVMs.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7219</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Policy Improvement for POMDPs Using Normalized Importance Sampling</title>
<link>https://hdl.handle.net/1721.1/7218</link>
<description>Policy Improvement for POMDPs Using Normalized Importance Sampling
Shelton, Christian R.
We present a new method for estimating the expected return of a POMDP from experience. The estimator does not assume any knowledge of the POMDP and allows the experience to be gathered with an arbitrary set of policies. The return is estimated for any new policy of the POMDP. We motivate the estimator from function-approximation and importance sampling points of view and derive its theoretical properties. Although the estimator is biased, it has low variance and the bias is often irrelevant when the estimator is used for pair-wise comparisons. We conclude by extending the estimator to policies with memory and compare its performance in a greedy search algorithm to the REINFORCE algorithm, showing an order of magnitude reduction in the number of trials required.
</description>
<pubDate>Tue, 20 Mar 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7218</guid>
<dc:date>2001-03-20T00:00:00Z</dc:date>
</item>
<item>
<title>Observations on Cortical Mechanisms for Object Recognition and Learning</title>
<link>https://hdl.handle.net/1721.1/7217</link>
<description>Observations on Cortical Mechanisms for Object Recognition and Learning
Poggio, Tomaso; Hurlbert, Anya
This paper sketches a hypothetical cortical architecture for visual 3D object recognition based on a recent computational model. The view-centered scheme relies on modules for learning from examples, such as HyperBF-like networks. Such models capture a class of explanations we call Memory-Based Models (MBM) that contains sparse population coding, memory-based recognition, and codebooks of prototypes. Unlike the sigmoidal units of some artificial neural networks, the units of MBMs are consistent with the description of cortical neurons. We describe how an example of MBM may be realized in terms of cortical circuitry and biophysical mechanisms, consistent with psychophysical and physiological data.
</description>
<pubDate>Wed, 01 Dec 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7217</guid>
<dc:date>1993-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric and Algebraic Aspects of 3D Affine and Projective Structures from Perspective 2D Views</title>
<link>https://hdl.handle.net/1721.1/7216</link>
<description>Geometric and Algebraic Aspects of 3D Affine and Projective Structures from Perspective 2D Views
Shashua, Amnon
We investigate the differences --- conceptually and algorithmically --- between affine and projective frameworks for the tasks of visual recognition and reconstruction from perspective views. It is shown that an affine invariant exists between any view and a fixed view chosen as a reference view. This implies that for tasks for which a reference view can be chosen, such as in alignment schemes for visual recognition, projective invariants are not really necessary. We then use the affine invariant to derive new algebraic connections between perspective views. It is shown that three perspective views of an object are connected by certain algebraic functions of image coordinates alone (no structure or camera geometry needs to be involved).
</description>
<pubDate>Thu, 01 Jul 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7216</guid>
<dc:date>1993-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>3D Object Recognition: Symmetry and Virtual Views</title>
<link>https://hdl.handle.net/1721.1/7215</link>
<description>3D Object Recognition: Symmetry and Virtual Views
Vetter, Thomas; Poggio, Tomaso; Bülthoff, Heinrich
Many 3D objects in the world around us are strongly constrained. For instance, not only cultural artifacts but also many natural objects are bilaterally symmetric. Theoretical arguments suggest and psychophysical experiments confirm that humans may be better at the recognition of symmetric objects. The hypothesis of symmetry-induced virtual views together with a network model that successfully accounts for human recognition of generic 3D objects leads to predictions that we have verified with psychophysical experiments.
</description>
<pubDate>Tue, 01 Dec 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7215</guid>
<dc:date>1992-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Example Based Image Analysis and Synthesis</title>
<link>https://hdl.handle.net/1721.1/7214</link>
<description>Example Based Image Analysis and Synthesis
Beymer, David; Shashua, Amnon; Poggio, Tomaso
Image analysis and graphics synthesis can be achieved with learning techniques that use image examples directly, without physically-based 3D models. In our technique: -- the mapping from novel images to a vector of "pose" and "expression" parameters can be learned from a small set of example images using a function approximation technique that we call an analysis network; -- the inverse mapping from input "pose" and "expression" parameters to output images can be synthesized from a small set of example images and used to produce new images using a similar synthesis network. The techniques described here have several applications in computer graphics, special effects, interactive multimedia and very low bandwidth teleconferencing.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7214</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conditions for Viewpoint Dependent Face Recognition</title>
<link>https://hdl.handle.net/1721.1/7213</link>
<description>Conditions for Viewpoint Dependent Face Recognition
Schyns, Philippe G.; Bulthoff, Heinrich H.
Poggio and Vetter (1992) showed that learning one view of a bilaterally symmetric object could be sufficient for its recognition, if this view allows the computation of a symmetric, "virtual," view. Faces are roughly bilaterally symmetric objects. Learning a side view--which always has a symmetric view--should allow for better generalization performance than learning the frontal view. Two psychophysical experiments tested these predictions. Stimuli were views of shaded 3D models of laser-scanned faces. The first experiment tested whether a particular view of a face was canonical. The second experiment tested which single views of a face give rise to the best generalization performance. The results were compatible with the symmetry hypothesis: learning a side view allowed better generalization performance than learning the frontal view.
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7213</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Priors Stabilizers and Basis Functions: From Regularization to Radial, Tensor and Additive Splines</title>
<link>https://hdl.handle.net/1721.1/7212</link>
<description>Priors Stabilizers and Basis Functions: From Regularization to Radial, Tensor and Additive Splines
Girosi, Federico; Jones, Michael; Poggio, Tomaso
We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
</description>
<pubDate>Tue, 01 Jun 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7212</guid>
<dc:date>1993-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measure Fields for Function Approximation</title>
<link>https://hdl.handle.net/1721.1/7211</link>
<description>Measure Fields for Function Approximation
Marroquin, Jose L.
The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes that consist of the sets of points best approximated by each model, and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for the determination of the optimal number of components.
</description>
<pubDate>Tue, 01 Jun 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7211</guid>
<dc:date>1993-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometric Structure of the Adaptive Controller of the Human Arm</title>
<link>https://hdl.handle.net/1721.1/7210</link>
<description>Geometric Structure of the Adaptive Controller of the Human Arm
Shadmehr, Reza; Mussa-Ivaldi, Ferdinando
The objects with which the hand interacts may significantly change the dynamics of the arm. How does the brain adapt control of arm movements to these new dynamics? We show that adaptation is via composition of a model of the task's dynamics. By exploring generalization capabilities of this adaptation we infer some of the properties of the computational elements with which the brain formed this model: the elements have broad receptive fields and encode the learned dynamics as a map structured in an intrinsic coordinate system closely related to the geometry of the skeletomusculature. The low-level nature of these elements suggests that they may represent a set of primitives with which a movement is represented in the CNS.
</description>
<pubDate>Thu, 01 Jul 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7210</guid>
<dc:date>1993-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Formulation for Active Learning with Applications to Object Detection</title>
<link>https://hdl.handle.net/1721.1/7209</link>
<description>A Formulation for Active Learning with Applications to Object Detection
Sung, Kah Kay; Niyogi, Partha
We discuss a formulation for active example  selection for function learning problems. This  formulation is obtained by adapting Fedorov's  optimal experiment design to the learning  problem. We specifically show how to  analytically derive example selection  algorithms for certain well defined function  classes. We then explore the behavior and  sample complexity of such active learning  algorithms. Finally, we view object detection  as a special case of function learning and  show how our formulation reduces to a useful  heuristic to choose examples to reduce the  generalization error.
</description>
<pubDate>Thu, 06 Jun 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7209</guid>
<dc:date>1996-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Forecasting Global Temperature Variations by Neural Networks</title>
<link>https://hdl.handle.net/1721.1/7208</link>
<description>Forecasting Global Temperature Variations by Neural Networks
Miyano, Takaya; Girosi, Federico
Global temperature variations between 1861 and 1984 are forecast using regularization networks, multilayer perceptrons and linear autoregression. The regularization network, optimized by stochastic gradient descent associated with colored noise, gives the best forecasts. For all the models, prediction errors noticeably increase after 1965. These results are consistent with the hypothesis that the climate dynamics is characterized by low-dimensional chaos and that it may have changed at some point after 1965, which is also consistent with the recent idea of climate change.
</description>
<pubDate>Mon, 01 Aug 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7208</guid>
<dc:date>1994-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Quadric Reference Surface: Theory and Applications</title>
<link>https://hdl.handle.net/1721.1/7207</link>
<description>The Quadric Reference Surface: Theory and Applications
Shashua, Amnon; Toelg, Sebastian
The conceptual component of this work is  about "reference surfaces'' which are the dual  of reference frames often used for shape  representation purposes. The theoretical  component of this work involves the question  of whether one can find a unique (and simple)  mapping that aligns two arbitrary perspective  views of an opaque textured quadric surface  in 3D, given (i) few corresponding points in  the two views, or (ii) the outline conic of the  surface in one view (only) and few  corresponding points in the two views. The  practical component of this work is concerned  with applying the theoretical results as tools  for the task of achieving full correspondence  between views of arbitrary objects.
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7207</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Mixtures of Experts and the EM Algorithm</title>
<link>https://hdl.handle.net/1721.1/7206</link>
<description>Hierarchical Mixtures of Experts and the EM Algorithm
Jordan, Michael I.; Jacobs, Robert A.
We present a tree-structured architecture for  supervised learning. The statistical model  underlying the architecture is a hierarchical  mixture model in which both the mixture  coefficients and the mixture components are  generalized linear models (GLIM's). Learning  is treated as a maximum likelihood problem;  in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the  parameters of the architecture. We also  develop an on-line learning algorithm in which  the parameters are updated incrementally.  Comparative simulation results are presented  in the robot dynamics domain.
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7206</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Convergence of Stochastic Iterative Dynamic Programming Algorithms</title>
<link>https://hdl.handle.net/1721.1/7205</link>
<description>On the Convergence of Stochastic Iterative Dynamic Programming Algorithms
Jaakkola, Tommi; Jordan, Michael I.; Singh, Satinder P.
Recent developments in the area of  reinforcement learning have yielded a number  of new algorithms for the prediction and  control of Markovian environments. These  algorithms, including the TD(lambda)  algorithm of Sutton (1988) and the Q-learning  algorithm of Watkins (1989), can be motivated  heuristically as approximations to dynamic  programming (DP). In this paper we provide a  rigorous proof of convergence of these DP-based learning algorithms by relating them to  the powerful techniques of stochastic  approximation theory via a new convergence  theorem. The theorem establishes a general  class of convergent algorithms to which both  TD(lambda) and Q-learning belong.
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7205</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>How are Three-Dimensional Objects Represented in the Brain?</title>
<link>https://hdl.handle.net/1721.1/7204</link>
<description>How are Three-Dimensional Objects Represented in the Brain?
Buelthoff, Heinrich H.; Edelman, Shimon Y.; Tarr, Michael J.
We discuss a variety of object recognition experiments in which human subjects were presented with realistically rendered images of computer-generated three-dimensional objects, with tight control over stimulus shape, surface properties, illumination, and viewpoint, as well as subjects' prior exposure to the stimulus objects. In all experiments recognition performance was: (1) consistently viewpoint dependent; (2) only partially aided by binocular stereo and other depth information; (3) specific to viewpoints that were familiar; (4) systematically disrupted by rotation in depth more than by deforming the two-dimensional images of the stimuli. These results are consistent with recently advanced computational theories of recognition based on view interpolation.
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7204</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reciprocal Interactions Between Motion and Form Perception</title>
<link>https://hdl.handle.net/1721.1/7203</link>
<description>Reciprocal Interactions Between Motion and Form Perception
Sinha, Pawan
The processes underlying the perceptual analysis of visual form are believed to have minimal interaction with those subserving the perception of visual motion (Livingstone and Hubel, 1987; Victor and Conte, 1990). Recent reports of functionally and anatomically segregated parallel streams in the primate visual cortex seem to support this hypothesis (Ungerleider and Mishkin, 1982; VanEssen and Maunsell, 1983; Shipp and Zeki, 1985; Zeki and Shipp, 1988; De Yoe et al., 1994). Here we present perceptual evidence that is at odds with this view and instead suggests strong symmetric interactions between the form and motion processes. In one direction, we show that the introduction of specific static figural elements, say 'F', in a simple motion sequence biases an observer to perceive a particular motion field, say 'M'. In the reverse direction, the imposition of the same motion field 'M' on the original sequence leads the observer to perceive illusory static figural elements 'F'. A specific implication of these findings concerns the possible existence of (what we call) motion end-stopped units in the primate visual system. Such units might constitute part of a mechanism for signalling subjective occluding contours based on motion-field discontinuities.
</description>
<pubDate>Fri, 21 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7203</guid>
<dc:date>1995-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from Incomplete Data</title>
<link>https://hdl.handle.net/1721.1/7202</link>
<description>Learning from Incomplete Data
Ghahramani, Zoubin; Jordan, Michael I.
Real-world learning tasks often involve high-dimensional data sets with complex patterns  of missing features. In this paper we review  the problem of learning from incomplete data  from two statistical perspectives---the  likelihood-based and the Bayesian. The goal  is two-fold: to place current neural network  approaches to missing data within a  statistical framework, and to describe a set of  algorithms, derived from the likelihood-based  framework, that handle clustering,  classification, and function approximation  from incomplete data in a principled and  efficient manner. These algorithms are based  on mixture modeling and make two distinct  appeals to the Expectation-Maximization (EM)  principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture  components and for coping with the missing  data.
</description>
<pubDate>Tue, 24 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7202</guid>
<dc:date>1995-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Cooperative Physics of Fly Swarms: An Emergent Behavior</title>
<link>https://hdl.handle.net/1721.1/7201</link>
<description>Cooperative Physics of Fly Swarms: An Emergent Behavior
Poggio, M.; Poggio, Tomaso A
We have simulated the behavior of several artificial flies, interacting visually with each other. Each fly is described by a simple tracking system (Poggio and Reichardt, 1973; Land and Collett, 1974) which summarizes behavioral experiments in which individual flies fixate a target. Our main finding is that the interaction of these simple modules gives rise to a variety of relatively complex behaviors. In particular, we observe a swarm-like behavior of a group of many artificial flies for certain reasonable ranges of our tracking system parameters.
</description>
<pubDate>Tue, 11 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7201</guid>
<dc:date>1995-04-11T00:00:00Z</dc:date>
</item>
<item>
<title>Sequential Optimal Recovery: A Paradigm for Active Learning</title>
<link>https://hdl.handle.net/1721.1/7200</link>
<description>Sequential Optimal Recovery: A Paradigm for Active Learning
Niyogi, Partha
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations demonstrate the superiority of our active scheme over both passive learning as well as classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
</description>
<pubDate>Fri, 12 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7200</guid>
<dc:date>1995-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Object Recognition in Noisy Images Using Simulated Annealing</title>
<link>https://hdl.handle.net/1721.1/7199</link>
<description>Fast Object Recognition in Noisy Images Using Simulated Annealing
Betke, Margrit; Makris, Nicholas
A fast simulated annealing algorithm is  developed for automatic object recognition.  The normalized correlation coefficient is used  as a measure of the match between a  hypothesized object and an image. Templates  are generated on-line during the search by  transforming model images. Simulated  annealing reduces the search time by orders  of magnitude with respect to an exhaustive  search. The algorithm is applied to the  problem of how landmarks, for example, traffic  signs, can be recognized by an autonomous  vehicle or a navigating robot. The algorithm  works well in noisy, real-world images of  complicated scenes for model images with  high information content.
</description>
<pubDate>Wed, 25 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7199</guid>
<dc:date>1995-01-25T00:00:00Z</dc:date>
</item>
<item>
<title>A Dynamical Systems Model for Language Change</title>
<link>https://hdl.handle.net/1721.1/7198</link>
<description>A Dynamical Systems Model for Language Change
Niyogi, Partha; Berwick, Robert
Formalizing linguists' intuitions of language  change as a dynamical system, we quantify  the time course of language change including  sudden vs. gradual changes in languages.  We apply the computer model to the historical  loss of Verb Second from Old French to  modern French, showing that otherwise  adequate grammatical theories can fail our  new evolutionary criterion.
</description>
<pubDate>Fri, 01 Dec 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7198</guid>
<dc:date>1995-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Verb Classes and Alternations in Bangla, German, English, and Korean</title>
<link>https://hdl.handle.net/1721.1/7197</link>
<description>Verb Classes and Alternations in Bangla, German, English, and Korean
Jones, Douglas A.; Berwick, Robert C.; Cho, Franklin; Khan, Zeeshan; Kohl, Karen T.; Nomura, Naoyuki; Radhakrishnan, Anand; Sauerland, Ulrich; Ulicny, Brian
In this report, we investigate the relationship  between the semantic and syntactic  properties of verbs. Our work is based on the  English Verb Classes and Alternations of  (Levin, 1993). We explore how these classes  are manifested in other languages, in  particular, in Bangla, German, and Korean.  Our report includes a survey and classification  of several hundred verbs from these  languages into the cross-linguistic  equivalents of Levin's classes. We also  explore ways in which our findings may be  used to enhance WordNet in two ways:  making the English syntactic information of  WordNet more fine-grained, and making  WordNet multilingual.
</description>
<pubDate>Mon, 06 May 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7197</guid>
<dc:date>1996-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>The Logical Problem of Language Change</title>
<link>https://hdl.handle.net/1721.1/7196</link>
<description>The Logical Problem of Language Change
Niyogi, Partha; Berwick, Robert
This paper considers the problem of language change. Linguists must explain not only how languages are learned but also how and why they have evolved along certain trajectories and not others. While the language learning problem has focused on the behavior of individuals and how they acquire a particular grammar from a class of grammars ${\cal G}$, here we consider a population of such learners and investigate the emergent, global population characteristics of linguistic communities over several generations. We argue that language change follows logically from specific assumptions about grammatical theories and learning paradigms. In particular, we are able to transform parameterized theories and memoryless acquisition algorithms into grammatical dynamical systems, whose evolution depicts a population's evolving linguistic composition. We investigate the linguistic and computational consequences of this model, showing that the formalization allows one to ask questions about diachronic change that one otherwise could not ask, such as the effect of varying initial conditions on the resulting diachronic trajectories. From a more programmatic perspective, we give an example of how the dynamical system model for language change can serve as a way to distinguish among alternative grammatical theories, introducing a formal diachronic adequacy criterion for linguistic theories.
</description>
<pubDate>Fri, 01 Dec 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7196</guid>
<dc:date>1995-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Convergence Properties of the EM Algorithm for Gaussian Mixtures</title>
<link>https://hdl.handle.net/1721.1/7195</link>
<description>On Convergence Properties of the EM Algorithm for Gaussian Mixtures
Jordan, Michael; Xu, Lei
We examine the relationship between the "Expectation-Maximization'' (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.
</description>
<pubDate>Fri, 21 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7195</guid>
<dc:date>1995-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>View-Based Strategies for 3D Object Recognition</title>
<link>https://hdl.handle.net/1721.1/7194</link>
<description>View-Based Strategies for 3D Object Recognition
Sinha, Pawan; Poggio, Tomaso
A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal in this regard has expounded the use of object-centered models, such as representations of the objects' 3D structures in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. In contrast to this is another proposal which suggests that the viewing parameters encountered during the learning phase might be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Given that object-centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object-centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity of constructing and using object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.
</description>
<pubDate>Fri, 21 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7194</guid>
<dc:date>1995-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>Example Based Learning for View-Based Human Face Detection</title>
<link>https://hdl.handle.net/1721.1/7193</link>
<description>Example Based Learning for View-Based Human Face Detection
Sung, Kah Kay; Poggio, Tomaso
We present an example-based learning  approach for locating vertical frontal views of  human faces in complex scenes. The  technique models the distribution of human  face patterns by means of a few view-based  "face'' and "non-face'' prototype clusters. At  each image location, the local pattern is  matched against the distribution-based  model, and a trained classifier determines,  based on the local difference measurements,  whether or not a human face exists at the  current image location. We provide an  analysis that helps identify the critical  components of our system.
</description>
<pubDate>Tue, 24 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7193</guid>
<dc:date>1995-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Active Learning with Statistical Models</title>
<link>https://hdl.handle.net/1721.1/7192</link>
<description>Active Learning with Statistical Models
Cohn, David A.; Ghahramani, Zoubin; Jordan, Michael I.
For many types of learners one can compute  the statistically 'optimal' way to select data.  We review how these techniques have been  used with feedforward neural networks. We  then show how the same principles may be  used to select data for two alternative,  statistically-based learning architectures:  mixtures of Gaussians and locally weighted  regression. While the techniques for neural  networks are expensive and approximate, the  techniques for mixtures of Gaussians and  locally weighted regression are both efficient  and accurate.
</description>
<pubDate>Tue, 21 Mar 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7192</guid>
<dc:date>1995-03-21T00:00:00Z</dc:date>
</item>
<item>
<title>The Unsupervised Acquisition of a Lexicon from Continuous Speech</title>
<link>https://hdl.handle.net/1721.1/7191</link>
<description>The Unsupervised Acquisition of a Lexicon from Continuous Speech
Marcken, Carl de
We present an unsupervised learning  algorithm that acquires a natural-language  lexicon from raw speech. The algorithm is  based on the optimal encoding of symbol  sequences in an MDL framework, and uses a  hierarchical representation of language that  overcomes many of the problems that have  stymied previous grammar-induction  procedures. The forward mapping from  symbol sequences to the speech stream is  modeled using features based on articulatory  gestures. We present results on the  acquisition of lexicons and language models  from raw speech, text, and phonetic  transcripts, and demonstrate that our  algorithm compares very favorably to other  reported results with respect to segmentation  performance and statistical efficiency.
</description>
<pubDate>Thu, 18 Jan 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7191</guid>
<dc:date>1996-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>Vector-Based Integration of Local and Long-Range Information in Visual Cortex</title>
<link>https://hdl.handle.net/1721.1/7190</link>
<description>Vector-Based Integration of Local and Long-Range Information in Visual Cortex
Somers, David C.; Todorov, Emanuel V.; Siapas, Athanassios G.; Sur, Mriganka
Integration of inputs by cortical neurons provides the basis for the complex information processing performed in the cerebral cortex. Here, we propose a new analytic framework for understanding integration within cortical neuronal receptive fields. Based on the synaptic organization of cortex, we argue that neuronal integration is a systems-level process better studied in terms of local cortical circuitry than at the level of single neurons, and we present a method for constructing self-contained modules which capture (nonlinear) local circuit interactions. In this framework, receptive field elements naturally have a dual (rather than the traditional unitary) influence, since they drive both excitatory and inhibitory cortical neurons. This vector-based analysis, in contrast to scalar approaches, greatly simplifies integration by permitting linear summation of inputs from both "classical" and "extraclassical" receptive field regions. We illustrate this by explaining two complex visual cortical phenomena, which are incompatible with scalar notions of neuronal integration.
</description>
<pubDate>Thu, 18 Jan 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7190</guid>
<dc:date>1996-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks</title>
<link>https://hdl.handle.net/1721.1/7189</link>
<description>Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks
Jaakkola, Tommi S.; Saul, Lawrence K.; Jordan, Michael I.
Sigmoid type belief networks, a class of probabilistic neural networks, provide a natural framework for compactly representing probabilistic information in a variety of unsupervised and supervised learning problems. Often the parameters used in these networks need to be learned from examples. Unfortunately, estimating the parameters via exact probabilistic calculations (i.e., the EM algorithm) is intractable even for networks with fairly small numbers of hidden units. We propose to avoid the infeasibility of the E step by bounding likelihoods instead of computing them exactly. We introduce extended and complementary representations for these networks and show that the estimation of the network parameters can be made fast (reduced to quadratic optimization) by performing the estimation in either of the alternative domains. The complementary networks can be used for continuous density estimation as well.
</description>
<pubDate>Fri, 09 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7189</guid>
<dc:date>1996-02-09T00:00:00Z</dc:date>
</item>
<item>
<title>Factorial Hidden Markov Models</title>
<link>https://hdl.handle.net/1721.1/7188</link>
<description>Factorial Hidden Markov Models
Ghahramani, Zoubin; Jordan, Michael I.
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
</description>
<pubDate>Fri, 09 Feb 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7188</guid>
<dc:date>1996-02-09T00:00:00Z</dc:date>
</item>
<item>
<title>Model-Based Matching of Line Drawings by Linear Combinations of Prototypes</title>
<link>https://hdl.handle.net/1721.1/7187</link>
<description>Model-Based Matching of Line Drawings by Linear Combinations of Prototypes
Jones, Michael J.; Poggio, Tomaso
We describe a technique for finding pixelwise correspondences between two images by using models of objects of the same class to guide the search. The object models are 'learned' from example images (also called prototypes) of an object class. The models consist of a linear combination of prototypes. The flow fields giving pixelwise correspondences between a base prototype and each of the other prototypes must be given. A novel image of an object of the same class is matched to a model by minimizing an error between the novel image and the current guess for the closest model image. Currently, the algorithm applies to line drawings of objects. An extension to real grey level images is discussed.
</description>
<pubDate>Thu, 18 Jan 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7187</guid>
<dc:date>1996-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Networks</title>
<link>https://hdl.handle.net/1721.1/7186</link>
<description>Neural Networks
Jordan, Michael I.; Bishop, Christopher M.
We present an overview of current research  on artificial neural networks, emphasizing a  statistical perspective. We view neural  networks as parameterized graphs that make  probabilistic assumptions about data, and  view learning algorithms as methods for  finding parameter values that look probable in  the light of the data. We discuss basic issues  in representation and learning, and treat  some of the practical issues that arise in  fitting networks to data. We also discuss links  between neural networks and the general  formalism of graphical models.
</description>
<pubDate>Wed, 13 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7186</guid>
<dc:date>1996-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Independence Networks for Hidden Markov Probability Models</title>
<link>https://hdl.handle.net/1721.1/7185</link>
<description>Probabilistic Independence Networks for Hidden Markov Probability Models
Smyth, Padhraic; Heckerman, David; Jordan, Michael
Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.
</description>
<pubDate>Wed, 13 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7185</guid>
<dc:date>1996-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Linear, Sparse, Factorial Codes</title>
<link>https://hdl.handle.net/1721.1/7184</link>
<description>Learning Linear, Sparse, Factorial Codes
Olshausen, Bruno A.
In previous work (Olshausen &amp; Field 1996), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed.
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7184</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-Based Matching by Linear Combinations of Prototypes</title>
<link>https://hdl.handle.net/1721.1/7183</link>
<description>Model-Based Matching by Linear Combinations of Prototypes
Jones, Michael J.; Poggio, Tomaso
We describe a method for modeling object classes (such as faces) using 2D example images and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. Thus a model consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications including the computation of correspondence between novel images of a certain known class, object recognition, image synthesis and image compression.
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7183</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Image Based Rendering Using Algebraic Techniques</title>
<link>https://hdl.handle.net/1721.1/7182</link>
<description>Image Based Rendering Using Algebraic Techniques
Evgeniou, Theodoros
This paper presents an image-based rendering system using algebraic relations between different views of an object. The system uses pictures of an object taken from known positions. Given three such images it can generate "virtual" ones as the object would look from any position near the ones that the input images were taken from. The extrapolation from the example images can be up to about 60 degrees of rotation. The system is based on the trilinear constraints that bind any three views of an object. As a side result, we propose two new methods for camera calibration. We developed and used one of them. We implemented the system and tested it on real images of objects and faces. We also show experimentally that even when only two images taken from unknown positions are given, the system can be used to render the object from other viewpoints as long as we have a good estimate of the internal parameters of the camera used and we are able to find good correspondence between the example images. In addition, we present the relation between these algebraic constraints and a factorization method for shape and motion estimation. As a result we propose a method for motion estimation in the special case of orthographic projection.
</description>
<pubDate>Fri, 01 Nov 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7182</guid>
<dc:date>1996-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model Selection in Summary Evaluation</title>
<link>https://hdl.handle.net/1721.1/7181</link>
<description>Model Selection in Summary Evaluation
Perez-Breva, Luis; Yoshimi, Osamu
A difficulty in the design of automated text summarization algorithms is in the objective evaluation. Viewing summarization as a tradeoff between length and information content, we introduce a technique based on a hierarchy of classifiers to rank, through model selection, different summarization methods. This summary evaluation technique allows for broader comparison of summarization methods than the traditional techniques of summary evaluation. We present an empirical study of two simple, albeit widely used, summarization methods that shows the different usages of this automated task-based evaluation system and confirms the results obtained with human-based evaluation methods over smaller corpora.
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7181</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers</title>
<link>https://hdl.handle.net/1721.1/7180</link>
<description>Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers
Schoelkopf, B.; Sung, K.; Burges, C.; Girosi, F.; Niyogi, P.; Poggio, Tomaso A; Vapnik, V.
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold such as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by $k$-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US postal service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7180</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Image-Based View Synthesis</title>
<link>https://hdl.handle.net/1721.1/7179</link>
<description>Image-Based View Synthesis
Avidan, Shai; Evgeniou, Theodoros; Shashua, Amnon; Poggio, Tomaso
We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position is significantly far away from the viewing cone of the example images ("view extrapolation"), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, thus demonstrating the ability of representing an object by a relatively small number of model images --- for the purpose of cheap and fast viewers that can run on standard hardware.
</description>
<pubDate>Wed, 01 Jan 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7179</guid>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Detailed Look at Scale and Translation Invariance in a Hierarchical Neural Model of Visual Object Recognition</title>
<link>https://hdl.handle.net/1721.1/7178</link>
<description>A Detailed Look at Scale and Translation Invariance in a Hierarchical Neural Model of Visual Object Recognition
Schneider, Robert; Riesenhuber, Maximilian
The HMAX model has recently been proposed by Riesenhuber &amp; Poggio as a hierarchical model of position- and size-invariant object recognition in visual cortex. It has also turned out to model successfully a number of other properties of the ventral visual stream (the visual pathway thought to be crucial for object recognition in cortex), and particularly of (view-tuned) neurons in macaque inferotemporal cortex, the brain area at the top of the ventral stream. The original modeling study only used ``paperclip'' stimuli, as in the corresponding physiology experiment, and did not explore systematically how model units' invariance properties depended on model parameters. In this study, we aimed at a deeper understanding of the inner workings of HMAX and its performance for various parameter settings and ``natural'' stimulus classes. We examined HMAX responses for different stimulus sizes and positions systematically and found a dependence of model units' responses on stimulus position for which a quantitative description is offered. Interestingly, we find that scale invariance properties of hierarchical neural models are not independent of stimulus class, as opposed to translation invariance, even though both are affine transformations within the image plane.
</description>
<pubDate>Thu, 01 Aug 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7178</guid>
<dc:date>2002-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A View on Dyslexia</title>
<link>https://hdl.handle.net/1721.1/7177</link>
<description>A View on Dyslexia
Geiger, Gad; Lettvin, Jerome Y.
We describe here, briefly, a perceptual non-reading measure which reliably distinguishes between dyslexic persons and ordinary readers. More importantly, we describe a regimen of practice with which dyslexics learn a new perceptual strategy for reading. Two controlled experiments on dyslexic children demonstrate the regimen's efficacy.
</description>
<pubDate>Sun, 01 Jun 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7177</guid>
<dc:date>1997-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Triangulation by Continuous Embedding</title>
<link>https://hdl.handle.net/1721.1/7176</link>
<description>Triangulation by Continuous Embedding
Meila, Marina; Jordan, Michael I.
When triangulating a belief network we aim to obtain a junction tree of minimum state space. Searching for the optimal triangulation can be cast as a search over all the permutations of the network's variables. Our approach is to embed the discrete set of permutations in a convex continuous domain D. By suitably extending the cost function over D and solving the continuous nonlinear optimization task we hope to obtain a good triangulation with respect to the aforementioned cost. In this paper we introduce an upper bound to the total junction tree weight as the cost function. The appropriateness of this choice is discussed and explored by simulations. Then we present two ways of embedding the new objective function into continuous domains and show that they perform well compared to the best known heuristic.
</description>
<pubDate>Sat, 01 Mar 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7176</guid>
<dc:date>1997-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pre-Attentive Segmentation in the Primary Visual Cortex</title>
<link>https://hdl.handle.net/1721.1/7175</link>
<description>Pre-Attentive Segmentation in the Primary Visual Cortex
Li, Zhaoping
Stimuli outside classical receptive fields have been shown to exert significant influence over the activities of neurons in primary visual cortex. We propose that contextual influences are used for pre-attentive visual segmentation, in a new framework called segmentation without classification. This means that segmentation of an image into regions occurs without classification of features within a region or comparison of features between regions. This segmentation framework is simpler than previous computational approaches, making it implementable by V1 mechanisms, though higher level visual mechanisms are needed to refine its output. However, it easily handles a class of segmentation problems that are tricky in conventional methods. The cortex computes global region boundaries by detecting the breakdown of homogeneity or translation invariance in the input, using local intra-cortical interactions mediated by the horizontal connections. The difference between contextual influences near and far from region boundaries makes neural activities near region boundaries higher than elsewhere, making boundaries more salient for perceptual pop-out. This proposal is implemented in a biologically based model of V1, and demonstrated using examples of texture segmentation and figure-ground segregation. The model performs segmentation in exactly the same neural circuit that solves the dual problem of the enhancement of contours, as is suggested by experimental observations. Its behavior is compared with psychophysical and physiological data on segmentation, contour enhancement, and contextual influences. We discuss the implications of segmentation without classification and the predictions of our V1 model, and relate it to other phenomena such as asymmetry in visual search.
</description>
<pubDate>Tue, 30 Jun 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7175</guid>
<dc:date>1998-06-30T00:00:00Z</dc:date>
</item>
<item>
<title>Information Dissemination and Aggregation in Asset Markets with Simple Intelligent Traders</title>
<link>https://hdl.handle.net/1721.1/7174</link>
<description>Information Dissemination and Aggregation in Asset Markets with Simple Intelligent Traders
Chan, Nicholas; LeBaron, Blake; Lo, Andrew; Poggio, Tomaso
Various studies of asset markets have shown that traders are capable of learning and transmitting information through prices in many situations. In this paper we replace human traders with intelligent software agents in a series of simulated markets. Using these simple learning agents, we are able to replicate several features of the experiments with human subjects, regarding (1) dissemination of information from informed to uninformed traders, and (2) aggregation of information spread over different traders.
</description>
<pubDate>Tue, 01 Sep 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7174</guid>
<dc:date>1998-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Trainable Object Detection System: Car Detection in Static Images</title>
<link>https://hdl.handle.net/1721.1/7173</link>
<description>A Trainable Object Detection System: Car Detection in Static Images
Papageorgiou, Constantine P.; Poggio, Tomaso
This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning based approach that uses a set of labeled training data from which an implicit model of an object class -- here, cars -- is learned. Instead of pixel representations that may be noisy and therefore not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets that respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and show an ROC curve that highlights the performance of our system.
</description>
<pubDate>Wed, 13 Oct 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7173</guid>
<dc:date>1999-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-Based Approach to Real Time Tracking and Analysis of Faces</title>
<link>https://hdl.handle.net/1721.1/7172</link>
<description>Learning-Based Approach to Real Time Tracking and Analysis of Faces
Kumar, Vinay P.; Poggio, Tomaso
This paper describes a trainable system capable of tracking faces and facial features like eyes and nostrils and estimating basic mouth features such as degrees of openness and smile in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Similarly, unlike previous approaches this system is entirely trained using examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVM) for classification in the stages of skin segmentation, face detection and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
</description>
<pubDate>Thu, 23 Sep 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7172</guid>
<dc:date>1999-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>Rotation Invariant Real-time Face Detection and Recognition System</title>
<link>https://hdl.handle.net/1721.1/7171</link>
<description>Rotation Invariant Real-time Face Detection and Recognition System
Ho, Purdy
In this report, a face recognition system that is capable of detecting and recognizing frontal and rotated faces was developed. Two face recognition methods focusing on the aspect of pose invariance are presented and evaluated - the whole-face approach and the component-based approach. The main challenge of this project is to develop a system that is able to identify faces under different viewing angles in real time. The development of such a system will enhance the capability and robustness of current face recognition technology. The whole-face approach recognizes faces by classifying a single feature vector consisting of the gray values of the whole face image. The component-based approach first locates the facial components and extracts them. These components are normalized and combined into a single feature vector for classification. The Support Vector Machine (SVM) is used as the classifier for both approaches. Extensive tests with respect to the robustness against pose changes are performed on a database that includes faces rotated up to about 40 degrees in depth. The component-based approach clearly outperforms the whole-face approach on all tests. Although this approach is proven to be more reliable, it is still too slow for real-time applications. That is the reason why a real-time face recognition system using the whole-face approach is implemented to recognize people in color video sequences.
</description>
<pubDate>Thu, 31 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7171</guid>
<dc:date>2001-05-31T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on Object Class Representation and Categorical Perception</title>
<link>https://hdl.handle.net/1721.1/7170</link>
<description>A Note on Object Class Representation and Categorical Perception
Riesenhuber, Maximilian; Poggio, Tomaso
We present a novel scheme ("Categorical Basis Functions", CBF) for object class representation in the brain and contrast it to the "Chorus of Prototypes" scheme recently proposed by Edelman. The power and flexibility of CBF is demonstrated in two examples. CBF is then applied to investigate the phenomenon of Categorical Perception, in particular the finding by Bulthoff et al. (1998) of categorization of faces by gender without corresponding Categorical Perception. Here, CBF makes predictions that can be tested in a psychophysical experiment. Finally, experiments are suggested to further test CBF.
</description>
<pubDate>Fri, 17 Dec 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7170</guid>
<dc:date>1999-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on the Generalization Performance of Kernel Classifiers with Margin</title>
<link>https://hdl.handle.net/1721.1/7169</link>
<description>A Note on the Generalization Performance of Kernel Classifiers with Margin
Evgeniou, Theodoros; Pontil, Massimiliano
We present distribution independent bounds on the generalization misclassification performance of a family of kernel classifiers with margin. Support Vector Machine (SVM) classifiers stem from this class of machines. The bounds are derived through computations of the $V_\gamma$ dimension of a family of loss functions to which the SVM loss belongs. Bounds that use functions of margin distributions (i.e. functions of the slack variables of SVM) are derived.
</description>
<pubDate>Mon, 01 May 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7169</guid>
<dc:date>2000-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Grounded Abstractions for Artificial Intelligence Programming</title>
<link>https://hdl.handle.net/1721.1/7116</link>
<description>Building Grounded Abstractions for Artificial Intelligence Programming
Hearn, Robert A.
Most Artificial Intelligence (AI) work can be characterized as either ``high-level'' (e.g., logical, symbolic) or ``low-level'' (e.g., connectionist networks, behavior-based robotics). Each approach suffers from particular drawbacks. High-level AI uses abstractions that often have no relation to the way real, biological brains work. Low-level AI, on the other hand, tends to lack the powerful abstractions that are needed to express complex structures and relationships. I have tried to combine the best features of both approaches, by building a set of programming abstractions defined in terms of simple, biologically plausible components. At the ``ground level'', I define a primitive, perceptron-like computational unit. I then show how more abstract computational units may be implemented in terms of the primitive units, and show the utility of the abstract units in sample networks. The new units make it possible to build networks using concepts such as long-term memories, short-term memories, and frames. As a demonstration of these abstractions, I have implemented a simulator for ``creatures'' controlled by a network of abstract units. The creatures exist in a simple 2D world, and exhibit behaviors such as catching mobile prey and sorting colored blocks into matching boxes. This program demonstrates that it is possible to build systems that can interact effectively with a dynamic physical environment, yet use symbolic representations to control aspects of their behavior.
</description>
<pubDate>Wed, 16 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7116</guid>
<dc:date>2004-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>BioJADE: A Design and Simulation Tool for Synthetic Biological Systems</title>
<link>https://hdl.handle.net/1721.1/7115</link>
<description>BioJADE: A Design and Simulation Tool for Synthetic Biological Systems
Goler, Jonathan A.
The next generations of both biological engineering and computer engineering demand that control be exerted at the molecular level. Creating, characterizing and controlling synthetic biological systems may provide us with the ability to build cells that are capable of a plethora of activities, from computation to synthesizing nanostructures. To develop these systems, we must have a set of tools not only for synthesizing systems, but also designing and simulating them. The BioJADE project provides a comprehensive, extensible design and simulation platform for synthetic biology. BioJADE is a graphical design tool built in Java, utilizing a database back end, and supports a range of simulations using an XML communication protocol. BioJADE currently supports a library of over 100 parts with which it can compile designs into actual DNA, and then generate synthesis instructions to build the physical parts. The BioJADE project contributes several tools to Synthetic Biology. BioJADE in itself is a powerful tool for synthetic biology designers. Additionally, we developed and now make use of a centralized BioBricks repository, which enables the sharing of BioBrick components between researchers, and vastly reduces the barriers to entry for aspiring Synthetic Biologists.
</description>
<pubDate>Fri, 28 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7115</guid>
<dc:date>2004-05-28T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Commonsense Categorical Knowledge in a Thread Memory System</title>
<link>https://hdl.handle.net/1721.1/7114</link>
<description>Learning Commonsense Categorical Knowledge in a Thread Memory System
Stamatoiu, Oana L.
If we are to understand how we can build machines capable of broad purpose learning and reasoning, we must first aim to build systems that can represent, acquire, and reason about the kinds of commonsense knowledge that we humans have about the world. This endeavor suggests steps such as identifying the kinds of knowledge people commonly have about the world, constructing suitable knowledge representations, and exploring the mechanisms that people use to make judgments about the everyday world. In this work, I contribute to these goals by proposing an architecture for a system that can learn commonsense knowledge about the properties and behavior of objects in the world. The architecture described here augments previous machine learning systems in four ways: (1) it relies on a seven dimensional notion of context, built from information recently given to the system, to learn and reason about objects' properties; (2) it has multiple methods that it can use to reason about objects, so that when one method fails, it can fall back on others; (3) it illustrates the usefulness of reasoning about objects by thinking about their similarity to other, better known objects, and by inferring properties of objects from the categories that they belong to; and (4) it represents an attempt to build an autonomous learner and reasoner, that sets its own goals for learning about the world and deduces new facts by reflecting on its acquired knowledge. This thesis describes this architecture, as well as a first implementation, that can learn from sentences such as ``A blue bird flew to the tree'' and ``The small bird flew to the cage'' that birds can fly. One of the main contributions of this work lies in suggesting a further set of salient ideas about how we can build broader purpose commonsense artificial learners and reasoners.
</description>
<pubDate>Tue, 18 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7114</guid>
<dc:date>2004-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Generative Temporal Planning with Complex Processes</title>
<link>https://hdl.handle.net/1721.1/7113</link>
<description>Generative Temporal Planning with Complex Processes
Kennell, Jonathan
Autonomous vehicles are increasingly being used in mission-critical applications, and robust methods are needed for controlling these inherently unreliable and complex systems. This thesis advocates the use of model-based programming, which allows mission designers to program autonomous missions at the level of a coach or wing commander. To support such a system, this thesis presents the Spock generative planner. To generate plans, Spock must be able to piece together vehicle commands and team tactics that have a complex behavior represented by concurrent processes. This is in contrast to traditional planners, whose operators represent simple atomic or durative actions. Spock represents operators using the RMPL language, which describes behaviors using parallel and sequential compositions of state and activity episodes. RMPL is useful for controlling mobile autonomous missions because it allows mission designers to quickly encode expressive activity models using object-oriented design methods and an intuitive set of activity combinators. Spock also is significant in that it uniformly represents operators and plan-space processes in terms of Temporal Plan Networks, which support temporal flexibility for robust plan execution. Finally, Spock is implemented as a forward progression optimal planner that walks monotonically forward through plan processes, closing any open conditions and resolving any conflicts. This thesis describes the Spock algorithm in detail, along with example problems and test results.
</description>
<pubDate>Tue, 18 May 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7113</guid>
<dc:date>2004-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Fluorescence Assay for Polymerase Arrival Rates</title>
<link>https://hdl.handle.net/1721.1/7112</link>
<description>Fluorescence Assay for Polymerase Arrival Rates
Che, Austin
To engineer complex synthetic biological systems will require modular design, assembly, and characterization strategies. The RNA polymerase arrival rate (PAR) is defined to be the rate that RNA polymerases arrive at a specified location on the DNA. Designing and characterizing biological modules in terms of RNA polymerase arrival rates provides for many advantages in the construction and modeling of biological systems.  PARMESAN is an in vitro method for measuring polymerase arrival rates using pyrrolo-dC, a fluorescent DNA base that can substitute for cytosine. Pyrrolo-dC shows a detectable fluorescence difference when in single-stranded versus double-stranded DNA. During transcription, RNA polymerase separates the two strands of DNA, leading to a change in the fluorescence of pyrrolo-dC. By incorporating pyrrolo-dC at specific locations in the DNA, fluorescence changes can be taken as a direct measurement of the polymerase arrival rate.
</description>
<pubDate>Sun, 31 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7112</guid>
<dc:date>2003-08-31T00:00:00Z</dc:date>
</item>
<item>
<title>Representation and Detection of Shapes in Images</title>
<link>https://hdl.handle.net/1721.1/7111</link>
<description>Representation and Detection of Shapes in Images
Felzenszwalb, Pedro F.
We present a set of techniques that can be used to represent and detect shapes in images. Our methods revolve around a particular shape representation based on the description of objects using triangulated polygons. This representation is similar to the medial axis transform and has important properties from a computational perspective. The first problem we consider is the detection of non-rigid objects in images using deformable models. We present an efficient algorithm to solve this problem in a wide range of situations, and show examples in both natural and medical images. We also consider the problem of learning an accurate non-rigid shape model for a class of objects from examples. We show how to learn good models while constraining them to the form required by the detection algorithm. Finally, we consider the problem of low-level image segmentation and grouping. We describe a stochastic grammar that generates arbitrary triangulated polygons while capturing Gestalt principles of shape regularity. This grammar is used as a prior model over random shapes in a low level algorithm that detects objects in images.
</description>
<pubDate>Fri, 08 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7111</guid>
<dc:date>2003-08-08T00:00:00Z</dc:date>
</item>
<item>
<title>Compact Representations for Fast Nonrigid Registration of Medical Images</title>
<link>https://hdl.handle.net/1721.1/7110</link>
<description>Compact Representations for Fast Nonrigid Registration of Medical Images
Timoner, Samson
We develop efficient techniques for the non-rigid registration of medical images by using representations that adapt to the anatomy found in such images. Images of anatomical structures typically have uniform intensity interiors and smooth boundaries. We create methods to represent such regions compactly using tetrahedra. Unlike voxel-based representations, tetrahedra can accurately describe the expected smooth surfaces of medical objects. Furthermore, the interior of such objects can be represented using a small number of tetrahedra. Rather than describing a medical object using tens of thousands of voxels, our representations generally contain only a few thousand elements. Tetrahedra facilitate the creation of efficient non-rigid registration algorithms based on finite element methods (FEM). We create a fast, FEM-based method to non-rigidly register segmented anatomical structures from two subjects. Using our compact tetrahedral representations, this method generally requires less than one minute of processing time on a desktop PC. We also create a novel method for the non-rigid registration of gray scale images. To facilitate a fast method, we create a tetrahedral representation of a displacement field that automatically adapts to both the anatomy in an image and to the displacement field. The resulting algorithm has a computational cost that is dominated by the number of nodes in the mesh (about 10,000), rather than the number of voxels in an image (nearly 10,000,000). For many non-rigid registration problems, we can find a transformation from one image to another in five minutes. This speed is important as it allows use of the algorithm during surgery. We apply our algorithms to find correlations between the shape of anatomical structures and the presence of schizophrenia. We show that a study based on our representations outperforms studies based on other representations.
We also use the results of our non-rigid registration algorithm as the basis of a segmentation algorithm. That algorithm also outperforms other methods in our tests, producing smoother segmentations and more accurately reproducing manual segmentations.
</description>
<pubDate>Fri, 04 Jul 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7110</guid>
<dc:date>2003-07-04T00:00:00Z</dc:date>
</item>
<item>
<title>Gait Analysis for Classification</title>
<link>https://hdl.handle.net/1721.1/7109</link>
<description>Gait Analysis for Classification
Lee, Lily
This thesis describes a representation of gait appearance for the purpose of person identification and classification. This gait representation is based on simple localized image features such as moments extracted from orthogonal view video silhouettes of human walking motion. A suite of time-integration methods, spanning a range of coarseness of time aggregation and modeling of feature distributions, is applied to these image features to create a suite of gait sequence representations. Despite their simplicity, the resulting feature vectors contain enough information to perform well on human identification and gender classification tasks. We demonstrate the accuracy of recognition on gait video sequences collected over different days and times and under varying lighting environments. Each of the integration methods is investigated for its advantages and disadvantages. An improved gait representation is built based on our experiences with the initial set of gait representations. In addition, we show gender classification results using our gait appearance features, the effect of our heuristic feature selection method, and the significance of individual features.
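The localized moment features mentioned above can be illustrated with a toy computation; the function below is a hypothetical sketch (not the thesis code) that computes the centroid and second-order moments of a binary silhouette region:

```python
def silhouette_moments(mask):
    """Centroid and second-order moments of a binary silhouette.

    mask: 2D list of 0/1 values (a toy stand-in for a video silhouette).
    Returns (mean_x, mean_y, var_x, var_y) -- the kind of localized
    region features a gait representation can aggregate over time.
    """
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    vx = sum((x - mx) ** 2 for x, _ in pts) / n
    vy = sum((y - my) ** 2 for _, y in pts) / n
    return mx, my, vx, vy
```

In a gait pipeline, such per-frame features would then be integrated over a walking sequence at varying time scales.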
</description>
<pubDate>Thu, 26 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7109</guid>
<dc:date>2003-06-26T00:00:00Z</dc:date>
</item>
<item>
<title>Teaching an Old Robot New Tricks: Learning Novel Tasks via Interaction with People and Things</title>
<link>https://hdl.handle.net/1721.1/7108</link>
<description>Teaching an Old Robot New Tricks: Learning Novel Tasks via Interaction with People and Things
Marjanovic, Matthew J.
As AI has begun to reach out beyond its symbolic, objectivist roots into the embodied, experientialist realm, many projects are exploring different aspects of creating machines which interact with and respond to the world as humans do. Techniques for visual processing, object recognition, emotional response, gesture production and recognition, etc., are necessary components of a complete humanoid robot. However, most projects invariably concentrate on developing a few of these individual components, neglecting the issue of how all of these pieces would eventually fit together. The focus of the work in this dissertation is on creating a framework into which such specific competencies can be embedded, in a way that they can interact with each other and build layers of new functionality. To be of any practical value, such a framework must satisfy the real-world constraints of functioning in real-time with noisy sensors and actuators. The humanoid robot Cog provides an unapologetically adequate platform from which to take on such a challenge. This work makes three contributions to embodied AI. First, it offers a general-purpose architecture for developing behavior-based systems distributed over networks of PCs. Second, it provides a motor-control system that simulates several biological features which impact the development of motor behavior. Third, it develops a framework for a system which enables a robot to learn new behaviors via interacting with itself and the outside world. A few basic functional modules are built into this framework, enough to demonstrate the robot learning some very simple behaviors taught by a human trainer. A primary motivation for this project is the notion that it is practically impossible to build an "intelligent" machine unless it is designed partly to build itself. This work is a proof-of-concept of such an approach to integrating multiple perceptual and motor systems into a complete learning agent.
</description>
<pubDate>Fri, 20 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7108</guid>
<dc:date>2003-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Online Learning of Non-stationary Sequences</title>
<link>https://hdl.handle.net/1721.1/7107</link>
<description>Online Learning of Non-stationary Sequences
Monteleoni, Claire
We consider an online learning scenario in which the learner can make predictions on the basis of a fixed set of experts. The performance of each expert may change over time in a manner unknown to the learner. We formulate a class of universal learning algorithms for this problem by expressing them as simple Bayesian algorithms operating on models analogous to Hidden Markov Models (HMMs). We derive a new performance bound for such algorithms which is considerably simpler than existing bounds. The bound provides the basis for learning the rate at which the identity of the optimal expert switches over time. We find an analytic expression for the a priori resolution at which we need to learn the rate parameter. We extend our scalar switching-rate result to models of the switching-rate that are governed by a matrix of parameters, i.e. arbitrary homogeneous HMMs. We apply and examine our algorithm in the context of the problem of energy management in wireless networks. We analyze the new results in the framework of Information Theory.
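The Bayesian expert-switching update described above can be sketched, under an assumed exponential-of-loss likelihood, as a fixed-share-style step with switching rate alpha; the function and argument names below are illustrative, not the thesis's algorithm:

```python
import math

def fixed_share_update(weights, losses, alpha):
    """One round of Bayesian updating over experts.

    weights: current posterior over experts (sums to 1)
    losses:  per-expert losses for this round (assumed log-loss proxies)
    alpha:   assumed probability that the identity of the best expert
             switches to another expert between rounds
    """
    n = len(weights)
    # Multiplicative (Bayesian) update by each expert's likelihood.
    updated = [w * math.exp(-l) for w, l in zip(weights, losses)]
    z = sum(updated)
    updated = [u / z for u in updated]
    # HMM transition step: with probability alpha the best expert
    # jumps uniformly to one of the other n-1 experts.
    return [(1 - alpha) * u + alpha * (1 - u) / (n - 1) for u in updated]
```

Learning the rate parameter, as in the thesis, amounts to maintaining such updates over a discretized set of alpha values and weighting them by their own predictive performance.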
</description>
<pubDate>Thu, 12 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7107</guid>
<dc:date>2003-06-12T00:00:00Z</dc:date>
</item>
<item>
<title>Safe Distributed Coordination of Heterogeneous Robots through Dynamic Simple Temporal Networks</title>
<link>https://hdl.handle.net/1721.1/7106</link>
<description>Safe Distributed Coordination of Heterogeneous Robots through Dynamic Simple Temporal Networks
Wehowsky, Andreas F.
Research on autonomous intelligent systems has focused on how robots can robustly carry out missions in uncertain and harsh environments with very little or no human intervention. Robotic execution languages such as RAPs, ESL, and TDL improve robustness by managing functionally redundant procedures for achieving goals. The model-based programming approach extends this by guaranteeing correctness of execution through pre-planning of non-deterministic timed threads of activities. Executing model-based programs effectively on distributed autonomous platforms requires distributing this pre-planning process. This thesis presents a distributed planner for model-based programs whose planning and execution is distributed among agents with widely varying levels of processor power and memory resources. We make two key contributions. First, we reformulate a model-based program, which describes cooperative activities, into a hierarchical dynamic simple temporal network. This enables efficient distributed coordination of robots and supports deployment on heterogeneous robots. Second, we introduce a distributed temporal planner, called DTP, which solves hierarchical dynamic simple temporal networks with the assistance of the distributed Bellman-Ford shortest path algorithm. The implementation of DTP has been demonstrated successfully on a wide range of randomly generated examples and on a pursuer-evader challenge problem in simulation.
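The core check behind such planners is standard: a simple temporal network is consistent exactly when its distance graph contains no negative cycle, which Bellman-Ford detects. A minimal non-distributed sketch (hypothetical names; the thesis distributes this computation across agents):

```python
def stn_consistent(num_events, constraints):
    """Check consistency of a simple temporal network.

    constraints: list of (u, v, w) meaning t_v - t_u <= w, i.e. an edge
    u -> v of weight w in the distance graph. A negative cycle in this
    graph means the constraints are unsatisfiable.
    """
    dist = [0.0] * num_events          # virtual source at distance 0
    for _ in range(num_events - 1):    # standard Bellman-Ford relaxation
        for u, v, w in constraints:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any possible improvement implies a negative cycle.
    return all(dist[u] + w >= dist[v] for u, v, w in constraints)
```

For example, the pair t1 - t0 <= 10 and t0 - t1 <= -5 (i.e. 5 <= t1 - t0 <= 10) is consistent, while t1 - t0 <= 5 together with t1 - t0 >= 10 is not.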
</description>
<pubDate>Fri, 30 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7106</guid>
<dc:date>2003-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>From First Contact to Close Encounters: A Developmentally Deep Perceptual System for a Humanoid Robot</title>
<link>https://hdl.handle.net/1721.1/7105</link>
<description>From First Contact to Close Encounters: A Developmentally Deep Perceptual System for a Humanoid Robot
Fitzpatrick, Paul
This thesis presents a perceptual system for a humanoid robot that integrates abilities such as object localization and recognition with the deeper developmental machinery required to forge those competences out of raw physical experiences. It shows that a robotic platform can build up and maintain a system for object localization, segmentation, and recognition, starting from very little. What the robot starts with is a direct solution to achieving figure/ground separation: it simply 'pokes around' in a region of visual ambiguity and watches what happens. If the arm passes through an area, that area is recognized as free space. If the arm collides with an object, causing it to move, the robot can use that motion to segment the object from the background. Once the robot can acquire reliable segmented views of objects, it learns from them, and from then on recognizes and segments those objects without further contact. Both low-level and high-level visual features can also be learned in this way, and examples are presented for both: orientation detection and affordance recognition, respectively. The motivation for this work is simple. Training on large corpora of annotated real-world data has proven crucial for creating robust solutions to perceptual problems such as speech recognition and face detection. But the powerful tools used during training of such systems are typically stripped away at deployment. Ideally they should remain, particularly for unstable tasks such as object detection, where the set of objects needed in a task tomorrow might be different from the set of objects needed today. The key limiting factor is access to training data, but as this thesis shows, that need not be a problem on a robotic platform that can actively probe its environment, and carry out experiments to resolve ambiguity.
This work is an instance of a general approach to learning a new perceptual judgment: find special situations in which the perceptual judgment is easy and study these situations to find correlated features that can be observed more generally.
</description>
<pubDate>Sun, 01 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7105</guid>
<dc:date>2003-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Statistical Image-Based Shape Model for Visual Hull Reconstruction and 3D Structure Inference</title>
<link>https://hdl.handle.net/1721.1/7104</link>
<description>A Statistical Image-Based Shape Model for Visual Hull Reconstruction and 3D Structure Inference
Grauman, Kristen
We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes are then estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We show how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process. The proposed method is applied to a data set of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown. We further augment the shape model to incorporate structural features of interest; unknown structural parameters for a novel set of contours are then inferred via the Bayesian reconstruction process. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a data set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
</description>
<pubDate>Thu, 22 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7104</guid>
<dc:date>2003-05-22T00:00:00Z</dc:date>
</item>
<item>
<title>Segmentation and Alignment of Speech and Sketching in a Design Environment</title>
<link>https://hdl.handle.net/1721.1/7103</link>
<description>Segmentation and Alignment of Speech and Sketching in a Design Environment
Adler, Aaron D.
Sketches are commonly used in the early stages of design. Our previous system allows users to sketch mechanical systems that the computer interprets. However, some parts of the mechanical system might be too hard or too complicated to express in the sketch. Adding speech recognition to create a multimodal system would move us toward our goal of creating a more natural user interface. This thesis examines the relationship between the verbal and sketch input, particularly how to segment and align the two inputs. Toward this end, subjects were recorded while they sketched and talked. These recordings were transcribed, and a set of rules to perform segmentation and alignment was created. These rules represent the knowledge that the computer needs to perform segmentation and alignment. The rules successfully interpreted the 24 data sets that they were given.
</description>
<pubDate>Sat, 01 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7103</guid>
<dc:date>2003-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stereo-Based Head Pose Tracking Using Iterative Closest Point and Normal Flow Constraint</title>
<link>https://hdl.handle.net/1721.1/7102</link>
<description>Stereo-Based Head Pose Tracking Using Iterative Closest Point and Normal Flow Constraint
Morency, Louis-Philippe
In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and the normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
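The ICP half of the hybrid tracker can be illustrated in miniature: alternate nearest-neighbour correspondence with a closed-form rigid alignment. The 2D sketch below assumes a basic point-to-point variant with an SVD (Kabsch) alignment step; it is not the stereo tracker itself, and all names are hypothetical:

```python
import numpy as np

def best_rigid(src, dst):
    """Closed-form least-squares rotation+translation mapping src to dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Iterative closest point: re-estimate correspondences, then pose."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for each current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur
```

The normal flow constraint complements this by tying the pose update directly to image intensity gradients, which is what makes the hybrid more robust than either term alone.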
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7102</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reinforcement Learning by Policy Search</title>
<link>https://hdl.handle.net/1721.1/7101</link>
<description>Reinforcement Learning by Policy Search
Peshkin, Leonid
One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations can be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. This means learning a policy---a mapping of observations into actions---based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate controllers with memory, including controllers with external memory, finite state controllers and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience re-use. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
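Ascending the gradient of expected cumulative reinforcement can be illustrated on the smallest possible domain, a two-armed bandit with a softmax policy. This REINFORCE-style sketch uses hypothetical names and is not the dissertation's controller code:

```python
import math, random

def reinforce_bandit(rewards=(0.2, 0.8), steps=3000, lr=0.1, seed=0):
    """REINFORCE on a 2-armed Bernoulli bandit with a softmax policy.

    theta parametrises the policy; each step ascends the gradient of
    expected reward: theta_i += lr * r * d log pi(a) / d theta_i.
    """
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    for _ in range(steps):
        z = [math.exp(t) for t in theta]
        s = sum(z)
        pi = [x / s for x in z]
        a = 0 if rng.random() < pi[0] else 1          # sample action
        r = 1.0 if rng.random() < rewards[a] else 0.0  # Bernoulli reward
        # gradient of log softmax: 1[a == i] - pi[i]
        for i in range(2):
            theta[i] += lr * r * ((1.0 if i == a else 0.0) - pi[i])
    z = [math.exp(t) for t in theta]
    return [x / sum(z) for x in z]
```

With memory-equipped controllers, as in the thesis, the same gradient ascent operates over policies that also update an internal state, but the update rule has the same shape.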
</description>
<pubDate>Fri, 14 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7101</guid>
<dc:date>2003-02-14T00:00:00Z</dc:date>
</item>
<item>
<title>Using Analogy to Acquire Commonsense Knowledge from Human Contributors</title>
<link>https://hdl.handle.net/1721.1/7100</link>
<description>Using Analogy to Acquire Commonsense Knowledge from Human Contributors
Chklovski, Timothy
The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA) based on a theoretical analysis of effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate the effectiveness of cumulative analogy for KA empirically, Learner, an open source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. (The site "1001 Questions" is available at http://teach-computers.org/learner.html). Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that the assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information."
Because similarity between topics is computed based on what is already known about them, Learner exhibits bootstrapping behavior --- the quality of its questions improves as it gathers more knowledge. By summing evidence for and against posing any given question, Learner also exhibits noise tolerance, limiting the effect of incorrect similarities. The KA power of shallow semantic analogy from nearest neighbors is one of the main findings of this thesis. I perform an analysis of commonsense knowledge collected by another research effort that did not rely on analogical reasoning and demonstrate that there is indeed a sufficient amount of correlation in the knowledge base to motivate using cumulative analogy from nearest neighbors as a KA method. Empirically, the percentages of questions answered affirmatively, negatively, and judged to be nonsensical in the cumulative analogy case compare favorably with the baseline, no-similarity case that relies on random objects rather than nearest neighbors. Of the questions generated by cumulative analogy, contributors answered 45% affirmatively, 28% negatively and marked 13% as nonsensical; in the control, no-similarity case, 8% of questions were answered affirmatively, 60% negatively and 26% were marked as nonsensical.
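The newspaper/book/magazine example can be sketched with a crude set-overlap similarity standing in for Learner's shallow semantic similarity; every name below is hypothetical:

```python
def propose_questions(kb, topic, k=2):
    """Cumulative analogy: propose assertions about `topic` that hold
    for its nearest neighbours but are not yet known about it.

    kb: dict mapping topic -> set of known assertions.
    Similarity = number of shared assertions (a crude stand-in for the
    system's shallow semantic similarity).
    """
    known = kb.get(topic, set())
    sims = sorted(
        ((len(known & props), other) for other, props in kb.items()
         if other != topic),
        reverse=True,
    )
    votes = {}
    for _, other in sims[:k]:              # k nearest neighbours
        for assertion in kb[other] - known:
            votes[assertion] = votes.get(assertion, 0) + 1
    # cumulate evidence: best-supported candidate assertions first
    return sorted(votes, key=votes.get, reverse=True)
```

Here both neighbours ("book" and "magazine") vote for "contains information", so that is the question posed about "newspaper" first.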
</description>
<pubDate>Wed, 12 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7100</guid>
<dc:date>2003-02-12T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Subsystems in Biology through Dimensionality Reduction, Graph Partitioning and Analytical Modeling</title>
<link>https://hdl.handle.net/1721.1/7099</link>
<description>Understanding Subsystems in Biology through Dimensionality Reduction, Graph Partitioning and Analytical Modeling
Kim, Philip Mjong-Hyon Shin
Biological systems exhibit rich and complex behavior through the orchestrated interplay of a large array of components. It is hypothesized that separable subsystems with some degree of functional autonomy exist; deciphering their independent behavior and functionality would greatly facilitate understanding the system as a whole. Discovering and analyzing such subsystems are hence pivotal problems in the quest to gain a quantitative understanding of complex biological systems. In this work, using approaches from machine learning, physics and graph theory, methods for the identification and analysis of such subsystems were developed. A novel methodology, based on a recent machine learning algorithm known as non-negative matrix factorization (NMF), was developed to discover such subsystems in a set of large-scale gene expression data. This set of subsystems was then used to predict functional relationships between genes, and this approach was shown to score significantly higher than conventional methods when benchmarking them against existing databases. Moreover, a mathematical treatment was developed to treat simple network subsystems based only on their topology (independent of particular parameter values). Application to a problem of experimental interest demonstrated the need for extensions to the conventional model to fully explain the experimental data. Finally, the notion of a subsystem was evaluated from a topological perspective. A number of different protein networks were examined to analyze their topological properties with respect to separability, seeking to find separable subsystems. These networks were shown to exhibit separability in a nonintuitive fashion, while the separable subsystems were of strong biological significance. It was demonstrated that the separability property found was not due to incomplete or biased data, but is likely to reflect biological structure.
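The NMF step can be illustrated with the classic Lee-Seung multiplicative updates, a common way (though not necessarily the variant used in this work) to factor a non-negative expression matrix X into subsystem loadings W and subsystem profiles H:

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Lee-Seung multiplicative updates for X ~ W @ H (all non-negative).

    Rows of H play the role of 'subsystems' (e.g. metagenes); W gives
    each sample's loading on them. Updates keep W, H non-negative and
    monotonically reduce the Frobenius reconstruction error.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9                      # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On exactly low-rank non-negative data with a matching k, the factorization recovers the data closely; on real expression data the residual instead measures how well k subsystems explain the measurements.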
</description>
<pubDate>Wed, 05 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7099</guid>
<dc:date>2003-02-05T00:00:00Z</dc:date>
</item>
<item>
<title>Achieving Real-Time Mode Estimation through Offline Compilation</title>
<link>https://hdl.handle.net/1721.1/7098</link>
<description>Achieving Real-Time Mode Estimation through Offline Compilation
Van Eepoel, John M.
As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex. This increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times, and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components. This process is limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness, and makes explicit the underlying system models, whereas these are implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled in an off-line process into a set of concurrent, localized diagnostic rules. These are then combined on-line along with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME.
Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, resulting in detection of failures beyond that of the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
</description>
<pubDate>Tue, 22 Oct 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7098</guid>
<dc:date>2002-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Reflectance Recognition and Real-World Illumination Statistics</title>
<link>https://hdl.handle.net/1721.1/7097</link>
<description>Surface Reflectance Recognition and Real-World Illumination Statistics
Dror, Ron O.
Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting, it could mimic the appearance of a matte ping-pong ball. Yet, humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination.
For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
</description>
<pubDate>Tue, 01 Oct 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7097</guid>
<dc:date>2002-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>ADAM: A Decentralized Parallel Computer Architecture Featuring Fast Thread and Data Migration and a Uniform Hardware Abstraction</title>
<link>https://hdl.handle.net/1721.1/7096</link>
<description>ADAM: A Decentralized Parallel Computer Architecture Featuring Fast Thread and Data Migration and a Uniform Hardware Abstraction
Huang, Andrew "bunnie"
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative solution is to reduce latency by migrating threads and data, but the overhead of existing implementations has previously made migration an unserviceable solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7096</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Procedures as a Representation for Data in a Computer Program for Understanding Natural Language</title>
<link>https://hdl.handle.net/1721.1/7095</link>
<description>Procedures as a Representation for Data in a Computer Program for Understanding Natural Language
Winograd, Terry
This paper describes a system for the  computer understanding of English. The  system answers questions, executes  commands, and accepts information in  normal English dialog. It uses semantic  information and context to understand  discourse and to disambiguate sentences. It  combines a complete syntactic analysis of  each sentence with a "heuristic understander"  which uses different kinds of information  about a sentence, other parts of the  discourse, and general information about the  world in deciding what the sentence means. It  is based on the belief that a computer cannot  deal reasonably with language unless it can  "understand" the subject it is discussing. The  program is given a detailed model of the  knowledge needed by a simple robot having  only a hand and an eye. We can give it  instructions to manipulate toy objects,  interrogate it about the scene, and give it  information it will use in deduction. In addition  to knowing the properties of toy objects, the  program has a simple model of its own  mentality. It can remember and discuss its  plans and actions as well as carry them out. It  enters into a dialog with a person, responding  to English sentences with actions and  English replies, and asking for clarification  when its heuristic programs cannot  understand a sentence through use of context  and physical knowledge.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7095</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Analysis of Visual Properties of Curved Objects</title>
<link>https://hdl.handle.net/1721.1/7094</link>
<description>Computer Analysis of Visual Properties of Curved Objects
Krakauer, Lawrence J.
A method is presented for the visual analysis of objects by computer. It is particularly well suited for opaque objects with smoothly curved surfaces. The method extracts information about the object's surface properties, including measures of its specularity, texture, and regularity. It also aids in determining the object's shape. The application of this method to a simple recognition task, the recognition of fruit, is discussed. The results on a more complex smoothly curved object, a human face, are also considered.
</description>
<pubDate>Sat, 01 May 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7094</guid>
<dc:date>1971-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Reinforcement-Learning Approach to Power Management</title>
<link>https://hdl.handle.net/1721.1/7093</link>
<description>A Reinforcement-Learning Approach to Power Management
Steinbach, Carl
We describe an adaptive, mid-level approach to the wireless device power management problem. Our approach is based on reinforcement learning, a machine learning framework for autonomous agents. We describe how our framework can be applied to the power management problem in both infrastructure and ad hoc wireless networks. From this thesis we conclude that mid-level power management policies can outperform low-level policies and are more convenient to implement than high-level policies. We also conclude that power management policies need to adapt to the user and network, and that a mid-level power management framework based on reinforcement learning fulfills these requirements.
</description>
<pubDate>Wed, 01 May 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7093</guid>
<dc:date>2002-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Virtual Machine for a Type-omega Denotational Proof Language</title>
<link>https://hdl.handle.net/1721.1/7092</link>
<description>A Virtual Machine for a Type-omega Denotational Proof Language
Arvizo III, Teodoro
In this thesis, I designed and implemented a  virtual machine (VM) for a monomorphic  variant of Athena, a type-omega denotational  proof language (DPL). This machine  attempts to maintain the minimum state required to evaluate Athena phrases. This  thesis also includes the design and  implementation of a compiler for  monomorphic Athena that compiles to the VM.  Finally, it includes details on my  implementation of a read-eval-print loop that  glues together the VM core and the compiler  to provide a full, user-accessible  interface to monomorphic Athena. The Athena  VM provides the same basis for DPLs that the  SECD machine does for pure, functional  programming and the Warren Abstract Machine does for Prolog.
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7092</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconfigurable Architectures for General-Purpose Computing</title>
<link>https://hdl.handle.net/1721.1/7091</link>
<description>Reconfigurable Architectures for General-Purpose Computing
DeHon, Andre
General-purpose computing devices allow us to (1) customize computation after fabrication and (2) conserve area by reusing expensive active circuitry for different functions in time. We define RP-space, a restricted domain of the general-purpose architectural space focused on reconfigurable computing architectures. Two dominant features differentiate reconfigurable from special-purpose architectures and account for most of the area overhead associated with RP devices: (1) instructions which tell the device how to behave, and (2) flexible interconnect which supports task-dependent dataflow between operations. We can characterize RP-space by the allocation and structure of these resources and compare the efficiencies of architectural points across broad application characteristics. Conventional FPGAs fall at one extreme end of this space, and their efficiency ranges over two orders of magnitude across the space of application characteristics. Understanding RP-space and its consequences allows us to pick the best architecture for a task and to search for more robust design points in the space. Our DPGA, a fine-grained computing device which adds small, on-chip instruction memories to FPGAs, is one such design point. For typical logic applications and finite-state machines, a DPGA can implement tasks in one-third the area of a traditional FPGA. TSFPGA, a variant of the DPGA which focuses on heavily time-switched interconnect, achieves circuit densities close to the DPGA, while reducing typical physical mapping times from hours to seconds. Rigid, fabrication-time organization of instruction resources significantly narrows the range of efficiency for conventional architectures. To avoid this performance brittleness, we developed MATRIX, the first architecture to defer the binding of instruction resources until run-time, allowing the application to organize resources according to its needs. 
Our focus MATRIX design point is  based on an array of 8-bit ALU and register-file building blocks interconnected via a byte-wide network. With today's silicon, a single  chip MATRIX array can deliver over 10 Gop/s  (8-bit ops). On sample image processing  tasks, we show that MATRIX yields 10-20x the  computational density of conventional  processors. Understanding the cost structure  of RP-space helps us identify these  intermediate architectural points and may  provide useful insight more broadly in guiding  our continual search for robust and efficient  general-purpose computing structures.
</description>
<pubDate>Sun, 01 Sep 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7091</guid>
<dc:date>1996-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent Smalltalk on the Message-Driven Processor</title>
<link>https://hdl.handle.net/1721.1/7090</link>
<description>Concurrent Smalltalk on the Message-Driven Processor
Horwat, Waldemar
Concurrent Smalltalk is the primary language  used for programming the J- Machine, a MIMD  message-passing computer containing  thousands of 36-bit processors connected by  a very low latency network. This thesis  describes in detail Concurrent Smalltalk and  its implementation on the J-Machine,  including the Optimist II global optimizing  compiler and Cosmos fine-grain parallel  operating system. Quantitative and qualitative  results are presented.
</description>
<pubDate>Sun, 01 Sep 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7090</guid>
<dc:date>1991-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maximum Entropy Discrimination</title>
<link>https://hdl.handle.net/1721.1/7089</link>
<description>Maximum Entropy Discrimination
Jaakkola, Tommi; Meila, Marina; Jebara, Tony
We present a general framework for  discriminative estimation based on the  maximum entropy principle and its  extensions. All calculations involve  distributions over structures and/or  parameters rather than specific settings and  reduce to relative entropy projections. This  holds even when the data is not separable  within the chosen parametric class, in the  context of anomaly detection rather than  classification, or when the labels in the  training set are uncertain or incomplete.  Support vector machines are naturally  subsumed under this class and we provide  several extensions. We are also able to  estimate exactly and efficiently discriminative  distributions over tree structures of class-conditional models within this framework.  Preliminary experimental results are indicative  of the potential in these techniques.
</description>
<pubDate>Wed, 01 Dec 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7089</guid>
<dc:date>1999-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Hierarchical Cache Coherent Protocol</title>
<link>https://hdl.handle.net/1721.1/7088</link>
<description>A Hierarchical Cache Coherent Protocol
Wallach, Deborah A.
As the number of processors in distributed-memory multiprocessors grows, efficiently supporting a shared-memory programming model becomes difficult. We have designed the Protocol for Hierarchical Directories (PHD) to allow shared-memory support for systems containing massive numbers of processors. PHD eliminates bandwidth problems by using a scalable network, decreases hot-spots by not relying on a single point to distribute blocks, and uses a scalable amount of space for its directories. PHD provides a shared-memory model by synthesizing a global shared memory from the local memories of processors. PHD supports sequentially consistent read, write, and test-and-set operations. This thesis also introduces a method of describing locality for hierarchical protocols and employs this method in the derivation of an abstract model of the protocol behavior. An embedded model, based on the work of Johnson [ISCA19], describes the protocol behavior when mapped to a k-ary n-cube. The thesis uses these two models to study the average height in the hierarchy that operations reach, the longest path messages travel, the number of messages that operations generate, the inter-transaction issue time, and the protocol overhead for different locality parameters, degrees of multithreading, and machine sizes. We determine that multithreading is only useful for approximately two to four threads; any additional interleaving does not decrease the overall latency. For small machines and high-locality applications, this limitation is due mainly to the length of the running threads. For large machines with medium to low locality, this limitation is due mainly to the protocol overhead being too large. 
Our study  using the embedded model shows that in  situations where the run length between  references to shared memory is at least an  order of magnitude longer than the time to  process a single state transition in the  protocol, applications exhibit good  performance. If separate controllers for  processing protocol requests are included,  the protocol scales to 32k processor  machines as long as the application exhibits  hierarchical locality: at least 22% of the global  references must be able to be satisfied  locally; at most 35% of the global references  are allowed to reach the top level of the  hierarchy.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7088</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from Ambiguity</title>
<link>https://hdl.handle.net/1721.1/7087</link>
<description>Learning from Ambiguity
Maron, Oded
There are many learning problems for which  the examples given by the teacher are  ambiguously labeled. In this thesis, we will  examine one framework of learning from  ambiguous examples known as Multiple-Instance learning. Each example is a bag,  consisting of any number of instances. A bag  is labeled negative if all instances in it are  negative. A bag is labeled positive if at least  one instance in it is positive. Because the  instances themselves are not labeled, each  positive bag is an ambiguous example. We  would like to learn a concept which will  correctly classify unseen bags. We have  developed a measure called Diverse Density  and algorithms for learning from multiple-instance examples. We have applied these  techniques to problems in drug design, stock  prediction, and image database retrieval.  These serve as examples of how to translate  the ambiguity in the application domain into  bags, as well as successful examples of  applying Diverse Density techniques.
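The bag-labeling rule described above is simple enough to state directly in code (a minimal sketch; the function name is ours, not the thesis'):

```python
def bag_label(instance_labels):
    # Multiple-Instance rule: a bag is positive (1) if at least one
    # instance in it is positive; negative (0) only if all are negative.
    return int(any(instance_labels))
```

Learning must then work backwards from these bag-level labels to the unlabeled instances, which is the problem the Diverse Density measure addresses.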
</description>
<pubDate>Tue, 01 Dec 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7087</guid>
<dc:date>1998-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Piezoelectric Ultrasonic Micromotors</title>
<link>https://hdl.handle.net/1721.1/7086</link>
<description>Piezoelectric Ultrasonic Micromotors
Flynn, Anita M.
This report describes development of microfabricated piezoelectric ultrasonic motors and bulk-ceramic piezoelectric ultrasonic motors. Ultrasonic motors offer the advantage of low speed, high torque operation without the need for gears. They can be made compact and lightweight and provide a holding torque in the absence of applied power, due to the traveling wave frictional coupling mechanism between the rotor and the stator. This report covers modeling, simulation, fabrication, and testing of ultrasonic motors. Design of experiments methods were also utilized to find optimal motor parameters. A suite of 8 mm diameter x 3 mm tall motors were machined for these studies, and maximum stall torques as large as 10^(-3) Nm, maximum no-load speeds of 1710 rpm, and peak power outputs of 27 mW were realized. Additionally, this report describes the implementation of a microfabricated ultrasonic motor using thin-film lead zirconate titanate. In a joint project with the Pennsylvania State University Materials Research Laboratory and MIT Lincoln Laboratory, 2 mm and 5 mm diameter stator structures were fabricated on 1 micron thick silicon nitride membranes. Small glass lenses placed down on top spun at 100-300 rpm with 4 V excitation at 90 kHz. The large power densities and stall torques of these piezoelectric ultrasonic motors offer tremendous promise for integrated machines: complete intelligent, electro-mechanical autonomous systems mass-produced in a single fabrication process.
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7086</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Distributed Systems Using Linear Naming</title>
<link>https://hdl.handle.net/1721.1/7085</link>
<description>Implementing Distributed Systems Using Linear Naming
Bawden, Alan
Linear graph reduction is a simple computational model in which the cost of naming things is explicitly represented. The key idea is the notion of "linearity". A name is linear if it is only used once, so with linear naming you cannot create more than one outstanding reference to an entity. As a result, linear naming is cheap to support and easy to reason about. Programs can be translated into the linear graph reduction model such that linear names in the program are implemented directly as linear names in the model. Nonlinear names are supported by constructing them out of linear names. The translation thus exposes those places where the program uses names in expensive, nonlinear ways. Two applications demonstrate the utility of using linear graph reduction. First, in the area of distributed computing, linear naming makes it easy to support cheap cross-network references and highly portable data structures. Linear naming also facilitates demand-driven migration of tasks and data around the network without requiring explicit guidance from the programmer. Second, linear graph reduction reveals a new characterization of the phenomenon of state. Systems in which state appears are those which depend on certain global system properties. State is not a localizable phenomenon, which suggests that our usual object-oriented metaphor for state is flawed.
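The use-once discipline behind linear naming can be sketched in code (a hypothetical Python illustration of the idea; the thesis itself works in a graph-reduction model, and this class name and API are ours):

```python
class LinearName:
    # A linear name may be dereferenced at most once. Because there can
    # never be more than one outstanding reference, no reference counting
    # or garbage collection is needed for it.
    def __init__(self, target):
        self._target = target
        self._used = False

    def take(self):
        # Consume the name, yielding its target exactly once.
        if self._used:
            raise RuntimeError("linear name already consumed")
        self._used = True
        target, self._target = self._target, None
        return target
```

A nonlinear (multiply-referenced) name would then have to be built out of several such one-shot references, which is exactly what makes the expensive uses of naming visible in the translated program.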
</description>
<pubDate>Mon, 01 Mar 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7085</guid>
<dc:date>1993-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prototype of a Configurable Web-Based Assessment System</title>
<link>https://hdl.handle.net/1721.1/7084</link>
<description>Prototype of a Configurable Web-Based Assessment System
Hall, Miguel
The MIT Prototype Educational Assessment System provides subjects and courses at MIT with the ability to perform online assessment. The system includes policies to handle harassment and electronic "flaming" while protecting privacy. Within these frameworks, individual courses and subjects can make their own policy decisions about such matters as when assessments can occur, who can submit assessments, and how anonymous assessments are. By allowing assessment to take place continually and allowing both students and staff to participate, the system can provide a forum for the online discussion of subjects. Even in the case of scheduled assessments, the system can provide advantages over end-of-term assessment, since the scheduled assessments can occur several times during the semester, allowing subjects to identify and adjust those areas that could use improvement. Subjects can also develop customized questionnaires, perhaps in response to previous assessments, to suit their needs.
</description>
<pubDate>Sat, 01 Jun 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7084</guid>
<dc:date>1996-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Model Control of a Hexapod Walking Robot</title>
<link>https://hdl.handle.net/1721.1/7083</link>
<description>Virtual Model Control of a Hexapod Walking Robot
Torres, Ann L.
Since robots are typically designed with an individual actuator at each joint, the control of these systems is often difficult and non-intuitive. This thesis explains a more intuitive control scheme called Virtual Model Control. This thesis also demonstrates the simplicity and ease of this control method by using it to control a simulated walking hexapod. Virtual Model Control uses imagined mechanical components to create virtual forces, which are applied through the joint torques of real actuators. This method produces a straightforward means of controlling joint torques to produce a desired robot behavior. Due to the intuitive nature of this control scheme, the design of a virtual model controller is similar to the design of a controller with basic mechanical components. The ease of this control scheme facilitates the use of a high level control system which can be used above the low level virtual model controllers to modulate the parameters of the imaginary mechanical components. In order to apply Virtual Model Control to parallel mechanisms, a solution to the force distribution problem is required. This thesis uses an extension of Gardner's Partitioned Force Control method which allows for the specification of constrained degrees of freedom. This virtual model control technique was applied to a simulated hexapod robot. Although the hexapod is a highly non-linear, parallel mechanism, the virtual models allowed text-book control solutions to be used while the robot was walking. Using a simple linear control law, the robot walked while simultaneously balancing a pendulum and tracking an object.
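The core mechanism, a virtual component whose force is realized through real joint torques, can be sketched roughly as follows (a minimal planar spring-damper example; the function names and setup are illustrative, not taken from the thesis):

```python
def virtual_spring_damper(x, xd, v, k, b):
    # Virtual component: a spring-damper pulling point x toward xd,
    # damped by the point's velocity v. Returns the virtual force (fx, fy).
    return (k * (xd[0] - x[0]) - b * v[0],
            k * (xd[1] - x[1]) - b * v[1])

def joint_torques(jacobian, force):
    # Realize the virtual force as joint torques via the transpose of the
    # attachment point's Jacobian: tau_c = sum_r J[r][c] * f[r].
    return [sum(jacobian[r][c] * force[r] for r in range(len(force)))
            for c in range(len(jacobian[0]))]
```

Applied every control cycle, this makes the robot behave as if the imagined spring-damper were physically mounted between the attachment point and its target.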
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7083</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Model Control of a Biped Walking Robot</title>
<link>https://hdl.handle.net/1721.1/7082</link>
<description>Virtual Model Control of a Biped Walking Robot
Pratt, Jerry E.
The transformation from high level task  specification to low level motion control is a  fundamental issue in sensorimotor control in  animals and robots. This thesis develops a  control scheme called virtual model control  which addresses this issue. Virtual model  control is a motion control language which  uses simulations of imagined mechanical  components to create forces, which are  applied through joint torques, thereby creating  the illusion that the components are  connected to the robot. Due to the intuitive  nature of this technique, designing a virtual  model controller requires the same skills as  designing the mechanism itself. A high level  control system can be cascaded with the low  level virtual model controller to modulate the  parameters of the virtual mechanisms.  Discrete commands from the high level  controller would then result in fluid motion. An  extension of Gardner's Partitioned Actuator  Set Control method is developed. This  method allows for the specification of  constraints on the generalized forces which  each serial path of a parallel mechanism can  apply. Virtual model control has been applied  to a bipedal walking robot. A simple algorithm  utilizing a simple set of virtual components  has successfully compelled the robot to walk  eight consecutive steps.
</description>
<pubDate>Fri, 01 Dec 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7082</guid>
<dc:date>1995-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Reasoning About Classical Mechanics</title>
<link>https://hdl.handle.net/1721.1/7081</link>
<description>Automated Reasoning About Classical Mechanics
Wong, Leon
In recent years, researchers in artificial  intelligence have become interested in  replicating human physical reasoning talents  in computers. One of the most important skills  in this area is predicting how physical  systems will behave. This thesis discusses  an implemented program that generates  algebraic descriptions of how systems of rigid  bodies evolve over time. Discussion about the  design of this program identifies a physical  reasoning paradigm and knowledge  representation approach based on  mathematical model construction and  algebraic reasoning. This paradigm offers  several advantages over methods that have  become popular in the field, and seems  promising for reasoning about a wide variety  of classical mechanics problems.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7081</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligence by Design: Principles of Modularity and Coordination for Engineering</title>
<link>https://hdl.handle.net/1721.1/7080</link>
<description>Intelligence by Design: Principles of Modularity and Coordination for Engineering
Bryson, Joanna J.
All intelligence relies on search --- for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. This dissertation describes an approach, Behavior-Oriented Design (BOD), for engineering complex agents. A complex agent is one that must arbitrate between potentially conflicting goals or behaviors. Behavior-oriented design builds on work in behavior-based and hybrid architectures for agents, and the object-oriented approach to software engineering. The primary contributions of this dissertation are:
1. The BOD architecture: a modular architecture with each module providing specialized representations to facilitate learning. This includes one pre-specified module and representation for action selection or behavior arbitration. The specialized representation underlying BOD action selection is Parallel-rooted, Ordered, Slip-stack Hierarchical (POSH) reactive plans.
2. The BOD development process: an iterative process that alternately scales the agent's capabilities then optimizes the agent for simplicity, exploiting tradeoffs between the component representations. This ongoing process for controlling complexity not only provides bias for the behaving agent, but also facilitates its maintenance and extendibility.
The secondary contributions of this  dissertation include two implementations of  POSH action selection, a procedure for  identifying useful idioms in agent architectures and  using them to distribute knowledge across  agent paradigms, several examples of  applying BOD idioms to established architectures, an  analysis and comparison of the attributes and  design trends of a large number of agent architectures, a comparison of biological  (particularly mammalian) intelligence to  artificial agent architectures, a novel model of primate transitive inference, and many other  examples of BOD agents and BOD  development.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7080</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Communications Systems Through Shared Context</title>
<link>https://hdl.handle.net/1721.1/7079</link>
<description>Generating Communications Systems Through Shared Context
Beal, Jacob
In a distributed model of intelligence, peer components need to communicate with one another. I present a system which enables two agents connected by a thick twisted bundle of wires to bootstrap a simple communication system from observations of a shared environment. The agents learn a large vocabulary of symbols, as well as inflections on those symbols which allow thematic role-frames to be transmitted. Language acquisition time is rapid and linear in the number of symbols and inflections. The final communication system is robust and performance degrades gradually in the face of problems.
</description>
<pubDate>Tue, 01 Jan 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7079</guid>
<dc:date>2002-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>2D-3D Rigid-Body Registration of X-Ray Fluoroscopy and CT Images</title>
<link>https://hdl.handle.net/1721.1/7078</link>
<description>2D-3D Rigid-Body Registration of X-Ray Fluoroscopy and CT Images
Zollei, Lilla
The registration of pre-operative volumetric datasets to intra-operative two-dimensional images provides an improved way of verifying patient position and medical instrument location. In applications from orthopedics to neurosurgery, it has a great value in maintaining up-to-date information about changes due to intervention. We propose a mutual information-based registration algorithm to establish the proper alignment. For optimization purposes, we compare the performance of the non-gradient Powell method and two slightly different versions of a stochastic gradient ascent strategy: one using a sparsely sampled histogramming approach and the other Parzen windowing to carry out probability density approximation. Our main contribution lies in adopting the stochastic approximation scheme successfully applied in 3D-3D registration problems to the 2D-3D scenario, which obviates the need for the generation of full DRRs at each iteration of pose optimization. This facilitates a considerable savings in computation expense. We also introduce a new probability density estimator for image intensities via sparse histogramming, derive gradient estimates for the density measures required by the maximization procedure, and introduce the framework for a multiresolution strategy to the problem. Registration results are presented on fluoroscopy and CT datasets of a plastic pelvis and a real skull, and on a high-resolution CT-derived simulated dataset of a real skull, a plastic skull, a plastic pelvis and a plastic lumbar spine segment.
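As a toy illustration of the similarity measure being maximized, mutual information between two intensity sequences can be estimated from their joint histogram (our miniature sketch, not the paper's sparse-histogram or Parzen estimators):

```python
import math
from collections import Counter

def mutual_information(a, b):
    # Estimate I(A;B) from the joint histogram of two equally-long
    # discrete intensity sequences: sum over p(x,y) * log(p(x,y)/(p(x)p(y))).
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        mi += pxy * math.log(pxy * n * n / (pa[x] * pb[y]))
    return mi
```

A registration loop would perturb the pose, regenerate (or approximate) the 2D projection, and keep pose updates that increase this score.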
</description>
<pubDate>Wed, 01 Aug 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7078</guid>
<dc:date>2001-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Feature Point Detection and Curve Approximation for Early Processing of Freehand Sketches</title>
<link>https://hdl.handle.net/1721.1/7077</link>
<description>Feature Point Detection and Curve Approximation for Early Processing of Freehand Sketches
Sezgin, Tevfik Metin
Freehand sketching is both a natural and crucial part of design, yet is unsupported by current design automation software. We are working to combine the flexibility and ease of use of paper and pencil with the processing power of a computer to produce a design environment that feels as natural as paper, yet is considerably smarter. One of the most basic steps in accomplishing this is converting the original digitized pen strokes in the sketch into the intended geometric objects using feature point detection and approximation. We demonstrate how multiple sources of information can be combined for  feature detection in strokes and apply this technique using two approaches to  signal processing, one using simple average based thresholding and a second  using scale space.
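The simple average-based thresholding mentioned above can be sketched as follows (a minimal version using turning angle as the feature measure; the paper combines such measures with speed data and a scale-space approach):

```python
import math

def turning_angles(stroke):
    # Unsigned turning angle at each interior point of a polyline stroke,
    # where stroke is a list of (x, y) pen samples.
    angles = []
    for i in range(1, len(stroke) - 1):
        (x0, y0), (x1, y1), (x2, y2) = stroke[i - 1], stroke[i], stroke[i + 1]
        ax, ay = x1 - x0, y1 - y0
        bx, by = x2 - x1, y2 - y1
        angles.append(abs(math.atan2(ax * by - ay * bx, ax * bx + ay * by)))
    return angles

def feature_points(stroke):
    # Average-based threshold: keep interior points whose turning angle
    # exceeds the mean turning angle of the whole stroke.
    angles = turning_angles(stroke)
    if not angles:
        return []
    mean = sum(angles) / len(angles)
    return [i + 1 for i, a in enumerate(angles) if a > mean]
```

On a clean right-angle stroke this keeps only the corner; real pen data would first be smoothed, since noise inflates the mean.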
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7077</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Self-Assembly: Constructing Global Shape using Biologically-inspired</title>
<link>https://hdl.handle.net/1721.1/7076</link>
<description>Programmable Self-Assembly: Constructing Global Shape using Biologically-inspired
Nagpal, Radhika
In this thesis I present a language for instructing a sheet of identically-programmed, flexible, autonomous agents ("cells") to assemble themselves into a predetermined global shape, using local interactions. The global shape is described as a folding construction on a continuous sheet, using a set of axioms from paper-folding (origami). I provide a means of automatically deriving the cell program, executed by all cells, from the global shape description. With this language, a wide variety of global shapes and patterns can be synthesized, using only local interactions between identically-programmed cells. Examples include flat layered shapes, all plane Euclidean constructions, and a variety of tessellation patterns. In contrast to approaches based on cellular automata or evolution, the cell program is directly derived from the global shape description and is composed from a small number of biologically-inspired primitives: gradients, neighborhood query, polarity inversion, cell-to-cell contact and flexible folding. The cell programs are robust, without relying on regular cell placement, global coordinates, or synchronous operation, and can tolerate a small amount of random cell death. I show that an average cell neighborhood of 15 is sufficient to reliably self-assemble complex shapes and geometric patterns on randomly distributed cells. The language provides many insights into the relationship between local and global descriptions of behavior, such as the advantage of constructive languages, mechanisms for achieving global robustness, and mechanisms for achieving scale-independent shapes from a single cell program. The language suggests a mechanism by which many related shapes can be created by the same cell program, in the manner of D'Arcy Thompson's famous coordinate transformations. The thesis illuminates how complex morphology and pattern can emerge from local interactions, and how one can engineer robust self-assembly.
</description>
<pubDate>Fri, 01 Jun 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7076</guid>
<dc:date>2001-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling, Estimation, and Control of Robot-Soil Interactions</title>
<link>https://hdl.handle.net/1721.1/7075</link>
<description>Modeling, Estimation, and Control of Robot-Soil Interactions
Hong, Won
This thesis presents the development of hardware, theory, and experimental methods to enable a robotic manipulator arm to interact with soils and estimate soil properties from interaction forces. Unlike the majority of robotic systems interacting with soil, our objective is parameter estimation, not excavation. To this end, we design our manipulator with a flat plate for easy modeling of interactions. By using a flat plate, we take advantage of the wealth of research on the similar problem of earth pressure on retaining walls. There are a number of existing earth pressure models. These models typically provide estimates of force which are in uncertain relation to the true force. A recent technique, known as numerical limit analysis, provides upper and lower bounds on the true force. Predictions from the numerical limit analysis technique are shown to be in good agreement with other accepted models. Experimental methods for plate insertion, soil-tool interface friction estimation, and control of applied forces on the soil are presented. In addition, a novel graphical technique for inverting the soil models is developed, which is an improvement over standard nonlinear optimization. This graphical technique utilizes the uncertainties associated with each set of force measurements to obtain all possible parameters which could have produced the measured forces. The system is tested on three cohesionless soils, two in a loose state and one in a loose and dense state. The results are compared with friction angles obtained from direct shear tests. The results highlight a number of key points. Common assumptions are made in soil modeling, most notably the Mohr-Coulomb failure law and perfectly plastic behavior. In the direct shear tests, a marked dependence of friction angle on the normal stress at low stresses is found. This has ramifications for any study of friction done at low stresses.
In addition, gradual failures are often observed for vertical tools and tools inclined away from the direction of motion. After accounting for the change in friction angle at low stresses, the results show good agreement with the direct shear values.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7075</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Multi-class Text Classification with Naive Bayes</title>
<link>https://hdl.handle.net/1721.1/7074</link>
<description>Improving Multi-class Text Classification with Naive Bayes
Rennie, Jason D. M.
There are numerous text documents available in electronic form. More and more are becoming available every day. Such documents represent a massive amount of information that is easily accessible. Seeking value in this huge collection requires organization; much of the work of organizing documents can be automated through text classification. The accuracy and our understanding of such systems greatly influences their usefulness. In this paper, we seek 1) to advance the understanding of commonly used text classification techniques, and 2) through that understanding, improve the tools that are available for text classification. We begin by clarifying the assumptions made in the derivation of Naive Bayes, noting basic properties and proposing ways for its extension and improvement. Next, we investigate the quality of Naive Bayes parameter estimates and their impact on classification. Our analysis leads to a theorem which gives an explanation for the improvements that can be found in multiclass classification with Naive Bayes using Error-Correcting Output Codes. We use experimental evidence on two commonly-used data sets to exhibit an application of the theorem. Finally, we show fundamental flaws in a commonly-used feature selection algorithm and develop a statistics-based framework for text feature selection. Greater understanding of Naive Bayes and the properties of text allows us to make better use of it in text classification.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7074</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Object Recognition with Pictorial Structures</title>
<link>https://hdl.handle.net/1721.1/7073</link>
<description>Object Recognition with Pictorial Structures
Felzenszwalb, Pedro F.
This thesis presents a statistical framework for object recognition. The framework is motivated by the pictorial structure models introduced by Fischler and Elschlager nearly 30 years ago. The basic idea is to model an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. The problem of detecting an object in an image and the problem of learning an object model using training examples are naturally formulated under a statistical approach. We present efficient algorithms to solve these problems in our framework. We demonstrate our techniques by training models to represent faces and human bodies. The models are then used to locate the corresponding objects in novel images.
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7073</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Architect's Collaborator: Toward Intelligent Tools for Conceptual Design</title>
<link>https://hdl.handle.net/1721.1/7072</link>
<description>The Architect's Collaborator: Toward Intelligent Tools for Conceptual Design
Koile, Kimberle
In early stages of architectural design, as in  other design domains, the language used is often very abstract. In architectural design, for  example, architects and their clients use experiential terms such as "private" or "open"  to describe spaces. If we are to build programs that can help designers during this  early-stage design, we must give those programs the capability to deal with concepts  on the level of such abstractions. The work reported in this thesis sought to do that,  focusing on two key questions: How are  abstract terms such as "private" and "open" translated  into physical form? How might one build a tool to assist designers with this process? The Architect's Collaborator (TAC) was built to  explore these issues. It is a design assistant that supports iterative design refinement, and  that represents and reasons about how experiential qualities are manifested in  physical form. Given a starting design and a  set of design goals, TAC explores the space of  possible designs in search of solutions that  satisfy the goals. It employs a strategy we've called  dependency-directed redesign: it evaluates a design with respect to a set of goals, then  uses an explanation of the evaluation to guide proposal and refinement of repair  suggestions; it then carries out the repair  suggestions to create new designs. A series of experiments was run to study  TAC's behavior. Issues of control structure,  goal set size, goal order, and modification operator  capabilities were explored. In addition, TAC's use as a design assistant was studied  in an experiment using a house in the  process of being redesigned. TAC's use as an  analysis tool was studied in an experiment  using Frank Lloyd Wright's Prairie houses.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7072</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicate Dispatching in the Common Lisp Object System</title>
<link>https://hdl.handle.net/1721.1/7071</link>
<description>Predicate Dispatching in the Common Lisp Object System
Ucko, Aaron Mark
I have added support for predicate dispatching, a powerful generalization of other dispatching mechanisms, to the Common Lisp Object System (CLOS). To demonstrate its utility, I used predicate dispatching to enhance Weyl, a computer algebra system which doubles as a CLOS library. My result is Dispatching-Enhanced Weyl (DEW), a computer algebra system that I have demonstrated to be well suited for both users and programmers.
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7071</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Control of an Anthropomorphic Robotic Finger with Multi-point Tactile Sensation</title>
<link>https://hdl.handle.net/1721.1/7070</link>
<description>Design and Control of an Anthropomorphic Robotic Finger with Multi-point Tactile Sensation
Banks, Jessica
The goal of this research is to develop the prototype of a tactile sensing platform for anthropomorphic manipulation research. We investigate this problem through the fabrication and simple control of a planar 2-DOF robotic finger inspired by anatomic consistency, self-containment, and adaptability. The robot is equipped with a tactile sensor array based on optical transducer technology whereby localized changes in light intensity within an illuminated foam substrate correspond to the distribution and magnitude of forces applied to the sensor surface plane. The integration of tactile perception is a key component in realizing robotic systems which organically interact with the world. Such natural behavior is characterized by compliant performance that can initiate internal, and respond to external, force application in a dynamic environment. However, most of the current manipulators that support some form of haptic feedback either solely derive proprioceptive sensation or limit tactile sensors to the mechanical fingertips. These constraints are due to the technological challenges involved in high resolution, multi-point tactile perception. In this work, however, we take the opposite approach, emphasizing the role of full-finger tactile feedback in the refinement of manual capabilities. To this end, we propose and implement a control framework for sensorimotor coordination analogous to infant-level grasping and fixturing reflexes. This thesis details the mechanisms used to achieve these sensory, actuation, and control objectives, along with the design philosophies and biological influences behind them. The results of behavioral experiments with a simple tactilely-modulated control scheme are also described. The hope is to integrate the modular finger into an engineered analog of the human hand with a complete haptic system.
</description>
<pubDate>Tue, 01 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7070</guid>
<dc:date>2001-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Informational Complexity of Learning from Examples</title>
<link>https://hdl.handle.net/1721.1/7069</link>
<description>The Informational Complexity of Learning from Examples
Niyogi, Partha
This thesis attempts to quantify the amount of  information needed to learn certain tasks. The  tasks chosen vary from learning functions in a  Sobolev space using radial basis function  networks to learning grammars in the  principles and parameters framework of  modern linguistic theory. These problems are  analyzed from the perspective of  computational learning theory and certain  unifying perspectives emerge.
</description>
<pubDate>Sun, 01 Sep 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7069</guid>
<dc:date>1996-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Genetic Algorithms to Efficient Organization</title>
<link>https://hdl.handle.net/1721.1/7068</link>
<description>From Genetic Algorithms to Efficient Organization
Yuret, Deniz
The work described in this thesis began as  an inquiry into the nature and use of  optimization programs based on "genetic  algorithms." That inquiry led, eventually, to  three powerful heuristics that are broadly  applicable in gradient-ascent programs: First,  remember the locations of local maxima and  restart the optimization program at a place  distant from previously located local maxima.  Second, adjust the size of probing steps to  suit the local nature of the terrain, shrinking  when probes do poorly and growing when  probes do well. And third, keep track of the  directions of recent successes, so as to  probe preferentially in the direction of most  rapid ascent. These algorithms lie at the core  of a novel optimization program that illustrates  the power to be had from deploying them  together. The efficacy of this program is  demonstrated on several test problems  selected from a variety of fields, including De  Jong's famous test-problem suite, the  traveling salesman problem, the problem of  coordinate registration for image guided  surgery, the energy minimization problem for  determining the shape of organic molecules,  and the problem of assessing the structure of  sedimentary deposits using seismic data.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7068</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Segmentation of Brain Tissue from Magnetic Resonance Images</title>
<link>https://hdl.handle.net/1721.1/7067</link>
<description>Segmentation of Brain Tissue from Magnetic Resonance Images
Kapur, Tina
Segmentation of medical imagery is a  challenging problem due to the complexity of  the images, as well as to the absence of  models of the anatomy that fully capture the  possible deformations in each structure.  Brain tissue is a particularly complex  structure, and its segmentation is an  important step for studies in temporal change  detection of morphology, as well as for 3D  visualization in surgical planning. In this  paper, we present a method for segmentation  of brain tissue from magnetic resonance  images that is a combination of three existing  techniques from the Computer Vision  literature: EM segmentation, binary  morphology, and active contour models. Each  of these techniques has been customized for  the problem of brain tissue segmentation in a  way that the resultant method is more robust  than its components. Finally, we present the  results of a parallel implementation of this  method on IBM's supercomputer Power  Visualization System for a database of 20  brain scans each with 256x256x124 voxels  and validate those against segmentations  generated by neuroanatomy experts.
</description>
<pubDate>Sun, 01 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7067</guid>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Coupled Micro-Macro Actuators</title>
<link>https://hdl.handle.net/1721.1/7066</link>
<description>Parallel Coupled Micro-Macro Actuators
Morrell, John Bryant
This thesis presents a new actuator system consisting of a micro-actuator and a macro-actuator coupled in parallel via a compliant transmission. The system is called the Parallel Coupled Micro-Macro Actuator, or PaCMMA. In this system, the micro-actuator is capable of high bandwidth force control due to its low mass and direct-drive connection to the output shaft. The compliant transmission of the macro-actuator reduces the impedance (stiffness) at the output shaft and increases the dynamic range of force. Performance improvement over single actuator systems was expected in force control, impedance control, force distortion and reduction of transient impact forces. A set of quantitative measures is proposed and the actuator system is evaluated against them: Force Control Bandwidth, Position Bandwidth, Dynamic Range, Impact Force, Impedance ("Backdriveability"), Force Distortion and Force Performance Space. Several theoretical performance limits are derived from the saturation limits of the system. A control law is proposed and control system performance is compared to the theoretical limits. A prototype testbed was built using permanent magnet motors and an experimental comparison was performed between this actuator concept and two single actuator systems. The following performance was observed: Force Bandwidth of 56Hz, Torque Dynamic Range of 800:1, Peak Torque of 1040mNm, Minimum Torque of 1.3mNm. Peak Impact Force was reduced by an order of magnitude. Distortion at small amplitudes was reduced substantially. Backdriven impedance was reduced by 2-3 orders of magnitude. This actuator system shows promise for manipulator design as well as psychophysical tests of human performance.
</description>
<pubDate>Mon, 01 Jan 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7066</guid>
<dc:date>1996-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alignment by Maximization of Mutual Information</title>
<link>https://hdl.handle.net/1721.1/7065</link>
<description>Alignment by Maximization of Mutual Information
Viola, Paul A.
A new information-theoretic approach is  presented for finding the pose of an object in  an image. The technique does not require  information about the surface properties of the  object, besides its shape, and is robust with  respect to variations of illumination. In our  derivation, few assumptions are made about  the nature of the imaging process. As a result  the algorithms are quite general and can  foreseeably be used in a wide variety of  imaging situations. Experiments are  presented that demonstrate the approach  registering magnetic resonance (MR) images  with computed tomography (CT) images,  aligning a complex 3D object model to real  scenes including clutter and occlusion,  tracking a human head in a video sequence  and aligning a view-based 2D object model to  real images. The method is based on a  formulation of the mutual information between  the model and the image called EMMA. As  applied here the technique is intensity-based,  rather than feature-based. It works well in  domains where edge or gradient-magnitude  based methods have difficulty, yet it is more  robust than traditional correlation. Additionally,  it has an efficient implementation that is  based on stochastic approximation. Finally,  we will describe a number of additional real-world applications that can be solved  efficiently and reliably using EMMA. EMMA can  be used in machine learning to find maximally  informative projections of high-dimensional  data. EMMA can also be used to detect and  correct corruption in magnetic resonance  images (MRI).
</description>
<pubDate>Wed, 01 Mar 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7065</guid>
<dc:date>1995-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Embodiment and Manipulation Learning Process for a Humanoid Hand</title>
<link>https://hdl.handle.net/1721.1/7064</link>
<description>Embodiment and Manipulation Learning Process for a Humanoid Hand
Matsuoka, Yoky
Babies are born with simple manipulation capabilities such as reflexes to perceived stimuli. Initial discoveries by babies are accidental until they become coordinated and curious enough to actively investigate their surroundings. This thesis explores the development of such primitive learning systems using an embodied light-weight hand with three fingers and a thumb. It is self-contained, having four motors and 36 exteroceptor and proprioceptor sensors controlled by an on-palm microcontroller. Primitive manipulation is learned from sensory inputs using competitive learning, the back-propagation algorithm, and reinforcement learning strategies. This hand will be used for a humanoid being developed at the MIT Artificial Intelligence Laboratory.
</description>
<pubDate>Mon, 01 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7064</guid>
<dc:date>1995-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thread Scheduling Mechanisms for Multiple-Context Parallel Processors</title>
<link>https://hdl.handle.net/1721.1/7063</link>
<description>Thread Scheduling Mechanisms for Multiple-Context Parallel Processors
Fiske, James A. Stuart
Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. This thesis proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread either statically or dynamically and is used by the thread scheduler to decide which threads to load in the contexts, and to decide which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects and shows how thread prioritization can both maintain high processor utilization and limit increases in critical path runtime caused by multithreading. The model also shows that in order to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests.
We show how  simple hardware can prioritize the running of  threads in the multiple contexts, and the  issuing of requests to both the local memory  and the network. Simulation experiments  show how thread prioritization is used in a  variety of applications. Thread prioritization  can improve the performance of  synchronization primitives by minimizing the  number of processor cycles wasted in  spinning and devoting more cycles to critical  threads. Thread prioritization can be used in  combination with other techniques to improve  cache performance and minimize cache  interference between different working sets in  the cache. For applications that are critical  path limited, thread prioritization can improve  performance by allowing processor resources  to be devoted preferentially to critical threads.  These experimental results show that thread  prioritization is a mechanism that can be used  to implement a wide range of scheduling  policies.
</description>
<pubDate>Thu, 01 Jun 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7063</guid>
<dc:date>1995-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced Reality Visualization in a Surgical Environment</title>
<link>https://hdl.handle.net/1721.1/7062</link>
<description>Enhanced Reality Visualization in a Surgical Environment
Mellor, J.P.
Enhanced reality visualization is the process of enhancing an image by adding to it information which is not present in the original image. A wide variety of information can be added to an image, ranging from hidden lines or surfaces to textual or iconic data about a particular part of the image. Enhanced reality visualization is particularly well suited to neurosurgery. By rendering brain structures which are not visible, at the correct location in an image of a patient's head, the surgeon is essentially provided with X-ray vision. He can visualize the spatial relationship between brain structures before he performs a craniotomy, and during the surgery he can see what's under the next layer before he cuts through. Given a video image of the patient and a three-dimensional model of the patient's brain, the problem enhanced reality visualization faces is to render the model from the correct viewpoint and overlay it on the original image. The relationship between the coordinate frames of the patient, the patient's internal anatomy scans, and the image plane of the camera observing the patient must be established. This problem is closely related to the camera calibration problem. This report presents a new approach to finding this relationship and develops a system for performing enhanced reality visualization in a surgical environment. Immediately prior to surgery, a few circular fiducials are placed near the surgical site. An initial registration of video and internal data is performed using a laser scanner. Following this, our method is fully automatic, runs in nearly real-time, is accurate to within a pixel, allows both patient and camera motion, automatically corrects for changes to the internal camera parameters (focal length, focus, aperture, etc.), and requires only a single image.
</description>
<pubDate>Sun, 01 Jan 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7062</guid>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contact Sensing: A Sequential Decision Approach to Sensing Manipulation Contact</title>
<link>https://hdl.handle.net/1721.1/7061</link>
<description>Contact Sensing: A Sequential Decision Approach to Sensing Manipulation Contact
Eberman, Brian Scott
This paper describes a new statistical, model-based approach to building a contact state observer. The observer uses measurements of the contact force and position, and prior information about the task encoded in a graph, to determine the current location of the robot in the task configuration space. Each node represents what the measurements will look like in a small region of configuration space by storing a predictive, statistical, measurement model. This approach assumes that the measurements are statistically block independent conditioned on knowledge of the model, which is a fairly good model of the actual process. Arcs in the graph represent possible transitions between models. Beam Viterbi search is used to match measurement history against possible paths through the model graph in order to estimate the most likely path for the robot. The resulting approach provides a new decision process that can be used as an observer for event-driven manipulation programming. The decision procedure is significantly more robust than simple threshold decisions because the measurement history is used to make decisions. The approach can be used to enhance the capabilities of autonomous assembly machines and in quality control applications.
</description>
<pubDate>Mon, 01 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7061</guid>
<dc:date>1995-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parameter Estimation in Chaotic Systems</title>
<link>https://hdl.handle.net/1721.1/7060</link>
<description>Parameter Estimation in Chaotic Systems
Hung, Elmer S.
This report examines how to estimate the parameters of a chaotic system given noisy observations of the state behavior of the system. Investigating parameter estimation for chaotic systems is interesting because of possible applications for high-precision measurement and for use in other signal processing, communication, and control applications involving chaotic systems. In this report, we examine theoretical issues regarding parameter estimation in chaotic systems and develop an efficient algorithm to perform parameter estimation. We discover two properties that are helpful for performing parameter estimation on non-structurally stable systems. First, it turns out that most data in a time series of state observations contribute very little information about the underlying parameters of a system, while a few sections of data may be extraordinarily sensitive to parameter changes. Second, for one-parameter families of systems, we demonstrate that there is often a preferred direction in parameter space governing how easily trajectories of one system can "shadow" trajectories of nearby systems. This asymmetry of shadowing behavior in parameter space is proved for certain families of maps of the interval. Numerical evidence indicates that similar results may be true for a wide variety of other systems. Using the two properties cited above, we devise an algorithm for performing parameter estimation. Standard parameter estimation techniques such as the extended Kalman filter perform poorly on chaotic systems because of divergence problems. The proposed algorithm achieves accuracies several orders of magnitude better than the Kalman filter and has good convergence properties for large data sets.
</description>
<pubDate>Sat, 01 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7060</guid>
<dc:date>1995-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Passive Dynamics in the Control of Gymnastic Maneuvers</title>
<link>https://hdl.handle.net/1721.1/7059</link>
<description>Passive Dynamics in the Control of Gymnastic Maneuvers
Playter, Robert
The control of aerial gymnastic maneuvers is challenging because these maneuvers frequently involve complex rotational motion and because the performer has limited control of the maneuver during flight. A performer can influence a maneuver using a sequence of limb movements during flight. However, the same sequence may not produce reliable performances in the presence of off-nominal conditions. How do people compensate for variations in performance to reliably produce aerial maneuvers? In this report I explore the role that passive dynamic stability may play in making the performance of aerial maneuvers simple and reliable.  I present a control strategy comprised of active and passive components for performing robot front somersaults in the laboratory. I show that passive dynamics can neutrally stabilize the layout somersault which involves an "inherently unstable" rotation about the intermediate principal axis. And I show that a strategy that uses open loop joint torques plus passive dynamics leads to more reliable 1 1/2 twisting front somersaults in simulation than a strategy that uses prescribed limb motion.  Results are presented from laboratory experiments on gymnastic robots, from dynamic simulation of humans and robots, and from linear stability analyses of these systems.
</description>
<pubDate>Wed, 01 Mar 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7059</guid>
<dc:date>1995-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Asymptotically Zero Energy Computing Using Split-Level Charge Recovery Logic</title>
<link>https://hdl.handle.net/1721.1/7058</link>
<description>Asymptotically Zero Energy Computing Using Split-Level Charge Recovery Logic
Younis, Saed G.
The dynamic power requirement of CMOS circuits is rapidly becoming a major concern in the design of personal information systems and large computers. In this work we present a number of new CMOS logic families, Charge Recovery Logic (CRL) as well as the much improved Split-Level Charge Recovery Logic (SCRL), within which the transfer of charge between the nodes occurs quasistatically. Operating quasistatically, these logic families have an energy dissipation that drops linearly with operating frequency, i.e., their power consumption drops quadratically with operating frequency as opposed to the linear drop of conventional CMOS. The circuit techniques in these new families rely on constructing an explicitly reversible pipelined logic gate, where the information necessary to recover the energy used to compute a value is provided by computing its logical inverse. Information necessary to uncompute the inverse is available from the subsequent inverse logic stage. We demonstrate the low energy operation of SCRL by presenting the results from the testing of the first fully quasistatic 8 x 8 multiplier chip (SCRL-1) employing SCRL circuit techniques.
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7058</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analysis of the Effect of Gaussian Error in Object Recognition</title>
<link>https://hdl.handle.net/1721.1/7057</link>
<description>An Analysis of the Effect of Gaussian Error in Object Recognition
Sarachik, Karen Beth
Object recognition is complicated by clutter,  occlusion, and sensor error. Since pose  hypotheses are based on image feature  locations, these effects can lead to false  negatives and positives. In a typical  recognition algorithm, pose hypotheses are  tested against the image, and a score is  assigned to each hypothesis. We use a  statistical model to determine the score  distribution associated with correct and  incorrect pose hypotheses, and use binary  hypothesis testing techniques to distinguish  between them. Using this approach we can  compare algorithms and noise models, and  automatically choose values for internal  system thresholds to minimize the probability  of making a mistake.
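The threshold-selection idea in the abstract can be sketched as follows; the Gaussian score parameters here are invented for illustration and are not taken from the thesis:

```python
# Hedged sketch of binary hypothesis testing on recognition scores: given
# Gaussian score distributions for correct and incorrect pose hypotheses
# (parameters invented for illustration), pick the score threshold that
# minimizes the total probability of error under equal priors.
import math

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

MU_WRONG, SIG_WRONG = 0.0, 1.0    # scores of incorrect hypotheses
MU_RIGHT, SIG_RIGHT = 3.0, 1.0    # scores of correct hypotheses

def error_prob(t):
    false_pos = 1.0 - gauss_cdf(t, MU_WRONG, SIG_WRONG)  # wrong scored above t
    false_neg = gauss_cdf(t, MU_RIGHT, SIG_RIGHT)        # right scored below t
    return 0.5 * (false_pos + false_neg)

# Scan candidate thresholds; with equal variances and priors the optimum
# sits midway between the two means.
best_t = min((t * 0.01 for t in range(-200, 500)), key=error_prob)
```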
</description>
<pubDate>Tue, 01 Feb 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7057</guid>
<dc:date>1994-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heterogeneous Multi-Robot Cooperation</title>
<link>https://hdl.handle.net/1721.1/7056</link>
<description>Heterogeneous Multi-Robot Cooperation
Parker, Lynne E.
This report addresses the problem of  achieving cooperation within small- to  medium- sized teams of heterogeneous  mobile robots. I describe a software  architecture I have developed, called  ALLIANCE, that facilitates robust, fault tolerant,  reliable, and adaptive cooperative control. In  addition, an extended version of ALLIANCE,  called L-ALLIANCE, is described, which  incorporates a dynamic parameter update  mechanism that allows teams of mobile  robots to improve the efficiency of their  mission performance through learning. A  number of experimental results of  implementing these architectures on both  physical and simulated mobile robot teams  are described. In addition, this report presents  the results of studies of a number of issues in  mobile robot cooperation, including fault  tolerant cooperative control, adaptive action  selection, distributed control, robot  awareness of team member actions,  improving efficiency through learning, inter-robot communication, action recognition, and  local versus global control.
</description>
<pubDate>Tue, 01 Feb 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7056</guid>
<dc:date>1994-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Methods for Synthesizing Whole-Hand Grasps from Generalized Prototypes</title>
<link>https://hdl.handle.net/1721.1/7055</link>
<description>Parallel Methods for Synthesizing Whole-Hand Grasps from Generalized Prototypes
Pollard, Nancy S.
This report addresses the problem of  acquiring objects using articulated robotic  hands. Standard grasps are used to make the  problem tractable, and a technique is  developed for generalizing these standard  grasps to increase their flexibility to variations  in the problem geometry. A generalized grasp  description is applied to a new problem  situation using a parallel search through hand  configuration space, and the result of this  operation is a global overview of the space of  good solutions. The techniques presented in  this report have been implemented, and the  results are verified using the Salisbury three-finger robotic hand.
</description>
<pubDate>Sat, 01 Jan 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7055</guid>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maygen: A Symbolic Debugger Generation System</title>
<link>https://hdl.handle.net/1721.1/7054</link>
<description>Maygen: A Symbolic Debugger Generation System
Tsien, Christine L.
With the development of high-level languages  for new computer architectures comes the  need for appropriate debugging tools as well.  One method for meeting this need would be  to develop, from scratch, a symbolic debugger  with the introduction of each new language  implementation for any given architecture.  This, however, seems to require unnecessary  duplication of effort among developers. This  paper describes Maygen, a "debugger  generation system," designed to efficiently  provide the desired language-dependent and  architecture-dependent debuggers. A  prototype of the Maygen system has been  implemented and is able to handle the  semantically different languages of C and  OPAL.
</description>
<pubDate>Sat, 01 May 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7054</guid>
<dc:date>1993-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Translucent Procedures, Abstraction without Opacity</title>
<link>https://hdl.handle.net/1721.1/7053</link>
<description>Translucent Procedures, Abstraction without Opacity
Rozas, Guillermo J.
This report introduces TRANSLUCENT  PROCEDURES as a new mechanism for  implementing behavioral abstractions. Like  an ordinary procedure, a translucent  procedure can be invoked, and thus provides  an obvious way to capture a BEHAVIOR.  Translucent procedures, like ordinary  procedures, can be manipulated as first-class  objects and combined using functional  composition. But unlike ordinary procedures,  translucent procedures have structure that  can be examined in well-specified non-destructive ways, without invoking the  procedure.
</description>
<pubDate>Fri, 01 Oct 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7053</guid>
<dc:date>1993-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Internal Camera Calibration Using Rotation and Geometric Shapes</title>
<link>https://hdl.handle.net/1721.1/7052</link>
<description>Internal Camera Calibration Using Rotation and Geometric Shapes
Stein, Gideon P.
This paper describes a simple method for internal camera calibration for computer vision. The method is based on tracking image features through a sequence of images while the camera undergoes pure rotation. The locations of the features relative to the camera or to each other need not be known, so the method can be used both for laboratory calibration and for self-calibration in autonomous robots working in unstructured environments. A second method of calibration is also presented, which uses simple geometric objects such as spheres and straight lines to find the camera parameters. Calibration is performed using both methods and the results are compared.
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7052</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design of Shape from Motion Constraints</title>
<link>https://hdl.handle.net/1721.1/7051</link>
<description>The Design of Shape from Motion Constraints
Caine, Michael E.
This report presents a set of representations, methodologies, and tools for visualizing, analyzing, and designing functional shapes in terms of constraints on motion. The core of the research is an interactive computational environment that provides an explicit visual representation of motion constraints produced by shape interactions, and a series of tools that allow for the manipulation of motion constraints and their underlying shapes for the purpose of design.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7051</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explorations of the Practical Issues of Learning Prediction-Control Tasks Using Temporal Difference Learning Methods</title>
<link>https://hdl.handle.net/1721.1/7050</link>
<description>Explorations of the Practical Issues of Learning Prediction-Control Tasks Using Temporal Difference Learning Methods
Isbell, Charles L.
There has been recent interest in using temporal difference learning methods to attack problems of prediction and control. While these algorithms have been brought to bear on many problems, they remain poorly understood. This thesis further explores these algorithms, presenting a framework for viewing them, raising a number of practical issues, and exploring those issues in the context of several case studies. These include applying the TD(lambda) algorithm to: 1) learning to play tic-tac-toe from the outcome of self-play and of play against a perfectly-playing opponent and 2) learning simple one-dimensional segmentation tasks.
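A minimal sketch of the temporal difference update underlying these case studies, here the lambda = 0 special case applied to a toy random-walk prediction task; the task and constants are illustrative, not from the thesis:

```python
# Hedged sketch of TD(0), the lambda = 0 case of TD(lambda); the chain task
# and step sizes are invented for illustration, not taken from the thesis.
import random

random.seed(0)

N_STATES = 5          # chain of states 0..4; 0 and 4 are terminal
ALPHA = 0.1           # learning rate
GAMMA = 1.0           # undiscounted episodic task

V = [0.0] * N_STATES  # value estimates, initialized to zero

for episode in range(5000):
    s = 2  # start in the middle of the chain
    while True:
        s_next = s + random.choice([-1, 1])        # unbiased random walk
        done = (s_next == 0) or (s_next == N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0  # reward only at the right end
        target = r if done else r + GAMMA * V[s_next]
        V[s] += ALPHA * (target - V[s])             # TD(0) update
        if done:
            break
        s = s_next

# With symmetric moves, V[2] converges toward 0.5, the probability of
# reaching the rewarding terminal before the other one.
```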
</description>
<pubDate>Tue, 01 Dec 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7050</guid>
<dc:date>1992-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Attentional Selection in Object Recognition</title>
<link>https://hdl.handle.net/1721.1/7049</link>
<description>Attentional Selection in Object Recognition
Syeda-Mahmood, Tanveer F.
A key problem in object recognition is  selection, namely, the problem of identifying  regions in an image within which to start the  recognition process, ideally by isolating  regions that are likely to come from a single  object. Such a selection mechanism has  been found to be crucial in reducing the  combinatorial search involved in the matching  stage of object recognition. Even though  selection is of help in recognition, it has  largely remained unsolved because of the  difficulty in isolating regions belonging to  objects under complex imaging conditions  involving occlusions, changing illumination,  and object appearances. This thesis presents  a novel approach to the selection problem by  proposing a computational model of visual  attentional selection as a paradigm for  selection in recognition. In particular, it  proposes two modes of attentional selection,  namely, attracted and pay attention modes as  being appropriate for data and model-driven  selection in recognition. An implementation of  this model has led to new ways of extracting  color, texture and line group information in  images, and their subsequent use in isolating  areas of the scene likely to contain the model  object. Among the specific results in this  thesis are: a method of specifying color by  perceptual color categories for fast color  region segmentation and color-based  localization of objects, and a result showing  that the recognition of texture patterns on  model objects is possible under changes in  orientation and occlusions without detailed  segmentation. The thesis also presents an  evaluation of the proposed model by  integrating with a 3D from 2D object  recognition system and recording the  improvement in performance. 
These results  indicate that attentional selection can  significantly overcome the computational  bottleneck in object recognition, both due to a  reduction in the number of features, and due  to a reduction in the number of matches  during recognition using the information  derived during selection. Finally, these  studies have revealed a surprising use of  selection, namely, in the partial solution of the  pose of a 3D object.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7049</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Site Controller: A System for Computer-Aided Civil Engineering and Construction</title>
<link>https://hdl.handle.net/1721.1/7048</link>
<description>Site Controller: A System for Computer-Aided Civil Engineering and Construction
Greenspun, Philip
A revolution in earthmoving, a $100 billion industry, can be achieved with three components: the GPS location system, sensors and computers in bulldozers, and SITE CONTROLLER, a central computer system that maintains design data and directs operations. The first two components are widely available; I built SITE CONTROLLER to complete the triangle and describe it here. SITE CONTROLLER assists civil engineers in the design, estimation, and construction of earthworks, including hazardous waste site remediation. The core of SITE CONTROLLER is a site modelling system that represents existing and prospective terrain shapes, roads, hydrology, etc. Around this core are analysis, simulation, and vehicle control tools. Integrating these modules into one program enables civil engineers and contractors to use a single interface and database throughout the life of a project.
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7048</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geometry and Photometry in 3D Visual Recognition</title>
<link>https://hdl.handle.net/1721.1/7047</link>
<description>Geometry and Photometry in 3D Visual Recognition
Shashua, Amnon
The report addresses the problem of visual  recognition under two sources of variability:  geometric and photometric. The geometric  deals with the relation between 3D objects  and their views under orthographic and  perspective projection. The photometric deals  with the relation between 3D matte objects  and their images under changing illumination  conditions. Taken together, an alignment-based method is presented for recognizing  objects viewed from arbitrary viewing  positions and illuminated by arbitrary settings  of light sources.
</description>
<pubDate>Sun, 01 Nov 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7047</guid>
<dc:date>1992-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Object Recognition</title>
<link>https://hdl.handle.net/1721.1/7046</link>
<description>Statistical Object Recognition
Wells, William M. III
Two formulations of model-based object recognition are described. MAP Model Matching evaluates joint hypotheses of match and pose, while Posterior Marginal Pose Estimation evaluates the pose only. Local search in pose space is carried out with the Expectation-Maximization (EM) algorithm. Recognition experiments are described where the EM algorithm is used to refine and evaluate pose hypotheses in 2D and 3D. Initial hypotheses for the 2D experiments were generated by a simple indexing method: Angle Pair Indexing. The Linear Combination of Views method of Ullman and Basri is employed as the projection model in the 3D experiments.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7046</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Recurrent Networks for Dimensionality Reduction</title>
<link>https://hdl.handle.net/1721.1/7045</link>
<description>Using Recurrent Networks for Dimensionality Reduction
Jones, Michael J.
This report explores how recurrent neural networks can be exploited for learning high-dimensional mappings. Since recurrent networks are as powerful as Turing machines, an interesting question is how recurrent networks can be used to simplify the problem of learning from examples. The main problem with learning high-dimensional functions is the curse of dimensionality, which roughly states that the number of examples needed to learn a function increases exponentially with input dimension. This thesis proposes a way of avoiding this problem by using a recurrent network to decompose a high-dimensional function into many lower-dimensional functions connected in a feedback loop.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7045</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Taming Chaotic Circuits</title>
<link>https://hdl.handle.net/1721.1/7044</link>
<description>Taming Chaotic Circuits
Bradley, Elizabeth
Control algorithms that exploit chaotic  behavior can vastly improve the performance  of many practical and useful systems. The  program Perfect Moment is built around a  collection of such techniques. It autonomously  explores a dynamical system's behavior,  using rules embodying theorems and  definitions from nonlinear dynamics to zero in  on interesting and useful parameter ranges  and state-space regions. It then constructs a  reference trajectory based on that information  and causes the system to follow it. This  program and its results are illustrated with  several examples, among them the phase-locked loop, where sections of chaotic  attractors are used to increase the capture  range of the circuit.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7044</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Analysis and Synthesis of Controllers for Dynamical Systems Based On P</title>
<link>https://hdl.handle.net/1721.1/7043</link>
<description>Automatic Analysis and Synthesis of Controllers for Dynamical Systems Based On P
Zhao, Feng
I present a novel design methodology for the synthesis of automatic controllers, together with a computational environment, the Control Engineer's Workbench, integrating a suite of programs that automatically analyze and design controllers for high-performance, global control of nonlinear systems. This work demonstrates that difficult control synthesis tasks can be automated, using programs that actively exploit and efficiently represent knowledge of nonlinear dynamics and phase space and effectively use that representation to guide and perform the control design. The Control Engineer's Workbench combines powerful numerical and symbolic computations with artificial intelligence reasoning techniques. As a demonstration, the Workbench automatically designed a high-quality maglev controller that outperforms a previous linear design by a factor of 20.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7043</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Motion Vision by Fixation</title>
<link>https://hdl.handle.net/1721.1/7042</link>
<description>Robot Motion Vision by Fixation
Taalebinezhaad, M. Ali
In many motion-vision scenarios, a camera (mounted on a moving vehicle) takes images of an environment to find the "motion" and shape. We introduce a direct method called fixation for solving this motion-vision problem in its general case. Fixation uses neither feature correspondence nor optical flow. Instead, spatio-temporal brightness gradients are used directly. In contrast to previous direct methods, fixation does not restrict the motion or the environment. Moreover, the fixation method requires neither tracked images as its input nor mechanical tracking for obtaining fixated images. Experimental results on real images are presented, and implementation issues and techniques are discussed.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7042</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Vector Signal Processing Approach to Color</title>
<link>https://hdl.handle.net/1721.1/7041</link>
<description>A Vector Signal Processing Approach to Color
Sung, Kah-Kay
Surface (Lambertian) color is a useful visual cue for analyzing the material composition of scenes. This thesis adopts a signal processing approach to color vision. It represents color images as fields of 3D vectors, from which we extract region and boundary information. The first problem we face is one of secondary imaging effects that make image color different from surface color. We demonstrate a simple but effective polarization-based technique that corrects for these effects. We then propose a systematic approach to scalarizing color that allows us to augment classical image processing tools and concepts for multi-dimensional color signals.
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7041</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Why are There so Few Female Computer Scientists?</title>
<link>https://hdl.handle.net/1721.1/7040</link>
<description>Why are There so Few Female Computer Scientists?
Spertus, Ellen
This report examines why women pursue  careers in computer science and related  fields far less frequently than men do. In 1990,  only 13% of PhDs in computer science went  to women, and only 7.8% of computer science  professors were female. Causes include the  different ways in which boys and girls are  raised, the stereotypes of female engineers,  subtle biases that females face, problems  resulting from working in predominantly male  environments, and sexual biases in  language. A theme of the report is that  women's underrepresentation is not primarily  due to direct discrimination but to  subconscious behavior that perpetuates the  status quo.
</description>
<pubDate>Thu, 01 Aug 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7040</guid>
<dc:date>1991-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Region-Based Feature Interpretation for Recognizing 3D Models in 2D Images</title>
<link>https://hdl.handle.net/1721.1/7039</link>
<description>Region-Based Feature Interpretation for Recognizing 3D Models in 2D Images
Clemens, David T.
In model-based vision, there are a huge  number of possible ways to match model  features to image features. In addition to  model shape constraints, there are important  match-independent constraints that can  efficiently reduce the search without the  combinatorics of matching. I demonstrate two  specific modules in the context of a complete  recognition system, Reggie. The first is a  region-based grouping mechanism to find  groups of image features that are likely to  come from a single object. The second is an  interpretive matching scheme to make explicit  hypotheses about occlusion and instabilities  in the image features.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7039</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Structure of Human Language</title>
<link>https://hdl.handle.net/1721.1/7038</link>
<description>Computational Structure of Human Language
Ristad, Eric Sven
The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: decide whether a given sound is a possible sound of a given language; disambiguate a sequence of words; and compute the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely-accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternate linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7038</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent Aggregates (CA): An Object-Oriented Language for Fine-Grained Message-Passing Machines</title>
<link>https://hdl.handle.net/1721.1/7037</link>
<description>Concurrent Aggregates (CA): An Object-Oriented Language for Fine-Grained Message-Passing Machines
Chien, Andrew Andai
Fine-grained parallel machines have the  potential for very high speed computation. To  program massively-concurrent MIMD  machines, programmers need tools for  managing complexity. These tools should not  restrict program concurrency. Concurrent  Aggregates (CA) provides multiple-access  data abstraction tools, Aggregates, which can  be used to implement abstractions with  virtually unlimited potential for concurrency.  Such tools allow programmers to modularize  programs without reducing concurrency. I  describe the design, motivation,  implementation and evaluation of Concurrent  Aggregates. CA has been used to construct a  number of application programs. Multi-access  data abstractions are found to be useful in  constructing highly concurrent programs.
</description>
<pubDate>Sun, 01 Jul 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7037</guid>
<dc:date>1990-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The PHD: A Planar, Harmonic Drive Robot for Joint Torque Control</title>
<link>https://hdl.handle.net/1721.1/7036</link>
<description>The PHD: A Planar, Harmonic Drive Robot for Joint Torque Control
Thompson, Bruce R.
This thesis details the development of a model of a seven-degree-of-freedom manipulator for position control. It then discusses the design and construction of the PHD, a robot built to serve two purposes: first, to perform research on joint torque control schemes, and second, to determine the important dynamic characteristics of the Harmonic Drive. The PHD is a planar, three-degree-of-freedom arm with torque sensors integral to each joint. Preliminary testing has shown that a simple linear spring model of the Harmonic Drive's flexibility is suitable in many situations.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7036</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pi: A Parallel Architecture Interface for Multi-Model Execution</title>
<link>https://hdl.handle.net/1721.1/7035</link>
<description>Pi: A Parallel Architecture Interface for Multi-Model Execution
Wills, Donald Scott
This thesis defines Pi, a parallel architecture  interface that separates model and machine  issues, allowing them to be addressed  independently. This provides greater flexibility  for both the model and machine builder. Pi  addresses a set of common parallel model  requirements including low latency  communication, fast task switching, low cost  synchronization, efficient storage  management, the ability to exploit locality, and  efficient support for sequential code. Since Pi  provides generic parallel operations, it can  efficiently support many parallel programming  models including hybrids of existing models.  Pi also forms a basis of comparison for  architectural components.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7035</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Control of a Closed-Loop Brushless Torque Actuator</title>
<link>https://hdl.handle.net/1721.1/7034</link>
<description>Design and Control of a Closed-Loop Brushless Torque Actuator
Levin, Michael Dean
This report explores the design and control issues associated with a brushless actuator capable of achieving extremely high torque accuracy. Models of several different motor-sensor configurations were studied to determine their dynamic characteristics. A reaction torque sensor fixed to the motor stator was implemented to decouple the transmission dynamics from the sensor. This resulted in a compact actuator with higher bandwidth and precision than could be obtained with an inline or joint sensor. Testing demonstrated that closed-loop torque accuracy was within 0.1%, and the mechanical bandwidth approached 300 Hz.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7034</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis and Implementation of Robust Grasping Behaviors</title>
<link>https://hdl.handle.net/1721.1/7033</link>
<description>Analysis and Implementation of Robust Grasping Behaviors
Chammas, Camille Z.
This thesis addresses the problem of developing automatic grasping capabilities for robotic hands. Using a 2-jointed and a 4-jointed model of the hand, we establish the geometric conditions necessary for achieving form closure grasps of cylindrical objects. We then define and show how to construct the grasping pre-image for quasi-static (friction dominated) and zero-G (inertia dominated) motions for sensorless and sensor-driven grasps with and without arm motions. While the approach does not rely on detailed modeling, it is computationally inexpensive, reliable, and easy to implement. Example behaviors were successfully implemented on the Salisbury hand and on a planar 2-fingered, 4-degree-of-freedom hand.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7033</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Passive and Active Grasping with a Prehensile Robot End-Effector</title>
<link>https://hdl.handle.net/1721.1/7032</link>
<description>Passive and Active Grasping with a Prehensile Robot End-Effector
Greiner, Helen
This report presents the design of a new type of robot end-effector with inherent mechanical grasping capabilities. Concentrating on designing an end-effector to grasp a simple class of objects, cylinders, allowed a design with only one degree of actuation. The key features of this design are high-bandwidth response to forces, passive grasping capabilities, ease of control, and the ability to wrap around objects with simple geometries, providing form closure. A prototype of this mechanism was built to evaluate these features.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7032</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Control of Human Arm Movement Models and Mechanical Constraints</title>
<link>https://hdl.handle.net/1721.1/7031</link>
<description>The Control of Human Arm Movement Models and Mechanical Constraints
Bennett, David J.
A serial-link manipulator may form a mobile  closed kinematic chain when interacting with  the environment, if it is redundant with respect  to the task degrees of freedom (DOFs) at the  endpoint. If the mobile closed chain assumes  a number of configurations, then loop  consistency equations permit the manipulator  and task kinematics to be calibrated  simultaneously using only the joint angle  readings; endpoint sensing is not required.  Example tasks include a fixed endpoint (0  DOF task), the opening of a door (1 DOF task),  and point contact (3 DOF task). Identifiability  conditions are derived for these various tasks.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7031</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dataflow Computation for the J-Machine</title>
<link>https://hdl.handle.net/1721.1/7030</link>
<description>Dataflow Computation for the J-Machine
Spertus, Ellen
The dataflow model of computation exposes and exploits parallelism in programs without requiring programmer annotation; however, instruction-level dataflow is too fine-grained to be efficient on general-purpose processors. A popular solution is to develop a "hybrid" model of computation where regions of dataflow graphs are combined into sequential blocks of code. I have implemented such a system to allow the J-Machine to run Id programs, leaving exposed a high amount of parallelism --- such as among loop iterations. I describe this system and provide an analysis of its strengths and weaknesses and those of the J-Machine, along with ideas for improvement.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7030</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Noise Reduction Using Low Weight and Constant Weight Coding Techniques</title>
<link>https://hdl.handle.net/1721.1/7029</link>
<description>Noise Reduction Using Low Weight and Constant Weight Coding Techniques
Tabor, Jeff F.
Signalling off-chip requires significant current.  As a result, a chip's power-supply current  changes drastically during certain output-bus  transitions. These current fluctuations cause  a voltage drop between the chip and circuit  board due to the parasitic inductance of the  power-supply package leads. Digital  designers often go to great lengths to reduce  this "transmitted" noise. Cray, for instance,  carefully balances output signals using a  technique called differential signalling to  guarantee a chip has constant output current.  Transmitted-noise reduction costs Cray a  factor of two in output pins and wires. Coding  achieves similar results at smaller costs.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7029</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>MARVEL: A System for Recognizing World Locations with Stereo Vision</title>
<link>https://hdl.handle.net/1721.1/7028</link>
<description>MARVEL: A System for Recognizing World Locations with Stereo Vision
Braunegg, David Jerome
To use a world model, a mobile robot must be  able to determine its own position in the  world. To support truly autonomous  navigation, I present MARVEL, a system that  builds and maintains its own models of world  locations and uses these models to  recognize its world position from stereo vision  input. MARVEL is designed to be robust with  respect to input errors and to respond to a  gradually changing world by updating its world  location models. I present results from real-world tests of the system that demonstrate its  reliability. MARVEL fits into a world modeling  system under development.
</description>
<pubDate>Fri, 01 Jun 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7028</guid>
<dc:date>1990-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Distributed Model for Mobile Robot Environment-Learning and Navigation</title>
<link>https://hdl.handle.net/1721.1/7027</link>
<description>A Distributed Model for Mobile Robot Environment-Learning and Navigation
Mataric, Maja J.
A distributed method for mobile robot  navigation, spatial learning, and path planning  is presented. It is implemented on a sonar-based physical robot, Toto, consisting of three  competence layers: 1) Low-level navigation: a  collection of reflex-like rules resulting in  emergent boundary-tracing. 2) Landmark  detection: dynamically extracts landmarks  from the robot's motion. 3) Map learning:  constructs a distributed map of landmarks.  The parallel implementation allows for  localization in constant time. Spreading of  activation computes both topological and  physical shortest paths in linear time. The  main issues addressed are: distributed,  procedural, and qualitative representation and  computation, emergent behaviors, dynamic  landmarks, minimized communication.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7027</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fat-Tree Routing for Transit</title>
<link>https://hdl.handle.net/1721.1/7026</link>
<description>Fat-Tree Routing for Transit
DeHon, Andre
The Transit network provides high-speed,  low-latency, fault-tolerant interconnect for  high-performance, multiprocessor computers.  The basic connection scheme for Transit  uses bidelta style, multistage networks to  support up to 256 processors. Scaling to  larger machines by simply extending the  bidelta network topology will result in a  uniform degradation of network latency  between all processors. By employing a fat-tree network structure in larger systems, the  network provides locality and universality  properties which can help minimize the  impact of scaling on network latency. This  report details the topology and construction  issues associated with integrating Transit  routing technology into fat-tree interconnect  topologies.
</description>
<pubDate>Thu, 01 Feb 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7026</guid>
<dc:date>1990-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>KAM: Automatic Planning and Interpretation of Numerical Experiments Using Geometrical Methods</title>
<link>https://hdl.handle.net/1721.1/7025</link>
<description>KAM: Automatic Planning and Interpretation of Numerical Experiments Using Geometrical Methods
Yip, Kenneth Man-Kam
KAM is a computer program that can  automatically plan, monitor, and interpret  numerical experiments with Hamiltonian  systems with two degrees of freedom. The  program has recently helped solve an open  problem in hydrodynamics. Unlike other  approaches to qualitative reasoning about  physical system dynamics, KAM embodies a  significant amount of knowledge about  nonlinear dynamics. KAM's ability to control  numerical experiments arises from the fact  that it not only produces pictures for us to see,  but also looks at (sic---in its mind's eye)  the pictures it draws to guide its own actions.  KAM is organized in three semantic levels:  orbit recognition, phase space searching, and  parameter space searching. Within each level  spatial properties and relationships that are  not explicitly represented in the initial  representation are extracted by applying three  operations ---(1) aggregation, (2) partition,  and (3) classification--- iteratively.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7025</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three-Dimensional Motion Estimation Using Shading Information in Multiple Frames</title>
<link>https://hdl.handle.net/1721.1/7024</link>
<description>Three-Dimensional Motion Estimation Using Shading Information in Multiple Frames
Schott, Jean-Pierre
A new formulation for recovering the structure and motion parameters of a moving patch using both motion and shading information is presented. It is based on a new differential constraint equation (FICE) that links the spatiotemporal gradients of irradiance to the motion and structure parameters and the temporal variations of the surface shading. The FICE separates the contribution to the irradiance spatiotemporal gradients of the gradients due to texture from those due to shading, allowing the FICE to be used for both textured and textureless surfaces. The new approach, combining motion and shading information, leads directly to two different contributions: it can compensate for the effects of shading variations in recovering the shape and motion, and it can exploit the shading/illumination effects to recover motion and shape when they cannot be recovered without it. The FICE formulation is also extended to multiple frames.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7024</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling the Somatic Electrical Response of Hippocampal Pyramidal Neurons</title>
<link>https://hdl.handle.net/1721.1/7023</link>
<description>Modelling the Somatic Electrical Response of Hippocampal Pyramidal Neurons
Borg-Graham, Lyle J.
A modeling study of hippocampal pyramidal neurons is described. This study is based on simulations using HIPPO, a program which simulates the somatic electrical activity of these cells. HIPPO is based on a) descriptions of eleven non-linear conductances that have been either reported for this class of cell in the literature or postulated in the present study, and b) an approximation of the electrotonic structure of the cell that is derived in this thesis, based on data for the linear properties of these cells. HIPPO is used a) to integrate empirical data from a variety of sources on the electrical characteristics of this type of cell, b) to investigate the functional significance of the various elements that underlie the electrical behavior, and c) to provide a tool for the electrophysiologist to supplement direct observation of these cells and a method of testing speculations regarding parameters that are not accessible.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7023</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Probabilistic Strategies for Robot Tasks</title>
<link>https://hdl.handle.net/1721.1/7022</link>
<description>On Probabilistic Strategies for Robot Tasks
Erdmann, Michael A.
Robots must act purposefully and  successfully in an uncertain world. Sensory  information is inaccurate or noisy, actions  may have a range of effects, and the robot's  environment is only partially and imprecisely  modeled. This thesis introduces active  randomization by a robot, both in selecting  actions to execute and in focusing on sensory  information to interpret, as a basic tool for  overcoming uncertainty. An example of  randomization is given by the strategy of  shaking a bin containing a part in order to  orient the part in a desired stable state with  some high probability. Another example  consists of first using reliable sensory  information to bring two parts close together,  then relying on short random motions to  actually mate the two parts, once the part  motions lie below the available sensing  resolution. Further examples include tapping  parts that are tightly wedged, twirling gears  before trying to mesh them, and vibrating  parts to facilitate a mating operation.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7022</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computation of Color</title>
<link>https://hdl.handle.net/1721.1/7021</link>
<description>The Computation of Color
Hurlbert, Anya C.
This thesis takes an interdisciplinary  approach to the study of color vision,  focussing on the phenomenon of color  constancy formulated as a computational  problem. The primary contributions of the  thesis are (1) the demonstration of a formal  framework for lightness algorithms; (2) the  derivation of a new lightness algorithm based  on regularization theory; (3) the synthesis of  an adaptive lightness algorithm using  "learning" techniques; (4) the development of  an image segmentation algorithm that uses  luminance and color information to mark  material boundaries; and (5) an experimental  investigation into the cues that human  observers use to judge the color of the  illuminant. Other computational approaches  to color are reviewed and some of their links  to psychophysics and physiology are  explored.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/7021</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Implementation of a Flexible Robot</title>
<link>https://hdl.handle.net/1721.1/6983</link>
<description>Design and Implementation of a Flexible Robot
Christian, Andrew Dean
This robot has low natural frequencies of  vibration. Insights into the problems of  designing joint and link flexibility are  discussed. The robot has three flexible rotary  actuators and two flexible, interchangeable  links, and is controlled by three independent  processors on a VMEbus. Results from  experiments on the control of residual  vibration for different types of robot motion are  presented. Impulse prefiltering and slowly  accelerating moves are compared and shown  to be effective at reducing residual vibration.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6983</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Colony Architecture for an Artificial Creature</title>
<link>https://hdl.handle.net/1721.1/6982</link>
<description>A Colony Architecture for an Artificial Creature
Connell, Jonathan
This report describes a working autonomous mobile robot whose only goal is to collect and return empty soda cans. It operates in an unmodified office environment occupied by moving people. The robot is controlled by a collection of over 40 independent "behaviors" distributed over a loosely coupled network of 24 processors. Together this ensemble helps the robot locate cans with its laser rangefinder, collect them with its on-board manipulator, and bring them home using a compass and an array of proximity sensors. We discuss the advantages of using such a multi-agent control system and show how to decompose the required tasks into component activities. We also examine the benefits and limitations of spatially local, stateless, and independent computation by the agents.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6982</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Compilation Strategy for Numerical Programs Based on Partial Evaluation</title>
<link>https://hdl.handle.net/1721.1/6981</link>
<description>A Compilation Strategy for Numerical Programs Based on Partial Evaluation
Berlin, Andrew A.
This work demonstrates how partial evaluation can be put to practical use in the domain of high-performance numerical computation. I have developed a technique for performing partial evaluation by using placeholders to propagate intermediate results. For an important class of numerical programs, a compiler based on this technique improves performance by an order of magnitude over conventional compilation techniques. I show that by eliminating inherently sequential data-structure references, partial evaluation exposes the low-level parallelism inherent in a computation. I have implemented several parallel scheduling and analysis programs that study the tradeoffs involved in the design of an architecture that can effectively utilize this parallelism. I present these results using the 9-body gravitational attraction problem as an example.
</description>
<pubDate>Wed, 01 Feb 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6981</guid>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multisensor Modeling Underwater with Uncertain Information</title>
<link>https://hdl.handle.net/1721.1/6980</link>
<description>Multisensor Modeling Underwater with Uncertain Information
Stewart, W. Kenneth, Jr.
This thesis develops an approach to the construction of multidimensional stochastic models for intelligent systems exploring an underwater environment. It describes methods for building models by a three-dimensional spatial decomposition of stochastic, multisensor feature vectors. New sensor information is incrementally incorporated into the model by stochastic backprojection. Error and ambiguity are explicitly accounted for by blurring a spatial projection of remote sensor data before incorporation. The stochastic models can be used to derive surface maps or other representations of the environment. The methods are demonstrated on data sets from multibeam bathymetric surveying, towed sidescan bathymetry, towed sidescan acoustic imagery, and high-resolution scanning sonar aboard a remotely operated vehicle.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6980</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Derivation of an Efficient Rule System Pattern Matcher</title>
<link>https://hdl.handle.net/1721.1/6979</link>
<description>Derivation of an Efficient Rule System Pattern Matcher
Wertheimer, Jeremy M.
Formalizing algorithm derivations is a  necessary prerequisite for developing  automated algorithm design systems. This  report describes a derivation of an algorithm  for incrementally matching conjunctive  patterns against a growing database. This  algorithm, which is modeled on the Rete  matcher used in the OPS5 production system,  forms a basis for efficiently implementing a  rule system. The highlights of this derivation  are: (1) a formal specification for the rule  system matching problem, (2) derivation of an  algorithm for this task using a lattice-theoretic  model of conjunctive and disjunctive variable  substitutions, and (3) optimization of this  algorithm, using finite differencing, for  incrementally processing new data.
</description>
<pubDate>Wed, 01 Feb 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6979</guid>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance Evaluation of the Scheme 86 and HP Precision Architecture</title>
<link>https://hdl.handle.net/1721.1/6978</link>
<description>Performance Evaluation of the Scheme 86 and HP Precision Architecture
Wu, Henry M.
The Scheme86 and the HP Precision architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface. The latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.
</description>
<pubDate>Sat, 01 Apr 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6978</guid>
<dc:date>1989-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory of Quantitative Inference Applied to a Mechanical Design Compiler</title>
<link>https://hdl.handle.net/1721.1/6977</link>
<description>A Theory of Quantitative Inference Applied to a Mechanical Design Compiler
Ward, Allen C.
This thesis presents the ideas underlying a computer program that takes as input a schematic of a mechanical or hydraulic power transmission system, plus specifications and a utility function, and returns catalog numbers from predefined catalogs for the optimal selection of components implementing the design. Unlike programs for designing single components or systems, the program provides the designer with a high-level "language" in which to compose new designs. It then performs some of the detailed design process. The process of "compilation" is based on a formalization of quantitative inferences about hierarchically organized sets of artifacts and operating conditions. This allows design compilation without the exhaustive enumeration of alternatives.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6977</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Unsupervised Learning in Feedforward Neural Networks</title>
<link>https://hdl.handle.net/1721.1/6976</link>
<description>Optimal Unsupervised Learning in Feedforward Neural Networks
Sanger, Terence D.
We investigate the properties of feedforward neural networks trained with Hebbian learning algorithms. A new unsupervised algorithm is proposed which produces statistically uncorrelated outputs. The algorithm causes the weights of the network to converge to the eigenvectors of the input correlation matrix with the largest eigenvalues. The algorithm is closely related to the technique of Self-supervised Backpropagation, as well as other algorithms for unsupervised learning. Applications of the algorithm to texture processing, image coding, and stereo depth edge detection are given. We show that the algorithm can lead to the development of filters qualitatively similar to those found in primate visual cortex.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6976</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Dynamic Structure of Everyday Life</title>
<link>https://hdl.handle.net/1721.1/6975</link>
<description>The Dynamic Structure of Everyday Life
Agre, Philip E.
Computational theories of action have generally understood the organized nature of human activity through the construction and execution of plans. By consigning the phenomena of contingency and improvisation to peripheral roles, this view has led to impractical technical proposals. As an alternative, I suggest that contingency is a central feature of everyday activity and that improvisation is the central kind of human activity. I also offer a computational model of certain aspects of everyday routine activity based on an account of improvised activity called "running arguments" and an account of representation for situated agents called "deictic representation."
</description>
<pubDate>Sat, 01 Oct 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6975</guid>
<dc:date>1988-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Systems Approach to the Torque Control of a Permanent Magnet Brushless Motor</title>
<link>https://hdl.handle.net/1721.1/6974</link>
<description>A Systems Approach to the Torque Control of a Permanent Magnet Brushless Motor
Paul, Benjamin J.
Many approaches to force control have assumed the ability to command torques accurately. Concurrently, much research has been devoted to developing accurate torque actuation schemes. Often, torque sensors have been utilized to close a feedback loop around output torque. In this paper, the torque control of a brushless motor is investigated through: the design, construction, and utilization of a joint torque sensor for feedback control; and the development and implementation of techniques for phase-current-based feedforward torque control. It is concluded that simply closing a torque loop is no longer necessarily the best alternative, since reasonably accurate current-based torque control is achievable.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6974</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Concurrent Smalltalk Compiler for the Message-Driven Processor</title>
<link>https://hdl.handle.net/1721.1/6973</link>
<description>A Concurrent Smalltalk Compiler for the Message-Driven Processor
Horwat, Waldemar
This thesis describes Optimist, an optimizing  compiler for the Concurrent Smalltalk  language developed by the Concurrent VLSI  Architecture Group. Optimist compiles  Concurrent Smalltalk to the assembly  language of the Message-Driven Processor  (MDP). The compiler includes numerous  optimization techniques such as dead code  elimination, dataflow analysis, constant  folding, move elimination, concurrency  analysis, duplicate code merging, tail  forwarding, use of register variables, as well  as various MDP-specific optimizations in the  code generator.  The MDP presents some unique challenges  and opportunities for compilation. Due to the  MDP's small memory size, it is critical that the  size of the generated code be as small as  possible. The MDP is an inherently concurrent  processor with efficient mechanisms for  sending and receiving messages; the  compiler takes advantage of these  mechanisms. The MDP's tagged architecture  allows very efficient support of object-oriented  languages such as Concurrent Smalltalk.  The initial goals for the MDP were to have the  MDP execute about twenty instructions per  method and contain 4096 words of memory.  This compiler shows that these goals are too  optimistic -- most methods are longer, both in  terms of code size and running time. Thus,  the memory size of the MDP should be  increased.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6973</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task-Level Robot Learning</title>
<link>https://hdl.handle.net/1721.1/6972</link>
<description>Task-Level Robot Learning
Aboaf, Eric W.
We are investigating how to program robots so that they learn from experience. Our goal is to develop principled methods of learning that can improve a robot's performance of a wide range of dynamic tasks. We have developed task-level learning that successfully improves a robot's performance of two complex tasks, ball-throwing and juggling. With task-level learning, a robot practices a task, monitors its own performance, and uses that experience to adjust its task-level commands. This learning method serves to complement other approaches, such as model calibration, for improving robot performance.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6972</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-Based Troubleshooting of Digital Systems</title>
<link>https://hdl.handle.net/1721.1/6971</link>
<description>Model-Based Troubleshooting of Digital Systems
Hamscher, Walter Charles
This thesis describes a methodology, a representation, and an implemented program for troubleshooting digital circuit boards at roughly the level of expertise one might expect in a human novice. Existing methods for model-based troubleshooting have not scaled up to deal with complex circuits, in part because traditional circuit models do not explicitly represent aspects of the device that troubleshooters would consider important. For complex devices the model of the target device should be constructed with the goal of troubleshooting explicitly in mind. Given that methodology, the principal contributions of the thesis are ways of representing complex circuits to help make troubleshooting feasible. Temporally coarse behavior descriptions are a particularly powerful simplification. Instantiating this idea for the circuit domain produces a vocabulary for describing digital signals. The vocabulary has a level of temporal detail sufficient to make useful predictions about the response of the circuit while it remains coarse enough to make those predictions computationally tractable. Other contributions are principles for using these representations. Although not embodied in a program, these principles are sufficiently concrete that models can be constructed manually from existing circuit descriptions such as schematics, part specifications, and state diagrams. One such principle is that if there are components with particularly likely failure modes or failure modes in which their behavior is drastically simplified, this knowledge should be incorporated into the model. Further contributions include the solution of technical problems resulting from the use of explicit temporal representations and design descriptions with tangled hierarchies.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6971</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Robot Dynamic Performance for Endpoint Force Control</title>
<link>https://hdl.handle.net/1721.1/6970</link>
<description>Modeling Robot Dynamic Performance for Endpoint Force Control
Eppinger, Steven D.
This research aims to understand the  fundamental dynamic behavior of servo-controlled machinery in response to various  types of sensory feedback. As an example of  such a system, we study robot force control, a  scheme which promises to greatly expand the  capabilities of industrial robots by allowing  manipulators to interact with uncertain and  dynamic tasks. Dynamic models are  developed which allow the effects of actuator  dynamics, structural flexibility, and workpiece  interaction to be explored in the frequency and  time domains. The models are used first to  explain the causes of robot force control  instability, and then to find methods of  improving this performance.
</description>
<pubDate>Thu, 01 Sep 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6970</guid>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boundaries and Topological Algorithms</title>
<link>https://hdl.handle.net/1721.1/6969</link>
<description>Boundaries and Topological Algorithms
Fleck, Margaret Morrison
This thesis develops a model for the  topological structure of situations. In this  model, the topological structure of space is  altered by the presence or absence of  boundaries, such as those at the edges of  objects. This allows the intuitive meaning of  topological concepts such as region  connectivity, function continuity, and  preservation of topological structure to be  modeled using the standard mathematical  definitions. The thesis shows that these  concepts are important in a wide range of  artificial intelligence problems, including low-level vision, high-level vision, natural  language semantics, and high-level  reasoning.
</description>
<pubDate>Thu, 01 Sep 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6969</guid>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Compliant Motion of Objects with an Articulated Hand</title>
<link>https://hdl.handle.net/1721.1/6968</link>
<description>Generating Compliant Motion of Objects with an Articulated Hand
Chiu, Stephen L.
The flexibility of the robot is the key to its success as a viable aid to production. Flexibility of a robot can be explained in two directions. The first is to increase the physical generality of the robot so that it can be easily reconfigured to handle a wide variety of tasks. The second is to increase the ability of the robot to interact with its environment so that tasks can still be successfully completed in the presence of uncertainties. Articulated hands are capable of adapting to a wide variety of grasp shapes, hence reducing the need for special tooling. The availability of low-mass, high-bandwidth points close to the manipulated object also offers significant improvements in the control of fine motions. This thesis provides a framework for using articulated hands to perform local manipulation of objects. In particular, it addresses the issues in effecting compliant motions of objects in Cartesian space. The Stanford/JPL hand is used as an example to illustrate a number of concepts. The examples provide a unified methodology for controlling articulated hands grasping with point contacts. We also present a high-level hand programming system based on the methodologies developed in this thesis. Compliant motion of grasped objects and dexterous manipulations can be easily described in the LISP-based hand programming language.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6968</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of Grouping in Visual Object Recognition.</title>
<link>https://hdl.handle.net/1721.1/6967</link>
<description>The Use of Grouping in Visual Object Recognition.
Jacobs, David W.
The report describes a recognition system called GROPER, which performs grouping by using distance and relative orientation constraints that estimate the likelihood of different edges in an image coming from the same object. The thesis presents both a theoretical analysis of the grouping problem and a practical implementation of a grouping system. GROPER also uses an indexing module to allow it to make use of knowledge of different objects, any of which might appear in an image. We test GROPER by comparing it to a similar recognition system that does not use grouping.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6967</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis and Control of Robot Manipulators with Kinematic Redundancy</title>
<link>https://hdl.handle.net/1721.1/6966</link>
<description>Analysis and Control of Robot Manipulators with Kinematic Redundancy
Chang, Pyung H.
A closed-form solution formula for the kinematic control of manipulators with redundancy is derived using the Lagrangian multiplier method. A differential relationship equivalent to the Resolved Motion Method has also been derived. The proposed method is proved to provide the exact equilibrium state for the Resolved Motion Method. This exactness fixes the repeatability problem in the Resolved Motion Method and establishes a fixed transformation from workspace to joint space. Owing to this exactness, the method is also demonstrated to give more accurate trajectories than the Resolved Motion Method. In addition, a new performance measure for redundancy control has been developed. This measure, if used with kinematic control methods, helps achieve dexterous movements, including singularity avoidance. Compared to other measures such as the manipulability measure and the condition number, this measure tends to give superior performance in terms of preserving the repeatability property and providing smoother joint velocity trajectories. Using the fixed transformation property, Taylor's Bounded Deviation Paths Algorithm has been extended to redundant manipulators.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6966</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Dynamics Model of a Cartesian Robot</title>
<link>https://hdl.handle.net/1721.1/6965</link>
<description>Structural Dynamics Model of a Cartesian Robot
Reynoso, Alfonso Garcia
Methods are developed for predicting vibration response characteristics of systems which change configuration during operation. A cartesian robot, an example of such a position-dependent system, served as a test case for these methods and was studied in detail. The chosen system model was formulated using the technique of Component Mode Synthesis (CMS). The model assumes that the system is slowly varying, and connects the carriages to each other and to the robot structure at the slowly varying connection points. The modal data required for each component is obtained experimentally in order to get a realistic model. The analysis results in prediction of the vibrations that are produced by the inertia forces as well as the gravity and friction forces which arise when the robot carriages move with some prescribed motion. Computer simulations and experimental determinations are conducted in order to calculate the vibrations at the robot end-effector. Comparisons are shown to validate the model in two ways: for fixed configuration the mode shapes and natural frequencies are examined, and then for changing configuration the residual vibration at the end of the move is evaluated. A preliminary study was done on a geometrically nonlinear system which also has position-dependency. The system consisted of a flexible four-bar linkage with elastic input and output shafts. The behavior of the rocker-beam is analyzed for different boundary conditions to show how some limiting cases are obtained. A dimensional analysis leads to an evaluation of the consequences of dynamic similarity on the resulting vibration.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6965</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Object Categorization</title>
<link>https://hdl.handle.net/1721.1/6964</link>
<description>Natural Object Categorization
Bobick, Aaron F.
This thesis addresses the problem of categorizing natural objects. To provide a criterion for categorization we propose that the purpose of a categorization is to support the inference of unobserved properties of objects from the observed properties. Because no such set of categories can be constructed in an arbitrary world, we present the Principle of Natural Modes as a claim about the structure of the world. We first define an evaluation function that measures how well a set of categories supports the inference goals of the observer. Entropy measures for property uncertainty and category uncertainty are combined through a free parameter that reflects the goals of the observer. Natural categorizations are shown to be those that are stable with respect to this free parameter. The evaluation function is tested in the domain of leaves and is found to be sensitive to the structure of the natural categories corresponding to the different species. We next develop a categorization paradigm that utilizes the categorization evaluation function in recovering natural categories. A statistical hypothesis generation algorithm is presented that is shown to be an effective categorization procedure. Examples drawn from several natural domains are presented, including data known to be a difficult test case for numerical categorization techniques. We next extend the categorization paradigm such that multiple levels of natural categories are recovered; by means of recursively invoking the categorization procedure both the genera and species are recovered in a population of anaerobic bacteria. Finally, a method is presented for evaluating the utility of features in recovering natural categories. This method also provides a mechanism for determining which features are constrained by the different processes present in a multiple modal world.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6964</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>UNITRAN: A Principle-Based Approach to Machine Translation</title>
<link>https://hdl.handle.net/1721.1/6963</link>
<description>UNITRAN: A Principle-Based Approach to Machine Translation
Dorr, Bonnie Jean
Machine translation has been a particularly difficult problem in the area of Natural Language Processing for over two decades. Early approaches to translation failed in part because interaction effects of complex phenomena made translation appear to be unmanageable. Later approaches to the problem have succeeded (although only bilingually), but are based on many language-specific rules of a context-free nature. This report presents an alternative approach to natural language translation that relies on principle-based descriptions of grammar rather than rule-oriented descriptions. The model that has been constructed is based on abstract principles as developed by Chomsky (1981) and several other researchers working within the "Government and Binding" (GB) framework. Thus, the grammar is viewed as a modular system of principles rather than a large set of ad hoc language-specific rules.
</description>
<pubDate>Tue, 01 Dec 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6963</guid>
<dc:date>1987-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>An O(N) Algorithm for Three-Dimensional N-Body Simulations</title>
<link>https://hdl.handle.net/1721.1/6962</link>
<description>An O(N) Algorithm for Three-Dimensional N-Body Simulations
Zhao, Feng
We develop an algorithm that computes the gravitational potentials and forces on N point-masses interacting in three-dimensional space. The algorithm, based on analytical techniques developed by Rokhlin and Greengard, runs in order N time. In contrast to other fast N-body methods such as tree codes, which only approximate the interaction potentials and forces, this method is exact: it computes the potentials and forces to within any prespecified tolerance up to machine precision. We present an implementation of the algorithm for a sequential machine. We numerically verify the algorithm, and compare its speed with that of an O(N^2) direct force computation. We also describe a parallel version of the algorithm that runs on the Connection Machine in order O(log N) time. We compare experimental results with those of the sequential implementation and discuss how to minimize communication overhead on the parallel machine.
</description>
<pubDate>Thu, 01 Oct 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6962</guid>
<dc:date>1987-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Government-Binding Based Parser for Warlpiri, a Free-Word Order Language</title>
<link>https://hdl.handle.net/1721.1/6961</link>
<description>A Government-Binding Based Parser for Warlpiri, a Free-Word Order Language
Kashket, Michael B.
Free-word order languages have long posed significant problems for standard parsing algorithms. This thesis presents an implemented parser, based on Government-Binding (GB) theory, for a particular free-word order language, Warlpiri, an aboriginal language of central Australia. The words in a sentence of a free-word order language may swap about relatively freely with little effect on meaning: the permutations of a sentence mean essentially the same thing. It is assumed that this similarity in meaning is directly reflected in the syntax. The parser presented here properly processes free word order because it assigns the same syntactic structure to the permutations of a single sentence. The parser also handles fixed word order, as well as other phenomena. On the view presented here, there is no such thing as a "configurational" or "non-configurational" language. Rather, there is a spectrum of languages that are more or less ordered. The operation of this parsing system is quite different in character from that of more traditional rule-based parsing systems, e.g., context-free parsers. In this system, parsing is carried out via the construction of two different structures, one encoding precedence information and one encoding hierarchical information. This bipartite representation is the key to handling both free- and fixed-order phenomena. This thesis first presents an overview of the portion of Warlpiri that can be parsed. Following this is a description of the linguistic theory on which the parser is based. The chapter after that describes the representations and algorithms of the parser. In conclusion, the parser is compared to related work. The appendix contains a substantial list of test cases, both grammatical and ungrammatical, that the parser has actually processed.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6961</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing the Dexterity of a Robot Hand Using Controlled Slip</title>
<link>https://hdl.handle.net/1721.1/6960</link>
<description>Enhancing the Dexterity of a Robot Hand Using Controlled Slip
Brock, David L.
Humans can effortlessly manipulate objects in their hands, dexterously sliding and twisting them within their grasp. Robots, however, have none of these capabilities; they simply grasp objects rigidly in their end effectors. To investigate this common form of human manipulation, an analysis of controlled slipping of a grasped object within a robot hand was performed. The Salisbury robot hand demonstrated many of these controlled slipping techniques, illustrating many results of this analysis. First, the possible slipping motions were found as a function of the location, orientation, and types of contact between the hand and object. Second, for a given grasp, the contact types were determined as a function of the grasping force and the external forces on the object. Finally, by changing the grasping force, the robot modified the constraints on the object and effected controlled slipping motions.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6960</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>ONTIC: A Knowledge Representation System for Mathematics</title>
<link>https://hdl.handle.net/1721.1/6959</link>
<description>ONTIC: A Knowledge Representation System for Mathematics
McAllester, David Allen
Ontic is an interactive system for developing and verifying mathematics. Ontic's verification mechanism is capable of automatically finding and applying information from a library containing hundreds of mathematical facts. Starting with only the axioms of Zermelo-Fraenkel set theory, the Ontic system has been used to build a data base of definitions and lemmas leading to a proof of the Stone representation theorem for Boolean lattices. The Ontic system has been used to explore issues in knowledge representation, automated deduction, and the automatic use of large data bases.
</description>
<pubDate>Wed, 01 Jul 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6959</guid>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formal Multilevel Hierarchical Verification of Synchronous MOS Circuits</title>
<link>https://hdl.handle.net/1721.1/6958</link>
<description>Formal Multilevel Hierarchical Verification of Synchronous MOS Circuits
Weise, Daniel Wayne
I have designed and implemented a system for the multilevel verification of synchronous MOS VLSI circuits. The system, called Silica Pithecus, accepts the schematic of an MOS circuit and a specification of the circuit's intended digital behavior. Silica Pithecus determines if the circuit meets its specification. If the circuit fails to meet its specification, Silica Pithecus returns to the designer the reason for the failure. Unlike earlier verifiers which modelled primitives (e.g., transistors) as unidirectional digital devices, Silica Pithecus models primitives more realistically. Transistors are modelled as bidirectional devices of varying resistances, and nodes are modelled as capacitors. Silica Pithecus operates hierarchically, interactively, and incrementally. Major contributions of this research include a formal understanding of the relationship between different behavioral descriptions (e.g., signal, boolean, and arithmetic descriptions) of the same device, and a formalization of the relationship between the structure, behavior, and context of a device. Given these formal structures, my methods find sufficient conditions on the inputs of circuits which guarantee the correct operation of the circuit in the desired descriptive domain. These methods are algorithmic and complete. They also handle complex phenomena such as races and charge sharing. Informal notions such as races and hazards are shown to be derivable from the correctness conditions used by my methods.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6958</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principle-Based Parsing</title>
<link>https://hdl.handle.net/1721.1/6957</link>
<description>Principle-Based Parsing
Berwick, Robert C.
During the past few years, there has been much discussion of a shift from rule-based systems to principle-based systems for natural language processing. This paper outlines the major computational advantages of principle-based parsing and its differences from the usual rule-based approach, and surveys several existing principle-based parsing systems used for handling languages as diverse as Warlpiri, English, and Spanish, as well as for language translation.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6957</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Texture Boundaries in Images</title>
<link>https://hdl.handle.net/1721.1/6956</link>
<description>Finding Texture Boundaries in Images
Voorhees, Harry
Texture provides one cue for identifying the physical cause of an intensity edge, such as occlusion, shadow, surface orientation, or reflectance change. Marr, Julesz, and others have proposed that texture is represented by small lines or blobs, called 'textons' by Julesz [1981a], together with their attributes, such as orientation, elongation, and intensity. Psychophysical studies suggest that texture boundaries are perceived where distributions of attributes over neighborhoods of textons differ significantly. However, these studies, which deal with synthetic images, neglect to consider two important questions: How can these textons be extracted from images of natural scenes? And how, exactly, are texture boundaries then found? This thesis proposes answers to these questions by presenting an algorithm for computing blobs from natural images and a statistic for measuring the difference between two sample distributions of blob attributes. As part of the blob detection algorithm, methods for estimating image noise are presented, which are applicable to edge detection as well.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6956</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Object Recognition Using Libraries of Parameterized Model Sub-Parts</title>
<link>https://hdl.handle.net/1721.1/6955</link>
<description>Hierarchical Object Recognition Using Libraries of Parameterized Model Sub-Parts
Ettinger, Gil J.
This thesis describes the development of a model-based vision system that exploits hierarchies of both object structure and object scale. The focus of the research is to use these hierarchies to achieve robust recognition based on effective organization and indexing schemes for model libraries. The goal of the system is to recognize parameterized instances of non-rigid model objects contained in a large knowledge base despite the presence of noise and occlusion. Robustness is achieved by developing a system that can recognize viewed objects that are scaled or mirror-image instances of the known models or that contain component sub-parts with different relative scaling, rotation, or translation than in the models. The approach taken in this thesis is to develop an object shape representation that incorporates a component sub-part hierarchy, to allow for efficient and correct indexing into an automatically generated model library as well as for relative parameterization among sub-parts, and a scale hierarchy, to allow for a general-to-specific recognition procedure. After analysis of the issues and inherent tradeoffs in the recognition process, a system is implemented using a representation based on significant contour curvature changes and a recognition engine based on geometric constraints of feature properties. Examples of the system's performance are given, followed by an analysis of the results. In conclusion, the system's benefits and limitations are presented.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6955</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Model for Observation in Quantum Mechanics</title>
<link>https://hdl.handle.net/1721.1/6954</link>
<description>A Computational Model for Observation in Quantum Mechanics
Rozas, Guillermo Juan
A computational model of observation in quantum mechanics is presented. The model provides a clean and simple computational paradigm which can be used to illustrate and possibly explain some of the unintuitive and unexpected behavior of some quantum mechanical systems. As examples, the model is used to simulate three seminal quantum mechanical experiments. The results obtained agree with the predictions of quantum mechanics (and physical measurements), yet the model is perfectly deterministic and maintains a notion of locality.
</description>
<pubDate>Sun, 01 Mar 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6954</guid>
<dc:date>1987-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>AFL-1: A Programming Language for Massively Concurrent Computers</title>
<link>https://hdl.handle.net/1721.1/6953</link>
<description>AFL-1: A Programming Language for Massively Concurrent Computers
Blelloch, Guy
Computational models are arising in which programs are constructed by specifying large networks of very simple computational devices. Although such models can potentially make use of a massive amount of concurrency, their usefulness as a programming model for the design of complex systems will ultimately be decided by the ease with which such networks can be programmed (constructed). This thesis outlines a language for specifying computational networks. The language (AFL-1) consists of a set of primitives and a mechanism to group these elements into higher level structures. An implementation of this language runs on the Thinking Machines Corporation Connection Machine. Two significant examples were programmed in the language: an expert system (CIS) and a planning system (AFPLAN). These systems are explained and analyzed in terms of how they compare with similar systems written in conventional languages.
</description>
<pubDate>Sat, 01 Nov 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6953</guid>
<dc:date>1986-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>ACTORS: A Model of Concurrent Computation in Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/6952</link>
<description>ACTORS: A Model of Concurrent Computation in Distributed Systems
Agha, Gul Abdulnabi
A foundational model of concurrency is developed in this thesis. We examine issues in the design of parallel systems and show why the actor model is suitable for exploiting large-scale parallelism. Concurrency in actors is constrained only by the availability of hardware resources and by the logical dependence inherent in the computation. Unlike dataflow and functional programming, however, actors are dynamically reconfigurable and can model shared resources with changing local state. Concurrency is spawned in actors using asynchronous message-passing, pipelining, and the dynamic creation of actors. This thesis deals with some central issues in distributed computing. Specifically, problems of divergence and deadlock are addressed. For example, actors permit dynamic deadlock detection and removal. The problem of divergence is contained because independent transactions can execute concurrently and potentially infinite processes are nevertheless available for interaction.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6952</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>TEMPEST: A Template Editor for Structured Text</title>
<link>https://hdl.handle.net/1721.1/6951</link>
<description>TEMPEST: A Template Editor for Structured Text
Sterpe, Peter J.
TEMPEST is a full-screen text editor that incorporates a structural paradigm in addition to the more traditional textual paradigm provided by most editors. While the textual paradigm treats the text as a sequence of characters, the structural paradigm treats it as a collection of named blocks which the user can define, group, and manipulate. Blocks can be defined to correspond to the structural features of the text, thereby providing more meaningful objects to operate on than characters or lines. The structural representation of the text is kept in the background, giving TEMPEST the appearance of a typical text editor. The structural and textual interfaces coexist equally, however, so one can always operate on the text from either point of view. TEMPEST's representation scheme provides no semantic understanding of structure. This approach sacrifices depth, but affords a broad range of applicability and requires very little computational overhead. A prototype has been implemented to illustrate the feasibility and potential areas of application of the central ideas. It was developed and runs on an IBM Personal Computer.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6951</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Justified Generalization: Acquiring Procedures from Examples</title>
<link>https://hdl.handle.net/1721.1/6950</link>
<description>Justified Generalization: Acquiring Procedures from Examples
Andreae, Peter Merrett
This thesis describes an implemented system called NODDY for acquiring procedures from examples presented by a teacher. Acquiring procedures from examples involves several different generalization tasks. Generalization is an underconstrained task, and the main issue of machine learning is how to deal with this underconstraint. The thesis presents two principles for constraining generalization on which NODDY is based. The first principle is to exploit domain-based constraints. NODDY demonstrates how such constraints can be used both to reduce the space of possible generalizations to a manageable size and to generate negative examples out of positive examples to further constrain the generalization. The second principle is to avoid spurious generalizations by requiring justification before adopting a generalization. NODDY demonstrates several different ways of justifying a generalization and proposes a way of ordering and searching a space of candidate generalizations based on how much evidence would be required to justify each generalization. Acquiring procedures also involves three types of constructive generalization: inferring loops (a kind of group), inferring complex relations and state variables, and inferring predicates. NODDY demonstrates three constructive generalization methods for these kinds of generalization.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6950</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Motion Planning with Uncertainty</title>
<link>https://hdl.handle.net/1721.1/6949</link>
<description>On Motion Planning with Uncertainty
Erdmann, Michael Andreas
Robots must successfully plan and execute tasks in the presence of uncertainty. Uncertainty arises from errors in modeling, sensing, and control. Planning in the presence of uncertainty constitutes one facet of the general motion planning problem in robotics. This problem is concerned with the automatic synthesis of motion strategies from high level task specifications and geometric models of environments. In order to develop successful motion strategies, it is necessary to understand the effect of uncertainty on the geometry of object interactions. Object interactions, both static and dynamic, may be represented in geometrical terms. This thesis investigates geometrical tools for modeling and overcoming uncertainty. The thesis describes an algorithm for computing backprojections of desired task configurations. Task goals and motion states are specified in terms of a moving object's configuration space. Backprojections specify regions in configuration space from which particular motions are guaranteed to accomplish a desired task. The backprojection algorithm considers surfaces in configuration space that facilitate sliding towards the goal, while avoiding surfaces on which motions may prematurely halt. In executing a motion for a backprojection region, a plan executor must be able to recognize that a desired task has been accomplished. Since sensors are subject to uncertainty, recognition of task success is not always possible. The thesis considers the structure of backprojection regions and of task goals that ensures goal recognizability. The thesis also develops a representation of friction in configuration space, in terms of a friction cone analogous to the real space friction cone. The friction cone provides the backprojection algorithm with a geometrical tool for determining points at which motions may halt.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6949</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Circuit Grammar For Operational Amplifier Design</title>
<link>https://hdl.handle.net/1721.1/6948</link>
<description>A Circuit Grammar For Operational Amplifier Design
Ressler, Andrew Lewis
Electrical circuit designers seldom create really new topologies or use old ones in a novel way. Most designs are known combinations of common configurations tailored for the particular problem at hand. In this thesis I show that much of the behavior of a designer engaged in such ordinary design can be modelled by a clearly defined computational mechanism executing a set of stylized rules. Each of my rules embodies a particular piece of the designer's knowledge. A circuit is represented as a hierarchy of abstract objects, each of which is composed of other objects. The leaves of this tree represent the physical devices from which physical circuits are fabricated. By analogy with context-free languages, a class of circuits is generated by a phrase-structure grammar of which each rule describes how one type of abstract object can be expanded into a combination of more concrete parts. Circuits are designed by first postulating an abstract object which meets the particular design requirements. This object is then expanded into a concrete circuit by successive refinement using rules of my grammar. There are in general many rules which can be used to expand a given abstract component. Analysis must be done at each level of the expansion to constrain the search to a reasonable set. Thus the rules of my circuit grammar provide constraints which allow the approximate qualitative analysis of partially instantiated circuits. Later, more careful analysis in terms of more concrete components may lead to the rejection of a line of expansion which at first looked promising. I provide special failure rules to direct the repair in this case.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6948</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning for Conjunctive Goals</title>
<link>https://hdl.handle.net/1721.1/6947</link>
<description>Planning for Conjunctive Goals
Chapman, David
The problem of achieving conjunctive goals has been central to domain-independent planning research; the nonlinear constraint-posting approach has been most successful. Previous planners of this type have been complicated, heuristic, and ill-defined. I have combined and distilled the state of the art into a simple, precise, implemented algorithm (TWEAK) which I have proved correct and complete. I analyze previous work on domain-independent conjunctive planning; in retrospect it becomes clear that all conjunctive planners, linear and nonlinear, work the same way. The efficiency of these planners depends on the traditional add/delete-list representation for actions, which drastically limits their usefulness. I present theorems that suggest that efficient general purpose planning with more expressive action representations is impossible, and suggest ways to avoid this problem.
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6947</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Presentation Based User Interface</title>
<link>https://hdl.handle.net/1721.1/6946</link>
<description>Presentation Based User Interface
Ciccarelli, Eugene C., IV
A prototype presentation system base is described. It offers mechanisms, tools, and ready-made parts for building user interfaces. A general user interface model underlies the base, organized around the concept of a presentation: a visible text or graphic for conveying information. The base and model emphasize domain independence and style independence, to apply to the widest possible range of interfaces. The primitive presentation system model treats the interface as a system of processes maintaining a semantic relation between an application data base and a presentation data base, the symbolic screen description containing presentations. A presenter continually updates the presentation data base from the application data base. The user manipulates presentations with a presentation editor. A recognizer translates the user's presentation manipulation into application data base commands. The primitive presentation system can be extended to model more complex systems by attaching additional presentation systems. In order to illustrate the model's generality and descriptive capabilities, extended model structures for several existing user interfaces are discussed. The base provides support for building the application and presentation data bases, linked together into a single, uniform network, including descriptions of classes of objects as well as the objects themselves. The base provides an initial presentation data base network, graphics to continually display it, and editing functions. A variety of tools and mechanisms help create and control presenters and recognizers. To demonstrate the base's utility, three interfaces to an operating system were constructed, embodying different styles: icons, menu, and graphical annotation.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6946</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Switching Between Discrete and Continuous Process Models to Predict Molecular Genetic Activity</title>
<link>https://hdl.handle.net/1721.1/6945</link>
<description>Switching Between Discrete and Continuous Process Models to Predict Molecular Genetic Activity
Weld, Daniel Sabey
Two kinds of process models have been used in programs that reason about change: discrete and continuous models. We describe the design and implementation of a qualitative simulator, PEPTIDE, which uses both kinds of process models to predict the behavior of molecular genetic systems. The program uses a discrete process model to simulate both situations involving abrupt changes in quantities and the actions of small numbers of molecules. It uses a continuous process model to predict gradual changes in quantities. A novel technique, called aggregation, allows the simulator to switch between these models through the recognition and summary of cycles. The flexibility of PEPTIDE's aggregator allows the program to detect cycles within cycles and predict the behavior of complex situations.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6945</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Motion Planning with Six Degrees of Freedom</title>
<link>https://hdl.handle.net/1721.1/6944</link>
<description>Motion Planning with Six Degrees of Freedom
Donald, Bruce R.
The motion planning problem is of central importance to the fields of robotics, spatial planning, and automated design. In robotics we are interested in the automatic synthesis of robot motions, given high-level specifications of tasks and geometric models of the robot and obstacles. The Mover's problem is to find a continuous, collision-free path for a moving object through an environment containing obstacles. We present an implemented algorithm for the classical formulation of the three-dimensional Mover's problem: given an arbitrary rigid polyhedral moving object P with three translational and three rotational degrees of freedom, find a continuous, collision-free path taking P from some initial configuration to a desired goal configuration. This thesis describes the first known implementation of a complete algorithm (at a given resolution) for the full six degree of freedom Mover's problem. The algorithm transforms the six degree of freedom planning problem into a point navigation problem in a six-dimensional configuration space (called C-Space). The C-Space obstacles, which characterize the physically unachievable configurations, are directly represented by six-dimensional manifolds whose boundaries are five-dimensional C-surfaces. By characterizing these surfaces and their intersections, collision-free paths may be found by the closure of three operators which (i) slide along 5-dimensional intersections of level C-Space obstacles; (ii) slide along 1- to 4-dimensional intersections of level C-surfaces; and (iii) jump between 6-dimensional obstacles. Implementing the point navigation operators requires solving fundamental representational and algorithmic questions: we will derive new structural properties of the C-Space constraints and show how to construct and represent C-surfaces and their intersection manifolds. A definition and new theoretical results are presented for a six-dimensional C-Space extension of the generalized Voronoi diagram, called the C-Voronoi diagram, whose structure we relate to the C-surface intersection manifolds. The representations and algorithms we develop impact many geometric planning problems, and extend to Cartesian manipulators with six degrees of freedom.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6944</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallelism in Manipulator Dynamics</title>
<link>https://hdl.handle.net/1721.1/6943</link>
<description>Parallelism in Manipulator Dynamics
Lathrop, Richard D.
This paper addresses the problem of efficiently computing the motor torques required to drive a lower-pair kinematic chain (e.g., a typical manipulator arm in free motion, or a mechanical leg in the swing phase) given the desired trajectory; i.e., the Inverse Dynamics problem. It investigates the high degree of parallelism inherent in the computations, and presents two "mathematically exact" formulations especially suited to high-speed, highly parallel implementations using special-purpose hardware or VLSI devices. In principle, the formulations should permit the calculations to run at a speed bounded only by I/O. The first presented is a parallel version of the recent linear Newton-Euler recursive algorithm. The time cost is also linear in the number of joints, but the real-time coefficients are reduced by almost two orders of magnitude. The second formulation reports a new parallel algorithm which shows that it is possible to improve upon the linear time dependency. The real time required to perform the calculations increases only as the log₂ of the number of joints. Either formulation is susceptible to a systolic pipelined architecture in which complete sets of joint torques emerge at successive intervals of four floating-point operations. Hardware requirements necessary to support the algorithm are considered and found not to be excessive, and a VLSI implementation architecture is suggested. We indicate possible applications to incorporating dynamical considerations into trajectory planning, e.g., it may be possible to build an on-line trajectory optimizer.
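The log₂-depth idea can be illustrated with the generic recursive-doubling (parallel prefix) scheme; this sketch simulates the parallel rounds sequentially and is not Lathrop's actual torque formulation:

```python
import math

def parallel_prefix(values, op):
    """Recursive-doubling (Hillis-Steele) scan: after round k, each
    position holds the op-combination of a window of width 2**k, so
    all prefixes are complete after ceil(log2(n)) rounds; with one
    processor per element, each round is constant parallel time."""
    a = list(values)
    n = len(a)
    rounds = max(1, math.ceil(math.log2(n)))
    for k in range(rounds):
        step = 2 ** k
        # In hardware, these n updates would happen simultaneously.
        a = [op(a[i - step], a[i]) if i >= step else a[i]
             for i in range(n)]
    return a

prefix_sums = parallel_prefix([1] * 8, lambda x, y: x + y)
```

Chaining an associative composition operator along the joints in this fashion is the standard route to log-depth chain computations.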
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6943</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>KBEmacs: A Step Toward the Programmer's Apprentice</title>
<link>https://hdl.handle.net/1721.1/6942</link>
<description>KBEmacs: A Step Toward the Programmer's Apprentice
Waters, Richard C.
The Knowledge-Based Editor in Emacs (KBEmacs) is the current demonstration system implemented as part of the Programmer's Apprentice project. KBEmacs is capable of acting as a semi-expert assistant to a person who is writing a program, taking over some parts of the programming task. Using KBEmacs, it is possible to construct a program by issuing a series of high level commands. This series of commands can be as much as an order of magnitude shorter than the program it describes. KBEmacs is capable of operating on Ada and Lisp programs of realistic size and complexity. Although KBEmacs is neither fast enough nor robust enough to be considered a true prototype, both of these problems could be overcome if the system were to be reimplemented.
</description>
<pubDate>Wed, 01 May 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6942</guid>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure and Interpretation of Computer Programs</title>
<link>https://hdl.handle.net/1721.1/6941</link>
<description>Structure and Interpretation of Computer Programs
Abelson, Harold; Sussman, Gerald Jay
"The Structure and Interpretation of Computer Programs" is the entry-level subject in Computer Science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in Electrical Engineering or in Computer Science, as one fourth of the "common core curriculum," which also includes two subjects on circuits and linear systems and a subject on the design of digital systems. We have been involved in the development of this subject since 1978, and we have taught this material in its present form since the fall of 1980 to approximately 600 students each year. Most of these students have had little or no prior formal training in computation, although most have played with computers a bit and a few have had extensive programming or hardware design experience. Our design of this introductory Computer Science subject reflects two major concerns. First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Secondly, we believe that the essential material to be addressed by a subject at this level is not the syntax of particular programming language constructs, nor clever algorithms for computing particular functions efficiently, nor even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems.
</description>
<pubDate>Fri, 01 Jul 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6941</guid>
<dc:date>1983-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Issues in the Design and Implementation of Act 2</title>
<link>https://hdl.handle.net/1721.1/6940</link>
<description>Issues in the Design and Implementation of Act 2
Theriault, Daniel G.
Act2 is a highly concurrent programming language designed to exploit the processing power available from parallel computer architectures. The language supports advanced concepts in software engineering, providing high-level constructs suitable for implementing artificially-intelligent applications. Act2 is based on the Actor model of computation, consisting of virtual computational agents which communicate by message-passing. Act2 serves as a framework in which to integrate an actor language, a description and reasoning system, and a problem-solving and resource management system. This document describes issues in Act2's design and the implementation of an interpreter for the language.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6940</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Edges and Lines in Images</title>
<link>https://hdl.handle.net/1721.1/6939</link>
<description>Finding Edges and Lines in Images
Canny, John Francis
The problem of detecting intensity changes in images is canonical in vision. Edge detection operators are typically designed to optimally estimate first or second derivative over some (usually small) support. Other criteria such as output signal to noise ratio or bandwidth have also been argued for. This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator. Variational techniques are used to find a solution over the space of all linear shift invariant operators. The first criterion is that the detector have low probability of error, i.e., failing to mark edges or falsely marking non-edges. The second is that the marked points should be as close as possible to the centre of the true edge. The third criterion is that there should be low probability of more than one response to a single edge. The technique is used to find optimal operators for step edges and for extended impulse profiles (ridges or valleys in two dimensions). The extension of the one dimensional operators to two dimensions is then discussed. The result is a set of operators of varying width, length and orientation. The problem of combining these outputs into a single description is discussed, and a set of heuristics for the integration are given.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6939</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Multiple-Context Equality-Based Reasoning System</title>
<link>https://hdl.handle.net/1721.1/6938</link>
<description>A Multiple-Context Equality-Based Reasoning System
Barton, George Edward, Jr.
Expert systems are too slow. This work attacks that problem by speeding up a useful system component that remembers facts and tracks down simple consequences. The redesigned component can assimilate new facts more quickly because it uses a compact, grammar-based internal representation to deal with whole classes of equivalent expressions at once. It can support faster hypothetical reasoning because it remembers the consequences of several assumption sets at once. The new design is targeted for situations in which many of the stored facts are equalities. The deductive machinery considered here supplements stored premises with simple new conclusions. The stored premises include permanently asserted facts and temporarily adopted assumptions. The new conclusions are derived by substituting equals for equals and using the properties of the logical connectives AND, OR, and NOT. The deductive system provides supporting premises for its derived conclusions. Reasoning that involves quantifiers is beyond the scope of its limited and automatic operation. The expert system of which the reasoning system is a component is expected to be responsible for overall control of reasoning.
</description>
<pubDate>Fri, 01 Apr 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6938</guid>
<dc:date>1983-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Representation of Image Texture</title>
<link>https://hdl.handle.net/1721.1/6937</link>
<description>The Representation of Image Texture
Riley, Michael Dennis
This thesis explores how to represent image texture in order to obtain information about the geometry and structure of surfaces, with particular emphasis on locating surface discontinuities. Theoretical and psychophysical results lead to the following conclusions for the representation of image texture: (1) A texture edge primitive is needed to identify texture change contours, which are formed by an abrupt change in the 2-D organization of similar items in an image. The texture edge can be used for locating discontinuities in surface structure and surface geometry and for establishing motion correspondence. (2) Abrupt changes in attributes that vary with changing surface geometry (orientation, density, length, and width) should be used to identify discontinuities in surface geometry and surface structure. (3) Texture tokens are needed to separate the effects of different physical processes operating on a surface. They represent the local structure of the image texture. Their spatial variation can be used in the detection of texture discontinuities and texture gradients, and their temporal variation may be used for establishing motion correspondence. What precisely constitutes the texture tokens is unknown; it appears, however, that the intensity changes alone will not suffice, but local groupings of them may. (4) The above primitives need to be assigned rapidly over a large range in an image.
</description>
<pubDate>Tue, 01 Sep 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6937</guid>
<dc:date>1981-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Accountable Source-To-Source Transformation System</title>
<link>https://hdl.handle.net/1721.1/6936</link>
<description>An Accountable Source-To-Source Transformation System
Steele, Barbara Sue Kerne
Though one is led to believe that program transformation systems which perform source-to-source transformations enable the user to understand and appreciate the resulting source program, this is not always the case. Transformations are capable of behaving and/or interacting in unexpected ways. The user who is interested in understanding the whats, whys, wheres, and hows of the transformation process is left without tools for discovering them. I provide an initial step towards the solution of this problem in the form of an accountable source-to-source transformation system. It carefully records the information necessary to answer such questions, and provides mechanisms for the retrieval of this information. It is observed that though this accountable system allows the user access to relevant facts from which he may draw conclusions, further study is necessary to make the system capable of analyzing these facts itself.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6936</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Foundations of Actor Semantics</title>
<link>https://hdl.handle.net/1721.1/6935</link>
<description>Foundations of Actor Semantics
Clinger, William Douglas
The actor message-passing model of concurrent computation has inspired new ideas in the areas of knowledge-based systems, programming languages and their semantics, and computer systems architecture. The model itself grew out of computer languages such as Planner, Smalltalk, and Simula, and out of the use of continuations to interpret imperative constructs within λ-calculus. The mathematical content of the model has been developed by Carl Hewitt, Irene Greif, Henry Baker, and Giuseppe Attardi. This thesis extends and unifies their work through the following observations. The ordering laws postulated by Hewitt and Baker can be proved using a notion of global time. The most general ordering laws are in fact equivalent to an axiom of realizability in global time. Independence results suggest that some notion of global time is essential to any model of concurrent computation. Since nondeterministic concurrency is more fundamental than deterministic sequential computation, there may be no need to take fixed points in the underlying domain of a power domain. Power domains built from incomplete domains can solve the problem of providing a fixed point semantics for a class of nondeterministic programming languages in which a fair merge can be written. The event diagrams of Greif's behavioral semantics, augmented by Baker's pending events, form an incomplete domain. Its power domain is the semantic domain in which programs written in actor-based languages are assigned meanings. This denotational semantics is compatible with behavioral semantics. The locality laws postulated by Hewitt and Baker may be proved for the semantics of an actor-based language. Altering the semantics slightly can falsify the locality laws. The locality laws thus constrain what counts as an actor semantics.
</description>
<pubDate>Fri, 01 May 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6935</guid>
<dc:date>1981-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inspection Methods in Programming</title>
<link>https://hdl.handle.net/1721.1/6934</link>
<description>Inspection Methods in Programming
Rich, Charles
The work reported here lies in the area of overlap between artificial intelligence and software engineering. As research in artificial intelligence, it is a step towards a model of problem solving in the domain of programming. In particular, this work focuses on the routine aspects of programming which involve the application of previous experience with similar programs. I call this programming by inspection. Programming is viewed here as a kind of engineering activity. Analysis and synthesis by inspection are a prominent part of expert problem solving in many other engineering disciplines, such as electrical and mechanical engineering. The notion of inspection methods in programming developed in this work is motivated by similar notions in other areas of engineering. This work is also motivated by current practical concerns in the area of software engineering. The inadequacy of current programming technology is universally recognized. Part of the solution to this problem will be to increase the level of automation in programming. I believe that the next major step in the evolution of more automated programming will be interactive systems which provide a mixture of partially automated program analysis, synthesis and verification. One such system being developed at MIT, called the programmer's apprentice, is the immediate intended application of this work. This report concentrates on the knowledge base of the programmer's apprentice, which is in the form of a taxonomy of commonly used algorithms and data structures. To the extent that a programmer is able to construct and manipulate programs in terms of the forms in such a taxonomy, he may relieve himself of many details and generally raise the conceptual level of his interaction with the system, as compared with present day programming environments. Also, since it is practical to expend a great deal of effort pre-analyzing the entries in a library, the difficulty of verifying the correctness of programs constructed this way is correspondingly reduced. The feasibility of this approach is demonstrated by the design of an initial library of common techniques for manipulating symbolic data. This document also reports on the further development of a formalism called the plan calculus for specifying computations in a programming language independent manner. This formalism combines both data and control abstraction in a uniform framework that has facilities for representing multiple points of view and side effects.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6934</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Definition and Implementation of a Computer Programming Language Based on Constraints</title>
<link>https://hdl.handle.net/1721.1/6933</link>
<description>The Definition and Implementation of a Computer Programming Language Based on Constraints
Steele, Guy Lewis, Jr.
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described, complete implementations are presented, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
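The network-of-devices picture can be sketched in a few lines; this is a minimal local-propagation model in the spirit of the paradigm described above, with illustrative names, not code from the thesis:

```python
# Cells hold at most one value; devices (constraints) wake up when a
# connected cell is set and deduce whatever terminal is still missing.

class Cell:
    def __init__(self):
        self.value = None
        self.constraints = []

    def set(self, v):
        if self.value is None:
            self.value = v
            for c in self.constraints:   # local propagation only
                c.propagate()

class Adder:
    """Enforces a + b = s; the relation runs in any direction."""
    def __init__(self, a, b, s):
        self.a, self.b, self.s = a, b, s
        for cell in (a, b, s):
            cell.constraints.append(self)

    def propagate(self):
        a, b, s = self.a.value, self.b.value, self.s.value
        if a is not None and b is not None:
            self.s.set(a + b)
        elif a is not None and s is not None:
            self.b.set(s - a)
        elif b is not None and s is not None:
            self.a.set(s - b)

# The same device computes forwards or backwards:
x, y, z = Cell(), Cell(), Cell()
Adder(x, y, z)
z.set(10)
x.set(3)   # propagation deduces y from z and x
```

Note the device never searches: it fires only on locally available values, which is exactly the restriction the paradigm imposes to avoid combinatorial explosion.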
</description>
<pubDate>Fri, 01 Aug 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6933</guid>
<dc:date>1980-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementation of a Theory of Edge Detection</title>
<link>https://hdl.handle.net/1721.1/6932</link>
<description>Implementation of a Theory of Edge Detection
Hildreth, Ellen C.
This report describes the implementation of a theory of edge detection, proposed by Marr and Hildreth (1979). According to this theory, the image is first processed independently through a set of different size filters, whose shape is the Laplacian of a Gaussian, ∇²G. Zero-crossings in the output of these filters mark the positions of intensity changes at different resolutions. Information about these zero-crossings is then used for deriving a full symbolic description of changes in intensity in the image, called the raw primal sketch. The theory is closely tied with early processing in the human visual system. In this report, we first examine the critical properties of the initial filters used in the edge detection process, both from a theoretical and practical standpoint. The implementation is then used as a test bed for exploring aspects of the human visual system; in particular, acuity and hyperacuity. Finally, we present some preliminary results concerning the relationship between zero-crossings detected at different resolutions, and some observations relevant to the process by which the human visual system integrates descriptions of intensity changes obtained at different resolutions.
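A one-dimensional sketch of the filter-then-mark-zero-crossings scheme described above, using the second derivative of a Gaussian (the 1-D analogue of the ∇²G operator); kernel size, sigma, and the test signal are illustrative choices, not the report's parameters:

```python
import math

def log_kernel(sigma, width):
    """Sampled second derivative of a Gaussian, from -width to +width."""
    return [
        (x * x / sigma**4 - 1.0 / sigma**2) * math.exp(-x * x / (2 * sigma**2))
        for x in range(-width, width + 1)
    ]

def zero_crossings(signal, sigma=2.0, width=8):
    """Filter the signal, then mark places where the output changes
    sign: these are the candidate intensity changes."""
    k = log_kernel(sigma, width)
    n = len(k)
    out = [
        sum(k[j] * signal[i + j] for j in range(n))
        for i in range(len(signal) - n + 1)
    ]
    # A strict sign change (product of neighbors negative) marks an edge.
    return [i for i in range(len(out) - 1) if 0 > out[i] * out[i + 1]]

# A step edge yields one zero-crossing near the position of the step.
step = [0.0] * 20 + [1.0] * 20
edges = zero_crossings(step)
```

Running the same signal through kernels of several sigmas gives the multi-resolution descriptions whose combination the report studies.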
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6932</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of Equality in Deduction and Knowledge Representation</title>
<link>https://hdl.handle.net/1721.1/6931</link>
<description>The Use of Equality in Deduction and Knowledge Representation
McAllester, David Allen
This report describes a system which maintains canonical expressions for designators under a set of equalities. Substitution is used to maintain all knowledge in terms of these canonical expressions. A partial order on designators, termed the better-name relation, is used in the choice of canonical expressions. It is shown that with an appropriate better-name relation an important engineering reasoning technique, propagation of constraints, can be implemented as a special case of this substitution process. Special purpose algebraic simplification procedures are embedded such that they interact effectively with the equality system. An electrical circuit analysis system is developed which relies upon constraint propagation and algebraic simplification as primary reasoning techniques. The reasoning is guided by a better-name relation in which referentially transparent terms are preferred to referentially opaque ones. Multiple descriptions of subcircuits are shown to interact strongly with the reasoning mechanism.
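The core bookkeeping, one canonical name per equivalence class, chosen by a preference relation, can be sketched with a union-find structure; the class, the length-based preference, and the example terms are illustrative assumptions, not the report's actual representation:

```python
# Maintain equivalence classes of designators under asserted equalities,
# keeping as each class's representative the "better" name.

class EqualityStore:
    def __init__(self, better):
        self.parent = {}
        self.better = better  # better(a, b): True when a is the better name

    def find(self, t):
        """Canonical representative of t's class (path-halving lookup)."""
        self.parent.setdefault(t, t)
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]
            t = self.parent[t]
        return t

    def assert_equal(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            # The better name survives as the canonical expression.
            if self.better(ra, rb):
                self.parent[rb] = ra
            else:
                self.parent[ra] = rb

# Toy better-name relation: prefer shorter expressions as canonical.
eq = EqualityStore(better=lambda a, b: len(b) >= len(a))
eq.assert_equal("v_out", "i*R")
eq.assert_equal("i*R", "5.0")
```

Substituting every stored occurrence of a designator by `find` of it then keeps all knowledge phrased in canonical terms, which is the substitution process the abstract describes.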
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6931</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal and Teleological Reasoning in Circuit Recognition</title>
<link>https://hdl.handle.net/1721.1/6930</link>
<description>Causal and Teleological Reasoning in Circuit Recognition
Kleer, Johan De
This thesis presents a theory of human-like reasoning in the general domain of designed physical systems, and in particular, electronic circuits. One aspect of the theory, causal analysis, describes how the behavior of individual components can be combined to explain the behavior of composite systems. Another aspect of the theory, teleological analysis, describes how the notion that the system has a purpose can be used to aid this causal analysis. The theory is implemented as a computer program, which, given a circuit topology, can construct by qualitative causal analysis a mechanism graph describing the functional topology of the system. This functional topology is then parsed by a grammar for common circuit functions. Ambiguities are introduced by the approximate qualitative nature of the analysis. For example, there are often several possible mechanisms which might describe the circuit's function. These are disambiguated by teleological analysis. The requirement that each component be assigned an appropriate purpose in the functional topology imposes a severe constraint which eliminates all the ambiguities. Since both analyses are based on heuristics, the chosen mechanism is a rationalization of how the circuit functions, and does not guarantee that the circuit actually does function. This type of coarse understanding of circuits is useful for analysis, design and troubleshooting.
</description>
<pubDate>Sat, 01 Sep 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6930</guid>
<dc:date>1979-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Analysis of the Logical Structure of Programs</title>
<link>https://hdl.handle.net/1721.1/6929</link>
<description>Automatic Analysis of the Logical Structure of Programs
Waters, Richard C.
This report presents a method for viewing complex programs as built up out of simpler ones. The central idea is that typical programs are built up in a small number of stereotyped ways. The method is designed to make it easier for an automatic system to work with programs. It focuses on how the primitive operations performed by a program are combined to produce the actions of the program as a whole. It does not address the issue of how complex data structures are built up from simpler ones, nor the relationships between data structures and the operations performed on them.
</description>
<pubDate>Fri, 01 Dec 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6929</guid>
<dc:date>1978-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Structure of Mathematical Knowledge</title>
<link>https://hdl.handle.net/1721.1/6928</link>
<description>The Structure of Mathematical Knowledge
Michener, Edwina Rissland
This report develops a conceptual framework  in which to talk about mathematical  knowledge. There are several broad  categories of mathematical knowledge:  results which contain the traditional logical  aspects of mathematics; examples which  contain illustrative material; and concepts  which include formal and informal ideas, that  is, definitions and heuristics.
</description>
<pubDate>Tue, 01 Aug 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6928</guid>
<dc:date>1978-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Motor Control and Learning by the State Space Model</title>
<link>https://hdl.handle.net/1721.1/6927</link>
<description>Motor Control and Learning by the State Space Model
Raibert, Marc H.
A model is presented that deals with  problems of motor control, motor learning,  and sensorimotor integration. The equations  of motion for a limb are parameterized and  used in conjunction with a quantized, multi-dimensional memory organized by state  variables. Descriptions of desired trajectories  are translated into motor commands which  will replicate the specified motions. The initial  specification of a movement is free of  information regarding the mechanics of the  effector system. Learning occurs without the  use of error correction when practice data are  collected and analyzed.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6927</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Truth Maintenance Systems for Problem Solving</title>
<link>https://hdl.handle.net/1721.1/6926</link>
<description>Truth Maintenance Systems for Problem Solving
Doyle, Jon
The thesis developed here is that reasoning programs which take care to record the logical justifications for program beliefs can apply several powerful, but simple, domain-independent algorithms to (1) maintain the consistency of program beliefs, (2) realize substantial search efficiencies, and (3) automatically summarize explanations of program beliefs. These algorithms use the recorded justifications to maintain the consistency and well-founded basis of the set of beliefs. The set of beliefs can be efficiently updated in an incremental manner when hypotheses are retracted and when new information is discovered. The recorded justifications also enable the pinpointing of exactly those assumptions which support any particular belief. The ability to pinpoint the underlying assumptions is the basis for an extremely powerful domain-independent backtracking method. This method, called Dependency-Directed Backtracking, offers vastly improved performance over traditional backtracking algorithms.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6926</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing Knowledge of Large-Scale Space</title>
<link>https://hdl.handle.net/1721.1/6925</link>
<description>Representing Knowledge of Large-Scale Space
Kuipers, Benjamin J.
This dissertation presents a model of the knowledge a person has about the spatial structure of a large-scale environment: the "cognitive map". The functions of the cognitive map are to assimilate new information about the environment, to represent the current position, and to answer route-finding and relative-position problems. This model (called the TOUR model) analyzes the cognitive map in terms of symbolic descriptions of the environment and operations on those descriptions. Knowledge about a particular environment is represented in terms of route descriptions, a topological network of paths and places, multiple frames of reference for relative positions, dividing boundaries, and a structure of containing regions. The current position is described by the "You Are Here" pointer, which acts as a working memory and a focus of attention. Operations on the cognitive map are performed by inference rules which act to transfer information among different descriptions and the "You Are Here" pointer. The TOUR model shows how the particular descriptions chosen to represent spatial knowledge support assimilation of new information from local observations into the cognitive map, and how the cognitive map solves route-finding and relative-position problems. A central theme of this research is that the states of partial knowledge supported by a representation are responsible for its ability to function with limited information or computational resources. The representations in the TOUR model provide a rich collection of states of partial knowledge, and therefore exhibit flexible, "common-sense" behavior.
</description>
<pubDate>Fri, 01 Jul 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6925</guid>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Use of Analogy to Achieve New Expertise</title>
<link>https://hdl.handle.net/1721.1/6924</link>
<description>Use of Analogy to Achieve New Expertise
Brown, Richard
We will take the view that the end result of problem solving in some world should be increased expertness. In the context of computers, increasing expertness means writing programs. This thesis is about a process, reasoning by analogy, that writes programs. Analogy relates one problem world to another. We will call the world in which we have an expert problem solver the IMAGE world, and the other world the DOMAIN world. Analogy will construct an expert problem solver in the domain world using the image world expert for inspiration.
</description>
<pubDate>Fri, 01 Apr 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6924</guid>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flexibility and Efficiency in a Computer Program for Designing Circuits</title>
<link>https://hdl.handle.net/1721.1/6923</link>
<description>Flexibility and Efficiency in a Computer Program for Designing Circuits
Mcdermott, Drew Vincent
This report is concerned with the problem of achieving flexibility (additivity, modularity) and efficiency (performance, expertise) simultaneously in one AI program. It deals with the domain of elementary electronic circuit design. The proposed solution is to provide a deduction-driven problem solver with built-in control-structure concepts. This problem solver and its knowledge base in the application areas of design and electronics are described. The program embodying it is being used to explore the solution of some modest problems in circuit design. It is concluded that shallow reasoning about problem-solver plans is necessary for flexibility, and can be implemented with reasonable efficiency.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6923</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design of a Mechanical Assembly System</title>
<link>https://hdl.handle.net/1721.1/6922</link>
<description>The Design of a Mechanical Assembly System
Lozano-Perez, Tomas
This thesis describes a mechanical assembly system called LAMA (Language for Automatic Mechanical Assembly). The goal of the work was to create a mechanical assembly system that transforms a high-level description of an automatic assembly operation into a program for execution by a computer controlled manipulator. This system allows the initial description of the assembly to be in terms of the desired effects on the parts being assembled. Languages such as WAVE [Bolles &amp; Paul] and MINI [Silver] fail to meet this goal by requiring the assembly operation to be described in terms of manipulator motions. This research concentrates on the spatial complexity of mechanical assembly operations. The assembly problem is seen as the problem of achieving a certain set of geometrical constraints between basic objects while avoiding unwanted collisions. The thesis explores how these two facets, desired constraints and unwanted collisions, affect the primitive operations of the domain.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6922</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Knowledge, Causal Reasoning and the Localization of Failures</title>
<link>https://hdl.handle.net/1721.1/6921</link>
<description>Qualitative Knowledge, Causal Reasoning and the Localization of Failures
Brown, Allen
This report investigates some techniques appropriate to representing the knowledge necessary for understanding a class of electronic machines -- radio receivers. A computational performance model, WATSON, is presented. WATSON's task is to isolate failures in radio receivers whose principles of operation have been appropriately described in his knowledge base. The thesis of the report is that hierarchically organized representational structures are essential to the understanding of complex mechanisms. Such structures lead not only to descriptions of machine operation at many levels of detail, but also offer a powerful means of organizing "specialist" knowledge for the repair of machines when they are broken.
</description>
<pubDate>Mon, 01 Nov 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6921</guid>
<dc:date>1976-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Initial Report on a LISP Programmer's Apprentice</title>
<link>https://hdl.handle.net/1721.1/6920</link>
<description>Initial Report on a LISP Programmer's Apprentice
Rich, Charles; Shrobe, Howard E.
This is an initial report on the design and partial implementation of a LISP programmer's apprentice, an interactive programming system to be used by an expert programmer in the design, coding, and maintenance of large, complex programs.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6920</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hypothesis Formation and Evaluation in Medical Diagnosis</title>
<link>https://hdl.handle.net/1721.1/6919</link>
<description>Hypothesis Formation and Evaluation in Medical Diagnosis
Rubin, Ann D.
This thesis describes some aspects of a computer system for doing medical diagnosis in the specialized field of kidney disease. Because such a system faces the spectre of combinatorial explosion, this discussion concentrates on heuristics which control the number of concurrent hypotheses and on efficient "compiled" representations of medical knowledge. In particular, the differential diagnosis of hematuria (blood in the urine) is discussed in detail. A protocol of a simulated doctor/patient interaction is presented and analyzed to determine the crucial structures and processes involved in the diagnosis procedure. The data structure proposed for representing medical information revolves around elementary hypotheses which are activated when certain findings are present. Each of the processes of disposing of findings, activating hypotheses, evaluating hypotheses locally, and combining hypotheses globally is examined for its heuristic implications. The thesis attempts to fit the problem of medical diagnosis into the framework of other Artificial Intelligence problems and paradigms, and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, local vs. global knowledge, and the structure of hypotheses within the world of kidney disease.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6919</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Planning System for Robot Construction Tasks</title>
<link>https://hdl.handle.net/1721.1/6918</link>
<description>A Planning System for Robot Construction Tasks
Fahlman, Scott E.
This paper describes BUILD, a computer  program which generates plans for building  specified structures out of simple objects  such as toy blocks. A powerful heuristic  control structure enables BUILD to use a  number of sophisticated construction  techniques in its plans. Among these are the  incorporation of pre-existing structure into the  final design, pre-assembly of movable sub-structures on the table, and use of the extra  blocks as temporary supports and  counterweights in the course of construction.  BUILD does its planning in a modeled 3-space in which blocks of various shapes and  sizes can be represented in any orientation  and location. The modeling system can  maintain several world models at once, and  contains modules for displaying states,  testing them for inter-object contact and  collision, and for checking the stability of  complex structures involving frictional forces.  Various alternative approaches are  discussed, and suggestions are included for  the extension of BUILD-like systems to other  domains. Also discussed are the merits of  BUILD's implementation language,  CONNIVER, for this type of problem solving.
</description>
<pubDate>Tue, 01 May 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6918</guid>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Progress in Vision and Robotics</title>
<link>https://hdl.handle.net/1721.1/6917</link>
<description>Progress in Vision and Robotics
Winston, Patrick H.
The Vision Flashes are informal working papers intended primarily to stimulate internal interaction among participants in the A.I. Laboratory's Vision and Robotics group. Many of them report highly tentative conclusions or incomplete work. Others deal with highly detailed accounts of local equipment and programs that lack general interest. Still others are of great importance, but lack the polish and elaborate attention to proper referencing that characterizes the more formal literature. Nevertheless, the Vision Flashes collectively represent the only documentation of an important fraction of the work done in machine vision and robotics. The purpose of this report is to make the findings more readily available, but since the papers are presented here without revision, readers should keep in mind their original purpose!
</description>
<pubDate>Tue, 01 May 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6917</guid>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Description and Theoretical Analysis (Using Schemata) of Planner: A Language for Proving Theorems and Manipulating Models in a Robot</title>
<link>https://hdl.handle.net/1721.1/6916</link>
<description>Description and Theoretical Analysis (Using Schemata) of Planner: A Language for Proving Theorems and Manipulating Models in a Robot
Hewitt, Carl
Planner is a formalism for proving theorems  and manipulating models in a robot. The  formalism is built out of a number of problem-solving primitives together with a hierarchical  multiprocess backtrack control structure.  Statements can be asserted and perhaps  later withdrawn as the state of the world  changes. Under BACKTRACK control  structure, the hierarchy of activations of  functions previously executed is maintained  so that it is possible to revert to any previous  state. Thus programs can easily manipulate  elaborate hypothetical tentative states. In  addition PLANNER uses multiprocessing so  that there can be multiple loci of changes in  state. Goals can be established and  dismissed when they are satisfied. The  deductive system of PLANNER is subordinate  to the hierarchical control structure in order to  maintain the desired degree of control. The  use of a general-purpose matching language  as the basis of the deductive system  increases the flexibility of the system. Instead  of explicitly naming procedures in calls,  procedures can be invoked implicitly by  patterns of what the procedure is supposed to  accomplish. The language is being applied to  solve problems faced by a robot, to write  special purpose routines from goal oriented  language, to express and prove properties of  procedures, to abstract procedures from  protocols of their actions, and as a semantic  base for English.
</description>
<pubDate>Sat, 01 Apr 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6916</guid>
<dc:date>1972-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Parallel Processing Model of Musical Structures</title>
<link>https://hdl.handle.net/1721.1/6915</link>
<description>A Parallel Processing Model of Musical Structures
Smoliar, Stephen W.
Euterpe is a real-time computer system for the modeling of musical structures. It provides a formalism wherein familiar concepts of musical analysis may be readily expressed. This is verified by its application to the analysis of a wide variety of conventional forms of music: Gregorian chant, Mediaeval polyphony, Bach counterpoint, and sonata form. It may be of further assistance in real-time experiments in various techniques of thematic development. Finally, the system is endowed with sound-synthesis apparatus with which the user may prepare tapes for musical performances.
</description>
<pubDate>Wed, 01 Sep 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6915</guid>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining the Scope of English Quantifiers</title>
<link>https://hdl.handle.net/1721.1/6914</link>
<description>Determining the Scope of English Quantifiers
Vanlehn, Kurt A.
How can one represent the meaning of English sentences in a formal logical notation such that the translation of English into this logical form is simple and general? This report answers this question for a particular kind of meaning, namely quantifier scope, and for a particular part of the translation, namely the syntactic influence on the translation. Rules are presented which predict, for example, that the sentence "Everyone in this room speaks at least two languages" has the quantifier scope AE in standard predicate calculus, while the sentence "At least two languages are spoken by everyone in this room" has the quantifier scope EA. Three different logical forms are presented, and their translation rules are examined. One of the logical forms is predicate calculus. The translation rules for it were developed by Robert May (May 1977). The other two logical forms are Skolem form and a simple computer programming language. The translation rules for these two logical forms are new. All three sets of translation rules are shown to be general, in the sense that the same rules express the constraints that syntax imposes on certain other linguistic phenomena. For example, the rules that constrain the translation into Skolem form are shown to constrain definite NP anaphora as well. A large body of carefully collected data is presented, and used to assess the empirical accuracy of each of the theories. None of the three theories is vastly superior to the others. However, the report concludes by suggesting that a combination of the two newer theories would have the greatest generality and the highest empirical accuracy.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6914</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>RABBIT: A Compiler for SCHEME</title>
<link>https://hdl.handle.net/1721.1/6913</link>
<description>RABBIT: A Compiler for SCHEME
Steele, Guy Lewis, Jr.
We have developed a compiler for the  lexically-scoped dialect of LISP known as  SCHEME. The compiler knows relatively little  about specific data manipulation primitives  such as arithmetic operators, but  concentrates on general issues of  environment and control. Rather than having  specialized knowledge about a large variety of  control and environment constructs, the  compiler handles only a small basis set  which reflects the semantics of lambda-calculus. All of the traditional imperative  constructs, such as sequencing, assignment,  looping, GOTO, as well as many standard  LISP constructs such as AND, OR, and  COND, are expressed in macros in terms of  the applicative basis set. A small number of  optimization techniques, coupled with the  treatment of function calls as GOTO  statements, serve to produce code as good  as that produced by more traditional  compilers. The macro approach enables  speedy implementation of new constructs as  desired without sacrificing efficiency in the  generated code. A fair amount of analysis is  devoted to determining whether environments  may be stack-allocated or must be heap-allocated. Heap-allocated environments are  necessary in general because SCHEME  (unlike Algol 60 and Algol 68, for example)  allows procedures with free lexically scoped  variables to be returned as the values of other  procedures; the Algol stack-allocation  environment strategy does not suffice. The  methods used here indicate that a heap-allocating generalization of the "display"  technique leads to an efficient implementation  of such "upward funargs". Moreover, compile-time optimization and analysis can eliminate  many "funargs" entirely, and so far fewer  environment structures need be allocated at  run time than might be expected. 
A subset of  SCHEME (rather than triples, for example)  serves as the representation intermediate  between the optimized SCHEME code and the  final output code; code is expressed in this  subset in the so-called continuation-passing  style. As a subset of SCHEME, it enjoys the  same theoretical properties; one could even  apply the same optimizer used on the input  code to the intermediate code. However, the  subset is so chosen that all temporary  quantities are made manifest as variables,  and no control stack is needed to evaluate it.  As a result, this apparently applicative  representation admits an imperative  interpretation which permits easy transcription  to final imperative machine code. These  qualities suggest that an applicative language  like SCHEME is a better candidate for an  UNCOL than the more imperative candidates  proposed to date.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6913</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative and Quantitative Knowledge in Classical Mechanics</title>
<link>https://hdl.handle.net/1721.1/6912</link>
<description>Qualitative and Quantitative Knowledge in Classical Mechanics
Kleer, Johan De
This thesis investigates what knowledge is necessary to solve mechanics problems. A program NEWTON is described which understands and solves problems in a mechanics mini-world of objects moving on surfaces. Facts and equations such as those given in a mechanics text need to be represented. However, this is far from sufficient to solve problems. Human problem solvers rely on "common sense" and "qualitative" knowledge which the physics text tacitly assumes to be present. A mechanics problem solver must embody such knowledge. Quantitative knowledge given by equations and the more qualitative common sense knowledge are the major research points exposited in this thesis. The major issue in solving problems is planning. Planning involves tentatively outlining a possible path to the solution without actually solving the problem. Such a plan needs to be constructed and debugged in the process of solving the problem. Envisionment, or qualitative simulation of the event, plays a central role in this planning process.
</description>
<pubDate>Mon, 01 Dec 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6912</guid>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Semantic Descriptions From Drawings of Scenes With Shadows</title>
<link>https://hdl.handle.net/1721.1/6911</link>
<description>Generating Semantic Descriptions From Drawings of Scenes With Shadows
Waltz, David L.
The research reported here concerns the principles used to automatically generate three-dimensional representations from line drawings of scenes. The computer programs involved look at scenes which consist of polyhedra and which may contain shadows and various kinds of coincidentally aligned scene features. Each generated description includes information about edge shape (convex, concave, occluding, shadow, etc.), about the type of illumination for each region (illuminated, projected shadow, or oriented away from the light source), and about the spatial orientation of regions. The methods used are based on the labeling schemes of Huffman and Clowes; this research provides a considerable extension to their work and also gives theoretical explanations to the heuristic scene analysis work of Guzman, Winston, and others.
</description>
<pubDate>Wed, 01 Nov 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6911</guid>
<dc:date>1972-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparsely Faceted Arrays: A Mechanism Supporting Parallel Allocation, Communication, and Garbage Collection</title>
<link>https://hdl.handle.net/1721.1/6910</link>
<description>Sparsely Faceted Arrays: A Mechanism Supporting Parallel Allocation, Communication, and Garbage Collection
Brown, Jeremy Hanford
Conventional parallel computer architectures  do not provide support for non-uniformly distributed objects. In this  thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different  processors in a distributed, shared memory parallel  processing system. Sparsely faceted arrays address the disconnect  between the global distributed arrays provided by conventional architectures  (e.g. the Cray T3 series), and the requirements of high-level  parallel programming methods that wish to use objects that are  distributed over only a subset of processing elements. A sparsely  faceted array names a virtual globally-distributed array, but actual  facets are lazily allocated. By providing simple semantics and  making efficient use of memory, SFAs enable efficient  implementation of a variety of non-uniformly distributed data structures and  related algorithms. I present example applications which use  SFAs, and describe and evaluate simple hardware mechanisms for  implementing SFAs.  Keeping track of which nodes have allocated  facets for a particular SFA is an important task that suggests the  need for automatic memory management, including garbage collection.  To address this need, I first argue that conventional tracing  techniques such as mark/sweep and copying GC are inherently unscalable in  parallel systems. I then present a parallel memory-management  strategy, based on reference-counting, that is capable of garbage  collecting sparsely faceted arrays. I also discuss opportunities  for hardware support of this garbage collection strategy.  I have implemented a high-level hardware/OS  simulator featuring hardware support for sparsely faceted arrays  and automatic garbage collection. I describe the simulator and  outline a few of the numerous details associated with a "real"  implementation of SFAs and SFA-aware garbage collection. 
Simulation  results are used throughout this thesis in the evaluation of hardware  support mechanisms.
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6910</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>BUILD: A Tool for Maintaining Consistency in Modular Systems</title>
<link>https://hdl.handle.net/1721.1/6909</link>
<description>BUILD: A Tool for Maintaining Consistency in Modular Systems
Robbins, Richard Elliot
Build is a tool for keeping modular systems in a consistent state by managing the construction tasks (e.g. compilation, linking, etc.) associated with such systems. It employs a user-supplied system model and a procedural description of the task to be performed. This differs from existing tools, which do not explicitly separate knowledge about systems from knowledge about how systems are manipulated. BUILD provides a static framework for modeling systems and handling construction requests that makes use of programming-environment-specific definitions. By altering the set of definitions, BUILD can be extended to work with new programming environments and to perform new tasks.
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6909</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compliance and Force Control for Computer Controlled Manipulators</title>
<link>https://hdl.handle.net/1721.1/6908</link>
<description>Compliance and Force Control for Computer Controlled Manipulators
Mason, Matthew Thomas
Compliant motion occurs when the  manipulator position is constrained by the  task geometry. Compliant motion may be  produced either by a passive mechanical  compliance built in to the manipulator, or by  an active compliance implemented in the  control servo loop. The second method, called  force control, is the subject of this report. In  particular, this report presents a theory of  force control based on formal models of the  manipulator, and the task geometry. The ideal  effector is used to model the manipulator, and  the task geometry is modeled by the ideal  surface, which is the locus of all positions  accessible to the ideal effector. Models are  also defined for the goal trajectory, position  control, and force control.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6908</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Mathematical Laboratory</title>
<link>https://hdl.handle.net/1721.1/6907</link>
<description>Symbolic Mathematical Laboratory
Martin, William A.
A large computer program has been  developed to aid applied mathematicians in  the solution of problems in non-numerical  analysis which involve tedious manipulations  of mathematical expressions. The  mathematician uses typed commands and a  light pen to direct the computer in the  application of mathematical transformations;  the intermediate results are displayed in  standard text-book format so that the system  user can decide the next step in the problem  solution. Three problems selected from the  literature have been solved to illustrate the  use of the system. A detailed analysis of the  problems of input, transformation, and display  of mathematical expressions is also  presented.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6907</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>ADEPT: A Heuristic Program for Proving Theorems of Group Theory</title>
<link>https://hdl.handle.net/1721.1/6906</link>
<description>ADEPT: A Heuristic Program for Proving Theorems of Group Theory
Norton, Lewis Mark
A computer program, named ADEPT (A  Distinctly Empirical Prover of Theorems), has  been written which proves theorems taken  from the abstract theory of groups. Its  operation is basically heuristic, incorporating  many of the techniques of the human  mathematician in a "natural" way. This  program has proved almost 100 theorems, as  well as serving as a vehicle for testing and  evaluating special-purpose heuristics. A  detailed description of the program is  supplemented by accounts of its performance  on a number of theorems, thus providing  many insights into the particular problems  inherent in the design of a procedure capable  of proving a variety of theorems from this  domain. Suggestions have been formulated  for further efforts along these lines, and  comparisons with related work previously  reported in the literature have been made.
</description>
<pubDate>Thu, 01 Sep 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6906</guid>
<dc:date>1966-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>PILOT: A Step Toward Man-Computer Symbiosis</title>
<link>https://hdl.handle.net/1721.1/6905</link>
<description>PILOT: A Step Toward Man-Computer Symbiosis
Teitelman, Warren
PILOT is a programming system constructed in LISP. It is designed to facilitate the development of programs by easing the familiar sequence: write some code, run the program, make some changes, write some more code, run the program again, etc. As a program becomes more complex, making these changes becomes harder and harder because the implications of changes are harder to anticipate. In the PILOT system, the computer plays an active role in this evolutionary process by providing the means whereby changes can be effected immediately, and in ways that seem natural to the user. The user of PILOT feels that he is giving advice, or making suggestions, to the computer about the operation of his programs, and that the system then performs the work necessary. The PILOT system is thus an interface between the user and his program, monitoring both the requests of the user and the operation of his program. The user may easily modify the PILOT system itself by giving it advice about its own operation. This allows him to develop his own language and to shift gradually onto PILOT the burden of performing routine but increasingly complicated tasks. In this way, he can concentrate on the conceptual difficulties in the original problem, rather than on the niggling tasks of editing, rewriting, or adding to his programs. Two detailed examples are presented. PILOT is a first step toward computer systems that will help man to formulate problems in the same way they now help him to solve them. Experience with it supports the claim that such "symbiotic systems" allow the programmer to attack and solve more difficult problems.
</description>
<pubDate>Thu, 01 Sep 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6905</guid>
<dc:date>1966-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>SIR: A Computer Program for Semantic Information Retrieval</title>
<link>https://hdl.handle.net/1721.1/6904</link>
<description>SIR: A Computer Program for Semantic Information Retrieval
Raphael, Bertram
SIR is a computer system, programmed in the LISP language, which accepts information and answers questions expressed in a restricted form of English. This system demonstrates what can reasonably be called an ability to "understand" semantic information. SIR's semantic and deductive ability is based on the construction of an internal model, which uses word associations and property lists, for the relational information normally conveyed in conversational statements. A format-matching procedure extracts semantic content from English sentences. If an input sentence is declarative, the system adds appropriate information to the model. If an input sentence is a question, the system searches the model until it either finds the answer or determines why it cannot find the answer. In all cases SIR reports its conclusions. The system has some capacity to recognize exceptions to general rules, resolve certain semantic ambiguities, and modify its model structure in order to save computer memory space. Judging from its conversational ability, SIR is a first step toward intelligent man-machine communication. The author proposes a next step by describing how to construct a more general system which is less complex and yet more powerful than SIR. This proposed system contains a generalized version of the SIR model, a formal logical system called SIR1, and a computer program for testing the truth of SIR1 statements with respect to the generalized model by using partial proof procedures in the predicate calculus. The thesis also describes the formal properties of SIR1 and how they relate to the logical structure of SIR.
</description>
<pubDate>Mon, 01 Jun 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6904</guid>
<dc:date>1964-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Input for a Computer Problem Solving System</title>
<link>https://hdl.handle.net/1721.1/6903</link>
<description>Natural Language Input for a Computer Problem Solving System
Bobrow, Daniel G.
The STUDENT problem solving system, programmed in LISP, accepts as input a comfortable but restricted subset of English which can express a wide variety of algebra story problems. STUDENT finds the solution to a large class of these problems. STUDENT can utilize a store of global information not specific to any one problem, and may make assumptions about the interpretation of ambiguities in the wording of the problem being solved. If it uses such information or makes any assumptions, STUDENT communicates this fact to the user. The thesis includes a summary of other English language question-answering systems. All these systems, and STUDENT, are evaluated according to four standard criteria. The linguistic analysis in STUDENT is a first approximation to the analytic portion of a semantic theory of discourse outlined in the thesis. STUDENT finds the set of kernel sentences which are the base of the input discourse, and transforms this sequence of kernel sentences into a set of simultaneous equations which form the semantic base of the STUDENT system. STUDENT then tries to solve this set of equations for the values of requested unknowns. If it is successful it gives the answers in English. If not, STUDENT asks the user for more information, and indicates the nature of the desired information. The STUDENT system is a first step toward natural language communication with computers. Further work on the semantic theory proposed should result in much more sophisticated systems.
</description>
<pubDate>Tue, 01 Sep 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6903</guid>
<dc:date>1964-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Recognition of Three-Dimensional Objects in a Visual Scene</title>
<link>https://hdl.handle.net/1721.1/6902</link>
<description>Computer Recognition of Three-Dimensional Objects in a Visual Scene
Guzman-Arenas, Adolfo
Methods are presented (1) to partition or  decompose a visual scene into the bodies  forming it; (2) to position these bodies in  three-dimensional space, by combining two  scenes that make a stereoscopic pair; (3) to  find the regions or zones of a visual scene  that belong to its background; (4) to carry out  the isolation of objects in (1) when the input  has inaccuracies. Running computer  programs implement the methods, and many  examples illustrate their behavior. The input is  a two-dimensional line-drawing of the scene,  assumed to contain three-dimensional  bodies possessing flat faces (polyhedra);  some of them may be partially occluded.  Suggestions are made for extending the work  to curved objects. Some comparisons are  made with human visual perception. The  main conclusion is that it is possible to  separate a picture or scene into the  constituent objects exclusively on the basis of  monocular geometric properties (on the basis  of pure form); in fact, successful methods are  shown.
</description>
<pubDate>Sun, 01 Dec 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6902</guid>
<dc:date>1968-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>CARPS: A Program which Solves Calculus Word Problems</title>
<link>https://hdl.handle.net/1721.1/6901</link>
<description>CARPS: A Program which Solves Calculus Word Problems
Charniak, Eugene
A program was written to solve calculus word  problems. The program, CARPS (CALculus  Rate Problem Solver), is restricted to rate  problems. The overall plan of the program is  similar to Bobrow's STUDENT, the primary  difference being the introduction of  "structures" as the internal model in CARPS.  Structures are stored internally as trees. Each  structure is designed to hold the information  gathered about one object. A description of  CARPS is given by working through two  problems, one in great detail. Also included is  a critical analysis of STUDENT.
</description>
<pubDate>Mon, 01 Jul 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6901</guid>
<dc:date>1968-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Integration</title>
<link>https://hdl.handle.net/1721.1/6900</link>
<description>Symbolic Integration
Moses, Joel
SIN and SOLDIER are heuristic programs in LISP which solve symbolic integration problems. SIN (Symbolic INtegrator) solves indefinite integration problems at a level of difficulty approaching that of the larger integral tables. SIN contains several more methods than are used in the previous symbolic integration program SAINT, and solves most of the problems attempted by SAINT in less than one second. SOLDIER (SOLution of Ordinary Differential Equations Routine) solves first order, first degree ordinary differential equations at the level of a good college sophomore and at an average of about five seconds per problem attempted. The differences in philosophy and operation between SAINT and SIN are described, and suggestions for extending the work presented are made.
</description>
<pubDate>Fri, 01 Sep 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6900</guid>
<dc:date>1967-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Syntax-Based Analytic Reading of Musical Scores</title>
<link>https://hdl.handle.net/1721.1/6899</link>
<description>Syntax-Based Analytic Reading of Musical Scores
Forte, Allen
As part of a larger research project in musical  structure, a program has been written which  "reads" scores encoded in an input language  isomorphic to music notation. The program is  believed to be the first of its kind. From a  small number of parsing rules the program  derives complex configurations, each of which  is associated with a set of reference points in  a numerical representation of a time-continuum. The logical structure of the  program is such that all and only the defined  classes of events are represented in the  output. Because the basis of the program is  syntactic (in the sense that parsing operations  are performed on formal structures in the  input string), many extensions and  refinements can be made without excessive  difficulty. The program can be applied to any  music which can be represented in the input  language. At present, however, it constitutes  the first stage in the development of a set of  analytic tools for the study of so-called atonal  music, the revolutionary and little understood  music which has exerted a decisive influence  upon contemporary practice of the art. The  program and the approach to automatic data-structuring may be of interest to linguists and  scholars in other fields concerned with basic  studies of complex structures produced by  human beings.
</description>
<pubDate>Sat, 01 Apr 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6899</guid>
<dc:date>1967-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reasoning from Incomplete Knowledge in a Procedural Deduction System</title>
<link>https://hdl.handle.net/1721.1/6898</link>
<description>Reasoning from Incomplete Knowledge in a Procedural Deduction System
Moore, Robert Carter
One very useful idea in AI research has been the notion of an explicit model of a problem situation. Procedural deduction languages, such as PLANNER, have been valuable tools for building these models. But PLANNER and its relatives are very limited in their ability to describe situations which are only partially specified. This thesis explores methods of increasing the ability of procedural deduction systems to deal with incomplete knowledge. The thesis examines in detail problems involving negation, implication, disjunction, quantification, and equality. Control structure issues and the problem of modelling change under incomplete knowledge are also considered. Extensive comparisons are also made with systems for mechanical theorem proving.
</description>
<pubDate>Mon, 01 Dec 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6898</guid>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Shape Description of Objects by Selection and Modification of Prototypes</title>
<link>https://hdl.handle.net/1721.1/6897</link>
<description>Hierarchical Shape Description of Objects by Selection and Modification of Prototypes
Hollerbach, John M.
An approach towards shape description, based on prototype modification and generalized cylinders, has been developed and applied to the object domains pottery and polyhedra: (1) A program describes and identifies pottery from vase outlines entered as lists of points. The descriptions have been modeled after descriptions by archeologists, with the result that identifications made by the program are remarkably consistent with those of the archeologists. It has been possible to quantify their shape descriptors, which are everyday terms in our language applied to many sorts of objects besides pottery, so that the resulting descriptions seem very natural. (2) New parsing strategies for polyhedra overcome some limitations of previous work. A special feature is that the processes of parsing and identification are carried out simultaneously.
</description>
<pubDate>Sat, 01 Nov 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6897</guid>
<dc:date>1975-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer System for Visual Recognition Using Active Knowledge</title>
<link>https://hdl.handle.net/1721.1/6896</link>
<description>Computer System for Visual Recognition Using Active Knowledge
Freuder, Eugene C.
A system for visual recognition is described,  with implications for the general problem of  representation of knowledge to assist control.  The immediate objective is a computer  system that will recognize objects in a visual  scene, specifically hammers. The computer  receives an array of light intensities from a  device like a television camera. It is to locate  and identify the hammer if one is present. The  computer must produce from the numerical  "sensory data" a symbolic description that  constitutes its perception of the scene. Of  primary concern is the control of the  recognition process. Control decisions  should be guided by the partial results  obtained on the scene. If a hammer handle is  observed this should suggest that the handle  is part of a hammer and advise where to look  for the hammer head. The particular  knowledge that a handle has been found  combines with general knowledge about  hammers to influence the recognition  process. This use of knowledge to direct  control is denoted here by the term "active  knowledge". A descriptive formalism is  presented for visual knowledge which  identifies the relationships relevant to the  active use of the knowledge. A control  structure is provided which can apply  knowledge organized in this fashion actively to  the processing of a given scene.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6896</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Progress in Artificial Intelligence</title>
<link>https://hdl.handle.net/1721.1/6895</link>
<description>New Progress in Artificial Intelligence
Winston, Patrick H.
This report concentrates on progress during  the last two years at the M.I.T. Artificial  Intelligence Laboratory. Topics covered  include the representation of knowledge,  understanding English, learning and  debugging, understanding vision and  productivity technology. It is stressed that  these various areas are tied closely together  through certain fundamental issues and  problems.
</description>
<pubDate>Sun, 01 Sep 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6895</guid>
<dc:date>1974-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Model of Skill Acquisition</title>
<link>https://hdl.handle.net/1721.1/6894</link>
<description>A Computational Model of Skill Acquisition
Sussman, Gerald J.
This thesis confronts the nature of the process of learning an intellectual skill, the ability to solve problems efficiently in a particular domain of discourse. The investigation is synthetic; a computational performance model, HACKER, is displayed. HACKER is a computer problem-solving system whose performance improves with practice. HACKER maintains performance knowledge as a library of procedures indexed by descriptions of the problem types for which the procedures are appropriate. When applied to a problem, HACKER tries to use a procedure from this "Answer Library". If no procedure is found to be applicable, HACKER writes one using more general knowledge of the problem domain and of programming techniques. This new program may be generalized and added to the Answer Library.
</description>
<pubDate>Wed, 01 Aug 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6894</guid>
<dc:date>1973-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Simple Picture Programs</title>
<link>https://hdl.handle.net/1721.1/6893</link>
<description>Understanding Simple Picture Programs
Goldstein, Ira P.
What are the characteristics of the process by which an intent is transformed into a plan and then a program? How is a program debugged? This paper analyzes these questions in the context of understanding simple turtle programs. To understand and debug a program, a description of its intent is required. For turtle programs, this is a model of the desired geometric picture. A picture language is provided for this purpose. Annotation is necessary for documenting the performance of a program in such a way that the system can examine the procedure's behavior as well as consider hypothetical lines of development due to tentative debugging edits. A descriptive framework representing both causality and teleology is developed. To understand the relation between program and model, the plan must be known. The plan is a description of the methodology for accomplishing the model. Concepts are explicated for translating the global intent of a declarative model into the local imperative code of a program. Given the plan, model and program, the system can interpret the picture and recognize inconsistencies. The description of the discrepancies between the picture actually produced by the program and the intended scene is the input to a debugging system. Repair of the program is based on a combination of general debugging techniques and specific fixing knowledge associated with the geometric model primitives. In both analyzing the plan and repairing the bugs, the system exhibits an interesting style of analysis. It is capable of debugging itself and reformulating its analysis of a plan or bug in response to self-criticism. In this fashion, it can qualitatively reformulate its theory of the program or error to account for surprises or anomalies.
</description>
<pubDate>Mon, 01 Apr 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6893</guid>
<dc:date>1974-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward A Model Of Children's Story Comprehension</title>
<link>https://hdl.handle.net/1721.1/6892</link>
<description>Toward A Model Of Children's Story Comprehension
Charniak, Eugene
How does a person answer questions about children's stories? For example, consider 'Janet wanted Jack's paints. She looked at the picture he was painting and said 'Those paints make your picture look funny.' The question to ask is 'Why did Janet say that?'. We propose a model which answers such questions by relating the story to background real world knowledge. The model tries to generate and answer important questions about the story as it goes along. Within this model we examine two problems: how to organize this real world knowledge, and how it enters into more traditional linguistic questions such as deciding noun phrase reference.
</description>
<pubDate>Fri, 01 Dec 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6892</guid>
<dc:date>1972-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information Processing and Transmission in Cellular Automata</title>
<link>https://hdl.handle.net/1721.1/6891</link>
<description>Information Processing and Transmission in Cellular Automata
Banks, Edwin Roger
A cellular automaton is an iterative array of very simple identical information processing machines called cells. Each cell can communicate with neighboring cells. At discrete moments of time the cells can change from one state to another as a function of the states of the cell and its neighbors. Thus on a global basis, the collection of cells is characterized by some type of behavior. The goal of this investigation was to determine just how simple the individual cells could be while the global behavior achieved some specified criterion of complexity, usually the ability to perform a computation or to reproduce some pattern. The chief result described in this thesis is that an array of identical square cells (in two dimensions), each cell of which communicates directly with only its four nearest edge neighbors and each of which can exist in only two states, can perform any computation. This computation proceeds in a straightforward way. A configuration is a specification of the states of all the cells in some area of the iterative array. Another result described in this thesis is the existence of a self-reproducing configuration in an array of four-state cells, a reduction of four states from the previously known eight-state case. The technique of information processing in cellular arrays involves the synthesis of some basic components. Then the desired behaviors are obtained by the interconnection of these components. A chapter on components describes some sets of basic components. Possible applications of the results of this investigation, descriptions of some interesting phenomena (for vanishingly small cells), and suggestions for further study are given later.
</description>
<pubDate>Fri, 01 Jan 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6891</guid>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dependency Directed Reasoning for Complex Program Understanding</title>
<link>https://hdl.handle.net/1721.1/6890</link>
<description>Dependency Directed Reasoning for Complex Program Understanding
Shrobe, Howard Elliot
Artificial Intelligence research involves the  creation of extremely complex programs  which must possess the capability to  introspect, learn, and improve their expertise.  Any truly intelligent program must be able to  create procedures and to modify them as it  gathers information from its experience.  [Sussman, 1975] produced such a system for  a  'mini-world'; but truly intelligent programs  must be considerably more complex. A crucial  stepping stone in AI research is the  development of a system which can  understand complex programs well enough to  modify them. There is also a complexity  barrier in the world of commercial software  which is making the cost of software  production and maintenance prohibitive. Here  too a system which is capable of  understanding complex programs is a  necessary step. The Programmer's  Apprentice Project [Rich and Shrobe, 76] is  attempting to develop an interactive  programming tool which will help expert  programmers deal with the complexity  involved in engineering a large software  system. This report describes REASON, the  deductive component of the programmer's  apprentice. REASON is intended to help  expert programmers in the process of  evolutionary program design. REASON  utilizes the engineering techniques of  modelling, decomposition, and analysis by  inspection to determine how modules interact  to achieve the desired overall behavior of a  program. REASON coordinates its various  sources of knowledge by using a  dependency-directed structure which records  the justification for each deduction it makes.  Once a program has been analyzed these  justifications can be summarized into a  teleological structure called a plan which  helps the system understand the impact of a  proposed program modification.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6890</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reflectance Map Techniques for Analyzing Surface Defects in Metal Castings</title>
<link>https://hdl.handle.net/1721.1/6889</link>
<description>Reflectance Map Techniques for Analyzing Surface Defects in Metal Castings
Woodham, Robert J.
This report explores the relation between image intensity and object shape. It is shown that image intensity is related to surface orientation and that a variation in image intensity is related to surface curvature. Computational methods are developed which use the measured intensity variation across surfaces of smooth objects to determine surface orientation. In general, surface orientation is not determined locally by the intensity value recorded at each image point. Tools are needed to explore the problem of determining surface orientation from image intensity. The notion of gradient space, popularized by Huffman and Mackworth, is used to represent surface orientation. The notion of a reflectance map, originated by Horn, is used to represent the relation between surface orientation and image intensity. The image Hessian is defined and used to represent surface curvature. Properties of surface curvature are expressed as constraints on possible surface orientations corresponding to a given image point. Methods are presented which embed assumptions about surface curvature in algorithms for determining surface orientation from the intensities recorded in a single view. If additional images of the same object are obtained by varying the direction of incident illumination, then surface orientation is determined locally by the intensity values recorded at each image point. This fact is exploited in a new technique called photometric stereo. The visual inspection of surface defects in metal castings is considered. Two casting applications are discussed. The first is the precision investment casting of turbine blades and vanes for aircraft jet engines. In this application, grain size is an important process variable. The existing industry standard for estimating the average grain size of metals is implemented and demonstrated on a sample turbine vane.
Grain size can be computed from the measurements obtained in an image, once the foreshortening effects of surface curvature are accounted for. The second is the green sand mold casting of shuttle eyes for textile looms. Here, physical constraints inherent to the casting process translate into constraints on object shape; to verify these constraints, it is necessary to interpret features of intensity as features of object shape. Both applications demonstrate that successful visual inspection requires the ability to interpret observed changes in intensity in the context of surface topography. The theoretical tools developed in this report provide a framework for this interpretation.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6889</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A System for Representing and Using Real-World Knowledge</title>
<link>https://hdl.handle.net/1721.1/6888</link>
<description>A System for Representing and Using Real-World Knowledge
Fahlman, Scott E.
This report describes a knowledge-base system in which the information is stored in a network of small parallel processing elements (node and link units) which are controlled by an external serial computer. This network is similar to the semantic network system of Quillian, but is much more tightly controlled. Such a network can perform certain critical deductions and searches very quickly; it avoids many of the problems of current systems, which must use complex heuristics to limit and guide their searches. It is argued (with examples) that the key operation in a knowledge-base system is the intersection of large explicit and semi-explicit sets. The parallel network system does this in a small, essentially constant number of cycles; a serial machine takes time proportional to the size of the sets, except in special cases.
</description>
<pubDate>Thu, 01 Dec 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6888</guid>
<dc:date>1977-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Aspects of Pattern Recognition by Computer</title>
<link>https://hdl.handle.net/1721.1/6887</link>
<description>Some Aspects of Pattern Recognition by Computer
Guzman-Arenas, Adolfo
A computer may gather a lot of information from its environment in an optical or graphical manner. A scene, as seen for instance from a TV camera or a picture, can be transformed into a symbolic description of points and lines or surfaces. This thesis describes several programs, written in the language CONVERT, for the analysis of such descriptions in order to recognize, differentiate and identify desired objects or classes of objects in the scene. Examples are given in each case. Although the recognition might be in terms of projections of 2-dim and 3-dim objects, we do not deal with stereoscopic information. One of our programs (Polybrick) identifies parallelepipeds in a scene which may contain partially hidden bodies and non-parallelepipedic objects. The program TD works mainly with 2-dimensional figures, although under certain conditions it successfully identifies 3-dim objects. Overlapping objects are identified when they are transparent. A third program, DT, works with 3-dim and 2-dim objects, and does not identify objects which are not completely seen. Important restrictions and suppositions are: (a) the input is assumed perfect (noiseless), and in a symbolic format; (b) no perspective deformation is considered. A portion of this thesis is devoted to the study of models (symbolic representations) of the objects we want to identify; different schemes, some of them already in use, are discussed. Focusing our attention on the more general problem of identification of general objects when they substantially overlap, we propose some schemes for their recognition, and also analyze some problems that are met.
</description>
<pubDate>Wed, 01 Feb 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6887</guid>
<dc:date>1967-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assimilation of New Information by a Natural Language Understanding System</title>
<link>https://hdl.handle.net/1721.1/6886</link>
<description>Assimilation of New Information by a Natural Language Understanding System
McDermott, Drew V.
This work describes a program, called TOPLE, which uses a procedural model of the world to understand simple declarative sentences. It accepts sentences in a modified predicate calculus symbolism, and uses plausible reasoning to visualize scenes, resolve ambiguous pronoun and noun phrase references, explain events, and make conditional predications. Because it does plausible deduction, with tentative conclusions, it must contain a formalism for describing its reasons for its conclusions and what the alternatives are. When an inconsistency is detected in its world model, it uses its recorded information to resolve it, one way or another. It uses simulation techniques to make deductions about creatures' motivation and behavior, assuming they are goal-directed beings like itself.
</description>
<pubDate>Fri, 01 Feb 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6886</guid>
<dc:date>1974-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View</title>
<link>https://hdl.handle.net/1721.1/6885</link>
<description>Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View
Horn, Berthold K.P.
A method will be described for finding the shape of a smooth opaque object from a monocular image, given a knowledge of the surface photometry, the position of the light source and certain auxiliary information to resolve ambiguities. This method is complementary to the use of stereoscopy, which relies on matching up sharp detail and will fail on smooth objects. Until now the image processing of single views has been restricted to objects which can meaningfully be considered two-dimensional or bounded by plane surfaces. It is possible to derive a first-order non-linear partial differential equation in two unknowns relating the intensity at the image points to the shape of the objects. This equation can be solved by means of an equivalent set of five ordinary differential equations. A curve traced out by solving this set of equations for one set of starting values is called a characteristic strip. Starting one of these strips from each point on some initial curve will produce the whole solution surface. The initial curves can usually be constructed around so-called singular points. A number of applications of this method will be discussed, including one to lunar topography and one to the scanning electron microscope. In both of these cases great simplifications occur in the equations. A note on polyhedra follows and a quantitative theory of facial make-up is touched upon. An implementation of some of these ideas on the PDP-6 computer with its attached image-dissector camera at the Artificial Intelligence Laboratory will be described, and also a nose-recognition program.
</description>
<pubDate>Sun, 01 Nov 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6885</guid>
<dc:date>1970-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Structural Descriptions from Examples</title>
<link>https://hdl.handle.net/1721.1/6884</link>
<description>Learning Structural Descriptions from Examples
Winston, Patrick H.
The research here described centers on how  a machine can recognize concepts and learn  concepts to be recognized. Explanations are  found in computer programs that build and  manipulate abstract descriptions of scenes  such as those children construct from toy  blocks. One program uses sample scenes to  create models of simple configurations like  the three-brick arch. Another uses the  resulting models in making identifications.  Throughout emphasis is given to the  importance of using good descriptions when  exploring how machines can come to  perceive and understand the visual  environment.
</description>
<pubDate>Tue, 01 Sep 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6884</guid>
<dc:date>1970-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model for Deliberation, Action, and Introspection</title>
<link>https://hdl.handle.net/1721.1/6883</link>
<description>A Model for Deliberation, Action, and Introspection
Doyle, Jon
This thesis investigates the problem of  controlling or directing the reasoning and  actions of a computer program. The basic  approach explored is to view reasoning as a  species of action, so that a program might  apply its reasoning powers to the task of  deciding what inferences to make as well as  deciding what other actions to take. A design  for the architecture of reasoning programs is  proposed. This architecture involves self-consciousness, intentional actions, deliberate  adaptations, and a form of decision-making  based on dialectical argumentation. A  program based on this architecture inspects  itself, describes aspects of itself, and uses  this self-reference and these self-descriptions  in making decisions and taking actions. The  program's mental life includes awareness of  its own concepts, beliefs, desires, intentions,  inferences, actions, and skills. All of these are  represented by self-descriptions in a single  sort of language, so that the program has  access to all of these aspects of itself, and  can reason about them in the same terms.
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6883</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Recognition of Prismatic Solids</title>
<link>https://hdl.handle.net/1721.1/6882</link>
<description>Computer Recognition of Prismatic Solids
Griffith, Arnold K.
An investigation is made into the problem of  constructing a model of the appearance to an  optical input device of scenes consisting of  plane-faced geometric solids. The goal is to  study algorithms which find the real straight  edges in the scenes, taking into account  smooth variations in intensity over faces of the  solids, blurring of edges and noise. A general  mathematical analysis is made of optimal  methods for identifying the edge lines in  figures, given a raster of intensities covering  the entire field of view. There is given in  addition a suboptimal statistical decision  procedure, based on the model, for the  identification of a line within a narrow band on  the field of view given an array of intensities  from within the band. A computer program  has been written and extensively tested which  implements this procedure and extracts lines  from real scenes. Other programs were  written which judge the completeness of  extracted sets of lines, and propose and test  for additional lines which had escaped initial  detection. The performance of these  programs is discussed in relation to the  theory derived from the model, and with  regard to their use of global information in  detecting and proposing lines.
</description>
<pubDate>Sat, 01 Aug 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6882</guid>
<dc:date>1970-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition of Topological Invariants by Iterative Arrays</title>
<link>https://hdl.handle.net/1721.1/6881</link>
<description>Recognition of Topological Invariants by Iterative Arrays
Beyer, Wendel Terry
A study is made of the recognition and  transformation of figures by iterative arrays of  finite state automata. A figure is a finite  rectangular two-dimensional array of  symbols. The iterative arrays considered are  also finite, rectangular, and two-dimensional.  The automata comprising any given array are  called cells and are assumed to be  isomorphic and to operate synchronously with  the state of a cell at time t+1 being a function  of the states of it and its four nearest  neighbors at time t. At time t=0 each cell is  placed in one of a fixed number of initial  states. The pattern of initial states thus  introduced represents the figure to be  processed. The resulting sequence of array  states represents a computation based on  the input figure. If one waits for a specially  designated cell to indicate acceptance or  rejection of the figure, the array is said to be  working on a recognition problem. If one waits  for the array to come to a stable configuration  representing an output figure, the array is said  to be working on a transformation problem.
</description>
<pubDate>Wed, 01 Oct 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6881</guid>
<dc:date>1969-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse</title>
<link>https://hdl.handle.net/1721.1/6880</link>
<description>Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse
Sidner, Candace Lee
This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on the personal pronouns, and on noun phrases used with a definite article: the, this or that. Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows for a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6880</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Handwriting</title>
<link>https://hdl.handle.net/1721.1/6879</link>
<description>Theory of Handwriting
Hollerbach, John
Handwriting production is viewed as a constrained modulation of an underlying oscillatory process. Coupled oscillations in horizontal and vertical directions produce letter forms, and when superimposed on a rightward constant velocity horizontal sweep result in spatially separated letters. Modulation of the vertical oscillation is responsible for control of letter height, either through altering the frequency or altering the acceleration amplitude. Modulation of the horizontal oscillation is responsible for control of corner shape through altering phase or amplitude. The vertical velocity zero crossing in the velocity space diagram is important from the standpoint of control. Changing the horizontal velocity value at this zero crossing controls corner shape, and such changes can be effected through modifying the horizontal oscillation amplitude and phase. Changing the slope at this zero crossing controls writing slant; this slope depends on the horizontal and vertical velocity zero amplitudes and on the relative phase difference. Letter height modulation is also best applied at the vertical velocity zero crossing to preserve an even baseline. The corner shape and slant constraints completely determine the amplitude and phase relations between the two oscillations. Under these constraints interletter separation is not an independent parameter. This theory applies generally to a number of acceleration oscillation patterns such as sinusoidal, rectangular and trapezoidal oscillations. The oscillation theory also provides an explanation for how handwriting might degenerate with speed. An implementation of the theory in the context of the spring muscle model is developed. Here sinusoidal oscillations arise from purely mechanical sources; orthogonal antagonistic spring pairs generate particular cycloids depending on the initial conditions.
Modulating between cycloids can be achieved  by changing the spring zero settings at the  appropriate times. Frequency can be  modulated either by shifting between  coactivation and alternating activation of the  antagonistic springs or by presuming variable  spring constant springs. An acceleration and  position measuring apparatus was developed  for measurements of human handwriting.  Measurements of human writing are  consistent with the oscillation theory. It is  shown that the minimum energy movement  for the spring muscle is bang-coast-bang. For  certain parameter values a singular arc  solution can be shown to be minimizing.  Experimental measurements however  indicate that handwriting is not a minimum  energy movement.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6879</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Image Irradiance Equation: Its Solution and Application</title>
<link>https://hdl.handle.net/1721.1/6878</link>
<description>The Image Irradiance Equation: Its Solution and Application
Bruss, Anna R.
How much information about the shape of an object can be inferred from its image? In particular, can the shape of an object be reconstructed by measuring the light it reflects from points on its surface? These questions were raised by Horn [HO70], who formulated a set of conditions such that the image formation can be described in terms of a first order partial differential equation, the image irradiance equation. In general, an image irradiance equation has infinitely many solutions. Thus constraints necessary to find a unique solution need to be identified. First we study the continuous image irradiance equation. It is demonstrated when and how the knowledge of the position of edges on a surface can be used to reconstruct the surface. Furthermore we show how much about the shape of a surface can be deduced from so-called singular points. At these points the surface orientation is uniquely determined by the measured brightness. Then we investigate images in which certain types of silhouettes, which we call b-silhouettes, can be detected. In particular we answer the following question in the affirmative: Is there a set of constraints which assure that if an image irradiance equation has a solution, it is unique? To this end we postulate three constraints upon the image irradiance equation and prove that they are sufficient to uniquely reconstruct the surface from its image. Furthermore it is shown that any two of these constraints are insufficient to assure a unique solution to an image irradiance equation. Examples are given which illustrate the different issues. Finally, an overview of known numerical methods for computing solutions to an image irradiance equation is presented.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6878</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local Rotational Symmetries</title>
<link>https://hdl.handle.net/1721.1/6877</link>
<description>Local Rotational Symmetries
Fleck, Margaret Morrison
This thesis describes a new representation for two-dimensional round regions called Local Rotational Symmetries. Local Rotational Symmetries are intended as a companion to Brady's Smoothed Local Symmetry Representation for elongated shapes. An algorithm for computing Local Rotational Symmetry representations at multiple scales of resolution has been implemented and results of this implementation are presented. These results suggest that Local Rotational Symmetries provide a more robustly computable and perceptually accurate description of round regions than previously proposed representations. In the course of developing this representation, it has been necessary to modify the way both Smoothed Local Symmetries and Local Rotational Symmetries are computed. First, grey-scale image smoothing proves to be better than boundary smoothing for creating representations at multiple scales of resolution, because it is more robust and it allows qualitative changes in representations between scales. Secondly, it is proposed that shape representations at different scales of resolution be explicitly related, so that information can be passed between scales and computation at each scale can be kept local. Such a model for multi-scale computation is desirable both to allow efficient computation and to accurately model human perceptions.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6877</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reasoning Modeled as a Society of Communicating Experts</title>
<link>https://hdl.handle.net/1721.1/6876</link>
<description>Reasoning Modeled as a Society of Communicating Experts
Steels, Luc
This report describes a domain independent  reasoning system. The system uses a frame-based knowledge representation language  and various reasoning techniques including  constraint propagation, progressive  refinement, natural deduction and explicit  control of reasoning. A computational  architecture based on active objects which  operate by exchanging messages is  developed and it is shown how this  architecture supports reasoning activity. The  user interacts with the system by specifying  frames and by giving descriptions defining the  problem situation. The system uses its  reasoning capacity to build up a model of the  problem situation from which a solution can  interactively be extracted. Examples are  discussed from a variety of domains,  including electronic circuits, mechanical  devices and music. The main thesis is that a  reasoning system is best viewed as a parallel  system whose control and data are  distributed over a large network of processors  that interact by exchanging messages. Such a  system will be metaphorically described as a  society of communicating experts.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6876</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creation of Computer Animation from Story Descriptions</title>
<link>https://hdl.handle.net/1721.1/6875</link>
<description>Creation of Computer Animation from Story Descriptions
Kahn, Kenneth Michael
This report describes a computer system that  creates simple computer animation in  response to high-level, vague, and incomplete  descriptions of films. It makes its films by  collecting and evaluating suggestions from  several different bodies of knowledge. The  order in which it makes its choices is  influenced by the focus of the film. Difficult  choices are postponed to be resumed when  more of the film has been determined. The  system was implemented in an object-oriented language based upon computational  entities called "actors". The goal behind the  construction of the system is that, whenever  faced with a choice, it should sensibly choose  between alternatives based upon the  description of the film and as much general  knowledge as possible. The system is  presented as a computational model of  creativity and aesthetics.
</description>
<pubDate>Wed, 01 Aug 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6875</guid>
<dc:date>1979-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Process Theory</title>
<link>https://hdl.handle.net/1721.1/6874</link>
<description>Qualitative Process Theory
Forbus, Kenneth D.
Objects move, collide, flow, bend, heat up, cool down, stretch, compress and boil. These and other things that cause changes in objects over time are intuitively characterized as processes. To understand common sense physical reasoning, and to make programs that interact with the physical world as well as people do, we must understand qualitative reasoning about processes: when they will occur, their effects, and when they will stop. Qualitative Process theory defines a simple notion of physical process that appears useful as a language in which to write dynamical theories. Reasoning about processes also motivates a new qualitative representation for quantity in terms of inequalities, called the quantity space. This report describes the basic concepts of Qualitative Process theory, several different kinds of reasoning that can be performed with them, and discusses its impact on other issues in common sense reasoning about the physical world, such as causal reasoning and measurement interpretation. Several extended examples illustrate the utility of the theory, including figuring out that a boiler can blow up, that an oscillator with friction will eventually stop, and how to say that you can pull with a string but not push with it. This report also describes GIZMO, an implemented computer program which uses Qualitative Process theory to make predictions and interpret simple measurements. The representations and algorithms used in GIZMO are described in detail, and illustrated using several examples.
</description>
<pubDate>Sun, 01 Jul 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6874</guid>
<dc:date>1984-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Analysis of MOS Circuits</title>
<link>https://hdl.handle.net/1721.1/6873</link>
<description>Qualitative Analysis of MOS Circuits
Williams, Brian C.
With the push towards sub-micron technology, transistor models have become increasingly complex. The number of components in integrated circuits has forced designers' efforts and skills towards higher levels of design. This has created a gap between design expertise and the performance demands increasingly imposed by the technology. To alleviate this problem, software tools must be developed that provide the designer with expert advice on circuit performance and design. This requires a theory that links the intuitions of an expert circuit analyst with the corresponding principles of formal theory (i.e. algebra, calculus, feedback analysis, network theory, and electrodynamics), and that makes each underlying assumption explicit.
</description>
<pubDate>Sun, 01 Jul 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6873</guid>
<dc:date>1984-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Solution of Inverse Problems</title>
<link>https://hdl.handle.net/1721.1/6872</link>
<description>Probabilistic Solution of Inverse Problems
Marroquin, Jose Luis
In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal to noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each one of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, which has no free parameters, for the restoration of piecewise uniform images.
(5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6872</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redundant Sensors for Mobile Robot Navigation</title>
<link>https://hdl.handle.net/1721.1/6871</link>
<description>Redundant Sensors for Mobile Robot Navigation
Flynn, Anita M.
Redundant sensors are needed on a mobile  robot so that the accuracy with which it  perceives its surroundings can be increased.  Sonar and infrared sensors are used here in  tandem, each compensating for deficiencies  in the other. The robot combines the data  from both sensors to build a representation  which is more accurate than if either sensor  were used alone. Another representation, the  curvature primal sketch, is extracted from this  perceived workspace and is used as the input  to two path planning programs: one based on  configuration space and one based on a  generalized cone formulation of free space.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6871</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Shape Descriptions: Generating and Generalizing Models of Visual Objects</title>
<link>https://hdl.handle.net/1721.1/6870</link>
<description>Learning Shape Descriptions: Generating and Generalizing Models of Visual Objects
Connell, Jonathan Hudson
We present the results of an implemented  system for learning structural prototypes from  grey-scale images. We show how to divide  an object into subparts and how to encode the  properties of these subparts and the relations  between them. We discuss the importance of  hierarchy and grouping in representing  objects and show how a notion of visual  similarities can be embedded in the  description language. Finally we exhibit a  learning algorithm that forms class models  from the descriptions produced and uses  these models to recognize new members of  the class.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6870</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Perception from Local Analysis of Texture and Contour</title>
<link>https://hdl.handle.net/1721.1/6869</link>
<description>Surface Perception from Local Analysis of Texture and Contour
Stevens, Kent A.
The visual analysis of surface shape from  texture and surface contour is treated within a  computational framework. The aim of this  study is to determine valid constraints that are  sufficient to allow surface orientation and  distance (up to a multiplicative constant) to be  computed from the image of surface texture  and of surface contours.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6869</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing and Reasoning About Change in Geologic Interpretation</title>
<link>https://hdl.handle.net/1721.1/6868</link>
<description>Representing and Reasoning About Change in Geologic Interpretation
Simmons, Reid Gordon
Geologic interpretation is the task of inferring a sequence of events to explain how a given geologic region could have been formed. This report describes the design and implementation of one part of a geologic interpretation problem solver -- a system which uses a simulation technique called imagining to check the validity of a candidate sequence of events. Imagining uses a combination of qualitative and quantitative simulations to reason about the changes which occurred to the geologic region. The spatial changes which occur are simulated by constructing a sequence of diagrams. The quantitative simulation needs numeric parameters which are determined by using the qualitative simulation to establish the cumulative changes to an object and by using a description of the current geologic region to make quantitative measurements. The diversity of reasoning skills used in imagining has necessitated the development of multiple representations, each specialized for a different task. Representations to facilitate doing temporal, spatial and numeric reasoning are described in detail. We have also found it useful to explicitly represent processes. Both the qualitative and quantitative simulations use a discrete 'layer cake' model of geologic processes, but each uses a separate representation, specialized to support the type of simulation. These multiple representations have enabled us to develop a powerful, yet modular, system for reasoning about change.
</description>
<pubDate>Thu, 01 Dec 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6868</guid>
<dc:date>1983-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Structural and Functional Information in Diagnostic Design</title>
<link>https://hdl.handle.net/1721.1/6867</link>
<description>Using Structural and Functional Information in Diagnostic Design
Hamscher, Walter
We wish to design a diagnostic for a device from knowledge of its structure and function. The diagnostic should achieve coverage of the faults that can occur in the device, and should strive for specificity in its diagnosis when it detects a fault. A system is described that uses a simple model of hardware structure and function, representing the device in terms of its internal primitive functions and connections. The system designs a diagnostic in three steps. First, an extension of path sensitization is used to design a test for each of the connections in the device. Next, the resulting tests are improved by increasing their specificity. Finally, the tests are ordered so that each relies on the fewest possible connections. We describe an implementation of this system and show examples of the results for some simple devices.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6867</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Algorithm for Parsing Flow Graphs</title>
<link>https://hdl.handle.net/1721.1/6866</link>
<description>An Algorithm for Parsing Flow Graphs
Brotsky, Daniel Carl
This report describes research about flow  graphs - labeled, directed, acyclic graphs  which abstract representations used in a  variety of Artificial Intelligence applications.  Flow graphs may be derived from flow  grammars much as strings may be derived  from string grammars; this derivation process  forms a useful model for the stepwise  refinement processes used in programming  and other engineering domains. The central  result of this report is a parsing algorithm for  flow graphs. Given a flow grammar and a flow  graph, the algorithm determines whether the  grammar generates the graph and, if so, finds  all possible derivations for it. The author has  implemented the algorithm in LISP. The intent  of this report is to make flow-graph parsing  available as an analytic tool for researchers in  Artificial Intelligence. The report explores the  intuitions behind the parsing algorithm,  contains numerous, extensive examples of its  behavior, and provides some guidance for  those who wish to customize the algorithm to  their own uses.
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6866</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simple Model of Circuit Design</title>
<link>https://hdl.handle.net/1721.1/6865</link>
<description>A Simple Model of Circuit Design
Roylance, Gerald
A simple analog circuit designer has been implemented as a rule-based system. The system can design voltage followers, Miller integrators, and bootstrap ramp generators from functional descriptions of what these circuits do. While the designer works in a simple domain where all components are ideal, it demonstrates the abilities of skilled designers. While the domain is electronics, the design ideas are useful in many other engineering domains, such as mechanical engineering, chemical engineering, and numerical programming. Most circuit design systems are given the circuit schematic and use arithmetic constraints to select component values. This circuit designer is different because it designs the schematic. The designer uses a unidirectional CONTROL relation to find the schematic. The circuit designs are built around this relation; it restricts the search space, assigns purposes to components and finds design bugs.
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6865</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Computer Games to Facilitate Learning</title>
<link>https://hdl.handle.net/1721.1/6864</link>
<description>Designing Computer Games to Facilitate Learning
White, Barbara Y.
The aim of this thesis was to explore the  design of interactive computer learning  environments. The particular learning domain  selected was Newtonian dynamics.  Newtonian dynamics was chosen because it  is an important area of physics with which  many students have difficulty and because  controlling Newtonian motion takes  advantage of the computer's graphics and  interactive capabilities. The learning  environment involved games which simulated  the motion of a spaceship on a display  screen. The purpose of the games was to  focus the students' attention on various  aspects of the implications of Newton's laws.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6864</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Study of Qualitative and Geometric Knowledge in Reasoning about Motion</title>
<link>https://hdl.handle.net/1721.1/6863</link>
<description>A Study of Qualitative and Geometric Knowledge in Reasoning about Motion
Forbus, Kenneth D.
Reasoning about motion is an important part  of our commonsense knowledge, involving  fluent spatial reasoning. This work studies the  qualitative and geometric knowledge required  to reason in a world that consists of balls  moving through space constrained by  collisions with surfaces, including dissipative  forces and multiple moving objects. An analog  geometry representation serves the program  as a diagram, allowing many spatial  questions to be answered by numeric  calculation. It also provides the foundation for  the construction and use of place vocabulary,  the symbolic descriptions of space required to  do qualitative reasoning about motion in the  domain. The actual motion of a ball is  described as a network consisting of  descriptions of qualitatively distinct types of  motion. Implementing the elements of these  networks in a constraint language allows the  same elements to be used for both analysis  and simulation of motion. A qualitative  description of the actual motion is also used  to check the consistency of assumptions  about motion. A process of qualitative  simulation is used to describe the kinds of  motion possible from some state. The  ambiguity inherent in such a description can  be reduced by assumptions about physical  properties of the ball or assumptions about its  motion. Each assumption directly rules out  some kinds of motion, but other knowledge is  required to determine the indirect  consequences of making these assumptions.  Some of this knowledge is domain dependent  and relies heavily on spatial descriptions.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6863</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent Behavior from Incoherent Knowledge Sources in the Automatic Synthesis of Numerical Computer Programs</title>
<link>https://hdl.handle.net/1721.1/6862</link>
<description>Coherent Behavior from Incoherent Knowledge Sources in the Automatic Synthesis of Numerical Computer Programs
Brown, Richard
A fundamental problem in artificial intelligence  is obtaining coherent behavior in rule-based  problem solving systems. A good quantitative  measure of coherence is time behavior; a  system that never, in retrospect, applied a rule  needlessly is certainly coherent; a system  suffering from combinatorial blowup is  certainly behaving incoherently. This report  describes a rule-based problem solving  system for automatically writing and improving  numerical computer programs from  specifications. The specifications are in terms  of "constraints" among inputs and outputs.  The system has solved program synthesis  problems involving systems of equations,  determining that methods of successive  approximation converge, transforming  recursion to iteration, and manipulating power  series (using differing organizations, control  structures, and argument-passing  techniques).
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6862</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape from Contour</title>
<link>https://hdl.handle.net/1721.1/6861</link>
<description>Shape from Contour
Witkin, Andrew P.
The problem of using image contours to infer the shapes and orientations of surfaces is treated as a problem of statistical estimation. The basis for solving this problem lies in an understanding of the geometry of contour formation, coupled with simple statistical models of the contour generating process. This approach is first applied to the special case of surfaces known to be planar. The distortion of contour shape imposed by projection is treated as a signal to be estimated, and variations of non-projective origin are treated as noise. The resulting method is then extended to the estimation of curved surfaces, and applied successfully to natural images. Next, the geometric treatment is further extended by relating contour curvature to surface curvature, using cast shadows as a model for contour generation. This geometric relation, combined with a statistical model, provides a measure of goodness-of-fit between a surface and an image contour. The goodness-of-fit measure is applied to the problem of establishing registration between an image and a surface model. Finally, the statistical estimation strategy is experimentally compared to human perception of orientation: human observers' judgements of tilt correspond closely to the estimates produced by the planar strategy.
</description>
<pubDate>Sat, 01 Nov 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6861</guid>
<dc:date>1980-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Assembly Using Feature Localization</title>
<link>https://hdl.handle.net/1721.1/6860</link>
<description>Automated Assembly Using Feature Localization
Gordon, Steven Jeffrey
Automated assembly of mechanical devices is studied by researching methods of operating assembly equipment in a variable manner; that is, systems which may be configured to perform many different assembly operations are studied. The general parts assembly operation involves the removal of alignment errors within some tolerance and without damaging the parts. Two methods for eliminating alignment errors are discussed: a priori suppression, and measurement and removal. Both methods are studied, with the more novel measurement and removal technique being studied in greater detail. During the study of this technique, a fast and accurate six degree-of-freedom position sensor based on a light-stripe vision technique was developed. Specifications for the sensor were derived from an assembly-system error analysis. Studies on extracting accurate information from the sensor by optimally reducing redundant information, filtering quantization noise, and careful calibration procedures were performed. Prototype assembly systems for both error elimination techniques were implemented and used to assemble several products. The assembly system based on the a priori suppression technique uses a number of mechanical assembly tools and software systems which extend the capabilities of industrial robots. The need for the tools was determined through an assembly task analysis of several consumer and automotive products. The assembly system based on the measurement and removal technique used the six degree-of-freedom position sensor to measure part misalignments. Robot commands for aligning the parts were automatically calculated based on the sensor data and executed.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6860</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Coupled Depth/Slope Approach to Surface Reconstruction</title>
<link>https://hdl.handle.net/1721.1/6859</link>
<description>The Coupled Depth/Slope Approach to Surface Reconstruction
Harris, John G.
Reconstructing a surface from sparse sensory data is a well known problem in computer vision. Early vision modules typically supply sparse depth, orientation and discontinuity information. The surface reconstruction module incorporates these sparse and possibly conflicting measurements of a surface into a consistent, dense depth map. The coupled depth/slope model developed here provides a novel computational solution to the surface reconstruction problem. This method explicitly computes dense slope representations as well as dense depth representations. This marked change from previous surface reconstruction algorithms allows a natural integration of orientation constraints into the surface description, a feature not easily incorporated into earlier algorithms. In addition, the coupled depth/slope model generalizes to allow for varying amounts of smoothness at different locations on the surface. This computational model helps conceptualize the problem and leads to two possible implementations: analog and digital. The model can be implemented as an electrical or biological analog network, since the only computations required at each locally connected node are averages, additions and subtractions. A parallel digital algorithm can be derived by using finite difference approximations. The resulting system of coupled equations can be solved iteratively on a mesh-of-processors computer, such as the Connection Machine. Furthermore, concurrent multi-grid methods are designed to speed the convergence of this digital algorithm.
</description>
<pubDate>Sun, 01 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6859</guid>
<dc:date>1986-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Roles of Knowledge in Motor Learning</title>
<link>https://hdl.handle.net/1721.1/6858</link>
<description>Roles of Knowledge in Motor Learning
Atkeson, Christopher Granger
The goal of this thesis is to apply the  computational approach to motor learning,  i.e., describe the constraints that enable  performance improvement with experience  and also the constraints that must be  satisfied by a motor learning system, describe  what is being computed in order to achieve  learning, and why it is being computed. The  particular tasks used to assess motor  learning are loaded and unloaded free arm  movement, and the thesis includes work on  rigid body load estimation, arm model  estimation, optimal filtering for model  parameter estimation, and trajectory learning  from practice. Learning algorithms have been  developed and implemented in the context of  robot arm control. The thesis demonstrates  some of the roles of knowledge in learning.  Powerful generalizations can be made on the  basis of knowledge of system structure, as is  demonstrated in the load and arm model  estimation algorithms. Improving the  performance of parameter estimation  algorithms used in learning involves  knowledge of the measurement noise  characteristics, as is shown in the derivation  of optimal filters. Using trajectory errors to  correct commands requires knowledge of  how command errors are transformed into  performance errors, i.e., an accurate model of  the dynamics of the controlled system, as is  demonstrated in the trajectory learning work.  The performance demonstrated by the  algorithms developed in this thesis should be  compared with algorithms that use less  knowledge, such as table based schemes to  learn arm dynamics, previous single trajectory  learning algorithms, and much of traditional  adaptive control.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6858</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning and Teaching Compliant Motion Strategies</title>
<link>https://hdl.handle.net/1721.1/6857</link>
<description>Planning and Teaching Compliant Motion Strategies
Buckley, Stephen J.
This thesis presents a new high level robot programming system. The programming system can be used to construct strategies consisting of compliant motions, in which a moving robot slides along obstacles in its environment. The programming system is referred to as high level because the user is spared many robot-level details, such as the specification of conditional tests, motion termination conditions, and compliance parameters. Instead, the user specifies task-level information, including a geometric model of the robot and its environment. The user may also have to specify some suggested motions. There are two main system components. The first component is an interactive teaching system which accepts motion commands from a user and attempts to build a compliant motion strategy using the specified motions as building blocks. The second component is an autonomous compliant motion planner, which is intended to spare the user from dealing with "simple" problems. The planner simplifies the representation of the environment by decomposing the configuration space of the robot into a finite state space, whose states are vertices, edges, faces, and combinations thereof. States are linked to each other by arcs, which represent reliable compliant motions. Using best-first search, states are expanded until a strategy is found from the start state to a goal state. This component represents one of the first implemented compliant motion planners. The programming system has been implemented on a Symbolics 3600 computer, and tested on several examples. One of the resulting compliant motion strategies was successfully executed on an IBM 7565 robot manipulator.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6857</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Program Recognition</title>
<link>https://hdl.handle.net/1721.1/6856</link>
<description>Automated Program Recognition
Wills, Linda M.
The key to understanding a program is  recognizing familiar algorithmic fragments  and data structures in it. Automating this  recognition process will make it easier to  perform many tasks which require program  understanding, e.g., maintenance,  modification, and debugging. This report  describes a recognition system, called the  Recognizer, which automatically identifies  occurrences of stereotyped computational  fragments and data structures in programs.  The Recognizer is able to identify these  familiar fragments and structures, even  though they may be expressed in a wide  range of syntactic forms. It does so  systematically and efficiently by using a  parsing technique. Two important advances  have made this possible. The first is a  language-independent graphical  representation for programs and  programming structures which canonicalizes  many syntactic features of programs. The  second is an efficient graph parsing  algorithm.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6856</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>ARLO: Another Representation Language Offer</title>
<link>https://hdl.handle.net/1721.1/6855</link>
<description>ARLO: Another Representation Language Offer
Haase, Kenneth W., Jr.
This paper describes ARLO, a representation language loosely modelled after Greiner and Lenat's RLL-1. ARLO is a structure-based representation language for describing structure-based representation languages, including itself. A given representation language is specified in ARLO by a collection of structures describing how its descriptions are interpreted, defaulted, and verified. This high level description is compiled into Lisp code and ARLO structures whose interpretation fulfills the specified semantics of the representation. In addition, ARLO itself - as a representation language for expressing and compiling partial and complete language specifications - is described and interpreted in the same manner as the language it describes and implements. This self-description can be extended or modified to expand or alter the expressive power of ARLO's initial configuration. Languages which describe themselves, like ARLO, provide powerful media for systems which perform automatic self-modification, optimization, debugging, or documentation. AI systems implemented in such a self-descriptive language can reflect on their own capabilities and limitations, applying general learning and problem solving strategies to enlarge or alleviate them.
</description>
<pubDate>Wed, 01 Oct 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6855</guid>
<dc:date>1986-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Image Chunking: Defining Spatial Building Blocks for Scene Analysis</title>
<link>https://hdl.handle.net/1721.1/6854</link>
<description>Image Chunking: Defining Spatial Building Blocks for Scene Analysis
Mahoney, James V.
Rapid judgments about the properties and spatial relations of objects are the crux of visually guided interaction with the world. Vision begins, however, with essentially pointwise representations of the scene, such as arrays of pixels or small edge fragments. For adequate time-performance in recognition, manipulation, navigation, and reasoning, the processes that extract meaningful entities from the pointwise representations must exploit parallelism. This report develops a framework for the fast extraction of scene entities, based on a simple, local model of parallel computation. An image chunk is a subset of an image that can act as a unit in the course of spatial analysis. A parallel preprocessing stage constructs a variety of simple chunks uniformly over the visual array. On the basis of these chunks, subsequent serial processes locate relevant scene components and assemble detailed descriptions of them rapidly. This thesis defines image chunks that facilitate the most potentially time-consuming operations of spatial analysis - boundary tracing, area coloring, and the selection of locations at which to apply detailed analysis. Fast parallel processes for computing these chunks from images, and chunk-based formulations of indexing, tracing, and coloring, are presented. These processes have been simulated and evaluated on the Lisp Machine and the Connection Machine.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6854</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulator Grasping and Pushing Operations</title>
<link>https://hdl.handle.net/1721.1/6853</link>
<description>Manipulator Grasping and Pushing Operations
Mason, Matthew Thomas
The primary goal of this research is to develop theoretical tools for the analysis, synthesis, and application of primitive manipulator operations. The primary method is to extend and apply traditional tools of classical mechanics. The results are of such a general nature that they address many different aspects of industrial robotics, including effector and sensor design, planning and programming tools, and design of auxiliary equipment. Some of the manipulator operations studied are: (1) Grasping an object. The object will usually slide and rotate during the period between first contact and prehension. (2) Placing an object. The object may slip slightly in the fingers upon contact with the table as the base aligns with the table. (3) Pushing. Often the final stage of mating two parts involves pushing one object into the other.
</description>
<pubDate>Tue, 01 Jun 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6853</guid>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>TYPICAL: A Knowledge Representation System for Automated Discovery and Inference</title>
<link>https://hdl.handle.net/1721.1/6852</link>
<description>TYPICAL: A Knowledge Representation System for Automated Discovery and Inference
Haase, Kenneth W., Jr.
TYPICAL is a package for describing and making automatic inferences about a broad class of SCHEME predicate functions. These functions, called types following popular usage, delineate classes of primitive SCHEME objects, composite data structures, and abstract descriptions. TYPICAL types are generated by an extensible combinator language from either existing types or primitive terminals. These generated types are located in a lattice of predicate subsumption which captures necessary entailment between types; if satisfaction of one type necessarily entails satisfaction of another, the first type is below the second in the lattice. The inferences made by TYPICAL compute the position of a new definition within the lattice and establish it there. This information is then accessible to both later inferences and other programs (reasoning systems, code analyzers, etc.) which may need the information for their own purposes. TYPICAL was developed as a representation language for the discovery program Cyrano; particular examples are given of TYPICAL's application in the Cyrano program.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6852</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Error Detection and Recovery for Robot Motion Planning with Uncertainty</title>
<link>https://hdl.handle.net/1721.1/6851</link>
<description>Error Detection and Recovery for Robot Motion Planning with Uncertainty
Donald, Bruce Randall
Robots must plan and execute tasks in the  presence of uncertainty. Uncertainty arises  from sensing errors, control errors, and  uncertainty in the geometry of the  environment. The last, which is called model  error, has received little previous attention. We  present a framework for computing motion  strategies that are guaranteed to succeed in  the presence of all three kinds of uncertainty.  The motion strategies comprise sensor-based gross motions, compliant motions,  and simple pushing motions.
</description>
<pubDate>Wed, 01 Jul 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6851</guid>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning by Failing to Explain</title>
<link>https://hdl.handle.net/1721.1/6850</link>
<description>Learning by Failing to Explain
Hall, Robert Joseph
Explanation-based Generalization requires  that the learner obtain an explanation of why a  precedent exemplifies a concept. It is,  therefore, useless if the system fails to find  this explanation. However, it is not necessary  to give up and resort to purely empirical  generalization methods. In fact, the system  may already know almost everything it needs  to explain the precedent. Learning by Failing  to Explain is a method which is able to exploit  current knowledge to prune complex  precedents, isolating the mysterious parts of  the precedent. The idea has two parts: the  notion of partially analyzing a precedent to get  rid of the parts which are already explainable,  and the notion of re-analyzing old rules in  terms of new ones, so that more general  rules are obtained.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6850</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Synthesis of Stable Force-Closure Grasps</title>
<link>https://hdl.handle.net/1721.1/6849</link>
<description>The Synthesis of Stable Force-Closure Grasps
Nguyen, Van-Duc
This thesis addresses the problem of synthesizing grasps that are force-closure and stable. The synthesis of force-closure grasps constructs independent regions of contact for the fingertips, such that the motion of the grasped object is totally constrained. The synthesis of stable grasps constructs virtual springs at the contacts, such that the grasped object is stable, and has a desired stiffness matrix about its stable equilibrium. A grasp on an object is force-closure if and only if we can exert, through the set of contacts, arbitrary forces and moments on the object. So force-closure implies that equilibrium exists, because zero force and moment are spanned. In the reverse direction, we prove that a non-marginal equilibrium grasp is also a force-closure grasp, if it has at least two point contacts with friction in 2D, or two soft-finger contacts or three hard-finger contacts in 3D. Next, we prove that all force-closure grasps can be made stable, by using either active or passive springs at the contacts. The thesis develops a simple relation between the stability and stiffness of the grasp and the spatial configuration of the virtual springs at the contacts. The stiffness of the grasp depends also on whether the points of contact stick, or slide without friction on straight or curved surfaces of the object. The thesis presents fast and simple algorithms for directly constructing stable force-closure grasps based on the shape of the grasped object. The formal framework of force-closure and stable grasps provides a partial explanation of why we stably grasp objects so easily, and of why our fingers are better soft than hard.
</description>
<pubDate>Tue, 01 Jul 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6849</guid>
<dc:date>1986-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contact Sensors for Dexterous Robotic Hands</title>
<link>https://hdl.handle.net/1721.1/6848</link>
<description>Contact Sensors for Dexterous Robotic Hands
Siegel, David Mark
This thesis examines a tactile sensor and a thermal sensor for use with the Utah-MIT dexterous four-fingered hand. Sensory feedback is critical for full utilization of its advanced manipulatory capabilities. The hand itself provides tendon tension and joint angle information. However, planned control algorithms require more information than these sources can provide. The tactile sensor utilizes capacitive transduction with a novel design based entirely on silicone elastomers. It provides an 8 x 8 array of force cells with 1.9 mm center-to-center spacing. A pressure resolution of 8 significant bits is available over a 0 to 200 grams per square mm range. The thermal sensor measures a material's heat conductivity by radiating heat into an object and measuring the resulting temperature variations. This sensor has a 4 x 4 array of temperature cells with 3.5 mm center-to-center spacing. Experiments show that the thermal sensor can discriminate among materials by detecting differences in their thermal conduction properties. Both sensors meet the stringent mounting requirements posed by the Utah-MIT hand. Combining them to form a sensor with both tactile and thermal capabilities will ultimately be possible. The computational requirements for controlling a sensor-equipped dexterous hand are severe. Conventional single processor computers do not provide adequate performance. To overcome these difficulties, a computational architecture based on interconnecting high performance microcomputers and a set of software primitives tailored for sensor driven control has been proposed. The system has been implemented and tested on the Utah-MIT hand. The hand, equipped with tactile and thermal sensors and controlled by its computational architecture, is one of the most advanced robotic manipulatory devices available worldwide.
Other ongoing projects  will exploit these tools and allow the hand to  perform tasks that exceed the capabilities of  current generation robots.
</description>
<pubDate>Sun, 01 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6848</guid>
<dc:date>1986-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heuristics for Job-Shop Scheduling</title>
<link>https://hdl.handle.net/1721.1/6847</link>
<description>Heuristics for Job-Shop Scheduling
Pasch, Kenneth Alan
Two methods of obtaining approximate solutions to the classic General Job-Shop Scheduling Problem are investigated. The first method is iterative. A sampling of the solution space is used to decide which of a collection of space pruning constraints are consistent with "good" schedules. The selected space pruning constraints are then used to reduce the search space and the sampling is repeated. This approach can be used either to verify whether some set of space pruning constraints can prune with discrimination or to generate solutions directly. Schedules can be represented as trajectories through a Cartesian space. Under the objective criterion of Minimum Maximum Lateness, a family of "good" schedules (trajectories) are geometric neighbors (reside within some "tube") in this space. The second method of generating solutions takes advantage of this adjacency by pruning the space from the outside in, thus converging gradually upon this "tube." On average this method significantly outperforms an array of the Priority Dispatch rules when the objective criterion is that of Minimum Maximum Lateness. It also compares favorably with a recent relaxation procedure.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6847</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theories of Comparative Analysis</title>
<link>https://hdl.handle.net/1721.1/6846</link>
<description>Theories of Comparative Analysis
Weld, Daniel S.
Comparative analysis is the problem of  predicting how a system will react to  perturbations in its parameters, and why. For  example, comparative analysis could be  asked to explain why the period of an  oscillating spring/block system would  increase if the mass of the block were larger.  This thesis formalizes the task of comparative  analysis and presents two solution  techniques: differential qualitative (DQ)  analysis and exaggeration. Both techniques  solve many comparative analysis problems,  providing explanations suitable for use by  design systems, automated diagnosis,  intelligent tutoring systems, and explanation  based generalization. This thesis explains  the theoretical basis for each technique,  describes how they are implemented, and  discusses the difference between the two.  DQ analysis is sound; it never generates an  incorrect answer to a comparative analysis  question. Although exaggeration does  occasionally produce misleading answers, it  solves a larger class of problems than DQ  analysis and frequently results in simpler  explanations.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6846</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computation and Pre-Parametric Design</title>
<link>https://hdl.handle.net/1721.1/6845</link>
<description>Computation and Pre-Parametric Design
Ulrich, Karl T.
My work is broadly concerned with the question "How can designs be synthesized computationally?" The project deals primarily with mechanical devices and focuses on pre-parametric design: design at the level of detail of a blackboard sketch rather than at the level of detail of an engineering drawing. I explore the project ideas in the domain of single-input single-output dynamic systems, like pressure gauges, accelerometers, and pneumatic cylinders. The problem solution consists of two steps: 1) generate a schematic description of the device in terms of idealized functional elements, and then 2) from the schematic description generate a physical description.
</description>
<pubDate>Thu, 01 Sep 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6845</guid>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Residual Vibration Reduction in Computer Controlled Machines</title>
<link>https://hdl.handle.net/1721.1/6844</link>
<description>Residual Vibration Reduction in Computer Controlled Machines
Singer, Neil C.
Control of machines that exhibit flexibility becomes important when designers attempt to push the state of the art with faster, lighter machines. Three steps are necessary for the control of a flexible plant. First, a good model of the plant must exist. Second, a good controller must be designed. Third, inputs to the controller must be constructed using knowledge of the system dynamic response. There is a great deal of literature pertaining to modeling and control but little dealing with the shaping of system inputs. Chapter 2 examines two input shaping techniques based on frequency domain analysis. The first involves the use of the first derivative of a Gaussian exponential as a driving function template. The second, acausal filtering, involves removal of energy from the driving functions at the resonant frequencies of the system. Chapter 3 presents a linear programming technique for generating vibration-reducing driving functions for systems. Chapter 4 extends the results of the previous chapter by developing a direct solution to the new class of driving functions. A detailed analysis of the new technique is presented from five different perspectives and several extensions are presented. Chapter 5 verifies the theories of the previous two chapters with hardware experiments. Because the new technique resembles common signal filtering, chapter 6 compares the new approach to eleven standard filters. The new technique will be shown to result in less residual vibration, have better robustness to system parameter uncertainty, and require less computation than other currently used shaping techniques.
</description>
<pubDate>Wed, 01 Feb 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6844</guid>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of Vibration in Mechanical Systems Using Shaped Reference Inputs</title>
<link>https://hdl.handle.net/1721.1/6843</link>
<description>Control of Vibration in Mechanical Systems Using Shaped Reference Inputs
Meckl, Peter Heinrich
Dynamic systems which undergo rapid motion can excite natural frequencies that lead to residual vibration at the end of motion. This work presents a method to shape force profiles that reduce excitation energy at the natural frequencies in order to reduce residual vibration for fast moves. Such profiles are developed using a ramped sinusoid function and its harmonics, choosing coefficients to reduce spectral energy at the natural frequencies of the system. To improve robustness with respect to parameter uncertainty, spectral energy is reduced for a range of frequencies surrounding the nominal natural frequency. An additional set of versine profiles is also constructed to permit motion at constant speed for velocity-limited systems. These shaped force profiles are incorporated into a simple closed-loop system with position and velocity feedback. The force input is doubly integrated to generate a shaped position reference for the controller to follow. This control scheme is evaluated on the MIT Cartesian Robot. The shaped inputs generate motions with minimum residual vibration when actuator saturation is avoided. Feedback control compensates for the effect of friction. Using only knowledge of the natural frequencies of the system to shape the force inputs, vibration can also be attenuated in modes which vibrate in directions other than the motion direction. When moving several axes, the use of shaped inputs allows minimum residual vibration even when the natural frequencies are dynamically changing by a limited amount.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6843</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining Associational and Causal Reasoning to Solve Interpretation and Planning Problems</title>
<link>https://hdl.handle.net/1721.1/6842</link>
<description>Combining Associational and Causal Reasoning to Solve Interpretation and Planning Problems
Simmons, Reid G.
This report describes a paradigm for  combining associational and causal  reasoning to achieve efficient and robust  problem-solving behavior. The Generate,  Test and Debug (GTD) paradigm generates  initial hypotheses using associational  (heuristic) rules. The tester verifies  hypotheses, supplying the debugger with  causal explanations for bugs found if the test  fails. The debugger uses domain-independent causal reasoning techniques to  repair hypotheses, analyzing domain models  and the causal explanations produced by the  tester to determine how to replace faulty  assumptions made by the generator. We  analyze the strengths and weaknesses of  associational and causal reasoning  techniques, and present a theory of  debugging plans and interpretations. The  GTD paradigm has been implemented and  tested in the domains of geologic  interpretation, the blocks world, and Tower of  Hanoi problems.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6842</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three-Dimensional Recognition of Solid Objects from a Two-Dimensional Image</title>
<link>https://hdl.handle.net/1721.1/6841</link>
<description>Three-Dimensional Recognition of Solid Objects from a Two-Dimensional Image
Huttenlocher, Daniel Peter
This thesis addresses the problem of  recognizing solid objects in the three-dimensional world, using two-dimensional  shape information extracted from a single  image. Objects can be partly occluded and  can occur in cluttered scenes. A model based  approach is taken, where stored models are  matched to an image. The matching problem  is separated into two stages, which employ  different representations of objects. The first  stage uses the smallest possible number of  local features to find transformations from a  model to an image. This minimizes the  amount of search required in recognition. The  second stage uses the entire edge contour of  an object to verify each transformation. This  reduces the chance of finding false matches.
</description>
<pubDate>Sat, 01 Oct 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6841</guid>
<dc:date>1988-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Qualitative Analysis of Ordinary Differential Equations Using Piecewise Linear Approximations</title>
<link>https://hdl.handle.net/1721.1/6840</link>
<description>Automatic Qualitative Analysis of Ordinary Differential Equations Using Piecewise Linear Approximations
Sacks, Elisha
This paper explores automating the qualitative analysis of physical systems. It describes a program, called PLR, that takes parameterized ordinary differential equations as input and produces a qualitative description of the solutions for all initial values. PLR approximates intractable nonlinear systems with piecewise linear ones, analyzes the approximations, and draws conclusions about the original systems. It chooses approximations that are accurate enough to reproduce the essential properties of their nonlinear prototypes, yet simple enough to be analyzed completely and efficiently.  It derives additional properties, such as boundedness or periodicity, by theoretical methods. I demonstrate PLR on several common nonlinear systems and on published examples from mechanical engineering.
</description>
<pubDate>Tue, 01 Mar 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6840</guid>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hypothesizing Device Mechanisms: Opening Up the Black Box</title>
<link>https://hdl.handle.net/1721.1/6839</link>
<description>Hypothesizing Device Mechanisms: Opening Up the Black Box
Doyle, Richard James
I describe an approach to forming hypotheses  about hidden mechanism configurations  within devices given external observations  and a vocabulary of primitive mechanisms. An  implemented causal modelling system called  JACK constructs explanations for why a  second piece of toast comes out lighter, why  the slide in a tire gauge does not slip back  inside when the gauge is removed from the  tire, and how in a refrigerator a single  substance can serve as a heat sink for the  interior and a heat source for the exterior. I  report the number of hypotheses admitted for  each device example, and provide empirical  results which isolate the pruning power due to  different constraint sources.
</description>
<pubDate>Wed, 01 Jun 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6839</guid>
<dc:date>1988-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dependency-Directed Localization of Software Bugs</title>
<link>https://hdl.handle.net/1721.1/6838</link>
<description>Dependency-Directed Localization of Software Bugs
Kuper, Ron I.
Software bugs are violated specifications.  Debugging is the process that culminates in  repairing a program so that it satisfies its  specification. An important part of debugging  is localization, whereby the smallest region of  the program that manifests the bug is found.  The Debugging Assistant (DEBUSSI)  localizes bugs by reasoning about logical  dependencies. DEBUSSI manipulates the  assumptions that underlie a bug  manifestation, eventually localizing the bug to  one particular assumption. At the same time,  DEBUSSI acquires specification information,  thereby extending its understanding of the  buggy program. The techniques used for  debugging fully implemented code are also  appropriate for validating partial designs.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6838</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Test Generation Guided Design for Testability</title>
<link>https://hdl.handle.net/1721.1/6837</link>
<description>Test Generation Guided Design for Testability
Wu, Peng
This thesis presents a new approach to building a design for testability (DFT) system. The system takes a digital circuit description, identifies the problems in testing it, and suggests circuit modifications to correct those problems. The key contributions of the thesis research are (1) setting design for testability in the context of test generation (TG), (2) using failures during TG to focus on testability problems, and (3) relating circuit modifications directly to the failures. A natural functionality set is used to represent the maximum functionalities that a component can have. The current implementation has only primitive domain knowledge and needs further work. However, armed with the knowledge of TG, it has already demonstrated its ability and produced some interesting results on a simple microprocessor.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6837</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizing on Multiple Grounds: Performance Learning in Model-Based Technology</title>
<link>https://hdl.handle.net/1721.1/6836</link>
<description>Generalizing on Multiple Grounds: Performance Learning in Model-Based Technology
Resnick, Paul
This thesis explores ways to augment a  model-based diagnostic program with a  learning component, so that it speeds up as it  solves problems. Several learning  components are proposed, each exploiting a  different kind of similarity between diagnostic  examples. Through analysis and  experiments, we explore the effect each  learning component has on the performance  of a model-based diagnostic program. We  also analyze more abstractly the performance  effects of Explanation-Based Generalization,  a technology that is used in several of the  proposed learning components.
</description>
<pubDate>Wed, 01 Feb 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6836</guid>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Effect of Transmission Design on Force-Controlled Manipulator Performance</title>
<link>https://hdl.handle.net/1721.1/6835</link>
<description>The Effect of Transmission Design on Force-Controlled Manipulator Performance
Townsend, William T. (William Thomas)
Previous research in force control has  focused on the choice of appropriate servo  implementation without corresponding regard  to the choice of mechanical hardware. This  report analyzes the effect of mechanical  properties such as contact compliance,  actuator-to-joint compliance, torque ripple,  and highly nonlinear dry friction in the  transmission mechanisms of a manipulator.  A set of requisites for high performance then  guides the development of mechanical-design and servo strategies for improved  performance. A single-degree-of-freedom  transmission testbed was constructed that  confirms the predicted effect of Coulomb  friction on robustness; design and  construction of a cable-driven, four-degree-of- freedom, "whole-arm" manipulator illustrates  the recommended design strategies.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6835</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dexterous Robotic Hands: Kinematics and Control</title>
<link>https://hdl.handle.net/1721.1/6834</link>
<description>Dexterous Robotic Hands: Kinematics and Control
Narasimhan, Sundar
This report presents issues relating to the  kinematics and control of dexterous robotic  hands using the Utah-MIT hand as an  illustrative example. The emphasis  throughout is on the actual implementation  and testing of the theoretical concepts  presented. The kinematics of such hands is  interesting and complicated owing to the large  number of degrees of freedom involved. The  implementation of position and force control  algorithms on such tendon driven hands has  previously suffered from inefficient  formulations and a lack of sophisticated  computer hardware. Both these problems  are addressed in this report.  A multiprocessor architecture has been built  with high performance microcomputers on  which real-time algorithms can be  efficiently implemented. A large software  library has also been built to facilitate flexible  software development on this architecture.  The position and force control algorithms  described herein have been implemented  and tested on this hardware.
</description>
<pubDate>Tue, 01 Nov 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6834</guid>
<dc:date>1988-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Knowledge in Visual Shape Representation</title>
<link>https://hdl.handle.net/1721.1/6833</link>
<description>The Role of Knowledge in Visual Shape Representation
Saund, Eric
This report shows how knowledge about the visual world can be built into a shape representation in the form of a descriptive vocabulary making explicit the important geometrical relationships comprising objects' shapes. Two computational tools are offered: (1) shape tokens are placed on a Scale-Space Blackboard, and (2) dimensionality reduction captures deformation classes in configurations of tokens. Knowledge lies in the token types and deformation classes tailored to the constraints and regularities of particular shape worlds. A hierarchical shape vocabulary has been implemented supporting several later visual tasks in the two-dimensional shape domain of the dorsal fins of fishes.
</description>
<pubDate>Sat, 01 Oct 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6833</guid>
<dc:date>1988-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multistep Methods for Integrating the Solar System</title>
<link>https://hdl.handle.net/1721.1/6832</link>
<description>Multistep Methods for Integrating the Solar System
Skordos, Panayotis S.
High order multistep methods, run at constant stepsize, are very effective for integrating the Newtonian solar system for extended periods of time. I have studied the stability and error growth of these methods when applied to harmonic oscillators and two-body systems like the Sun-Jupiter pair. I have also tried to design better multistep integrators than the traditional Störmer and Cowell methods, and I have found a few interesting ones.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6832</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Interpreting Stereo Disparity</title>
<link>https://hdl.handle.net/1721.1/6831</link>
<description>On Interpreting Stereo Disparity
Wildes, Richard P.
The problems under consideration center  around the interpretation of binocular stereo  disparity. In particular, the goal is to establish  a set of mappings from stereo disparity to  corresponding three-dimensional scene  geometry. An analysis has been developed  that shows how disparity information can be  interpreted in terms of three-dimensional  scene properties, such as surface  depth, discontinuities, and orientation. These  theoretical developments have been  embodied in a set of computer algorithms  for the recovery of scene geometry from input  stereo disparity. The results of applying these  algorithms to several disparity maps  are presented. Comparisons are made to the  interpretation of stereo disparity by biological  systems.
</description>
<pubDate>Wed, 01 Feb 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6831</guid>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Circuit Tests by Exploiting Designed Behavior</title>
<link>https://hdl.handle.net/1721.1/6830</link>
<description>Generating Circuit Tests by Exploiting Designed Behavior
Shirley, Mark Harper
This thesis describes two programs for  generating tests for digital circuits that exploit  several kinds of expert knowledge not used by  previous approaches. First, many test  generation problems can be solved efficiently  using operation relations, a novel  representation of circuit behavior that  connects internal component operations with  directly executable circuit operations.  Operation relations can be computed  efficiently by searching traces of simulated  circuit behavior. Second, experts write test  programs rather than test vectors because  programs are more readable and compact.  Test programs can be constructed  automatically by merging program fragments  using expert-supplied goal-refinement rules  and domain-independent planning  techniques.
</description>
<pubDate>Thu, 01 Dec 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6830</guid>
<dc:date>1988-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Navigation: Constructing and Utilizing Simple Maps of an Indoor Environment</title>
<link>https://hdl.handle.net/1721.1/6829</link>
<description>Visual Navigation: Constructing and Utilizing Simple Maps of an Indoor Environment
Sarachik, Karen Beth
The goal of this work is to navigate through an office environment using only visual information gathered from four cameras placed onboard a mobile robot. The method is insensitive to physical changes within the room it is inspecting, such as moving objects. Forward and rotational motion vision are used to find doors and rooms, and these can be used to build topological maps. The map is built without the use of odometry or trajectory integration. The long term goal of the project described here is for the robot to build simple maps of its environment and to localize itself within this framework.
</description>
<pubDate>Wed, 01 Mar 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6829</guid>
<dc:date>1989-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Evaluation of the Hamal Parallel Computer</title>
<link>https://hdl.handle.net/1721.1/6828</link>
<description>Design and Evaluation of the Hamal Parallel Computer
Grossman, J.P.
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or  even billions of nodes. Associated with such large systems is a new set of  design challenges. Many problems must be addressed by an architecture in  order for it to be successful; of these, we focus on three in particular.  First, a scalable memory system is required. Second, the network messaging  protocol must be fault-tolerant. Third, the overheads of thread creation,  thread management and synchronization must be extremely low.  This thesis presents the complete system design for Hamal, a shared-memory  architecture which addresses these concerns and is directly scalable to one  million nodes. Virtual memory and distributed objects are implemented in a  manner that requires neither inter-node synchronization nor the storage of  globally coherent translations at each node. We develop a lightweight  fault-tolerant messaging protocol that guarantees message delivery and  idempotence across a discarding network. A number of hardware mechanisms  provide efficient support for massive multithreading and fine-grained  synchronization.  Experiments are conducted in simulation, using a trace-driven network  simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters  for the messaging protocol which optimize performance. A discarding network  is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network.  Our simulations of Hamal demonstrate the effectiveness of its thread  management and synchronization primitives. In particular, we find  register-based synchronization to be an extremely efficient mechanism which  can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
</description>
<pubDate>Thu, 05 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6828</guid>
<dc:date>2002-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Frequency Representations for Speech Signals</title>
<link>https://hdl.handle.net/1721.1/6827</link>
<description>Time-Frequency Representations for Speech Signals
Riley, Michael D.
This work addresses two related questions. The first question is what joint time-frequency energy representations are most appropriate for auditory signals, in particular, for speech signals in sonorant regions. The quadratic transforms of the signal are examined, a large class that includes, for example, the spectrograms and the Wigner distribution. Quasi-stationarity is not assumed, since this would neglect dynamic regions. A set of desired properties is proposed for the representation: (1) shift-invariance, (2) positivity, (3) superposition, (4) locality, and (5) smoothness. Several relations among these properties are proved: shift-invariance and positivity imply the transform is a superposition of spectrograms; positivity and superposition are equivalent conditions when the transform is real; positivity limits the simultaneous time and frequency resolution (locality) possible for the transform, defining an uncertainty relation for joint time-frequency energy representations; and locality and smoothness trade off by the 2-D generalization of the classical uncertainty relation. The transform that best meets these criteria is derived, which consists of two-dimensionally smoothed Wigner distributions with (possibly oriented) 2-D Gaussian kernels. These transforms are then related to time-frequency filtering, a method for estimating the time-varying 'transfer function' of the vocal tract, which is somewhat analogous to cepstral filtering generalized to the time-varying case. Natural speech examples are provided. The second question addressed is how to obtain a rich, symbolic description of the phonetically relevant features in these time-frequency energy surfaces, the so-called schematic spectrogram. Time-frequency ridges, the 2-D analog of spectral peaks, are one feature that is proposed.
If non-oriented kernels are used for the energy representation, then the ridge tops can be identified with zero-crossings in the inner product of the gradient vector and the direction of greatest downward curvature. If oriented kernels are used, the method can be generalized to give better orientation selectivity (e.g., at intersecting ridges) at the cost of poorer time-frequency locality. Many speech examples are given showing the performance for some traditionally difficult cases: semi-vowels and glides, nasalized vowels, consonant-vowel transitions, female speech, and imperfect transmission channels.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6827</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Modern Differential Geometric Approach to Shape from Shading</title>
<link>https://hdl.handle.net/1721.1/6826</link>
<description>A Modern Differential Geometric Approach to Shape from Shading
Saxberg, Bror V. H.
How the visual system extracts shape  information from a single grey-level image  can be approached by examining how the  information about shape is contained in the  image. This technical report considers the  characteristic equations derived by Horn as a  dynamical system. Certain image critical  points generate dynamical system critical  points. The stable and unstable manifolds of  these critical points correspond to convex and  concave solution surfaces, giving more  general existence and uniqueness results. A  new kind of highly parallel, robust shape from  shading algorithm is suggested on  neighborhoods of these critical points. The  information at bounding contours in the image  is also analyzed.
</description>
<pubDate>Thu, 01 Jun 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6826</guid>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Summarizing Qualitative Behavior from Measurements of Nonlinear Circuits</title>
<link>https://hdl.handle.net/1721.1/6825</link>
<description>Summarizing Qualitative Behavior from Measurements of Nonlinear Circuits
Lee, Michelle Kwok
This report describes a program which  automatically characterizes the behavior of  any driven, nonlinear, electrical circuit. To do  this, the program autonomously selects  interesting input parameters, drives  the circuit, measures its response, performs  a set of numeric computations on the  measured data, interprets the results, and  decomposes the circuit's parameter space  into regions of qualitatively distinct behavior.  The output is a two-dimensional portrait  summarizing the high-level, qualitative  behavior of the circuit for every point in the  graph, an accompanying textual explanation  describing any interesting patterns observed  in the diagram, and a symbolic description of  the circuit's behavior which can be passed on  to other programs for further analysis.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6825</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Theory of Representation Design</title>
<link>https://hdl.handle.net/1721.1/6824</link>
<description>Toward a Theory of Representation Design
Baalen, Jeffrey Van
This research is concerned with designing  representations for analytical reasoning  problems (of the sort found on the GRE and  LSAT). These problems test the ability to draw  logical conclusions. A computer program was  developed that takes as input a  straightforward predicate calculus translation  of a problem, requests additional information  if necessary, decides what to represent and  how, designs representations capturing the  constraints of the problem, and creates and  executes a LISP program that uses those  representations to produce a solution. Even  though these problems are typically difficult  for theorem provers to solve, the LISP  program that uses the  designed representations is very efficient.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6824</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust 2-D Model-Based Object Recognition</title>
<link>https://hdl.handle.net/1721.1/6823</link>
<description>Robust 2-D Model-Based Object Recognition
Cass, Todd A.
Techniques, suitable for parallel implementation, for robust 2D model-based object recognition in the presence of sensor error are studied. Models and scene data are represented as local geometric features, and robust hypothesizing of feature matchings and transformations is considered. Bounds on the error in the image feature geometry are assumed, constraining possible matchings and transformations. Transformation sampling is introduced as a simple, robust, polynomial-time, and highly parallel method of searching the space of transformations to hypothesize feature matchings. Key to the approach is that error in image feature measurement is explicitly accounted for. A Connection Machine implementation and experiments on real images are presented.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6823</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Special-Purpose Computing to Examine Chaotic Behavior in Nonlinear Mappings</title>
<link>https://hdl.handle.net/1721.1/6822</link>
<description>Using Special-Purpose Computing to Examine Chaotic Behavior in Nonlinear Mappings
Nieh, Jason
Studying chaotic behavior in nonlinear systems requires numerous computations in order to simulate the behavior of such systems. The Standard Map Machine was designed and implemented as a special computer for performing these intensive computations with high speed and high precision. Its impressive performance is due to its simple architecture specialized to the numerical computations required of nonlinear systems. This report discusses the design and implementation of the Standard Map Machine and its use in the study of nonlinear mappings; in particular, the study of the standard map.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6822</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Structure of GPSG Models: Revised Generalized Phrase Structure Grammar</title>
<link>https://hdl.handle.net/1721.1/6821</link>
<description>Computational Structure of GPSG Models: Revised Generalized Phrase Structure Grammar
Ristad, Eric Sven
The primary goal of this report is to demonstrate how considerations from computational complexity theory can inform grammatical theorizing. To this end, generalized phrase structure grammar (GPSG) linguistic theory is revised so that its power more closely matches the limited ability of an ideal speaker-hearer: GPSG Recognition is EXP-POLY time hard, while Revised GPSG Recognition is NP-complete. A second goal is to provide a theoretical framework within which to better understand the wide range of existing GPSG models, embodied in formal definitions as well as in implemented computer programs. A grammar for English and an informal explanation of the GPSG/RGPSG syntactic features are included in appendices.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6821</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamically Stable Legged Locomotion (September 1985-September 1989)</title>
<link>https://hdl.handle.net/1721.1/6820</link>
<description>Dynamically Stable Legged Locomotion (September 1985-September 1989)
Raibert, Marc H.; Brown, H. Benjamin, Jr.; Chepponis, Michael; Koechling, Jeff; Hodgins, Jessica K.; Dustman, Diane; Brennan, W. Kevin; Barrett, David S.; Thompson, Clay M.; Hebert, John Daniell; Lee, Woojin; Borvansky, Lance
This report documents our work in exploring  active balance for dynamic legged systems for  the period from September 1985 through  September 1989. The purpose of this  research is to build a foundation of knowledge  that can lead both to the construction of useful  legged vehicles and to a better understanding  of animal locomotion. In this report we focus  on the control of biped locomotion, the use of  terrain footholds, running at high speed, biped  gymnastics, symmetry in running, and the  mechanical design of articulated legs.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6820</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vision, Instruction, and Action</title>
<link>https://hdl.handle.net/1721.1/6819</link>
<description>Vision, Instruction, and Action
Chapman, David
This thesis describes Sonja, a system which  uses instructions in the course of visually-guided activity. The thesis explores an  integration of research in vision, activity, and  natural language pragmatics. Sonja's visual  system demonstrates the use of several  intermediate visual processes, particularly  visual search and routines, previously  proposed on psychophysical grounds. The  computations Sonja performs are compatible  with the constraints imposed by  neuroscientifically plausible hardware.  Although Sonja can operate autonomously, it  can also make flexible use of instructions  provided by a human advisor. The system  grounds its understanding of these  instructions in perception and action.
</description>
<pubDate>Sun, 01 Apr 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6819</guid>
<dc:date>1990-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Acquisition of Evolving Informal Descriptions</title>
<link>https://hdl.handle.net/1721.1/6818</link>
<description>Automated Acquisition of Evolving Informal Descriptions
Reubenstein, Howard B.
The Listener is an automated system that  unintrusively performs knowledge acquisition  from informal input. The Listener develops a  coherent internal representation of a  description from an initial set of disorganized,  imprecise, incomplete, ambiguous, and  possibly inconsistent statements. The  Listener can produce a summary document  from its internal representation to facilitate  communication, review, and validation. A  special purpose Listener, called the  Requirements Apprentice (RA), has been  implemented in the software requirements  acquisition domain. Unlike most other  requirements analysis tools, which start from  a formal description language, the focus of  the RA is on the transition between informal  and formal specifications.
</description>
<pubDate>Fri, 01 Jun 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6818</guid>
<dc:date>1990-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Grasping Problem: Toward Task-Level Programming for an Articulated Hand</title>
<link>https://hdl.handle.net/1721.1/6817</link>
<description>The Grasping Problem: Toward Task-Level Programming for an Articulated Hand
Pollard, Nancy S.
This report presents a system for generating  a stable, feasible, and reachable grasp of a  polyhedral object. A set of contact points on  the object is found that can result in a stable  grasp; a feasible grasp is found in which the  robot contacts the object at those contact  points; and a path is constructed from the  initial configuration of the robot to the stable,  feasible final grasp configuration. The  algorithm described in the report is designed  for the Salisbury hand mounted on a Puma  560 arm, but a similar approach could be  used to develop grasping systems for other  robots.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6817</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model Selection for Solving Kinematics Problems</title>
<link>https://hdl.handle.net/1721.1/6816</link>
<description>Model Selection for Solving Kinematics Problems
Goh, Choon P.
There has been much interest in the area of model-based reasoning within the Artificial Intelligence community, particularly in its application to diagnosis and troubleshooting. The core issue in this thesis, simply put, is: model-based reasoning is fine, but whence the model? Where do the models come from? How do we know we have the right models? What does the right model mean anyway? Our work has three major components. The first component deals with how we determine whether a piece of information is relevant to solving a problem. We have three ways of determining relevance: derivational, situational, and an order-of-magnitude reasoning process. The second component deals with the defining and building of models for solving problems. We identify these models, determine what we need to know about them, and, importantly, determine when they are appropriate. Currently, the system has a collection of four basic models and two hybrid models. This collection of models has been successfully tested on a set of fifteen simple kinematics problems. The third major component of our work deals with how the models are selected.
</description>
<pubDate>Sat, 01 Sep 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6816</guid>
<dc:date>1990-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Improvement by Automatic Redistribution of Intermediate Results</title>
<link>https://hdl.handle.net/1721.1/6815</link>
<description>Program Improvement by Automatic Redistribution of Intermediate Results
Hall, Robert Joseph
Introducing function sharing into designs allows costly structure to be eliminated by adapting existing structure to perform its function. This can eliminate many inefficiencies of reusing general components in specific contexts. "Redistribution of intermediate results" focuses on instances where adaptation requires only the addition or deletion of data flow and the removal of unused code. I show that this approach unifies and extends several well-known optimization classes. The system performs search and screening by deriving, using a novel explanation-based generalization technique, operational filtering predicates from input teleological information. The key advantage is to focus the system's effort on optimizations that are easier to prove safe.
</description>
<pubDate>Fri, 01 Feb 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6815</guid>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Early Detection of Motion Boundaries</title>
<link>https://hdl.handle.net/1721.1/6814</link>
<description>The Early Detection of Motion Boundaries
Spoerri, Anselm
This thesis shows how to detect boundaries on the basis of motion information alone. The detection is performed in two stages: (i) the local estimation of motion discontinuities and of the visual flow field; (ii) the extraction of complete boundaries belonging to differently moving objects. For the first stage, three new methods are presented: the "Bimodality Tests," the "Bi-distribution Test," and the "Dynamic Occlusion Method." The second stage consists of applying the "Structural Saliency Method" of Sha'ashua and Ullman to extract complete and unique boundaries from the output of the first stage. The developed methods can successfully segment complex motion sequences.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6814</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Basis Reduction Algorithms and Subset Sum Problems</title>
<link>https://hdl.handle.net/1721.1/6813</link>
<description>Basis Reduction Algorithms and Subset Sum Problems
LaMacchia, Brian A.
This thesis investigates a new approach to  lattice basis reduction suggested by M.  Seysen. Seysen's algorithm attempts to  globally reduce a lattice basis, whereas the  Lenstra, Lenstra, Lovasz (LLL) family of  reduction algorithms concentrates on local  reductions. We show that Seysen's algorithm  is well suited for reducing certain classes of  lattice bases, and often requires much less  time in practice than the LLL algorithm. We  also demonstrate how Seysen's algorithm for  basis reduction may be applied to subset  sum problems. Seysen's technique, used in  combination with the LLL algorithm, and other  heuristics, enables us to solve a much larger  class of subset sum problems than was  previously possible.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6813</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>MIT Scheme Reference Manual</title>
<link>https://hdl.handle.net/1721.1/6812</link>
<description>MIT Scheme Reference Manual
Hanson, Chris
MIT Scheme is an implementation of the  Scheme programming language that runs on  many popular workstations. The MIT Scheme  Reference Manual describes the special  forms, procedures, and datatypes  provided by the implementation for use by  application programmers.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6812</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiple Mode Vibration Suppression in Controlled Flexible Systems</title>
<link>https://hdl.handle.net/1721.1/6811</link>
<description>Multiple Mode Vibration Suppression in Controlled Flexible Systems
Hyde, James M.
Prior research has led to the development of input command shapers that can reduce residual vibration in single- or multiple-mode flexible systems. We present a method for the development of multiple-mode shapers which are simpler to implement and produce smaller response delays than previous designs. An MIT/NASA experimental flexible structure, MACE, is employed as a test article for the validation of the new shaping method. We examine the results of tests conducted on simulations of MACE. The new shapers are shown to be effective in suppressing multiple-mode vibration, even in the presence of mild kinematic and dynamic non-linearities.
</description>
<pubDate>Wed, 01 May 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6811</guid>
<dc:date>1991-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Parallel Crossbar Routing Chip for a Shared Memory Multiprocessor</title>
<link>https://hdl.handle.net/1721.1/6810</link>
<description>A Parallel Crossbar Routing Chip for a Shared Memory Multiprocessor
Minsky, Henry
This thesis describes the design and implementation of an integrated circuit and associated packaging to be used as the building block for the data routing network of a large-scale shared memory multiprocessor system. A general-purpose multiprocessor depends on high-bandwidth, low-latency communication between computing elements. This thesis describes the design and construction of RN1, a novel self-routing, enhanced crossbar switch, as a CMOS VLSI chip. This chip provides the basic building block for a scalable pipelined routing network with byte-wide data channels. A series of RN1 chips can be cascaded with no additional internal network components to form a multistage fault-tolerant routing switch. The chip is designed to operate at clock frequencies up to 100 MHz using Hewlett-Packard's HP34 1.2 µm process. This aggressive performance goal demands that special attention be paid to optimization of the logic architecture and circuit design.
</description>
<pubDate>Fri, 01 Mar 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6810</guid>
<dc:date>1991-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reliable Interconnection Networks for Parallel Computers</title>
<link>https://hdl.handle.net/1721.1/6809</link>
<description>Reliable Interconnection Networks for Parallel Computers
Dennison, Larry R.
This technical report describes a new protocol, the Unique Token Protocol, for reliable message communication. This protocol eliminates the need for end-to-end acknowledgments and minimizes the communication effort when no dynamic errors occur. Various properties of end-to-end protocols are presented. The Unique Token Protocol solves the associated problems. It eliminates source buffering by maintaining at least two copies of a message in the network. A token is used to decide whether a message was delivered to the destination exactly once. This technical report also presents a possible implementation of the protocol in a wormhole-routed, 3-D mesh network.
</description>
<pubDate>Tue, 01 Oct 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6809</guid>
<dc:date>1991-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Temporal Surface Reconstruction</title>
<link>https://hdl.handle.net/1721.1/6808</link>
<description>Temporal Surface Reconstruction
Heel, Joachim
This thesis investigates the problem of  estimating the three-dimensional structure of  a scene from a sequence of images.  Structure information is recovered from  images continuously using shading, motion  or other visual mechanisms. A Kalman filter  represents structure in a dense depth map.  With each new image, the filter first updates  the current depth map by a minimum variance  estimate that best fits the new image data and  the previous estimate. Then the structure  estimate is predicted for the next time step by  a transformation that accounts for relative  camera motion. Experimental evaluation  shows the significant improvement in quality  and computation time that can be achieved  using this technique.
</description>
<pubDate>Wed, 01 May 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6808</guid>
<dc:date>1991-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Coupled Multi-ALU Processing Node for a Highly Parallel Computer</title>
<link>https://hdl.handle.net/1721.1/6807</link>
<description>A Coupled Multi-ALU Processing Node for a Highly Parallel Computer
Keckler, Stephen W.
This report describes Processor Coupling, a mechanism for controlling multiple ALUs on a single integrated circuit to exploit both instruction-level and inter-thread parallelism. A compiler statically schedules individual threads to discover available intra-thread instruction-level parallelism. The runtime scheduling mechanism interleaves threads, exploiting inter-thread parallelism to maintain high ALU utilization. ALUs are assigned to threads on a cycle-by-cycle basis, and several threads can be active concurrently. Simulation results show that Processor Coupling performs well on both single-threaded and multi-threaded applications. The experiments address the effects of memory latencies, function unit latencies, and communication bandwidth between function units.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6807</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Program Recognition by Graph Parsing</title>
<link>https://hdl.handle.net/1721.1/6806</link>
<description>Automated Program Recognition by Graph Parsing
Wills, Linda M.
Recognizing standard computational  structures (cliches) in a program can help an  experienced programmer understand the  program. We develop a graph parsing  approach to automating program recognition  in which programs and cliches are  represented in an attributed graph grammar  formalism and recognition is achieved by  graph parsing. In studying this approach, we  evaluate our representation's ability to  suppress many common forms of variation  which hinder recognition. We investigate the  expressiveness of our graph grammar  formalism for capturing programming cliches.  We empirically and analytically study the  computational cost of our recognition  approach with respect to two medium-sized,  real-world simulator programs.
</description>
<pubDate>Wed, 01 Jul 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6806</guid>
<dc:date>1992-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Analysis of Multistage Interconnection Network Performance</title>
<link>https://hdl.handle.net/1721.1/6805</link>
<description>Probabilistic Analysis of Multistage Interconnection Network Performance
Sobalvarro, Patrick G.
We present methods of calculating the value of two performance parameters for multipath, multistage interconnection networks: the normalized throughput and the probability of successful message transmission. We develop a set of exact equations for the loading probability mass functions of network channels and a program for solving them exactly. We also develop a Monte Carlo method for approximate solution of the equations, and show that the resulting approximation method will always calculate the values of the performance parameters more quickly than direct simulation.
</description>
<pubDate>Wed, 01 Apr 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6805</guid>
<dc:date>1992-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Directional Selectivity in Vertebrate Retina: An Experimental and Computational Study</title>
<link>https://hdl.handle.net/1721.1/6804</link>
<description>On Directional Selectivity in Vertebrate Retina: An Experimental and Computational Study
Borg-Graham, Lyle J.
This thesis describes an investigation of  retinal directional selectivity. We show  intracellular (whole-cell patch) recordings in  turtle retina which indicate that this  computation occurs prior to the ganglion cell,  and we describe a pre-ganglionic circuit  model to account for this and other findings  which places the non-linear spatio-temporal  filter at individual, oriented amacrine cell  dendrites. The key non-linearity is provided by  interactions between excitatory and inhibitory  synaptic inputs onto the dendrites, and their  distal tips provide directionally selective  excitatory outputs onto ganglion cells.  Detailed simulations of putative cells support  this model, given reasonable parameter  constraints. The performance of the model  also suggests that this computational  substructure may be relevant within the  dendritic trees of CNS neurons in general.
</description>
<pubDate>Wed, 01 Jan 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6804</guid>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Modeling the Behavior of a Harmonic Drive Gear Transmission</title>
<link>https://hdl.handle.net/1721.1/6803</link>
<description>Understanding and Modeling the Behavior of a Harmonic Drive Gear Transmission
Tuttle, Timothy D.
In my research, I have performed an extensive  experimental investigation of harmonic-drive  properties such as stiffness, friction, and  kinematic error. From my experimental  results, I have found that these properties can  be sharply non-linear and highly dependent  on operating conditions. Due to the complex  interaction of these poorly behaved  transmission properties, dynamic response  measurements showed surprisingly agitated  behavior, especially around system  resonance. Theoretical models developed to  mimic the observed response illustrated that  non-linear frictional effects cannot be ignored  in any accurate harmonic-drive  representation. Additionally, if behavior  around system resonance must be replicated,  kinematic error and transmission compliance  as well as frictional dissipation from gear-tooth rubbing must all be incorporated into the  model.
</description>
<pubDate>Fri, 01 May 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6803</guid>
<dc:date>1992-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shaping Inputs to Reduce Vibration in Flexible Space Structures</title>
<link>https://hdl.handle.net/1721.1/6802</link>
<description>Shaping Inputs to Reduce Vibration in Flexible Space Structures
Chang, Kenneth W.
Future NASA plans to launch large space structures create a need for effective vibration control schemes that can solve the unique problems associated with unwanted residual vibration in flexible spacecraft. In this work, a unique method of input command shaping called impulse shaping is examined. A theoretical background is presented, along with some insight into the methods of calculating multiple-mode sequences. The Middeck Active Control Experiment (MACE) is then described as the testbed for hardware experiments. These results are shown, and some of the difficulties of dealing with nonlinearities are discussed. The paper concludes with observations about calculating and implementing impulse shaping in complex nonlinear systems.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6802</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equivalence and Reduction of Hidden Markov Models</title>
<link>https://hdl.handle.net/1721.1/6801</link>
<description>Equivalence and Reduction of Hidden Markov Models
Balasubramanian, Vijay
This report studies when and why two Hidden  Markov Models (HMMs) may represent the  same stochastic process. HMMs are  characterized in terms of equivalence classes  whose elements represent identical  stochastic processes. This characterization  yields polynomial time algorithms to detect  equivalent HMMs. We also find fast  algorithms to reduce HMMs to essentially  unique and minimal canonical  representations. The reduction to a canonical  form leads to the definition of 'Generalized  Markov Models' which are essentially HMMs  without the positivity constraint on their  parameters. We discuss how this  generalization can yield more parsimonious  representations of stochastic processes at  the cost of the probabilistic interpretation of  the model parameters.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6801</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimizing Residual Vibrations in Flexible Systems</title>
<link>https://hdl.handle.net/1721.1/6800</link>
<description>Minimizing Residual Vibrations in Flexible Systems
Rappole, B. Whitney, Jr.
Residual vibrations degrade the performance of many systems. Due to the lightweight and flexible nature of space structures, controlling residual vibrations is especially difficult. Also, systems such as the Space Shuttle Remote Manipulator System have frequencies that vary significantly based upon configuration and loading. Recently, a technique of minimizing vibrations in flexible structures by command input shaping was developed. This document presents research completed in developing a simple, closed-form method of calculating input shaping sequences for two-mode systems, and a system to adapt the command input shaping technique to known changes in system frequency about the workspace. The new techniques were tested on a three-link, flexible manipulator.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6800</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust and Efficient 3D Recognition by Alignment</title>
<link>https://hdl.handle.net/1721.1/6799</link>
<description>Robust and Efficient 3D Recognition by Alignment
Alter, Tao Daniel
Alignment is a prevalent approach for  recognizing 3D objects in 2D images. A major  problem with current implementations is how  to robustly handle errors that propagate from  uncertainties in the locations of image  features. This thesis gives a technique for  bounding these errors. The technique makes  use of a new solution to the problem of  recovering 3D pose from three matching point  pairs under weak-perspective projection.  Furthermore, the error bounds are used to  demonstrate that using line segments for  features instead of points significantly  reduces the false positive rate, to the extent  that alignment can remain reliable even in  cluttered scenes.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6799</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Parallelizing Compiler Based on Partial Evaluation</title>
<link>https://hdl.handle.net/1721.1/6798</link>
<description>A Parallelizing Compiler Based on Partial Evaluation
Surati, Rajeev
We constructed a parallelizing compiler that  utilizes partial evaluation to achieve efficient  parallel object code from very high-level data  independent source programs. On several  important scientific applications, the compiler  attains parallel performance equivalent to or  better than the best observed results from the  manual restructuring of code. This is the first  attempt to capitalize on partial evaluation's  ability to expose low-level parallelism. New  static scheduling techniques are used to  utilize the fine-grained parallelism of the  computations. The compiler maps the  computation graph resulting from partial  evaluation onto the Supercomputer Toolkit, an  eight VLIW processor parallel computer.
</description>
<pubDate>Thu, 01 Jul 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6798</guid>
<dc:date>1993-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Photo-topography by Fusing Shape-from-Shading and Stereo</title>
<link>https://hdl.handle.net/1721.1/6797</link>
<description>Robust Photo-topography by Fusing Shape-from-Shading and Stereo
Thompson, Clay Matthew
Methods for fusing two computer vision methods are discussed, and several example algorithms are presented to illustrate the variational method of fusing algorithms. The example algorithms seek to determine planet topography given two images taken from two different locations under two different lighting conditions. The algorithms each employ a single cost function that combines the computer vision methods of shape-from-shading and stereo in different ways. The algorithms are closely coupled and take into account all the constraints of the photo-topography problem. The algorithms are run on four synthetic test image sets of varying difficulty.
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6797</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognizing 3-D Objects Using 2-D Images</title>
<link>https://hdl.handle.net/1721.1/6796</link>
<description>Recognizing 3-D Objects Using 2-D Images
Jacobs, David W.
We discuss a strategy for visual recognition by forming groups of salient image features, and then using these groups to index into a database to find all of the matching groups of model features. We discuss the most space-efficient possible method of representing 3-D models for indexing from 2-D data, and show how to account for sensing error when indexing. We also present a convex grouping method that is robust and efficient, both theoretically and in practice. Finally, we combine these modules into a complete recognition system, and test its performance on many real images.
</description>
<pubDate>Thu, 01 Apr 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6796</guid>
<dc:date>1993-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Lifetime-based Garbage Collector for LISP Systems on General-Purpose Computers</title>
<link>https://hdl.handle.net/1721.1/6795</link>
<description>A Lifetime-based Garbage Collector for LISP Systems on General-Purpose Computers
Sobalvarro, Patrick
Garbage collector performance in LISP  systems on custom hardware has been  substantially improved by the adoption of  lifetime-based garbage collection techniques.  To date, however, successful lifetime-based  garbage collectors have required special-purpose hardware, or at least privileged  access to data structures maintained by the  virtual memory system. I present here a  lifetime-based garbage collector requiring no  special-purpose hardware or virtual memory  system support, and discuss its performance.
</description>
<pubDate>Mon, 01 Feb 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6795</guid>
<dc:date>1988-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Feature Extraction Without Edge Detection</title>
<link>https://hdl.handle.net/1721.1/6794</link>
<description>Feature Extraction Without Edge Detection
Chaney, Ronald D.
Information representation is a critical issue  in machine vision. The representation  strategy in the primitive stages of a vision  system has enormous implications for the  performance in subsequent stages. Existing  feature extraction paradigms, like edge  detection, provide sparse and unreliable  representations of the image information. In  this thesis, we propose a novel feature  extraction paradigm. The features consist of  salient, simple parts of regions bounded by  zero-crossings. The features are dense,  stable, and robust. The primary advantage of  the features is that they have abstract  geometric attributes pertaining to their size  and shape. To demonstrate the utility of the  feature extraction paradigm, we apply it to  passive navigation. We argue that the  paradigm is applicable to other early vision  problems.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6794</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Qualitative Modeling of Dynamic Physical Systems</title>
<link>https://hdl.handle.net/1721.1/6793</link>
<description>Automatic Qualitative Modeling of Dynamic Physical Systems
Amsterdam, Jonathan
This report describes MM, a computer program that can model a variety of mechanical and fluid systems. Given a system's structure and qualitative behavior, MM searches for models using an energy-based modeling framework. MM uses general facts about physical systems to relate behavioral and model properties. These facts enable a more focused search for models than would be obtained by mere comparison of desired and predicted behaviors. When these facts do not apply, MM uses behavior-constrained qualitative simulation to verify candidate models efficiently. MM can also design experiments to distinguish among multiple candidate models.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6793</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mid-Level Vision and Recognition of Non-Rigid Objects</title>
<link>https://hdl.handle.net/1721.1/6792</link>
<description>Mid-Level Vision and Recognition of Non-Rigid Objects
Subirana-Vilanova, J. Brian
We address mid-level vision for the recognition of non-rigid objects. We align model and image using frame curves, which are object or "figure/ground" skeletons. Frame curves are computed, without discontinuities, using Curved Inertia Frames, a provably global scheme implemented on the Connection Machine, based on: non-Cartesian networks; a definition of curved axis of inertia; and a ridge detector. I present evidence against frame alignment in human perception. This suggests that frame curves have a role in figure/ground segregation and in fuzzy boundaries; that their outside/near/top/incoming regions are more salient; and that perception begins by setting a reference frame (prior to early vision) and proceeds by processing convex structures.
</description>
<pubDate>Sat, 01 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6792</guid>
<dc:date>1995-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators</title>
<link>https://hdl.handle.net/1721.1/6791</link>
<description>Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators
Ferrell, Cynthia
This thesis presents methods for implementing robust hexapod locomotion on an autonomous robot with many sensors and actuators. The controller is based on the Subsumption Architecture and is fully distributed over approximately 1500 simple, concurrent processes. The robot, Hannibal, weighs approximately 6 pounds and is equipped with over 100 physical sensors, 19 degrees of freedom, and 8 on-board computers. We investigate the following topics in depth: distributed control of a complex robot, insect-inspired locomotion control for gait generation and rough terrain mobility, and fault tolerance. The controller was implemented, debugged, and tested on Hannibal. Through a series of experiments, we examined Hannibal's gait generation, rough terrain locomotion, and fault tolerance performance. These results demonstrate that Hannibal exhibits robust, flexible, real-time locomotion over a variety of terrain and tolerates a multitude of hardware failures.
</description>
<pubDate>Sat, 01 May 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6791</guid>
<dc:date>1993-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust, High-Speed Network Design for Large-Scale Multiprocessing</title>
<link>https://hdl.handle.net/1721.1/6790</link>
<description>Robust, High-Speed Network Design for Large-Scale Multiprocessing
DeHon, Andre
As multiprocessor system size scales  upward, two important aspects of  multiprocessor systems will generally get  worse rather than better: (1) interprocessor  communication latency will increase and (2)  the probability that some component in the  system will fail will increase. These problems  can prevent us from realizing the potential  benefits of large-scale multiprocessing. In  this report we consider the problem of  designing networks which simultaneously  minimize communication latency while  maximizing fault tolerance. Using a synergy of  techniques including connection topologies,  routing protocols, signalling techniques, and  packaging technologies we assemble  integrated, system-level solutions to this  network design problem.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6790</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesizing Regularity Exposing Attributes in Large Protein Databases</title>
<link>https://hdl.handle.net/1721.1/6789</link>
<description>Synthesizing Regularity Exposing Attributes in Large Protein Databases
de la Maza, Michael
This thesis describes a system that  synthesizes regularity exposing attributes  from large protein databases. After  processing primary and secondary structure  data, this system discovers an amino acid  representation that captures what are thought  to be the three most important amino acid  characteristics (size, charge, and  hydrophobicity) for tertiary structure prediction.  A neural network trained using this 16 bit  representation achieves a performance  accuracy on the secondary structure  prediction problem that is comparable to the  one achieved by a neural network trained  using the standard 24 bit amino acid  representation. In addition, the thesis  describes bounds on secondary structure  prediction accuracy, derived using an optimal  learning algorithm and the probably  approximately correct (PAC) model.
</description>
<pubDate>Sat, 01 May 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6789</guid>
<dc:date>1993-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>AMAR: A Computational Model of Autosegmental Phonology</title>
<link>https://hdl.handle.net/1721.1/6788</link>
<description>AMAR: A Computational Model of Autosegmental Phonology
Albro, Daniel M.
This report describes a computational system  with which phonologists may describe a  natural language in terms of autosegmental  phonology, currently the most advanced theory  pertaining to the sound systems of human  languages. This system allows linguists to  easily test autosegmental hypotheses against  a large corpus of data. The system was  designed primarily with tonal systems in  mind, but also provides support for tree or  feature matrix representation of phonemes  (as in The Sound Pattern of English), as well  as syllable structures and other aspects of  phonological theory. Underspecification is  allowed, and trees may be specified before,  during, and after rule application. The  association convention is automatically  applied, and other principles such as the  conjunctivity condition are supported. The  method of representation was designed such  that rules are designated in as close a  fashion as possible to the existing  conventions of autosegmental theory while  adhering to a textual constraint for maximum  portability.
</description>
<pubDate>Fri, 01 Oct 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6788</guid>
<dc:date>1993-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emacs Lisp in Edwin Scheme</title>
<link>https://hdl.handle.net/1721.1/6787</link>
<description>Emacs Lisp in Edwin Scheme
Birkholz, Matthew
The MIT-Scheme program development environment includes a general-purpose text editor, Edwin, that has an extension language, Edwin Scheme. Edwin is very similar to another general-purpose text editor, GNU Emacs, which also has an extension language, Emacs Lisp. The popularity of GNU Emacs has led to a large library of tools written in Emacs Lisp. The goal of this thesis is to implement a useful subset of Emacs Lisp in Edwin Scheme. This subset was chosen to be sufficient for simple operation of the GNUS news reading program.
</description>
<pubDate>Wed, 01 Sep 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6787</guid>
<dc:date>1993-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Chemical Mechanisms in Neural Computation and Learning</title>
<link>https://hdl.handle.net/1721.1/6786</link>
<description>The Role of Chemical Mechanisms in Neural Computation and Learning
Hiller, Martha J.
Most computational models of neurons  assume that their electrical characteristics are  of paramount importance. However, all long-term changes in synaptic efficacy, as well as  many short-term effects, are mediated by  chemical mechanisms. This technical report  explores the interaction between electrical  and chemical mechanisms in neural learning  and development. Two neural systems that  exemplify this interaction are described and  modelled. The first is the mechanisms  underlying habituation, sensitization, and  associative learning in the gill withdrawal  reflex circuit in Aplysia, a marine snail. The  second is the formation of retinotopic  projections in the early visual pathway during  embryonic development.
</description>
<pubDate>Tue, 23 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6786</guid>
<dc:date>1995-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Parallelizing Search Paths in Phrasing</title>
<link>https://hdl.handle.net/1721.1/6785</link>
<description>Methods for Parallelizing Search Paths in Phrasing
Marcken, Carl de
Many search problems are commonly solved  with combinatoric algorithms that  unnecessarily duplicate and serialize work at  considerable computational expense. There  are techniques available that can eliminate  redundant computations and perform  remaining operations concurrently, effectively  reducing the branching factors of these  algorithms. This thesis applies these  techniques to the problem of parsing natural  language. The result is an efficient  programming language that can reduce some  of the expense associated with principle-based parsing and other search problems.  The language is used to implement various  natural language parsers, and the  improvements are compared to those that  result from implementing more deterministic  theories of language processing.
</description>
<pubDate>Sat, 01 Jan 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6785</guid>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Naive Physics, Event Perception, Lexical Semantics, and Language Acquisition</title>
<link>https://hdl.handle.net/1721.1/6784</link>
<description>Naive Physics, Event Perception, Lexical Semantics, and Language Acquisition
Siskind, Jeffrey M.
This thesis proposes a computational model  of how children may come to learn the  meanings of words in their native language.  The proposed model is divided into two  separate components. One component  produces semantic descriptions of visually  observed events while the other correlates  those descriptions with co-occurring  descriptions of those events in natural  language. The first part of this thesis  describes three implementations of the  correlation process whereby representations  of the meanings of whole utterances can be  decomposed into fragments assigned as  representations of the meanings of individual  words. The second part of this thesis  describes an implemented computer  program that recognizes the occurrence of  simple spatial motion events in simulated  video input.
</description>
<pubDate>Thu, 01 Apr 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6784</guid>
<dc:date>1993-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Radial Basis Function Approach to Financial Time Series Analysis</title>
<link>https://hdl.handle.net/1721.1/6783</link>
<description>A Radial Basis Function Approach to Financial Time Series Analysis
Hutchinson, James M.
Nonlinear multivariate statistical techniques on fast computers offer the potential to capture more of the dynamics of the high dimensional, noisy systems underlying financial markets than traditional models, while making fewer restrictive assumptions. This thesis presents a collection of practical techniques to address important estimation and confidence issues for Radial Basis Function networks arising from such a data driven approach, including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data mining" problem. Novel applications in the finance area are described, including customized, adaptive option pricing and stock price prediction.
</description>
<pubDate>Wed, 01 Dec 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6783</guid>
<dc:date>1993-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Logging and Recovery in a Highly Concurrent Database</title>
<link>https://hdl.handle.net/1721.1/6782</link>
<description>Logging and Recovery in a Highly Concurrent Database
Keen, John S.
This report addresses the problem of fault tolerance to system failures for database systems that are to run on highly concurrent computers. It assumes that, in general, an application may have a wide distribution in the lifetimes of its transactions. Logging remains the method of choice for ensuring fault tolerance. Generational garbage collection techniques manage the limited disk space reserved for log information; this technique does not require periodic checkpoints and is well suited for applications with a broad range of transaction lifetimes. An arbitrarily large collection of parallel log streams provides the necessary disk bandwidth.
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6782</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>SodaBot: A Software Agent Environment and Construction System</title>
<link>https://hdl.handle.net/1721.1/6781</link>
<description>SodaBot: A Software Agent Environment and Construction System
Coen, Michael H.
This thesis presents SodaBot, a general-purpose software agent user-environment and construction system. Its primary component is the basic software agent --- a computational framework for building agents which is essentially an agent operating system. We also present a new language for programming the basic software agent whose primitives are designed around human-level descriptions of agent activity. Via this programming language, users can easily implement a wide range of typical software agent applications, e.g. personal on-line assistants and meeting scheduling agents. The SodaBot system has been implemented and tested, and its description comprises the bulk of this thesis.
</description>
<pubDate>Wed, 02 Nov 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6781</guid>
<dc:date>1994-11-02T00:00:00Z</dc:date>
</item>
<item>
<title>The Named-State Register File</title>
<link>https://hdl.handle.net/1721.1/6780</link>
<description>The Named-State Register File
Nuth, Peter R.
This thesis introduces the Named-State  Register File, a fine-grain, fully-associative  register file. The NSF allows fast context  switching between concurrent threads as well  as efficient sequential program performance.  The NSF holds more live data than  conventional register files, and requires less  spill and reload traffic to switch between  contexts. This thesis demonstrates an  implementation of the Named-State Register  File and estimates the access time and chip  area required for different organizations.  Architectural simulations of large sequential  and parallel applications show that the NSF  can reduce execution time by 9% to 17%  compared to alternative register files.
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6780</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specialization of Perceptual Processes</title>
<link>https://hdl.handle.net/1721.1/6779</link>
<description>Specialization of Perceptual Processes
Horswill, Ian
In this report, I discuss the use of vision to support concrete, everyday activity. I will argue that a variety of interesting tasks can be solved using simple and inexpensive vision systems. I will provide a number of working examples in the form of a state-of-the-art mobile robot, Polly, which uses vision to give primitive tours of the seventh floor of the MIT AI Laboratory. By current standards, the robot has a broad behavioral repertoire and is both simple and inexpensive (the complete robot was built for less than $20,000 using commercial board-level components). The approach I will use will be to treat the structure of the agent's activity---its task and environment---as positive resources for the vision system designer. By performing a careful analysis of task and environment, the designer can determine a broad space of mechanisms which can perform the desired activity. My principal thesis is that for a broad range of activities, the space of applicable mechanisms will be broad enough to include a number of mechanisms which are simple and economical. The simplest mechanisms that solve a given problem will typically be quite specialized to that problem. One thus worries that building simple vision systems will require a great deal of ad hoc engineering that cannot be transferred to other problems. My second thesis is that specialized systems can be analyzed and understood in a principled manner, one that allows general lessons to be extracted from specialized systems. I will present a general approach to analyzing specialization through the use of transformations that provably improve performance. By demonstrating a sequence of transformations that derive a specialized system from a more general one, we can summarize the specialization of the former in a compact form that makes explicit the additional assumptions that it makes about its environment.
The summary can be used  to predict the performance of the system in  novel environments. Individual  transformations can be recycled in the design  of future systems.
</description>
<pubDate>Sat, 22 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6779</guid>
<dc:date>1995-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Computing 3-D Motion in Custom Analog and Digital VLSI</title>
<link>https://hdl.handle.net/1721.1/6778</link>
<description>Computing 3-D Motion in Custom Analog and Digital VLSI
Dron, Lisa
This thesis examines a complete design framework for a real-time, autonomous system with specialized VLSI hardware for computing 3-D camera motion. In the proposed architecture, the first step is to determine point correspondences between two images. Two processors, a CCD array edge detector and a mixed analog/digital binary block correlator, are proposed for this task. The report is divided into three parts. Part I covers the algorithmic analysis; part II describes the design and test of a 32x32 CCD edge detector fabricated through MOSIS; and part III compares the design of the mixed analog/digital correlator to a fully digital implementation.
</description>
<pubDate>Mon, 28 Nov 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6778</guid>
<dc:date>1994-11-28T00:00:00Z</dc:date>
</item>
<item>
<title>Learning World Models in Environments with Manifest Causal Structure</title>
<link>https://hdl.handle.net/1721.1/6777</link>
<description>Learning World Models in Environments with Manifest Causal Structure
Bergman, Ruth
This thesis examines the problem of an  autonomous agent learning a causal world  model of its environment. Previous  approaches to learning causal world models  have concentrated on environments that are  too "easy" (deterministic finite state  machines) or too "hard" (containing much  hidden state). We describe a new domain --- environments with manifest causal structure --- for learning. In such environments the  agent has an abundance of perceptions of its  environment. Specifically, it perceives almost  all the relevant information it needs to  understand the environment. Many  environments of interest have manifest causal  structure and we show that an agent can learn  the manifest aspects of these environments  quickly using straightforward learning  techniques. We present a new algorithm to  learn a rule-based causal world model from  observations in the environment. The  learning algorithm includes (1) a low level  rule-learning algorithm that converges on a  good set of specific rules, (2) a concept  learning algorithm that learns concepts by  finding completely correlated perceptions, and  (3) an algorithm that learns general rules. In  addition this thesis examines the problem of  finding a good expert from a sequence of  experts. Each expert has an "error rate"; we  wish to find an expert with a low error rate.  However, each expert's error rate and the  distribution of error rates are unknown. A new  expert-finding algorithm is presented and an  upper bound on the expected error rate of the  expert is derived.
</description>
<pubDate>Fri, 05 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6777</guid>
<dc:date>1995-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Series Elastic Actuators</title>
<link>https://hdl.handle.net/1721.1/6776</link>
<description>Series Elastic Actuators
Williamson, Matthew M.
This thesis presents the design, construction, control and evaluation of a novel force controlled actuator. Traditional force controlled actuators are designed from the premise that "Stiffer is better". This approach gives a high bandwidth system, prone to problems of contact instability, noise, and low power density. The actuator presented in this thesis is designed from the premise that "Stiffness isn't everything". The actuator, which incorporates a series elastic element, trades off achievable bandwidth for gains in stable, low noise force control, and protection against shock loads. This thesis reviews related work in robot force control, presents theoretical descriptions of the control and expected performance from a series elastic actuator, and describes the design of a test actuator constructed to gather performance data. Finally, the performance of the system is evaluated by comparing the performance data to theoretical predictions.
</description>
<pubDate>Thu, 07 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6776</guid>
<dc:date>1995-09-07T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Fixation and Visual Attention in Object Recognition</title>
<link>https://hdl.handle.net/1721.1/6775</link>
<description>The Role of Fixation and Visual Attention in Object Recognition
Ratan, Aparna Lakshmi
This research project is a study of the role of  fixation and visual attention in object  recognition. In this project, we build an active  vision system which can recognize a target  object in a cluttered scene efficiently and  reliably. Our system integrates visual cues  like color and stereo to perform figure/ground  separation, yielding candidate regions on  which to focus attention. Within each image  region, we use stereo to extract features that  lie within a narrow disparity range about the  fixation position. These selected features are  then used as input to an alignment-style  recognition system. We show that visual  attention and fixation significantly reduce the  complexity and the false identifications in  model-based recognition using Alignment  methods. We also demonstrate that stereo  can be used effectively as a figure/ground  separator without the need for accurate  camera calibration.
</description>
<pubDate>Fri, 21 Jul 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6775</guid>
<dc:date>1995-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and Example Selection for Object and Pattern Detection</title>
<link>https://hdl.handle.net/1721.1/6774</link>
<description>Learning and Example Selection for Object and Pattern Detection
Sung, Kah-Kay
This thesis presents a learning based approach for detecting classes of objects and patterns with variable image appearance but highly predictable image boundaries. It consists of two parts. In part one, we introduce our object and pattern detection approach using a concrete human face detection example. The approach first builds a distribution-based model of the target pattern class in an appropriate feature space to describe the target's variable image appearance. It then learns from examples a similarity measure for matching new patterns against the distribution-based target model. The approach makes few assumptions about the target pattern class and should therefore be fairly general, as long as the target class has predictable image boundaries. Because our object and pattern detection approach is very much learning-based, how well a system eventually performs depends heavily on the quality of training examples it receives. The second part of this thesis looks at how one can select high quality examples for function approximation learning tasks. We propose an "active learning" formulation for function approximation, and show for three specific approximation function classes, that the active example selection strategy learns its target with fewer data samples than random sampling. We then simplify the original active learning formulation, and show how it leads to a tractable example selection paradigm, suitable for use in many object and pattern detection problems.
</description>
<pubDate>Wed, 13 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6774</guid>
<dc:date>1996-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>SketchIT: A Sketch Interpretation Tool for Conceptual Mechanical Design</title>
<link>https://hdl.handle.net/1721.1/6773</link>
<description>SketchIT: A Sketch Interpretation Tool for Conceptual Mechanical Design
Stahovich, Thomas F.
We describe a program called SketchIT  capable of producing multiple families of  designs from a single sketch. The program is  given a rough sketch (drawn using line  segments for part faces and icons for springs  and kinematic joints) and a description of the  desired behavior. The sketch is "rough" in the  sense that taken literally, it may not work.  From this single, perhaps flawed sketch and  the behavior description, the program  produces an entire family of working designs.  The program also produces design variants,  each of which is itself a family of designs.  SketchIT represents each family of designs  with a "behavior ensuring parametric model"  (BEP-Model), a parametric model augmented  with a set of constraints that ensure the  geometry provides the desired behavior. The  construction of the BEP-Model from the sketch  and behavior description is the primary task  and source of difficulty in this undertaking.  SketchIT begins by abstracting the sketch to  produce a qualitative configuration space (qc-space) which it then uses as its primary  representation of behavior. SketchIT modifies  this initial qc-space until qualitative simulation  verifies that it produces the desired behavior.   SketchIT's task is then to find geometries that  implement this qc-space. It does this using a  library of qc-space fragments. Each fragment  is a piece of parametric geometry with a set of  constraints that ensure the geometry  implements a specific kind of boundary (qcs-curve) in qc-space. SketchIT assembles the  fragments to produce the BEP-Model.  SketchIT produces design variants by  mapping the qc-space to multiple  implementations, and by transforming rotating  parts to translating parts and vice versa.
</description>
<pubDate>Wed, 13 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6773</guid>
<dc:date>1996-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>Pose-Invariant Face Recognition Using Real and Virtual Views</title>
<link>https://hdl.handle.net/1721.1/6772</link>
<description>Pose-Invariant Face Recognition Using Real and Virtual Views
Beymer, David
The problem of automatic face recognition is  to visually identify a person in an input image.  This task is performed by matching the input  face against the faces of known people in a  database of faces. Most existing work in face  recognition has limited the scope of the  problem, however, by dealing primarily with  frontal views, neutral expressions, and fixed  lighting conditions. To help generalize  existing face recognition systems, we look at  the problem of recognizing faces under a  range of viewpoints. In particular, we consider  two cases of this problem: (i) many example  views are available of each person, and (ii)  only one view is available per person,  perhaps a driver's license or passport  photograph. Ideally, we would like to address  these two cases using a simple view-based  approach, where a person is represented in  the database by using a number of views on  the viewing sphere. While the view-based  approach is consistent with case (i), for case  (ii) we need to augment the single real view of  each person with synthetic views from other  viewpoints, views we call 'virtual views'. Virtual  views are generated using prior knowledge of  face rotation, knowledge that is 'learned' from  images of prototype faces. This prior  knowledge is used to effectively rotate in  depth the single real view available of each  person. In this thesis, I present the view-based face recognizer, techniques for  synthesizing virtual views, and experimental  results using real and virtual views in the  recognizer.
</description>
<pubDate>Thu, 28 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6772</guid>
<dc:date>1996-03-28T00:00:00Z</dc:date>
</item>
<item>
<title>An Analog VLSI Chip for Estimating the Focus of Expansion</title>
<link>https://hdl.handle.net/1721.1/6771</link>
<description>An Analog VLSI Chip for Estimating the Focus of Expansion
McQuirk, Ignacio Sean
For applications involving the control of moving vehicles, the recovery of relative motion between a camera and its environment is of high utility. This thesis describes the design and testing of a real-time analog VLSI chip which estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This location is the point towards which the camera is moving and from which all other points appear to expand outward. By way of the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squares of the differences at every pixel between the observed time variation of brightness and the predicted variation given the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role. Ideally, the FOE would be at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately given that the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and the point is found that minimizes the sum of squares of the perpendicular distances from the tangents at the stationary points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme.
A 64x64 version was fabricated in a 2 µm CCD/BiCMOS process through MOSIS with a design goal of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system which estimates the FOE in real time using real motion and image scenes is demonstrated.
</description>
<pubDate>Wed, 21 Aug 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6771</guid>
<dc:date>1996-08-21T00:00:00Z</dc:date>
</item>
<item>
<title>Internet Fish</title>
<link>https://hdl.handle.net/1721.1/6770</link>
<description>Internet Fish
LaMacchia, Brian A.
I have invented "Internet Fish," a novel class of resource-discovery tools designed to help users extract useful information from the Internet. Internet Fish (IFish) are semi-autonomous, persistent information brokers; users deploy individual IFish to gather and refine information related to a particular topic. An IFish will initiate research, continue to discover new sources of information, and keep tabs on new developments in that topic. As part of the information-gathering process the user interacts with his IFish to find out what it has learned, answer questions it has posed, and make suggestions for guidance. Internet Fish differ from other Internet resource discovery systems in that they are persistent, personal and dynamic. As part of the information-gathering process IFish conduct extended, long-term conversations with users as they explore. They incorporate deep structural knowledge of the organization and services of the net, and are also capable of on-the-fly reconfiguration, modification and expansion. Human users may dynamically change the IFish in response to changes in the environment, or an IFish may initiate such changes itself. IFish maintain internal state, including models of their own structure, behavior, information environment and user; these models permit an IFish to perform meta-level reasoning about its own structure. To facilitate rapid assembly of particular IFish I have created the Internet Fish Construction Kit. This system provides enabling technology for the entire class of Internet Fish tools; it facilitates both creation of new IFish as well as additions of new capabilities to existing ones. The Construction Kit includes a collection of encapsulated heuristic knowledge modules that may be combined in mix-and-match fashion to create a particular IFish; interfaces to new services written with the Construction Kit may be immediately added to "live" IFish.
Using the Construction  Kit I have created a demonstration IFish  specialized for finding World-Wide Web  documents related to a given group of  documents. This "Finder" IFish includes  heuristics that describe how to interact with  the Web in general, explain how to take  advantage of various public indexes and  classification schemes, and provide a method  for discovering similarity relationships among  documents.
</description>
<pubDate>Thu, 01 Aug 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6770</guid>
<dc:date>1996-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Proceedings of the First PHANToM User's Group Workshop</title>
<link>https://hdl.handle.net/1721.1/6769</link>
<description>The Proceedings of the First PHANToM User's Group Workshop
Salisbury, J. Kenneth; Srinivasan, Mandayam A.
These proceedings summarize the results of the First PHANToM User's Group Workshop, held September 27-30, 1996, at MIT. The goal of the workshop was to bring together a group of active users of the PHANToM Haptic Interface to discuss the scientific and engineering challenges involved in bringing haptics into widespread use, and to explore the future possibilities of this exciting technology. With over 50 attendees and 25 presentations, the workshop provided the first large forum for users of a common haptic interface to share results and engage in collaborative discussions. Short papers from the presenters are contained herein and address the following topics: Research Effort Overviews, Displays and Effects, Applications in Teleoperation and Training, Tools for Simulated Worlds, and Data Visualization.
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6769</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Intelligent Structures: Active Control of Buckling</title>
<link>https://hdl.handle.net/1721.1/6768</link>
<description>Towards Intelligent Structures: Active Control of Buckling
Berlin, Andrew A.
The buckling of compressively-loaded members is one of the most important factors limiting the overall strength and stability of a structure. I have developed novel techniques for using active control to wiggle a structural element in such a way that buckling is prevented. I present the results of analysis, simulation, and experimentation to show that buckling can be prevented through computer-controlled adjustment of dynamical behavior. I have constructed a small-scale railroad-style truss bridge that contains compressive members that actively resist buckling through the use of piezo-electric actuators. I have also constructed a prototype actively controlled column in which the control forces are applied by tendons, as well as a composite steel column that incorporates piezo-ceramic actuators that are used to counteract buckling. Active control of buckling allows this composite column to support 5.6 times more load than would otherwise be possible. These techniques promise to lead to intelligent physical structures that are both stronger and lighter than would otherwise be possible.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6768</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proceedings of the Second PHANToM User's Group Workshop</title>
<link>https://hdl.handle.net/1721.1/6767</link>
<description>Proceedings of the Second PHANToM User's Group Workshop
Salisbury, J. Kenneth; Srinivasan, Mandayam A.
On October 19-22, 1997 the Second PHANToM Users Group Workshop was held at the MIT Endicott House in Dedham, Massachusetts. Designed as a forum for sharing results and insights, the workshop was attended by more than 60 participants from 7 countries. These proceedings report on workshop presentations in diverse areas including rigid and compliant rendering, tool kits, development environments, techniques for scientific data visualization, multi-modal issues and a programming tutorial.
</description>
<pubDate>Mon, 01 Dec 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6767</guid>
<dc:date>1997-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatically Recovering Geometry and Texture from Large Sets of Calibrated Images</title>
<link>https://hdl.handle.net/1721.1/6766</link>
<description>Automatically Recovering Geometry and Texture from Large Sets of Calibrated Images
Mellor, J.P.
Three-dimensional models which contain both geometry and texture have numerous applications such as urban planning, physical simulation, and virtual environments. A major focus of computer vision (and recently graphics) research is the automatic recovery of three-dimensional models from two-dimensional images. After many years of research this goal is yet to be achieved. Most practical modeling systems require substantial human input and, unlike automatic systems, are not scalable. This thesis presents a novel method for automatically recovering dense surface patches using large sets (1000's) of calibrated images taken from arbitrary positions within the scene. Physical instruments, such as the Global Positioning System (GPS), inertial sensors, and inclinometers, are used to estimate the position and orientation of each image. Essentially, the problem is to find corresponding points in each of the images. Once a correspondence has been established, calculating its three-dimensional position is simply a matter of geometry. Long baseline images improve the accuracy. Short baseline images and the large number of images greatly simplify the correspondence problem. The initial stage of the algorithm is completely local and scales linearly with the number of images. Subsequent stages are global in nature, exploit geometric constraints, and scale quadratically with the complexity of the underlying scene. We describe techniques for: 1) detecting and localizing surface patches; 2) refining camera calibration estimates and rejecting false positive surfels; and 3) grouping surface patches into surfaces and growing the surface along a two-dimensional manifold. We also discuss a method for producing high quality, textured three-dimensional models from these surfaces.
Some of the most important characteristics of this approach are that it: 1) uses and refines noisy calibration estimates; 2) compensates for large variations in illumination; 3) tolerates significant soft occlusion (e.g. tree branches); and 4) associates, at a fundamental level, an estimated normal (i.e. no frontal-planar assumption) and texture with each surface patch.
</description>
<pubDate>Fri, 22 Oct 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6766</guid>
<dc:date>1999-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Proceedings of the Fourth PHANTOM Users Group Workshop</title>
<link>https://hdl.handle.net/1721.1/6765</link>
<description>Proceedings of the Fourth PHANTOM Users Group Workshop
Salisbury, J. Kenneth; Srinivasan, Mandayam A.
This report contains the proceedings of the Fourth PHANTOM Users Group Workshop: 17 papers presented October 9-12, 1999, at MIT Endicott House in Dedham, Massachusetts. The workshop included sessions on Tools for Programmers, Dynamic Environments, Perception and Cognition, Haptic Connections, Collision Detection / Collision Response, Medical and Seismic Applications, and Haptics Going Mainstream. The proceedings include papers that cover a variety of subjects in computer haptics including rendering, contact determination, development libraries, and applications in medicine, path planning, data interaction and training.
</description>
<pubDate>Thu, 04 Nov 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6765</guid>
<dc:date>1999-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>A Constant-Factor Approximation Algorithm for Embedding Unweighted Graphs into Trees</title>
<link>https://hdl.handle.net/1721.1/6742</link>
<description>A Constant-Factor Approximation Algorithm for Embedding Unweighted Graphs into Trees
Badoiu, Mihai; Indyk, Piotr; Sidiropoulos, Anastasios
We present a constant-factor approximation algorithm for computing an embedding of the shortest path metric of an unweighted graph into a tree that minimizes the multiplicative distortion.
</description>
<pubDate>Mon, 05 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6742</guid>
<dc:date>2004-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Approximations of the Frequency Moments</title>
<link>https://hdl.handle.net/1721.1/6741</link>
<description>Optimal Approximations of the Frequency Moments
Indyk, Piotr; Woodruff, David
We give a one-pass, O~(m^{1-2/k})-space algorithm for estimating the k-th frequency moment of a data stream for any real k&gt;2. Together with known lower bounds, this resolves the main problem left open by Alon, Matias, Szegedy, STOC'96. Our algorithm enables deletions as well as insertions of stream elements.
</description>
<pubDate>Fri, 02 Jul 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6741</guid>
<dc:date>2004-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Contextual models for object detection using boosted random fields</title>
<link>https://hdl.handle.net/1721.1/6740</link>
<description>Contextual models for object detection using boosted random fields
Torralba, Antonio; Murphy, Kevin P.; Freeman, William T.
We seek to both detect and segment objects in images. To exploit both local image data and contextual information, we introduce Boosted Random Fields (BRFs), which use boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in terms of accuracy and speed, by using a computational cascade. We apply our system to detect stuff and things in office and street scenes.
</description>
<pubDate>Fri, 25 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6740</guid>
<dc:date>2004-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>How People Re-find Information When the Web Changes</title>
<link>https://hdl.handle.net/1721.1/6739</link>
<description>How People Re-find Information When the Web Changes
Teevan, Jaime
This paper investigates how people return to information in a dynamic information environment. For example, a person might want to return to Web content via a link encountered earlier on a Web page, only to learn that the link has since been removed. Changes can benefit users by providing new information, but they hinder returning to previously viewed information. The observational study presented here analyzed instances, collected via a Web search, where people expressed difficulty re-finding information because of changes to the information or its environment. A number of interesting observations arose from this analysis, including that the path originally taken to get to the information target appeared important in its re-retrieval, whereas, surprisingly, the temporal aspects of when the information was seen before were not. While people expressed frustration when problems arose, an explanation of why the change had occurred was often sufficient to allay that frustration, even in the absence of a solution. The implications of these observations for systems that support re-finding in dynamic environments are discussed.
</description>
<pubDate>Fri, 18 Jun 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6739</guid>
<dc:date>2004-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>A Unified Statistical and Information Theoretic Framework for Multi-modal Image Registration</title>
<link>https://hdl.handle.net/1721.1/6738</link>
<description>A Unified Statistical and Information Theoretic Framework for Multi-modal Image Registration
Zollei, Lilla; Fisher, John; Wells, William
We formulate and interpret several multi-modal registration methods in the context of a unified statistical and information theoretic framework.  A unified interpretation clarifies the implicit assumptions of each method yielding a better understanding of their relative strengths and weaknesses. Additionally, we discuss a generative statistical model from which we derive a novel analysis tool, the "auto-information function", as a means of assessing and exploiting the common spatial dependencies inherent in multi-modal imagery. We analytically derive useful properties of the "auto-information" as well as verify them empirically on multi-modal imagery. Among the useful aspects of the "auto-information function" is that it can be computed from imaging modalities independently and it allows one to decompose the search space of registration problems.
</description>
<pubDate>Wed, 28 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6738</guid>
<dc:date>2004-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Contextual Influences on Saliency</title>
<link>https://hdl.handle.net/1721.1/6737</link>
<description>Contextual Influences on Saliency
Torralba, Antonio
This article describes a model for including scene/context priors in attention guidance. In the proposed scheme, visual context information can be available early in the visual processing chain, in order to modulate the saliency of image regions and to provide an efficient short cut for object detection and recognition. The scene is represented by means of a low-dimensional global description obtained from low-level features. The global scene features are then used to predict the probability of presence of the target object in the scene, and its location and scale, before exploring the image. Scene information can then be used to modulate the saliency of image regions early during the visual processing in order to provide an efficient short cut for object detection and recognition.
</description>
<pubDate>Wed, 14 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6737</guid>
<dc:date>2004-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Sharing visual features for multiclass and multiview object detection</title>
<link>https://hdl.handle.net/1721.1/6736</link>
<description>Sharing visual features for multiclass and multiview object detection
Torralba, Antonio; Murphy, Kevin P.; Freeman, William T.
We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. The features selected jointly tend to be edges and generic features typical of many natural structures, rather than specific object parts. Such generic features generalize better and considerably reduce the computational cost of multi-class object detection.
</description>
<pubDate>Wed, 14 Apr 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6736</guid>
<dc:date>2004-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Visual Hulls: Example-Based 3D Shape Estimation from a Single Silhouette</title>
<link>https://hdl.handle.net/1721.1/6735</link>
<description>Virtual Visual Hulls: Example-Based 3D Shape Estimation from a Single Silhouette
Grauman, Kristen; Shakhnarovich, Gregory; Darrell, Trevor
Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard, and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a "virtual visual hull", an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer a novel single-view input silhouette's virtual visual hull, we search for 3D shapes in the database which are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
</description>
<pubDate>Wed, 28 Jan 2004 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6735</guid>
<dc:date>2004-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>RamboNodes for the Metropolitan Ad Hoc Network</title>
<link>https://hdl.handle.net/1721.1/6734</link>
<description>RamboNodes for the Metropolitan Ad Hoc Network
Beal, Jacob; Gilbert, Seth
We present an algorithm to store data robustly in a large, geographically distributed network by means of localized regions of data storage that move in response to changing conditions. For example, data might migrate away from failures or toward regions of high demand. The PersistentNode algorithm provides this service robustly, but with limited safety guarantees. We use the RAMBO framework to transform PersistentNode into RamboNode, an algorithm that guarantees atomic consistency in exchange for increased cost and decreased liveness. In addition, a half-life analysis of RamboNode shows that it is robust against continuous low-rate failures. Finally, we provide experimental simulations for the algorithm on 2000 nodes, demonstrating how it services requests and examining how it responds to failures.
</description>
<pubDate>Wed, 17 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6734</guid>
<dc:date>2003-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Contour Matching Using Approximate Earth Mover's Distance</title>
<link>https://hdl.handle.net/1721.1/6733</link>
<description>Fast Contour Matching Using Approximate Earth Mover's Distance
Grauman, Kristen; Darrell, Trevor
Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost of matching features from one shape to the features of the other often reveals how similar the two shapes are. However, due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases. We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the Earth Mover's Distance (EMD) into a normed space. Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search. We demonstrate our shape matching method on databases of 10,000 images of human figures and 60,000 images of handwritten digits.
</description>
<pubDate>Fri, 05 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6733</guid>
<dc:date>2003-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Mobilized ad-hoc networks: A reinforcement learning approach</title>
<link>https://hdl.handle.net/1721.1/6732</link>
<description>Mobilized ad-hoc networks: A reinforcement learning approach
Chang, Yu-Han; Ho, Tracey; Kaelbling, Leslie Pack
Research in mobile ad-hoc networks has focused on situations in which nodes have no control over their movements. We investigate an important but overlooked domain in which nodes do have control over their movements. Reinforcement learning methods can be used to control both packet routing decisions and node mobility, dramatically improving the connectivity of the network. We first motivate the problem by presenting theoretical bounds for the connectivity improvement of partially mobile networks and then present superior empirical results under a variety of different scenarios in which the mobile nodes in our ad-hoc network are embedded with adaptive routing policies and learned movement policies.
</description>
<pubDate>Thu, 04 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6732</guid>
<dc:date>2003-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Evolving Robocode Tank Fighters</title>
<link>https://hdl.handle.net/1721.1/6731</link>
<description>Evolving Robocode Tank Fighters
Eisenstein, Jacob
In this paper, I describe the application of genetic programming to evolve a controller for a robotic tank in a simulated environment. The purpose is to explore how genetic techniques can best be applied to produce controllers based on subsumption and behavior-oriented languages such as REX. As part of my implementation, I developed TableRex, a modification of REX that can be expressed on a fixed-length genome. Using a fixed subsumption architecture of TableRex modules, I evolved robots that beat some of the most competitive hand-coded adversaries.
</description>
<pubDate>Tue, 28 Oct 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6731</guid>
<dc:date>2003-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>Learning object segmentation from video data</title>
<link>https://hdl.handle.net/1721.1/6730</link>
<description>Learning object segmentation from video data
Ross, Michael G.; Kaelbling, Leslie Pack
This memo describes the initial results of a project to create a self-supervised algorithm for learning object segmentation from video data. Developmental psychology and computational experience have demonstrated that the motion segmentation of objects is a simpler, more primitive process than the detection of object boundaries by static image cues. Therefore, motion information provides a plausible supervision signal for learning the static boundary detection task and for evaluating performance on a test set. A video camera and previously developed background subtraction algorithms can automatically produce a large database of motion-segmented images for minimal cost. The purpose of this work is to use the information in such a database to learn how to detect the object boundaries in novel images using static information, such as color, texture, and shape.  This work was funded in part by the Office of Naval Research contract #N00014-00-1-0298, in part by the Singapore-MIT Alliance agreement of 11/6/98, and in part by a National Science Foundation Graduate Student Fellowship.
</description>
<pubDate>Mon, 08 Sep 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6730</guid>
<dc:date>2003-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Permutation Tests for Classification</title>
<link>https://hdl.handle.net/1721.1/6723</link>
<description>Permutation Tests for Classification
Mukherjee, Sayan; Golland, Polina; Panchenko, Dmitry
We introduce and explore an approach to estimating statistical significance of classification accuracy, which is particularly useful in scientific applications of machine learning where high dimensionality of the data and the small number of training examples render most standard convergence bounds too loose to yield a meaningful guarantee of the generalization ability of the classifier. Instead, we estimate statistical significance of the observed classification accuracy, or the likelihood of observing such accuracy by chance due to spurious correlations of the high-dimensional data patterns with the class labels in the given training set. We adopt permutation testing, a non-parametric technique previously developed in classical statistics for hypothesis testing in the generative setting (i.e., comparing two probability distributions). We demonstrate the method on real examples from neuroimaging studies and DNA microarray analysis and suggest a theoretical analysis of the procedure that relates the asymptotic behavior of the test to the existing convergence bounds.
</description>
<pubDate>Thu, 28 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6723</guid>
<dc:date>2003-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>Near-Optimal Distributed Failure Circumscription</title>
<link>https://hdl.handle.net/1721.1/6722</link>
<description>Near-Optimal Distributed Failure Circumscription
Beal, Jacob
Small failures should only disrupt a small part of a network. One way to do this is by marking the surrounding area as untrustworthy --- circumscribing the failure. This can be done with a distributed algorithm using hierarchical clustering and neighbor relations, and the resulting circumscription is near-optimal for convex failures.
</description>
<pubDate>Mon, 11 Aug 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6722</guid>
<dc:date>2003-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Delegation, Arbitration and High-Level Service Discovery as Key Elements of a Software Infrastructure for Pervasive Computing</title>
<link>https://hdl.handle.net/1721.1/6721</link>
<description>Delegation, Arbitration and High-Level Service Discovery as Key Elements of a Software Infrastructure for Pervasive Computing
Gajos, Krzysztof; Shrobe, Howard
The dream of pervasive computing is slowly becoming a reality. A number of projects around the world are constantly contributing ideas and solutions that are bound to change the way we interact with our environments and with one another. An essential component of the future is a software infrastructure that is capable of supporting interactions on scales ranging from a single physical space to intercontinental collaborations. Such infrastructure must help applications adapt to very diverse environments and must protect people's privacy and respect their personal preferences. In this paper we indicate a number of limitations present in the software infrastructures proposed so far (including our previous work). We then describe a framework for building an infrastructure that satisfies the above-mentioned criteria. This framework hinges on the concepts of delegation, arbitration and high-level service discovery. Components of our own implementation of such an infrastructure are presented.
</description>
<pubDate>Sun, 01 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6721</guid>
<dc:date>2003-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Activity Zones for Context-Aware Computing</title>
<link>https://hdl.handle.net/1721.1/6720</link>
<description>Activity Zones for Context-Aware Computing
Koile, Kimberle; Tollmar, Konrad; Demirdjian, David; Shrobe, Howard; Darrell, Trevor
Location is a primary cue in many context-aware computing systems, and is often represented as a global coordinate, room number, or Euclidean distance to various landmarks. A user's concept of location, however, is often defined in terms of regions in which common activities occur. We show how to partition a space into such regions based on patterns of observed user location and motion. These regions, which we call activity zones, represent regions of similar user activity, and can be used to trigger application actions, retrieve information based on previous context, and present information to users. We suggest that context-aware applications can benefit from a location representation learned from observing users. We describe an implementation of our system and present two example applications whose behavior is controlled by users' entry, exit, and presence in the zones.
</description>
<pubDate>Tue, 10 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6720</guid>
<dc:date>2003-06-10T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Classes Correlated to a Hierarchy</title>
<link>https://hdl.handle.net/1721.1/6719</link>
<description>Learning Classes Correlated to a Hierarchy
Shih, Lawrence; Karger, David
Trees are a common way of organizing large amounts of information by placing items with similar characteristics near one another in the tree. We introduce a classification problem where a given tree structure gives us information on the best way to label nearby elements. We suggest there are many practical problems that fall under this domain. We propose a way to map the classification problem onto a standard Bayesian inference problem. We also give a fast, specialized inference algorithm that incrementally updates relevant probabilities. We apply this algorithm to web-classification problems and show that our algorithm empirically works well.
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6719</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Essential Dynamics Algorithm: Essential Results</title>
<link>https://hdl.handle.net/1721.1/6718</link>
<description>The Essential Dynamics Algorithm: Essential Results
Martin, Martin C.
This paper presents a novel algorithm for learning in a class of stochastic Markov decision processes (MDPs) with continuous state and action spaces that trades speed for accuracy. A transform of the stochastic MDP into a deterministic one is presented which captures the essence of the original dynamics, in a sense made precise. In this transformed MDP, the calculation of values is greatly simplified. The online algorithm estimates the model of the transformed MDP and simultaneously does policy search against it. Bounds on the error of this approximation are proven, and experimental results in a bicycle riding domain are presented. The algorithm learns near-optimal policies in orders of magnitude fewer interactions with the stochastic MDP, using less domain knowledge. All code used in the experiments is available on the project's web site.
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6718</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Robust Amorphous Hierarchy from Persistent Nodes</title>
<link>https://hdl.handle.net/1721.1/6717</link>
<description>A Robust Amorphous Hierarchy from Persistent Nodes
Beal, Jacob
For a very large network deployed in space with only nearby nodes able to talk to each other, we want to do tasks like robust routing and data storage. One way to organize the network is via a hierarchy, but hierarchies often have a few critical nodes whose death can disrupt organization over long distances. I address this with a system of distributed aggregates called Persistent Nodes, such that spatially local failures disrupt the hierarchy in an area proportional to the diameter of the failure. I describe and analyze this system, which has been implemented in simulation.
</description>
<pubDate>Thu, 01 May 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6717</guid>
<dc:date>2003-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light Field Morphable Models</title>
<link>https://hdl.handle.net/1721.1/6716</link>
<description>Light Field Morphable Models
Christoudias, Chris Mario; Morency, Louis-Philippe; Darrell, Trevor
Statistical shape and texture appearance models are  powerful image representations, but previously had  been restricted to 2D or simple 3D shapes. In this paper  we present a novel 3D morphable model based on  image-based rendering techniques, which can  represent complex lighting conditions, structures, and  surfaces. We describe how to construct a manifold of  the multi-view appearance of an object class using light  fields and show how to match a 2D image of an object  to a point on this manifold. In turn we use the  reconstructed light field to render novel views of the  object. Our technique overcomes the limitations of  polygon based appearance models and uses light  fields that are acquired in real-time.
</description>
<pubDate>Fri, 18 Apr 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6716</guid>
<dc:date>2003-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Pose Estimation with Parameter Sensitive Hashing</title>
<link>https://hdl.handle.net/1721.1/6715</link>
<description>Fast Pose Estimation with Parameter Sensitive Hashing
Shakhnarovich, Gregory; Viola, Paul; Darrell, Trevor
Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends a recently developed method for locality-sensitive hashing, which finds approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions; we show how to find the set of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call Parameter-Sensitive Hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.
</description>
<pubDate>Fri, 18 Apr 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6715</guid>
<dc:date>2003-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring 3D Structure with a Statistical Image-Based Shape Model</title>
<link>https://hdl.handle.net/1721.1/6714</link>
<description>Inferring 3D Structure with a Statistical Image-Based Shape Model
Grauman, Kristen; Shakhnarovich, Gregory; Darrell, Trevor
We present an image-based approach to infer 3D  structure parameters using a probabilistic "shape+structure''  model. The 3D shape of a class of objects may be represented by sets  of contours from silhouette views simultaneously observed from  multiple calibrated cameras. Bayesian reconstructions of new shapes can  then be estimated using a prior density constructed with a mixture model  and probabilistic principal components analysis. We  augment the shape model to incorporate structural features of interest;  novel examples with missing structure parameters may then be  reconstructed to obtain estimates of these parameters. Model matching and  parameter inference are done entirely in the image domain and require no  explicit 3D construction. Our shape model enables accurate  estimation of structure despite segmentation errors or missing views  in the input silhouettes, and works even with only a single input  view. Using a dataset of thousands of pedestrian images generated  from a synthetic model, we can perform accurate inference of the 3D  locations of 19 joints on the body based on observed silhouette  contours from real images.
</description>
<pubDate>Thu, 17 Apr 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6714</guid>
<dc:date>2003-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>Surviving the Information Explosion: How People Find Their Electronic Information</title>
<link>https://hdl.handle.net/1721.1/6713</link>
<description>Surviving the Information Explosion: How People Find Their Electronic Information
Alvarado, Christine; Teevan, Jaime; Ackerman, Mark S.; Karger, David
We report on a study of how people look for information within email, files, and the Web. When locating a document or searching for a specific answer, people relied on their contextual knowledge of their information target to help them find it, often associating the target with a specific document. They appeared to prefer to use this contextual information as a guide in navigating locally in small steps to the desired document rather than directly jumping to their target. We found this behavior was especially true for people with unstructured information organization. We discuss the implications of our findings for the design of personal information management tools.
</description>
<pubDate>Tue, 15 Apr 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6713</guid>
<dc:date>2003-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>Persistent Nodes for Reliable Memory in Geographically Local Networks</title>
<link>https://hdl.handle.net/1721.1/6712</link>
<description>Persistent Nodes for Reliable Memory in Geographically Local Networks
Beal, Jacob
A Persistent Node is a redundant distributed mechanism for storing a key/value pair reliably in a geographically local network. In this paper, I develop a method of establishing Persistent Nodes in an amorphous matrix. I address issues of construction, usage, atomicity guarantees and reliability in the face of stopping failures. Applications include routing, congestion control, and data storage in gigascale networks.
</description>
<pubDate>Tue, 15 Apr 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6712</guid>
<dc:date>2003-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>Context-Based Vision System for Place and Object Recognition</title>
<link>https://hdl.handle.net/1721.1/6711</link>
<description>Context-Based Vision System for Place and Object Recognition
Torralba, Antonio; Murphy, Kevin P.; Freeman, William T.; Rubin, Mark A.
While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street), and to use that information to provide contextual priors for object recognition (e.g., table, chair, car, computer). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides real-time feedback to the user.
</description>
<pubDate>Wed, 19 Mar 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6711</guid>
<dc:date>2003-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Learning and Language Via Communication Bootstrapping</title>
<link>https://hdl.handle.net/1721.1/6710</link>
<description>Leveraging Learning and Language Via Communication Bootstrapping
Beal, Jacob
In a Communication Bootstrapping system, peer  components with different perceptual worlds invent symbols and syntax  based on correlations between their percepts. I propose that  Communication Bootstrapping can also be used to acquire functional  definitions of words and causal reasoning knowledge. I illustrate this  point with several examples, then sketch the architecture of a  system in progress which attempts to execute this task.
</description>
<pubDate>Mon, 17 Mar 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6710</guid>
<dc:date>2003-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>(Semi-)Predictive Discretization During Model Selection</title>
<link>https://hdl.handle.net/1721.1/6709</link>
<description>(Semi-)Predictive Discretization During Model Selection
Steck, Harald; Jaakkola, Tommi S.
In this paper, we present an approach to discretizing  multivariate continuous data while learning the  structure of a graphical model. We derive the joint  scoring function from the principle of predictive  accuracy, which inherently ensures the optimal trade-off between goodness of fit and model complexity  (including the number of discretization levels). Using  the so-called finest grid implied by the data, our scoring  function depends only on the number of data points in  the various discretization levels. Not only can it be  computed efficiently, but it is also independent of the  metric used in the continuous space. Our experiments  with gene expression data show that discretization  plays a crucial role regarding the resulting network  structure.
</description>
<pubDate>Tue, 25 Feb 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6709</guid>
<dc:date>2003-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>Generalized Low-Rank Approximations</title>
<link>https://hdl.handle.net/1721.1/6708</link>
<description>Generalized Low-Rank Approximations
Srebro, Nathan; Jaakkola, Tommi
We study the frequent problem of approximating a target matrix with a matrix of lower rank. We provide a simple and efficient (EM) algorithm for solving weighted low-rank approximation problems, which, unlike simple matrix factorization problems, do not admit a closed-form solution in general. We analyze, in addition, the nature of locally optimal solutions that arise in this context, demonstrate the utility of accommodating the weights in reconstructing the underlying low-rank representation, and extend the formulation to non-Gaussian noise models such as classification (collaborative filtering).
</description>
<pubDate>Wed, 15 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6708</guid>
<dc:date>2003-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Programming in the Formulation of Ideas</title>
<link>https://hdl.handle.net/1721.1/6707</link>
<description>The Role of Programming in the Formulation of Ideas
Sussman, Gerald Jay; Wisdom, Jack
Classical mechanics is deceptively simple. It  is surprisingly easy to get the right answer with fallacious reasoning  or without real understanding. To address this problem we  use computational techniques to communicate a deeper  understanding of Classical Mechanics. Computational algorithms are  used to express the methods used in the analysis of dynamical  phenomena. Expressing the methods in a computer language forces them to be  unambiguous and computationally effective. The task of  formulating a method as a computer-executable program and debugging  that program is a powerful exercise in the learning process. Also, once  formalized procedurally, a mathematical idea becomes a tool that can  be used directly to compute results.
</description>
<pubDate>Fri, 01 Nov 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6707</guid>
<dc:date>2002-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Swimming in Space-Time</title>
<link>https://hdl.handle.net/1721.1/6706</link>
<description>Swimming in Space-Time
Wisdom, Jack
Cyclic changes in the shape of a quasi-rigid  body on a curved manifold can lead to net translation and/or  rotation of the body in the manifold. Presuming space-time is a  curved manifold as portrayed by general relativity, translation in space can  be accomplished simply by cyclic changes in the shape of a body,  without any thrust or external forces.
</description>
<pubDate>Fri, 01 Nov 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6706</guid>
<dc:date>2002-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiple Resolution Image Classification</title>
<link>https://hdl.handle.net/1721.1/6705</link>
<description>Multiple Resolution Image Classification
Bouvrie, Jake V.
Binary image classification is a problem that has received much attention in recent years. In this paper we evaluate a selection of popular techniques in an effort to find a feature set/classifier combination which generalizes well to full-resolution image data. We then apply that system to images at one-half through one-sixteenth resolution, and consider the corresponding error rates. In addition, we further observe generalization performance as it depends on the number of training images, and lastly, compare the system's best error rates to those of a human performing an identical classification task given the same set of test images.
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6705</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape Recipes: Scene Representations that Refer to the Image</title>
<link>https://hdl.handle.net/1721.1/6704</link>
<description>Shape Recipes: Scene Representations that Refer to the Image
Freeman, William T.; Torralba, Antonio
The goal of low-level vision is to estimate an  underlying scene, given an observed image. Real-world scenes  (e.g., albedos or shapes) can be very complex, conventionally requiring  high dimensional representations which are hard to estimate  and store. We propose a low-dimensional representation, called a  scene recipe, that relies on the image itself to describe the  complex scene configurations. Shape recipes are an  example: these are the regression coefficients that predict the  bandpassed shape from bandpassed image data. We describe the  benefits of this representation, and show two uses  illustrating their properties: (1) we improve stereo shape estimates by  learning shape recipes at low resolution and applying them at full resolution;  (2) Shape recipes implicitly contain information about lighting  and materials and we use them for material segmentation.
</description>
<pubDate>Sun, 01 Sep 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6704</guid>
<dc:date>2002-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recovering Intrinsic Images from a Single Image</title>
<link>https://hdl.handle.net/1721.1/6703</link>
<description>Recovering Intrinsic Images from a Single Image
Tappen, Marshall F.; Freeman, William T.; Adelson, Edward H.
We present an algorithm that uses multiple  cues to recover shading and reflectance intrinsic images from a single  image. Using both color information and a classifier trained to  recognize gray-scale patterns, each image derivative is classified as being  caused by shading or a change in the surface's reflectance.  Generalized Belief Propagation is then used to propagate information from  areas where the correct classification is clear to areas where it is  ambiguous. We also show results on real images.
</description>
<pubDate>Sun, 01 Sep 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6703</guid>
<dc:date>2002-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Dirichlet Prior and Bayesian Regularization</title>
<link>https://hdl.handle.net/1721.1/6702</link>
<description>On the Dirichlet Prior and Bayesian Regularization
Steck, Harald; Jaakkola, Tommi S.
A common objective in learning a model from  data is to recover its network structure, while the model  parameters are of minor interest. For example, we may wish to recover  regulatory networks from high-throughput data sources. In this paper  we examine how Bayesian regularization using a Dirichlet prior over the  model parameters affects the learned model structure in a  domain with discrete variables. Surprisingly, a weak prior in the  sense of smaller equivalent sample size leads to a strong  regularization of the model structure (sparse graph) given a sufficiently  large data set. In particular, the empty graph is obtained in the  limit of a vanishing strength of prior belief. This is  diametrically opposite to what one may expect in this limit, namely the  complete graph from an (unregularized) maximum likelihood estimate.  Since the prior affects the parameters as expected, the prior strength  balances a "trade-off" between regularizing the parameters or the  structure of the model. We demonstrate the benefits of optimizing this  trade-off in the sense of predictive accuracy.
</description>
<pubDate>Sun, 01 Sep 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6702</guid>
<dc:date>2002-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>1968-1969 Progress Report</title>
<link>https://hdl.handle.net/1721.1/6701</link>
<description>1968-1969 Progress Report
Minsky, Marvin; Papert, Seymour A.
This report mainly summarizes the Project  MAC A.I. Group work between July 1968 and  June 1969 but covers some work up to  February 1970. The work on computer vision  is described in detail. This summary should  be read in conjunction with last year's A.I.  Group Report which is included at the end of  this Memo.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6701</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revised Report on the Algorithmic Language Scheme</title>
<link>https://hdl.handle.net/1721.1/6700</link>
<description>Revised Report on the Algorithmic Language Scheme
Rees, Jonathan; Clinger, William
Data and procedures and the values they amass, Higher-order functions to combine and mix and match, Objects with their local state, the messages they pass, A property, a package, the control point for a catch: In the Lambda Order they are all first-class. One thing to name them all, one thing to define them, one thing to place them in environments and bind them, in the Lambda Order they are all first-class. Keywords: Scheme, Lisp, functional programming, computer languages.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6700</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tense, Aspect and the Cognitive Representation of Time</title>
<link>https://hdl.handle.net/1721.1/6699</link>
<description>Tense, Aspect and the Cognitive Representation of Time
Yip, Kenneth Man-Kam
This paper explores the relationships between a computational theory of temporal representation (as developed by James Allen) and a formal linguistic theory of tense (as developed by Norbert Hornstein) and aspect. It aims to provide explicit answers to four fundamental questions: (1) what is the computational justification for the primitives of a linguistic theory; (2) what is the computational explanation of the formal grammatical constraints; (3) what are the processing constraints imposed on the learnability and markedness of these theoretical constructs; and (4) what are the constraints that a linguistic theory imposes on representations. We show that one can effectively exploit the interface between the language faculty and the cognitive faculties by using linguistic constraints to determine restrictions on the cognitive representation and vice versa. Three main results are obtained: (1) we derive an explanation of an observed grammatical constraint on tense, the Linear Order Constraint, from the information monotonicity property of the constraint propagation algorithm of Allen's temporal system; (2) we formulate a principle of markedness for the basic tense structures based on the computational efficiency of the temporal representations; and (3) we show that Allen's interval-based temporal system is not arbitrary, but can be used to explain independently motivated linguistic constraints on tense and aspect interpretations. We also claim that the methodology of research developed in this study, the "cross-level" investigation of independently motivated formal grammatical theory and computational models, is a powerful paradigm with which to attack representational problems in basic cognitive domains, e.g., space, time, causality, etc.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6699</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vision by Man and Machine</title>
<link>https://hdl.handle.net/1721.1/6698</link>
<description>Vision by Man and Machine
Poggio, Tomaso
The development of increasingly sophisticated and powerful computers in the last few decades has frequently stimulated comparisons between them and the human brain. Such comparisons will become more earnest as computers are applied more and more to tasks formerly associated with essentially human activities and capabilities. The expectation of a coming generation of "intelligent" computers and robots with sensory, motor and even "intellectual" skills comparable in quality to (and quantitatively surpassing) our own is becoming more widespread and is, I believe, leading to a new and potentially productive analytical science of "information processing". In no field has this new approach been so precisely formulated and so thoroughly exemplified as in the field of vision. As the dominant sensory modality of man, vision is one of the major keys to our mastery of the environment, to our understanding and control of the objects which surround us. If we wish to create robots capable of performing complex manipulative tasks in a changing environment, we must surely endow them with (among other things) adequate visual powers. How can we set about designing such flexible and adaptive robots? In designing them, can we make use of our rapidly growing knowledge of the human brain, and if so, how? At the same time, can our experiences in designing artificial vision systems help us to understand how the brain analyzes visual information?
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6698</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structure from Stereo and Motion</title>
<link>https://hdl.handle.net/1721.1/6697</link>
<description>Structure from Stereo and Motion
Richards, Whitman
Stereopsis and motion parallax are two  methods for recovering three dimensional  shape. Theoretical analyses of each method  show that neither alone can recover rigid 3D  shapes correctly unless other information,  such as perspective, is included. The  solutions for recovering rigid structure from  motion have a reflection ambiguity; the depth  scale of the stereoscopic solution will not be  known unless the fixation distance is  specified in units of interpupil separation.  (Hence the configuration will appear  distorted.) However, the correct configuration  and the disposition of a rigid 3D shape can be  recovered if stereopsis and motion are  integrated, for then a unique solution follows  from a set of linear equations. The correct  interpretation requires only three points and  two stereo views.
</description>
<pubDate>Thu, 01 Sep 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6697</guid>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Universal Computation in an Evolutionary System</title>
<link>https://hdl.handle.net/1721.1/6696</link>
<description>Implementing Universal Computation in an Evolutionary System
Werfel, Justin
Evolutionary algorithms are a common tool in  engineering and in the study of natural  evolution. Here we take their use in a new  direction by showing how they can be made to  implement a universal computer. We  consider populations of individuals with  genes whose values are the variables of  interest. By allowing them to interact with one  another in a specified environment with  limited resources, we demonstrate the ability  to construct any arbitrary logic circuit. We  explore models based on the limits of small  and large populations, and show examples of  such a system in action, implementing a  simple logic circuit.
</description>
<pubDate>Mon, 01 Jul 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6696</guid>
<dc:date>2002-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Properties and Applications of Shape Recipes</title>
<link>https://hdl.handle.net/1721.1/6695</link>
<description>Properties and Applications of Shape Recipes
Torralba, Antonio; Freeman, William T.
In low-level vision, the representation of scene  properties such as shape, albedo, etc., are very high  dimensional as they have to describe complicated structures. The  approach proposed here is to let the image itself bear as much of the  representational burden as possible. In many situations, scene and  image are closely related and it is possible to find a functional relationship  between them. The scene information can be represented in  reference to the image where the functional specifies how to translate the  image into the associated scene. We illustrate the use of this  representation for encoding shape information. We show how  this representation has appealing properties such as locality and  slow variation across space and scale. These properties provide a way of  improving shape estimates coming from other sources of information like  stereo.
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6695</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Keeping Secrets in Hardware: the Microsoft Xbox(TM) Case Study</title>
<link>https://hdl.handle.net/1721.1/6694</link>
<description>Keeping Secrets in Hardware: the Microsoft Xbox(TM) Case Study
Huang, Andrew "bunnie"
This paper discusses the hardware foundations of the cryptosystem employed by the Xbox(TM) video game console from Microsoft. A secret boot block overlay is buried within a system ASIC. This secret boot block decrypts and verifies portions of an external FLASH-type ROM. The presence of the secret boot block is camouflaged by a decoy boot block in the external ROM. The code contained within the secret boot block is transferred to the CPU in the clear over a set of high-speed busses where it can be extracted using simple custom hardware. The paper concludes with recommendations for improving the Xbox security system. One lesson of this study is that the use of a high-performance bus alone is not a sufficient security measure, given the advent of inexpensive, fast rapid prototyping services and high-performance FPGAs.
</description>
<pubDate>Sun, 26 May 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6694</guid>
<dc:date>2002-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Trajectory Mapping ("TM''): A New Non-Metric Scaling Technique</title>
<link>https://hdl.handle.net/1721.1/6693</link>
<description>Trajectory Mapping ("TM''): A New Non-Metric Scaling Technique
Richards, Whitman; Koenderink, Jan J.
Trajectory Mapping "TM'' is a new scaling technique designed to recover the parameterizations, axes, and paths used to traverse a feature space. Unlike Multidimensional Scaling (MDS), there is no assumption that the space is homogeneous or metric. Although some metric ordering information is obtained with TM, the main output is the feature parameterizations that partition the given domain of object samples into different categories. Following an introductory example, the technique is further illustrated using first a set of colors and then a collection of textures taken from Brodatz (1966).
</description>
<pubDate>Wed, 01 Dec 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6693</guid>
<dc:date>1993-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Seeing 'Ghost' Solutions in Stereo Vision</title>
<link>https://hdl.handle.net/1721.1/6692</link>
<description>Seeing 'Ghost' Solutions in Stereo Vision
Weinshall, Daphna
A unique matching is a stated objective of  most computational theories of stereo vision.  This report describes situations where  humans perceive a small number of surfaces  carried by non-unique matching of random dot  patterns, although a unique solution exists  and is observed unambiguously in the  perception of isolated features. We find both  cases where non-unique matchings compete  and suppress each other and cases where  they are all perceived as transparent surfaces.  The circumstances under which each  behavior occurs are discussed and a  possible explanation is sketched. It appears  that matching reduces many false targets to a  few, but may still yield multiple solutions in  some cases through a (possibly different)  process of surface interpolation.
</description>
<pubDate>Thu, 01 Sep 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6692</guid>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Inference in a Mechanical Design Compiler</title>
<link>https://hdl.handle.net/1721.1/6691</link>
<description>Quantitative Inference in a Mechanical Design Compiler
Ward, Allen C.; Seering, Warren
This paper presents the ideas underlying a  program that takes as input a schematic of a  mechanical or hydraulic power transmission  system, plus specifications and a utility  function, and returns catalog numbers from  predefined catalogs for the optimal selection  of components implementing the design. It  thus provides the designer with a high level  "language" in which to compose new  designs, then performs some of the detailed  design process for him. The program is  based on a formalization of quantitative  inferences about hierarchically organized sets  of artifacts and operating conditions, which  allows design compilation without the  exhaustive enumeration of alternatives.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6691</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Meta-Rules: Reasoning About Control</title>
<link>https://hdl.handle.net/1721.1/6690</link>
<description>Meta-Rules: Reasoning About Control
Davis, Randall
How can we insure that knowledge  embedded in a program is applied effectively?  Traditionally the answer to this question has  been sought in different problem solving  paradigms and in different approaches to  encoding and indexing knowledge. Each of  these is useful with a certain variety of  problem, but they all share a common  problem: they become ineffective in the face of  a sufficiently large knowledge base. How then  can we make it possible for a system to  continue to function in the face of a very large  number of plausibly useful chunks of  knowledge? In response to this question we  propose a framework for viewing issues of  knowledge indexing and retrieval, a  framework that includes what appears to be a  useful perspective on the concept of a  strategy. We view strategies as a means of  controlling invocation in situations where  traditional selection mechanisms become  ineffective. We examine ways to effect such  control, and describe meta-rules, a means of  specifying strategies which offers a number of  advantages. We consider at some length how  and when it is useful to reason about control,  and explore the advantages meta-rules offer  for doing this.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6690</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computer as Coach: An Athletic Paradigm for Intellectual Education</title>
<link>https://hdl.handle.net/1721.1/6689</link>
<description>The Computer as Coach: An Athletic Paradigm for Intellectual Education
Goldstein, Ira
Over the next five years, computer games will find their way into a vast number of American homes, creating a unique educational opportunity: the development of "computer coaches" for the serious intellectual skills required by some of these games. From the player's perspective, the coach will provide advice regarding strategy and tactics for better play. But, from the perspective of the coach, the request for help is an opportunity to tutor basic mathematical, scientific or other kinds of knowledge that the game exercises.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6689</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Floyd-Hoare Verifiers "Considered Harmful"</title>
<link>https://hdl.handle.net/1721.1/6688</link>
<description>Floyd-Hoare Verifiers "Considered Harmful"
Shrobe, Howard E.
The Floyd-Hoare methodology completely dominates the field of program verification and has contributed much to our understanding of how programs might be analyzed. Useful but limited verifiers have been developed using Floyd-Hoare techniques. However, it has long been known that it is difficult to handle side effects on shared data structures within the Floyd-Hoare framework. Most successful examples rely on Floyd-Hoare axioms for assignment to complex data structures, and similar statements have been used by London. This paper demonstrates an error in these formalizations and suggests a different style of verification.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6688</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Actors and Continuous Functionals</title>
<link>https://hdl.handle.net/1721.1/6687</link>
<description>Actors and Continuous Functionals
Hewitt, Carl; Baker, Henry
This paper presents precise versions of  some "laws" that must be satisfied by  computations involving communicating  parallel processes. The laws take the form of  stating plausible restrictions on the histories  of computations that are physically realizable.  The laws are very general in that they are  obeyed by parallel processes executing on a  time varying number of distributed physical  processors.
</description>
<pubDate>Fri, 01 Jul 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6687</guid>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Programming Language Features for Executable Design Patterns "Better Patterns Through Reflection"</title>
<link>https://hdl.handle.net/1721.1/6686</link>
<description>Advanced Programming Language Features for Executable Design Patterns "Better Patterns Through Reflection"
Sullivan, Gregory T.
The Design Patterns book [GOF95] presents 23 time-tested patterns that consistently appear in well-designed software systems. Each pattern is presented with a description of the design problem the pattern addresses, as well as sample implementation code and design considerations. This paper explores how the patterns from the "Gang of Four", or "GOF", book, as it is often called, appear when similar problems are addressed using a dynamic, higher-order, object-oriented programming language. Some of the patterns disappear -- that is, they are supported directly by language features -- some patterns are simpler or have a different focus, and some are essentially unchanged.
</description>
<pubDate>Fri, 22 Mar 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6686</guid>
<dc:date>2002-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>Learning with Deictic Representation</title>
<link>https://hdl.handle.net/1721.1/6685</link>
<description>Learning with Deictic Representation
Finney, Sarah; Gardiol, Natalia H.; Kaelbling, Leslie Pack; Oates, Tim
Most reinforcement learning methods operate on propositional representations of the world state. Such representations are often intractably large and generalize poorly. Deictic representations are believed to be a viable alternative: they promise generalization while allowing the use of existing reinforcement-learning methods. Yet there are few experiments on learning with deictic representations reported in the literature. In this paper we explore the effectiveness of two forms of deictic representation and a naive propositional representation in a simple blocks-world domain. We find, empirically, that the deictic representations actually worsen performance. We conclude with a discussion of possible causes of these results and strategies for more effective learning in domains with objects.
</description>
<pubDate>Wed, 10 Apr 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6685</guid>
<dc:date>2002-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Image Dissector "Eyes"</title>
<link>https://hdl.handle.net/1721.1/6684</link>
<description>The Image Dissector "Eyes"
Horn, B.K.P.
This is a collection of data on the construction, operation, and performance of the two image dissector cameras. Some of this data is useful in deciding whether certain shortcomings are significant for a given application and, if so, how to compensate for them.
</description>
<pubDate>Fri, 01 Aug 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6684</guid>
<dc:date>1969-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape-Time Photography</title>
<link>https://hdl.handle.net/1721.1/6683</link>
<description>Shape-Time Photography
Freeman, William T.; Zhang, Hao
We introduce a new method to describe, in a single image, changes in shape over time. We acquire both range and image information with a stationary stereo camera. From the pictures taken, we display a composite image consisting of the image data from the surface closest to the camera at every pixel. This reveals the 3-d relationships over time by easy-to-interpret occlusion relationships in the composite image. We call the composite a shape-time photograph. Small errors in depth measurements cause artifacts in the shape-time images. We correct most of these using a Markov network to estimate the most probable front surface, taking into account the depth measurements, their uncertainties, and layer continuity assumptions.
</description>
<pubDate>Thu, 10 Jan 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6683</guid>
<dc:date>2002-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Vision-Based Interfaces: How to Use Your Head in Dual Pointing Tasks</title>
<link>https://hdl.handle.net/1721.1/6682</link>
<description>Exploring Vision-Based Interfaces: How to Use Your Head in Dual Pointing Tasks
Darrell, Trevor; Checka, Neal; Oh, Alice; Morency, Louis-Philippe
The utility of vision-based face tracking for dual pointing tasks is evaluated. We first describe a 3-D face tracking technique based on real-time parametric motion-stereo, which is non-invasive, robust, and self-initialized. The tracker provides a real-time estimate of a "frontal face ray" whose intersection with the display surface plane is used as a second stream of input for scrolling or pointing, in parallel with hand input. We evaluated the performance of combined head/hand input on a box selection and coloring task: users selected boxes with one pointer and colors with a second pointer, or performed both tasks with a single pointer. We found that performance with head and one hand was intermediate between single hand performance and dual hand performance. Our results are consistent with previously reported dual hand conflict in symmetric pointing tasks, and suggest that a head-based input stream should be used for asymmetric control.
</description>
<pubDate>Tue, 01 Jan 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6682</guid>
<dc:date>2002-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>PDP-6 LISP (LISP 1.5) Revised</title>
<link>https://hdl.handle.net/1721.1/6681</link>
<description>PDP-6 LISP (LISP 1.5) Revised
None listed
This is a mosaic description of PDP-6 LISP, intended for readers familiar with the LISP 1.5 Programmer's Manual or who have used LISP on some other computer. Many of the features, such as the display, are subject to change. Thus, consult a PDP-6 system programmer for any differences which may exist between LISP of Oct. 14, 1966 and present LISP on the system tape.
</description>
<pubDate>Sat, 01 Apr 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6681</guid>
<dc:date>1967-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simplifying transformations for type-alpha certificates</title>
<link>https://hdl.handle.net/1721.1/6680</link>
<description>Simplifying transformations for type-alpha certificates
Arkoudas, Konstantine
This paper presents an algorithm for simplifying NDL deductions. An array of simplifying transformations is rigorously defined. They are shown to be terminating, and to respect the formal semantics of the language. We also show that the transformations never increase the size or complexity of a deduction---in the worst case, they produce deductions of the same size and complexity as the original. We present several examples of proofs containing various types of "detours", and explain how our procedure eliminates them, resulting in smaller and cleaner deductions. All of the given transformations are fully implemented in SML-NJ. The complete code listing is presented, along with explanatory comments. Finally, although the transformations given here are defined for NDL, we point out that they can be applied to any type-alpha DPL that satisfies a few simple conditions.
</description>
<pubDate>Tue, 13 Nov 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6680</guid>
<dc:date>2001-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Stable Mixing of Complete and Incomplete Information</title>
<link>https://hdl.handle.net/1721.1/6679</link>
<description>Stable Mixing of Complete and Incomplete Information
Corduneanu, Adrian; Jaakkola, Tommi
An increasing number of parameter estimation tasks involve the use of at least two information sources, one complete but limited, the other abundant but incomplete. Standard algorithms such as EM (or em) used in this context are unfortunately not stable in the sense that they can lead to a dramatic loss of accuracy with the inclusion of incomplete observations. We provide a more controlled solution to this problem through differential equations that govern the evolution of locally optimal solutions (fixed points) as a function of the source weighting. This approach permits us to explicitly identify any critical (bifurcation) points leading to choices unsupported by the available complete data. The approach readily applies to any graphical model in O(n^3) time where n is the number of parameters. We use the naive Bayes model to illustrate these ideas and demonstrate the effectiveness of our approach in the context of text classification problems.
</description>
<pubDate>Thu, 08 Nov 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6679</guid>
<dc:date>2001-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting Digital Forgeries Using Bispectral Analysis</title>
<link>https://hdl.handle.net/1721.1/6678</link>
<description>Detecting Digital Forgeries Using Bispectral Analysis
Farid, Hany
With the rapid increase in low-cost and sophisticated digital technology the need for techniques to authenticate digital material will become more urgent. In this paper we address the problem of authenticating digital signals assuming no explicit prior knowledge of the original. The basic approach that we take is to assume that in the frequency domain a "natural" signal has weak higher-order statistical correlations. We then show that "un-natural" correlations are introduced if this signal is passed through a non-linearity (which would almost surely occur in the creation of a forgery). Techniques from polyspectral analysis are then used to detect the presence of these correlations. We review the basics of polyspectral analysis, show how and why these tools can be used in detecting forgeries and show their effectiveness in analyzing human speech.
</description>
<pubDate>Wed, 01 Dec 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6678</guid>
<dc:date>1999-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Monitoring Activities from Multiple Video Streams: Establishing a Common Coordinate Frame</title>
<link>https://hdl.handle.net/1721.1/6677</link>
<description>Monitoring Activities from Multiple Video Streams: Establishing a Common Coordinate Frame
Stein, Gideon P.; Romano, Raquel; Lee, Lily
Passive monitoring of large sites typically requires coordination between multiple cameras, which in turn requires methods for automatically relating events between distributed cameras. This paper tackles the problem of self-calibration of multiple cameras which are very far apart, using feature correspondences to determine the camera geometry. The key problem is finding such correspondences. Since the camera geometry and photometric characteristics vary considerably between images, one cannot use brightness and/or proximity constraints. Instead we apply planar geometric constraints to moving objects in the scene in order to align the scene's ground plane across multiple views. We do not assume synchronized cameras, and we show that enforcing geometric constraints enables us to align the tracking data in time. Once we have recovered the homography which aligns the planar structure in the scene, we can compute from the homography matrix the 3D position of the plane and the relative camera positions. This in turn enables us to recover a homography matrix which maps the images to an overhead view. We demonstrate this technique in two settings: a controlled lab setting where we test the effects of errors in internal camera calibration, and an uncontrolled, outdoor setting in which the full procedure is applied to external camera calibration and ground plane recovery. In spite of noise in the internal camera parameters and image data, the system successfully recovers both planar structure and relative camera positions in both settings.
</description>
<pubDate>Thu, 01 Apr 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6677</guid>
<dc:date>1999-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Accelerated Chow and Liu Algorithm: Fitting Tree Distributions to High Dimensional Sparse Data</title>
<link>https://hdl.handle.net/1721.1/6676</link>
<description>An Accelerated Chow and Liu Algorithm: Fitting Tree Distributions to High Dimensional Sparse Data
Meila, Marina
Chow and Liu introduced an algorithm for fitting a multivariate distribution with a tree, i.e. a density model that assumes only pairwise dependencies between variables, with the graph of these dependencies forming a spanning tree. The original algorithm is quadratic in the dimension of the domain, and linear in the number of data points that define the target distribution P. This paper shows that for sparse, discrete data, fitting a tree distribution can be done in time and memory that is jointly subquadratic in the number of variables and the size of the data set. The new algorithm, called the acCL algorithm, takes advantage of the sparsity of the data to accelerate the computation of pairwise marginals and the sorting of the resulting mutual informations, achieving speed-ups of up to 2-3 orders of magnitude in the experiments.
</description>
<pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6676</guid>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Separating Reflections from Images Using Independent Components Analysis</title>
<link>https://hdl.handle.net/1721.1/6675</link>
<description>Separating Reflections from Images Using Independent Components Analysis
Farid, Hany; Adelson, Edward H.
The image of an object can vary dramatically depending on lighting, specularities/reflections and shadows. It is often advantageous to separate these incidental variations from the intrinsic aspects of an image. Along these lines this paper describes a method for photographing objects behind glass and digitally removing the reflections off the glass leaving the image of the objects behind the glass intact. We describe the details of this method which employs simple optical techniques and independent components analysis (ICA) and show its efficacy with several examples.
</description>
<pubDate>Tue, 01 Sep 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6675</guid>
<dc:date>1998-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Restructuring Sparse High Dimensional Data for Effective Retrieval</title>
<link>https://hdl.handle.net/1721.1/6674</link>
<description>Restructuring Sparse High Dimensional Data for Effective Retrieval
Isbell, Charles; Viola, Paul
The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Classically, documents and queries are represented as vectors of word counts. In its simplest form, relevance is defined to be the dot product between a document and a query vector--a measure of the number of common terms. A central difficulty in text retrieval is that the presence or absence of a word is not sufficient to determine relevance to a query. Linear dimensionality reduction has been proposed as a technique for extracting underlying structure from the document collection. In some domains (such as vision) dimensionality reduction reduces computational complexity. In text retrieval it is more often used to improve retrieval performance. We propose an alternative and novel technique that produces sparse representations constructed from sets of highly-related words. Documents and queries are represented by their distance to these sets, and relevance is measured by the number of common clusters. This technique significantly improves retrieval performance, is efficient to compute, and shares properties with the optimal linear projection operator and the independent components of documents.
</description>
<pubDate>Fri, 01 May 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6674</guid>
<dc:date>1998-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sparse Representations for Fast, One-Shot Learning</title>
<link>https://hdl.handle.net/1721.1/6673</link>
<description>Sparse Representations for Fast, One-Shot Learning
Yip, Kenneth; Sussman, Gerald Jay
Humans rapidly and reliably learn many kinds of regularities and generalizations. We propose a novel model of fast learning that exploits the properties of sparse representations and the constraints imposed by a plausible hardware mechanism. To demonstrate our approach we describe a computational model of acquisition in the domain of morphophonology. We encapsulate phonological information as bidirectional boolean constraint relations operating on the classical linguistic representations of speech sounds in terms of distinctive features. The performance model is described as a hardware mechanism that incrementally enforces the constraints. Phonological behavior arises from the action of this mechanism. Constraints are induced from a corpus of common English nouns and verbs. The induction algorithm compiles the corpus into increasingly sophisticated constraints. The algorithm yields one-shot learning from a few examples. Our model has been implemented as a computer program. The program exhibits phonological behavior similar to that of young children. As a bonus the constraints that are acquired can be interpreted as classical linguistic rules.
</description>
<pubDate>Sat, 01 Nov 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6673</guid>
<dc:date>1997-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coordinate-Independent Computations on Differential Equations</title>
<link>https://hdl.handle.net/1721.1/6672</link>
<description>Coordinate-Independent Computations on Differential Equations
Lin, Kevin K.
This project investigates the computational representation of differentiable manifolds, with the primary goal of solving partial differential equations using multiple coordinate systems on general n-dimensional spaces. In the process, this abstraction is used to perform accurate integrations of ordinary differential equations using multiple coordinate systems. In the case of linear partial differential equations, however, unexpected difficulties arise even with the simplest equations.
</description>
<pubDate>Sun, 01 Mar 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6672</guid>
<dc:date>1998-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recovery of Three-Dimensional Objects from Single Perspective Images</title>
<link>https://hdl.handle.net/1721.1/6671</link>
<description>Recovery of Three-Dimensional Objects from Single Perspective Images
Marill, Thomas
Any three-dimensional wire-frame object constructed out of parallelograms can be recovered from a single perspective two-dimensional image. A procedure for performing the recovery is given.
</description>
<pubDate>Sun, 01 Mar 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6671</guid>
<dc:date>1998-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Binocular, Foveated Active Vision System</title>
<link>https://hdl.handle.net/1721.1/6670</link>
<description>A Binocular, Foveated Active Vision System
Scassellati, Brian
This report documents the design and implementation of a binocular, foveated active vision system as part of the Cog project at the MIT Artificial Intelligence Laboratory. The active vision system features a three degree of freedom mechanical platform that supports four color cameras, a motion control system, and a parallel network of digital signal processors for image processing. To demonstrate the capabilities of the system, we present results from four sample visual-motor tasks.
</description>
<pubDate>Sun, 01 Mar 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6670</guid>
<dc:date>1998-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Algorithm for Group Formation and Maximal Independent Set in an Amorphous Computer</title>
<link>https://hdl.handle.net/1721.1/6669</link>
<description>An Algorithm for Group Formation and Maximal Independent Set in an Amorphous Computer
Nagpal, Radhika; Coore, Daniel
Amorphous computing is the study of programming ultra-scale computing environments of smart sensors and actuators. The individual elements are identical, asynchronous, randomly placed, embedded and communicate locally via wireless broadcast. Aggregating the processors into groups is a useful paradigm for programming an amorphous computer because groups can be used for specialization, increased robustness, and efficient resource allocation. This paper presents a new algorithm, called the clubs algorithm, for efficiently aggregating processors into groups in an amorphous computer, in time proportional to the local density of processors. The clubs algorithm is well-suited to the unique characteristics of an amorphous computer. In addition, the algorithm derives two properties from the physical embedding of the amorphous computer: an upper bound on the number of groups formed and a constant upper bound on the density of groups. The clubs algorithm can also be extended to find the maximal independent set (MIS) and (Delta + 1) vertex coloring in an amorphous computer in O(log N) rounds, where N is the total number of elements and Delta is the maximum degree.
</description>
<pubDate>Sun, 01 Feb 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6669</guid>
<dc:date>1998-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Estimation of Motion and Extended Scene Structure from a Moving Stereo Rig</title>
<link>https://hdl.handle.net/1721.1/6668</link>
<description>Direct Estimation of Motion and Extended Scene Structure from a Moving Stereo Rig
Stein, Gideon P.; Shashua, Amnon
We describe a new method for motion estimation and 3D reconstruction from stereo image sequences obtained by a stereo rig moving through a rigid world. We show that given two stereo pairs one can compute the motion of the stereo rig directly from the image derivatives (spatial and temporal). Correspondences are not required. One can then use the images from both pairs combined to compute a dense depth map. The motion estimates between stereo pairs enable us to combine depth maps from all the pairs in the sequence to form an extended scene reconstruction and we show results from a real image sequence. The motion computation is a linear least squares computation using all the pixels in the image. Areas with little or no contrast are implicitly weighted less so one does not have to explicitly apply a confidence measure.
</description>
<pubDate>Tue, 01 Dec 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6668</guid>
<dc:date>1998-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Perturbation Scheme for Spherical Arrangements with Application to Molecular Modeling</title>
<link>https://hdl.handle.net/1721.1/6667</link>
<description>A Perturbation Scheme for Spherical Arrangements with Application to Molecular Modeling
Halperin, Dan; Shelton, Christian R.
We describe a software package for computing and manipulating the subdivision of a sphere by a collection of (not necessarily great) circles and for computing the boundary surface of the union of spheres. We present problems that arise in the implementation of the software and the solutions that we have found for them. At the core of the paper is a novel perturbation scheme to overcome degeneracies and precision problems in computing spherical arrangements while using floating point arithmetic. The scheme is relatively simple, it balances between the efficiency of computation and the magnitude of the perturbation, and it performs well in practice. In one O(n) time pass through the data, it perturbs the inputs as necessary to ensure that no potential degeneracies remain, and then passes the perturbed inputs on to the geometric algorithm. We report and discuss experimental results. Our package is a major component in a larger package aimed to support geometric queries on molecular models; it is currently employed by chemists working in "rational drug design." The spherical subdivisions are used to construct a geometric model of a molecule where each sphere represents an atom. We also give an overview of the molecular modeling package and detail additional features and implementation issues.
</description>
<pubDate>Mon, 01 Dec 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6667</guid>
<dc:date>1997-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Paradigms for Structure in an Amorphous Computer</title>
<link>https://hdl.handle.net/1721.1/6666</link>
<description>Paradigms for Structure in an Amorphous Computer
Coore, Daniel; Nagpal, Radhika; Weiss, Ron
Recent developments in microfabrication and nanotechnology will enable the inexpensive manufacturing of massive numbers of tiny computing elements with sensors and actuators. New programming paradigms are required for obtaining organized and coherent behavior from the cooperation of large numbers of unreliable processing elements that are interconnected in unknown, irregular, and possibly time-varying ways. Amorphous computing is the study of developing and programming such ultrascale computing environments. This paper presents an approach to programming an amorphous computer by spontaneously organizing an unstructured collection of processing elements into cooperative groups and hierarchies. This paper introduces a structure called an AC Hierarchy, which logically organizes processors into groups at different levels of granularity. The AC hierarchy simplifies programming of an amorphous computer through new language abstractions, facilitates the design of efficient and robust algorithms, and simplifies the analysis of their performance. Several example applications are presented that greatly benefit from the AC hierarchy. This paper introduces three algorithms for constructing multiple levels of the hierarchy from an unstructured collection of processors.
</description>
<pubDate>Wed, 01 Oct 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6666</guid>
<dc:date>1997-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Supporting Dynamic Languages on the Java Virtual Machine</title>
<link>https://hdl.handle.net/1721.1/6665</link>
<description>Supporting Dynamic Languages on the Java Virtual Machine
Shivers, Olin
In this note, I propose two extensions to the Java virtual machine (or VM) to allow dynamic languages such as Dylan, Scheme and Smalltalk to be efficiently implemented on the VM. These extensions do not affect the performance of pure Java programs on the machine. The first extension allows for efficient encoding of dynamic data; the second allows for efficient encoding of language-specific computational elements.
</description>
<pubDate>Mon, 01 Apr 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6665</guid>
<dc:date>1996-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition of Surface Reflectance Properties from a Single Image under Unknown Real-World Illumination</title>
<link>https://hdl.handle.net/1721.1/6664</link>
<description>Recognition of Surface Reflectance Properties from a Single Image under Unknown Real-World Illumination
Dror, Ron O.; Adelson, Edward H.; Willsky, Alan S.
This paper describes a machine vision system that classifies reflectance properties of surfaces such as metal, plastic, or paper, under unknown real-world illumination. We demonstrate performance of our algorithm for surfaces of arbitrary geometry. Reflectance estimation under arbitrary omnidirectional illumination proves highly underconstrained. Our reflectance estimation algorithm succeeds by learning relationships between surface reflectance and certain statistics computed from an observed image, which depend on statistical regularities in the spatial structure of real-world illumination. Although the algorithm assumes known geometry, its statistical nature makes it robust to inaccurate geometry estimates.
</description>
<pubDate>Sun, 21 Oct 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6664</guid>
<dc:date>2001-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>How do Humans Determine Reflectance Properties under Unknown Illumination?</title>
<link>https://hdl.handle.net/1721.1/6663</link>
<description>How do Humans Determine Reflectance Properties under Unknown Illumination?
Fleming, Roland W.; Dror, Ron O.; Adelson, Edward H.
Under normal viewing conditions, humans find it easy to distinguish between objects made out of different materials such as plastic, metal, or paper. Untextured materials such as these have different surface reflectance properties, including lightness and gloss. With single isolated images and unknown illumination conditions, the task of estimating surface reflectance is highly underconstrained, because many combinations of reflection and illumination are consistent with a given image. In order to work out how humans estimate surface reflectance properties, we asked subjects to match the appearance of isolated spheres taken out of their original contexts. We found that subjects were able to perform the task accurately and reliably without contextual information to specify the illumination. The spheres were rendered under a variety of artificial illuminations, such as a single point light source, and a number of photographically-captured real-world illuminations from both indoor and outdoor scenes. Subjects performed more accurately for stimuli viewed under real-world patterns of illumination than under artificial illuminations, suggesting that subjects use stored assumptions about the regularities of real-world illuminations to solve the ill-posed problem.
</description>
<pubDate>Sun, 21 Oct 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6663</guid>
<dc:date>2001-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Type-omega DPLs</title>
<link>https://hdl.handle.net/1721.1/6662</link>
<description>Type-omega DPLs
Arkoudas, Konstantine
Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs  ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax  that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation.   Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed  as user-defined methods, which are "proof recipes" that take arguments  and dynamically perform appropriate deductions. Methods arise naturally  via parametric abstraction over type-alpha proofs. In that light, the  evaluation of a method call can be viewed as a computation that carries  out a type-alpha deduction. The type-alpha proof "unwound" by such a method  call is called the "certificate" of the call. Certificates can be checked  by exceptionally simple type-alpha interpreters, and thus they are useful  whenever we wish to minimize our trusted base.   Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities,  in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in  a style that is disciplined enough to ensure soundness yet fluid enough  to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules.   
We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.
</description>
<pubDate>Tue, 16 Oct 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6662</guid>
<dc:date>2001-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Type-alpha DPLs</title>
<link>https://hdl.handle.net/1721.1/6661</link>
<description>Type-alpha DPLs
Arkoudas, Konstantine
This paper introduces Denotational Proof Languages (DPLs). DPLs are languages for presenting, discovering, and checking formal proofs. In particular, in this paper we discuss type-alpha DPLs---a simple class of DPLs for which termination is guaranteed and proof checking can be performed in time linear in the size of the proof. Type-alpha DPLs allow for lucid proof presentation and for efficient proof checking, but not for proof search. Type-omega DPLs allow for search as well as simple presentation and checking, but termination is no longer guaranteed and proof checking may diverge. We do not study type-omega DPLs here. We start by listing some common characteristics of DPLs. We then illustrate with a particularly simple example: a toy type-alpha DPL called PAR, for deducing parities. We present the abstract syntax of PAR, followed by two different kinds of formal semantics: evaluation and denotational. We then relate the two semantics and show how proof checking becomes tantamount to evaluation. We proceed to develop the proof theory of PAR, formulating and studying certain key notions such as observational equivalence that pervade all DPLs. We then present NDL, a type-alpha DPL for classical zero-order natural deduction. Our presentation of NDL mirrors that of PAR, showing how every basic concept that was introduced in PAR resurfaces in NDL. We present sample proofs of several well-known tautologies of propositional logic that demonstrate our thesis that DPL proofs are readable, writable, and concise. Next we contrast DPLs to typed logics based on the Curry-Howard isomorphism, and discuss the distinction between pure and augmented DPLs. Finally we consider the issue of implementing DPLs, presenting an implementation of PAR in SML and one in Athena, and end with some concluding remarks.
</description>
<pubDate>Fri, 05 Oct 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6661</guid>
<dc:date>2001-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Explanation of Big "P" as of March 20, 1959</title>
<link>https://hdl.handle.net/1721.1/6660</link>
<description>Explanation of Big "P" as of March 20, 1959
Russell, S. R.
ERROR is a routine to provide a common location for all routines. Its calling sequence is: SXD SERROR,4 TSX SERROR+1,4 The above is normally followed immediately by up to 20 registers of BCD remarks terminated by a word of 1's. This may be left out, however. ERROR prints out the remark, if any, the location of the TSX that entered ERROR, restores the console except for the AC overflow, and transfers to the user's error routine specified by the calling sequence of SETUP.
</description>
<pubDate>Sun, 01 Mar 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6660</guid>
<dc:date>1959-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Object-Independent Modes of Variation with Feature Flow Fields</title>
<link>https://hdl.handle.net/1721.1/6659</link>
<description>Learning Object-Independent Modes of Variation with Feature Flow Fields
Miller, Erik G.; Tieu, Kinh; Stauffer, Chris P.
We present a unifying framework in which "object-independent" modes of variation are learned from continuous-time data such as video sequences. These modes of variation can be used as "generators" to produce a manifold of images of a new object from a single  example of that object.   We develop the framework in the context of a well-known example: analyzing the modes of spatial deformations of a scene under camera movement. Our method learns a close approximation to the standard affine deformations that are expected from the geometry of the situation, and  does so in a completely unsupervised (i.e. ignorant of the geometry of the situation) fashion. We stress that it is learning a "parameterization", not just the parameter values, of the data. We then demonstrate how we have used the same framework to derive a novel  data-driven model of joint color change in images due to common lighting variations. The model is superior to previous models of color change in describing non-linear color changes due to lighting.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6659</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Range Segmentation Using Visibility Constraints</title>
<link>https://hdl.handle.net/1721.1/6658</link>
<description>Range Segmentation Using Visibility Constraints
Taycher, Leonid; Darrell, Trevor
Visibility constraints can aid the segmentation of foreground objects observed with multiple range images. In our approach, points are defined as foreground if they can be determined to occlude some empty space in the scene. We present an efficient algorithm to estimate foreground points in each range view using explicit epipolar search. In cases where the background pattern is stationary, we show how visibility constraints from other views can generate virtual background values at points with no valid depth in the primary view. We demonstrate the performance of both algorithms for detecting people in indoor office environments.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6658</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gait Dynamics for Recognition and Classification</title>
<link>https://hdl.handle.net/1721.1/6657</link>
<description>Gait Dynamics for Recognition and Classification
Lee, Lily
This paper describes a representation of the dynamics of human walking action for the purpose of person identification and classification by gait appearance. Our gait representation is based on simple features such as moments extracted from video silhouettes of human walking motion. We claim that our gait dynamics representation is rich enough for the task of recognition and classification. The use of our feature representation is demonstrated in the task of person recognition from video sequences of orthogonal views of people walking. We demonstrate the accuracy of recognition on gait video sequences collected over different days and times, and under varying lighting environments. In addition, preliminary results are shown on gender classification using our gait dynamics features.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6657</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Reflectance Estimation and Natural Illumination Statistics</title>
<link>https://hdl.handle.net/1721.1/6656</link>
<description>Surface Reflectance Estimation and Natural Illumination Statistics
Dror, Ron O.; Adelson, Edward H.; Willsky, Alan S.
Humans recognize optical reflectance properties of surfaces such as metal, plastic, or paper from a single image without knowledge of illumination. We develop a machine vision system to perform similar recognition tasks automatically. Reflectance estimation under unknown, arbitrary illumination proves highly underconstrained due to the variety of potential illumination distributions and surface reflectance properties. We have found that the spatial structure of real-world illumination possesses some of the statistical regularities observed in the natural image statistics literature. A human or computer vision system may be able to exploit this prior information to determine the most likely surface reflectance given an observed image. We develop an algorithm for reflectance classification under unknown real-world illumination, which learns relationships between surface reflectance and certain features (statistics) computed from a single observed image. We also develop an automatic feature selection method.
</description>
<pubDate>Sat, 01 Sep 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6656</guid>
<dc:date>2001-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Synthesis of Stable Grasps in the Plane</title>
<link>https://hdl.handle.net/1721.1/6655</link>
<description>The Synthesis of Stable Grasps in the Plane
Nguyen, Van-Duc
This paper addresses the problem of synthesizing stable grasps on arbitrary planar polygons. Each finger is a virtual spring whose stiffness and compression can be programmed. The contacts between the finger tips and the object are point contacts without friction. We prove that all force-closure grasps can be made stable, and it costs O(n) time to synthesize a set of n virtual springs such that a given force-closure grasp is stable. We can also choose the compliance center and the stiffness matrix of the grasp, and so choose the compliant behavior of the grasped object about its equilibrium. The planning and execution of grasps and assembly operations become easier and less sensitive to errors.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6655</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Model for the Acquisition and Use of Phonological Knowledge</title>
<link>https://hdl.handle.net/1721.1/6654</link>
<description>A Computational Model for the Acquisition and Use of Phonological Knowledge
Yip, Kenneth; Sussman, Gerald Jay
Does knowledge of language consist of symbolic rules? How do children learn and use their linguistic knowledge? To elucidate these questions, we present a computational model that acquires phonological knowledge from a corpus of common English nouns and verbs. In our model the phonological knowledge is encapsulated as boolean constraints operating on classical linguistic representations of speech sounds in terms of distinctive features. The learning algorithm compiles a corpus of words into increasingly sophisticated constraints. The algorithm is incremental, greedy, and fast. It yields one-shot learning of phonological constraints from a few examples. Our system exhibits behavior similar to that of young children learning phonological knowledge. As a bonus the constraints can be interpreted as classical linguistic rules. The computational model can be implemented by a surprisingly simple hardware mechanism. Our mechanism also sheds light on a fundamental AI question: How are signals related to symbols?
</description>
<pubDate>Fri, 01 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6654</guid>
<dc:date>1996-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computing Upper and Lower Bounds on Likelihoods in Intractable Networks</title>
<link>https://hdl.handle.net/1721.1/6653</link>
<description>Computing Upper and Lower Bounds on Likelihoods in Intractable Networks
Jaakkola, Tommi S.; Jordan, Michael I.
We present techniques for computing upper and lower bounds on the likelihoods of partial instantiations of variables in sigmoid and noisy-OR networks. The bounds determine confidence intervals for the desired likelihoods and become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the obtained bounds by numerical experiments.
</description>
<pubDate>Fri, 01 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6653</guid>
<dc:date>1996-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mean Field Theory for Sigmoid Belief Networks</title>
<link>https://hdl.handle.net/1721.1/6652</link>
<description>Mean Field Theory for Sigmoid Belief Networks
Saul, Lawrence K.; Jaakkola, Tommi; Jordan, Michael I.
We develop a mean field theory for sigmoid  belief networks based on ideas from  statistical mechanics. Our mean field theory  provides a tractable approximation to the true  probability distribution in these networks; it  also yields a lower bound on the likelihood of  evidence. We demonstrate the utility of this  framework on a benchmark problem in  statistical pattern recognition -- the  classification of handwritten digits.
</description>
<pubDate>Thu, 01 Aug 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6652</guid>
<dc:date>1996-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Fine Motion by Markov Mixtures of Experts</title>
<link>https://hdl.handle.net/1721.1/6651</link>
<description>Learning Fine Motion by Markov Mixtures of Experts
Meila, Marina; Jordan, Michael I.
Compliant control is a standard method for performing fine manipulation tasks, like grasping and assembly, but it requires estimation of the state of contact between the robot arm and the objects involved. Here we present a method to learn a model of the movement from measured data. The method requires little or no prior knowledge and the resulting model explicitly estimates the state of contact. The current state of contact is viewed as the hidden state variable of a discrete HMM. The control-dependent transition probabilities between states are modeled as parametrized functions of the measurements. We show that their parameters can be estimated from measurements concurrently with the estimation of the parameters of the movement in each state of contact. The learning algorithm is a variant of the EM procedure. The E step is computed exactly; solving the M step exactly would require solving a set of coupled nonlinear algebraic equations in the parameters. Instead, gradient ascent is used to produce an increase in likelihood.
</description>
<pubDate>Wed, 01 Nov 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6651</guid>
<dc:date>1995-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Three-Dimensional Interpretation of a Class of Simple Line-Drawings</title>
<link>https://hdl.handle.net/1721.1/6650</link>
<description>The Three-Dimensional Interpretation of a Class of Simple Line-Drawings
Marill, Thomas
We provide a theory of the three-dimensional interpretation of a class of line-drawings called p-images, which are interpreted by the human vision system as parallelepipeds ("boxes"). Despite their simplicity, p-images raise a number of interesting vision questions: * Why are p-images seen as three-dimensional objects? Why not just as flat images? * What are the dimensions and pose of the perceived objects? * Why are some p-images interpreted as rectangular boxes, while others are seen as skewed, even though there is no obvious distinction between the images? * When p-images are rotated in three dimensions, why are the image-sequences perceived as distorting objects---even though structure-from-motion would predict that rigid objects would be seen? * Why are some three-dimensional parallelepipeds seen as radically different when viewed from different viewpoints? We show that these and related questions can be answered with the help of a single mathematical result and an associated perceptual principle. An interesting special case arises when there are right angles in the p-image. This case represents a singularity in the equations and is mystifying from the vision point of view. It would seem that (at least in this case) the vision system does not follow the ordinary rules of geometry but operates in accordance with other (and as yet unknown) principles.
</description>
<pubDate>Sun, 01 Oct 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6650</guid>
<dc:date>1995-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Attention in Binocular Rivalry as Revealed Through Optokinetic Nystagmus</title>
<link>https://hdl.handle.net/1721.1/6649</link>
<description>The Role of Attention in Binocular Rivalry as Revealed Through Optokinetic Nystagmus
Leopold, D.A.; Fitzgibbons, J.C.; Logothetis, N.K.
When stimuli presented to the two eyes differ considerably, stable binocular fusion fails, and the subjective percept alternates between the two monocular images, a phenomenon known as binocular rivalry. The influence of attention over this perceptual switching has long been studied, and although there is evidence that attention can affect the alternation rate, its role in the overall dynamics of the rivalry process remains unclear. The present study investigated the relationship between the attention paid to the rivalry stimulus, and the dynamics of the perceptual alternations. Specifically, the temporal course of binocular rivalry was studied as the subjects performed difficult nonvisual and visual concurrent tasks, directing their attention away from the rivalry stimulus. Periods of complete perceptual dominance were compared for the attended condition, where the subjects reported perceptual changes, and the unattended condition, where one of the simultaneous tasks was performed. During both the attended and unattended conditions, phases of rivalry dominance were obtained by analyzing the subject's optokinetic nystagmus recorded by an electrooculogram, where the polarity of the nystagmus served as an objective indicator of the perceived direction of motion. In all cases, the presence of a difficult concurrent task had little or no effect on the statistics of the alternations, as judged by two classic tests of rivalry, although the overall alternation rate showed a small but significant increase with the concurrent task. It is concluded that the statistical patterns of rivalry alternations are not governed by attentional shifts or decision-making on the part of the subject.
</description>
<pubDate>Wed, 01 Nov 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6649</guid>
<dc:date>1995-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Physiology of Bistable Percepts</title>
<link>https://hdl.handle.net/1721.1/6648</link>
<description>On the Physiology of Bistable Percepts
Logothetis, N.K.; Leopold, D.A.
Binocular rivalry refers to the alternating perceptions experienced when two dissimilar patterns are stereoscopically viewed. To study the neural mechanism that underlies such competitive interactions, single cells were recorded in the visual areas V1, V2, and V4, while monkeys reported the perceived orientation of rivaling sinusoidal grating patterns. A number of neurons in all areas showed alternating periods of excitation and inhibition that correlated with the perceptual dominance and suppression of the cell's preferred orientation. The remaining population of cells were not influenced by whether or not the optimal stimulus orientation was perceptually suppressed. Response modulation during rivalry was not correlated with cell attributes such as monocularity, binocularity, or disparity tuning. These results suggest that the awareness of a visual pattern during binocular rivalry arises through interactions between neurons at different levels of visual pathways, and that the site of suppression is unlikely to correspond to a particular visual area, as often hypothesized on the basis of psychophysical observations. The cell-types of modulating neurons and their overwhelming preponderance in higher rather than in early visual areas also suggest -- together with earlier psychophysical evidence -- the possibility of a common mechanism underlying rivalry as well as other bistable percepts, such as those experienced with ambiguous figures.
</description>
<pubDate>Wed, 01 Nov 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6648</guid>
<dc:date>1995-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Minimizing Statistical Bias with Queries</title>
<link>https://hdl.handle.net/1721.1/6647</link>
<description>Minimizing Statistical Bias with Queries
Cohn, David A.
I describe an exploration criterion that  attempts to minimize the error of a learner by  minimizing its estimated squared bias. I  describe experiments with locally-weighted  regression on two simple kinematics  problems, and observe that this "bias-only"  approach outperforms the more common  "variance-only" exploration approach, even in  the presence of noise.
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6647</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Network Charge-Oriented MOS Transistor Model</title>
<link>https://hdl.handle.net/1721.1/6646</link>
<description>A Network Charge-Oriented MOS Transistor Model
Katzenelson, Jacob; Unikovski, Aharon
The MOS transistor physical model as described in [3] is presented here as a network model. The goal is to obtain an accurate model, suitable for simulation, free from certain problems reported in the literature [13], and conceptually as simple as possible. To achieve this goal the original model had to be extended and modified. The paper presents the derivation of the network model from physical equations, including the corrections which are required for simulation and which compensate for simplifications introduced in the original physical model. Our intrinsic MOS model consists of three nonlinear voltage-controlled capacitors and a dependent current source. The charges of the capacitors and the current of the current source are functions of the voltages $V_{gs}$, $V_{bs}$, and $V_{ds}$. The complete model consists of the intrinsic model plus the parasitics. The apparent simplicity of the model is a result of hiding information in the characteristics of the nonlinear components. The resulting network model has been checked by simulation and analysis. It is shown that the network model is suitable for simulation: it is defined for any value of the voltages; the functions involved are continuous and satisfy Lipschitz conditions with no jumps at region boundaries; derivatives have been computed symbolically and are available for use by the Newton-Raphson method. The model's functions can be measured from the terminals. It is also shown that small channel effects can be included in the model. Higher frequency effects can be modeled by using a network consisting of several sections of the basic lumped model. Future plans include a detailed comparison of the network model with models such as SPICE level 3 and a comparison of the multi-section higher frequency model with experiments.
</description>
<pubDate>Tue, 01 Aug 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6646</guid>
<dc:date>1995-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting Salient Curves from Images: An Analysis of the Saliency Network</title>
<link>https://hdl.handle.net/1721.1/6645</link>
<description>Extracting Salient Curves from Images: An Analysis of the Saliency Network
Alter, T.D.; Basri, Ronen
The Saliency Network proposed by Shashua  and Ullman is a well-known approach to the  problem of extracting salient curves from  images while performing gap completion.  This paper analyzes the Saliency Network.  The Saliency Network is attractive for several  reasons. First, the network generally prefers  long and smooth curves over short or wiggly  ones. While computing saliencies, the  network also fills in gaps with smooth  completions and tolerates noise. Finally, the  network is locally connected, and its size is  proportional to the size of the image.  Nevertheless, our analysis reveals certain  weaknesses with the method. In particular,  we show cases in which the most salient  element does not lie on the perceptually most  salient curve. Furthermore, in some cases the  saliency measure changes its preferences  when curves are scaled uniformly. Also, we  show that for certain fragmented curves the  measure prefers large gaps over a few small  gaps of the same total size. In addition, we  analyze the time complexity required by the  method. We show that the number of steps  required for convergence in serial  implementations is quadratic in the size of the  network, and in parallel implementations is  linear in the size of the network. We discuss  problems due to coarse sampling of the  range of possible orientations. We show that  with proper sampling the complexity of the  network becomes cubic in the size of the  network. Finally, we consider the possibility of  using the Saliency Network for grouping. We  show that the Saliency Network recovers the  most salient curve efficiently, but it has  problems with identifying any salient curve  other than the most salient one.
</description>
<pubDate>Tue, 01 Aug 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6645</guid>
<dc:date>1995-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Template Matching: Matched Spatial Filters and Beyond</title>
<link>https://hdl.handle.net/1721.1/6644</link>
<description>Template Matching: Matched Spatial Filters and Beyond
Brunelli, Roberto; Poggio, Tomaso
Template matching by means of cross-correlation is common practice in pattern recognition. However, its sensitivity to deformations of the pattern and the broad and unsharp peaks it produces are significant drawbacks. This paper reviews some results on how these shortcomings can be removed. Several techniques (Matched Spatial Filters, Synthetic Discriminant Functions, Principal Components Projections and Reconstruction Residuals) are reviewed and compared on a common task: locating eyes in a database of faces. New variants are also proposed and compared: least squares Discriminant Functions and the combined use of projections on eigenfunctions and the corresponding reconstruction residuals. Finally, approximation networks are introduced in an attempt to improve filter design by the introduction of nonlinearity.
</description>
<pubDate>Sun, 01 Oct 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6644</guid>
<dc:date>1995-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>MIT SchMUSE: Class-Based Remote Delegation in a Capricious Distributed Environment</title>
<link>https://hdl.handle.net/1721.1/6643</link>
<description>MIT SchMUSE: Class-Based Remote Delegation in a Capricious Distributed Environment
Blair, Michael R.; Cohen, Natalya; LaMacchia, David M.; Zuzga, Brian K.
MIT SchMUSE (pronounced "shmooz") is a concurrent, distributed, delegation-based object-oriented interactive environment with persistent storage. It is designed to run in a "capricious" network environment, where servers can migrate from site to site and can regularly become unavailable. Our design introduces a new form of unique identifiers called "globally unique tickets" that provide globally unique time/space stamps for objects and classes without being location specific. Object location is achieved by a distributed hierarchical lazy lookup mechanism that we call "realm resolution." We also introduce a novel mechanism called "message deferral" for enhanced reliability in the face of remote delegation. We conclude with a comparison to related work and a projection of future work on MIT SchMUSE.
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6643</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three Cuts for Accelerated Interval Propagation</title>
<link>https://hdl.handle.net/1721.1/6642</link>
<description>Three Cuts for Accelerated Interval Propagation
McAllester, D.; Van Hentenryck, P.; Kapur, T.
This paper addresses the problem of nonlinear multivariate root finding. In an earlier paper we described a system called Newton which finds roots of systems of nonlinear equations using refinements of interval methods. The refinements are inspired by AI constraint propagation techniques. Newton is competitive with continuation methods on most benchmarks and can handle a variety of cases that are infeasible for continuation methods. This paper presents three "cuts" which we believe capture the essential theoretical ideas behind the success of Newton. This paper describes the cuts in a concise and abstract manner which, we believe, makes the theoretical content of our work more apparent. Any implementation will need to adopt some heuristic control mechanism. Heuristic control of the cuts is only briefly discussed here.
</description>
<pubDate>Mon, 01 May 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6642</guid>
<dc:date>1995-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vectorizing Face Images by Interpreting Shape and Texture Computations</title>
<link>https://hdl.handle.net/1721.1/6641</link>
<description>Vectorizing Face Images by Interpreting Shape and Texture Computations
Beymer, David
The correspondence problem in computer vision is basically a matching task between two or more sets of features. In this paper, we introduce a vectorized image representation, which is a feature-based representation where correspondence has been established with respect to a reference image. This representation has two components: (1) shape, or (x, y) feature locations, and (2) texture, defined as the image grey levels mapped onto the standard reference image. This paper explores an automatic technique for "vectorizing" face images. Our face vectorizer alternates back and forth between computation steps for shape and texture, and a key idea is to structure the two computations so that each one uses the output of the other. A hierarchical coarse-to-fine implementation is discussed, and applications are presented to the problems of facial feature detection and registration of two arbitrary faces.
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6641</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Face Recognition from One Example View</title>
<link>https://hdl.handle.net/1721.1/6640</link>
<description>Face Recognition from One Example View
Beymer, David; Poggio, Tomaso
If we are provided a face database with only  one example view per person, is it possible to  recognize new views of them under a variety of  different poses, especially views rotated in  depth from the original example view? We  investigate using prior knowledge about faces  plus each single example view to generate  virtual views of each person, or views of the  face as seen from different poses. Prior  knowledge of faces is represented in an  example-based way, using 2D views of a  prototype face seen rotating in depth. The  synthesized virtual views are evaluated as  example views in a view-based approach to  pose-invariant face recognition. They are  shown to improve the recognition rate over the  scenario where only the single real view is  used.
</description>
<pubDate>Fri, 01 Sep 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6640</guid>
<dc:date>1995-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison Between Subsonic Flow Simulation and Physical Measurements of Flue Pipes</title>
<link>https://hdl.handle.net/1721.1/6639</link>
<description>Comparison Between Subsonic Flow Simulation and Physical Measurements of Flue Pipes
Skordos, Panayotis; Sussman, Gerald Jay
Direct simulations of wind musical instruments using the compressible Navier Stokes equations have recently become possible through the use of parallel computing and through developments in numerical methods. As a first demonstration, the flow of air and the generation of musical tones inside a soprano recorder are simulated numerically. In addition, physical measurements are made of the acoustic signal generated by the recorder at different blowing speeds. The comparison between simulated and physically measured behavior is encouraging and points towards ways of improving the simulations.
</description>
<pubDate>Sat, 01 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6639</guid>
<dc:date>1995-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aeroacoustics on Non-Dedicated Workstations</title>
<link>https://hdl.handle.net/1721.1/6638</link>
<description>Aeroacoustics on Non-Dedicated Workstations
Skordos, Panayotis A.
The simulation of subsonic aeroacoustic problems such as the flow-generated sound of wind instruments is well suited for parallel computing on a cluster of non-dedicated workstations. Simulations are demonstrated which employ 20 non-dedicated Hewlett-Packard workstations (HP9000/715) and achieve performance on this problem comparable to that of a 64-node CM-5 dedicated supercomputer with vector units. The success of the present approach depends on the low communication requirements of the problem (low communication-to-computation ratio), which arise from the coarse-grain decomposition of the problem and the use of local-interaction methods. Many important problems may be suitable for this type of parallel computing, including computer vision, circuit simulation, and other subsonic flow problems.
</description>
<pubDate>Sat, 01 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6638</guid>
<dc:date>1995-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Reference Frames for Object Recognition: Tuning for Rotations in Depth</title>
<link>https://hdl.handle.net/1721.1/6637</link>
<description>Spatial Reference Frames for Object Recognition: Tuning for Rotations in Depth
Logothetis, N.K.; Pauls, J.; Poggio, Tomaso A
The inferior temporal cortex (IT) of monkeys is thought to play an essential role in visual object recognition. Inferotemporal neurons are known to respond to complex visual stimuli, including patterns like faces, hands, or other body parts. What is the role of such neurons in object recognition? The present study examines this question in combined psychophysical and electrophysiological experiments, in which monkeys learned to classify and recognize novel visual 3D objects. A population of neurons in IT was found to respond selectively to such objects that the monkeys had recently learned to recognize. A large majority of these cells discharged maximally for one view of the object, while their response fell off gradually as the object was rotated away from the neuron's preferred view. Most neurons also exhibited orientation-dependent responses during view-plane rotations. Some neurons were found tuned around two views of the same object, while a very small number of cells responded in a view-invariant manner. For five different objects that were extensively used during the training of the animals, and for which behavioral performance became view-independent, multiple cells were found that were tuned around different views of the same object. No selective responses were ever encountered for views that the animal systematically failed to recognize. The results of our experiments suggest that neurons in this area can develop a complex receptive field organization as a consequence of extensive training in the discrimination and recognition of objects. Simple geometric features did not appear to account for the neurons' selective responses. These findings support the idea that a population of neurons -- each tuned to a different object aspect, and each showing a certain degree of invariance to image transformations -- may, as an assembly, encode complex 3D objects. 
In such a  system, several neurons may be active for any  given vantage point, with a single unit acting  like a blurred template for a limited  neighborhood of a single view.
</description>
<pubDate>Wed, 01 Mar 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6637</guid>
<dc:date>1995-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The M-Machine Multicomputer</title>
<link>https://hdl.handle.net/1721.1/6636</link>
<description>The M-Machine Multicomputer
Fillo, Marco; Keckler, Stephen W.; Dally, William J.; Carter, Nicholas P.; Chang, Andrew; Gurevich, Yevgeny; Lee, Whay S.
The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 12 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user-accessible message-passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms maximize both single-thread performance and overall system throughput.
</description>
<pubDate>Wed, 01 Mar 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6636</guid>
<dc:date>1995-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linear Object Classes and Image Synthesis from a Single Example Image</title>
<link>https://hdl.handle.net/1721.1/6635</link>
<description>Linear Object Classes and Image Synthesis from a Single Example Image
Vetter, Thomas; Poggio, Tomaso
The need to generate new views of a 3D object from a single real image arises in several fields, including graphics and object recognition. While the traditional approach relies on the use of 3D models, we have recently introduced techniques that are applicable under restricted conditions but simpler. The approach exploits image transformations that are specific to the relevant object class and learnable from example views of other "prototypical" objects of the same class. In this paper, we introduce such a new technique by extending the notion of linear class first proposed by Poggio and Vetter. For linear object classes it is shown that linear transformations can be learned exactly from a basis set of 2D prototypical views. We demonstrate the approach on artificial objects and then show preliminary evidence that the technique can effectively "rotate" high-resolution face images from a single 2D view.
</description>
<pubDate>Wed, 01 Mar 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6635</guid>
<dc:date>1995-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on Zipf's Law, Natural Languages, and Noncoding DNA Regions</title>
<link>https://hdl.handle.net/1721.1/6634</link>
<description>A Note on Zipf's Law, Natural Languages, and Noncoding DNA Regions
Niyogi, Partha; Berwick, Robert C.
In Phys. Rev. Letters (73:2), Mantegna et al. conclude on the basis of Zipf rank frequency data that noncoding DNA sequence regions are more like natural languages than coding regions. We argue on the contrary that an empirical fit to Zipf's "law" cannot be used as a criterion for similarity to natural languages. Although DNA is presumably an "organized system of signs" in Mandelbrot's (1961) sense, the observation of statistical features of the sort presented in the Mantegna et al. paper does not shed light on the similarity between DNA's "grammar" and natural language grammars, just as the observation of exact Zipf-like behavior cannot distinguish between the underlying processes of tossing an M-sided die or a finite-state branching process.
</description>
<pubDate>Wed, 01 Mar 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6634</guid>
<dc:date>1995-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimation of Pose and Illuminant Direction for Face Processing</title>
<link>https://hdl.handle.net/1721.1/6633</link>
<description>Estimation of Pose and Illuminant Direction for Face Processing
Brunelli, Roberto
In this paper three problems related to the analysis of facial images are addressed: the illuminant direction, the compensation of illumination effects and, finally, the recovery of the pose of the face, restricted to in-depth rotations. The solutions proposed for these problems rely on the use of computer graphics techniques to provide images of faces under different illumination and pose, starting from a database of frontal views under frontal illumination.
</description>
<pubDate>Tue, 01 Nov 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6633</guid>
<dc:date>1994-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards an Example-Based Image Compression Architecture for Video-Conferencing</title>
<link>https://hdl.handle.net/1721.1/6632</link>
<description>Towards an Example-Based Image Compression Architecture for Video-Conferencing
Toelg, Sebastian; Poggio, Tomaso
This paper consists of two major parts. First, we present the outline of a simple approach to a very-low-bandwidth video-conferencing system relying on an example-based hierarchical image compression scheme. In particular, we discuss the use of example images as a model, the number of required examples, faces as a class of semi-rigid objects, a hierarchical model based on decomposition into different time-scales, and the decomposition of face images into patches of interest. In the second part, we present several algorithms for image processing and animation as well as experimental evaluations. Among the original contributions of this paper is an automatic algorithm for pose estimation and normalization. We also review and compare different algorithms for finding the nearest neighbors in a database for a new input, as well as a generalized algorithm for blending patches of interest in order to synthesize new images. Finally, we outline the possible integration of several algorithms to illustrate a simple model-based video-conference system.
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6632</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Network Exploration Using Optimal Experiment Design</title>
<link>https://hdl.handle.net/1721.1/6631</link>
<description>Neural Network Exploration Using Optimal Experiment Design
Cohn, David A.
We consider the question "How should one  act when the only goal is to learn as much as  possible?" Building on the theoretical results  of Fedorov [1972] and MacKay [1992], we  apply techniques from Optimal Experiment  Design (OED) to guide the query/action  selection of a neural network learner. We  demonstrate that these techniques allow the  learner to minimize its generalization error by  exploring its domain efficiently and  completely. We conclude that, while not a  panacea, OED-based query/action has much  to offer, especially in domains where its high  computational costs can be tolerated.
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6631</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relative Affine Structure: Canonical Model for 3D from 2D Geometry and Applications</title>
<link>https://hdl.handle.net/1721.1/6630</link>
<description>Relative Affine Structure: Canonical Model for 3D from 2D Geometry and Applications
Shashua, Amnon; Navab, Nassir
We propose an affine framework for  perspective views, captured by a single  extremely simple equation based on a viewer-centered invariant we call "relative affine  structure". Via a number of corollaries of our  main results we show that our framework  unifies previous work --- including Euclidean,  projective and affine --- in a natural and  simple way, and introduces new, extremely  simple, algorithms for the tasks of  reconstruction from multiple views,  recognition by alignment, and certain image  coding applications.
</description>
<pubDate>Wed, 01 Jun 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6630</guid>
<dc:date>1994-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partial Evaluation for Scientific Computing: The Supercomputer Toolkit Experience</title>
<link>https://hdl.handle.net/1721.1/6629</link>
<description>Partial Evaluation for Scientific Computing: The Supercomputer Toolkit Experience
Berlin, Andrew; Surati, Rajeev
We describe the key role played by partial  evaluation in the Supercomputer Toolkit, a  parallel computing system for scientific  applications that effectively exploits the vast  amount of parallelism exposed by partial  evaluation. The Supercomputer Toolkit  parallel processor and its associated partial  evaluation-based compiler have been used  extensively by scientists at M.I.T., and have  made possible recent results in astrophysics  showing that the motion of the planets in our  solar system is chaotically unstable.
</description>
<pubDate>Sun, 01 May 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6629</guid>
<dc:date>1994-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Simulation of Subsonic Fluid Dynamics on a Cluster of Workstations</title>
<link>https://hdl.handle.net/1721.1/6628</link>
<description>Parallel Simulation of Subsonic Fluid Dynamics on a Cluster of Workstations
Skordos, Panayotis A.
An effective approach to simulating fluid dynamics on a cluster of non-dedicated workstations is presented. The approach uses local-interaction algorithms, small communication capacity, and automatic migration of parallel processes from busy hosts to free hosts. The approach is well-suited for simulating subsonic flow problems which involve both hydrodynamics and acoustic waves; for example, the flow of air inside wind musical instruments. Typical simulations achieve 80% parallel efficiency (speedup/processors) using 20 HP-Apollo workstations. Detailed measurements of the parallel efficiency of 2D and 3D simulations are presented, and a theoretical model of efficiency is developed which fits the measurements closely. Two numerical methods of fluid dynamics are tested: explicit finite differences and the lattice Boltzmann method.
</description>
<pubDate>Fri, 01 Dec 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6628</guid>
<dc:date>1995-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Piecemeal Learning of an Unknown Environment</title>
<link>https://hdl.handle.net/1721.1/6627</link>
<description>Piecemeal Learning of an Unknown Environment
Betke, Margrit; Rivest, Ronald L.; Singh, Mona
We introduce a new learning problem:  learning a graph by piecemeal search, in  which the learner must return every so often to  its starting point (for refueling, say). We  present two linear-time piecemeal-search  algorithms for learning city-block graphs: grid  graphs with rectangular obstacles.
</description>
<pubDate>Tue, 01 Mar 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6627</guid>
<dc:date>1994-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Viewer-Centered Object Recognition in Monkeys</title>
<link>https://hdl.handle.net/1721.1/6626</link>
<description>Viewer-Centered Object Recognition in Monkeys
Logothetis, N.K.; Pauls, J.; Poggio, Tomaso A
How does the brain recognize three-dimensional objects? We trained monkeys to recognize computer-rendered objects presented from an arbitrarily chosen training view, and subsequently tested their ability to generalize recognition to other views. Our results provide additional evidence in favor of a recognition model that accomplishes view-invariant performance by storing a limited number of object views or templates, together with the capacity to interpolate between the templates (Poggio and Edelman, 1990).
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6626</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>View-Based Models of 3D Object Recognition and Class-Specific Invariances</title>
<link>https://hdl.handle.net/1721.1/6625</link>
<description>View-Based Models of 3D Object Recognition and Class-Specific Invariances
Logothetis, Nikos K.; Vetter, Thomas; Hurlbert, Anya; Poggio, Tomaso
This paper describes the main features of a view-based model of object recognition. The model tries to capture general properties to be expected in a biological architecture for object recognition. The basic module is a regularization network in which each of the hidden units is broadly tuned to a specific view of the object to be recognized.
</description>
<pubDate>Fri, 01 Apr 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6625</guid>
<dc:date>1994-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Relationship Between Generalization Error, Hypothesis Complexity, and Sample Complexity for Radial Basis Functions</title>
<link>https://hdl.handle.net/1721.1/6624</link>
<description>On the Relationship Between Generalization Error, Hypothesis Complexity, and Sample Complexity for Radial Basis Functions
Niyogi, Partha; Girosi, Federico
In this paper, we bound the generalization  error of a class of Radial Basis Function  networks, for certain well defined function  learning tasks, in terms of the number of  parameters and number of examples. We  show that the total generalization error is  partly due to the insufficient representational  capacity of the network (because of its finite  size) and partly due to insufficient information  about the target function (because of finite  number of samples). We make several  observations about generalization error which  are valid irrespective of the approximation  scheme. Our result also sheds light on ways  to choose an appropriate network architecture  for a particular problem.
</description>
<pubDate>Tue, 01 Feb 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6624</guid>
<dc:date>1994-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Object Recognition By Alignment Using Invariant Projections of Planar Surfaces</title>
<link>https://hdl.handle.net/1721.1/6623</link>
<description>Object Recognition By Alignment Using Invariant Projections of Planar Surfaces
Nagao, Kanji; Grimson, W. Eric L.
In order to recognize an object in an image, we must determine the best transformation from the object model to the image. In this paper, we show that for features from coplanar surfaces which undergo linear transformations in space, there exist projections invariant to the surface motions up to rotations in the image field. To use this property, we propose a new alignment approach to object recognition based on centroid alignment of corresponding feature groups. This method uses only a single pair of 2D model and data views. Experimental results show the robustness of the proposed method against perturbations of feature positions.
</description>
<pubDate>Tue, 01 Feb 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6623</guid>
<dc:date>1994-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Garbage Collection is Fast, But a Stack is Faster</title>
<link>https://hdl.handle.net/1721.1/6622</link>
<description>Garbage Collection is Fast, But a Stack is Faster
Miller, James S.; Rozas, Guillermo J.
Prompted by claims that garbage collection can outperform stack allocation when sufficient physical memory is available, we present a careful analysis and set of cross-architecture measurements comparing these two approaches for the implementation of continuation (procedure call) frames. When the frames are allocated on a heap they require additional space, increase the amount of data transferred between memory and registers, and, on current architectures, require more instructions. We find that stack allocation of continuation frames outperforms heap allocation in some cases by almost a factor of three. Thus, stacks remain an important implementation technique for procedure calls, even in the presence of an efficient, compacting garbage collector and large amounts of memory.
</description>
<pubDate>Tue, 01 Mar 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6622</guid>
<dc:date>1994-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Face Recognition Under Varying Pose</title>
<link>https://hdl.handle.net/1721.1/6621</link>
<description>Face Recognition Under Varying Pose
Beymer, David J.
While researchers in computer vision and pattern recognition have worked on automatic techniques for recognizing faces for the last 20 years, most systems specialize on frontal views of the face. We present a face recognizer that works under varying pose, the difficult part of which is to handle face rotations in depth. Building on successful template-based systems, our basic approach is to represent faces with templates from multiple model views that cover different poses from the viewing sphere. Our system has achieved a recognition rate of 98% on a database of 62 people containing 10 testing and 15 modelling views per person.
</description>
<pubDate>Wed, 01 Dec 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6621</guid>
<dc:date>1993-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Convergence Results for the EM Approach to Mixtures of Experts Architectures</title>
<link>https://hdl.handle.net/1721.1/6620</link>
<description>Convergence Results for the EM Approach to Mixtures of Experts Architectures
Jordan, Michael I.; Xu, Lei
The Expectation-Maximization (EM) algorithm  is an iterative approach to maximum  likelihood parameter estimation. Jordan and  Jacobs (1993) recently proposed an EM  algorithm for the mixture of experts  architecture of Jacobs, Jordan, Nowlan and  Hinton (1991) and the hierarchical mixture of  experts architecture of Jordan and Jacobs  (1992). They showed empirically that the EM  algorithm for these architectures yields  significantly faster convergence than gradient  ascent. In the current paper we provide a  theoretical analysis of this algorithm. We  show that the algorithm can be regarded as a  variable metric algorithm with its searching  direction having a positive projection on the  gradient of the log likelihood. We also analyze  the convergence of the algorithm and provide  an explicit expression for the convergence  rate. In addition, we describe an acceleration  technique that yields a significant speedup in  simulation experiments.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6620</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Algebraic Functions For Recognition</title>
<link>https://hdl.handle.net/1721.1/6619</link>
<description>Algebraic Functions For Recognition
Shashua, Amnon
In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment --- yielding a direct method that cuts through the computations of camera transformation, scene structure and epipolar geometry. The proof of the central result may be of further interest as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with epipolar intersection and the linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in re-projection tasks.
</description>
<pubDate>Sat, 01 Jan 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6619</guid>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formalizing Triggers: A Learning Model for Finite Spaces</title>
<link>https://hdl.handle.net/1721.1/6618</link>
<description>Formalizing Triggers: A Learning Model for Finite Spaces
Niyogi, Partha; Berwick, Robert C.
In a recent seminal paper, Gibson and Wexler (1993) take important steps toward formalizing the notion of language learning in a (finite) space whose grammars are characterized by a finite number of parameters. They introduce the Triggering Learning Algorithm (TLA) and show that even in finite space convergence may be a problem due to local maxima. In this paper we explicitly formalize learning in finite parameter space as a Markov structure whose states are parameter settings. We show that this captures the dynamics of TLA completely and allows us to explicitly compute the rates of convergence for TLA and other variants of TLA, e.g. random walk. Also included in the paper are a corrected version of GW's central convergence proof, a list of "problem states" in addition to local maxima, and batch and PAC-style learning bounds for the model.
</description>
<pubDate>Mon, 01 Nov 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6618</guid>
<dc:date>1993-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pattern Motion Perception: Feature Tracking or Integration of Component Motions?</title>
<link>https://hdl.handle.net/1721.1/6617</link>
<description>Pattern Motion Perception: Feature Tracking or Integration of Component Motions?
Sinha, Pawan
A key question regarding primate visual motion perception is whether the motion of 2D patterns is recovered by tracking distinctive localizable features [Lorenceau and Gorea, 1989; Rubin and Hochstein, 1992] or by integrating ambiguous local motion estimates [Adelson and Movshon, 1982; Wilson and Kim, 1992]. For a two-grating plaid pattern, this translates to either tracking the grating intersections or appropriately combining the motion estimates for each grating. Since both component and feature information are simultaneously available in any plaid pattern made of contrast-defined gratings, it is unclear how to determine which of the two schemes is actually used to recover the plaid's motion. To address this problem, we have designed a plaid pattern made with subjective, rather than contrast-defined, gratings. The distinguishing characteristic of such a plaid pattern is that it contains no contrast-defined intersections that may be tracked. We find that notwithstanding the absence of such features, observers can accurately recover the pattern velocity. Additionally, we show that the hypothesis of tracking "illusory features" to estimate pattern motion does not stand up to experimental test. These results present direct evidence that 2D pattern motion is recovered by integrating component motions rather than by tracking localized features. The localized features, we suggest, are used primarily as providers of grouping information -- indicating which component motion signals to integrate and which to exclude.
</description>
<pubDate>Sat, 01 Oct 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6617</guid>
<dc:date>1994-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting the Parallelism Exposed by Partial Evaluation</title>
<link>https://hdl.handle.net/1721.1/6616</link>
<description>Exploiting the Parallelism Exposed by Partial Evaluation
Berlin, Andrew A.; Surati, Rajeev J.
We describe an approach to parallel compilation that seeks to harness the vast amount of fine-grain parallelism that is exposed through partial evaluation of numerically-intensive scientific programs. We have constructed a compiler for the Supercomputer Toolkit parallel processor that uses partial evaluation to break down data abstractions and program structure, producing huge basic blocks that contain large amounts of fine-grain parallelism. We show that this fine-grain parallelism can be effectively utilized even on coarse-grain parallel architectures by selectively grouping operations together so as to adjust the parallelism grain-size to match the inter-processor communication capabilities of the target architecture.
</description>
<pubDate>Thu, 01 Apr 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6616</guid>
<dc:date>1993-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analytical Representation of Contours</title>
<link>https://hdl.handle.net/1721.1/6615</link>
<description>Analytical Representation of Contours
Chaney, Ronald D.
The interpretation and recognition of noisy contours, such as silhouettes, have proven to be difficult. One obstacle to the solution of these problems has been the lack of a robust representation for contours. The contour is represented by a set of pairwise tangent circular arcs. The advantage of such an approach is that mathematical properties such as orientation and curvature are explicitly represented. We introduce a smoothing criterion for the contour that optimizes the tradeoff between the complexity of the contour and proximity of the data points. The complexity measure is the number of extrema of curvature present in the contour. The smoothing criterion leads us to a true scale-space for contours. We describe the computation of the contour representation as well as the computation of relevant properties of the contour. We consider the potential application of the representation, the smoothing paradigm, and the scale-space to contour interpretation and recognition.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6615</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition by Prototypes</title>
<link>https://hdl.handle.net/1721.1/6614</link>
<description>Recognition by Prototypes
Basri, Ronen
A scheme for recognizing 3D objects from single 2D images is introduced. The scheme proceeds in two stages. In the first stage, the categorization stage, the image is compared to prototype objects. For each prototype, the view that most resembles the image is recovered, and, if the view is found to be similar to the image, the class identity of the object is determined. In the second stage, the identification stage, the observed object is compared to the individual models of its class, where classes are expected to contain objects with relatively similar shapes. For each model, a view that matches the image is sought. If such a view is found, the object's specific identity is determined. The advantage of categorizing the object before it is identified is twofold. First, the image is compared to a smaller number of models, since only models that belong to the object's class need to be considered. Second, the cost of comparing the image to each model in a class is very low, because correspondence is computed once for the whole class. More specifically, the correspondence and object pose computed in the categorization stage to align the prototype with the image are reused in the identification stage to align the individual models with the image. As a result, identification is reduced to a series of simple template comparisons. The paper concludes with an algorithm for constructing optimal prototypes for classes of objects.
</description>
<pubDate>Tue, 01 Dec 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6614</guid>
<dc:date>1992-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Extensions of the K-Means Algorithm for Image Segmentation and Pattern Classification</title>
<link>https://hdl.handle.net/1721.1/6613</link>
<description>Some Extensions of the K-Means Algorithm for Image Segmentation and Pattern Classification
Marroquin, Jose L.; Girosi, Federico
In this paper we present some extensions to the k-means algorithm for vector quantization that permit its efficient use in image segmentation and pattern classification tasks. It is shown that by introducing state variables that correspond to certain statistics of the dynamic behavior of the algorithm, it is possible to find the representative centers of the lower dimensional manifolds that define the boundaries between classes, for clouds of multi-dimensional, multi-class data; this permits one, for example, to find class boundaries directly from sparse data (e.g., in image segmentation tasks) or to efficiently place centers for pattern classification (e.g., with local Gaussian classifiers). The same state variables can be used to define algorithms for determining adaptively the optimal number of centers for clouds of data with space-varying density. Some examples of the application of these extensions are also given.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6613</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Tracking</title>
<link>https://hdl.handle.net/1721.1/6612</link>
<description>Visual Tracking
Taalebinezhaad, M. Ali
A typical robot vision scenario might involve a vehicle moving with an unknown 3D motion (translation and rotation) while taking intensity images of an arbitrary environment. This paper describes the theory and implementation issues of tracking any desired point in the environment. This method is performed completely in software without any need to mechanically move the camera relative to the vehicle. This tracking technique is simple and inexpensive. Furthermore, it does not use either optical flow or feature correspondence. Instead, the spatio-temporal gradients of the input intensity images are used directly. The experimental results presented support the idea of tracking in software. The final result is a sequence of tracked images where the desired point is kept stationary in the images independent of the nature of the relative motion. Finally, the quality of these tracked images is examined using spatio-temporal gradient maps.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6612</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>3D Pose from Three Corresponding Points Under Weak-Perspective Projection</title>
<link>https://hdl.handle.net/1721.1/6611</link>
<description>3D Pose from Three Corresponding Points Under Weak-Perspective Projection
Alter, T.D.
Model-based object recognition commonly involves using a minimal set of matched model and image points to compute the pose of the model in image coordinates. Furthermore, recognition systems often rely on the "weak-perspective" imaging model in place of the perspective imaging model. This paper discusses computing the pose of a model from three corresponding points under weak-perspective projection. A new solution to the problem is proposed which, like previous solutions, involves solving a biquadratic equation. Here the biquadratic is motivated geometrically and its solutions, comprising an actual and a false solution, are interpreted graphically. The final equations take a new form, which leads to a simple expression for the image position of any unmatched model point.
</description>
<pubDate>Wed, 01 Jul 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6611</guid>
<dc:date>1992-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Localization and Positioning Using Combinations of Model Views</title>
<link>https://hdl.handle.net/1721.1/6610</link>
<description>Localization and Positioning Using Combinations of Model Views
Rivlin, Ehud; Basri, Ronen
A method for localization and positioning in an indoor environment is presented. The method is based on representing the scene as a set of 2D views and predicting the appearances of novel views by linear combinations of the model views. The method is accurate under weak perspective projection. Analysis of this projection as well as experimental results demonstrate that in many cases it is sufficient to describe the scene accurately. When the weak perspective approximation is invalid, an iterative solution to account for the perspective distortions can be employed. A simple algorithm for repositioning, the task of returning to a previously visited position defined by a single view, is derived from this method.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6610</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Flow From 1D Correlation: Application to a Simple Time-To-Crash Detector</title>
<link>https://hdl.handle.net/1721.1/6609</link>
<description>Optical Flow From 1D Correlation: Application to a Simple Time-To-Crash Detector
Ancona, Nicola; Poggio, Tomaso
In the first part of this paper we show that a new technique exploiting 1D correlation of 2D or even 1D patches between successive frames may be sufficient to compute a satisfactory estimate of the optical flow field. The algorithm is well-suited to VLSI implementations. The sparse measurements provided by the technique can be used to compute qualitative properties of the flow for a number of different visual tasks. In particular, the second part of the paper shows how to combine our 1D correlation technique with a scheme for detecting expansion or rotation ([5]) in a simple algorithm which also suggests interesting biological implications. The algorithm provides a rough estimate of time-to-crash. It was tested on real image sequences. We show its performance and compare the results to previous approaches.
</description>
<pubDate>Fri, 01 Oct 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6609</guid>
<dc:date>1993-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Distance Metric Between 3D Models and 3D Images for Recognition and Classification</title>
<link>https://hdl.handle.net/1721.1/6608</link>
<description>Distance Metric Between 3D Models and 3D Images for Recognition and Classification
Basri, Ronen; Weinshall, Daphna
Similarity measurements between 3D objects and 2D images are useful for the tasks of object recognition and classification. We distinguish between two types of similarity metrics: metrics computed in image-space (image metrics) and metrics computed in transformation-space (transformation metrics). Existing methods typically use image metrics, comparing the image with the nearest view of the object. An example of such a measure is the Euclidean distance between feature points in the image and corresponding points in the nearest view. (Computing this measure is equivalent to solving the exterior orientation calibration problem.) In this paper we introduce a different type of metric: transformation metrics. These metrics penalize for the deformations applied to the object to produce the observed image. We present a transformation metric that optimally penalizes for "affine deformations" under weak-perspective. A closed-form solution, together with the nearest view according to this metric, is derived. The metric is shown to be equivalent to the Euclidean image metric, in the sense that they bound each other from both above and below. For the Euclidean image metric we offer a sub-optimal closed-form solution and an iterative scheme to compute the exact solution.
</description>
<pubDate>Wed, 01 Jul 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6608</guid>
<dc:date>1992-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intellectual Property in Computing: (How) Should Software Be Protected? An Industry Perspective</title>
<link>https://hdl.handle.net/1721.1/6607</link>
<description>Intellectual Property in Computing: (How) Should Software Be Protected? An Industry Perspective
Ernst, Michael D.
The future of the software industry is today being shaped in the courtroom. Most discussions of intellectual property to date, however, have been framed as debates about how the existing law --- promulgated long before the computer revolution --- should be applied to software. This memo is a transcript of a panel discussion on what forms of legal protection should apply to software to best serve both the industry and society in general. After addressing that question, we can consider what laws would bring this about.
</description>
<pubDate>Fri, 01 May 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6607</guid>
<dc:date>1992-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Nonlinear Planning</title>
<link>https://hdl.handle.net/1721.1/6588</link>
<description>Systematic Nonlinear Planning
McAllester, David; Rosenblatt, David
This paper presents a simple, sound, complete, and systematic algorithm for domain independent STRIPS planning. Simplicity is achieved by starting with a ground procedure and then applying a general, independently verifiable lifting transformation. Previous planners have been designed directly as lifted procedures. Our ground procedure is a ground version of Tate's NONLIN procedure. In Tate's procedure one is not required to determine whether a prerequisite of a step in an unfinished plan is guaranteed to hold in all linearizations. This allows Tate's procedure to avoid the use of Chapman's modal truth criterion. Systematicity is the property that the same plan, or partial plan, is never examined more than once. Systematicity is achieved through a simple modification of Tate's procedure.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6588</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Motivated Action Theory: A Formal Theory of Causal Reasoning</title>
<link>https://hdl.handle.net/1721.1/6587</link>
<description>Motivated Action Theory: A Formal Theory of Causal Reasoning
Stein, Lynn Andrea; Morgenstern, Leora
When we reason about change over time, causation provides an implicit preference: we prefer sequences of situations in which one situation leads causally to the next, rather than sequences in which one situation follows another at random and without causal connections. In this paper, we explore the problem of temporal reasoning --- reasoning about change over time --- and the crucial role that causation plays in our intuitions. We examine previous approaches to temporal reasoning, and their shortcomings, in light of this analysis. We propose a new system for causal reasoning, motivated action theory, which builds upon causation as a crucial preference criterion. Motivated action theory solves the traditional problems of both forward and backward reasoning, and additionally provides a basis for a new theory of explanation.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6587</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calculation of Blocking Probabilities in Multistage Interconnection Networks with Redundant Paths</title>
<link>https://hdl.handle.net/1721.1/6586</link>
<description>Calculation of Blocking Probabilities in Multistage Interconnection Networks with Redundant Paths
Sobalvarro, Patrick G.
The blocking probability of a network is a common measure of its performance. There exist means of quickly calculating the blocking probabilities of Banyan networks; however, because Banyan networks have no redundant paths, they are not inherently fault-tolerant, and so their use in large-scale multiprocessors is problematic. Unfortunately, the addition of multiple paths between message sources and sinks in a network complicates the calculation of blocking probabilities. A methodology for exact calculation of blocking probabilities for small networks with redundant paths is presented here, with some discussion of its potential use in approximating blocking probabilities for large networks with redundant paths.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6586</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Perceptual Learning in Visual Hyperacuity</title>
<link>https://hdl.handle.net/1721.1/6585</link>
<description>Fast Perceptual Learning in Visual Hyperacuity
Poggio, Tomaso; Fahle, Manfred; Edelman, Shimon
In many different spatial discrimination tasks, such as in determining the sign of the offset in a vernier stimulus, the human visual system exhibits hyperacuity-level performance by evaluating spatial relations with the precision of a fraction of a photoreceptor's diameter. We propose that this impressive performance depends in part on a fast learning process that uses relatively few examples and occurs at an early processing stage in the visual pathway. We show that this hypothesis is plausible by demonstrating that it is possible to synthesize, from a small number of examples of a given task, a simple (HyperBF) network that attains the required performance level. We then verify with psychophysical experiments some of the key predictions of our conjecture. In particular, we show that fast stimulus-specific learning indeed takes place in the human visual system and that this learning does not transfer between two slightly different hyperacuity tasks.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6585</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Autonomous Motion Vision</title>
<link>https://hdl.handle.net/1721.1/6584</link>
<description>Towards Autonomous Motion Vision
Taalebinezhaad, M. Ali
Earlier, we introduced a direct method called fixation for the recovery of shape and motion in the general case. The method uses neither feature correspondence nor optical flow. Instead, it directly employs the spatiotemporal gradients of image brightness. This work reports the experimental results of applying some of our fixation algorithms to a sequence of real images where the motion is a combination of translation and rotation. These results show that parameters such as the fixation patch size have crucial effects on the estimation of some motion parameters. Some of the critical issues involved in the implementation of our autonomous motion vision system are also discussed here. Among those are the criteria for automatic choice of an optimum size for the fixation patch, and an appropriate location for the fixation point, which result in good estimates for important motion parameters. Finally, a calibration method is described for identifying the real location of the rotation axis in imaging systems.
</description>
<pubDate>Wed, 01 Apr 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6584</guid>
<dc:date>1992-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>On The Uniqueness of Correspondence Under Orthographic and Perspective Projections</title>
<link>https://hdl.handle.net/1721.1/6583</link>
<description>On The Uniqueness of Correspondence Under Orthographic and Perspective Projections
Basri, Ronen
The task of shape recovery from a motion sequence requires the establishment of correspondence between image points. The two processes, the matching process and the shape recovery one, are traditionally viewed as independent. Yet, information obtained during the process of shape recovery can be used to guide the matching process. This paper discusses the mutual relationship between the two processes. The paper is divided into two parts. In the first part we review the constraints imposed on the correspondence by rigid transformations and extend them to objects that undergo general affine (non rigid) transformation (including stretch and shear), as well as to rigid objects with smooth surfaces. In all these cases corresponding points lie along epipolar lines, and these lines can be recovered from a small set of corresponding points. In the second part of the paper we discuss the potential use of epipolar lines in the matching process. We present an algorithm that recovers the correspondence from three contour images. The algorithm was implemented and used to construct object models for recognition. In addition we discuss how epipolar lines can be used to solve the aperture problem.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6583</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Alignment of Objects With Smooth Surfaces: Error Analysis of the Curvature Method</title>
<link>https://hdl.handle.net/1721.1/6582</link>
<description>The Alignment of Objects With Smooth Surfaces: Error Analysis of the Curvature Method
Basri, Ronen
The recognition of objects with smooth bounding surfaces from their contour images is considerably more complicated than that of objects with sharp edges, since in the former case the set of object points that generates the silhouette contours changes from one view to another. The "curvature method", developed by Basri and Ullman [1988], provides a method to approximate the appearance of such objects from different viewpoints. In this paper we analyze the curvature method. We apply the method to ellipsoidal objects and compute analytically the error obtained for different rotations of the objects. The error depends on the exact shape of the ellipsoid (namely, the relative lengths of its axes), and it increases as the ellipsoid becomes "deep" (elongated in the Z-direction). We show that the errors are usually small, and that, in general, a small number of models is required to predict the appearance of an ellipsoid from all possible views. Finally, we show experimentally that the curvature method applies as well to objects with hyperbolic surface patches.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6582</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Model and Control of an Artificial Muscle Based on Contractile Polymers</title>
<link>https://hdl.handle.net/1721.1/6581</link>
<description>Dynamic Model and Control of an Artificial Muscle Based on Contractile Polymers
Brock, David L.
A dynamic model and control system of an artificial muscle is presented. The artificial muscle is based on a contractile polymer gel which undergoes abrupt volume changes in response to variations in external conditions. The device uses an acid-base reaction to directly convert chemical to mechanical energy. A nonlinear sliding mode control system is proposed to track desired joint trajectories of a single link controlled by two antagonist muscles. Both the model and controller were implemented and produced acceptable tracking performance at 2Hz.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6581</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Review of Artificial Muscle Based on Contractile Polymers</title>
<link>https://hdl.handle.net/1721.1/6580</link>
<description>Review of Artificial Muscle Based on Contractile Polymers
Brock, David L.
An artificial muscle with strength and speed equal to that of a human muscle may soon be possible. Polymer gels exhibit abrupt volume changes in response to variations in their external conditions -- shrinking or swelling up to 1000 times their original volume. Through the conversion of chemical or electrical energy into mechanical work, a number of devices have already been constructed which produce forces up to 100N/cm2 and contraction rates on the order of a second. Though the promise of an artificial muscle is real, many fundamental physical and engineering questions remain before the extent or limit of these devices is known.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6580</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maxwell's Demon, Rectifiers, and the Second Law: Computer Simulation of Smoluchowski's Trapdoor</title>
<link>https://hdl.handle.net/1721.1/6579</link>
<description>Maxwell's Demon, Rectifiers, and the Second Law: Computer Simulation of Smoluchowski's Trapdoor
Skordos, P.A.; Zurek, W.H.
We have simulated numerically an automated Maxwell's demon inspired by Smoluchowski's ideas of 1912. Two gas chambers of equal area are connected via an opening that is covered by a trapdoor. The trapdoor can open to the left but not to the right, and is intended to rectify naturally occurring variations in density between the two chambers. Our results confirm that though the trapdoor behaves as a rectifier when large density differences are imposed by external means, it cannot extract useful work from the thermal motion of the molecules when left on its own.
</description>
<pubDate>Sun, 01 Sep 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6579</guid>
<dc:date>1991-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Scale Vector-Ridge-Detection for Perceptual Organization Without Edges</title>
<link>https://hdl.handle.net/1721.1/6578</link>
<description>Multi-Scale Vector-Ridge-Detection for Perceptual Organization Without Edges
Subirana-Vilanova, J. Brian; Sung, Kah Kay
We present a novel ridge detector that finds ridges on vector fields. It is designed to automatically find the right scale of a ridge even in the presence of noise, multiple steps and narrow valleys. One of the key features of such a ridge detector is that it has a zero response at discontinuities. The ridge detector can be applied to scalar and vector quantities such as color. We also present a parallel perceptual organization scheme based on such a ridge detector that works without edges; in addition to perceptual groups, the scheme computes potential focus of attention points at which to direct future processing. The relation to human perception and several theoretical findings supporting the scheme are presented. We also show results of a Connection Machine implementation of the scheme for perceptual organization (without edges) using color.
</description>
<pubDate>Tue, 01 Dec 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6578</guid>
<dc:date>1992-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resolving Ambiguity in Nonmonotonic Inheritance Hierarchies</title>
<link>https://hdl.handle.net/1721.1/6577</link>
<description>Resolving Ambiguity in Nonmonotonic Inheritance Hierarchies
Stein, Lynn Andrea
This paper describes a theory of inheritance theories. We present an original theory of inheritance in nonmonotonic hierarchies. The structures on which this theory is based delineate a framework that subsumes most inheritance theories in the literature, providing a new foundation for inheritance. * Our path-based theory is sound and complete w.r.t. a direct model-theoretic semantics. * Both the credulous and the skeptical conclusions of this theory are polynomial-time computable. * We prove that true skeptical inheritance is not contained in the language of path-based inheritance. Because our techniques are modular w.r.t. the definition of specificity, they generalize to provide a unified framework for a broad class of inheritance theories. By describing multiple inheritance theories in the same "language" of credulous extensions, we make principled comparisons rather than the ad-hoc examination of specific examples that makes up most of the comparative inheritance work.
</description>
<pubDate>Thu, 01 Aug 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6577</guid>
<dc:date>1991-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recovering Three-Dimensional Structure from Motion with Surface Reconstruction</title>
<link>https://hdl.handle.net/1721.1/6576</link>
<description>Recovering Three-Dimensional Structure from Motion with Surface Reconstruction
Hildreth, Ellen C.; Ando, Hiroshi; Anderson, Richard; Treue, Stefan
We address the computational role that the construction of a complete surface representation may play in the recovery of 3-D structure from motion. We present a model that combines a feature-based structure-from-motion algorithm with smooth surface interpolation. This model can represent multiple surfaces in a given viewing direction, incorporates surface constraints from object boundaries, and groups image features using their 2-D image motion. Computer simulations relate the model's behavior to perceptual observations. In a companion paper, we discuss further perceptual experiments regarding the role of surface reconstruction in the human recovery of 3-D structure from motion.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6576</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Matching of Doubly Ambiguous Stereograms</title>
<link>https://hdl.handle.net/1721.1/6575</link>
<description>The Matching of Doubly Ambiguous Stereograms
Weinshall, Daphna
I have previously described psychophysical experiments that involved the perception of many transparent layers, corresponding to multiple matching, in doubly ambiguous random dot stereograms. Additional experiments are described in the first part of this paper. In one experiment, subjects were required to report the density of dots on each transparent layer. In another experiment, the minimal density of dots on each layer, which is required for the subjects to perceive it as a distinct transparent layer, was measured. The difficulties encountered by stereo matching algorithms, when applied to doubly ambiguous stereograms, are described in the second part of this paper. Algorithms that can be modified to perform consistently with human perception, and the constraints imposed on their parameters by human perception, are discussed.
</description>
<pubDate>Mon, 01 Jul 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6575</guid>
<dc:date>1991-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sequence-Seeking and Counter Streams: A Model for Information Processing in the Cortex</title>
<link>https://hdl.handle.net/1721.1/6574</link>
<description>Sequence-Seeking and Counter Streams: A Model for Information Processing in the Cortex
Ullman, Shimon
This paper presents a model for the general flow of information in the neocortex. The basic process, called "sequence-seeking," is a search for a sequence of mappings or transformations, linking source and target representations. The search is bi-directional, "bottom-up" as well as "top-down," and it explores in parallel a large number of alternative sequences. This operation is implemented in a structure termed "counter streams," in which multiple sequences are explored along two separate, complementary pathways which seek to meet. The first part of the paper discusses the general sequence-seeking scheme and a number of related processes, such as the learning of successful sequences, context effects, and the use of "express lines" and partial matches. The second part discusses biological implications of the model in terms of connections within and between cortical areas. The model is compared with existing data, and a number of new predictions are proposed.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6574</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principles, Opportunism and Seeing in Design: A Computational Approach</title>
<link>https://hdl.handle.net/1721.1/6573</link>
<description>Principles, Opportunism and Seeing in Design: A Computational Approach
Papazian, Pegor
This thesis introduces elements of a theory of design activity and a computational framework for developing design systems. The theory stresses the opportunistic nature of designing and the complementary roles of focus and distraction, the interdependence of evaluation and generation, the multiplicity of ways of seeing over the history of a design session versus the exclusivity of a given way of seeing over an arbitrarily short period, and the incommensurability of criteria used to evaluate a design. The thesis argues for a principle-based rather than rule-based approach to designing. The Discursive Generator is presented as a computational framework for implementing specific design systems, and a simple system for arranging blocks according to a set of formal principles is developed by way of illustration. Both shape grammars and constraint-based systems are used to contrast current trends in design automation with the discursive approach advocated in the thesis. The Discursive Generator is shown to have some important properties lacking in other types of systems, such as dynamism, robustness and the ability to deal with partial designs. When studied in terms of a search metaphor, the Discursive Generator is shown to exhibit behavior which is radically different from some traditional search techniques, and to avoid some of the well-known difficulties associated with them.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6573</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Design of a Maglev Controller in State Space</title>
<link>https://hdl.handle.net/1721.1/6572</link>
<description>Automatic Design of a Maglev Controller in State Space
Zhao, Feng; Thornton, Richard
We describe the automatic synthesis of a global nonlinear controller for stabilizing a magnetic levitation system. The synthesized control system can stabilize the maglev vehicle with large initial displacements from an equilibrium, and possesses a much larger operating region than the classical linear feedback design for the same system. The controller is automatically synthesized by a suite of computational tools. This work demonstrates that the difficult control synthesis task can be automated, using programs that actively exploit knowledge of nonlinear dynamics and state space and combine powerful numerical and symbolic computations with spatial-reasoning techniques.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6572</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Limitations of Non Model-Based Recognition Schemes</title>
<link>https://hdl.handle.net/1721.1/6571</link>
<description>Limitations of Non Model-Based Recognition Schemes
Moses, Yael; Ullman, Shimon
Different approaches to visual object recognition can be divided into two general classes: model-based vs. non model-based schemes. In this paper we establish some limitations on the class of non model-based recognition schemes. We show that every function that is invariant to viewing position of all objects is the trivial (constant) function. It follows that every consistent recognition scheme for recognizing all 3-D objects must in general be model-based. The result is extended to recognition schemes that are imperfect (allowed to make mistakes) or restricted to certain classes of objects.
</description>
<pubDate>Wed, 01 May 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6571</guid>
<dc:date>1991-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recovering Heading for Visually-Guided Navigation</title>
<link>https://hdl.handle.net/1721.1/6570</link>
<description>Recovering Heading for Visually-Guided Navigation
Hildreth, Ellen C.
We present a model for recovering the direction of heading of an observer who is moving relative to a scene that may contain self-moving objects. The model builds upon an algorithm proposed by Rieger and Lawton (1985), which is based on earlier work by Longuet-Higgins and Prazdny (1981). The algorithm uses velocity differences computed in regions of high depth variation to estimate the location of the focus of expansion, which indicates the observer's heading direction. We relate the behavior of the proposed model to psychophysical observations regarding the ability of human observers to judge their heading direction, and show how the model can cope with self-moving objects in the environment. We also discuss this model in the broader context of a navigational system that performs tasks requiring rapid sensing and response through the interaction of simple task-specific routines.
</description>
<pubDate>Sat, 01 Jun 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6570</guid>
<dc:date>1991-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligence Without Reason</title>
<link>https://hdl.handle.net/1721.1/6569</link>
<description>Intelligence Without Reason
Brooks, Rodney A.
Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim; that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.
</description>
<pubDate>Mon, 01 Apr 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6569</guid>
<dc:date>1991-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Connection Between GRBF and MLP</title>
<link>https://hdl.handle.net/1721.1/6566</link>
<description>A Connection Between GRBF and MLP
Maruyama, Minoru; Girosi, Federico; Poggio, Tomaso
Both multilayer perceptrons (MLP) and Generalized Radial Basis Functions (GRBF) have good approximation properties, theoretically and experimentally. Are they related? The main point of this paper is to show that for normalized inputs, multilayer perceptron networks are radial function networks (albeit with a non-standard radial function). This provides an interpretation of the weights w as centers t of the radial function network, and therefore as equivalent to templates. This insight may be useful for practical applications, including better initialization procedures for MLP. In the remainder of the paper, we discuss the relation between the radial functions that correspond to the sigmoid for normalized inputs and well-behaved radial basis functions, such as the Gaussian. In particular, we observe that the radial function associated with the sigmoid is an activation function that is a good approximation to Gaussian basis functions for a range of values of the bias parameter. The implication is that an MLP network can always simulate a Gaussian GRBF network (with the same number of units but fewer parameters); the converse is true only for certain values of the bias parameter. Numerical experiments indicate that this constraint is not always satisfied in practice by MLP networks trained with backpropagation. Multiscale GRBF networks, on the other hand, can approximate MLP networks with a similar number of parameters.
</description>
<pubDate>Wed, 01 Apr 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6566</guid>
<dc:date>1992-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Green Theorems and Qualitative Properties of the Optical Flow</title>
<link>https://hdl.handle.net/1721.1/6565</link>
<description>Green Theorems and Qualitative Properties of the Optical Flow
Poggio, Tomaso; Verri, Alessandro; Torre, Vincenzo
How can one compute qualitative properties of the optical flow, such as expansion or rotation, in a way which is robust and invariant to the position of the focus of expansion or the center of rotation? We suggest a particularly simple algorithm, well-suited to VLSI implementations, that exploits well-known relations between the integral and differential properties of vector fields and their linear behaviour near singularities.
</description>
<pubDate>Mon, 01 Apr 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6565</guid>
<dc:date>1991-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Models of Noise and Robust Estimates</title>
<link>https://hdl.handle.net/1721.1/6564</link>
<description>Models of Noise and Robust Estimates
Girosi, Federico
Given n noisy observations g_i of the same quantity f, it is common practice to estimate f by minimizing the function ∑_{i=1}^{n} (g_i − f)^2. From a statistical point of view this corresponds to computing the maximum likelihood estimate, under the assumption of Gaussian noise. However, it is well known that this choice leads to results that are very sensitive to the presence of outliers in the data. For this reason it has been proposed to minimize functions of the form ∑_{i=1}^{n} V(g_i − f), where V is a function that increases less rapidly than the square. Several choices for V have been proposed and successfully used to obtain "robust" estimates. In this paper we show that, for a class of functions V, using these robust estimators corresponds to assuming that the data are corrupted by Gaussian noise whose variance fluctuates according to some given probability distribution that uniquely determines the shape of V.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6564</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Correspondence Based Algorithm for 2D and 3D Model Based Recognition</title>
<link>https://hdl.handle.net/1721.1/6563</link>
<description>An Efficient Correspondence Based Algorithm for 2D and 3D Model Based Recognition
Breuel, Thomas M.
A polynomial time algorithm (pruned correspondence search, PCS) with good average case performance for solving a wide class of geometric maximal matching problems, including the problem of recognizing 3D objects from a single 2D image, is presented. Efficient verification algorithms, based on a linear representation of location constraints, are given for the case of affine transformations among vector spaces and for the case of rigid 2D and 3D transformations with scale. Some preliminary experiments suggest that PCS is a practical algorithm. Its similarity to existing correspondence based algorithms means that a number of existing techniques for speedup can be incorporated into PCS to improve its performance.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6563</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Supporting Reuse and Evolution in Software Design</title>
<link>https://hdl.handle.net/1721.1/6562</link>
<description>Supporting Reuse and Evolution in Software Design
Tan, Yang Meng
Program design is an area of programming that can benefit significantly from machine-mediated assistance. A proposed tool, called the Design Apprentice (DA), can assist a programmer in the detailed design of programs. The DA supports software reuse through a library of commonly-used algorithmic fragments, or cliches, that codifies standard programming. The cliche library enables the programmer to describe the design of a program concisely. The DA can detect some kinds of inconsistencies and incompleteness in program descriptions. It automates detailed design by automatically selecting appropriate algorithms and data structures. It supports the evolution of program designs by keeping explicit dependencies between the design decisions made. These capabilities of the DA are underlaid by a model of programming, called programming by successive elaboration, which mimics the way programmers interact. Programming by successive elaboration is characterized by the use of breadth-first exposition of layered program descriptions and the successive modifications of descriptions. A scenario is presented to illustrate the concept of the DA. Techniques for automating the detailed design process are described. A framework is given in which designs are incrementally augmented and modified by a succession of design steps. A library of cliches and a suite of design steps needed to support the scenario are presented.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6562</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Nondeterministic Minimization Algorithm</title>
<link>https://hdl.handle.net/1721.1/6560</link>
<description>A Nondeterministic Minimization Algorithm
Caprile, Bruno; Girosi, Federico
The problem of minimizing a multivariate function is recurrent in many disciplines, such as Physics, Mathematics, Engineering and, of course, Computer Science. In this paper we describe a simple nondeterministic algorithm which is based on the idea of adaptive noise, and that proved to be particularly effective in the minimization of a class of multivariate, continuous-valued, smooth functions, associated with some recent extension of regularization theory by Poggio and Girosi (1990). Results obtained by using this method and a more traditional gradient descent technique are also compared.
</description>
<pubDate>Sat, 01 Sep 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6560</guid>
<dc:date>1990-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Line Kinematics for Whole-Arm Manipulation</title>
<link>https://hdl.handle.net/1721.1/6561</link>
<description>Line Kinematics for Whole-Arm Manipulation
Eberman, Brian; Brock, David L.
A Whole-Arm Manipulator uses every surface to both sense and interact with the environment. To facilitate the analysis and control of a Whole-Arm Manipulator, line geometry is used to describe the location and trajectory of the links. Applications of line kinematics are described and implemented on the MIT Whole-Arm Manipulator (WAM-1).
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6561</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory of How the Brain Might Work</title>
<link>https://hdl.handle.net/1721.1/6559</link>
<description>A Theory of How the Brain Might Work
Poggio, Tomaso
I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The model would apply directly to the cortical cells involved in visual face recognition. I will also outline the relation of our theory to existing models of the cerebellum and of motor control. Specific biophysical mechanisms can be readily suggested as part of a basic type of neural circuitry that can learn to approximate multidimensional input-output mappings from sets of examples and that is expected to be replicated in different regions of the brain and across modalities. The main points of the theory are: (1) the brain uses modules for multivariate function approximation as basic components of several of its information processing subsystems; (2) these modules are realized as HyperBF networks (Poggio and Girosi, 1990a,b); (3) HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry. The theory predicts a specific type of population coding that represents an extension of schemes such as look-up tables. I will conclude with some speculations about the trade-off between memory and computation and the evolution of intelligence.
</description>
<pubDate>Sat, 01 Dec 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6559</guid>
<dc:date>1990-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The 1990 AI Fair</title>
<link>https://hdl.handle.net/1721.1/6558</link>
<description>The 1990 AI Fair
Flynn, Anita M.
This year, as the finale to the Artificial Intelligence Laboratory's annual Winter Olympics, the Lab staged an AI Fair, a night devoted to displaying the wide variety of talents and interests within the laboratory. The Fair provided an outlet for creativity and fun in a carnival-like atmosphere. Students organized events from robot boat races to face-recognition vision contests. Research groups came together to make posters and booths explaining their work. The robots rolled down out of the labs, networks were turned over to aerial combat computer games and walls were decorated with posters of zany ideas for the future. Everyone pitched in, and this photograph album is a pictorial account of the fun that night at the AI Fair.
</description>
<pubDate>Wed, 01 Aug 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6558</guid>
<dc:date>1990-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Affine Matching with Bounded Sensor Error: A Study of Geometric Hashing and Alignment</title>
<link>https://hdl.handle.net/1721.1/6557</link>
<description>Affine Matching with Bounded Sensor Error: A Study of Geometric Hashing and Alignment
Grimson, W. Eric L.; Huttenlocher, Daniel P.; Jacobs, David W.
Affine transformations are often used in recognition systems, to approximate the effects of perspective projection. The underlying mathematics is for exact feature data, with no positional uncertainty. In practice, heuristics are added to handle uncertainty. We provide a precise analysis of affine point matching, obtaining an expression for the range of affine-invariant values consistent with bounded uncertainty. This analysis reveals that the range of affine-invariant values depends on the actual x-y positions of the features, i.e. with uncertainty, affine representations are not invariant with respect to the Cartesian coordinate system. We analyze the effect of this on geometric hashing and alignment recognition methods.
</description>
<pubDate>Thu, 01 Aug 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6557</guid>
<dc:date>1991-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Viewpoint-Specific Representations in Three-Dimensional Object Recognition</title>
<link>https://hdl.handle.net/1721.1/6556</link>
<description>Viewpoint-Specific Representations in Three-Dimensional Object Recognition
Edelman, Shimon; Bülthoff, Heinrich H.
We report a series of psychophysical experiments that explore different aspects of the problem of object representation and recognition in human vision. Contrary to the paradigmatic view, which holds that the representations are three-dimensional and object-centered, the results consistently support the notion of view-specific representations that include at most partial depth information. In simulated experiments that involved the same stimuli shown to the human subjects, computational models built around two-dimensional multiple-view representations replicated our main psychophysical results, including patterns of generalization errors and the time course of perceptual learning.
</description>
<pubDate>Wed, 01 Aug 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6556</guid>
<dc:date>1990-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transition Space</title>
<link>https://hdl.handle.net/1721.1/6555</link>
<description>Transition Space
Borchardt, Gary C.
Informal causal descriptions of physical systems abound in sources such as encyclopedias, reports and user's manuals. Yet these descriptions remain largely opaque to computer processing. This paper proposes a representational framework in which such descriptions are viewed as providing partial specifications of paths in a space of possible transitions, or transition space. In this framework, the task of comprehending informal causal descriptions emerges as one of completing the specifications of paths in transition space: filling causal gaps and relating accounts of activity varied by analogy and abstraction. The use of the representation and its operations is illustrated in the context of a simple description concerning rocket propulsion.
</description>
<pubDate>Thu, 01 Nov 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6555</guid>
<dc:date>1990-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Iterate Manual</title>
<link>https://hdl.handle.net/1721.1/6554</link>
<description>The Iterate Manual
Amsterdam, Jonathan
This is the manual for version 1.1 of Iterate, a powerful iteration macro for Common Lisp. Iterate is similar to Loop but provides numerous additional features, is well integrated with Lisp, and is extensible.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6554</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Repairing Learned Knowledge Using Experience</title>
<link>https://hdl.handle.net/1721.1/6553</link>
<description>Repairing Learned Knowledge Using Experience
Winston, Patrick H.; Rao, Satyajit
Explanation-based learning occurs when something useful is retained from an explanation, usually an account of how some particular problem can be solved given a sound theory. Many real-world explanations are not based on sound theory, however, and wrong things may be learned accidentally, as subsequent failures will likely demonstrate. In this paper, we describe ways to isolate the facts that cause failures, ways to explain why those facts cause problems, and ways to repair learning mistakes. In particular, our program learns to distinguish pails from cups after making a few mistakes.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6553</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Olympic Robot Building Manual</title>
<link>https://hdl.handle.net/1721.1/6552</link>
<description>Olympic Robot Building Manual
Flynn, Anita
The 1989 AI Lab Winter Olympics will take a slightly different twist from previous Olympiads. Although there will still be a dozen or so athletic competitions, the annual talent show finale will now be a display not of human talent, but of robot talent. Spurred on by the question, "Why aren't there more robots running around the AI Lab?", Olympic Robot Building is an attempt to teach everyone how to build a robot and get them started. Robot kits will be given out the last week of classes before the Christmas break and teams have until the Robot Talent Show, January 27th, to build a machine that intelligently connects perception to action. There is no constraint on what can be built; participants are free to pick their own problems and solution implementations. As Olympic Robot Building is purposefully a talent show, there is no particular obstacle course to be traversed or specific feat to be demonstrated. The hope is that this format will promote creativity, freedom and imagination. This manual provides a guide to overcoming all the practical problems in building things. What follows are tutorials on the components supplied in the kits: a microprocessor circuit "brain", a variety of sensors and motors, a mechanical building block system, a complete software development environment, some example robots and a few tips on debugging and prototyping. Parts given out in the kits can be used, ignored or supplemented, as the kits are designed primarily to overcome the inertia of getting started. If all goes well, then come February, there should be all kinds of new robots running around the AI Lab!
</description>
<pubDate>Thu, 01 Dec 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6552</guid>
<dc:date>1988-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Behavior Language; User's Guide</title>
<link>https://hdl.handle.net/1721.1/6551</link>
<description>The Behavior Language; User's Guide
Brooks, Rodney A.
The Behavior Language is a rule-based real-time parallel robot programming language originally based on ideas from [Brooks 86], [Connell 89], and [Maes 89]. It compiles into a modified and extended version of the subsumption architecture [Brooks 86] and thus has backends for a number of processors including the Motorola 68000 and 68HC11, the Hitachi 6301, and Common Lisp. Behaviors are groups of rules which are activatable by a number of different schemes. There are no shared data structures across behaviors; instead all communication is by explicit message passing. All rules are assumed to run in parallel and asynchronously. The language includes the earlier notions of inhibition and suppression, along with a number of mechanisms for spreading of activation.
</description>
<pubDate>Sun, 01 Apr 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6551</guid>
<dc:date>1990-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Effect of Indexing on the Complexity of Object Recognition</title>
<link>https://hdl.handle.net/1721.1/6550</link>
<description>The Effect of Indexing on the Complexity of Object Recognition
Grimson, W. Eric L.
Many current recognition systems use constrained search to locate objects in cluttered environments. Previous formal analysis has shown that the expected amount of search is quadratic in the number of model and data features, if all the data is known to come from a single object, but is exponential when spurious data is included. If one can group the data into subsets likely to have come from a single object, then terminating the search once a "good enough" interpretation is found reduces the expected search to cubic. Without successful grouping, terminated search is still exponential. These results apply to finding instances of a known object in the data. In this paper, we turn to the problem of selecting models from a library, and examine the combinatorics of determining that a candidate object is not present in the data. We show that the expected search is again exponential, implying that naïve approaches to indexing are likely to carry an expensive overhead, since an exponential amount of work is needed to weed out each of the incorrect models. The analytic results are shown to be in agreement with empirical data for cluttered object recognition.
</description>
<pubDate>Sun, 01 Apr 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6550</guid>
<dc:date>1990-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fault-Tolerant Design for Multistage Routing Networks</title>
<link>https://hdl.handle.net/1721.1/6549</link>
<description>Fault-Tolerant Design for Multistage Routing Networks
DeHon, Andre; Knight, Tom; Minsky, Marvin
As the size of digital systems increases, the mean time between single component failures diminishes. To avoid component related failures, large computers must be fault-tolerant. In this paper, we focus on methods for achieving a high degree of fault-tolerance in multistage routing networks. We describe a multipath scheme for providing end-to-end fault-tolerance on large networks. The scheme improves routing performance while keeping network latency low. We also describe the novel routing component, RN1, which implements this scheme, showing how it can be the basic building block for fault-tolerant multistage routing networks.
</description>
<pubDate>Sun, 01 Apr 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6549</guid>
<dc:date>1990-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shaping Inputs to Reduce Vibration: A Vector Diagram Approach</title>
<link>https://hdl.handle.net/1721.1/6548</link>
<description>Shaping Inputs to Reduce Vibration: A Vector Diagram Approach
Singhose, William
This paper describes a method for limiting vibration in flexible systems by shaping the system inputs. Unlike most previous attempts at input shaping, this method does not require an extensive system model or lengthy numerical computation; only knowledge of the system's natural frequency and damping ratio is required. The effectiveness of this method when there are errors in the system model is explored and quantified. An algorithm is presented which, given an upper bound on acceptable residual vibration amplitude, determines a shaping strategy that is insensitive to errors in the estimated natural frequency. A procedure for shaping inputs to systems with input constraints is outlined. The shaping method is evaluated by dynamic simulations and hardware experiments.
</description>
<pubDate>Thu, 01 Mar 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6548</guid>
<dc:date>1990-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples</title>
<link>https://hdl.handle.net/1721.1/6530</link>
<description>Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples
Girosi, Federico; Poggio, Tomaso; Caprile, Bruno
Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. The first extension corresponds to dealing with outliers among the sparse data. The second one corresponds to exploiting information about points or regions in the range of the function that are forbidden.
</description>
<pubDate>Sun, 01 Jul 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6530</guid>
<dc:date>1990-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perceptual Organization, Figure-Ground, Attention and Saliency</title>
<link>https://hdl.handle.net/1721.1/6529</link>
<description>Perceptual Organization, Figure-Ground, Attention and Saliency
Subirana-Vilanova, J. Brian; Richards, Whitman
Notions of figure-ground and inside-outside are difficult to define in a computational sense, yet seem intuitively meaningful. We propose that "figure" is an attention-directed region of visual information processing, and has a non-discrete boundary. Associated with "figure" is a coordinate frame and a "frame curve" which helps initiate the shape recognition process by selecting and grouping convex image chunks for later matching-to-model. We show that human perception is biased to see chunks outside the frame as more salient than those inside. Specific tasks, however, can reverse this bias. Near/far, top/bottom and expansion/contraction also behave similarly.
</description>
<pubDate>Thu, 01 Aug 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6529</guid>
<dc:date>1991-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Recognition of Tractability in Inference Relations</title>
<link>https://hdl.handle.net/1721.1/6528</link>
<description>Automatic Recognition of Tractability in Inference Relations
McAllester, David
A procedure is given for recognizing sets of inference rules that generate polynomial time decidable inference relations. The procedure can automatically recognize the tractability of the inference rules underlying congruence closure. The recognition of tractability for that particular rule set constitutes mechanical verification of a theorem originally proved independently by Kozen and Shostak. The procedure is algorithmic, rather than heuristic, and the class of automatically recognizable tractable rule sets can be precisely characterized. A series of examples of rule sets whose tractability is non-trivial, yet machine recognizable, is also given. The technical framework developed here is viewed as a first step toward a general theory of tractable inference relations.
</description>
<pubDate>Thu, 01 Feb 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6528</guid>
<dc:date>1990-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Computation of Vernier Offsets, Curvature and Chevrons in Humans</title>
<link>https://hdl.handle.net/1721.1/6527</link>
<description>Parallel Computation of Vernier Offsets, Curvature and Chevrons in Humans
Fahle, Manfred
A vernier offset is detected at once among straight lines, and reaction times are almost independent of the number of simultaneously presented stimuli (distractors), indicating parallel processing of vernier offsets. Reaction times for identifying a vernier offset to one side among verniers offset to the opposite side increase with the number of distractors, indicating serial processing. Even deviations below a photoreceptor diameter can be detected at once. The visual system thus attains positional accuracy below the photoreceptor diameter simultaneously at different positions. I conclude that deviation from straightness, or change of orientation, is detected in parallel over the visual field. Discontinuities or gradients in orientation may represent an elementary feature of vision.
</description>
<pubDate>Fri, 01 Dec 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6527</guid>
<dc:date>1989-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Consequences of Agreement and Ambiguity in Natural Language</title>
<link>https://hdl.handle.net/1721.1/6526</link>
<description>Computational Consequences of Agreement and Ambiguity in Natural Language
Ristad, Eric Sven; Berwick, Robert C.
The computer science technique of computational complexity analysis can provide powerful insights into the algorithm-neutral analysis of information processing tasks. Here we show that a simple, theory-neutral linguistic model of syntactic agreement and ambiguity demonstrates that natural language parsing may be computationally intractable. Significantly, we show that it may be syntactic features rather than rules that can cause this difficulty. Informally, human languages and the computationally intractable Satisfiability (SAT) problem share two costly computational mechanisms: both enforce agreement among symbols across unbounded distances (Subject-Verb agreement) and both allow ambiguity (is a word a Noun or a Verb?).
</description>
<pubDate>Tue, 01 Nov 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6526</guid>
<dc:date>1988-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grouping For Recognition</title>
<link>https://hdl.handle.net/1721.1/6525</link>
<description>Grouping For Recognition
Jacobs, David W.
This paper presents a new method of grouping edges in order to recognize objects. This grouping method succeeds on images of both two- and three-dimensional objects. So that the recognition system can consider first the collections of edges most likely to lead to the correct recognition of objects, we order groups of edges based on the likelihood that a single object produced them. The grouping module estimates this likelihood using the distance that separates edges and their relative orientation. This ordering greatly reduces the amount of computation required to locate objects and improves the system's robustness to error.
</description>
<pubDate>Wed, 01 Nov 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6525</guid>
<dc:date>1989-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Syntax and First Order Preference</title>
<link>https://hdl.handle.net/1721.1/6524</link>
<description>Natural Language Syntax and First Order Preference
McAllester, David; Givan, Robert
We have argued elsewhere that first order inference can be made more efficient by using non-standard syntax for first order logic. In this paper we show how a fragment of English syntax under Montague semantics provides the foundation of a new inference procedure. This procedure seems more effective than corresponding procedures based on either classical syntax or our previously proposed taxonomic syntax. This observation may provide a functional explanation for some of the syntactic structure of English.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6524</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disparity Gradients and Depth Scaling</title>
<link>https://hdl.handle.net/1721.1/6523</link>
<description>Disparity Gradients and Depth Scaling
Bulthoff, Heinrich; Fahle, Manfred
The binocular perception of shape and depth relations between objects can change considerably if the viewing direction is changed only by a small angle. We explored this effect psychophysically and found a strong depth reduction effect for large disparity gradients. The effect is found to be strongest for horizontally oriented stimuli, and stronger for line stimuli than for points. This depth scaling effect is discussed in a computational framework of stereo based on a Bayesian approach which allows integration of information from different types of matching primitives weighted according to their robustness.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6523</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Bifurcation Interpreter: A Step Towards the Automatic Analysis of Dynamical Systems</title>
<link>https://hdl.handle.net/1721.1/6522</link>
<description>The Bifurcation Interpreter: A Step Towards the Automatic Analysis of Dynamical Systems
Abelson, Harold
The Bifurcation Interpreter is a computer program that autonomously explores the steady-state orbits of one-parameter families of periodically-driven oscillators. To report its findings, the Interpreter generates schematic diagrams and English text descriptions similar to those appearing in the science and engineering research literature. Given a system of equations as input, the Interpreter uses symbolic algebra to automatically generate numerical procedures that simulate the system. The Interpreter incorporates knowledge about dynamical systems theory, which it uses to guide the simulations, to interpret the results, and to minimize the effects of numerical error.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6522</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Comparison of Hardware Implementations for Low-Level Vision Algorithms</title>
<link>https://hdl.handle.net/1721.1/6521</link>
<description>A Comparison of Hardware Implementations for Low-Level Vision Algorithms
Gamble, Ed
Early and intermediate vision algorithms,  such as smoothing and discontinuity  detection, are often implemented on general-purpose serial, and more recently, parallel  computers. Special-purpose hardware  implementations of low-level vision  algorithms may be needed to achieve real-time processing. This memo reviews and  analyzes some hardware implementations of  low-level vision algorithms. Two types of  hardware implementations are considered:  the digital signal processing chips of Ruetz  (and Broderson) and the analog VLSI circuits  of Carver Mead. The advantages and  disadvantages of these two approaches for  producing a general, real-time vision system  are considered.
</description>
<pubDate>Wed, 01 Nov 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6521</guid>
<dc:date>1989-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Descriptive Simulation: Combining Symbolic and Numerical Methods in the Analysis of Chemical Reaction Mechanisms</title>
<link>https://hdl.handle.net/1721.1/6520</link>
<description>Descriptive Simulation: Combining Symbolic and Numerical Methods in the Analysis of Chemical Reaction Mechanisms
Eisenberg, Michael
The Kineticist's Workbench is a computer  program currently under development whose  purpose is to help chemists understand,  analyze, and simplify complex chemical  reaction mechanisms. This paper discusses  one module of the program that numerically  simulates mechanisms and constructs  qualitative descriptions of the simulation  results. These descriptions are given in terms  that are meaningful to the working chemist  (e.g., steady states, stable oscillations, and  so on); and the descriptions (as well as the  data structures used to construct them) are  accessible as input to other programs.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6520</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Vision: A Critical Review</title>
<link>https://hdl.handle.net/1721.1/6519</link>
<description>Computational Vision: A Critical Review
Edelman, Shimon; Weinshall, Daphna
We review the progress made in  computational vision, as represented by  Marr's approach, in the last fifteen years. First,  we briefly outline computational theories  developed for low, middle and high-level  vision. We then discuss in more detail  solutions proposed to three representative  problems in vision, each dealing with a  different level of visual processing. Finally, we  discuss modifications to the currently  established computational paradigm that  appear to be dictated by the recent  developments in vision.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6519</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognizing Three-Dimensional Objects without the Use of Models</title>
<link>https://hdl.handle.net/1721.1/6518</link>
<description>Recognizing Three-Dimensional Objects without the Use of Models
Marill, Thomas
We present an approach to the problem of  recognizing three-dimensional objects from  line-drawings. In this approach there are no  models. The system needs only to be given a  single picture of an object; it can then  recognize the object in arbitrary orientations.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6518</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Free Indexation: Combinatorial Analysis and a Compositional Algorithm</title>
<link>https://hdl.handle.net/1721.1/6517</link>
<description>Free Indexation: Combinatorial Analysis and a Compositional Algorithm
Fong, Sandiway
In the principles-and-parameters model of  language, the principle known as "free  indexation'' plays an important part in  determining the referential properties of  elements such as anaphors and  pronominals. This paper addresses two  issues. (1) We investigate the combinatorics  of free indexation. In particular, we show that  free indexation must produce an exponential  number of referentially distinct structures. (2)  We introduce a compositional free indexation  algorithm. We prove that the algorithm is  "optimal.'' More precisely, by relating the  compositional structure of the formulation to  the combinatorial analysis, we show that the  algorithm enumerates precisely all possible  indexings, without duplicates.
</description>
<pubDate>Fri, 01 Dec 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6517</guid>
<dc:date>1989-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition by Linear Combinations of Models</title>
<link>https://hdl.handle.net/1721.1/6516</link>
<description>Recognition by Linear Combinations of Models
Ullman, Shimon; Basri, Ronen
Visual object recognition requires the matching of an image with a set of models stored in memory. In this paper we propose an approach to recognition in which a 3-D object is represented by the linear combination of 2-D images of the object. If M = {M1,...,Mk} is the set of pictures representing a given object, and P is the 2-D image of an object to be recognized, then P is considered an instance of M if P = a1M1 + ... + akMk for some constants ai. We show that this approach handles correctly rigid 3-D transformations of objects with sharp as well as smooth boundaries, and can also handle non-rigid transformations. The paper is divided into two parts. In the first part we show that the variety of views depicting the same object under different transformations can often be expressed as the linear combinations of a small number of views. In the second part we suggest how this linear combination property may be used in the recognition process.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6516</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Battling Reality</title>
<link>https://hdl.handle.net/1721.1/6515</link>
<description>Battling Reality
Flynn, Anita M.; Brooks, Rodney A.
In the four years that the MIT Mobile Robot Project has been in existence, we have built ten robots that focus research in various areas concerned with building intelligent systems. Towards this end, we have embarked on trying to build useful autonomous creatures that live and work in the real world. Many of the preconceived notions entertained before we started building our robots turned out to be misguided. Some issues we thought would be hard have worked successfully from day one and subsystems we imagined to be trivial have become tremendous time sinks. Oddly enough, one of our biggest failures has led to some of our favorite successes. This paper describes the changing paths our research has taken due to the lessons learned from the practical realities of building robots.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6515</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Self-Organizing Multiple-View Representation of 3D Objects</title>
<link>https://hdl.handle.net/1721.1/6514</link>
<description>A Self-Organizing Multiple-View Representation of 3D Objects
Edelman, Shimon; Weinshall, Daphna
We explore representation of 3D objects in which several distinct 2D views are stored for each object. We demonstrate the ability of a two-layer network of thresholded summation units to support such representations. Using unsupervised Hebbian relaxation, we trained the network to recognise ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalisation capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6514</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compiling Scientific Code Using Partial Evaluation</title>
<link>https://hdl.handle.net/1721.1/6513</link>
<description>Compiling Scientific Code Using Partial Evaluation
Berlin, Andrew; Weise, Daniel
Scientists are faced with a dilemma: either  they can write abstract programs that express  their understanding of a problem, but which  do not execute efficiently; or they can write  programs that computers can execute  efficiently, but which are difficult to write and  difficult to understand. We have developed a  compiler that uses partial evaluation and  scheduling techniques to provide a solution to  this dilemma.
</description>
<pubDate>Sat, 01 Jul 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6513</guid>
<dc:date>1989-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Perceptual Buildup of Three-Dimensional Structure from Motion</title>
<link>https://hdl.handle.net/1721.1/6512</link>
<description>The Perceptual Buildup of Three-Dimensional Structure from Motion
Hildreth, Ellen C.; Grzywacz, Norberto M.; Adelson, Edward H.; Inada, Victor K.
We present psychophysical experiments that  measure the accuracy of perceived 3D  structure derived from relative image motion.  The experiments are motivated by Ullman's  incremental rigidity scheme, which builds up  3D structure incrementally over an extended  time. Our main conclusions are: first, the  human system derives an accurate model of  the relative depths of moving points, even in  the presence of noise; second, the accuracy  of 3D structure improves with time, eventually  reaching a plateau; and third, the 3D structure  currently perceived depends on previous 3D  models. Through computer simulations, we  relate the psychophysical observations to the  behavior of Ullman's model.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6512</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory of Networks for Approximation and Learning</title>
<link>https://hdl.handle.net/1721.1/6511</link>
<description>A Theory of Networks for Approximation and Learning
Poggio, Tomaso; Girosi, Federico
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
</description>
<pubDate>Sat, 01 Jul 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6511</guid>
<dc:date>1989-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stimulus Familiarity Determines Recognition Strategy for Novel 3-D Objects</title>
<link>https://hdl.handle.net/1721.1/6510</link>
<description>Stimulus Familiarity Determines Recognition Strategy for Novel 3-D Objects
Edelman, Shimon; Bulthoff, Heinrich; Weinshall, Daphna
We describe a psychophysical investigation of  the effects of object complexity and familiarity  on the variation of recognition time and  recognition accuracy over different views of  novel 3D objects. Our findings indicate that  with practice the response times for different  views become more uniform and the initially  orderly dependency of the response time on  the distance to a "good" view disappears.  One possible interpretation of our results is in  terms of a tradeoff between memory needed  for storing specific-view representations of  objects and time spent in recognizing the  objects.
</description>
<pubDate>Sat, 01 Jul 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6510</guid>
<dc:date>1989-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Curved Inertia Frames: Visual Attention and Perceptual Organization Using Convexity and Symmetry</title>
<link>https://hdl.handle.net/1721.1/6509</link>
<description>Curved Inertia Frames: Visual Attention and Perceptual Organization Using Convexity and Symmetry
Subirana-Vilanova, J. Brian
In this paper we present an approach to perceptual organization and attention based on Curved Inertia Frames (C.I.F.), a novel definition of "curved axis of inertia'' tolerant to noisy and spurious data. The definition is useful because it can find frames that correspond to large, smooth, convex, symmetric and central parts. It is novel because it is global and can detect curved axes. We discuss briefly the relation to human perception, the recognition of non-rigid objects, shape description, and extensions to finding "features", inside/outside relations, and long, smooth ridges in arbitrary surfaces.
</description>
<pubDate>Tue, 01 Oct 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6509</guid>
<dc:date>1991-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Perception of Three-Dimensional Objects</title>
<link>https://hdl.handle.net/1721.1/6508</link>
<description>Computer Perception of Three-Dimensional Objects
Marill, Thomas
We first pose the following problem: to develop a program which takes line-drawings as input and constructs three-dimensional objects as output, such that the output objects are the same as the ones we see when we look at the input line-drawing. We then introduce the principle of minimum standard-deviation of angles (MSDA) and discuss a program based on MSDA. We present the results of testing this program with a variety of line-drawings and show that the program constitutes a solution to the stated problem over the range of line-drawings tested. Finally, we relate this work to its historical antecedents in the psychological and computer-vision literature.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6508</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Taxonomic Syntax for First-Order Inference</title>
<link>https://hdl.handle.net/1721.1/6507</link>
<description>Taxonomic Syntax for First-Order Inference
McAllester, David; Givan, Robert
Most knowledge representation languages  are based on classes and taxonomic  relationships between classes. Taxonomic  hierarchies without defaults or exceptions are  semantically equivalent to a collection of  formulas in first order predicate calculus.  Although designers of knowledge  representation languages often express an  intuitive feeling that there must be some  advantage to representing facts as taxonomic  relationships rather than first order formulas,  there are few, if any, technical results  supporting this intuition. We attempt to  remedy this situation by presenting a  taxonomic syntax for first order predicate  calculus and a series of theorems that  support the claim that taxonomic syntax is  superior to classical syntax.
</description>
<pubDate>Thu, 01 Jun 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6507</guid>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Feature Matching for Object Localization in the Presence of Uncertainty</title>
<link>https://hdl.handle.net/1721.1/6506</link>
<description>Feature Matching for Object Localization in the Presence of Uncertainty
Cass, Todd Anthony
We consider the problem of matching model and sensory data features in the presence of geometric uncertainty, for the purpose of object localization and identification. The problem is to construct sets of model feature and sensory data feature pairs that are geometrically consistent given that there is uncertainty in the geometry of the sensory data features. If there is no geometric uncertainty, polynomial-time algorithms are possible for feature matching, yet these approaches can fail when there is uncertainty in the geometry of data features. Existing matching and recognition techniques which account for the geometric uncertainty in features either cannot guarantee finding a correct solution, or can construct geometrically consistent sets of feature pairs yet have worst case exponential complexity in terms of the number of features. The major new contribution of this work is to demonstrate a polynomial-time algorithm for constructing sets of geometrically consistent feature pairs given uncertainty in the geometry of the data features. We show that under a certain model of geometric uncertainty the feature matching problem in the presence of uncertainty is of polynomial complexity. This has important theoretical implications by demonstrating an upper bound on the complexity of the matching problem, and by offering insight into the nature of the matching problem itself. These insights prove useful in the solution to the matching problem in higher dimensional cases as well, such as matching three-dimensional models to either two- or three-dimensional sensory data. The approach is based on an analysis of the space of feasible transformation parameters. This paper outlines the mathematical basis for the method, and describes the implementation of an algorithm for the procedure. Experiments demonstrating the method are reported.
</description>
<pubDate>Tue, 01 May 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6506</guid>
<dc:date>1990-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Computation of 3D Shape Invariants and the Focus of Expansion</title>
<link>https://hdl.handle.net/1721.1/6505</link>
<description>Direct Computation of 3D Shape Invariants and the Focus of Expansion
Weinshall, Daphna
Structure from motion often refers to the  computation of 3D structure from a matched  sequence of images. However, a depth map  of a surface is difficult to compute and may not  be a good representation for storage and  recognition. Given matched images, I will first  show that the sign of the normal curvature in a  given direction at a given point in the image  can be computed from a simple difference of  slopes of line-segments in one image. Using  this result, local surface patches can be  classified as convex, concave, parabolic  (cylindrical), hyperbolic (saddle point) or  planar. At the same time the translational  component of the optical flow is obtained,  from which the focus of expansion can be  computed.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6505</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>XP: A Common Lisp Pretty Printing System</title>
<link>https://hdl.handle.net/1721.1/6504</link>
<description>XP: A Common Lisp Pretty Printing System
Waters, Richard C.
XP provides efficient and flexible support for  pretty printing in Common Lisp. Its single  greatest advantage is that it allows the full  benefits of pretty printing to be obtained when  printing data structures, as well as when  printing program code. XP is efficient,  because it is based on a linear time algorithm  that uses a small fixed amount of storage. XP  is flexible, because users can control the  exact form of the output via a set of special  format directives. XP can operate on arbitrary  data structures, because facilities are  provided for specifying pretty printing methods  for any type of object.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6504</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>XP: A Common Lisp Pretty Printing System</title>
<link>https://hdl.handle.net/1721.1/6503</link>
<description>XP: A Common Lisp Pretty Printing System
Waters, Richard C.
XP provides efficient and flexible support for pretty printing in Common Lisp. Its single greatest advantage is that it allows the full benefits of pretty printing to be obtained when printing data structures, as well as when printing program code. XP is efficient, because it is based on a linear time algorithm that uses only a small fixed amount of storage. XP is flexible, because users can control the exact form of the output via a set of special format directives. XP can operate on arbitrary data structures, because facilities are provided for specifying pretty printing methods for any type of object. XP also modifies the way abbreviation based on length, nesting depth, and circularity is supported so that these abbreviations automatically apply to user-defined functions that perform output, e.g., print functions for structures. In addition, a new abbreviation mechanism is introduced that can be used to limit the total number of lines printed.
</description>
<pubDate>Wed, 01 Mar 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6503</guid>
<dc:date>1989-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using English For Indexing and Retrieving</title>
<link>https://hdl.handle.net/1721.1/6502</link>
<description>Using English For Indexing and Retrieving
Katz, Boris
This paper describes a natural language system START. The system analyzes English text and automatically transforms it into an appropriate representation, the knowledge base, which incorporates the information found in the text. The user gains access to information stored in the knowledge base by querying it in English. The system analyzes the query and decides through a matching process what information in the knowledge base is relevant to the question. Then it retrieves this information and formulates its response also in English.
</description>
<pubDate>Sat, 01 Oct 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6502</guid>
<dc:date>1988-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligence in Scientific Computing</title>
<link>https://hdl.handle.net/1721.1/6501</link>
<description>Intelligence in Scientific Computing
Abelson, Harold; Eisenberg, Michael; Halfant, Matthew; Katzenelson, Jacob; Sacks, Elisha; Sussman, Gerald Jay; Wisdom, Jack; Yip, Ken
Combining numerical techniques with ideas  from symbolic computation and with methods  incorporating knowledge of science and  mathematics leads to a new category of  intelligent computational tools for scientists  and engineers. These tools autonomously  prepare simulation experiments from high-level specifications of physical models. For  computationally intensive experiments, they  automatically design special-purpose  numerical engines optimized to perform the  necessary computations. They actively  monitor numerical and physical experiments.  They interpret experimental data and  formulate numerical results in qualitative  terms. They enable their human users to  control computational experiments in terms of  high-level behavioral descriptions.
</description>
<pubDate>Tue, 01 Nov 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6501</guid>
<dc:date>1988-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Robot that Walks: Emergent Behaviors from a Carefully Evolved Network</title>
<link>https://hdl.handle.net/1721.1/6500</link>
<description>A Robot that Walks: Emergent Behaviors from a Carefully Evolved Network
Brooks, Rodney A.
Most animals have significant behavioral expertise built in without having to explicitly learn it all from scratch. This expertise is a product of evolution of the organism; it can be viewed as a very long term form of learning which provides a structured system within which individuals might learn more specialized skills or abilities. This paper suggests one possible mechanism for analogous robot evolution by describing a carefully designed series of networks, each one being a strict augmentation of the previous one, which control a six legged walking machine capable of walking over rough terrain and following a person passively sensed in the infrared spectrum. As the completely decentralized networks are augmented, the robot's performance and behavior repertoire demonstrably improve. The rationale for such demonstrations is that they may provide a hint as to the requirements for automatically building massive networks to carry out complex sensory-motor tasks. The experiments with an actual robot ensure that an essence of reality is maintained and that no critical problems have been ignored.
</description>
<pubDate>Wed, 01 Feb 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6500</guid>
<dc:date>1989-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimal Scale for Edge Detection</title>
<link>https://hdl.handle.net/1721.1/6499</link>
<description>An Optimal Scale for Edge Detection
Geiger, Davi; Poggio, Tomaso
Many problems in early vision are ill posed. Edge detection is a typical example. This paper applies regularization techniques to the problem of edge detection. We derive an optimal filter for edge detection with a size controlled by the regularization parameter $\\lambda$ and compare it to the Gaussian filter. A formula relating the signal-to-noise ratio to the parameter $\\lambda$ is derived from regularization analysis for the case of small values of $\\lambda$. We also discuss the method of Generalized Cross Validation for obtaining the optimal filter scale. Finally, we use our framework to explain two perceptual phenomena: coarsely quantized images becoming recognizable by either blurring or adding noise.
</description>
<pubDate>Thu, 01 Sep 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6499</guid>
<dc:date>1988-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Networks for Machine Vision</title>
<link>https://hdl.handle.net/1721.1/6498</link>
<description>Parallel Networks for Machine Vision
Horn, Berthold K.P.
The amount of computation required to solve  many early vision problems is prodigious, and  so it has long been thought that systems that  operate in a reasonable amount of time will  only become feasible when parallel systems  become available. Such systems now exist in  digital form, but most are large and expensive.  These machines constitute an invaluable test-bed for the development of new algorithms,  but they can probably not be scaled down  rapidly in both physical size and cost, despite  continued advances in semiconductor  technology and machine architecture. Simple  analog networks can perform interesting  computations, as has been known for a long  time. We have reached the point where it is  feasible to experiment with implementation of  these ideas in VLSI form, particularly if we  focus on networks composed of locally  interconnected passive elements, linear  amplifiers, and simple nonlinear  components. While there have been  excursions into the development of ideas in  this area since the very beginnings of work on  machine vision, much work remains to be  done. Progress will depend on careful  attention to matching of the capabilities of  simple networks to the needs of early vision.  Note that this is not at all intended to be  anything like a review of the field, but merely a  collection of some ideas that seem to be  interesting.
</description>
<pubDate>Thu, 01 Dec 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6498</guid>
<dc:date>1988-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Operating Environment for the Jellybean Machine</title>
<link>https://hdl.handle.net/1721.1/6497</link>
<description>An Operating Environment for the Jellybean Machine
Totty, Brian K.
The Jellybean Machine is a scalable MIMD  concurrent processor consisting of special  purpose RISC processors loosely coupled  into a low latency network. I have developed  an operating system to provide the supportive  environment required to efficiently coordinate  the collective power of the distributed  processing elements. The system services  are developed in detail, and may be of interest  to other designers of fine grain, distributed  memory processing networks.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6497</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Message-Driven Processor Architecture: Version 11</title>
<link>https://hdl.handle.net/1721.1/6496</link>
<description>Message-Driven Processor Architecture: Version 11
Dally, William; Chien, Andrew; Fiske, Stuart; Horwat, Waldemar; Keen, John; Nuth, Peter; Larivee, Jerry; Totty, Brian
The Message-Driven Processor is a node of a  large-scale multiprocessor being developed  by the Concurrent VLSI Architecture Group. It  is intended to support fine-grained, message  passing, parallel computation. It contains  several novel architectural features, such as a  low-latency network interface, extensive type-checking hardware, and on-chip memory that  can be used as an associative lookup table.  This document is a programmer's guide to  the MDP. It describes the processor's register  architecture, instruction set, and the data  types supported by the processor. It also  details the MDP's message sending and  exception handling facilities.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6496</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating the Illuminant Color from the Shading of a Smooth Surface</title>
<link>https://hdl.handle.net/1721.1/6495</link>
<description>Estimating the Illuminant Color from the Shading of a Smooth Surface
Lee, Hsien-Che
A uniform wall illuminated by a spot light often gives a strong impression of the illuminant color. How is it possible to know whether it is a white wall illuminated by yellow light or a yellow wall illuminated by white light? If the wall is a Lambertian reflector, it would not be possible to tell the difference. However, in the real world, some amount of specular reflection is almost always present. In this memo, it is shown that the computation is possible in most practical cases.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6495</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Differential and Matching Methods for Optical Flow</title>
<link>https://hdl.handle.net/1721.1/6494</link>
<description>Analysis of Differential and Matching Methods for Optical Flow
Little, James J.; Verri, Alessandro
Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (&lt;2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques will be examined. Most algorithms for optical flow utilize weak assumptions on the local variation of the flow and on the variation of image brightness. Strengthening these assumptions improves the flow computation. The computational consequence of this is a need for larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that there is slow spatial variation in the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, it is important that the temporal and spatial derivatives be matched, because of the significant scale differences in space and time, and, second, the derivative estimates improve with larger support.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6494</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Saliency: The Detection of Globally Salient Structures Using a Locally Connected Network</title>
<link>https://hdl.handle.net/1721.1/6493</link>
<description>Structural Saliency: The Detection of Globally Salient Structures Using a Locally Connected Network
Ullman, Shimon; Sha'ashua, Amnon
Certain salient structures in images attract  our immediate attention without requiring a  systematic scan. We present a method for  computing saliency by a simple iterative  scheme, using a uniform network of locally  connected processing elements. The network  uses an optimization approach to produce a  "saliency map," a representation of the image  emphasizing salient locations. The main  properties of the network are: (i) the  computations are simple and local, (ii)  globally salient structures emerge with a  small number of iterations, and (iii) as a by-product of the computations, contours are  smoothed and gaps are filled in.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6493</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Alignment of Objects with Smooth Surfaces</title>
<link>https://hdl.handle.net/1721.1/6492</link>
<description>The Alignment of Objects with Smooth Surfaces
Ullman, Shimon; Basri, Ronen
This paper examines the recognition of rigid  objects bounded by smooth surfaces using  an alignment approach. The projected image  of such an object changes during rotation in a  manner that is difficult to predict. A method to  approach this problem is suggested, using  the 3D surface curvature at the points along  the silhouette. The curvature information  requires a single number for each point along  the object's silhouette, the magnitude of the  curvature vector at the point. We have  implemented and tested this method on  images of complex 3D objects; it was found to  give accurate predictions of the objects'  appearances for large transformations. A  small number of models can be used to  predict the new appearance of an object from  any viewpoint.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6492</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-Based Reasoning: Troubleshooting</title>
<link>https://hdl.handle.net/1721.1/6491</link>
<description>Model-Based Reasoning: Troubleshooting
Davis, Randall; Hamscher, Walter C.
To determine why something has stopped working, it is useful to know how it was supposed to work in the first place. That simple observation underlies some of the considerable interest generated in recent years on the topic of model-based reasoning, particularly its application to diagnosis and troubleshooting. This paper surveys the current state of the art, reviewing areas that are well understood and exploring areas that present challenging research topics. It views the fundamental paradigm as the interaction of prediction and observation, and explores it by examining three fundamental subproblems: Generating hypotheses by reasoning from a symptom to a collection of components whose misbehavior may plausibly have caused that symptom; testing each hypothesis to see whether it can account for all available observations of device behavior; then discriminating among the ones that survive testing. We analyze each of these independently at the knowledge level, i.e., attempting to understand what reasoning capabilities arise from the different varieties of knowledge available to the program. We find that while a wide range of apparently diverse model-based systems have been built for diagnosis and troubleshooting, they are for the most part variations on the central theme outlined here. Their diversity lies primarily in the varying amounts and kinds of knowledge they bring to bear at each stage of the process; the underlying paradigm is fundamentally the same. Our survey of this familiar territory leads to a second major conclusion of the paper: Diagnostic reasoning from a model is reasonably understood. Given a model of behavior and structure, we know how to use it in a variety of ways to produce a diagnosis. There is, by contrast, a rich supply of open research issues in the modeling process itself. 
In a sense we know how to do model-based reasoning; we do not know how to model the behavior of complex devices, how to create models, and how to select the "right" model for the task at hand.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6491</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>What Are Plans For?</title>
<link>https://hdl.handle.net/1721.1/6487</link>
<description>What Are Plans For?
Agre, Philip E.; Chapman, David
What plans are like depends on how they're used. We contrast two views of plan use. On the plan-as-program view, plan use is the execution of an effective procedure. On the plan-as-communication view, plan use is like following natural language instructions. We have begun work on computational models of plans-as-communication, building on our previous work on improvised activity and on ideas from sociology.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6487</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Demystifying Quantum Mechanics: A Simple Universe with Quantum Uncertainty</title>
<link>https://hdl.handle.net/1721.1/6486</link>
<description>Demystifying Quantum Mechanics: A Simple Universe with Quantum Uncertainty
Drescher, Gary L.
An artificial universe is defined that has  entirely deterministic laws with exclusively  local interactions, and that exhibits the  fundamental quantum uncertainty  phenomenon: superposed states mutually  interfere, but only to the extent that no  observation distinguishes among them.  Showing how such a universe could be  elucidates interpretational issues of actual  quantum mechanics. The artificial universe is  a much-simplified version of Everett's real-world model (the so-called multiple-worlds  formulation). In the artificial world, as in  Everett's model, the tradeoff between  interference and observation is deducible  from the universe formalism. Artificial world  examples analogous to the quantum double-slit experiment and the EPR experiment are  presented.
</description>
<pubDate>Thu, 01 Dec 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6486</guid>
<dc:date>1988-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Combinatorics of Object Recognition in Cluttered Environments Using Constrained Search</title>
<link>https://hdl.handle.net/1721.1/6485</link>
<description>The Combinatorics of Object Recognition in Cluttered Environments Using Constrained Search
Grimson, W. Eric L.
When clustering techniques such as the  Hough transform are used to isolate likely  subspaces of the search space, empirical  performance in cluttered scenes improves  considerably. In this paper we establish  formal bounds on the combinatorics of this  approach. Under some simple assumptions,  we show that the expected complexity of  recognizing isolated objects is quadratic in  the number of model and sensory fragments,  but that the expected complexity of recognizing  objects in cluttered environments is  exponential in the size of the correct  interpretation. We also provide formal bounds  on the efficacy of using the Hough transform  to preselect likely subspaces, showing that  the problem remains exponential, but that in  practical terms, the size of the problem is  significantly decreased.
</description>
<pubDate>Mon, 01 Feb 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6485</guid>
<dc:date>1988-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pattern-Directed Invocation with Changing Equations</title>
<link>https://hdl.handle.net/1721.1/6484</link>
<description>Pattern-Directed Invocation with Changing Equations
Feldman, Yishai A.; Rich, Charles
The interaction of pattern-directed invocation with equality in an automated reasoning system gives rise to a completeness problem. In such systems, a demon needs to be invoked not only when its pattern exactly matches a term in the reasoning database, but also when it is possible to create a variant that matches. An incremental algorithm has been developed which solves this problem without generating all possible variants of terms in the database. The algorithm is shown to be complete for a class of demons, called transparent demons, in which there is a well-behaved logical relationship between the pattern and the body of the demon.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6484</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Herbert: A Second Generation Mobile Robot</title>
<link>https://hdl.handle.net/1721.1/6483</link>
<description>Herbert: A Second Generation Mobile Robot
Brooks, Rodney A.; Connell, Jonathan; Ning, Peter
In mobile robot research we believe the structure of the platform, its capabilities, the choice of sensors, their capabilities, and the choice of processors, both onboard and offboard, greatly constrains the direction of research activity centered on the platform. We examine the design and tradeoffs in a low cost mobile platform we have built while paying careful attention to issues of sensing, manipulation, onboard processing and debuggability of the total system. The robot, named Herbert, is a completely autonomous mobile robot with an onboard parallel processor and special hardware support for the subsumption architecture [Brooks (1986)], an onboard manipulator and a laser range scanner. All processors are simple low speed 8-bit micro-processors. The robot is capable of real time three dimensional vision, while simultaneously carrying out manipulator and navigation tasks.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6483</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Lexical Conceptual Approach to Generation for Machine Translation</title>
<link>https://hdl.handle.net/1721.1/6482</link>
<description>A Lexical Conceptual Approach to Generation for Machine Translation
Dorr, Bonnie J.
Current approaches to generation for machine translation make use of direct-replacement templates, large grammars, and knowledge-based inferencing techniques. Not only are rules language-specific, but they are too simplistic to handle sentences that exhibit more complex phenomena. Furthermore, these systems are not easily extendable to other languages because the rules that map the internal representation to the surface form are entirely dependent on both the domain of the system and the language being generated. Finally, an adequate interlingual representation has not yet been discovered; thus, knowledge-based inferencing is necessary and syntactic cross-linguistic generalization cannot be exploited. This report introduces a plan for the development of a theoretically based computational scheme of natural language generation for a translation system. The emphasis of the project is the mapping from the lexical conceptual structure of sentences to an underlying or "base" syntactic structure called deep structure. This approach tackles the problems of thematic and structural divergence, i.e., it allows generation of target language sentences that are not thematically or structurally equivalent to their conceptually equivalent source language counterparts. Two other more secondary tasks, construction of a dictionary and mapping from deep structure to surface structure, will also be discussed. The generator operates on a constrained grammatical theory rather than on a set of surface level transformations. If the endeavor succeeds, there will no longer be a need for large, detailed grammars; general knowledge-based inferencing will not be necessary; lexical selection and syntactic realization will be facilitated; and the model will be general enough for extension to other languages.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6482</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Soft Objects: A Paradigm for Object Oriented Programming</title>
<link>https://hdl.handle.net/1721.1/6481</link>
<description>Soft Objects: A Paradigm for Object Oriented Programming
Haase, Kenneth
This paper introduces soft objects, a new paradigm for object oriented programming. This paradigm replaces the traditional notion of object classes with the specification of transforming procedures which transform simpler objects into more complicated objects. These transforming procedures incrementally construct new objects by adding new state or providing handlers for new messages. Unlike other incremental approaches (e.g. the inherited exist handlers of Object Logo [Drescher, 1987]), transforming procedures are strict functions which always return new objects; rather than conflating objects and object abstractions (classes), the soft objects paradigm distinctly separates objects and their abstractions. The composition of these transforming procedures replaces the inheritance schemes of class oriented approaches; the order of composition of transforming procedures makes explicit the inheritance indeterminacies introduced by multiple super classes. Issues regarding semantics, efficiency, and security are discussed in the context of several alternative implementation models, and the code of a complete implementation is provided in an appendix.
</description>
<pubDate>Thu, 01 Mar 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6481</guid>
<dc:date>1990-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Utilizing Dynamic Stability to Orient Parts</title>
<link>https://hdl.handle.net/1721.1/6480</link>
<description>Utilizing Dynamic Stability to Orient Parts
Singer, Neil C.; Seering, Warren P.
The intent of this research is to study the  dynamic behavior of a solid body resting on a  moving surface. Results of the study are then  used to propose methods for controlling the  orientation of parts in preparation for  automatic assembly. Two dynamic models  are discussed in detail. The first examines the  impacts required to cause reorientation of a  part. The second investigates the use of  oscillatory motion to selectively reorient parts.  This study demonstrates that the dynamic  behaviors of solid bodies, under the  conditions mentioned above, vary  considerably with small changes in geometry  or orientation.
</description>
<pubDate>Mon, 01 Feb 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6480</guid>
<dc:date>1988-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge Base Integration: What Can We Learn from Database Integration Research?</title>
<link>https://hdl.handle.net/1721.1/6479</link>
<description>Knowledge Base Integration: What Can We Learn from Database Integration Research?
Lee, Jintae
This paper examines the issues and the  solutions that have been studied in database  (DB) integration research and tries to draw  lessons from them for knowledge base (KB)  integration.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6479</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Standard Architecture for Controlling Robots</title>
<link>https://hdl.handle.net/1721.1/6478</link>
<description>A Standard Architecture for Controlling Robots
Narasimhan, Sundar; Siegel, David M.; Hollerbach, John M.
This paper describes a fully implemented computational architecture that controls the Utah-MIT dextrous hand and other complex robots. Robots like the Utah-MIT hand are characterized by large numbers of actuators and sensors, and require high servo rates. Consequently, powerful and flexible computer architectures are needed to control them. The architecture described in this paper derives its power from the highly efficient real-time environment provided for its control processors, coupled with a development host that enables flexible program development. By mapping the memory of a dedicated group of processors into the address space of a host computer, efficient sharing of system resources between them is possible. The software is characterized by a few simple design concepts but provides the facilities out of which more powerful utilities, like a multi-processor pseudoterminal emulator, a transparent and fast file server, and a flexible symbolic debugger, could be constructed.
</description>
<pubDate>Fri, 01 Jul 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6478</guid>
<dc:date>1988-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relaxing the Brightness Constancy Assumption in Computing Optical Flow</title>
<link>https://hdl.handle.net/1721.1/6477</link>
<description>Relaxing the Brightness Constancy Assumption in Computing Optical Flow
Gennert, Michael A.; Negahdaripour, Shahriar
Optical flow is the apparent (or perceived)  motion of image brightness patterns arising  from relative motion of objects and observer.  Estimation of the optical flow requires the  application of two kinds of constraint: the flow  field smoothness constraint and the  brightness constancy constraint. The  brightness constancy constraint permits one  to match image brightness values across  images, but is very restrictive. We propose  replacing this constraint with a more general  constraint, which permits a linear  transformation between image brightness  values. The transformation parameters are  allowed to vary smoothly so that inexact  matching is allowed. We describe the  implementation on a highly parallel computer  and present sample results.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6477</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synapses That Compute Motion</title>
<link>https://hdl.handle.net/1721.1/6476</link>
<description>Synapses That Compute Motion
Poggio, Tomaso A; Koch, C.
Biophysics of computation is a new field that attempts to characterize the role in information processing of the several biophysical mechanisms in neurons, synapses, and membranes that have been uncovered in recent years. In this article, we review a synaptic mechanism, based on the interaction between excitation and silent inhibition, that implements a veto-like operation. Synapses of this type may underlie selectivity to direction of motion in the vertebrate retina.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6476</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Integration and Detection of Discontinuities: The Key Role of Intensity Edges</title>
<link>https://hdl.handle.net/1721.1/6475</link>
<description>Visual Integration and Detection of Discontinuities: The Key Role of Intensity Edges
Gamble, Ed; Poggio, Tomaso
Integration of several vision modules is likely to be one of the keys to the power and robustness of the human visual system. The problem of integrating early vision cues is also emerging as a central problem in current computer vision research. In this paper we suggest that integration is best performed at the location of discontinuities in early processes, such as discontinuities in image brightness, depth, motion, texture and color. Coupled Markov Random Field models, based on Bayes estimation techniques, can be used to combine vision modalities with their discontinuities. These models generate algorithms that map naturally onto parallel fine-grained architectures such as the Connection Machine. We derive a scheme to integrate intensity edges with stereo depth and motion field information and show results on synthetic and natural images. The use of intensity edges to integrate other visual cues and to help discover discontinuities emerges as a general and powerful principle.
</description>
<pubDate>Thu, 01 Oct 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6475</guid>
<dc:date>1987-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Obviously Synchronizable Series Expressions: Part II: Overview of the Theory and Implementation</title>
<link>https://hdl.handle.net/1721.1/6474</link>
<description>Obviously Synchronizable Series Expressions: Part II: Overview of the Theory and Implementation
Waters, Richard C.
The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently; the prime source of inefficiency is the creation of intermediate series objects. A restricted class of series expressions, obviously synchronizable series expressions, is defined which can be evaluated very efficiently. At the cost of introducing restrictions which place modest limits on the series expressions which can be written, the restrictions guarantee that the creation of intermediate series objects is never necessary. This makes it possible to automatically convert obviously synchronizable series expressions into highly efficient loops using straightforward algorithms.
</description>
<pubDate>Tue, 01 Mar 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6474</guid>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synchronizable Series Expressions: Part II: Overview of the Theory and Implementation</title>
<link>https://hdl.handle.net/1721.1/6473</link>
<description>Synchronizable Series Expressions: Part II: Overview of the Theory and Implementation
Waters, Richard C.
The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently; the prime source of inefficiency is the creation of intermediate series objects. A restricted class of series expressions, obviously synchronizable series expressions, is defined which can be evaluated very efficiently. At the cost of introducing restrictions which place modest limits on the series expressions which can be written, the restrictions guarantee that the creation of intermediate series objects is never necessary. This makes it possible to automatically convert obviously synchronizable series expressions into highly efficient loops using straightforward algorithms.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6473</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Obviously Synchronizable Series Expression: Part I: User's Manual for the OSS Macro Package</title>
<link>https://hdl.handle.net/1721.1/6472</link>
<description>Obviously Synchronizable Series Expression: Part I: User's Manual for the OSS Macro Package
Waters, Richard C.
The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently. A Common Lisp macro package (OSS) has been implemented which supports a restricted class of series expressions, obviously synchronizable series expressions, which can be evaluated very efficiently by automatically converting them into loops. Using this macro package, programmers can obtain the advantages of expressing computations as series expressions without incurring any run-time overhead.
</description>
<pubDate>Tue, 01 Mar 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6472</guid>
<dc:date>1988-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simplified Voronoi Diagrams</title>
<link>https://hdl.handle.net/1721.1/6471</link>
<description>Simplified Voronoi Diagrams
Canny, John; Donald, Bruce
The Voronoi diagram has proved to be a useful tool in a variety of contexts in computational geometry. Our interest here is in using the diagram to simplify the planning of collision-free paths for a robot among obstacles, the so-called generalized movers' problem. The Voronoi diagram, as usually defined, is a strong deformation retract of free space so that free space can be continuously deformed onto the diagram. In particular, any path in free space can be continuously deformed onto the diagram. This means that the diagram is complete for path planning, i.e., searching the original space for paths can be reduced to a search on the diagram. Reducing the dimension of the set to be searched usually reduces the time complexity of the search. Secondly, the diagram leads to robust paths, i.e., paths that are maximally clear of obstacles.
</description>
<pubDate>Wed, 01 Apr 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6471</guid>
<dc:date>1987-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Dynamicist's Workbench: I Automatic Preparation of Numerical Experiments</title>
<link>https://hdl.handle.net/1721.1/6470</link>
<description>The Dynamicist's Workbench: I Automatic Preparation of Numerical Experiments
Abelson, Harold; Sussman, Gerald Jay
The dynamicist's workbench is a system for automating some of the work of experimental dynamics. We describe a portion of our system that deals with the setting up and execution of numerical simulations. This part of the workbench includes a spectrum of computational tools---numerical methods, symbolic algebra, and semantic constraints. These tools are designed so that combined methods, tailored to particular problems, can be constructed.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6470</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Formalizing Reusable Software Components in the Programmer's Apprentice</title>
<link>https://hdl.handle.net/1721.1/6469</link>
<description>Formalizing Reusable Software Components in the Programmer's Apprentice
Rich, Charles; Waters, Richard C.
There has been a long-standing desire in computer science for a way of collecting and using libraries of standard software components. The limited success in actually doing this stems not from any resistance to the idea, nor from any lack of trying, but rather from the difficulty of choosing an appropriate formalism for representing components. For a formalism to be maximally useful, it must satisfy five key desiderata: expressiveness, convenient combinability, semantic soundness, machine manipulability, and programming language independence. The Plan Calculus formalism developed as part of the Programmer's Apprentice project satisfies each of these desiderata quite well. It does this by combining the ideas from flowchart schemas, data abstraction, logical formalisms, and program transformations. The efficacy of the Plan Calculus has been demonstrated in part by a prototype program editor called the Knowledge-based Editor in Emacs. This editor makes it possible for a programmer to construct a program rapidly and reliably by combining components represented as plans.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6469</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scheme 86: An Architecture for Microcoding a Scheme Interpreter</title>
<link>https://hdl.handle.net/1721.1/6468</link>
<description>Scheme 86: An Architecture for Microcoding a Scheme Interpreter
Wu, Henry M.
I describe the design and implementation plans for a computer that is optimized as a microcoded interpreter for Scheme. The computer executes SCode, a typed-pointer representation. The memory system has low latency as well as high throughput. Multiple execution units in the processor complete complex operations in less than one memory cycle, allowing efficient use of memory bandwidth. The processor provides hardware support for tagged data objects and runtime type checking. I will discuss the motivation for this machine, its architecture, why it can interpret Scheme efficiently, and the computer-aided design tools developed for building this computer.
</description>
<pubDate>Mon, 01 Aug 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6468</guid>
<dc:date>1988-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Solutions to Geometric Problems on the Scan Model of Computation</title>
<link>https://hdl.handle.net/1721.1/6467</link>
<description>Parallel Solutions to Geometric Problems on the Scan Model of Computation
Blelloch, Guy E.; Little, James J.
This paper describes several parallel algorithms that solve geometric problems. The algorithms are based on a vector model of computation---the scan model. The purpose of this paper is both to show how the model can be used and to show a set of interesting algorithms, most of which have been implemented on the Connection Machine, a highly parallel single instruction multiple data (SIMD) computer.
</description>
<pubDate>Mon, 01 Feb 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6467</guid>
<dc:date>1988-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparative Analysis</title>
<link>https://hdl.handle.net/1721.1/6466</link>
<description>Comparative Analysis
Weld, Daniel S.
Comparative analysis is the problem of predicting how a system will react to perturbations in its parameters, and why. For example, comparative analysis could be asked to explain why the period of an oscillating spring/block system would increase if the mass of the block were larger. This paper formalizes the problem of comparative analysis and presents a technique, differential qualitative (DQ) analysis, which solves the task, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation-based generalization. DQ analysis uses inference rules to deduce qualitative information about the relative change of system parameters. Multiple perspectives are used to represent relative change values over intervals of time. Differential analysis has been implemented, tested on a dozen examples, and proven sound. Unfortunately, the technique is incomplete; it always terminates, but does not always return an answer.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6466</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting Qualitative Dynamics from Numerical Experiments</title>
<link>https://hdl.handle.net/1721.1/6465</link>
<description>Extracting Qualitative Dynamics from Numerical Experiments
Yip, Kenneth Man-Kam
The phase space is a powerful tool for representing and reasoning about the qualitative behavior of nonlinear dynamical systems. Significant physical phenomena of the dynamical system---periodicity, recurrence, stability and the like---are reflected by outstanding geometric features of the trajectories in the phase space. This paper presents an approach for the automatic reconstruction of the full dynamical behavior from the numerical results by exploiting knowledge of Dynamical Systems Theory and techniques from computational geometry and computer vision. The approach is applied to an important class of dynamical systems, the area-preserving maps, which often arise from the study of Hamiltonian systems.
</description>
<pubDate>Sun, 01 Mar 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6465</guid>
<dc:date>1987-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Translation via Abstraction and Reimplementation</title>
<link>https://hdl.handle.net/1721.1/6464</link>
<description>Program Translation via Abstraction and Reimplementation
Waters, Richard C.
Essentially all program translators (both source-to-source translators and compilers) operate via transliteration and refinement. This approach is fundamentally limited in the quality of the output it can produce. In particular, it tends to be insufficiently sensitive to global features of the source program and too sensitive to irrelevant local details. This paper presents the alternate translation paradigm of abstraction and reimplementation, which is one of the goals of the Programmer's Apprentice project. A translator has been constructed which translates Cobol programs into Hibol (a very high level, business data processing language). A compiler has been designed which generates extremely efficient PDP-11 object code for Pascal programs.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6464</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Bandwidth Limitations in Robot Force Control</title>
<link>https://hdl.handle.net/1721.1/6463</link>
<description>Understanding Bandwidth Limitations in Robot Force Control
Eppinger, Steven D.; Seering, Warren P.
This paper provides an analytical overview of the dynamics involved in force control. Models are developed which demonstrate, for the one-axis explicit force control case, the effects on system closed-loop bandwidth of: a) robot system dynamics that are not usually considered in the controller design; b) drive-train and task nonlinearities; and c) actuator and controller dynamics. The merits and limitations of conventional solutions are weighed, and some new solutions are proposed. Conclusions are drawn which give insights into the relative importance of the effects discussed.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6463</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principle-Based Parsing for Machine Translation</title>
<link>https://hdl.handle.net/1721.1/6462</link>
<description>Principle-Based Parsing for Machine Translation
Dorr, Bonnie J.
Many syntactic parsing strategies for machine translation systems are based entirely on context-free grammars. These parsers require an overwhelming number of rules; thus, translation systems using rule-based parsers either have limited linguistic coverage, or they have poor performance due to formidable grammar size. This report shows how a principle-based parser with a 'co-routine' design improves parsing for translation. The parser consists of a skeletal structure-building mechanism that operates in conjunction with a linguistically based constraint module, passing control back and forth until a set of underspecified skeletal phrase-structures is converted into a fully instantiated parse tree. The modularity of the parsing design accommodates linguistic generalization, reduces the grammar size, allows extension to other languages, and is compatible with studies of human language processing.
</description>
<pubDate>Tue, 01 Dec 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6462</guid>
<dc:date>1987-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reification without Evaluation</title>
<link>https://hdl.handle.net/1721.1/6461</link>
<description>Reification without Evaluation
Bawden, Alan
Constructing self-referential systems, such as Brian Smith's 3-Lisp language, is actually more straightforward than you think. Anyone can build an infinite tower of processors (where each processor implements the processor at the next level below) by employing some common sense and one simple trick. In particular, it is not necessary to re-design quotation, take a stand on the relative merits of evaluation vs. normalization, or treat continuations as meta-level objects. This paper presents a simple programming language interpreter that illustrates how this can be done. By keeping its expression evaluator entirely separate from the mechanisms that implement its infinite tower, this interpreter avoids many troublesome aspects of previous self-referential programming languages. Given these basically straightforward techniques, processor towers might be easily constructed for a wide variety of systems to enable them to manipulate and reason about themselves.
</description>
<pubDate>Wed, 01 Jun 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6461</guid>
<dc:date>1988-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Offices are Open Systems</title>
<link>https://hdl.handle.net/1721.1/6460</link>
<description>Offices are Open Systems
Hewitt, Carl E.
This paper takes a prescriptive stance on how to establish the information-processing foundations for taking action and making decisions in office work from an open system perspective. We propose due process as a central activity in organizational information processing.
</description>
<pubDate>Sun, 01 Feb 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6460</guid>
<dc:date>1987-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dimensionality-Reduction Using Connectionist Networks</title>
<link>https://hdl.handle.net/1721.1/6459</link>
<description>Dimensionality-Reduction Using Connectionist Networks
Saund, Eric
This paper presents a method for using the self-organizing properties of connectionist networks of simple computing elements to discover a particular type of constraint in multidimensional data. The method performs dimensionality-reduction in a wide class of situations for which an assumption of linearity need not be made about the underlying constraint surface. We present a scheme for representing the values of continuous (scalar) variables in subsets of units. The backpropagation weight updating method for training connectionist networks is extended by the use of auxiliary pressure in order to coax hidden units into the prescribed representation for scalar-valued variables.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6459</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ambiguities of a Motion Field</title>
<link>https://hdl.handle.net/1721.1/6458</link>
<description>Ambiguities of a Motion Field
Negahdaripour, Shahriar
We study the conditions under which a perspective motion field can have multiple interpretations. Furthermore, we show that in most cases, the ambiguity in the interpretation of a motion field can be resolved by imposing the physical constraint that depth is positive over the image region onto which the surface projects.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6458</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Direct Method for Locating the Focus of Expansion</title>
<link>https://hdl.handle.net/1721.1/6457</link>
<description>A Direct Method for Locating the Focus of Expansion
Negahdaripour, Shahriar; Horn, Berthold K.P.
We address the problem of recovering the motion of a monocular observer relative to a rigid scene. We do not make any assumptions about the shapes of the surfaces in the scene, nor do we use estimates of the optical flow or point correspondences. Instead, we exploit the spatial gradient and the time rate of change of brightness over the whole image and explicitly impose the constraint that the surface of an object in the scene must be in front of the camera for it to be imaged.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6457</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognizing Rigid Objects by Aligning Them with an Image</title>
<link>https://hdl.handle.net/1721.1/6456</link>
<description>Recognizing Rigid Objects by Aligning Them with an Image
Huttenlocher, Daniel P.; Ullman, Shimon
This paper presents an approach to recognition where an object is first aligned with an image using a small number of pairs of model and image features, and then the aligned model is compared directly against the image. To demonstrate the method, we present some examples of recognizing flat rigid objects with arbitrary three-dimensional position, orientation, and scale, from a single two-scale-space segmentation of edge contours. The method is extended to the domain of non-flat objects as well.
</description>
<pubDate>Thu, 01 Jan 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6456</guid>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Programmer's Apprentice: A Program Design Scenario</title>
<link>https://hdl.handle.net/1721.1/6455</link>
<description>The Programmer's Apprentice: A Program Design Scenario
Rich, Charles; Waters, Richard C.
A scenario is used to illustrate the capabilities of a proposed Design Apprentice, focusing on the area of detailed, low-level design. Given a specification, the Design Apprentice will be able to make many of the design decisions needed to synthesize the required program. The Design Apprentice will also be able to detect various kinds of contradictions and omissions in a specification.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6455</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Approach To Object Recognition: Aligning Pictorial Descriptions</title>
<link>https://hdl.handle.net/1721.1/6454</link>
<description>An Approach To Object Recognition: Aligning Pictorial Descriptions
Ullman, Shimon
This paper examines the problem of shape-based object recognition and proposes a new approach, the alignment of pictorial descriptions. The first part of the paper reviews general approaches to visual object recognition and divides these approaches into three broad classes: invariant properties methods, object decomposition methods, and alignment methods. The second part presents the alignment method. In this approach the recognition process is divided into two stages. The first determines the transformation in space that is necessary to bring the viewed object into alignment with possible object-models. The second stage determines the model that best matches the viewed object. The proposed alignment method also uses abstract descriptions, but unlike structural description methods, it uses them pictorially, rather than in symbolic structural descriptions.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6454</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simplifying Decision Trees</title>
<link>https://hdl.handle.net/1721.1/6453</link>
<description>Simplifying Decision Trees
Quinlan, J.R.
Many systems have been developed for constructing decision trees from collections of examples. Although the decision trees generated by these methods are accurate and efficient, they often suffer the disadvantage of excessive complexity that can render them incomprehensible to experts. It is questionable whether opaque structures of this kind can be described as knowledge, no matter how well they function. This paper discusses techniques for simplifying decision trees without compromising their accuracy. Four methods are described, illustrated, and compared on a test-bed of decision trees from a variety of domains.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6453</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>ARIADNE: Pattern-Directed Inference and Hierarchical Abstraction in Protein Structure Recognition</title>
<link>https://hdl.handle.net/1721.1/6452</link>
<description>ARIADNE: Pattern-Directed Inference and Hierarchical Abstraction in Protein Structure Recognition
Lathrop, Richard H.; Webster, Teresa A.; Smith, Temple F.
There are many situations in which a very detailed low-level description encodes, through a hierarchical organization, a recognizable higher-order pattern. The macro-molecular structural conformations of proteins exhibit higher order regularities whose recognition is complicated by many factors. ARIADNE searches for similarities between structural descriptors and hypothesized protein structure at levels more abstract than the primary sequence, based on differential similarity to rule antecedents and the controlled use of tentative higher-order structural hypotheses. Inference is grounded solely in knowledge derivable from the primary sequence, and exploits secondary structure predictions. A novel proposed alignment and functional domain identification of the aminoacyl-tRNA synthetases was found using this system.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6452</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Achieving Artificial Intelligence through Building Robots</title>
<link>https://hdl.handle.net/1721.1/6451</link>
<description>Achieving Artificial Intelligence through Building Robots
Brooks, Rodney A.
We argue that generally accepted methodologies of Artificial Intelligence research are limited in the proportion of human level intelligence they can be expected to emulate. We argue that the currently accepted decompositions and static representations used in such research are wrong. We argue for a shift to a process based model, with a decomposition based on task achieving behaviors as the organizational principle. In particular we advocate building robotic insects.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6451</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery Systems</title>
<link>https://hdl.handle.net/1721.1/6450</link>
<description>Discovery Systems
Haase, Kenneth W., Jr.
Cyrano is a thoughtful reimplementation of Lenat's controversial Eurisko program, designed to perform automated discovery and concept formation in a variety of technical fields. The 'thought' in the reimplementation has come from several directions: an appeal to basic principles, which led to identifying constraints of modularity and consistency on the design of discovery systems; an appeal to transparency, which led to collapsing more and more of the control structure into the representation; and an appeal to accountability, which led to the explicit specification of dependencies in the concept formation process. The process of reimplementing Lenat's work has already revealed several insights into the nature of Eurisko-like systems in general; these insights are incorporated into the design of Cyrano. Foremost among these new insights is the characterization of Eurisko-like systems (which I call inquisitive systems) as search processes which dynamically reconfigure their search space by the formation of new concepts and representations. This insight reveals requirements for modularity and 'consistency' in the definition of new concepts and representations.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6450</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Solution of Ill-Posed Problems in Computational Vision</title>
<link>https://hdl.handle.net/1721.1/6449</link>
<description>Probabilistic Solution of Ill-Posed Problems in Computational Vision
Marroquin, J.; Mitter, S.; Poggio, Tomaso A.
We formulate several problems in early vision as inverse problems. Among the solution methods we review standard regularization theory, discuss its limitations, and present new stochastic (in particular, Bayesian) techniques based on Markov Random Field models for their solution. We derive efficient algorithms and describe parallel implementations on digital parallel SIMD architectures, as well as a new class of parallel hybrid computers that mix digital with analog components.
</description>
<pubDate>Sun, 01 Mar 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6449</guid>
<dc:date>1987-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simple Motion Planning Algorithm for General Robot Manipulators</title>
<link>https://hdl.handle.net/1721.1/6448</link>
<description>A Simple Motion Planning Algorithm for General Robot Manipulators
Lozano-Perez, Tomas
This paper presents a simple and efficient algorithm, using configuration space, to plan collision-free motions for general manipulators. We describe an implementation of the algorithm for manipulators made up of revolute joints. The configuration-space obstacles for an n degree-of-freedom manipulator are approximated by sets of n-1 dimensional slices, recursively built up from one dimensional slices. This obstacle representation leads to an efficient approximation of the free space outside of the configuration-space obstacles.
</description>
<pubDate>Sun, 01 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6448</guid>
<dc:date>1986-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining Natural Language Grammars in GPSG</title>
<link>https://hdl.handle.net/1721.1/6447</link>
<description>Defining Natural Language Grammars in GPSG
Ristad, Eric Sven
This paper is a formal analysis of whether generalized phrase structure grammar's (GPSG) weak context-free generative power will allow it to achieve three of its central goals: (1) to characterize all and only the natural language grammars, (2) to algorithmically determine membership and generative power consequences of GPSGs, and (3) to embody the universalism of natural language entirely in the formal system. I prove that "=E*?" is undecidable for GPSGs and, on the basis of this result and the unnaturalness of E*, I argue that GPSG's three goals and its weak context-free generative power conflict with each other: there is no algorithmic way of knowing whether any given GPSG generates a natural language or an unnatural one. The paper concludes with a diagnosis of the result and suggests that the problem might be met by abandoning the weak context-free framework and assuming substantive constraints.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6447</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Complexity of Current GPSG Theory</title>
<link>https://hdl.handle.net/1721.1/6446</link>
<description>Computational Complexity of Current GPSG Theory
Ristad, Eric Sven
An important goal of computational linguistics has been to use linguistic theory to guide the construction of computationally efficient real-world natural language processing systems. At first glance, the entirely new generalized phrase structure grammar (GPSG) theory of Gazdar, Klein, Pullum, and Sag (1985) appears to be a blessing on two counts. First, their precise formal system and the broad empirical coverage of their published English grammar might be a direct guide for a transparent parser design and implementation. Second, since GPSG has weak context-free generative power and context-free languages can be parsed in O(n³) by a wide range of algorithms, GPSG parsers would appear to run in polynomial time. This widely-assumed GPSG "efficient parsability" result is misleading: here we prove that the universal recognition problem for the new GPSG theory is exponentially-polynomial time hard, and assuredly intractable. The paper pinpoints sources of intractability (e.g., metarules and syntactic features) in the GPSG formal system and concludes with some linguistically and computationally motivated restrictions on GPSG.
</description>
<pubDate>Tue, 01 Apr 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6446</guid>
<dc:date>1986-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Issues in Model Based Troubleshooting</title>
<link>https://hdl.handle.net/1721.1/6445</link>
<description>Issues in Model Based Troubleshooting
Hamscher, Walter; Davis, Randall
To determine why something has stopped working, it's helpful to know how it was supposed to work in the first place. This simple fact underlies recent work on a number of systems that do diagnosis from knowledge about the internal structure and behavior of components of the malfunctioning device. Recently much work has been done in this vein in many domains with an apparent diversity of techniques. But the variety of domains and the variety of computational mechanisms used to implement these systems tend to obscure two important facts. First, existing programs have similar mechanisms for generating and testing fault hypotheses. Second, most of these systems have similar built-in assumptions about both the devices being diagnosed and their failure modes; these assumptions in turn limit the generality of the programs. The purpose of this paper is to identify the problems and non-problems in model based troubleshooting. The non-problems are in generating and testing fault hypotheses about misbehaving components in simple static devices; a small core of largely equivalent techniques covers the apparent profusion of existing approaches. The problems occur with devices that aren't static, aren't simple and whose components fail in ways current programs don't hypothesize and hence can't diagnose.
</description>
<pubDate>Sun, 01 Mar 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6445</guid>
<dc:date>1987-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic AI: Translating Piaget into Lisp</title>
<link>https://hdl.handle.net/1721.1/6444</link>
<description>Genetic AI: Translating Piaget into Lisp
Drescher, Gary L.
This paper presents a constructivist model of human cognitive development during infancy. According to constructivism, the elements of mental representation -- even such basic elements as the concept of physical object -- are constructed afresh by each individual, rather than being innately supplied. Here I propose a (partially specified, not yet implemented) mechanism, the Schema Mechanism; this mechanism is intended to achieve a series of cognitive constructions characteristic of infants' sensorimotor-stage development, primarily as described by Piaget. In reference to Piaget's 'genetic epistemology', I call this approach genetic AI -- 'genetic' not in the sense of genes, but in the sense of genesis: development from the point of origin.
</description>
<pubDate>Sat, 01 Feb 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6444</guid>
<dc:date>1986-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Classifying Objects from Visual Information</title>
<link>https://hdl.handle.net/1721.1/6443</link>
<description>Classifying Objects from Visual Information
Bobick, Aaron; Richards, Whitman
Consider a world of 'objects.' Our goal is to place these objects into categories that are useful to the observer using sensory data. One criterion for utility is that the categories allow the observer to infer the object's potential behaviors, which are often non-observable. Under what conditions can such useful categories be created? We propose a solution which requires (1) that modes or clusters of natural structures are present in the world, and (2) that the physical properties of these structures are reflected in the sensory data used by the observer for classification. Given these two constraints, we explore the type of additional knowledge sufficient for the observer to generate an internal representation that makes explicit the natural modes. Finally we develop a formal expression of the object classification problem.
</description>
<pubDate>Sun, 01 Jun 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6443</guid>
<dc:date>1986-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Outer Solar System for 210 Million Years</title>
<link>https://hdl.handle.net/1721.1/6442</link>
<description>The Outer Solar System for 210 Million Years
Applegate, James H.; Douglas, Michael R.; Gursel, Yekta; Sussman, Gerald Jay; Wisdom, Jack
We used a special purpose computer to integrate the orbits of the outer five planets for 100 Myr into the future and 100 Myr into the past. The strongest features in the Fourier transforms of the orbital elements of the Jovian planets can be identified with the frequencies predicted by linear secular theory. Many of the weaker features in the Fourier spectra are identified as linear combinations of the basic frequencies. We note serious differences between our measurements and the predictions of Bretagnon (1974). The amplitude of the 3.796 Myr period libration of Pluto's longitude of perihelion is modulated with a period of 34 Myr. Very long periods, on the order of 137 million years, are also seen.
</description>
<pubDate>Sat, 01 Feb 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6442</guid>
<dc:date>1986-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Passive Navigation: Analytical Solution for Quadratic Patches</title>
<link>https://hdl.handle.net/1721.1/6441</link>
<description>Direct Passive Navigation: Analytical Solution for Quadratic Patches
Negahdaripour, Shahriar; Yuille, Alan
In this paper, we solve the problem of  recovering the motion of an observer relative  to a surface which can be locally  approximated by a quadratic patch directly  from image brightness values. We do not  compute the optical flow as an intermediate  step. We use the coefficients of the Taylor  series expansion of the intensity function in  two frames to determine 15 intermediate  parameters, termed the essential  parameters, from a set of linear equations.  We then solve analytically for the motion and  structure parameters from a set of nonlinear  equations in terms of these intermediate  parameters. We show that the solution is  always unique, unlike some earlier results  that reported two-fold ambiguities in some  special cases.
</description>
<pubDate>Sat, 01 Mar 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6441</guid>
<dc:date>1986-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatio-Temporal Reasoning and Linear Inequalities</title>
<link>https://hdl.handle.net/1721.1/6440</link>
<description>Spatio-Temporal Reasoning and Linear Inequalities
Valdes-Perez, Raul E.
Time and space are sufficiently similar to warrant, in certain cases, a common representation in AI problem-solving systems. What is represented is often the constraints that hold between objects, and a central concern is the overall consistency of a set of constraints. This paper scrutinizes two current approaches to spatio-temporal reasoning. The suitability of Allen's temporal algebra for constraint networks is influenced directly by the mathematical properties of the algebra. These properties are extracted by a formulation as a network of set-theoretic relations, such that some previous theorems due to Montanari apply. Some new theorems concerning consistency of these temporal constraint networks are also presented.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6440</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagram Understanding: The Intersection of Computer Vision and Graphics</title>
<link>https://hdl.handle.net/1721.1/6439</link>
<description>Diagram Understanding: The Intersection of Computer Vision and Graphics
Montalvo, Fanya S.
A problem common to Computer Vision and Computer Graphics is identified. It is the problem of representing, acquiring and validating symbolic descriptions of visual properties. The intersection of Computer Vision and Computer Graphics provides a basis for diagrammatic conversations between users and systems. I call this problem domain Diagram Understanding because of its analogy with Natural Language Understanding. The recognition and generation of visual objects from symbolic descriptions are two sides of the same coin. A paradigm for the discovery and validation of higher-level visual properties is introduced. The paradigm involves two aspects. One is the notion of denotation: the map between symbolic descriptions and visual properties. The denotation map can be validated by focusing on the conversation between users and a system. The second aspect involves a method for discovering a natural, rich set of visual primitives. The notion of visual property is expanded, and the paradigm is further illustrated with a traditional business graphics example.
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6439</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hermeneutics: From Textual Explication to Computer Understanding?</title>
<link>https://hdl.handle.net/1721.1/6438</link>
<description>Hermeneutics: From Textual Explication to Computer Understanding?
Mallery, John C.; Hurwitz, Roger; Duffy, Gavan
Hermeneutics, a branch of continental European philosophy concerned with human understanding and the interpretation of written texts, offers insights that may contribute to the understanding of meaning, translation, architectures for natural language understanding, and even to the methods suitable for scientific inquiry in AI. After briefly reviewing the historical development of hermeneutics as a method of interpretation, this article examines the contributions of hermeneutics to the human sciences. This background provides perspective for a review of recent hermeneutically-oriented AI research, including the Alker, Lehnert and Schneider computer-assisted techniques for coding the affective structure of narratives, the earlier positive proposal by Winograd and Bateman, the later pessimism of Winograd and Flores on the possibility of AI, as well as the system-building efforts of Duffy and Mallery.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6438</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Optical Flow of Planar Surfaces</title>
<link>https://hdl.handle.net/1721.1/6437</link>
<description>The Optical Flow of Planar Surfaces
Ullman, Shimon
The human visual system can recover the 3D  shape of moving objects on the basis of  motion information alone. Computational  studies of this capacity have considered  primarily non-planar rigid objects. With  respect to moving planar surfaces, previous  studies by Hay (1966), Tsai and Huang  (1981), Longuet-Higgins (1984), have shown  that the planar velocity field has in general a  two-fold ambiguity: there are two different  planes engaged in different motions that can  induce the same velocity field. The current  analysis extends the analysis of the planar  velocity field in four directions: (1) the use of  flow parameters of the type suggested by  Koenderink and van Doorn (1975), (2) the  exclusion of confusable non-planar solutions,  (3) a new proof and a new method for  computing the 3D motion and surface  orientation, and (4) a comparison with the  information available in orthographic velocity  fields, which is important for determining the  stability of the 3D recovery process.
</description>
<pubDate>Sun, 01 Dec 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6437</guid>
<dc:date>1985-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Vision Chip</title>
<link>https://hdl.handle.net/1721.1/6436</link>
<description>A Vision Chip
Batali, John
Some well understood and well justified algorithms for early visual processing must be implemented in hardware for later visual processing to be studied. This paper describes the design and hardware implementation of a particular operator of visual processing. I constructed an NMOS VLSI circuit that computes the gradient, and detects zero-crossings, in a digital video image in real time. The algorithms employed by the chip, the design process that led to it, and its capabilities and limitations are discussed. For hardware to be a useful tool for AI, designing it must be as much like programming as possible. This paper concludes with some discussion of how such a goal can be met.
</description>
<pubDate>Fri, 01 May 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6436</guid>
<dc:date>1981-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Circumscribing Circumscription: A Guide to Relevance and Incompleteness</title>
<link>https://hdl.handle.net/1721.1/6435</link>
<description>Circumscribing Circumscription: A Guide to Relevance and Incompleteness
Williams, Brian C.
Intelligent agents in the physical world must  work from incomplete information due to  partial knowledge and limited resources. An  agent copes with these limitations by applying  rules of conjecture to make reasonable  assumptions about what is known.  Circumscription, proposed by McCarthy, is the  formalization of a particularly important rule of  conjecture likened to Occam's razor. That is,  the set of all objects satisfying a certain  property is the smallest set of objects that is  consistent with what is known. This paper  examines closely the properties and the  semantics underlying circumscription,  considering both its expressive power and  limitations. In addition we study  circumscription's relationship to several  related formalisms, such as negation by  failure, the closed world assumption, default  reasoning and Planner's THNOT. In the  discussion a number of extensions to  circumscription are proposed, allowing one to  tightly focus its scope of applicability. In  addition, several new rules of conjecture are  proposed based on the notions of relevance  and minimality. Finally a synthesis between  the approaches of McCarthy and Konolige is  used to extend circumscription, as well as  several other rules of conjecture, to account  for resource limitations.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6435</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Sequential Phonetic Constraints in Recognizing Spoken Words</title>
<link>https://hdl.handle.net/1721.1/6434</link>
<description>Exploiting Sequential Phonetic Constraints in Recognizing Spoken Words
Huttenlocher, Daniel P.
Machine recognition of spoken language  requires developing more robust recognition  algorithms. The current paper extends the  work of Shipman and Zue by investigating the  power of partial phonetic descriptions. First  we demonstrate that sequences of manner of  articulation classes are more reliable and  provide more constraint than other classes.  Alone these are of limited utility, due to the  high degree of variability in natural speech.  This variability is not uniform, however, as  most modifications and deletions occur in  unstressed syllables. The stressed syllables  provide substantially more constraint. This  indicates that recognition algorithms can be  made more robust by exploiting the manner of  articulation information in stressed syllables.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6434</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism</title>
<link>https://hdl.handle.net/1721.1/6433</link>
<description>Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism
Agha, Gul; Hewitt, Carl
We argue that the ability to model shared  objects with changing local states, dynamic  reconfigurability, and inherent parallelism are  desirable properties of any model of  concurrency. The actor model addresses  these issues in a uniform framework. This  paper briefly describes the concurrent  programming language Act3 and the  principles that have guided its development.  Act3 advances the state of the art in  programming languages by combining the  advantages of object-oriented programming  with those of functional programming. We  also discuss considerations relevant to large-scale parallelism in the context of open  systems, and define an abstract model which  establishes the equivalence of systems  defined by actor programs.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6433</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Robust Layered Control System for a Mobile Robot</title>
<link>https://hdl.handle.net/1721.1/6432</link>
<description>A Robust Layered Control System for a Mobile Robot
Brooks, Rodney A.
We describe a new architecture for controlling mobile robots. Layers of control system are built to let the robot operate at increasing levels of competence. Layers are made up of asynchronous modules which communicate over low bandwidth channels. Each module is an instance of a fairly simple computational machine. Higher level layers can subsume the roles of lower levels by suppressing their outputs. However, lower levels continue to function as higher levels are added. The result is a robust and flexible robot control system. The system is intended to control a robot that wanders the office areas of our laboratory, building maps of its surroundings. In this paper we demonstrate the system controlling a detailed simulation of the robot.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6432</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Passive Navigation: Analytical Solution for Planes</title>
<link>https://hdl.handle.net/1721.1/6431</link>
<description>Direct Passive Navigation: Analytical Solution for Planes
Negahdaripour, Shahriar
In this paper, we derive a closed form solution  for recovering the motion of an observer  relative to a planar surface directly from image  brightness derivatives. We do not compute the  optical flow as an intermediate step, only the  spatial and temporal intensity gradients at a  minimum of 8 points. We solve a linear matrix  equation for the elements of a 3x3 matrix. The  eigenvalue decomposition of its symmetric  part is then used to compute the motion  parameters and the plane orientation.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6431</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Synthesis of Force-Closure Grasps</title>
<link>https://hdl.handle.net/1721.1/6430</link>
<description>The Synthesis of Force-Closure Grasps
Nguyen, Van-Duc
This paper addresses the problem of synthesizing planar grasps that have force closure. A grasp on an object is a force closure grasp if and only if we can exert, through the set of contacts, arbitrary force and moment on this object. Equivalently, any motion of the object is resisted by a contact force; that is, the object cannot break contact with the finger tips without some non-zero external work. The force closure constraint is addressed from three different points of view: mathematics, physics, and computational geometry. The last formulation results in fast and simple polynomial time algorithms for directly constructing force closure grasps. We can also find grasps where each finger has an independent region of contact on the set of edges.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6430</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Edge Detection</title>
<link>https://hdl.handle.net/1721.1/6429</link>
<description>Edge Detection
Hildreth, Ellen C.
The goal of vision is to recover physical  properties of objects in a scene, such as the  location of object boundaries and the  structure, color, and texture of object surfaces,  from the two-dimensional image that is  projected onto the eye or camera. The first  clues about the physical properties of the  scene are provided by the changes of  intensity in the image. The importance of  intensity changes and edges in early visual  processing has led to extensive research on  their detection, description, and use, both in  computer and biological vision systems. This  article reviews some of the theory that  underlies the detection of edges and the  methods used to carry out this analysis.
</description>
<pubDate>Sun, 01 Sep 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6429</guid>
<dc:date>1985-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Variable Precision Logic</title>
<link>https://hdl.handle.net/1721.1/6428</link>
<description>Variable Precision Logic
Michalski, Ryszard S.; Winston, Patrick H.
Variable precision logic is concerned with problems of reasoning with incomplete information and under time constraints. It offers mechanisms for handling trade-offs between the precision of inferences and the computational efficiency of deriving them. Of the two aspects of precision, the specificity of conclusions and the certainty of belief in them, we address here primarily the latter, and employ censored production rules as an underlying representational and computational mechanism. Such rules are created by augmenting ordinary production rules with an exception condition and are written in the form if A then B unless C, where C is the exception condition. From a control viewpoint, censored production rules are intended for situations in which the implication A → B holds frequently and the assertion C holds rarely. Systems using censored production rules are free to ignore the exception conditions when time is at a premium. Given more time, the exception conditions are examined, lending credibility to initial, high-speed answers, or changing them. Such logical systems therefore exhibit variable certainty of conclusions, reflecting variable investments of computational resources in conducting reasoning. From a logical viewpoint, the unless operator between B and C acts as the exclusive-or operator. From an expository viewpoint, the if A then B part of the censored production rule expresses important information (e.g., a causal relationship), while the unless C part acts only as a switch that changes the polarity of B to ¬B when C holds. These expository properties are captured quantitatively by augmenting censored rules with two parameters that indicate the certainty of the implication if A then B: the parameter δ is the certainty when the truth value of C is unknown, and γ is the certainty when C is known to be false.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6428</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computational Complexity of Two-Level Morphology</title>
<link>https://hdl.handle.net/1721.1/6427</link>
<description>The Computational Complexity of Two-Level Morphology
Barton, G. Edward, Jr.
Morphological analysis requires knowledge of the stems, affixes, combinatory patterns, and spelling-change processes of a language. The computational difficulty of the task can be clarified by investigating the computational characteristics of specific models of morphological processing. The use of finite-state machinery in the "two-level" model of Kimmo Koskenniemi does not guarantee efficient processing. Reductions of the satisfiability problem show that finding the proper lexical/surface correspondence in a two-level generation or recognition problem can be computationally difficult. However, another source of complexity in the existing algorithms can be sharply reduced by changing the implementation of the dictionary component. A merged dictionary with bit-vectors reduces the number of choices among alternative dictionary subdivisions by allowing several subdivisions to be searched at once.
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6427</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sensing Strategies for Disambiguating Among Multiple Objects in Known Poses</title>
<link>https://hdl.handle.net/1721.1/6426</link>
<description>Sensing Strategies for Disambiguating Among Multiple Objects in Known Poses
Grimson, W. Eric L.
The need for intelligent interaction of a robot  with its environment frequently requires  sensing of the environment. Further, the need  for rapid execution requires that the interaction  between sensing and action take place using  as little sensory data as possible, while still  being reliable. Previous work has developed  a technique for rapidly determining the  feasible poses of an object from sparse,  noisy, occluded sensory data. In this paper,  we examine techniques for acquiring position  and surface orientation data about points on  the surfaces of objects, with the intent of  selecting sensory points that will force a  unique interpretation of the pose of the object  with as few data points as possible. Under  some simple assumptions about the sensing  geometry, we derive a technique for predicting  optimal sensing positions. The technique  has been implemented and tested. To fully  specify the algorithm, we need estimates of  the error in estimating the position and  orientation of the object, and we derive  analytic expressions for such error for the  case of one particular approach to object  recognition.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6426</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Closed Form Solution for Inverse Kinematics of Robot Manipulator with Redundancy</title>
<link>https://hdl.handle.net/1721.1/6425</link>
<description>A Closed Form Solution for Inverse Kinematics of Robot Manipulator with Redundancy
Chang, Pyung H.
A closed form equation for the inverse kinematics of a manipulator with redundancy is derived using the Lagrangian multiplier method. The proposed equation is proved to provide the exact equilibrium state for the resolved motion method, and is shown to be a general expression that yields the extended Jacobian method. The repeatability problem in the resolved motion method does not exist in the proposed equation. The equation is demonstrated to give more accurate trajectories than the resolved motion method.
</description>
<pubDate>Sat, 01 Mar 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6425</guid>
<dc:date>1986-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revised Report On The Algorithmic Language Scheme</title>
<link>https://hdl.handle.net/1721.1/6424</link>
<description>Revised Report On The Algorithmic Language Scheme
Clinger, William; Rees, Jonathan
Data and procedures and the values they amass, Higher-order functions to combine and mix and match, Objects with their local state, the messages they pass, A property, a package, the control point for a catch -- In the Lambda Order they are all first-class. One thing to name them all, one thing to define them, one thing to place them in environments and bind them, in the Lambda Order they are all first-class. Keywords: Scheme, Lisp, functional programming, computer languages.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6424</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Passive Navigation</title>
<link>https://hdl.handle.net/1721.1/6423</link>
<description>Direct Passive Navigation
Negahdaripour, Shahriar; Horn, Berthold K.P.
In this paper, we show how to recover the motion of an observer relative to a planar surface directly from image brightness derivatives. We do not compute the optical flow as an intermediate step. We derive a set of nine non-linear equations using a least-squares formulation. A simple iterative scheme allows us to find either of two possible solutions of these equations. An initial pass over the relevant image region is used to accumulate a number of moments of the image brightness derivatives. All of the quantities used in the iteration can be efficiently computed from these totals, without the need to refer back to the image. A new, compact notation allows us to show easily that there are at most two planar solutions. Key words: Passive Navigation, Optical Flow, Structure and Motion, Least Squares, Planar Surface, Non-linear Equations, Dual Solution, Planar Motion Field Equation.
</description>
<pubDate>Fri, 01 Feb 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6423</guid>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape and Source from Shading</title>
<link>https://hdl.handle.net/1721.1/6422</link>
<description>Shape and Source from Shading
Brooks, Michael J.; Horn, Berthold K.P.
Well-known methods for solving the shape-from-shading problem require knowledge of the reflectance map. Here we show how the shape-from-shading problem can be solved when the reflectance map is not available, but is known to have a given form with some unknown parameters. This happens, for example, when the surface is known to be Lambertian, but the direction to the light source is not known. We give an iterative algorithm that alternately estimates the surface shape and the light source direction. Use of the unit normal in parameterizing the reflectance map, rather than the gradient or stereographic coordinates, simplifies the analysis. Our approach also leads to an iterative scheme for computing shape from shading that adjusts the current estimates of the local normals toward or away from the direction of the light source. The amount of adjustment is proportional to the current difference between the predicted and the observed brightness. We also develop generalizations to less constrained forms of reflectance maps.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6422</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spotlight on Attention</title>
<link>https://hdl.handle.net/1721.1/6421</link>
<description>Spotlight on Attention
Hurlbert, A.; Poggio, Tomaso A.
We review some recent psychophysical,  psychological and anatomical data which  highlight the important role of attention in  visual information processing, and discuss  the evidence for a serial spotlight of attention.  We point out the connections between the  questions raised by the spotlight model and  computational results on the intrinsic  parallelism of several tasks in vision.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6421</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>PP: A LISP Pretty Printing System</title>
<link>https://hdl.handle.net/1721.1/6420</link>
<description>PP: A LISP Pretty Printing System
Waters, Richard C.
The PP system provides an efficient implementation of the Common Lisp pretty printing function PPRINT. In addition, PP goes beyond ordinary pretty printers by providing mechanisms which allow the user to control the exact form of pretty printed output. This is done by extending LISP in two ways. First, several new FORMAT directives are provided which support dynamic decisions about the placement of newlines based on the line width available for output. Second, the concept of print-self methods is extended so that it can be applied to lists as well as to objects which can receive messages. Together, these extensions support pretty printing of both programs and data structures. The PP system also modifies the way that the Lisp printer handles the abbreviation of output. The traditional mechanisms for abbreviating lists based on nesting depth and length are extended so that they automatically apply to every kind of structure, without the user having to take any explicit action when writing print-self methods. A new abbreviation mechanism is introduced which can be used to limit the total number of lines printed.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6420</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Variational Approach to Shape from Shading</title>
<link>https://hdl.handle.net/1721.1/6419</link>
<description>The Variational Approach to Shape from Shading
Horn, Berthold K.P.
We develop a systematic approach to the discovery of parallel iterative schemes for solving the shape-from-shading problem on a grid. A standard procedure for finding such schemes is outlined, and subsequently used to derive several new ones. The shape-from-shading problem is known to be mathematically equivalent to a non-linear first-order partial differential equation in surface elevation. To avoid the problems inherent in methods used to solve such equations, we follow previous work in reformulating the problem as one of finding a surface orientation field that minimizes the integral of the brightness error. The calculus of variations is then employed to derive the appropriate Euler equations on which iterative schemes can be based. The problem of minimizing the integral of the brightness error term is ill posed, since it has an infinite number of solutions in terms of surface orientation fields. A previous method used a regularization technique to overcome this difficulty: an extra term was added to the integral to obtain an approximation to a solution that was as smooth as possible. We point out here that surface orientation has to obey an integrability constraint if it is to correspond to an underlying smooth surface. Regularization methods do not guarantee that the surface orientation recovered satisfies this constraint. Consequently, we attempt to develop a method that enforces integrability, but fail to find a convergent iterative scheme based on the resulting Euler equations. We show, however, that such a scheme can be derived if, instead of strictly enforcing the constraint, a penalty term derived from the constraint is adopted. This new scheme, while it can be expressed simply and elegantly using the surface gradient, unfortunately cannot deal with constraints imposed by occluding boundaries.
These constraints are crucial if ambiguities in the solution of the shape-from-shading problem are to be avoided. Different schemes result if one uses different parameters to describe surface orientation. We derive two new schemes, using unit surface normals, that facilitate the incorporation of the occluding boundary information. These schemes, while more complex, have several advantages over previous ones.
</description>
<pubDate>Fri, 01 Mar 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6419</guid>
<dc:date>1985-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Complexity of ID/LP Parsing</title>
<link>https://hdl.handle.net/1721.1/6418</link>
<description>On the Complexity of ID/LP Parsing
Barton, G. Edward, Jr.
Recent linguistic theories cast surface complexity as the result of interacting subsystems of constraints. For instance, the ID/LP grammar formalism separates constraints on immediate dominance from those on linear order. Shieber (1983) has shown how to carry out direct parsing of ID/LP grammars. His algorithm uses ID and LP constraints directly in language processing, without expanding them into a context-free "object grammar." This report examines the computational difficulty of ID/LP parsing. Shieber's purported O(|G|^2 n^3) runtime bound underestimated the difficulty of ID/LP parsing; the worst-case runtime of his algorithm is exponential in grammar size. A reduction from the vertex-cover problem proves that ID/LP parsing is NP-complete. The growth of the internal data structures is the source of difficulty in Shieber's algorithm. The computational and linguistic implications of these results are discussed. Despite the potential for combinatorial explosion, Shieber's algorithm remains better than the alternative of parsing an expanded object grammar.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6418</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hypothesizing and Refining Causal Models</title>
<link>https://hdl.handle.net/1721.1/6417</link>
<description>Hypothesizing and Refining Causal Models
Doyle, Richard J.
An important common sense competence is the ability to hypothesize causal relations. This paper presents a set of constraints which make the problem of formulating causal hypotheses about simple physical systems a tractable one. The constraints include: (1) a temporal and physical proximity requirement, (2) a set of abstract causal explanations for changes in physical systems in terms of dependences between quantities, and (3) a teleological assumption that dependences in designed physical systems are functions. These constraints were embedded in a learning system which was tested in two domains: a sink and a toaster. The learning system successfully generated and refined naïve causal models of these simple physical systems. The causal models which emerge from the learning process support causal reasoning: explanation, prediction, and planning. Inaccurate predictions and failed plans in turn indicate deficiencies in the causal models and the need to re-hypothesize. Thus learning supports reasoning which leads to further learning. The learning system makes use of standard inductive rules of inference as well as the constraints on causal hypotheses to generalize its causal models. Finally, a simple example involving an analogy illustrates another way to repair incomplete causal models.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6417</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of Censors for Nonmonotonic Reasoning and Analogy in Medical Decision-Making</title>
<link>https://hdl.handle.net/1721.1/6416</link>
<description>The Use of Censors for Nonmonotonic Reasoning and Analogy in Medical Decision-Making
Mansour, Hormoz
A patient rarely has a single, isolated disease. The situation is usually much more complex, since the different parts of the human organism and metabolism interact with each other and follow several feedback patterns. These interactions and feedback patterns become more important with the addition of the external environment. When a disease is present, the first steps of the medical diagnosis should be to research and to determine whether another disease interacts with ("censors") or changes the significant symptoms, syndromes, or laboratory test results of the first disease. Understanding this interaction and the appropriate reasoning is based on a type of non-monotonic logic. We will try, within this paper, to see the effect of two diseases on each other. One important part of the effect of two diseases on each other is the effect of what we call "Censors." In addition, causal reasoning, reasoning by analogy, and learning from precedents are important and necessary for a human-like expert in medicine. Some aspects of their application to thyroid diseases, with an implemented system, are considered in this paper.
</description>
<pubDate>Fri, 01 Nov 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6416</guid>
<dc:date>1985-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>What a Parallel Programming Language Has to Let You Say</title>
<link>https://hdl.handle.net/1721.1/6415</link>
<description>What a Parallel Programming Language Has to Let You Say
Bawden, Alan; Agre, Philip E.
We have implemented in simulation a prototype language for the Connection Machine called CL1. CL1 is an extrapolation of serial machine programming language technology: in CL1 one programs the individual processors to perform local computations and talk to the communications network. We present details of the largest of our experiments with CL1, an interpreter for Scheme (a dialect of Lisp) that allows a large number of different Scheme programs to be run in parallel on the otherwise SIMD Connection Machine. Our aim was not to propose Scheme as a language for Connection Machine programming, but to gain experience using CL1 to implement an interesting and familiar algorithm. Consideration of the difficulties we encountered led us to the conclusion that CL1 programs do not capture enough of the causal structure of the processes they describe. Starting from this observation, we have designed a successor language called CGL (for Connection Graph Language).
</description>
<pubDate>Sat, 01 Sep 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6415</guid>
<dc:date>1984-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biophysics of Computation: Neurons, Synapses and Membranes</title>
<link>https://hdl.handle.net/1721.1/6414</link>
<description>Biophysics of Computation: Neurons, Synapses and Membranes
Koch, Christof; Poggio, Tomaso
Synapses, membranes and  neurotransmitters play an important role in  processing information in the nervous  system. We do not know, however, what  biophysical mechanisms are critical for  neuronal computations, what elementary  information processing operations they  implement, and which sensory or motor  computations they underlie. In this paper, we  outline an approach to these problems. We  will review a number of different biophysical  mechanisms such as synaptic interactions  between excitation and inhibition, dendritic  spines, non-impulse generating membrane  nonlinearities and transmitter-regulated  voltage channels. For each one, we discuss  the information processing operations that  may be implemented. All of these  mechanisms act either within a few  milliseconds, such as the action potential or  synaptic transmission, or over several  hundred milliseconds or even seconds,  modulating some property of the circuit. In  some cases we will suggest specific  examples where a biophysical mechanism  underlies a given computation. In particular,  we will discuss the neuronal operations, and  their implementation, underlying direction  selectivity in the vertebrate retina.
</description>
<pubDate>Mon, 01 Oct 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6414</guid>
<dc:date>1984-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Reconstruction Preserving Discontinuities</title>
<link>https://hdl.handle.net/1721.1/6413</link>
<description>Surface Reconstruction Preserving Discontinuities
Marroquin, J.L.
Well-known methods for solving the shape-from-shading problem require knowledge of the reflectance map. Here we show how the shape-from-shading problem can be solved when the reflectance map is not available, but is known to have a given form with some unknown parameters. This happens, for example, when the surface is known to be Lambertian, but the direction to the light source is not known. We give an iterative algorithm that alternately estimates the surface shape and the light source direction. Use of the unit normal in parameterizing the reflectance map, rather than the gradient or stereographic coordinates, simplifies the analysis. Our approach also leads to an iterative scheme for computing shape from shading that adjusts the current estimates of the surface normals toward or away from the direction of the light source. The amount of adjustment is proportional to the current difference between the predicted and the observed brightness. We also develop generalizations to less constrained forms of reflectance maps.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6413</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinematic Features of Unrestrained Arm Movements</title>
<link>https://hdl.handle.net/1721.1/6412</link>
<description>Kinematic Features of Unrestrained Arm Movements
Atkeson, Christopher G.; Hollerbach, John M.
Unrestrained human arm trajectories  between point targets have been investigated  using a three dimensional tracking apparatus,  the Selspot system. Movements were  executed between different points in a vertical  plane under varying conditions of speed and  hand-held load. In contrast to past results  which emphasized the straightness of hand  paths, movement regions were discovered in  which the hand paths were curved. All  movements, whether curved or straight,  showed an invariant tangential velocity profile  when normalized for speed and distance.  The velocity profile invariance with speed and  load is interpreted in terms of simplification of  the underlying arm dynamics, extending the  results of Hollerbach and Flash (1982).
</description>
<pubDate>Sun, 01 Jul 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6412</guid>
<dc:date>1984-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Principle-Based Parser</title>
<link>https://hdl.handle.net/1721.1/6411</link>
<description>Toward a Principle-Based Parser
Barton, G. Edward, Jr.
Parser design lags behind linguistic theory.  While modern transformational grammar has  largely abandoned complex, language-specific rule systems in favor of modular  subsystems of principles and parameters, the  rule systems that underlie existing natural-language parsers are still large, detailed, and  complicated. The shift to modular theories in  linguistics took place because of the scientific  disadvantages of such rule systems. Those  scientific ills translate into engineering  maladies that make building natural-language systems difficult. The cure for these  problems should be the same in parser  design as it was in linguistic theory. The shift  to modular theories of syntax should be  replicated in parsing practice; a parser should  base its actions on interacting modules of  principles and parameters rather than a  complex, monolithic rule system. If it can be  successfully carried out, the shift will make it  easier to build natural-language systems  because it will shorten and simplify the  language descriptions that are needed for  parsing. It will also allow parser design to  track new developments in linguistic theory.
</description>
<pubDate>Sun, 01 Jul 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6411</guid>
<dc:date>1984-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theoretical Analysis of the Electrical Properties of a X-Cell in the Cat's LGN</title>
<link>https://hdl.handle.net/1721.1/6410</link>
<description>A Theoretical Analysis of the Electrical Properties of a X-Cell in the Cat's LGN
Koch, Christof
Electron microscope studies of relay cells in the lateral geniculate nucleus of the cat have shown that the retinal input of X-cells is associated with a special synaptic circuitry, termed the spine-triad complex. The retinal afferents make an asymmetrical synapse with both a dendritic appendage of the X-cell and a geniculate interneuron. The interneuron contacts in turn the same dendritic appendage with a symmetrical synaptic profile. The retinal input to geniculate Y-cells is predominantly found on dendritic shafts without any triadic arrangement. We explore the integrative properties of X- and Y-cells resulting from this striking dichotomy in synaptic architecture. The basis of our analysis is the solution of the cable equation for a branched dendritic tree with a known somatic input resistance. Under the assumption that the geniculate interneuron mediates a shunting inhibition, activation of the interneuron reduces very efficiently the excitatory post-synaptic potential induced by the retinal afferent without affecting the electrical activity in the rest of the cell. Therefore, the spine-triad circuit implements the analog of an AND-NOT gate, unique to the X-system. Functionally, this corresponds to a presynaptic, feed-forward type of inhibition of the optic tract terminal. Since Y-cells lack this structure, inhibition acts globally, reducing the general electrical activity of the cell. We propose that geniculate interneurons gate the flow of visual information into the X-system as a function of the behavioral state of the animal, enhancing the center-surround antagonism and possibly mediating reciprocal lateral inhibition, eye-movement related suppression and selective visual attention.
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6410</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Coordination of Arm Movements: An Experimentally Confirmed Mathematical Model</title>
<link>https://hdl.handle.net/1721.1/6409</link>
<description>The Coordination of Arm Movements: An Experimentally Confirmed Mathematical Model
Flash, Tamar; Hogan, Neville
This paper presents studies of the coordination of voluntary human arm movements. A mathematical model is formulated which is shown to predict both the qualitative features and the quantitative details observed experimentally in planar, multi-joint arm movements. Coordination is modelled mathematically by defining an objective function, a measure of performance for any possible movement. The unique trajectory which yields the best performance is determined using dynamic optimization theory. In the work presented here the objective function is the square of the magnitude of jerk (rate of change of acceleration) of the hand integrated over the entire movement. This is equivalent to assuming that a major goal of motor coordination is the production of the smoothest possible movement of the hand. The theoretical analysis is based solely on the kinematics of movement independent of the dynamics of the musculoskeletal system, and is successful only when formulated in terms of the motion of the hand in extracorporal space. The implications with respect to movement organization are discussed.
</description>
<pubDate>Thu, 01 Nov 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6409</guid>
<dc:date>1984-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Analog Model of Computation for the Ill-Posed Problems of Early Vision</title>
<link>https://hdl.handle.net/1721.1/6408</link>
<description>An Analog Model of Computation for the Ill-Posed Problems of Early Vision
Poggio, Tomaso; Koch, Christof
A large gap exists at present between computational theories of vision and their possible implementation in neural hardware. The model of computation provided by the digital computer is clearly unsatisfactory for the neurobiologist, given the increasing evidence that neurons are complex devices, very different from simple digital switches. It is especially difficult to imagine how networks of neurons may solve the equations involved in vision algorithms in a way similar to digital computers. In this paper, we suggest an analog model of computation in electrical or chemical networks for a large class of vision problems, one that maps more easily onto biologically plausible mechanisms. Poggio and Torre (1984) have recently recognized that early vision problems such as motion analysis (Horn and Schunck, 1981; Hildreth, 1984a,b), edge detection (Torre and Poggio, 1984), surface interpolation (Grimson, 1981; Terzopoulos, 1984), shape-from-shading (Ikeuchi and Horn, 1981) and stereomatching can be characterized as mathematically ill-posed problems in the sense of Hadamard (1923). Ill-posed problems can be "solved", according to regularization theories, by variational principles of a specific type. A natural way of implementing variational problems is electrical, chemical or neuronal networks. We present specific networks for solving several low-level vision problems, such as the computation of visual motion and edge detection.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6408</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linguistic Support of Receptionists for Shared Resources</title>
<link>https://hdl.handle.net/1721.1/6407</link>
<description>Linguistic Support of Receptionists for Shared Resources
Hewitt, Carl; Reinhardt, Tom; Agha, Gul; Attardi, Giuseppe
This paper addresses linguistic issues that arise in providing support for shared resources in large scale concurrent systems. Our work is based on the Actor Model of computation, which unifies the lambda calculus, the sequential stored-program and the object-oriented models of computation. We show how receptionists can be used to regulate the use of shared resources by scheduling their access and providing protection against unauthorized or accidental access. A shared financial account is an example of the kind of resource that needs a receptionist. Issues involved in the implementation of scheduling policies for shared resources are also addressed. The modularity problems involved in implementing servers which multiplex the use of physical devices illustrate how delegation aids in the implementation of parallel problem solving systems for communities of actors.
</description>
<pubDate>Sat, 01 Sep 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6407</guid>
<dc:date>1984-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>PRISM: A Practical Real-Time Imaging Stereo Matcher</title>
<link>https://hdl.handle.net/1721.1/6406</link>
<description>PRISM: A Practical Real-Time Imaging Stereo Matcher
Nishihara, H.K.
A binocular-stereo-matching algorithm for making rapid visual range measurements in noisy images is described. This technique is developed for application to problems in robotics where noise tolerance, reliability, and speed are predominant issues. A high speed pipelined convolver for preprocessing images and an unstructured light technique for improving signal quality are introduced to help enhance performance to meet the demands of this task domain. These optimizations, however, are not sufficient. A closer examination of the problems encountered suggests that broader interpretations of both the objective of binocular stereo and of the zero-crossing theory of Marr and Poggio are required. In this paper, we restrict ourselves to the problem of making a single primitive surface measurement: for example, to determine whether or not a specified volume of space is occupied, to measure the range to a surface at an indicated image location, or to determine the elevation gradient at that position. In this framework we make a subtle but important shift from the explicit use of zero-crossing contours (in band-pass filtered images) as the elements matched between left and right images, to use of the signs between zero-crossings. With this change, we obtain a simpler algorithm with a reduced sensitivity to noise and a more predictable behavior. The PRISM system incorporates this algorithm with the unstructured light technique and a high speed digital convolver. It has been used successfully by others as a sensor in a path planning system and a bin picking system.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6406</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Information Storage Mechanism: Calcium and Spines</title>
<link>https://hdl.handle.net/1721.1/6405</link>
<description>An Information Storage Mechanism: Calcium and Spines
Robinson, Hugh; Koch, Christof
This proposal addresses some of the biophysical events possibly underlying fast activity-dependent changes in synaptic efficiency. Dendritic spines in the cortex have attracted increased attention over the last years as a possible locus of cellular plasticity, given the large number of studies reporting a close correlation between presynaptic activity (or lack thereof) and changes in spine shape. This is highlighted by recent reports showing that the spine cytoplasm contains high levels of actin. Moreover, it has been demonstrated that a high level of intracellular free calcium, Ca²⁺, is a prerequisite for various forms of synaptic potentiation. We propose a series of plausible steps, linking presynaptic electrical activity at dendritic spines with a short lasting change in spine geometry. Specifically, we conjecture that the spike-induced excitatory postsynaptic potential triggers an influx of Ca²⁺ into the spine, where it will rapidly bind to intracellular calcium buffers such as calmodulin and calcineurin. However, for prolonged or intense presynaptic electrical activity, these buffers will saturate; the free Ca²⁺ will then activate the actin/myosin network in the spine neck, reversibly shortening the length of the neck and increasing its diameter. This change in the geometry of the spine will lead to an increase in the synaptic efficiency of the synapse. We will discuss the implication of our proposal for the control of cellular plasticity and its relation to generalized attention and arousal.
</description>
<pubDate>Sun, 01 Apr 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6405</guid>
<dc:date>1984-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Generalized Ordering Constraint for Stereo Correspondence</title>
<link>https://hdl.handle.net/1721.1/6404</link>
<description>A Generalized Ordering Constraint for Stereo Correspondence
Yuille, A.L.; Poggio, Tomaso A.
The ordering constraint along epipolar lines is a powerful constraint that has been exploited by some recent stereomatching algorithms. We formulate a generalized ordering constraint, not restricted to epipolar lines. We prove several properties of the generalized ordering constraint and of the "forbidden zone", the set of matches that would violate the constraint. We consider both the orthographic and the perspective projection case, the latter for a simplified but standard stereo geometry. The disparity gradient limit found in the human stereo system may be related to a form of the ordering constraint. To illustrate our analysis we outline a simple algorithm that exploits the generalized ordering constraint for matching contours of wireframe objects. We also show that the use of the generalized ordering constraint implies several other stereo matching constraints: a) the ordering constraint along epipolar lines, b) figural continuity, c) Binford's cross-product constraint, d) Mayhew and Frisby's figural continuity constraint. We finally discuss ways of extending the algorithm to arbitrary 3-D objects.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6404</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Scientific Subroutines in LISP</title>
<link>https://hdl.handle.net/1721.1/6403</link>
<description>Some Scientific Subroutines in LISP
Roylance, Gerald
Here's a LISP library of mathematical functions: hyperbolic and inverse hyperbolic functions, Bessel functions, elliptic integrals, the gamma and beta functions, and the incomplete gamma and beta functions. There are probability density functions, cumulative distributions, and random number generators for the normal, Poisson, chi-square, Student's t, and Snedecor's F distributions; routines for integration, root finding, and convergence; and code to factor numbers and to run the Solovay-Strassen probabilistic prime test.
</description>
<pubDate>Sat, 01 Sep 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6403</guid>
<dc:date>1984-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ill-Posed Problems and Regularization Analysis in Early Vision</title>
<link>https://hdl.handle.net/1721.1/6402</link>
<description>Ill-Posed Problems and Regularization Analysis in Early Vision
Poggio, Tomaso; Torre, Vincent
One of the best definitions of early vision is  that it is inverse optics --- a set of  computational problems that both machines  and biological organisms have to solve. While  in classical optics the problem is to determine  the images of physical objects, vision is  confronted with the inverse problem of  recovering three-dimensional shape from the  light distribution in the image. Most processes  of early vision such as stereomatching,  computation of motion and the "structure  from" processes can be regarded as  solutions to inverse problems. This common  characteristic of early vision can be  formalized: most early vision problems are "ill-posed problems" in the sense of Hadamard.  We will show that a mathematical theory  developed for regularizing ill-posed problems  leads in a natural way to the solution of the  early vision problems in terms of variational  principles of a certain class. This is a new  theoretical framework for some of the  variational solutions already obtained in the  analysis of early vision processes. It also  shows how several other problems in early  vision can be approached and solved.
</description>
<pubDate>Sun, 01 Apr 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6402</guid>
<dc:date>1984-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining Grasp Points Using Photometric Stereo and the PRISM Binocular Stereo System</title>
<link>https://hdl.handle.net/1721.1/6401</link>
<description>Determining Grasp Points Using Photometric Stereo and the PRISM Binocular Stereo System
Ikeuchi, Katsushi; Nishihara, Keith H.; Horn, Berthold K.P.; Sobalvarro, Patrick; Nagata, Shigemi
This paper describes a system which locates and grasps doughnut shaped parts from a pile. The system uses photometric stereo and binocular stereo as vision input tools. Photometric stereo is used to make surface orientation measurements. With this information the camera field is segmented into isolated regions of continuous smooth surface. One of these regions is then selected as the target region. The attitude of the physical object associated with the target region is determined by histogramming surface orientations over that region and comparing with stored histograms obtained from prototypical objects. Range information, not available from photometric stereo, is obtained by the PRISM binocular stereo system. A collision-free grasp configuration and approach trajectory is computed and executed using the attitude and range data.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6401</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Basic Solid Mechanics for Tactile Sensing</title>
<link>https://hdl.handle.net/1721.1/6400</link>
<description>Basic Solid Mechanics for Tactile Sensing
Fearing, Ronald S.; Hollerbach, John M.
In order to stably grasp objects without using object models, tactile feedback from the fingers is sometimes necessary. This feedback can be used to adjust grasping forces to prevent a part from slipping from a hand. If the angle of force at the object-finger contact can be determined, slip can be prevented by the proper adjustment of finger forces. Another important tactile sensing task is finding the edges and corners of an object, since they are usually feasible grasping locations. This paper describes how this information can be extracted from the finger-object contact using strain sensors beneath a compliant skin. For determining contact forces, strain measurements are easier to use than the surface deformation profile. The finger is modelled as an infinite linear elastic half plane to predict the measured strain for several contact types and forces. The number of sensors required is less than has been proposed for other tactile recognition tasks. A rough upper bound on sensor density requirements for a specific depth is presented that is based on the frequency response of the elastic medium. The effects of different sensor stiffnesses on sensor performance are discussed.
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6400</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Selecting One Among the Many: A Simple Network Implementing Shifts in Selective Visual Attention</title>
<link>https://hdl.handle.net/1721.1/6399</link>
<description>Selecting One Among the Many: A Simple Network Implementing Shifts in Selective Visual Attention
Koch, Christof; Ullman, Shimon
This study addresses the question of how simple networks can account for a variety of phenomena associated with the shift of a specialized processing focus across the visual scene. We address in particular aspects of the dichotomy between the preattentive-parallel and the attentive-serial modes of visual perception and their hypothetical neuronal implementations. Specifically we propose the following: 1.) A number of elementary features, such as color, orientation, direction of movement, disparity etc. are represented in parallel in different topographical maps, called the early representation. 2.) There exists a selective mapping from this early representation into a more central representation, such that at any instant the central representation contains the properties of only a single location in the visual scene, the selected location. 3.) We discuss some selection rules that determine which location will be mapped into the central representation. The major rule, using the saliency or conspicuity of locations in the early representation, is implemented using a so-called Winner-Take-All network. A hierarchical pyramid-like architecture is proposed for this network. We suggest possible implementations in neuronal hardware, including a possible role for the extensive back-projection from the cortex to the LGN.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6399</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Combinatorics of Local Constraints in Model-Based Recognition and Localization from Sparse Data</title>
<link>https://hdl.handle.net/1721.1/6398</link>
<description>The Combinatorics of Local Constraints in Model-Based Recognition and Localization from Sparse Data
Grimson, W. Eric L.
The problem of recognizing what objects are  where in the workspace of a robot can be cast  as one of searching for a consistent matching  between sensory data elements and  equivalent model elements. In principle, this  search space is enormous and to control the  potential combinatorial explosion, constraints  between the data and model elements are  needed. We derive a set of constraints for  sparse sensory data that are applicable to a  wide variety of sensors and examine their  characteristics. We then use known bounds  on the complexity of constraint satisfaction  problems together with explicit estimates of  the effectiveness of the constraints derived for  the case of sparse, noisy three-dimensional  sensory data to obtain general theoretical  bounds on the number of interpretations  expected to be consistent with the data. We  show that these bounds are consistent with  empirical results reported previously. The  results are used to demonstrate the graceful  degradation of the recognition technique with  the presence of noise in the data, and to  predict the number of data points needed in  general to uniquely determine the object  being sensed.
</description>
<pubDate>Sat, 01 Mar 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6398</guid>
<dc:date>1986-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extended Gaussian Images</title>
<link>https://hdl.handle.net/1721.1/6397</link>
<description>Extended Gaussian Images
Horn, Berthold K.P.
This is a primer on extended Gaussian  Images. Extended Gaussian Images are  useful for representing the shapes of  surfaces. They can be computed easily from:  1. Needle maps obtained using photometric  stereo, or 2. Depth maps generated by  ranging devices or stereo. Importantly, they  can also be determined simply from  geometric models of the objects. Extended  Gaussian images can be of use in at least  two of the tasks facing a machine vision  system. 1. Recognition, and 2. Determining  the attitude in space of an object. Here, the  extended Gaussian image is defined and  some of its properties discussed. An  elaboration for non-convex objects is  presented and several examples are shown.
</description>
<pubDate>Fri, 01 Jul 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6397</guid>
<dc:date>1983-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnostic Reasoning Based on Structure and Behavior</title>
<link>https://hdl.handle.net/1721.1/6396</link>
<description>Diagnostic Reasoning Based on Structure and Behavior
Davis, Randall
We describe a system that reasons from first principles, i.e., using knowledge of structure and behavior. The system has been implemented and tested on several examples in the domain of troubleshooting digital electronic circuits. We give an example of the system in operation, illustrating that this approach provides several advantages, including a significant degree of device independence, the ability to constrain the hypotheses it considers at the outset, yet deal with a progressively wider range of problems, and the ability to deal with situations that are novel in the sense that their outward manifestations may not have been encountered previously. As background we review our basic approach to describing structure and behavior, then explore some of the technologies used previously in troubleshooting. Difficulties encountered there lead us to a number of new contributions, four of which make up the central focus of this paper. We describe a technique we call constraint suspension that provides a powerful tool for troubleshooting. We point out the importance of making explicit the assumptions underlying reasoning and describe a technique that helps enumerate assumptions methodically. The result is an overall strategy for troubleshooting based on the progressive relaxation of underlying assumptions. The system can focus its efforts initially, yet will methodically expand its focus to include a broad range of faults. Finally, abstracting from our examples, we find that the concept of adjacency proves to be useful in understanding why some faults are especially difficult and why multiple different representations are useful.
</description>
<pubDate>Fri, 01 Jun 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6396</guid>
<dc:date>1984-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-Based Recognition and Localization from Sparse Range or Tactile Data</title>
<link>https://hdl.handle.net/1721.1/6395</link>
<description>Model-Based Recognition and Localization from Sparse Range or Tactile Data
Grimson, W. Eric L.; Lozano-Perez, Tomas
This paper discusses how local  measurements of three-dimensional  positions and surface normals (recorded by a  set of tactile sensors, or by three-dimensional  range sensors), may be used to identify and  locate objects, from among a set of known  objects. The objects are modeled as  polyhedra having up to six degrees of freedom  relative to the sensors. We show that  inconsistent hypotheses about pairings  between sensed points and object surfaces  can be discarded efficiently by using local  constraints on: distances between faces,  angles between face normals, and angles  (relative to the surface normals) of vectors  between sensed points. We show by  simulation and by mathematical bounds that  the number of hypotheses consistent with  these constraints is small. We also show how  to recover the position and orientation of the  object from the sense data. The algorithm's  performance on data obtained from a  triangulation range sensor is illustrated.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6395</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hidden Clues in Random Line Stereograms</title>
<link>https://hdl.handle.net/1721.1/6394</link>
<description>Hidden Clues in Random Line Stereograms
Nishihara, H.K.; Poggio, Tomaso A
Successful fusion of random-line  stereograms with breaks in the vernier acuity  range has been previously interpreted to  suggest that the interpolation process  underlying hyperacuity is parallel and  preliminary to stereomatching. In this paper  (a) we demonstrate with computer  experiments that vernier cues are not needed  to solve the stereomatching problem posed  by these stereograms and (b) we provide  psychophysical evidence that human  stereopsis probably does not use vernier  cues alone to achieve fusion of these  random-line stereograms.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6394</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hypothesizing Channels through Free-Space in Solving the Findpath Problem</title>
<link>https://hdl.handle.net/1721.1/6393</link>
<description>Hypothesizing Channels through Free-Space in Solving the Findpath Problem
Donald, Bruce R.
Given a polyhedral environment, a technique  is presented for hypothesizing a channel  volume through the free space containing a  class of successful collision-free paths. A set  of geometric constructions between obstacle  faces is proposed, and we define a mapping  from a field of view analysis to a direct local  construction of free space. The algorithm has  the control structure of a search which  propagates construction of a connected  channel towards a goal along a frontier of  exterior free faces. Thus a channel volume  starts out by surrounding the moving object in  the initial configuration and "grows" towards  the goal. Finally, we show techniques for  analyzing the channel decomposition of free  space and suggesting a path.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6393</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computation of the Velocity Field</title>
<link>https://hdl.handle.net/1721.1/6392</link>
<description>The Computation of the Velocity Field
Hildreth, Ellen C.
The organization of movement in the changing  retinal image provides a valuable source of  information for analyzing the environment in  terms of objects, their motion in space and  their three-dimensional structure. A  description of this movement is not provided  to our visual system directly, however; it must  be inferred from the pattern of changing  intensity that reaches the eye. This paper  examines the problem of motion  measurement, which we formulate as the  computation of an instantaneous two-dimensional velocity field from the changing  image. Initial measurements of motion take  place at the location of significant intensity  changes, as suggested by Marr and Ullman  (1981). These measurements provide only  one component of local velocity, and must be  integrated to compute the two-dimensional  velocity field. A fundamental problem for this  integration stage is that the velocity field is not  determined uniquely from information  available in the changing image. We  formulate an additional constraint of  smoothness of the velocity field, based on the  physical assumption that surfaces are  generally smooth, which allows the  computation of a unique velocity field. A  theoretical analysis of the conditions under  which this computation yields the correct  velocity field suggests that the solution is  physically plausible. Empirical studies show  the predictions of this computation to be  consistent with human motion perception.
</description>
<pubDate>Thu, 01 Sep 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6392</guid>
<dc:date>1983-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parts of Recognition</title>
<link>https://hdl.handle.net/1721.1/6391</link>
<description>Parts of Recognition
Hoffman, D.D.; Richards, Whitman
A complete theory of object recognition is an impossibility, not simply because of the multiplicity of visual cues we exploit in elegant coordination to identify an object, but primarily because recognition involves fixation of belief, and anything one knows may be relevant. We finesse this obstacle with two moves. The first restricts attention to one visual cue, the shapes of objects; the second restricts attention to one problem, the initial guess at the identity of an object. We propose that the visual system decomposes a shape into parts, that it does so using a rule defining part boundaries rather than part shapes, that the rule exploits a uniformity of nature, transversality, and that parts with their descriptions and spatial relations provide a first index into a memory of shapes. These rules lead to a more comprehensive explanation of several visual illusions. The role of inductive inference is stressed in our theory. We conclude with a précis of unsolved problems.
</description>
<pubDate>Thu, 01 Dec 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6391</guid>
<dc:date>1983-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fingerprints Theorems for Zero-Crossings</title>
<link>https://hdl.handle.net/1721.1/6390</link>
<description>Fingerprints Theorems for Zero-Crossings
Yuille, A.L.; Poggio, Tomaso A
We prove that the scale map of the zero-crossings of almost all signals filtered by the  second derivative of a gaussian of variable  size determines the signal uniquely, up to a  constant scaling and a harmonic function. Our  proof provides a method for reconstructing  almost all signals from knowledge of how the  zero-crossing contours of the signal, filtered  by a gaussian filter, change with the size of  the filter. The proof assumes that the filtered  signal can be represented as a polynomial of  finite, albeit possibly very high, order. An  argument suggests that this restriction is not  essential. Stability of the reconstruction  scheme is briefly discussed. The result  applies to zero- and level-crossings of linear  differential operators of gaussian filters. The  theorem is extended to two dimensions, that  is to images. These results are reminiscent of  Logan's theorem. They imply that extrema of  derivatives at different scales are a complete  representation of a signal.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6390</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semantic Support for Work in Organizations</title>
<link>https://hdl.handle.net/1721.1/6389</link>
<description>Semantic Support for Work in Organizations
Barber, Gerald; Jong, Peter de; Hewitt, Carl
Present day computer systems cannot  implement much of the work carried out in  organizations such as: planning, decision  making, analysis, and dealing with  unanticipated situations. Such organizational  activities have traditionally been considered  too unstructured to be suitable for automation  by computer. We are working on the  development of computer technology to  overcome these limitations. Our goal is the  development of a computer system which is  capable of the following: describing the  semantics of applications as well as the  structure of the organization carrying out the  work, aiding workers in carrying out the  applications using these descriptions, and  acquiring these capabilities in the course of  the daily work through a process which is  analogous to apprenticeship.
</description>
<pubDate>Fri, 01 Apr 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6389</guid>
<dc:date>1983-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Zero-Crossings on Lines of Curvature</title>
<link>https://hdl.handle.net/1721.1/6388</link>
<description>Zero-Crossings on Lines of Curvature
Yuille, A.
We investigate the relations between the  structure of the image and events in the  geometry of the underlying surface. We  introduce some elementary differential  geometry and use it to define a coordinate  system on the object based on the lines of  curvature. Using this coordinate system we  can prove results connecting the extrema,  ridges and zero-crossings in the image to  geometrical features of the object. We show  that extrema of the image typically correspond  to points on the surface with zero Gaussian  curvature and that parabolic lines often give  rise to ridges, or valleys, in the image  intensity. We show that directional zero-crossings of the image along the lines of  curvature generally correspond to extrema of  curvature along such lines.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6388</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wrist-Partitioned Inverse Kinematic Accelerations and Manipulator Dynamics</title>
<link>https://hdl.handle.net/1721.1/6387</link>
<description>Wrist-Partitioned Inverse Kinematic Accelerations and Manipulator Dynamics
Hollerbach, John M.; Sahar, Gideon
An efficient algorithm is presented for the  calculation of the inverse kinematic  accelerations for a 6 degree-of-freedom  manipulator with a spherical wrist. The  inverse kinematic calculation is shown to  work synergistically with the inverse dynamic  calculation, producing kinematic parameters  needed in the recursive Newton-Euler  dynamics formulation. Additional savings in  the dynamics computation are noted for a  class of kinematically well-structured  manipulators such as spherical-wrist arms  and for manipulators with simply-structured  inertial parameters.
</description>
<pubDate>Fri, 01 Apr 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6387</guid>
<dc:date>1983-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Text through Summarization and Analogy</title>
<link>https://hdl.handle.net/1721.1/6386</link>
<description>Understanding Text through Summarization and Analogy
Tonfoni, Graziella; Doyle, Richard J.
Understanding a text exactly in the way that the  Text Producer meant the text to be understood  is highly unlikely unless the text interpretation  process is constrained. Specific  understanding-directing criteria are given in  the form of a Premise which is a configuration  of plot-units. After performing a Premise-directed text summarization, the Text Receiver  will have understood the text as the Text  Producer intended and will then be able to  replace missing relations within the exercises  and produce new texts by applying analogy.
</description>
<pubDate>Fri, 01 Apr 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6386</guid>
<dc:date>1983-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining Attitude of Object from Needle Map Using Extended Gaussian Image</title>
<link>https://hdl.handle.net/1721.1/6385</link>
<description>Determining Attitude of Object from Needle Map Using Extended Gaussian Image
Ikeuchi, Katsushi
An extended Gaussian image (EGI) is constructed by mapping the surface normals of an object onto the Gaussian sphere. The attitude of an object is greatly constrained by the global distribution of EGI mass over the visible Gaussian hemisphere. Constraints on the viewer direction are derived from the position of the EGI mass center, and from the direction of the EGI inertia axis. The algorithm embodying these constraints and the EGI mass distribution are implemented using a lookup table. A function for matching an observed EGI with the prototypical EGIs is also proposed. The algorithm determines the attitude of an object successfully both from a synthesized needle map and a real needle map.
</description>
<pubDate>Fri, 01 Apr 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6385</guid>
<dc:date>1983-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theoretical Analysis of Electrical Properties of Spines</title>
<link>https://hdl.handle.net/1721.1/6384</link>
<description>A Theoretical Analysis of Electrical Properties of Spines
Koch, C.; Poggio, Tomaso A
The electrical properties of a cortical (spiny) pyramidal cell were analyzed on the basis of passive cable theory from measurements made on histological material (Koch, Poggio &amp; Torre 1982). The basis of this analysis is the solution of the cable equation for an arbitrary branched dendritic tree. We determined the potential at the soma as a function of the synaptic input (transient conductance changes) and as a function of the spine neck dimensions. From our investigation four major points emerge: 1. Spines may effectively compress the effect of each single excitatory synapse on the soma, mapping a wide range of inputs onto a limited range of outputs (nonlinear saturation). This is also true for very fast transient inputs, in sharp contrast with the case of a synapse on a dendrite. 2. The somatic depolarization due to an excitatory synapse on a spine is a very sensitive function of the spine neck length and diameter. Thus the spine can effectively control the resulting saturation curve. This might be the basic mechanism underlying ultra-short memory, long-term potentiation in the hippocampus or learning in the cerebellum. 3. Spines with shunting inhibitory synapses on them are ineffective in reducing the somatic depolarization due to excitatory inputs on the dendritic shaft or on other spines. Thus isolated inhibitory synapses on a spine are not expected to occur. 4. The conjunction of an excitatory synapse with a shunting inhibitory synapse on the same spine may result in a time-discrimination circuit with a temporal resolution of around 100 µsec.
</description>
<pubDate>Fri, 01 Apr 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6384</guid>
<dc:date>1983-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information Processing in Dendritic Spines</title>
<link>https://hdl.handle.net/1721.1/6383</link>
<description>Information Processing in Dendritic Spines
Koch, C.; Poggio, Tomaso A
Dendritic spines are small twigs on the dendrites of a very large class of neurons in the central nervous system. There are between 10^3 and 10^5 spines per neuron, each one including at least one synapse, i.e. a connection with other neurons. Thus, spines are usually associated with an important feature of neurons, their high degree of connectivity, one of the most obvious differences between present computers and brains. We have analysed the electrical properties of a cortical (spiny) pyramidal cell on the basis of passive cable theory, from measurements made on histological material, using the solution of the cable equation for an arbitrary branched dendritic tree. As postulated by Rall, we found that the somatic potential induced by a firing synapse on a spine is a very sensitive function of the dimension of the spine. This observation leads to several hypotheses concerning the electrical functions of spines, especially with respect to their role in memory.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6383</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Extremum Principle for Shape from Contour</title>
<link>https://hdl.handle.net/1721.1/6382</link>
<description>An Extremum Principle for Shape from Contour
Brady, Michael; Yuille, Alan
An extremum principle is developed that  determines three-dimensional surface  orientation from a two-dimensional contour.  The principle maximizes the ratio of the area  to the square of the perimeter, a measure of  the compactness or symmetry of the three-dimensional surface. The principle interprets  regular figures correctly and it interprets skew  symmetries as oriented real symmetries. The  maximum likelihood method approximates  the principle on irregular figures, but we show  that it consistently overestimates the slant of  an ellipse.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6382</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetric Set Theory: A General Theory of Isomorphism, Abstraction, and Representation</title>
<link>https://hdl.handle.net/1721.1/6381</link>
<description>Symmetric Set Theory: A General Theory of Isomorphism, Abstraction, and Representation
McAllester, David Allen
It is possible to represent a finite set of points (atoms) by a finite sequence of points. However a finite set of points has no distinguished member and therefore it is impossible to define a function which takes a finite set of points and returns a "first" point in that set. Thus it is impossible to represent a finite sequence of points by a finite set of points. The theory of symmetric sets provides a framework in which the observation about sets and sequences can be proven. The theory of symmetric sets is similar to classical (Zermelo-Fraenkel) set theory with the exception that the universe of symmetric sets includes points (ur-elements). Points provide a basis for general notions of isomorphism and symmetry. The general notions of isomorphism and symmetry in turn provide a basis for natural, simple, and universal definitions of abstractness, essential properties and functions, canonicality, and representations. It is expected that these notions will play an important role in the theory of data structures and in the construction of general techniques for reasoning about data structures.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6381</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solving Uninterpreted Equations with Context Free Expression Grammars</title>
<link>https://hdl.handle.net/1721.1/6380</link>
<description>Solving Uninterpreted Equations with Context Free Expression Grammars
McAllester, David Allen
It is shown here that the equivalence class of an expression under the congruence closure of any finite set of equations between ground terms is a context free expression language. An expression is either a symbol or an n-tuple of expressions; the difference between expressions and strings is that expressions have inherent phrase structure. The Downey, Sethi and Tarjan algorithm for computing congruence closures can be used to convert a finite set of equations E to a context free expression grammar G such that for any expression u the equivalence class of u under E is precisely the language generated by an expression form I'(u) under grammar G. The fact that context free expression languages are closed under intersection is used to derive an algorithm for computing a grammar for the equivalence class of a given expression under any finite disjunction of finite sets of equations between ground expressions. This algorithm can also be used to derive a grammar representing the equivalence class of conditional expressions of the form if P then u else v. The description of an equivalence class by a context free expression grammar can also be used to simplify expressions under "well behaved" simplicity orders. Specifically if G is a context free expression grammar which generates an equivalence class of expressions then for any well behaved simplicity order there is a subset G' of the productions of G such that the expressions generated by G' are exactly those expressions of the equivalence class which are simplicity bounds and whose subterms are also simplicity bounds. Furthermore G' can be computed from G in order n log(n) time plus the time required to do order n log(n) comparisons between expressions where n is the size of G.
</description>
<pubDate>Sun, 01 May 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6380</guid>
<dc:date>1983-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Studies in the Interpretation of Structure and Motion: Summary and Extension</title>
<link>https://hdl.handle.net/1721.1/6379</link>
<description>Computational Studies in the Interpretation of Structure and Motion: Summary and Extension
Ullman, Shimon
Computational studies of the interpretation of  structure from motion examine the conditions  under which three-dimensional structure can  be recovered from motion in the image. The  first part of this paper summarizes the main  results obtained to date in these studies. The  second part examines two issues: the  robustness of the 3-D interpretation of  perspective velocity fields, and the 3-D  information contained in orthographic velocity  fields. The two are related because, under  local analysis, limitations on the interpretation  of orthographic velocity fields also apply to  perspective projection. The following results  are established: When the interpretation is  applied locally, the 3-D interpretation of the  perspective velocity field is unstable. The  orthographic velocity field determines the  structure of the inducing object exactly up to a  depth-scaling. For planar objects, the  orthographic velocity field always admits two  distinct solutions up to depth-scaling. The 3-D  structure is determined uniquely by a "view  and a half" of the orthographic velocity field.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6379</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tactile Recognition and Localization Using Object Models: The Case of Polyhedra on a Plane</title>
<link>https://hdl.handle.net/1721.1/6378</link>
<description>Tactile Recognition and Localization Using Object Models: The Case of Polyhedra on a Plane
Gaston, Peter C.; Lozano-Perez, Tomas
This paper discusses how data from multiple  tactile sensors may be used to identify and  locate one object, from among a set of known  objects. We use only local information from  sensors: (1) the position of contact points,  and (2) ranges of surface normals at the  contact points. The recognition and  localization process is structured as the  development and pruning of a tree of  consistent hypotheses about pairings  between contact points and object surfaces.  In this paper, we deal with polyhedral objects  constrained to lie on a known plane, i.e.,  having three degrees of positioning freedom  relative to the sensors.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6378</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representations for Reasoning About Change</title>
<link>https://hdl.handle.net/1721.1/6377</link>
<description>Representations for Reasoning About Change
Simmons, Reid G.; Davis, Randall
This paper explores representations used to  reason about objects which change over time  and the processes which cause changes.  Specifically, we are interested in solving a  problem known as geologic interpretation. To  help solve this problem, we have developed a  simulation technique, which we call  imagining. Imagining takes a sequence of  events and simulates them by drawing  diagrams. In order to do this imagining, we  have developed two representations of  objects, one involving histories and the other  involving diagrams, and two corresponding  representations of physical processes, each  suited to reasoning about one of the object  representations. These representations  facilitate both spatial and temporal reasoning.
</description>
<pubDate>Fri, 01 Apr 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6377</guid>
<dc:date>1983-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Introspection</title>
<link>https://hdl.handle.net/1721.1/6376</link>
<description>Computational Introspection
Batali, John
Introspection is the process of thinking about one's own thoughts and feelings. In this paper, I discuss recent attempts to make computational systems that exhibit introspective behavior: [Smith, 1982], [Weyhrauch, 1978], and [Doyle, 1980]. Each presents a system capable of manipulating representations of its own program and current context. I argue that introspective ability is crucial for intelligent systems: without it, an agent cannot represent certain problems that it must be able to solve. A theory of intelligent action would describe how and why certain actions intelligently achieve an agent's goals. The agent would both embody and represent this theory; it would be implemented as the program for the agent; and the importance of introspection suggests that the agent represent its theory of action to itself.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6376</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Scaling of Manipulator Trajectories</title>
<link>https://hdl.handle.net/1721.1/6375</link>
<description>Dynamic Scaling of Manipulator Trajectories
Hollerbach, John M.
A fundamental time-scaling property of manipulator dynamics has been identified that allows modification of movement speed without complete dynamics recalculation. By exploiting this property, it can be determined whether a planned trajectory is dynamically realizable given actuator torque limits, and if not, how to modify the trajectory to bring it within dynamic and actuating constraints.
</description>
<pubDate>Sat, 01 Jan 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6375</guid>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Measurement of Visual Motion</title>
<link>https://hdl.handle.net/1721.1/6374</link>
<description>The Measurement of Visual Motion
Hildreth, Ellen C.; Ullman, Shimon
The analysis of visual motion divides naturally  into two stages: the first is the measurement  of motion, for example, the assignment of  direction and magnitude of velocity to  elements in the image, on the basis of the  changing intensity pattern; the second is the  use of motion measurements, for example, to  separate the scene into distinct objects, and  infer their three-dimensional structure. In this  paper, we present a computational study of  the measurement of motion. Similar to other  visual processes, the motion of elements is  not determined uniquely by information in the  changing image; additional constraint is  required to compute a unique velocity field.  Given this global ambiguity of motion, local  measurements from the changing image,  such as those provided by directionally-selective simple cells in primate visual cortex,  cannot possibly specify a unique local velocity  vector, and in fact, specify only one component  of velocity. Computation of the full two-dimensional velocity field requires the  integration of local motion measurements,  either over an area, or along contours in the  image. We will examine possible algorithms  for computing motion, based on a range of  additional constraints. Finally, we will present  implications for the biological computation of  motion.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6374</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Programming</title>
<link>https://hdl.handle.net/1721.1/6373</link>
<description>Robot Programming
Lozano-Perez, Tomas
The industrial robot's principal advantage over  traditional automation is programmability.  Robots can perform arbitrary sequences of  pre-stored motions or of motions computed  as functions of sensory input. This paper  reviews requirements for and developments  in robot programming systems. The key  requirements for robot programming systems  examined in the paper are in the areas of  sensing, world modeling, motion  specification, flow of control, and  programming support. Existing and proposed  robot programming systems fall into three  broad categories: guiding systems in which  the user leads a robot through the motions to  be performed, robot-level programming  systems in which the user writes a computer  program specifying motion and sensing, and  task-level programming systems in which the  user specifies operations by their desired  effect on objects. A representative sample of  systems in each of these categories is  surveyed in the paper.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6373</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Binocular Shading and Visual Surface Reconstruction</title>
<link>https://hdl.handle.net/1721.1/6372</link>
<description>Binocular Shading and Visual Surface Reconstruction
Grimson, W.E.L.
Zero-crossing or feature-point based stereo  algorithms can, by definition, determine  explicit depth information only at particular  points on the image. To compute a complete  surface description, this sparse depth map  must be interpolated. A computational theory  of this interpolation or reconstruction process,  based on a surface consistency constraint,  has previously been proposed. In order to  provide stronger boundary conditions for the  interpolation process, other visual cues to  surface shape are examined in this paper. In  particular, it is shown that, in principle,  shading information from the two views can  be used to determine the orientation of the  surface normal along the feature-point  contours, as well as the parameters of the  reflective properties of the surface material.  The numerical stability of the resulting  equations is also examined.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6372</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Policy-Protocol Interaction in Composite Processes</title>
<link>https://hdl.handle.net/1721.1/6371</link>
<description>Policy-Protocol Interaction in Composite Processes
Barter, C.J.
Message policy is defined to be the description of the disposition of messages of a single type, when received by a group of processes. Group policy applies to all the processes of a group, but for a single message type. It is proposed that group policy be specified in an expression which is separate from the code of the processes of the group, and in a separate notation. As a result, it is possible to write policy expressions which are independent of process state variables, and also to use a simpler control notation based on regular expressions. Input protocol, on the other hand, applies to single processes or to a group as a whole, for all message types. Encapsulation of processes is presented with an unusual emphasis on the transactions and resources associated with an encapsulated process rather than on the state space of the process environment. This is due to the notion of encapsulation without shared variables, and to the association between group policies, message sequences and transactions.
</description>
<pubDate>Wed, 01 Sep 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6371</guid>
<dc:date>1982-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Open Systems</title>
<link>https://hdl.handle.net/1721.1/6370</link>
<description>Open Systems
Hewitt, Carl; Jong, Peter de
This paper describes some problems and opportunities associated with conceptual modeling for the kind of "open systems" we foresee must and will be increasingly recognized as a central line of computer system development. Computer applications will be based on communication between sub-systems which will have been developed separately and independently. Some of the reasons for independent development are the following: competition, different goals and responsibilities, economics, and geographical distribution. We must deal with all the problems that arise from this conceptual disparity of sub-systems which have been independently developed. Sub-systems will be open-ended and incremental, undergoing continual evolution. There are no global objects. The only thing that all the various sub-systems hold in common is the ability to communicate with each other. In this paper we study Open Systems from the viewpoint of Message Passing Semantics, a research programme to explore issues in the semantics of communication in parallel systems such as negotiation, transaction management, problem solving, change, and self-knowledge.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6370</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>LetS: An Expressional Loop Notation</title>
<link>https://hdl.handle.net/1721.1/6369</link>
<description>LetS: An Expressional Loop Notation
Waters, Richard C.
Many loops can be more easily understood  and manipulated if they are viewed as being  built up out of operations on sequences of  values. A notation is introduced which makes  this viewpoint explicit. Using it, loops can be  represented as compositions of functions  operating on sequences of values. A library of  standard sequence functions is provided  along with facilities for defining additional  ones. The notation is not intended to be  applicable to every kind of loop. Rather, it has  been simplified wherever possible so that  straightforward loops can be represented  extremely easily. The expressional form of the  notation makes it possible to construct and  modify such loops rapidly and accurately. The  implementation of the notation does not  actually use sequences but rather compiles  loop expressions into iterative loop code. As a  result, using the notation leads to no  reduction in run time efficiency.
</description>
<pubDate>Tue, 01 Feb 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6369</guid>
<dc:date>1983-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Process Theory</title>
<link>https://hdl.handle.net/1721.1/6368</link>
<description>Qualitative Process Theory
Forbus, Kenneth D.
Things move, collide, flow, bend, heat up, cool  down, stretch, break and boil. These and  other things that happen to cause changes in  objects over time are intuitively characterized  as processes. To understand common sense  physical reasoning and make machines that  interact significantly with the physical world we  must understand the qualitative reasoning  about processes, their effects, and their limits.  Qualitative Process theory defines a simple  notion of physical process that appears quite  useful as a language in which to write  physical theories. Reasoning about  processes also motivates a new qualitative  representation for quantity, the Quantity  Space. This paper includes the basic  definitions of Qualitative Process theory,  describes several different kinds of reasoning  that can be performed with them, and  discusses its implications for causal  reasoning. The use of the theory is illustrated  by several examples, including figuring out  that a boiler can blow up, that an oscillator  with friction will eventually stop, and how to  say that you can pull with a string, but not  push with it.
</description>
<pubDate>Sun, 01 May 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6368</guid>
<dc:date>1983-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear Interactions in a Dendritic Tree: Localization, Timing and Role in Information Processing</title>
<link>https://hdl.handle.net/1721.1/6367</link>
<description>Nonlinear Interactions in a Dendritic Tree: Localization, Timing and Role in Information Processing
Poggio, Tomaso A; Koch, C.
In a dendritic tree, transient synaptic inputs activating ionic conductances with an equilibrium potential near the resting potential can very effectively veto other excitatory inputs. Analog operations of this type can be very specific with respect to the relative locations of the inputs and their timing. We examine with computer experiments the precise conditions underlying this effect in the case of a b-like cat retinal ganglion cell. The critical condition required for strong and specific interactions is that the peak inhibitory conductance change must be sufficiently large, almost independently of other electrical parameters. In this case, a passive dendritic tree may perform hundreds of independent analog operations on its synaptic inputs, without requiring any threshold mechanism.
</description>
<pubDate>Tue, 01 Sep 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6367</guid>
<dc:date>1981-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Seeing What Your Programs Are Doing</title>
<link>https://hdl.handle.net/1721.1/6366</link>
<description>Seeing What Your Programs Are Doing
Lieberman, Henry
An important skill in programming is being  able to visualize the operation of procedures,  both for constructing programs and  debugging them. Tinker is a programming  environment for Lisp that enables the  programmer to "see what the program is  doing" while the program is being  constructed, by displaying the result of each  step in the program on representative  examples. To help the reader visualize the  operation of Tinker itself, an example is  presented of how he or she might use Tinker  to construct an alpha-beta tree search  program.
</description>
<pubDate>Mon, 01 Feb 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6366</guid>
<dc:date>1982-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rotationally Symmetric Operators for Surface Interpolation</title>
<link>https://hdl.handle.net/1721.1/6365</link>
<description>Rotationally Symmetric Operators for Surface Interpolation
Brady, Michael; Horn, Berthold K.P.
The use of rotationally symmetric operators in  vision is reviewed and conditions for rotational  symmetry are derived for linear and quadratic  forms in the first and second partial  directional derivatives of a function f(x,y).  Surface interpolation is considered to be the  process of computing the most conservative  solution consistent with boundary conditions.  The "most conservative" solution is modeled  using the calculus of variations to find the  minimum function that satisfies a given  performance index. To guarantee the  existence of a minimum function, Grimson  has recently suggested that the performance  index should be a semi-norm. It is shown that  all quadratic forms in the second partial  derivatives of the surface satisfy this criterion.  The seminorms that are, in addition,  rotationally symmetric form a vector space  whose basis is the square Laplacian and the  quadratic variation. Whereas both seminorms  give rise to the same Euler condition in the  interior, the quadratic variation offers the  tighter constraint at the boundary and is to be  preferred for surface interpolation.
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6365</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Approaches to Image Understanding</title>
<link>https://hdl.handle.net/1721.1/6364</link>
<description>Computational Approaches to Image Understanding
Brady, Michael
Recent theoretical developments in Image  Understanding are surveyed. Among the  issues discussed are: edge finding, region  finding, texture, shape from shading, shape  from texture, shape from contour, and the  representations of surfaces and objects.  Much of the work described was developed in  the DARPA Image Understanding project.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6364</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Powerful Ideas</title>
<link>https://hdl.handle.net/1721.1/6363</link>
<description>Some Powerful Ideas
Lawler, Robert
Here is a set of problem solving ideas (absorbed by and developed through the MIT Logo project over many years) presented in such a way as to be useful to someone with a Logo computer. With the ideas on unbound, single sheets, you can easily pick out those you like and set aside the others. The ideas vary in sophistication and accessibility: no threshold, no ceiling.
</description>
<pubDate>Tue, 01 Dec 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6363</guid>
<dc:date>1981-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Program Testing Assistant</title>
<link>https://hdl.handle.net/1721.1/6362</link>
<description>A Program Testing Assistant
Chapman, David
This paper describes the design and  implementation of a program testing  assistant which aids a programmer in the  definition, execution, and modification of test  cases during incremental program  development. The testing assistant helps in  the interactive definition of test cases and  executes them automatically when  appropriate. It modifies test cases to preserve  their usefulness when the program they test  undergoes certain types of design changes.  The testing assistant acts as a fully integrated  part of the programming environment and  cooperates with existing programming tools,  including a display editor, compiler,  interpreter, and debugger.
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6362</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microelectronics In Nerve Cells: Dendritic Morphology and Information Processing</title>
<link>https://hdl.handle.net/1721.1/6361</link>
<description>Microelectronics In Nerve Cells: Dendritic Morphology and Information Processing
Poggio, Tomaso A; Torre, V.
The electrical properties of the different anatomical types of retinal ganglion cells in the cat were calculated on the basis of passive cable theory from measurements made on histological material provided by Boycott and Wassle (1974). The interactions between excitation and inhibition when the inhibitory battery is near the resting potential can be strongly nonlinear in these cells. We analyse some of the integrative properties of an arbitrary passive dendritic tree and we then derive the functional properties which are characteristic for the various types of ganglion cells. In particular, we derive several general results concerning the spatial specificity of shunting inhibition in "vetoing" an excitatory input (the "on path" property) and its dependence on the geometrical and electric properties of the dendritic tree. Our main conclusion is that specific branching patterns coupled with a suitable distribution of synapses are able to support complex information processing operations on the incoming signals. Thus, a neuron seems likely to resemble an (analog) LSI circuit with thousands of elementary processing units, the synapses, rather than a single logical gate. A dendritic tree would be near to the ultimate in microelectronics, with little patches of postsynaptic membrane representing the fundamental units for several elementary computations.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6361</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sniffer: A System that Understands Bugs</title>
<link>https://hdl.handle.net/1721.1/6360</link>
<description>Sniffer: A System that Understands Bugs
Shapiro, Daniel G.
This paper presents a bug understanding system, called Sniffer, which applies inspection methods to generate a deep understanding of a narrow class of errors. Sniffer is an interactive debugging aide. It can locate and identify error-containing implementations of typical programming clichés, and it can describe them using the terminology employed by expert programmers.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6360</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evidence Relating Subjective Contours and Interpretations Involving Occlusion</title>
<link>https://hdl.handle.net/1721.1/6359</link>
<description>Evidence Relating Subjective Contours and Interpretations Involving Occlusion
Stevens, Kent A.
Subjective contours, according to one theory, outline surfaces that are apparently interposed between the viewer and background (because of the disruption of background figures, sudden termination of lines, and other occlusion "cues") but are not explicitly outlined by intensity discontinuities. This theory predicts that if occlusion cues are not interpreted as evidence of occlusion, no intervening surface need be postulated, hence no subjective contours would be seen. This prediction, however, is difficult to test because observers normally interpret the cues as occlusion evidence and normally see the subjective contours. This article describes a patient with visual agnosia who is both unable to make the usual occlusion interpretations and unable to see subjective contours. He has, however, normal ability to interpret standard visual illusions, stereograms, and in particular, stereogram versions of the standard subjective contour figures, which elicit in him strong subjective edges in depth (corresponding to the subjective contours viewed in the monocular versions of the figures).
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6359</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Interactions Between Limb Segments During Planar Arm Movement</title>
<link>https://hdl.handle.net/1721.1/6358</link>
<description>Dynamic Interactions Between Limb Segments During Planar Arm Movement
Hollerbach, John M.; Flash, Tamar
Movement of multiple segment limbs requires  generation of appropriate joint torques which  include terms arising from dynamic  interactions among the moving segments as  well as from such external forces as gravity.  The interaction torques, arising from inertial,  centripetal, and Coriolis forces, are not  present for single joint movements. The  significance of the individual interaction forces  during reaching movements in a horizontal  plane involving only the shoulder and elbow  joints has been assessed for different  movement paths and movement speeds.  Trajectory formation strategies which simplify  the dynamics computation are presented.
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6358</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abstraction, Inspection and Debugging in Programming</title>
<link>https://hdl.handle.net/1721.1/6357</link>
<description>Abstraction, Inspection and Debugging in Programming
Rich, Charles; Waters, Richard C.
We believe that software engineering has much to learn from other mature engineering disciplines, such as electrical engineering, and that the problem solving behaviors of engineers in different disciplines have many similarities. Three key ideas in current artificial intelligence theories of engineering problem solving are: Abstraction, using a simplified view of the problem to guide the problem solving process. Inspection, problem solving by recognizing the form ("plan") of a solution. Debugging, incremental modification of an almost satisfactory solution to a more satisfactory one. These three techniques are typically used together in a paradigm which we call AID (for Abstraction, Inspection, Debugging): First an abstract model of the problem is constructed in which some details are intentionally omitted. In this simplified view inspection methods are more likely to succeed, yielding the initial form of a solution. Further details of the problem are then added one at a time with corresponding incremental modifications to the solution. This paper states the goals and milestones of the remaining three years of a five year research project to study the fundamental principles underlying the design and construction of large software systems and to demonstrate the feasibility of a computer aided design tool for this purpose, called the programmer's apprentice.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6357</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning New Principles from Precedents and Exercises: The Details</title>
<link>https://hdl.handle.net/1721.1/6356</link>
<description>Learning New Principles from Precedents and Exercises: The Details
Winston, Patrick H.
Much learning is done by way of studying precedents and exercises. A teacher supplies a story, gives a problem, and expects a student both to solve the problem and to discover a principle. The student must find the correspondence between the story and the problem, apply the knowledge in the story to solve the problem, generalize to form a principle, and index the principle so that it can be retrieved when appropriate. This sort of learning pervades management, political science, economics, law, and medicine, as well as the development of common-sense knowledge about life in general. This paper presents a theory of how it is possible to learn by precedents and exercises and describes an implemented system that exploits the theory. The theory holds that causal relations identify the regularities that can be exploited from past experience, given a satisfactory representation for situations. The representation used stresses actors and objects, which are taken from English-like input and arranged into a kind of semantic network. Principles emerge in the form of production rules which are expressed in the same way situations are.
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6356</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Color Vision and Image Intensities: When Are Changes Material?</title>
<link>https://hdl.handle.net/1721.1/6355</link>
<description>Color Vision and Image Intensities: When Are Changes Material?
Rubin, John M.; Richards, W.A.
Marr has emphasized the difficulty in  understanding a biological system or its  components without some idea of its goals. In  this paper, a preliminary goal for color vision  is proposed and analyzed. That goal is to  determine where changes of material occur in  a scene (using only spectral information).  This goal is challenging for two reasons. First,  the effects of many processes (shadowing,  shading from surface orientation changes,  highlights, variations in pigment density) are  confounded with the effects of material  changes in the available image intensities.  Second, material changes are essentially  arbitrary. We are consequently led to a  strategy of rejecting the presence of such  confounding processes. We show there is a  unique condition, the spectral crosspoint, that  allows rejection of the hypothesis that  measured image intensities arise from one of  the confounding processes. (If plots are made  of image intensity versus wavelength from two  image regions, and the plots intersect, we say  that there is a spectral crosspoint.) We restrict  our attention to image intensities measured  from regions on opposite sides of an edge  because material changes almost always  cause edges. Also, by restricting our attention  to luminance discontinuities, we can avoid  peculiar conspiracies of confounding  processes that might mimic a material  change. Our crosspoint conjecture is that  biological visual systems interpret spectral  crosspoints across edges as material  changes. A circularly symmetric operator is  designed to detect crosspoints: it turns out to  resemble the double-opponent cell which is  commonplace in biological color vision  systems.
</description>
<pubDate>Fri, 01 May 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6355</guid>
<dc:date>1981-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Active Touch Sensing</title>
<link>https://hdl.handle.net/1721.1/6354</link>
<description>Active Touch Sensing
Hillis, William Daniel
The mechanical hand of the future will roll a screw between its fingers and sense, by touch, which end is which. This paper describes a step toward such a manipulator: a robot finger that is used to recognize small objects by touch. The device incorporates a novel imaging tactile sensor, an artificial skin with hundreds of pressure sensors in a space the size of a finger tip. The sensor is mounted on a tendon-actuated mechanical finger, similar in size and range of motion to a human index finger. A program controls the finger, using it to press and probe the object placed in front of it. Based on how the object feels, the program guesses its shape and orientation and then uses the finger to test and refine the hypothesis. The device is programmed to recognize commonly used fastening devices: nuts, bolts, flat washers, lock washers, dowel pins, cotter pins and set screws.
</description>
<pubDate>Wed, 01 Apr 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6354</guid>
<dc:date>1981-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chaosnet</title>
<link>https://hdl.handle.net/1721.1/6353</link>
<description>Chaosnet
Moon, David A.
Chaosnet is a local network, that is, a system for communication among a group of computers located within about 1000 meters of each other. Originally developed by the Artificial Intelligence Laboratory as the internal communications medium of the Lisp Machine system, it has since come to be used to link a variety of machines around MIT and elsewhere.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6353</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Use of Parallelism to Implement a Heuristic Search</title>
<link>https://hdl.handle.net/1721.1/6352</link>
<description>The Use of Parallelism to Implement a Heuristic Search
Kornfeld, William A.
The role of parallel processing in heuristic search is examined by means of an example (cryptarithmetic addition). A problem solver is constructed that combines the metaphors of constraint propagation and hypothesize-and-test. The system is capable of working on many incompatible hypotheses at one time. Furthermore, it is capable of allocating different amounts of processing power to running activities and changing these allocations as computation proceeds. It is empirically found that the parallel algorithm is, on the average, more efficient than a corresponding sequential one. Implications of this for problem solving in general are discussed.
</description>
<pubDate>Sun, 01 Mar 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6352</guid>
<dc:date>1981-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thinking About Lots of Things at Once without Getting Confused: Parallelism in Act 1</title>
<link>https://hdl.handle.net/1721.1/6351</link>
<description>Thinking About Lots of Things at Once without Getting Confused: Parallelism in Act 1
Lieberman, Henry
As advances in computer architecture and  changing economics make feasible  machines with large-scale parallelism,  Artificial Intelligence will require new ways of  thinking about computation that can exploit  parallelism effectively. We present the actor  model of computation as being appropriate  for parallel systems, since it organizes  knowledge as active objects acting  independently, and communicating by  message passing. We describe the parallel  constructs in our experimental actor  interpreter Act 1. Futures create concurrency,  by dynamically allocating processing  resources much as Lisp dynamically  allocates passive storage. Serializers restrict  concurrency by constraining the order in which  events take place, and have changeable local  state. Using the actor model allows  parallelism and synchronization to be  implemented transparently, so that parallel or  synchronized resources can be used as  easily as their serial counterparts.
</description>
<pubDate>Fri, 01 May 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6351</guid>
<dc:date>1981-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Preview of Act 1</title>
<link>https://hdl.handle.net/1721.1/6350</link>
<description>A Preview of Act 1
Lieberman, Henry
The next generation of artificial intelligence  programs will require the ability to organize  knowledge as groups of active objects. Each  object should have only its own local  expertise, the ability to operate in parallel with  other objects, and the ability to communicate  with other objects. Artificial Intelligence  programs will also require a great deal of  flexibility, including the ability to support  multiple representations of objects, and to  incrementally and transparently replace  objects with new, upward-compatible  versions. To realize this, we propose a model  of computation based on the notion of an  actor, an active object that communicates by  message passing. Actors blur the  conventional distinction between data and  procedures. The actor philosophy is  illustrated by a description of our prototype  actor interpreter Act 1.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6350</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Negotiation as a Metaphor for Distributed Problem Solving</title>
<link>https://hdl.handle.net/1721.1/6349</link>
<description>Negotiation as a Metaphor for Distributed Problem Solving
Davis, Randall; Smith, Reid G.
We describe the concept of distributed problem solving and define it as the cooperative solution of problems by a decentralized and loosely coupled collection of problem solvers. This approach to problem solving offers the promise of increased performance and provides a useful medium for exploring and developing new problem-solving techniques. We present a framework called the contract net that specifies communication and control in a distributed problem solver. Task distribution is viewed as an interactive process, a discussion carried on between a node with a task to be executed and a group of nodes that may be able to execute the task. We describe the kinds of information that must be passed between nodes during the discussion in order to obtain effective problem-solving behavior. This discussion is the origin of the negotiation metaphor: task distribution is viewed as a form of contract negotiation. We emphasize that protocols for distributed problem solving should help determine the content of the information transmitted, rather than simply provide a means of sending bits from one node to another. The use of the contract net framework is demonstrated in the solution of a simulated problem in area surveillance, of the sort encountered in ship or air traffic control. We discuss the mode of operation of a distributed sensing system, a network of nodes extending throughout a relatively large geographic area, whose primary aim is the formation of a dynamic map of traffic in the area. From the results of this preliminary study we abstract features of the framework applicable to problem solving in general, examining in particular transfer of control. Comparisons with PLANNER, CONNIVER, HEARSAY-II, and PUP6 are used to demonstrate that negotiation, the two-way transfer of information, is a natural extension to the transfer of control mechanisms used in earlier problem-solving systems.
</description>
<pubDate>Fri, 01 May 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6349</guid>
<dc:date>1981-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Representation of Angular Velocity and Its Effect on the Efficiency of Manipulator Dynamics Computation</title>
<link>https://hdl.handle.net/1721.1/6348</link>
<description>On the Representation of Angular Velocity and Its Effect on the Efficiency of Manipulator Dynamics Computation
Silver, William M.
Recently there has been considerable interest in efficient formulations of manipulator dynamics, mostly due to the desirability of real-time control or analysis of physical devices using modest computers. The inefficiency of the classical Lagrangian formulation is well known, and this has led researchers to seek alternative methods. Several authors have developed a highly efficient formulation of manipulator dynamics based on the Newton-Euler equations, and there may be some confusion as to the source of this efficiency. This paper shows that there is in fact no fundamental difference in computational efficiency between Lagrangian and Newton-Euler formulations. The efficiency of the above-mentioned Newton-Euler formulation is due to two factors: the recursive structure of the computation and the representation chosen for the rotational dynamics. Both of these factors can be achieved in the Lagrangian formulation, resulting in an algorithm identical to the Newton-Euler formulation. Recursive Lagrangian dynamics has been discussed previously by Hollerbach. This paper takes the final step by comparing in detail the representations that have been used for rotational dynamics and showing that with a proper choice of representation the Lagrangian formulation is indeed equivalent to the Newton-Euler formulation.
</description>
<pubDate>Sun, 01 Mar 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6348</guid>
<dc:date>1981-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Record of the Workshop on Research in Office Semantics</title>
<link>https://hdl.handle.net/1721.1/6347</link>
<description>Record of the Workshop on Research in Office Semantics
Barber, Gerald R.
This paper is a compendium of the ideas and  issues presented at the Chatham Bars  Workshop on Office Semantics. The intent of  the workshop was to examine the state of the  art in office systems and to elucidate the  issues system designers were concerned  with in developing next generation office  systems. The workshop involved a cross-section of people from government, industry  and academia. Presentations in the form of  talks and video tapes were made of  prototypical systems.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6347</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control of a Tendon Arm</title>
<link>https://hdl.handle.net/1721.1/6346</link>
<description>Control of a Tendon Arm
Lim, Kuk Huang
The dynamics and control of a tendon-driven three-degree-of-freedom shoulder joint are studied. A control scheme consisting of two phases has been developed. In the first phase, an approximation of the time-optimal control trajectory was applied open-loop to the system. In the second phase a closed-loop linear feedback law was employed to bring the system to the desired final state and to maintain it there.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6346</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Music, Mind and Meaning</title>
<link>https://hdl.handle.net/1721.1/6345</link>
<description>Music, Mind and Meaning
Minsky, Marvin
Speculating about cognitive aspects of listening to music, this essay discusses: how metric regularity and thematic repetition might involve representation frames and memory structures, how the result of listening might resemble space-models, how phrasing and expression might evoke innate responses and, finally, why we like music, or rather, what is the nature of liking itself.
</description>
<pubDate>Sun, 01 Feb 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6345</guid>
<dc:date>1981-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equation Counting and the Interpretation of Sensory Data</title>
<link>https://hdl.handle.net/1721.1/6344</link>
<description>Equation Counting and the Interpretation of Sensory Data
Richards, W.A.; Rubin, J.M.; Hoffman, D.D.
Many problems in biological information  processing require the solution to a complex  system of equations in many unknown  variables. An equation-counting procedure is  described for determining whether such a  system of equations will indeed have a  unique solution, and under what conditions  the solution should be interpreted as "correct".  Three examples of the procedure are given for  illustration, one for auditory signal processing  and two from vision.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6344</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational Theory of Visual Surface Interpolation</title>
<link>https://hdl.handle.net/1721.1/6343</link>
<description>A Computational Theory of Visual Surface Interpolation
Grimson, W.E.L.
Computational theories of structure from motion [Ullman, 1979] and stereo vision [Marr and Poggio, 1979] only specify the computation of three-dimensional surface information at special points in the image. Yet visual perception is clearly of complete surfaces. In order to account for this, a computational theory of the interpolation of surfaces from visual information is presented.
</description>
<pubDate>Mon, 01 Jun 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6343</guid>
<dc:date>1981-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>GPRINT: A LISP Pretty Printer Providing Extensive User Format Control Mechanism</title>
<link>https://hdl.handle.net/1721.1/6342</link>
<description>GPRINT: A LISP Pretty Printer Providing Extensive User Format Control Mechanism
Waters, Richard C.
A Lisp pretty printer is presented which makes  it easy for a user to control the format of the  output produced. The printer can be used as a  general mechanism for printing data  structures as well as programs. It is divided  into two parts: a set of formatting functions  and an output routine. The user specifies how  a particular type of object should be formatted  by creating a formatting function for the type.  When passed an object of that type, the  formatting function creates a sequence of  directions which specify how the object  should be printed if it can fit on one line and  how it should be printed if it must be broken  up across multiple lines. A simple template  language makes it easy to specify these  directions. Based on the line length available,  the output routine decides what structures  have to be broken up across multiple lines  and produces the actual output following the  directions created by the formatting functions.  The paper concludes with a discussion of  how the pretty printing method presented  could be applied to languages other than  Lisp.
</description>
<pubDate>Wed, 01 Sep 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6342</guid>
<dc:date>1982-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Session with TINKER: Interleaving Program Testing with Program Design</title>
<link>https://hdl.handle.net/1721.1/6341</link>
<description>A Session with TINKER: Interleaving Program Testing with Program Design
Lieberman, Henry; Hewitt, Carl
Tinker is an experimental interactive  programming system which integrates  program testing with program design. New  procedures are created by working out the  steps of the procedure in concrete situations.  Tinker displays the results of each step as it  is performed, and constructs a procedure for  the general case from sample calculations.  The user communicates with Tinker mostly by  selecting operations from menus on an  interactive graphic display rather than by  typing commands. This paper presents a  demonstration of our current implementation  of Tinker.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6341</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>One Child's Learning: Introducing Writing with a Computer</title>
<link>https://hdl.handle.net/1721.1/6340</link>
<description>One Child's Learning: Introducing Writing with a Computer
Lawler, R.W.
This is a case study of how one child learned  to write in a computer-rich setting. Although  computer access did affect her learning  significantly, the details presented here go  beyond supporting that claim. They provide a  simple example of what a computer-based  introduction to writing might be like for other  children. We conclude with a short discussion  of issues raised by the study.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6340</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Against Direct Perception</title>
<link>https://hdl.handle.net/1721.1/6339</link>
<description>Against Direct Perception
Ullman, S.
Central to contemporary cognitive science is the notion that mental processes involve computations defined over internal representations. This notion stands in sharp contrast with another prevailing view, the direct theory of perception, whose most prominent proponent has been J.J. Gibson. The publication of his recent book (The Ecological Approach to Visual Perception, Boston: Houghton Mifflin Company, 1979) offers an opportunity to examine the theory of direct perception and to contrast it with the computational/representational view. In this paper the notion of direct perception is examined primarily from a theoretical standpoint, and various objections are raised against it. An attempt is made to place the theory of direct perception in perspective by embedding it in a more comprehensive framework.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6339</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model for the Spatio-Temporal Organization of X- and Y-Type Ganglion Cells in the Primate Retina</title>
<link>https://hdl.handle.net/1721.1/6338</link>
<description>A Model for the Spatio-Temporal Organization of X- and Y-Type Ganglion Cells in the Primate Retina
Richter, J.; Ullman, S.
A model is proposed for the spatial and  temporal characteristics of X- and Y-type  responses of ganglion cells in the primate  retina. The model is related to a theory of  directional selectivity proposed by Marr &amp;  Ullman (1981). The X- and Y-type responses  predicted by the model to a variety of stimuli  are examined and compared with  electrophysiological recordings. A number of  implications and predictions are discussed.
Updated October 1981
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6338</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining Optical Flow</title>
<link>https://hdl.handle.net/1721.1/6337</link>
<description>Determining Optical Flow
Horn, Berthold K.P.; Schunck, Brian G.
Optical flow cannot be computed locally, since  only one independent measurement  is available from the image sequence at a  point, while the flow velocity has two  components. A second constraint is needed.  A method for finding the optical flow  pattern is presented which assumes that the  apparent velocity of the brightness  pattern varies smoothly almost everywhere in  the image. An iterative implementation  is shown which successfully computes the  optical flow for a number of synthetic  image sequences. The algorithm is robust in  that it can handle image sequences that  are quantized rather coarsely in space and  time. It is also insensitive to quantization  of brightness levels and additive noise.  Examples are included where the assumption  of smoothness is violated at singular points  or along lines in the image.
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6337</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Evaluation and Cultivation of Spatial and Linguistic Abilities in Individuals with Cerebral Palsy</title>
<link>https://hdl.handle.net/1721.1/6336</link>
<description>The Evaluation and Cultivation of Spatial and Linguistic Abilities in Individuals with Cerebral Palsy
Weir, Sylvia
The work of the Cerebral Palsy project  (members: Seymour Papert, Sylvia Weir, Jose  Valente and Gary Drescher) over the past  eighteen months is summarized, and the next  phase of activity is outlined. The issues to be  addressed by the proposed research are as  follows: 1. An investigation of computer-based  techniques to maximize the acquisition of  spatial and linguistic skills in severely  Cerebral Palsied children, to serve the  educational and therapeutic needs of this  population. 2. Developing a set of computer-based diagnostic tools for use with physically  handicapped persons which could contribute  to the provision of a functional specification of  subcategories of Cerebral Palsy. 3.  Investigating the ways in which findings on  Cerebral Palsy subjects can inform our  theories of cognitive development and the  adult functioning of normal individuals.
</description>
<pubDate>Mon, 01 Oct 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6336</guid>
<dc:date>1979-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Real Time Garbage Collector Based on the Lifetimes of Objects</title>
<link>https://hdl.handle.net/1721.1/6335</link>
<description>A Real Time Garbage Collector Based on the Lifetimes of Objects
Lieberman, Henry; Hewitt, Carl
In previous heap storage systems, the cost of creating objects and garbage collection is independent of the lifetime of the object. Since objects with short lifetimes account for a large portion of storage use, it's worth optimizing a garbage collector to reclaim storage for these objects more quickly. The garbage collector should spend proportionately less effort reclaiming objects with longer lifetimes. We present a garbage collection algorithm which: Makes storage for short-lived objects cheaper than storage for long-lived objects. Operates in real time: object creation and access times are bounded. Increases locality of reference, for better virtual memory performance. Works well with multiple processors and a large address space.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6335</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The SCHEME-79 Chip</title>
<link>https://hdl.handle.net/1721.1/6334</link>
<description>The SCHEME-79 Chip
Holloway, Jack; Steele, Guy Lewis, Jr.; Sussman, Gerald Jay; Bell, Alan
We have designed and implemented a single-chip microcomputer (which we call SCHEME-79) which directly interprets a typed pointer variant of SCHEME, a dialect of the language LISP. To support this interpreter the chip implements an automatic storage allocation system for heap-allocated data and an interrupt facility for user interrupt routines implemented in SCHEME. We describe how the machine architecture is tailored to support the language, and the design methodology by which the hardware was synthesized. We develop an interpreter for SCHEME written in LISP which may be viewed as a microcode specification. This is converted by successive compilation passes into actual hardware structures on the chip. We develop a language embedded in LISP for describing layout artwork so we can procedurally define generators for generalized macro components. The generators accept parameters to produce the specialized instances used in a particular design. We discuss the performance of the current design and directions for improvement, both in the circuit performance and in the algorithms implemented by the chip. A complete annotated listing of the microcode embodied by the chip is included.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6334</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Comments on a Recent Theory of Stereopsis</title>
<link>https://hdl.handle.net/1721.1/6333</link>
<description>Some Comments on a Recent Theory of Stereopsis
Marr, David C.; Poggio, Tomaso
A number of developments have taken place  since the formulation of Marr and Poggio's  theory of human stereo vision. In particular,  these concern the shape of the underlying  receptive fields, the control of eye movements  and the role of neuronal pools in the so-called  pulling effect. These and other connected  matters are briefly discussed.
</description>
<pubDate>Tue, 01 Jul 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6333</guid>
<dc:date>1980-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Information Processing Approach to Understanding the Visual Cortex</title>
<link>https://hdl.handle.net/1721.1/6332</link>
<description>An Information Processing Approach to Understanding the Visual Cortex
Crick, Francis H.C.; Marr, David C.; Poggio, Tomaso
An outline description is given of the experimental work on the visual acuity and hyperacuity of human beings. The very high resolution achieved in hyperacuity corresponds to a fraction of the spacing between adjacent cones in the fovea. We briefly outline a computational theory of early vision, according to which (a) the retinal image is filtered through a set of approximately bandpass spatial filters and (b) zero-crossings may contain sufficient information for much of the subsequent processing. Consideration of the optimum filter leads to one which is equivalent to a cell with a particular center-surround type of response. An "edge" in the visual field then corresponds to a line of zero-crossings in the filtered image. The mathematics of sampling and of Logan's zero-crossing theorem are briefly explained.
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6332</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phantom Stacks: If You Look Too Hard, They Aren't There</title>
<link>https://hdl.handle.net/1721.1/6331</link>
<description>Phantom Stacks: If You Look Too Hard, They Aren't There
Stallman, Richard M.
A stack is a very efficient way of allocating and deallocating memory, but it works only with a restricted pattern of usage. Garbage collection is completely flexible but comparatively costly. The implementation of powerful control structures naturally uses memory which usually fits in with stack allocation but must have the flexibility to do otherwise from time to time. How can we manage memory which only once in a while violates stack restrictions, without paying a price the rest of the time? This paper provides an extremely simple way of doing so, in which only the part of the system which actually uses the stack needs to know anything about the stack. We call them Phantom Stacks because they are liable to vanish if subjected to close scrutiny. Phantom Stacks will be used in the next version of the Artificial Intelligence Lab's Scheme microprocessor chip.
</description>
<pubDate>Tue, 01 Jul 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6331</guid>
<dc:date>1980-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>EMACS Manual for TWENEX Users</title>
<link>https://hdl.handle.net/1721.1/6330</link>
<description>EMACS Manual for TWENEX Users
Stallman, Richard M.
A reference manual for the extensible,  customizable, self-documenting real-time  display editor. This manual corresponds to  EMACS version 162.
</description>
<pubDate>Tue, 01 Mar 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6330</guid>
<dc:date>1983-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>EMACS Manual for ITS Users</title>
<link>https://hdl.handle.net/1721.1/6329</link>
<description>EMACS Manual for ITS Users
Stallman, Richard M.
A reference manual for the extensible,  customizable, self-documenting real-time  display editor. This manual corresponds to  EMACS version 162.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6329</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instrumental With and the Control Relation in English</title>
<link>https://hdl.handle.net/1721.1/6328</link>
<description>Instrumental With and the Control Relation in English
Levin, Beth C.
This paper explores the nature of the  underlying representation of a sentence, that  representation formulated to make explicit the  semantic structure of a sentence as a  description of an event. It argues that the  typical conception of an underlying  representation as a predicate-argument  representation, exemplified in systems of  case and thematic relations, must be  modified. An underlying representation must  include semantic relations between noun  phrases as well as the predicate-argument  relations of noun phrases to a verb. An  examination of instrumental with will be used  to motivate and justify this revision. In  particular, an account of instrumental with  requires the introduction of the control  relation, a relation between two noun  phrases.
</description>
<pubDate>Thu, 01 Nov 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6328</guid>
<dc:date>1979-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Outlook on Truth Maintenance</title>
<link>https://hdl.handle.net/1721.1/6327</link>
<description>An Outlook on Truth Maintenance
McAllester, David A.
Truth maintenance systems have been used  in several recent problem solving systems to  record justifications for deduced assertions,  to track down the assumptions which underlie  contradictions when they arise, and to  incrementally modify assertional data  structures when assumptions are retracted. A  TMS algorithm is described here that is  substantially different from previous systems.  This algorithm performs deduction in  traditional propositional logic in such a way  that the premise set from which deduction is  being done can be easily manipulated. A  novel approach is also taken to the role of a  TMS in larger deductive systems. In this  approach the TMS performs all propositional  deduction in a uniform manner while the  larger system is responsible for controlling  the instantiation of universally quantified  formulae and axiom schemas.
</description>
<pubDate>Fri, 01 Aug 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6327</guid>
<dc:date>1980-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanical Arm Control</title>
<link>https://hdl.handle.net/1721.1/6326</link>
<description>Mechanical Arm Control
Waters, Richard C.
This paper discusses three main problems  associated with the control of the motion of a  mechanical arm. 1) Transformation between  different coordinate systems associated with  the arm. 2) Calculation of detailed trajectories  for the arm to follow. 3) Calculation of the  forces which must be applied to the joints of  the arm in order to make it move along a  specified path. Each of the above problems is  amenable to exact solution. However, the  resulting equations are, in general, quite  complex and difficult to compute. This paper  investigates several methods for speeding up  this calculation, and for getting approximate  solutions to the equations.
</description>
<pubDate>Mon, 01 Oct 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6326</guid>
<dc:date>1979-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Disjunctive Concepts From Examples</title>
<link>https://hdl.handle.net/1721.1/6325</link>
<description>Learning Disjunctive Concepts From Examples
Iba, Glenn A.
This work proposes a theory for machine learning of disjunctive concepts. The paradigm followed is one of teaching and testing, where the teaching is accomplished by presenting a sequence of positive and negative examples of the target concept. The core of the theory has been implemented and tested as computer programs. The theory addresses the problem of deciding when it is appropriate to merge descriptions and when it is appropriate to form a disjunctive split. The approach outlined has the advantage that it allows recovery from overgeneralizations. It is observed that negative examples play an important role in the decision-making process, as well as in detecting overgeneralizations and instigating recovery. Because of the ability to recover from overgeneralizations when they occur, the system is less sensitive to the ordering of the training sequence than other systems. The theory is presented in a domain- and representation-independent format. A few conditions are presented, which abstract the assumptions made about any representation scheme that is to be employed within the theory. The work is illustrated in several different domains, demonstrating the generality and flexibility of the theory.
</description>
<pubDate>Sat, 01 Sep 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6325</guid>
<dc:date>1979-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Final Report of the Brookline LOGO Project. Part III: Profiles of Individual Student's Work</title>
<link>https://hdl.handle.net/1721.1/6324</link>
<description>Final Report of the Brookline LOGO Project. Part III: Profiles of Individual Student's Work
Papert, Seymour A.; Watt, Daniel; diSessa, Andrea; Weir, Sylvia
During the school year 1977/78 four  computers equipped with LOGO and Turtle  Graphics were installed in an elementary  school in Brookline, Mass. All sixth grade  students in the school had between 20 and  40 hours of hands-on experience with the  computers. The work of 16 students was  documented in detail.
</description>
<pubDate>Sat, 01 Sep 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6324</guid>
<dc:date>1979-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Final Report of the Brookline LOGO Project. Part II: Project Summary and Data</title>
<link>https://hdl.handle.net/1721.1/6323</link>
<description>Final Report of the Brookline LOGO Project. Part II: Project Summary and Data
Papert, Seymour A.; Watt, Daniel; diSessa, Andrea; Weir, Sylvia
During the school year 1977/78 four  computers equipped with LOGO and Turtle  Graphics were installed in an elementary  school in Brookline, Mass. All sixth grade  students in the school had between 20 and  40 hours of hands-on experience with the  computers. The work of 16 students was  documented in detail.
</description>
<pubDate>Sat, 01 Sep 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6323</guid>
<dc:date>1979-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Remotely-Manned Energy and Production Economy</title>
<link>https://hdl.handle.net/1721.1/6322</link>
<description>Toward a Remotely-Manned Energy and Production Economy
Minsky, Marvin
We can solve many problems of Energy,  Health, Productivity, and Environmental Quality  by improving the technology of remote control.  This will produce Nuclear Safety and Security,  Advances in Mining, Increases in Productivity,  Economies in Transportation, New Industries  and Markets. By creating "mechanical hands"  that are versatile and economical enough, we  shape a new world of health, energy and  security. It will take 10 to 20 years, and cost  about a billion dollars.
</description>
<pubDate>Sat, 01 Sep 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6322</guid>
<dc:date>1979-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Procedural Attachment</title>
<link>https://hdl.handle.net/1721.1/6321</link>
<description>Procedural Attachment
Steels, Luc
A frame-based reasoning system is extended to deal with procedural attachment. Arguments are given why procedural attachment is needed in a symbolic reasoner. The notion of an infinitary concept is introduced. Conventions for representing procedures and a control structure regulating their execution are discussed. Examples from electrical engineering and music illustrate arithmetic constraints and constraints over properties of strings and sequences.
</description>
<pubDate>Wed, 01 Aug 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6321</guid>
<dc:date>1979-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evidence for a Fifth, Smaller Channel in Early Human Vision</title>
<link>https://hdl.handle.net/1721.1/6320</link>
<description>Evidence for a Fifth, Smaller Channel in Early Human Vision
Marr, D.; Hildreth, E.; Poggio, Tomaso A.
Recent studies in psychophysics and  neurophysiology suggest that the human  visual system utilizes a range of different size  or spatial frequency tuned mechanisms in its  processing of visual information. It has been  proposed that there exist four such  mechanisms, operating everywhere in the  visual field, with the smallest mechanism  having a central excitatory width of 3' of arc in  the ventral fovea. This note argues that there  exists indirect evidence for the existence of a  fifth, smaller channel, with a central width in  the fovea of 1.5'.
</description>
<pubDate>Wed, 01 Aug 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6320</guid>
<dc:date>1979-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Application of the Photometric Stereo Method</title>
<link>https://hdl.handle.net/1721.1/6319</link>
<description>An Application of the Photometric Stereo Method
Ikeuchi, Katsushi; Horn, Berthold K.P.
The orientation of patches on the surface of  an object can be determined from multiple  images taken with different illuminations, but  from the same viewing position. This method,  referred to as photometric stereo, can be  implemented using table lookup based on  numerical inversion of experimentally  determined reflectance maps. Here we  concentrate on objects with specularly  reflecting surfaces, since these are of  importance in industrial applications.  Previous methods, intended for diffusely  reflecting surfaces, employed point source  illumination, which is quite unsuitable in this  case. Instead, we use a distributed light  source obtained by uneven illumination of a  diffusely reflecting planar surface.  Experimental results are shown to verify  analytic expressions obtained for a method  employing three light source distributions.
</description>
<pubDate>Wed, 01 Aug 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6319</guid>
<dc:date>1979-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>SEQUINS and QUILLS: Representations for Surface Topography</title>
<link>https://hdl.handle.net/1721.1/6318</link>
<description>SEQUINS and QUILLS: Representations for Surface Topography
Horn, Berthold K.P.
The shape of a continuous surface can be  represented by a collection of surface  normals. These normals are like a  porcupine's quills. Equivalently, one can use  the surface patches on which these normals  rest. These in turn are like sequins sewn on a  costume. These and other representations for  information which can be obtained from  images and used in the recognition and  description of objects in a scene will be briefly  described.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6318</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Recursive Lagrangian Formulation of Manipulator Dynamics</title>
<link>https://hdl.handle.net/1721.1/6317</link>
<description>A Recursive Lagrangian Formulation of Manipulator Dynamics
Hollerbach, John M.
An efficient Lagrangian formulation of manipulator dynamics has been developed. The efficiency derives from recurrence relations for the velocities, accelerations, and generalized forces. The number of additions and multiplications varies linearly with the number of joints, as opposed to past Lagrangian dynamics formulations with an n^4 dependence. With this formulation it should be possible in principle to compute the Lagrangian dynamics in real time. The computational complexities of this and other dynamics formulations, including recent Newton-Euler formulations and tabular formulations, are compared. It is concluded that recursive formulations based either on the Lagrangian or Newton-Euler dynamics offer the best method of dynamics calculation.
</description>
<pubDate>Sun, 01 Jun 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6317</guid>
<dc:date>1980-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Overview of a Theory of Syntactic Recognition for Natural Language</title>
<link>https://hdl.handle.net/1721.1/6316</link>
<description>An Overview of a Theory of Syntactic Recognition for Natural Language
Marcus, Mitchell P.
Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this "determinism" hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter, having the following properties: (a) Simple rules of grammar can be written for this interpreter which capture the generalizations behind various linguistic phenomena, despite the seeming difficulty of capturing such generalizations in the framework of a processing model for recognition. (b) The interpreter operating under these grammar rules cannot parse sentences which violate either of two constraints which Chomsky claims are linguistic universals. This result depends in part upon the computational use of Chomsky's notion of Annotated Surface Structure. (c) The grammar interpreter provides a simple explanation for the difficulty caused by "garden path" sentences, such as "The cotton clothing is made of grows in Mississippi". To the extent that these properties, all of which reflect deep properties of natural language, follow from the original hypothesis, they provide indirect evidence for the truth of this assumption. This memo is an abridged form of several topics discussed at length in [Marcus 77]; it does not discuss the mechanism used to parse noun phrases nor the kinds of interaction between syntax and semantics discussed in that work.
</description>
<pubDate>Sun, 01 Jul 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6316</guid>
<dc:date>1979-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Enhanced Spherical Images for Object Representation</title>
<link>https://hdl.handle.net/1721.1/6315</link>
<description>Using Enhanced Spherical Images for Object Representation
Smith, David A.
The processes involved in vision,  manipulation, and spatial reasoning depend  greatly on the particular representation of  three-dimensional objects used. A novel  representation, based on concepts of  differential geometry, is explored. Special  attention is given to properties of the  enhanced spherical image model,  reconstruction of objects from their  representation, and recognition of similarity  with prototypes. Difficulties associated with  representing smooth and non-convex bodies  are also discussed.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6315</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Aided Evolutionary Design for Software Engineering</title>
<link>https://hdl.handle.net/1721.1/6314</link>
<description>Computer Aided Evolutionary Design for Software Engineering
Rich, Charles; Shrobe, Howard E.; Waters, Richard C.
We report on a partially implemented interactive computer aided design tool for software engineering. A distinguishing characteristic of our project is its concern for the evolutionary character of software systems. Our project draws a distinction between algorithms and systems, centering its attention on support for the system designer. Although verification has played a large role in recent research, our perspective suggests that the complexity and evolutionary nature of software systems require a number of additional techniques, which are described in this paper.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6314</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specifying and Proving Properties of Guardians for Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/6313</link>
<description>Specifying and Proving Properties of Guardians for Distributed Systems
Hewitt, Carl; Attardi, Giuseppe; Lieberman, Henry
In a distributed system where many processors are connected by a network and communicate using message passing, many users can be allowed to access the same facilities. A public utility is usually an expensive or limited resource whose use has to be regulated. A GUARDIAN is an abstraction that can be used to regulate the use of resources by scheduling their access, providing protection, and implementing recovery from hardware failures. We present a language construct called a PRIMITIVE SERIALIZER which can be used to express efficient implementations of guardians in a modular fashion. We have developed a proof methodology for proving strong properties of network utilities, e.g. that the utility is guaranteed to respond to each request which it is sent. This proof methodology is illustrated by proving properties of a guardian which manages two hardcopy printing devices.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6313</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraints: A Language for Expressing Almost-Hierarchical Descriptions</title>
<link>https://hdl.handle.net/1721.1/6312</link>
<description>Constraints: A Language for Expressing Almost-Hierarchical Descriptions
Sussman, Gerald Jay; Steele, Guy Lewis, Jr.
We present an interactive system organized  around networks of constraints rather than the  programs which manipulate them. We  describe a language of hierarchical constraint  networks. We describe one method of  deriving useful consequences of a set of  constraints which we call propagation.  Dependency analysis is used to spot and  track down inconsistent subsets of a  constraint set. Propagation of constraints is  most flexible and useful when coupled with  the ability to perform symbolic manipulations  on algebraic expressions. Such  manipulations are in turn best expressed as  alterations or augmentations of the constraint  network. Almost-Hierarchical Constraint  Networks can be constructed to represent the  multiple viewpoints used by engineers in the  synthesis and analysis of electrical networks.  These multiple viewpoints are used in  terminal equivalence and power arguments to  reduce the apparent synergy in a circuit so  that it can be attacked algebraically.
</description>
<pubDate>Sat, 01 Aug 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6312</guid>
<dc:date>1981-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraints</title>
<link>https://hdl.handle.net/1721.1/6311</link>
<description>Constraints
Steele, Guy Lewis, Jr.; Sussman, Gerald Jay
We present an interactive system organized around networks of constraints rather than the programs which manipulate them. We describe a language of hierarchical constraint networks. We describe one method of deriving useful consequences of a set of constraints which we call propagation. Dependency analysis is used to spot and track down inconsistent subsets of a constraint set. Propagation of constraints is most flexible and useful when coupled with the ability to perform symbolic manipulations on algebraic expressions. Such manipulations are in turn best expressed as alterations or augmentations of the constraint network. Numerous diagrams ornament the text.
</description>
<pubDate>Wed, 01 Nov 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6311</guid>
<dc:date>1978-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Reasoning and Rationalization in Electronics</title>
<link>https://hdl.handle.net/1721.1/6310</link>
<description>Causal Reasoning and Rationalization in Electronics
Kleer, Johan De
This research attempts to formalize the type of causal arguments engineers employ to understand circuit behavior. A causal argument consists of a sequence of changes to circuit quantities (called events), each of which is caused by previous events. The set of events that an individual event can directly cause is largely an artifact of the point of view taken to analyze the circuit. A particular causal argument does not rule out other possibly conflicting causal arguments for the same circuit. If the actual behavior of the circuit is known or determined by measurements, the correct argument can be identified. The selected argument is a rationalization for the observed behavior since it explains but does not guarantee the observed behavior. A causal analysis program, QUAL, has been implemented which determines the response of a circuit to changes in input signals. It operates with a simple four-valued arithmetic of unknown, unchanging, increasing and decreasing. This program is used to illustrate the applicability of causal reasoning to circuit recognition, algebraic analysis, troubleshooting and design.
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6310</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Calculating the Reflectance Map</title>
<link>https://hdl.handle.net/1721.1/6309</link>
<description>Calculating the Reflectance Map
Horn, Berthold K.P.; Sjoberg, Robert W.
It appears that the development of machine vision may benefit from a detailed understanding of the imaging process. The reflectance map, showing scene radiance as a function of surface gradient, has proved to be helpful in this endeavor. The reflectance map depends both on the nature of the surface layers of the objects being imaged and on the distribution of light sources. Recently, a unified approach to the specification of surface reflectance in terms of both incident and reflected beam geometry has been proposed: the bidirectional reflectance-distribution function (BRDF). Here we derive the reflectance map in terms of the BRDF and the distribution of source radiance. A number of special cases of practical importance are developed in detail. The significance of this approach to the understanding of image formation is briefly indicated.
</description>
<pubDate>Sun, 01 Oct 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6309</guid>
<dc:date>1978-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information Prosthetics for the Handicapped</title>
<link>https://hdl.handle.net/1721.1/6308</link>
<description>Information Prosthetics for the Handicapped
Papert, Seymour A.; Weir, Sylvia
In this proposal we describe a technological  step towards the realization of INFORMATION  PROSTHETICS. Our primary focus is on using  rather than making the technology.  Specifically, our goal is to transpose for the  use of cerebral-palsied children a computer-based learning environment we have  developed, and to study in this environment a  series of issues in developmental  psychology, in the psychology of learning, in  psycho-diagnostic techniques and in  methods of instruction.
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6308</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing a Computational Representation for Problem Solving Skills</title>
<link>https://hdl.handle.net/1721.1/6307</link>
<description>Developing a Computational Representation for Problem Solving Skills
Goldstein, Ira
This paper describes the evolution of a problem solving model over several generations of computer coaches. Computer coaching is a type of computer assisted instruction in which the coaching program observes the performance of a student engaged in some intellectual game. The coach's function is to intervene occasionally in student generated situations to discuss appropriate skills that might improve the student's play. Coaching is a natural context in which to investigate the teaching and learning processes, but it is a demanding task. The computer must be able to analyze the student's performance in terms of a model of the underlying problem solving skills. This model must represent not only expertise for the task but also intermediate stages of problem solving skill and typical difficulties encountered by the learner. Implementing several generations of computer coaches to meet these demands has resulted in a model that represents problem solving skills as an evolving set of rules for a domain acting on an evolving representation of the problem and executed by a resource-limited problem solver. This paper describes this evolution from its starting point as a simple rule-based approach to its current form.
</description>
<pubDate>Sun, 01 Oct 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6307</guid>
<dc:date>1978-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Proposal for a Computational Model of Anatomical and Physiological Reasoning</title>
<link>https://hdl.handle.net/1721.1/6306</link>
<description>A Proposal for a Computational Model of Anatomical and Physiological Reasoning
Smith, Brian Cantwell
The studies of anatomy and physiology are  fundamental ingredients of medical  education. This paper identifies six ways in  which such functional knowledge serves as  the underpinnings for general medical  reasoning, and outlines the design of a  computational model of common sense  reasoning about human physiology. The  design of the proposed model is grounded in  a set of declarative representational ideas  sometimes called "frame theory":  representational structures constructed from  multiple-perspective, potentially redundant,  descriptions, organized into structured  collections, and associated with the objects  and classes being described.
</description>
<pubDate>Wed, 01 Nov 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6306</guid>
<dc:date>1978-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bandpass Channels, Zero-Crossings, and Early Visual Information Processing</title>
<link>https://hdl.handle.net/1721.1/6305</link>
<description>Bandpass Channels, Zero-Crossings, and Early Visual Information Processing
Marr, D.; Poggio, Tomaso A.; Ullman, S.
A recent advance by B.F. Logan in the theory of  one octave bandpass signals may throw new  light on spatial-frequency-tuned channels in  early visual information processing.
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6305</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining Shape and Reflectance Using Multiple Images</title>
<link>https://hdl.handle.net/1721.1/6304</link>
<description>Determining Shape and Reflectance Using Multiple Images
Horn, Berthold K.P.; Woodham, Robert J.; Silver, William M.
Distributions of surface orientation and  reflectance factor on the surface of an object  can be determined from scene radiances  observed by a fixed sensor under varying  lighting conditions. Such techniques have  potential application to the automatic  inspection of industrial parts, the  determination of the attitude of a rigid body in  space and the analysis of images returned  from planetary explorers. A comparison is  made of this method with techniques based  on images obtained from different viewpoints  with fixed lighting.
</description>
<pubDate>Tue, 01 Aug 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6304</guid>
<dc:date>1978-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-Monotonic Logic I</title>
<link>https://hdl.handle.net/1721.1/6303</link>
<description>Non-Monotonic Logic I
McDermott, Drew; Doyle, Jon
"Non-monotonic" logical systems are logics in  which the introduction of new axioms can  invalidate old theorems. Such logics are very  important in modeling the beliefs of active  processes which, acting in the presence of  incomplete information, must make and  subsequently revise predictions in light of new  observations. We present the motivation and  history of such logics. We develop model and  proof theories, a proof procedure, and  applications for one important non-monotonic  logic. In particular, we prove the  completeness of the non-monotonic predicate  calculus and the decidability of the non-monotonic sentential calculus. We also  discuss characteristic properties of this logic  and its relationship to stronger logics, logics  of incomplete information, and truth  maintenance systems.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6303</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Director Guide</title>
<link>https://hdl.handle.net/1721.1/6302</link>
<description>Director Guide
Kahn, Kenneth M.
Director is a programming language designed for dynamic graphics, artificial intelligence, and use by computer-naïve people. It is based upon the actor or object oriented approach to programming and resembles Act 1 and SmallTalk. Director extends MacLisp by adding a small set of primitive actors and the ability to create new ones. Its graphical features include an interface to the TV turtle, quasi-parallelism, many animation primitives, a parts/whole hierarchy and a primitive actor for making and recording "movies". For artificial intelligence programming Director provides a pattern-directed data base associated with each actor, an inheritance hierarchy, and a means of conveniently creating non-standard control structures. For use by naïve programmers Director is appropriate because of its stress upon very powerful, yet conceptually simple primitives and its verbose, simple syntax based upon pattern matching. Director code can be turned into optimized Lisp which in turn can be compiled into machine code.
</description>
<pubDate>Sat, 01 Dec 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6302</guid>
<dc:date>1979-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Photometric Stereo</title>
<link>https://hdl.handle.net/1721.1/6301</link>
<description>Photometric Stereo
Woodham, Robert J.
Traditional stereo techniques determine  range by relating two images of an object  viewed from different directions. If the  correspondence between picture elements is  known, then distance to the object can be  calculated by triangulation. Unfortunately, it is  difficult to determine this correspondence.  This paper introduces a novel technique  called photometric stereo. The idea of  photometric stereo is to vary the direction of  the incident illumination between successive  views while holding the viewing direction  constant. This provides enough information to  determine surface orientation at each picture  element. Since the imaging geometry does  not change, the correspondence between  picture elements is known a priori. This  stereo technique is photometric because it  uses the intensity values recorded in a single  picture element, in successive views, rather  than the relative positions of features.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6301</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of a Three Degree of Freedom Kinematic Chain</title>
<link>https://hdl.handle.net/1721.1/6300</link>
<description>Dynamics of a Three Degree of Freedom Kinematic Chain
Horn, Berthold K.P.; Hirokawa, Ken-Ichi; Vazirani, Vijay
In order to be able to design a control system  for high-speed control of mechanical  manipulators, it is necessary to understand  properly their dynamics. Here we present an  analysis of a detailed model of a three-link  device which may be viewed as either a "leg"  in a locomotory system, or the first three  degrees of freedom of an "arm" providing for  its gross motions. The equations of motion  are shown to be non-trivial, yet manageable.
</description>
<pubDate>Sat, 01 Oct 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6300</guid>
<dc:date>1977-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Synthetic Students as a Model of Human Behavior</title>
<link>https://hdl.handle.net/1721.1/6299</link>
<description>Analysis of Synthetic Students as a Model of Human Behavior
Ihrie, David Wayne
The research described in this report is an  attempt to evaluate the educational effects of a  computer game known as Wumpus. A set of  five synthetic computer students was taken as  a model of the progress of real students  playing a sequence of twenty Wumpus  "warrens". Using a combination of  observations made of the students,  representations drawn by the students and  protocols kept by the computer of each  session, it was found that the synthetic  students are a reasonable static model of real  students, but miss completely many of the  important dynamic factors which affect a  student's play. In spite of this, the Wumpus  game was found to be an effective  educational tool.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6299</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Interpretation of Structure From Motion</title>
<link>https://hdl.handle.net/1721.1/6298</link>
<description>The Interpretation of Structure From Motion
Ullman, S.
The interpretation of structure from motion is  examined from a computational point of view.  The question addressed is how the 3-D  structure and motion of objects can be  inferred from the 2-D transformations of their  projected images when no 3-D information is  conveyed by the individual projections.
</description>
<pubDate>Fri, 01 Oct 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6298</guid>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding in Incomplete Worlds</title>
<link>https://hdl.handle.net/1721.1/6297</link>
<description>Understanding in Incomplete Worlds
Rosenberg, Steven
Most real world domains differ from the micro-worlds traditionally used in A.I. in that they  have an incomplete factual database which  changes over time. Understanding in these  domains can be thought of as the generation  of plausible inferences which are able to use  the facts available, and respond to changes in  them. A traditional rule interpreter such as  Planner can be extended to construct  plausible inferences in these domains by A)  allowing assumptions to be made in applying  rules, resulting in simplifications of rules  which can be used in an incomplete  database; B) monitoring the antecedents and  consequents of a rule so that inferences can  be maintained over a changing database. The  resulting chains of inference can provide a  dynamic description of an event. This allows  general reasoning processes to be used to  understand in domains for which large  numbers of Schema-like templates have  been proposed as the best model.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6297</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Three Valued Truth Maintenance System</title>
<link>https://hdl.handle.net/1721.1/6296</link>
<description>A Three Valued Truth Maintenance System
McAllester, David A.
Truth maintenance systems have been used  in recently developed problem solving  systems. A truth maintenance system (TMS)  is designed to be used by deductive systems  to maintain the logical relations among the  beliefs which those systems manipulate.  These relations are used to incrementally  modify the belief structure when premises are  changed, giving a more flexible context  mechanism than has been present in earlier  artificial intelligence systems. The relations  among beliefs can also be used to directly  trace the source of contradictions or failures,  resulting in far more efficient backtracking.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6296</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Progress Report on the Discourse and Reference Components of PAL</title>
<link>https://hdl.handle.net/1721.1/6295</link>
<description>A Progress Report on the Discourse and Reference Components of PAL
Sidner, Candace
This paper reports on research being  conducted on a computer assistant, called  PAL. PAL is being designed to arrange  various kinds of events with concern for the  who, what, when, where and why of that event.  The goal for PAL is to permit a speaker to  interact with it in English and to use extended  discourse to state the speaker's  requirements. The portion of the language  system discussed in this report  disambiguates references from discourse  and interprets the purpose of sentences of the  discourse. PAL uses the focus of discourse to  direct its attention to a portion of the discourse  and to the database to which the discourse  refers. The focus makes it possible to  disambiguate references with minimal  search. Focus and a frames representation of  the discourse make it possible to interpret  discourse purposes. The focus and  representation of the discourse are explained,  and the computational components of PAL  which implement reference disambiguation  and discourse interpretation are presented in  detail.
</description>
<pubDate>Sat, 01 Apr 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6295</guid>
<dc:date>1978-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Destriping Satellite Images</title>
<link>https://hdl.handle.net/1721.1/6294</link>
<description>Destriping Satellite Images
Horn, B.K.P.; Woodham, R.J.
Before satellite images obtained with multiple image sensors can be used in image analysis, corrections must be introduced for the differences in the transfer functions of these sensors. Methods are here presented for obtaining the required information directly from the statistics of the sensor outputs. The assumption is made that the probability distribution of the scene radiance seen by each image sensor is the same. Successful destriping of LANDSAT images is demonstrated.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6294</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Semantic Memory: Effects of Presenting Semantic Information in Different Modalities</title>
<link>https://hdl.handle.net/1721.1/6293</link>
<description>Modeling Semantic Memory: Effects of Presenting Semantic Information in Different Modalities
Rosenberg, Steven; Simon, Herbert A.
How is semantic information from different  modalities integrated and stored? If related  ideas are encountered in French and English,  or in pictures and sentences, is the result a  single representation in memory or two  modality-dependent ones? Subjects were  presented with items in different modalities,  then were asked whether or not subsequently  presented items were identical with the  former ones. Subjects frequently accepted  translations and items semantically  consistent with those presented earlier as  identical, although not as often as they  accepted items actually seen previously. The  same pattern of results was found when the  items were French and English sentences,  and when they were pictures and sentences.  The results can be explained by the  hypothesis that subjects integrate information  across modalities into a single underlying  semantic representation. A computer model,  embodying this hypothesis, made predictions  in close agreement with the data.
</description>
<pubDate>Sat, 01 Apr 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6293</guid>
<dc:date>1978-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>LANDSAT MSS Coordinate Transformations</title>
<link>https://hdl.handle.net/1721.1/6292</link>
<description>LANDSAT MSS Coordinate Transformations
Horn, Berthold K.P.; Woodham, Robert J.
A number of image analysis tasks require the  registration of a surface model with an image.  In the case of satellite images, the surface  model may be a map or digital terrain model  in the form of surface elevations on a grid of  points. We develop here an affine  transformation between coordinates of Multi-Spectral Scanner (MSS) images produced by  the LANDSAT satellites, and coordinates of a  system lying in a plane tangent to the earth's  surface near the sub-satellite (Nadir) point.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6292</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparative Schematology</title>
<link>https://hdl.handle.net/1721.1/6291</link>
<description>Comparative Schematology
Patterson, Michael S.; Hewitt, Carl E.
While we may have the intuitive idea of one  programming language having greater power  than another, or of some subset of a  language being an adequate "core" for that  language, we find when we try to formalize  this notion that there is a serious theoretical  difficulty. This lies in the fact that even quite  rudimentary languages are nevertheless  "universal" in the following sense. If the  language allows us to program with simple  arithmetic or list processing functions, then  any effective control structure can be  simulated, traditionally by encoding a Turing  machine computation in some way. In  particular, a simple language with some  basic arithmetic can express programs for  any partial recursive function. Such an  encoding is usually quite unnatural and  impossibly inefficient. Thus in order to carry  on a practical study of the comparative power  of different languages we are led to banish  explicit functions and deal instead with  abstract, uninterpreted programs, or  schemas. What follows is a brief report on  some preliminary exploration in this area.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6291</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shaded Perspective Images of Terrain</title>
<link>https://hdl.handle.net/1721.1/6290</link>
<description>Shaded Perspective Images of Terrain
Strat, Thomas M.
In order to perform image analysis, one must  have a thorough understanding of how  images are formed. This memo presents an  algorithm that produces shaded perspective  images of terrain as a vehicle to  understanding the fundamentals of image  formation. The image is constructed using  standard projection equations along with an  efficient hidden-surface removal technique.  The image intensity is calculated using the  reflectance map, a convenient way of  describing the surface reflection as a function  of surface gradient. Aside from its use as a  tool toward understanding image analysis,  the algorithm has several applications of its  own, including providing video input to a flight  simulator.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6290</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Comparison of PARSIFAL with Augmented Transition Networks</title>
<link>https://hdl.handle.net/1721.1/6289</link>
<description>A Comparison of PARSIFAL with Augmented Transition Networks
Swartout, William R.
This paper compares Marcus' parser, PARSIFAL, with Woods' Augmented Transition Network (ATN) parser. In particular, the paper examines the two parsers in light of Marcus' Determinism Hypothesis. An overview of each parser is presented. Following that, the Determinism Hypothesis is examined in detail. A method for transforming the PARSIFAL grammar rules into the ATN formalism is outlined. This transformation shows some of the fundamental differences between PARSIFAL and ATN parsers, and the nature of the hypotheses used in PARSIFAL. Finally, the principle of least commitment is proposed as an alternative to the Determinism Hypothesis.
</description>
<pubDate>Wed, 01 Mar 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6289</guid>
<dc:date>1978-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Glimpse of Truth Maintenance</title>
<link>https://hdl.handle.net/1721.1/6288</link>
<description>A Glimpse of Truth Maintenance
Doyle, Jon
To choose their actions, reasoning programs  must be able to draw conclusions from  limited information and subsequently revise  their beliefs when discoveries invalidate  previous assumptions. A truth maintenance  system is a problem solver subsystem for  performing these functions by recording and  maintaining the reasons for program beliefs.  These recorded reasons are useful in  constructing explanations of program actions  in "responsible" programs, and in guiding the  course of action of a problem solver. This  paper describes the structure of a truth  maintenance system, methods for encoding  control structures in patterns of reasons for  beliefs, and the method of dependency-directed backtracking.
</description>
<pubDate>Wed, 01 Nov 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6288</guid>
<dc:date>1978-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Glimpse of Truth Maintenance</title>
<link>https://hdl.handle.net/1721.1/6287</link>
<description>A Glimpse of Truth Maintenance
Doyle, Jon
Many procedurally-oriented problem solving  systems can be viewed as performing a  mixture of computation and deduction, with  much of the computation serving to decide  what deductions should be made. This  results in bits and pieces of deductions being  strewn throughout the program text and  execution. This paper describes a problem  solver subsystem called a truth maintenance  system which collects and maintains these  bits of deductions. Automatic functions of the  truth maintenance system then use these  pieces of "proofs" to consistently update a  data base of program beliefs and to perform a  powerful form of backtracking called  dependency-directed backtracking.
</description>
<pubDate>Wed, 01 Feb 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6287</guid>
<dc:date>1978-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Assessment and Documentation of a Children's Computer Laboratory</title>
<link>https://hdl.handle.net/1721.1/6286</link>
<description>Assessment and Documentation of a Children's Computer Laboratory
Papert, Seymour A.; Watt, Daniel H.
This research will thoroughly document the  experiences of a small number of 5th grade  children in an elementary school computer  laboratory, using LOGO, an advanced  computer language designed for children.  Four groups of four children will be taught a  10-week LOGO course. Detailed anecdotal  records will be kept, and observers will note  the development of the children's computer  programming skills, and the acquisition of  knowledge in the areas of mathematics,  science, and language, and of cognitive  strategies and attitudinal changes which  transfer beyond the specific subject matter  studied.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6286</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programming Viewed as an Engineering Activity</title>
<link>https://hdl.handle.net/1721.1/6285</link>
<description>Programming Viewed as an Engineering Activity
Rich, Charles; Shrobe, Howard E.; Waters, Richard C.; Sussman, Gerald J.; Hewitt, Carl E.
It is profitable to view the process of writing  programs as an engineering activity. A  program is a deliberately contrived  mechanism constructed from parts whose  behaviors are combined to produce the  behavior of the whole. We propose to develop  a notion of understanding a program which is  analogous to similar notions in other  engineering subjects. Understanding is a rich  notion in engineering domains. It includes the  ability to identify the parts of a mechanism and  assign a purpose to each part. Understanding  also entails being able to explain to someone  how a mechanism works and rationalize its  behavior under unusual circumstances.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6285</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Configuration Space Control</title>
<link>https://hdl.handle.net/1721.1/6284</link>
<description>Configuration Space Control
Horn, Berthold K.P.; Raibert, Marc H.
Complicated systems with non-linear time-varying behavior are difficult to control using classical linear feedback methods applied separately to individual degrees of freedom. At present, mechanical manipulators, for example, are limited in their rate of movement by the inability of traditional feedback systems to deal with time-varying inertia, torque coupling effects between links and Coriolis forces. Analysis of the dynamics of such systems, however, provides the basic information needed to achieve adequate control.
</description>
<pubDate>Thu, 01 Dec 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6284</guid>
<dc:date>1977-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Revised Report on SCHEME: A Dialect of LISP</title>
<link>https://hdl.handle.net/1721.1/6283</link>
<description>The Revised Report on SCHEME: A Dialect of LISP
Steele, Guy Lewis, Jr.; Sussman, Gerald Jay
SCHEME is a dialect of LISP. It is an  expression-oriented, applicative order,  interpreter-based language which allows one  to manipulate programs as data. It differs  from most current dialects of LISP in that it  closes all lambda-expressions in the  environment of their definition or declaration,  rather than in the execution environment. This  has the consequence that variables are  normally lexically scoped, as in ALGOL.  However, in contrast with ALGOL, SCHEME  treats procedures as a first-class data type.  They can be the values of variables, the  returned values of procedures, and  components of data structures. Another  difference from LISP is that SCHEME is  implemented in such a way that tail-recursions execute without net growth of the  interpreter stack. The effect of this is that a  procedure call behaves like a GOTO and thus  procedure calls can be used to implement  iterations, as in PLASMA.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6283</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Minimum Energy Movement for a Spring Muscle Model</title>
<link>https://hdl.handle.net/1721.1/6282</link>
<description>The Minimum Energy Movement for a Spring Muscle Model
Hollerbach, John M.
There are many ways of programming an  actuator or effector for movement between the  same two points. In the interest of efficiency it  is sometimes desirable to program that  trajectory which requires the least amount of  energy. This paper considers the minimum  energy movement for a spring-like actuator  abstracted from muscle mechanics and  energetics. It is proved that for this actuator a  bang-coast-bang actuation pattern minimizes  the energy expenditure. For some parameter  values this pattern is modified by a singular  arc at the first switching point. A surprising  limitation on the duration of coast is  demonstrated. Some relaxations of the  restrictions underlying the spring model are  shown to preserve the bang-coast-bang  solution.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6282</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>COMEX: A Support System for a Commodities Expert</title>
<link>https://hdl.handle.net/1721.1/6281</link>
<description>COMEX: A Support System for a Commodities Expert
Stansfield, James L.
The intelligent support system project is developing a program (COMEX) to assist a commodities expert in tasks such as interpreting data, predicting trends and intelligent noticing. Large amounts of qualitative and quantitative information about factors such as weather, trade and crop condition need to be managed. This memo presents COMEX-O, a prototype system written in FRL, a frame-based language (Goldstein &amp; Roberts, 1977). COMEX-O has a complaint handling system, frame structure matching and simple reasoning. By conversing with a user, it builds groupings of frame structures to represent events. These are called CLUSTERS and are proposed as a new representation method. New CLUSTERS are built from previously defined ones using INSTANTIATION and AGGREGATION, two methods which combine with frame inheritance and constraints to make up a general event representation mechanism. CLUSTERS capture the idea of generic patterns of relationships between frames and raise an issue named the GENERIC CONSTRAINT PROBLEM concerning constraints between the parts of a cluster. The final section presents plans for future work on qualitative reasoning within COMEX and includes a hypothetical scenario.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6281</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Light Source Effects</title>
<link>https://hdl.handle.net/1721.1/6280</link>
<description>Light Source Effects
Forbus, K.
The perception of surface luster in achromatic  single view images seems to depend on the  existence of regions with source-like  properties. These regions are due to the  interaction of specular component of the  surface's reflectance and the illumination.  Light source effects are broken down into  three categories according to gross aspects  of the physical situation in which they occur,  and criteria for detecting the regions they  cause are suggested.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6280</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Arithmetic in MACLISP</title>
<link>https://hdl.handle.net/1721.1/6279</link>
<description>Fast Arithmetic in MACLISP
Steele, Guy Lewis, Jr.
MacLISP provides a compiler which produces numerical code competitive in speed with some FORTRAN implementations and yet compatible with the rest of the MacLISP system. All numerical programs can be run under the MacLISP interpreter. Additional declarations to the compiler specify type information which allows the generation of optimized numerical code which generally does not require the garbage collection of temporary numerical results. Array accesses are almost as fast as in FORTRAN, and permit the use of dynamically allocated arrays of varying dimensions. Here we discuss the implementation decisions regarding user interface, data representations, and interfacing conventions which allow the generation of fast numerical LISP code.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6279</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data Representations in PDP-10 MACLISP</title>
<link>https://hdl.handle.net/1721.1/6278</link>
<description>Data Representations in PDP-10 MACLISP
Steele, Guy Lewis, Jr.
The internal representations of the various  MacLISP data types are presented and  discussed. Certain implementation tradeoffs  are considered. The ultimate decisions on  these tradeoffs are discussed in the light of  MacLISP's prime objective of being an  efficient high-level language for the  implementation of large systems such as  MACSYMA. The basic strategy of garbage  collection is outlined, with reference to the  specific representations involved. Certain  "clever tricks" are explained and justified. The  "address space crunch" is explained and  some alternative solutions explored.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6278</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wusor II: A Computer Aided Instruction Program with Student Modelling Capabilities</title>
<link>https://hdl.handle.net/1721.1/6277</link>
<description>Wusor II: A Computer Aided Instruction Program with Student Modelling Capabilities
Carr, Brian
Wusor II is the second program that has been developed to tutor students in the game of Wumpus. From the earlier efforts with Wusor I it was possible to produce a rule-based expert which possessed a relatively complete mastery of the game. Wusor II endeavors to teach the knowledge embodied in the rules used by the Expert. The Student Model represents Wusor's estimation of the student's knowledge of said rules, and this estimation is based primarily on analyses of the player's moves. The Student Model allows Wusor to personalize its explanations to the student according to the student's current knowledge of the game. The result is a system which, according to preliminary results, is highly effective at tutoring students of varied abilities.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6277</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation and Recognition of the Spatial Organization of Three Dimensional Shapes</title>
<link>https://hdl.handle.net/1721.1/6276</link>
<description>Representation and Recognition of the Spatial Organization of Three Dimensional Shapes
Marr, D.; Nishihara, H.K.
The human visual process can be studied by  examining the computational problems  associated with deriving useful information  from retinal images. In this paper, we apply  this approach to the problem of representing  three-dimensional shapes for the purpose of  recognition.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6276</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representing Visual Information</title>
<link>https://hdl.handle.net/1721.1/6275</link>
<description>Representing Visual Information
Marr, D.
Vision is the construction of efficient symbolic descriptions from images of the world. An important aspect of vision is the choice of representations for the different kinds of information in a visual scene. In the early stages of the analysis of an image, the representations used depend more on what it is possible to compute from an image than on what is ultimately desirable, but later representations can be more sensitive to the specific needs of recognition. This essay surveys recent work in vision at M.I.T. from a perspective in which the representational problems assume a primary importance. An overall framework is suggested for visual information processing, in which the analysis proceeds through three representations: (1) the primal sketch, which makes explicit the intensity changes and local two-dimensional geometry of an image; (2) the 2 1/2-D sketch, which is a viewer-centered representation of the depth, orientation and discontinuities of the visible surfaces; and (3) the 3-D model representation, which allows an object-centered description of the three-dimensional structure and organization of a viewed shape. Recent results concerning processes for constructing and maintaining these representations are summarized and discussed.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6275</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning by Creating and Justifying Transfer Frames</title>
<link>https://hdl.handle.net/1721.1/6274</link>
<description>Learning by Creating and Justifying Transfer Frames
Winston, Patrick H.
In the particular kind of learning discussed in  this paper, the teacher names a destination  and a source. In the sentence, "Robbie is like  a fox," Robbie is the destination and fox is the  source. The student, on analyzing the  teacher's instruction, computes a filter called  a transfer frame. The transfer frame stands  between the source and the destination and  determines what information is allowed to  pass from one to the other.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6274</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control and Learning by the State Space Model: Experimental Findings</title>
<link>https://hdl.handle.net/1721.1/6273</link>
<description>Control and Learning by the State Space Model: Experimental Findings
Raibert, Marc
This is the second of a two part presentation of a model for motor control and learning. The model was implemented using a small computer and the MIT-Scheinman manipulator. Experiments were conducted which demonstrate the controller's ability to learn new movements, adapt to mechanical changes caused by inertial and elastic loading, and generalize its behavior among similar movements. A second generation model, based on improvements suggested by these experiments, is proposed.
</description>
<pubDate>Fri, 01 Apr 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6273</guid>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Viewing Control Structures as Patterns of Passing Messages</title>
<link>https://hdl.handle.net/1721.1/6272</link>
<description>Viewing Control Structures as Patterns of Passing Messages
Hewitt, Carl
The purpose of this paper is to discuss some organizational aspects of programs using the actor model of computation. In this paper we present an approach to modelling intelligence in terms of a society of communicating knowledge-based problem-solving experts. In turn each of the experts can be viewed as a society that can be further decomposed in the same way until the primitive actors of the system are reached. We are investigating the nature of the communication mechanisms needed for effective problem-solving by a society of experts and the conventions of discourse that make this possible. In this way we hope eventually to develop a framework adequate for the discussion of the central issues of problem-solving involving parallel versus serial processing and centralization versus decentralization of control and information storage.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6272</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Evaluation Using Conceptual Representations for Programs with Side-Effects</title>
<link>https://hdl.handle.net/1721.1/6271</link>
<description>Symbolic Evaluation Using Conceptual Representations for Programs with Side-Effects
Yonezawa, Akinori; Hewitt, Carl
Symbolic evaluation is a process which abstractly evaluates a program on abstract data. A formalism based on conceptual representations is proposed as a specification language for programs with side-effects. Relations between algebraic specifications and specifications based on conceptual representations are discussed and limitations of the current algebraic specification techniques are pointed out. Symbolic evaluation is carried out with explicit use of a notion of situations. Uses of situational tags in assertions make it possible to state relations about properties of objects in different situations. The proposed formalism can deal with problems of side-effects which have been beyond the scope of Floyd-Hoare proof rules and give a solution to McCarthy's frame problem.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6271</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capturing Intuitive Knowledge in Procedural Description</title>
<link>https://hdl.handle.net/1721.1/6270</link>
<description>Capturing Intuitive Knowledge in Procedural Description
Bamberger, Jeanne
Trying to capture intuitive knowledge is a little like trying to capture the moment between what just happened and what is about to happen. Or to quote a famous philosopher, "You can't put your foot in the same river once." The problem is that you can only "capture" what stands still. Intuitive knowledge is not a static structure, but rather a continuing process of constructing coherence and meaning out of the sensory phenomena that come at you. To capture intuitive knowledge, then, means: Given some phenomena, what are your spontaneous ways of selecting significant features or for choosing what constitutes an element; how do you determine what is the same and what is different; how do you aggregate or chunk the sensory data before you?
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6270</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Teaching the Computer to Add: An Example of Problem-Solving in an Anthropomorphic Computer Culture</title>
<link>https://hdl.handle.net/1721.1/6269</link>
<description>Teaching the Computer to Add: An Example of Problem-Solving in an Anthropomorphic Computer Culture
Solomon, Cynthia J.
Computers open up new ways to think about  knowledge and learning. Learning computer  science should draw upon and feed these  new approaches. In a previous paper called  "Leading a Child to a Computer Culture" I  discuss some ways to do so in a very  elementary context. This paper is a  contribution to extending such thinking to a  more advanced project.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6269</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pre-Readers' Concepts of the English Word</title>
<link>https://hdl.handle.net/1721.1/6268</link>
<description>Pre-Readers' Concepts of the English Word
Lawler, Robert
Pre-Readers exhibit concepts of the English word different from those of literate adults. The inclusive word concept is primary: A word is what we call an utterance and any of its parts. Pre-Readers suffer confusion between homophones at the syllabic level, e.g., the sound of the suffix in "PUPPY" is confused with the name of the letter. Conflict between implicit judgments of wordhood (inferred from the child's counting of the number of words in an utterance) and explicit judgments (responses to questions about whether an item is a word) vary from high, for pre-readers, to low, for beginning readers. The justifications pre-readers offer to support their judgments of wordhood are notable for not including any arguments based on immediate verbal context. A concept development theory is offered to interpret these data and their relation to learning to read.
</description>
<pubDate>Mon, 01 Nov 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6268</guid>
<dc:date>1976-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local Methods for Localizing Faults in Electronic Circuits</title>
<link>https://hdl.handle.net/1721.1/6267</link>
<description>Local Methods for Localizing Faults in Electronic Circuits
Kleer, Johan De
The work described in this paper is part of an investigation of the issues involved in making expert problem solving programs for engineering design and for maintenance of engineered systems. In particular, the paper focuses on the troubleshooting of electronic circuits. Only the individual properties of the components are used, and not the collective properties of groups of components. The concept of propagation is introduced, which uses the voltage-current properties of components to determine additional information from given measurements. Two propagated values can be discovered for the same point. This is called a coincidence. In a faulted circuit, the assumptions made about components in the coinciding propagations can then be used to determine information about the faultiness of these components. In order for the program to deal with actual circuits, it handles errors in measurement readings and tolerances in component parameters. This is done by propagating ranges of numbers instead of single numbers. Unfortunately, the comparing of ranges introduces many complexities into the theory of coincidences. In conclusion, we show how such local deductions can be used as the basis for qualitative reasoning and troubleshooting.
</description>
<pubDate>Mon, 01 Nov 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6267</guid>
<dc:date>1976-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Student Science Training Program in Mathematics, Physics and Computer Science</title>
<link>https://hdl.handle.net/1721.1/6266</link>
<description>Student Science Training Program in Mathematics, Physics and Computer Science
Abelson, Harold; diSessa, Andy
During the summer of 1976, the Massachusetts Institute of Technology Artificial Intelligence Laboratory sponsored a Student Science Training Program in Mathematics, Physics and Computer Science for high ability secondary school students. This report describes, in some detail, the style of the program, the curriculum and the projects the students undertook.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6266</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computation of Locally Parallel Structure</title>
<link>https://hdl.handle.net/1721.1/6265</link>
<description>Computation of Locally Parallel Structure
Stevens, Kent A.
A Moire-like effect can be observed in dot  patterns consisting of two superimposed  copies of a random dot pattern where one  copy has been expanded, translated, or  rotated. One perceives in these patterns a  structure that is locally parallel. Our ability to  perceive this structure is shown by experiment  to be limited by the local geometry of the  pattern, independent of the overall structure or  the dot density. A simple representation of  locally parallel structure is proposed, and it is  found to be computable by a non-iterative,  parallel algorithm. An implementation of this  algorithm is demonstrated. Its performance  parallels that observed experimentally,  providing a potential explanation for human  performance. Advantages are discussed for  the early description of locally parallel  structure in the course of visual processing.
</description>
<pubDate>Tue, 01 Mar 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6265</guid>
<dc:date>1977-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grammar as a Programming Language</title>
<link>https://hdl.handle.net/1721.1/6264</link>
<description>Grammar as a Programming Language
Rowe, Neil
This paper discusses some student projects involving generative grammars. While grammars are usually associated with linguistics, their usefulness goes far beyond just "language" to many different domains. Their application is general enough to make grammars a sort of programming language in their own right.
</description>
<pubDate>Fri, 01 Oct 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6264</guid>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>PAZATN: A Linguistic Approach to Automatic Analysis of Elementary Programming Protocols</title>
<link>https://hdl.handle.net/1721.1/6263</link>
<description>PAZATN: A Linguistic Approach to Automatic Analysis of Elementary Programming Protocols
Miller, Mark L.; Goldstein, Ira P.
PATN is a design for a machine problem  solver which uses an augmented transition  network (ATN) to represent planning  knowledge. In order to explore PATN's  potential as a theory of human problem  solving, a linguistic approach to protocol  analysis is presented. An interpretation of a  protocol is taken to be a parse tree  supplemented by semantic and pragmatic  annotation attached to various nodes. This  paradigm has implications for constructing a  cognitive model of the individual and  designing computerized tutors.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6263</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structured Planning and Debugging: A Linguistic Theory of Design</title>
<link>https://hdl.handle.net/1721.1/6262</link>
<description>Structured Planning and Debugging: A Linguistic Theory of Design
Goldstein, Ira P.; Miller, Mark L.
A unified theory of planning and debugging is explored by designing a problem solving program called PATN. PATN uses an augmented transition network (ATN) to represent a broad range of planning techniques, including identification, decomposition, and reformulation. (The ATN [Woods 1970] is a simple yet powerful formalism which has been effectively utilized in computational linguistics.)
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6262</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>SPADE: A Grammar Based Editor for Planning and Debugging Programs</title>
<link>https://hdl.handle.net/1721.1/6261</link>
<description>SPADE: A Grammar Based Editor for Planning and Debugging Programs
Miller, Mark L.; Goldstein, Ira P.
A grammar of plans is developed from a taxonomy of basic planning techniques. This grammar serves as the basis for the design of a new kind of interactive programming environment (SPADE), in which programs are generated by explicitly articulating planning decisions. The utility of this approach to program definition is that a record of these decisions, called the plan derivation, provides guidance for subsequent modification or debugging of the program.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6261</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parsing Protocols Using Problem Solving Grammars</title>
<link>https://hdl.handle.net/1721.1/6260</link>
<description>Parsing Protocols Using Problem Solving Grammars
Miller, Mark L.; Goldstein, Ira P.
A theory of the planning and debugging of programs is formalized as a context free grammar. The grammar is used to reveal the constituent structure of problem solving episodes, by parsing protocols in which programs are written, tested and debugged. This is illustrated by the detailed analysis of an actual session with a beginning student. The virtues and limitations of the context free formalism are considered.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6260</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>AI Based Personal Learning Environments: Directions for Long Term Research</title>
<link>https://hdl.handle.net/1721.1/6259</link>
<description>AI Based Personal Learning Environments: Directions for Long Term Research
Goldstein, Ira P.; Miller, Mark L.
The application of artificial intelligence (AI)  techniques to the design of personal learning  environments is an enterprise of both  theoretical and practical interest. In the short  term, the process of developing and testing  intelligent tutoring programs serves as a new  experimental vehicle for exploring alternative  cognitive and pedagogical theories. In the  long term, such programs should supplement  the educational supervision and guidance  provided by human teachers. This paper  illustrates our long term perspective by a  scenario with a hypothetical tutoring system  for elementary graphics programming.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6259</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overview of a Linguistic Theory of Design</title>
<link>https://hdl.handle.net/1721.1/6258</link>
<description>Overview of a Linguistic Theory of Design
Miller, Mark L.; Goldstein, Ira P.
The SPADE theory uses linguistic formalisms to model the program planning and debugging processes. The theory has been applied to constructing a grammar-based editor in which programs are written in a structured fashion, designing an automatic programming system based on Augmented Transition Networks, and parsing protocols of programming episodes.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6258</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dual Coding and the Representation of Letter Strings</title>
<link>https://hdl.handle.net/1721.1/6257</link>
<description>Dual Coding and the Representation of Letter Strings
Rosenberg, Steven T.
Sub-strings derived from four-letter strings (e.g. ABCD) were presented to subjects using a variation on Bransford and Franks' (1971) paradigm. Each string was in either upper or lower case. Subjects were then tested for recognition of the strings, false recognition of translations of the strings into the other case, and false recognition of new but legal strings. Subjects accepted previously seen strings most frequently, followed by translations, with new strings accepted least often. This replicates Rosenberg and Simon's (in press) findings with sentences and pictures that express the same concept. However, in the present experiment the two forms of a string were unbiased with respect to verbal or pictorial encoding. The forms in which a string could appear (upper or lower case) were not confounded with the two types of encoding (verbal and pictorial) hypothesized by a dual coding theory. The results supported the view that the previously reported difference between the original form and a translation is best explained by a model which uses a single representation that preserves some form distinctions.
</description>
<pubDate>Fri, 01 Jul 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6257</guid>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wumpus Advisor 1: A First Implementation Program that Tutors Logical and Probabilistic Reasoning Skills</title>
<link>https://hdl.handle.net/1721.1/6256</link>
<description>Wumpus Advisor 1: A First Implementation Program that Tutors Logical and Probabilistic Reasoning Skills
Stansfield, James L.; Carr, Brian P.; Goldstein, Ira P.
The Wumpus Advisor program offers advice to  a player involved in choosing the best move in  a game for which competence in dealing with  incomplete and uncertain knowledge is  required. The design and implementation of  the advisor explores a new paradigm in  Computer Assisted Instruction, in which the  performance of computer-based tutors is  greatly improved through the application of  Artificial Intelligence techniques. This report  describes the design of the Advisor and  outlines directions for further work. Our  experience with the tutor is informal and  psychological experimentation remains to be  done.
</description>
<pubDate>Fri, 01 Oct 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6256</guid>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis</title>
<link>https://hdl.handle.net/1721.1/6255</link>
<description>Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis
Stallman, Richard M.; Sussman, Gerald Jay
We present a rule-based system for computer-aided circuit analysis. The set of rules, called EL, is written in a rule language called ARS. Rules are implemented by ARS as pattern-directed invocation demons monitoring an associative data base. Deductions are performed in an antecedent manner, giving EL's analysis a catch-as-catch-can flavor suggestive of the behavior of expert circuit analyzers. We call this style of circuit analysis propagation of constraints. The system threads deduced facts with justifications which mention the antecedent facts and the rule used. These justifications may be examined by the user to gain insight into the operation of the set of rules as they apply to a problem. The same justifications are used by the system to determine the currently active data-base context for reasoning in hypothetical situations. They are also used by the system in the analysis of failures to reduce the search space. This leads to effective control of combinatorial search which we call dependency-directed backtracking.
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6255</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation and Recognition of the Spatial Organization of Three-Dimensional Shapes</title>
<link>https://hdl.handle.net/1721.1/6254</link>
<description>Representation and Recognition of the Spatial Organization of Three-Dimensional Shapes
Marr, D.; Nishihara, H.K.
A method is given for representing 3-D shapes. It is based on a hierarchy of stick figures (called 3-D models), where each stick corresponds to an axis in the shape's generalized cone representation. Although the representation of a complete shape may contain many stick figures at different levels of detail, only one stick figure is examined at a time while the representation is being used to interpret an image. By thus balancing scope of description against detail, the complexity of the computations needed to support the representation is minimized. The method requires (a) a database of stored stick figures; (b) a simple device called the image-space processor for moving between object-centered and viewer-centered coordinate frames; and (c) a process for "relaxing" a stored model onto the image during recognition. The relation of the theory to "mental rotation" phenomena is discussed, and some critical experimental predictions are made.
</description>
<pubDate>Sun, 01 Aug 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6254</guid>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Geometry of Linear Threshold Functions</title>
<link>https://hdl.handle.net/1721.1/6253</link>
<description>Computational Geometry of Linear Threshold Functions
Abelson, Harold
Linear threshold machines are defined to be  those whose computations are based on the  outputs of a set of linear threshold decision  elements. The number of such elements is  called the rank of the machine. An analysis of  the computational geometry of finite-rank  linear threshold machines, analogous to the  analysis of finite-order perceptrons given by  Minsky and Papert, reveals that the use of  such machines as "general purpose pattern  recognition systems" is severely limited. For  example, these machines cannot recognize  any topological invariant, nor can they  recognize non-trivial figures "in context".
</description>
<pubDate>Thu, 01 Jul 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6253</guid>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Case Study of a Young Child Doing Turtle Graphics in LOGO</title>
<link>https://hdl.handle.net/1721.1/6252</link>
<description>A Case Study of a Young Child Doing Turtle Graphics in LOGO
Solomon, Cynthia J.; Papert, Seymour A.
This paper explores some important issues  with regard to using computers in education. It  probes into the question of what  programming ideas and projects will engage  young children. In particular, a seven year old  child's involvement in turtle graphics is  presented as a case study.
</description>
<pubDate>Thu, 01 Jul 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6252</guid>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computerized Look at Cat Locomotion or One Way to Scan a Cat</title>
<link>https://hdl.handle.net/1721.1/6251</link>
<description>A Computerized Look at Cat Locomotion or One Way to Scan a Cat
Speckert, Glen
This paper describes a three phase project concerning the watching, analyzing, and describing of the motions of a cat in various gaits. All data is based on two 16mm films of an actual cat moving on a treadmill. In phase I, the low level issues of tracking key points on the cat from frame to frame are discussed. Phase II deals with building and using a graphics tool to analyze the data of phase I. Phase III is a high level discussion of cat locomotion based on the trajectories and movements explored by phase II.
</description>
<pubDate>Thu, 01 Jul 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6251</guid>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Poetic and Social Criteria for Education Design</title>
<link>https://hdl.handle.net/1721.1/6250</link>
<description>Some Poetic and Social Criteria for Education Design
Papert, Seymour A.
Ten years is in some ways a challenging and in some ways a very awkward period for predicting the impact of computers in education. If you asked me whether the practice of education will have undergone a fundamental change through the impact of computers in either five years or in twenty-five years, I could answer with complete confidence "NO" to the first question and "YES" to the second. But what happens in the ten years depends very sensitively on how hard we try; on when the people with the requisite financial, intellectual and moral resources recognize the opportunity and the urgency of action. If we act smartly it is still possible that by 1985 the existence of model schools and learning centers will have changed the ball-park in which society sets the sights of its educational ambitions.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6250</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Occluding Contour</title>
<link>https://hdl.handle.net/1721.1/6249</link>
<description>Analysis of Occluding Contour
Marr, D.
Almost nothing can be deduced about a general 3-D surface given only its occluding contours in an image, yet contour information is easily and effectively used by us to infer the shape of a surface. Therefore, implicit in the perceptual analysis of occluding contour must lie various assumptions about the viewed surfaces. The assumptions that seem most natural are (a) that the distinction between convex and concave segments reflects real properties of the viewed surface; and (b) that contiguous portions of contour arise from contiguous parts of the viewed surface, i.e. there are no invisible obscuring edges. It is proved that, for smooth surfaces, these assumptions are essentially equivalent to assuming that the viewed surface is a generalized cone. Methods are defined for finding the axis of such a cone, and for segmenting a surface constructed of several cones into its components, whose axes can then be found separately. These methods, together with the algorithms for implementing them devised by Vatan &amp; Marr (1977), provide one link between an uninterpreted figure extracted from an image, and the 3-D representation theory of Marr and Nishihara (1977).
</description>
<pubDate>Fri, 01 Oct 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6249</guid>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal to NSF: An Evaluative Study of Modern Technology in Education</title>
<link>https://hdl.handle.net/1721.1/6248</link>
<description>Proposal to NSF: An Evaluative Study of Modern Technology in Education
Papert, Seymour A.
This proposal to the NSF describes a new phase of research planned in LOGO. Previous phases have concentrated on developing a conceptual superstructure (theories and teaching methods) and a material infrastructure (hardware and software) for a new style of using computers in education. We now want to test, to prove and to disseminate the results of our work, which will, of course, continue along the lines of the early phases. Part 1 is an overview of where we are and what we have to do next in the historical framework of the uses of computers for education. Parts 2 and 3 focus more on the specific content of the work planned for the next three years (1976-79).
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6248</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesizing Constraint Expressions</title>
<link>https://hdl.handle.net/1721.1/6247</link>
<description>Synthesizing Constraint Expressions
Freuder, Eugene C.
An algorithm is presented for determining the  values which simultaneously satisfy a set of  relations, or constraints, involving different  subsets of n variables. The relations are  represented in a series of constraint  networks, which ultimately contain a node for  every subset of the n variables. Constraints  may be propagated through such networks in  (potentially) parallel fashion to determine the  values which simultaneously satisfy all the  constraints. The iterated constraint  propagation serves to mitigate combinatorial  explosion. Applications in scene analysis,  graph theory, and backtrack search are  provided.
</description>
<pubDate>Thu, 01 Jul 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6247</guid>
<dc:date>1976-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Text-Justifier TJ6</title>
<link>https://hdl.handle.net/1721.1/6246</link>
<description>The Text-Justifier TJ6
Cohen, Joseph D.
This memo, intended as both a reference and user's manual, describes the text-justifying program TJ6, which compiles a neat output document from a sloppy input manuscript. TJ6 can justify and fill text; automatically number pages and figures; control page format and indentation; underline, superscript, and subscript; print a table of contents; etc.
</description>
<pubDate>Sat, 01 May 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6246</guid>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Near is Near?</title>
<link>https://hdl.handle.net/1721.1/6245</link>
<description>How Near is Near?
Danofsky, Murray Elias
This paper presents a system for  understanding the concept of near and far,  weighing such factors as purpose of the  judgement, dimensions of the objects,  absolute size of the distance, and size of the  distance relative to other objects, ranges, and  standards. A further section discusses the  meaning of phrases such as very near, much  nearer than, and as near as. Although we will  speak of near as a judgement about physical  distance, most of the ideas developed will be  applicable to any continuous measurable  parameter, such as size or time. An  adaptation for rows (discrete spaces) is made  as well.
</description>
<pubDate>Sun, 01 Feb 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6245</guid>
<dc:date>1976-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leading a Child to a Computer Culture</title>
<link>https://hdl.handle.net/1721.1/6244</link>
<description>Leading a Child to a Computer Culture
Solomon, Cynthia J.
"LOGO" is sometimes used as the name of a  programming language. It is also used as the  name of...what shall I call it?... an  environment, a culture, a way of thinking about  computers and about learning and about  putting the two together. I shall try to convey to  you how I bring a child into this environment.
</description>
<pubDate>Mon, 01 Dec 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6244</guid>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Development of Musical Intelligence I: Strategies for Representing Simple Rhythms</title>
<link>https://hdl.handle.net/1721.1/6243</link>
<description>The Development of Musical Intelligence I: Strategies for Representing Simple Rhythms
Bamberger, Jeanne
This paper is the first in a series of  monographs which will describe various  aspects of the development of musical  intelligence.
</description>
<pubDate>Sat, 01 Nov 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6243</guid>
<dc:date>1975-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Disposition of Axes in a Generalized Cylinder Representation of Objects</title>
<link>https://hdl.handle.net/1721.1/6242</link>
<description>Spatial Disposition of Axes in a Generalized Cylinder Representation of Objects
Marr, D.; Nishihara, H.K.
It is proposed that the 3-D representation of  an object is based primarily on a stick-figure  configuration, where each stick represents  one or more axes in the object's generalized  cylinder representation. The loosely  hierarchical description of a stick figure is  interpreted by a special-purpose processor,  able to maintain two vectors and the  gravitational vertical relative to a Cartesian  space-frame. It delivers information about the  appearance of these vectors, which helps the  system to rotate its model into the correct 3-D  orientation relative to the viewer during  recognition.
</description>
<pubDate>Mon, 01 Dec 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6242</guid>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Early Processing of Visual Information</title>
<link>https://hdl.handle.net/1721.1/6241</link>
<description>Early Processing of Visual Information
Marr, D.
The article describes a symbolic approach to  visual information processing, and sets out  four principles that appear to govern the  design of complex symbolic information  processing systems. A computational theory  of early visual information processing is  presented, which extends to about the level of  figure-ground separation. It includes a  process-oriented theory of texture vision. Most  of the theory has been implemented, and  examples are shown of the analysis of  several natural images. This replaces Memos  324 and 334.
</description>
<pubDate>Mon, 01 Dec 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6241</guid>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Very Large Planner-Type Data Bases</title>
<link>https://hdl.handle.net/1721.1/6240</link>
<description>Very Large Planner-Type Data Bases
McDermott, Drew V.
This paper describes the implementation of a typical data-base manager for an A.I. language like Planner, Conniver, or QA4, and some proposed extensions for applications involving greater quantities of data than usual. The extensions are concerned with data bases involving several active and potentially active sub-data-bases, or "contexts". The major mechanisms discussed are the use of contexts as packets of data with free variables; and indexing data according to the contexts they appear in. The paper also defends the Planner approach to data representations against some more recent proposals.
</description>
<pubDate>Mon, 01 Sep 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6240</guid>
<dc:date>1975-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Art of Snaring Dragons</title>
<link>https://hdl.handle.net/1721.1/6239</link>
<description>The Art of Snaring Dragons
Cohen, Harvey A.
DRAGONs are formidable problems in elementary mechanics not amenable to solution by naïve formula cranking. What is the intellectual weaponry one needs to snare a Dragon? To snare a Dragon one brings to mind an heuristic frame, a specifically structured association of problem solving ideas. Data on the anatomy of heuristic frames, just how and what ideas are linked together, has been obtained from the protocols of many attacks on Dragons by students and physicists. In this paper various heuristic frames are delineated by detailing how they motivate attacks on two particular Dragons, Milko and Jugglo, from the writer's compilation. This model of the evolution of problem solving skills has also been applied to the interpretation of the intellectual growth of children, and in an Appendix we use it to give a cogent interpretation for the protocols of Piagetian "Conservation" experiments. The model provides a sorely needed theoretical framework to discuss teaching stratagems calculated to promote problem solving skills.
Revised May 1975
</description>
<pubDate>Fri, 01 Nov 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6239</guid>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence, Language and the Study of Knowledge</title>
<link>https://hdl.handle.net/1721.1/6238</link>
<description>Artificial Intelligence, Language and the Study of Knowledge
Goldstein, Ira; Papert, Seymour A.
This paper studies the relationship of Artificial Intelligence to the study of language and the representation of the underlying knowledge which supports the comprehension process. It develops the view that intelligence is based on the ability to use large amounts of diverse kinds of knowledge in procedural ways, rather than on the possession of a few general and uniform principles. The paper also provides a unifying thread to a variety of recent approaches to natural language comprehension. We conclude with a brief discussion of how Artificial Intelligence may have a radical impact on education if the principles which it utilizes to explore the representation and use of knowledge are made available to the student to use in his own learning experiences. This paper is a revised version of an earlier document written with Marvin Minsky. Many of the ideas in this paper owe much to Minsky's thoughtful critique; the authors, however, take responsibility for the organization and wording of this document.
Revised March 1976
</description>
<pubDate>Tue, 01 Jul 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6238</guid>
<dc:date>1975-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Teaching Teachers LOGO: The Lesley Experiments</title>
<link>https://hdl.handle.net/1721.1/6237</link>
<description>Teaching Teachers LOGO: The Lesley Experiments
Austin, Howard
This research is concerned with the question  of whether or not teachers who lack  specialized backgrounds can adapt to and  become proficient in the technically complex,  philosophically sophisticated LOGO learning  environment. Excellent results were obtained  and are illustrated through a series of  examples of student work. The report then  gives some brief observations about the  thought styles observed and concludes with  suggestions for further work.
</description>
<pubDate>Thu, 01 Apr 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6237</guid>
<dc:date>1976-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Image Intensity Understanding</title>
<link>https://hdl.handle.net/1721.1/6236</link>
<description>Image Intensity Understanding
Horn, Berthold K.P.
Image intensities have been processed traditionally without much regard to how they arise. Typically they are used only to segment an image into regions or to find edge-fragments. Image intensities do carry a great deal of useful information about three-dimensional aspects of objects and some initial attempts are made here to exploit this. An understanding of how images are formed and what determines the amount of light reflected from a point on an object to the viewer is vital to such a development. The gradient-space, popularized by Huffman and Mackworth, is a helpful tool in this regard.
</description>
<pubDate>Fri, 01 Aug 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6236</guid>
<dc:date>1975-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing Natural Images: A Computational Theory of Texture Vision</title>
<link>https://hdl.handle.net/1721.1/6235</link>
<description>Analyzing Natural Images: A Computational Theory of Texture Vision
Marr, D.
A theory of early and intermediate visual  information processing is given, which  extends to about the level of figure-ground  separation. Its core is a computational theory  of texture vision. Evidence obtained from  perceptual and from computational  experiments is adduced in its support. A  consequence of the theory is that high-level  knowledge about the world influences visual  processing later and in a different way from  that currently practiced in machine vision.
</description>
<pubDate>Sun, 01 Jun 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6235</guid>
<dc:date>1975-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Visual Detection of Light Sources</title>
<link>https://hdl.handle.net/1721.1/6234</link>
<description>On Visual Detection of Light Sources
Ullman, Shimon
The paper addresses the following problem: Given an array of light intensities obtained from some scene, find the light sources in the original scene. The following factors are discussed from the point of view of their relevance to light source detection: the highest intensity in the scene, absolute intensity value, local and global contrast, comparison with the average intensity, and lightness computation. They are shown to be insufficient for explaining humans' ability to identify light sources in their visual field. Finally, a method for accomplishing the source detection task in the Mondrian world is presented.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6234</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ideas About Management of LISP Data Bases</title>
<link>https://hdl.handle.net/1721.1/6233</link>
<description>Ideas About Management of LISP Data Bases
Sandewall, Erik
The paper advocates the need for systems which support maintenance of LISP-type data bases, and describes an experimental system of this kind, called DABA. In this system, a description of the data base's structure is kept in the data base itself. A number of utility programs use the description for operations on the data base. The description must minimally include syntactic information reminiscent of data structure declarations in more conventional programming languages, and can be extended by the user. Two reasons for such systems are seen: (1) As A.I. programs develop from toy domains using toy data bases, to more realistic exercises, the management of the knowledge base becomes non-trivial and requires program support. (2) A powerful way to organize LISP programs is to make them data-driven, whereby pieces of program are distributed throughout a data base. A data base management system facilitates the use of this programming style. The paper describes and discusses the basic ideas in the DABA system as well as the technique of data driven programs.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6233</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thesis Progress Report: A System for Representing and Using Real-World Knowledge</title>
<link>https://hdl.handle.net/1721.1/6232</link>
<description>Thesis Progress Report: A System for Representing and Using Real-World Knowledge
Fahlman, Scott E.
This paper describes progress to date in the  development of a system for representing  various forms of real-world knowledge. The  knowledge is stored in the form of a net of  simple parallel processing elements, which  allow certain types of deduction and set-intersection to be performed very quickly and  easily. It is claimed that this approach offers  definite advantages for recognition and many  other data-accessing tasks. Suggestions are  included for the application of this system as  a tool in vision, natural-language processing,  speech recognition, and other problem  domains.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6232</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computational View of the Skill of Juggling</title>
<link>https://hdl.handle.net/1721.1/6231</link>
<description>A Computational View of the Skill of Juggling
Austin, Howard
This research has as its basic premise the  belief that physical and mental skills are  highly similar, enough so in fact that  computation paradigms such as the ones  used in Artificial Intelligence research about  predominantly mental skills can be usefully  extended to include physical skills. This  thesis is pursued experimentally by  categorization of "juggling bugs" via detailed  video observations. A descriptive language for  juggling movements is developed and a  taxonomy of bugs is presented. The  remainder of the paper is concerned with an  empirical determination of the characteristics  of an ultimate theory of juggling movements.  The data presented is relevant to the  computational issues of control structure,  naming, addressing and subprocedurization.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6231</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Localization of Failures in Radio Circuits: A Study in Causal and Teleological Reasoning</title>
<link>https://hdl.handle.net/1721.1/6230</link>
<description>Localization of Failures in Radio Circuits: A Study in Causal and Teleological Reasoning
Sussman, Gerald Jay; Brown, Allen L.
This paper examines some methodologies  for diagnosing correctly designed radio  circuits which are failing to perform in the  intended way because of some faulty  component. Particular emphasis is placed on  the utility and necessity of good teleological  descriptions in successfully executing the  task of isolating failing components.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6230</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Glossary of PDP11 LOGO Primitives</title>
<link>https://hdl.handle.net/1721.1/6229</link>
<description>A Glossary of PDP11 LOGO Primitives
Goldenberg, E. Paul
This glossary was written for the purpose of providing a quick and concise yet accurate description of the primitives and special words and characters of the March 18, 1975 PDP 11 implementation of the LOGO language. Many entries include references to other related words and/or examples of the use of the primitive being described, but this is not intended to replace the functions of a good manual. For a more detailed and comprehensive description of the language, see the LOGO MANUAL, LOGO MEMO 7. The description of each LOGO word includes the word itself, any arguments that the word may require, the "type" of word it is, abbreviated and alternate forms of the word, if any, and a definition correct as of the date of this glossary. Word type is described on the first page and an example of the format of the entries is given below. In the appendix to this glossary are sections about 1) LOGO words that take a variable number of inputs, 2) infix operators, 3) editing characters, 4) special characters, 5) special names, 6) decimal ASCII codes and corresponding characters.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6229</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Glossary of LOGO Primitives</title>
<link>https://hdl.handle.net/1721.1/6228</link>
<description>A Glossary of LOGO Primitives
Abelson, Harold; Adams, Jim
This is a brief description of the primitives in  PDP 11 LOGO. It is intended to provide a  quick reference for users who are already  familiar with LOGO basics. For a more  detailed and comprehensive description of  LOGO, consult the LOGO Manual (A.I. Memo  313, LOGO Memo 7).
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6228</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>What's in a Tune</title>
<link>https://hdl.handle.net/1721.1/6227</link>
<description>What's in a Tune
Bamberger, Jeanne
The work reported here began with two  fundamental assumptions: 1) The perception  of music is an active process; it involves the  individual in selecting, sorting, and grouping  the features of the phenomena before her. 2)  Individual differences in response to a  potentially sensible melody rest heavily on  just which features the individual has access  to or is able to focus on.
</description>
<pubDate>Fri, 01 Nov 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6227</guid>
<dc:date>1974-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>LOGO Manual</title>
<link>https://hdl.handle.net/1721.1/6226</link>
<description>LOGO Manual
Abelson, Harold; Goodman, Nat; Rudolph, Lee
This document describes the LOGO system implemented for the PDP 11/45 at the M.I.T. Artificial Intelligence Laboratory. The "system" includes not only the LOGO evaluator, but also a dedicated time-sharing system which services about a dozen users. There are also various special devices such as robot turtles, tone generators, and CRT displays.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6226</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Luxury of Necessity</title>
<link>https://hdl.handle.net/1721.1/6225</link>
<description>The Luxury of Necessity
Bamberger, Jeanne
This paper was originally written as an address to a conference of the National Association of Schools of Music on "The Music Consumer". Posing a series of questions which point to fundamental issues underlying the LOGO music project, the paper goes on to describe some of the specific projects with which students have been working in an effort to probe these issues. Emphasis is placed on "modes of representation" as a significant realm of enquiry: just how does an individual represent a tune to himself, what are the differences between formal and informal modes of representation, what features and relations of a melody does a representation capture, what does it leave out? What is the influence of such modes of "perception", how do they affect strategies of problem solving, notions of "same" and "different", or even influence musical "taste"? Finally, there are some hints at what might constitute "sufficiently powerful representations" of musical design, with examples from both simple and complex pieces of music as well as a probe into what might distinguish "simple" from "complex" musical designs.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6225</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>TORTIS: Toddler's Own Recursive Turtle Interpreter System</title>
<link>https://hdl.handle.net/1721.1/6224</link>
<description>TORTIS: Toddler's Own Recursive Turtle Interpreter System
Perlman, Radia
TORTIS is a device for preschool children to communicate with and program the turtle. It consists of several boxes (currently 3 button boxes and two blox boxes) designed so that only a few new concepts are introduced at a time, but more can be added when the child becomes familiar with what he has. Hopefully the transitions are gradual enough that the child never thinks talking to the turtle is too hard or that he is "too dumb". And hopefully playing with the system should teach such concepts as numbers, breaking large problems into small solvable steps, writing and debugging procedures, recursion, variables, and conditionals. Most important of all, it should teach that learning is fun.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6224</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Commenting Proofs</title>
<link>https://hdl.handle.net/1721.1/6223</link>
<description>Commenting Proofs
Geiser, James R.
This paper constitutes a summary of a seminar entitled "Commenting Proofs" given at the Artificial Intelligence Laboratory during the spring of 1974. The work is concerned with new syntactic structures in formal proofs which derive from their pragmatic and semantic aspects. It is a synthesis of elements from Yessenin-Volpin's foundational studies and developments in Artificial Intelligence concerned with commenting programs and the use of this idea in automatic debugging procedures.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6223</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Force Feedback in Precise Assembly Tasks</title>
<link>https://hdl.handle.net/1721.1/6222</link>
<description>Force Feedback in Precise Assembly Tasks
Inoue, Hirochika
This paper describes the execution of precise  assembly tasks by a robot. The level of  performance of the experimental system  allows such basic actions as putting a peg  into a hole, screwing a nut on a bolt, and  picking up a thin piece from a flat table. The  tolerance achieved in experiments was 0.001  inch. The experiments proved that force  feedback enabled a reliable assembly of a  bearing complex consisting of eight parts with  close tolerances. A movie of the  demonstration is available.
</description>
<pubDate>Thu, 01 Aug 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6222</guid>
<dc:date>1974-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>LLOGO: An Implementation of LOGO in LISP</title>
<link>https://hdl.handle.net/1721.1/6221</link>
<description>LLOGO: An Implementation of LOGO in LISP
Goldstein, Ira; Lieberman, Henry; Bochner, Harry; Miller, Mark
This paper describes LLOGO, an  implementation of the LOGO language written  in MACLISP for the ITS, TEN50 and TENEX  PDP-10 systems, and MULTICS. The relative  merits of LOGO and LISP as educational  languages are discussed. Design decisions  in the LISP implementation of LOGO are  contrasted with those of two other  implementations: CLOGO for the PDP-10 and  11LOGO for the PDP-11, both written in  assembler language. LLOGO's special  facilities for character-oriented display  terminals, graphic display 'turtles', and music  generation are also described.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6221</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>LLOGO: An Implementation of LOGO in LISP</title>
<link>https://hdl.handle.net/1721.1/6220</link>
<description>LLOGO: An Implementation of LOGO in LISP
Goldstein, Ira; Lieberman, Henry; Bochner, Harry; Miller, Mark
This paper describes LLOGO, an  implementation of the LOGO language written  in MACLISP for the ITS, TEN50 and TENEX  PDP-10 systems, and MULTICS. The relative  merits of LOGO and LISP as educational  languages are discussed. Design decisions  in the LISP implementation of LOGO are  contrasted with those of two other  implementations: CLOGO for the PDP-10 and  11LOGO for the PDP-11, both written in  assembler language. LLOGO's special  facilities for character-oriented display  terminals, graphic display 'turtles', and music  generation are also described.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6220</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Summary of MYCROFT: A System for Understanding Simple Picture Programs</title>
<link>https://hdl.handle.net/1721.1/6219</link>
<description>Summary of MYCROFT: A System for Understanding Simple Picture Programs
Goldstein, Ira P.
A collection of powerful ideas (description, plans, linearity, insertions, global knowledge and imperative semantics) is explored which are fundamental to debugging skill. To make these concepts precise, a computer monitor called MYCROFT is described that can debug elementary programs for drawing pictures. The programs are those written for LOGO turtles.
</description>
<pubDate>Wed, 01 May 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6219</guid>
<dc:date>1974-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plane Geometry Theorem Proving Using Forward Chaining</title>
<link>https://hdl.handle.net/1721.1/6218</link>
<description>Plane Geometry Theorem Proving Using Forward Chaining
Nevins, Arthur J.
A computer program is described which  operates on a subset of plane geometry. Its  performance not only compares favorably with  previous computer programs, but within its  limited problem domain (e.g. no curved lines  nor introduction of new points), it also invites  comparison with the best human theorem  provers. The program employs a combination  of forward and backward chaining with the  forward component playing the more  important role. This, together with a deeper  use of diagrammatic information, allows the  program to dispense with the diagram filter in  contrast with its central role in previous  programs. An important aspect of human  problem solving may be the ability to structure  a problem space so that forward chaining  techniques can be used effectively.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6218</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Relaxation Approach to Splitting in an Automatic Theorem Prover</title>
<link>https://hdl.handle.net/1721.1/6217</link>
<description>A Relaxation Approach to Splitting in an Automatic Theorem Prover
Nevins, Arthur J.
The splitting of a problem into subproblems often involves the same variable appearing in more than one of the subproblems. This makes these subproblems dependent upon one another, since a solution to one may not qualify as a solution to another. A two-stage method of splitting is described which first obtains solutions by relaxing the dependency requirement and then attempts to reconcile solutions to different subproblems. The method has been realized as part of an automatic theorem prover programmed in LISP which takes advantage of the procedural power that LISP provides. The program has had success with cryptarithmetic problems and problems from the blocks world, and has been used as a subroutine in a plane geometry theorem prover.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6217</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Mechanical Arm Control System</title>
<link>https://hdl.handle.net/1721.1/6216</link>
<description>A Mechanical Arm Control System
Waters, Richard C.
This paper describes a proposed mechanical arm control system and some of the lines of thought which led to this design. In particular, the paper discusses the basic system required in order for the arm to control its environment and deal with error situations which arise. In addition, the paper discusses the system needed to control the motion of the arm using the computed torque drive method, and force feedback.
</description>
<pubDate>Tue, 01 Jan 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6216</guid>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Outline for Mini-Arms Based on Manipulator Technology</title>
<link>https://hdl.handle.net/1721.1/6215</link>
<description>Design Outline for Mini-Arms Based on Manipulator Technology
Flatau, Carl R.
The design of small manipulators is an art  requiring proficiency in diverse disciplines.  This paper documents some of the general  ideas illustrated by a particular design for an  arm roughly one quarter human size. The  material is divided into the following sections:  A. General design constraints. B. Features of  existing manipulator technology. C. Scaling  relationships for major arm components. D.  Design of a particular small manipulator. E.  Comments on future possibilities.
</description>
<pubDate>Tue, 01 May 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6215</guid>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal to ARPA for Research on Intelligent Automata and Micro-Automation</title>
<link>https://hdl.handle.net/1721.1/6214</link>
<description>Proposal to ARPA for Research on Intelligent Automata and Micro-Automation
Winston, P.; Horn, B.K.P.; Sussman, G.J.
The results of a decade of work in Artificial Intelligence have brought us to the threshold of a new phase of knowledge-based programming -- in which we can design computer systems that (1) react reasonably to significantly complicated situations and (2) perhaps more important for the future -- interact intelligently with their operators when they encounter limitations, bugs or insufficient information. This proposal lays out programs for bringing several such systems near to the point of useful application. These include: A physical "micro-automation" system for maintenance and repair of electronic circuits. A related "expert" problem-solving program for diagnosis and modification of electronic circuits. A set of advanced "Automatic Programming" techniques and systems for aid in developing and debugging large computer programs. Some advanced Natural Language application methods and systems for use with these and other interactive projects. A series of specific "expert" problem solvers, including Chess analysis. Steps toward a new generation of more intelligent Information Retrieval and Management Assistance systems.
</description>
<pubDate>Sat, 01 Sep 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6214</guid>
<dc:date>1973-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uses of Technology to Enhance Education</title>
<link>https://hdl.handle.net/1721.1/6213</link>
<description>Uses of Technology to Enhance Education
Papert, Seymour A.
Section 1: Schematic outline of project and what we want. Hardly any intellectual content. Section 2: Statement of our goals in general terms. This statement is intended to have serious intellectual content but lacks meaty examples. Readers who find it too abstract for comfort might like to read at least part of #3 first. Section 3: A series of extended examples intended to give more concrete substance to the generalities in #2. Section 4: This is the real "proposal". It sets out specifically a list of concrete "goals" on which we want to work in the immediate future. Appendix: Papers by Jeanne Bamberger, Marvin Minsky, Seymour Papert and Cynthia Solomon.
</description>
<pubDate>Fri, 01 Jun 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6213</guid>
<dc:date>1973-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Lightness</title>
<link>https://hdl.handle.net/1721.1/6212</link>
<description>On Lightness
Horn, Berthold K.P.
The intensity at a point in an image is the product of the reflectance at the corresponding object point and the intensity of illumination at that point. We are able to perceive lightness, a quantity closely correlated with reflectance. How then do we eliminate the component due to illumination from the image on our retina? The two components of image intensity differ in their spatial distribution. A method is presented here which takes advantage of this to compute lightness from image intensity in a layered, parallel fashion.
</description>
<pubDate>Mon, 01 Oct 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6212</guid>
<dc:date>1973-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>U.T.: Telnet Reference Manual</title>
<link>https://hdl.handle.net/1721.1/6211</link>
<description>U.T.: Telnet Reference Manual
Eastlake, Donald E.
UT is a user telnet program designed to run under the ITS time sharing system. It implements the relatively recent ARPA network negotiating protocol for telnet connections.
</description>
<pubDate>Mon, 01 Apr 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6211</guid>
<dc:date>1974-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Paterson's Worm</title>
<link>https://hdl.handle.net/1721.1/6210</link>
<description>Paterson's Worm
Beeler, Michael
A description of a mathematical idealization of the feeding pattern of a kind of worm is given.
</description>
<pubDate>Fri, 01 Jun 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6210</guid>
<dc:date>1973-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulator Design Vignettes</title>
<link>https://hdl.handle.net/1721.1/6209</link>
<description>Manipulator Design Vignettes
Minsky, Marvin
This memo is about mechanical arms. The literature on robotics seems to be deficient in such discussions, perhaps because not enough sharp theoretical problems have been formulated to attract interest. I'm sure many of these matters have been discussed in other literatures -- prosthetics, orthopedics, mechanical engineering, etc. -- and references to such discussions would be welcome. We raise these issues in the context of designing the "mini-robot" system in the A.I. Laboratory in 1972-1973. But we would like to attract the interests of the general heuristic programming community to such questions.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6209</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Heterarchical Program for Recognition of Polyhedra</title>
<link>https://hdl.handle.net/1721.1/6208</link>
<description>A Heterarchical Program for Recognition of Polyhedra
Shirai, Yoshiaki
Recognition of polyhedra by a heterarchical program is presented. The program is based on the strategy of recognizing objects step by step, at each time making use of the previous results. At each stage, the most obvious and simple assumption is made and the assumption is tested. To find a line segment, a range of search is proposed. Once a line segment is found, more of the line is determined by tracking along it. Whenever a new fact is found, the program tries to reinterpret the scene taking the obtained information into consideration. Results of the experiment using an image dissector are satisfactory for scenes containing a few blocks and wedges. Some limitations of the present program and proposals for future development are described.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6208</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Concrete Approach to Abstract Recursive Definitions</title>
<link>https://hdl.handle.net/1721.1/6207</link>
<description>A Concrete Approach to Abstract Recursive Definitions
Wand, Mitchell
We introduce a non-categorical alternative to Wagner's Abstract Recursive Definitions [Wg-1,2] using a generalization of the notion of clone called a u-clone. Our more concrete approach yields two new theorems: 1.) the free u-clone generated by a ranked set is isomorphic to the set of loop-representable flow diagrams with function symbols in the set, 2.) For every element of a u-clone there is an expression analogous to a regular expression. Several well-known theorems of language and automata theory are drawn as special cases of this theorem.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6207</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>PEEK</title>
<link>https://hdl.handle.net/1721.1/6206</link>
<description>PEEK
Eastlake, Donald E.
PEEK is a utility program designed to operate under the ITS time sharing system. It enables a user to monitor a variety of aspects of the time sharing system by providing, to the user, various periodically updated displays.
</description>
<pubDate>Fri, 01 Feb 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6206</guid>
<dc:date>1974-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>PEEK</title>
<link>https://hdl.handle.net/1721.1/6205</link>
<description>PEEK
Eastlake, Donald E.
PEEK is a utility program designed to operate under the ITS time sharing system. It enables a user to monitor a variety of aspects of the time sharing system by providing periodically updated display output or periodic printed output to teletype or line printer. Just what information is being presented to the user is controlled by PEEK's information mode. The available modes are listed in section 3 below. Section 5 describes how PEEK determines which device to output on. Section 2 describes, in general, how the user can input commands to PEEK.
</description>
<pubDate>Tue, 01 May 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6205</guid>
<dc:date>1973-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Conniver Reference Manual</title>
<link>https://hdl.handle.net/1721.1/6204</link>
<description>The Conniver Reference Manual
McDermott, Drew V.; Sussman, Gerald Jay
This manual is an introduction and reference to the latest version of the Conniver programming language, an AI language with general control and data-base structures.
Updated January 1974
</description>
<pubDate>Mon, 01 May 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6204</guid>
<dc:date>1972-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Conniver Reference Manual</title>
<link>https://hdl.handle.net/1721.1/6203</link>
<description>The Conniver Reference Manual
McDermott, Drew V.; Sussman, Gerald Jay
This manual is intended to be a guide to the philosophy and use of the programming language CONNIVER, which is "complete" and running at the AI Lab now. It assumes good knowledge of LISP, but no knowledge of Micro-Planner, in whose implementation many design decisions were made that are not expected to have consequences in CONNIVER. Those not familiar with LISP should consult Weissman's (1967) Primer, the LISP 1.5 Programmer's Manual (McCarthy et al., 1962), or Jon L. White's (1970) and other (PDP-6, 1967) excellent memos here at our own lab.
</description>
<pubDate>Mon, 01 May 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6203</guid>
<dc:date>1972-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Two Counter Machine Cannot Calculate 2^N</title>
<link>https://hdl.handle.net/1721.1/6202</link>
<description>A Two Counter Machine Cannot Calculate 2^N
Schroeppel, Rich
This note proves that a two counter machine cannot calculate 2^N.
</description>
<pubDate>Mon, 01 May 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6202</guid>
<dc:date>1972-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficiency of Equivalence Algorithms</title>
<link>https://hdl.handle.net/1721.1/6201</link>
<description>Efficiency of Equivalence Algorithms
Fischer, Michael J.
This paper was first presented at the Symposium on Complexity of Computer Computations, IBM Thomas J. Watson Research Center, Yorktown Heights, New York, on March 22, 1972. The equivalence problem is to determine the finest partition on a set that is consistent with a sequence of assertions of the form "x == y". A strategy for doing this on a computer processes the assertions serially, maintaining always in storage a representation of the partition defined by the assertions so far encountered. To process the command "x == y", the equivalence classes of x and y are determined. If they are the same, nothing further is done; otherwise the two classes are merged together. Galler and Fischer (1964A) give an algorithm for solving this problem based on tree structures, and it also appears in Knuth (1968A). The items in each equivalence class are arranged in a tree, and each item except for the root contains a pointer to its father. The root contains a flag indicating that it is a root, and it may also contain other information relevant to the equivalence class as a whole.
</description>
<pubDate>Sat, 01 Apr 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6201</guid>
<dc:date>1972-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Why Conniving is Better than Planning</title>
<link>https://hdl.handle.net/1721.1/6200</link>
<description>Why Conniving is Better than Planning
Sussman, Gerald Jay; McDermott, Drew Vincent
This paper is a critique of a computer programming language, Carl Hewitt's PLANNER, a formalism designed especially to cope with the problems that Artificial Intelligence encounters. It is our contention that the backtrack control structure that is the backbone of PLANNER is more of a hindrance than a help: in particular, automatic backtracking encourages inefficient algorithms, conceals what is happening from the user, and misleads him with primitives having powerful names whose power is only superficial. An alternative, a programming language called CONNIVER which avoids these problems, is presented from the point of view of this critique.
</description>
<pubDate>Sat, 01 Apr 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6200</guid>
<dc:date>1972-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>NIM: A Game-Playing Program</title>
<link>https://hdl.handle.net/1721.1/6199</link>
<description>NIM: A Game-Playing Program
Papert, Seymour A.; Solomon, Cynthia
This note illustrates some ideas about how to initiate beginning students into the art of planning and writing a program complex enough to be considered a project rather than an exercise on using the language or simple programming ideas. The project is to write a program to play a simple game ("one-pile NIM" or "21") as invincibly as possible. We developed the project for a class of seventh-grade children we taught in 1968-69 at the Muzzey Junior High School in Lexington, Massachusetts. This was the longest programming project these children had encountered, and our intention was to give them a model of how to go about working under these conditions.
</description>
<pubDate>Thu, 01 Jan 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6199</guid>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computer-Controlled Oculometer: A Prototype Interactive Eye Movement Tracking System</title>
<link>https://hdl.handle.net/1721.1/6198</link>
<description>The Computer-Controlled Oculometer: A Prototype Interactive Eye Movement Tracking System
Hillsman, Matthew J.; Williams, R. Wade; Roe, John S.
One kind of eye movement tracking device which has great potential is the digital computer-controlled Oculometer, an instrument which non-invasively measures point of regard of the subject, as well as pupil diameter and blink occurrence. In conjunction with a computer-generated display which can change in real time as a function of the subject's eye motions, the computer-controlled Oculometer makes possible a variety of interactive measurement and control systems. Practical applications of such schemes have had to await the development of an instrument design which does not inconvenience the subject, and which conveniently interfaces with a digital computer (see ref. 1). This report describes an Oculometer subsystem and an eye-tracking/control program designed for use with the PDP-6 computer of the MIT Project MAC Artificial Intelligence Group. The oculometer electro-optic subsystem utilizes near-infrared light reflected specularly off the front surface of the subject's cornea and diffusely off the retina, producing a bright pupil with an overriding corneal highlight. An electro-optic scanning aperture vidissector within the unit, driven by a digital eye-tracking algorithm programmed into the PDP-6 computer, detects and tracks the centers of the corneal highlight and the bright pupil to give eye movement measurements. A computer-controlled, moving mirror head motion tracker directly coupled to the vidissector tracker permits the subject reasonable freedom of movement. Various applications of this system, which are suggested by the work reported here, include: (a) using the eye as a control device, (b) recording eye fixation and exploring patterns, (c) game playing, (d) training machines, and (e) psychophysiological testing and recording.
</description>
<pubDate>Tue, 01 Sep 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6198</guid>
<dc:date>1970-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mini-Robot Proposal to ARPA</title>
<link>https://hdl.handle.net/1721.1/6197</link>
<description>Mini-Robot Proposal to ARPA
Minsky, Marvin
During the next decade it will become practical to use more and more sophisticated techniques of automation -- we shall call this "robotics" -- both in established industries and in new areas. The rate at which these techniques become available will depend very much on the way research programs are organized to pursue them. The issues involved are rather large and touch not only on technical matters but also on aspects of national economic policy and attitudes toward world trade positions. The project herein proposed is concerned with the development of two particular aspects of robotics, namely: 1) development of a miniature hand-eye system; 2) development of remote, ARPA-NETWORK style operation of robotic systems, in which simple jobs are handled locally while more complex computations are done on a larger scale.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6197</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planner Implementation Proposal to ARPA 1972-1973</title>
<link>https://hdl.handle.net/1721.1/6196</link>
<description>Planner Implementation Proposal to ARPA 1972-1973
Hewitt, Carl
The task objective is the generalization and implementation of the full power of the problem solving formalism PLANNER in the next two years. We will show how problem solving knowledge can be effectively incorporated into the formalism. Several domains will be explored to demonstrate how PLANNER enhances problem solving.
</description>
<pubDate>Wed, 01 Dec 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6196</guid>
<dc:date>1971-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>11SIM Reference Manual</title>
<link>https://hdl.handle.net/1721.1/6195</link>
<description>11SIM Reference Manual
Eastlake, Donald E.
A program that simulates a Digital Equipment Corporation PDP-11 computer and many of its peripherals on the AI Laboratory Time Sharing System (ITS) is described from a user's reference point of view. This simulator has a built-in DDT-like command level which provides the user not only with the normal range of DDT facilities but also with several special debugging features built into the simulator. The DDT command language was implemented by Richard M. Stallman, while the simulator was written by the author of this memo.
</description>
<pubDate>Tue, 01 Feb 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6195</guid>
<dc:date>1972-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>ITS Status Report</title>
<link>https://hdl.handle.net/1721.1/6194</link>
<description>ITS Status Report
Eastlake, Donald E.
ITS is a time-shared operating system designed for the Artificial Intelligence Laboratory DEC PDP-10/PDP-6 installation and tailored to its special requirements. This status report describes the design philosophy behind the ITS system, the hardware and software facilities of the system implemented with this philosophy, and some information on work currently in progress or desirable in the near future.
</description>
<pubDate>Sat, 01 Apr 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6194</guid>
<dc:date>1972-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Inquiry into Algorithmic Complexity</title>
<link>https://hdl.handle.net/1721.1/6193</link>
<description>An Inquiry into Algorithmic Complexity
ONeil, Patrick E.
This is the first section in a proposed monograph on algorithmic complexity theory. Future sections shall include: Information Theory as a Proof Technique; Algorithms Using Linear Form Inequalities; Some Probabilistic Analyses of Algorithms; etc. Comments, suggestions and corrections are welcomed. Please let me know what you think. This is not a limited-distribution document, although I may wish to publish it later. Anyone who develops an idea based on this work to a more advanced state is welcome to publish first. I would be very eager to see any such result as soon as possible.
</description>
<pubDate>Wed, 01 Sep 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6193</guid>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Information Theory and the Game of JOTTO</title>
<link>https://hdl.handle.net/1721.1/6192</link>
<description>Information Theory and the Game of JOTTO
Beeler, Michael
The word game JOTTO has attracted the interest of several computer programmers over the years, not to mention countless devoted players. The rules are: 1.) Each of 2 players selects a 5-letter English word, or a proper noun, as his "secret word." 2.) Play consists of alternate turns of naming a "test word," whose constraints are the same as on the secret words, and the opponent answering how close the test word is to his secret word. 3.) Closeness is measured in jots: each jot is a one-to-one letter match, and is independent of which word is the test word. GLASS versus SMILE or SISSY is 2 jots. 4.) The first player to guess his opponent's secret word wins.  Constraints on a JOTTO program are: First, it must have a dictionary of all possible words at the outset of each game. (The modification of adding newly experienced words to its dictionary is trivial in practice and not worth the programming effort, especially since one wants to avoid adding word-like typing errors, etc.) The (unacceptable) alternative is to have a letter-deducing algorithm and then a "word-proposer" to order the 5 factorial = 120 combinations (perhaps based on digram frequencies and vowel constraints) once all 5 letters are found. Second, the most use the program can make of the jots from a given test word is to eliminate from its list of "possible secret words of opponent" all those which do not have that number of jots against that test word. Hence, each test word should be chosen to maximize the expected information derived.
</description>
<pubDate>Sun, 01 Aug 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6192</guid>
<dc:date>1971-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Proofs of Limit Theorems</title>
<link>https://hdl.handle.net/1721.1/6191</link>
<description>Computer Proofs of Limit Theorems
Bledsoe, W.W.; Boyer, Robert S.; Henneman, William H.
In this paper we describe some relatively simple changes that have been made to an existing automatic theorem proving program to enable it to prove efficiently a number of the limit theorems of elementary calculus. These changes include subroutines of a general nature which apply to all areas of analysis, and a special "limit-heuristic" designed for the limit theorems of calculus.
</description>
<pubDate>Tue, 01 Jun 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6191</guid>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theories, Pre-Theories and Finite State Transformations on Trees</title>
<link>https://hdl.handle.net/1721.1/6190</link>
<description>Theories, Pre-Theories and Finite State Transformations on Trees
Wand, Mitchell
The closure of an algebra is defined as a generalization of the semigroup of a finite automaton. Pretheories are defined as a subclass of the closed algebras, and the relationship between pretheories and the algebraic theories of Lawvere [1963] is explored. Finally, pretheories are applied to the characterization problem of finite state transformations on trees, solving an open problem of Thatcher [1969].
</description>
<pubDate>Sat, 01 May 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6190</guid>
<dc:date>1971-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instant TJ6. How to Get the System to Type Your Papers</title>
<link>https://hdl.handle.net/1721.1/6189</link>
<description>Instant TJ6. How to Get the System to Type Your Papers
Dowson, Mark
TJ6 is a program that takes disk files of text and arranges them so that they can be printed out neatly on 8 1/2 by 11 paper, lines justified, pages numbered, and so on. So that TJ6 will know what to do, you must insert instructions to it in your file. AI Memo No. 164A fully describes TJ6 and lists all the instructions available. This note describes a useful subset of the instructions to get you started.
</description>
<pubDate>Wed, 01 Sep 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6189</guid>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linking Loader for MIDAS</title>
<link>https://hdl.handle.net/1721.1/6188</link>
<description>Linking Loader for MIDAS
Samson, Peter
This memo was originally printed as MAC Memo 268, January 31, 1966. The MIDAS Linking Loader is a PDP-6 program to load relocatable-format output from the MIDAS assembler, with facilities to handle symbolic cross-references between independently assembled programs. Although it is arranged primarily to load from DECtape, the loader is able to load paper-tape relocatable programs.
</description>
<pubDate>Mon, 01 Mar 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6188</guid>
<dc:date>1971-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computer as a Performing Instrument</title>
<link>https://hdl.handle.net/1721.1/6187</link>
<description>The Computer as a Performing Instrument
Mumma, Gordon; Smoliar, Stephen
This memo was originally presented as a Project MAC seminar on February 20, 1970. From the outset, the computer has established two potential roles in the musical arts--the one as a sound synthesizer and the other as a composer (or composer's assistant). The most important developments in synthesis have been due to Max Mathews at the Bell Telephone Laboratories [7]. His Music V system endows a computer with most of the capabilities of the standard hardware of electronic music. Its primary advantage is that the user may specify arbitrarily complex sound sequences and achieve them with a minimum of editing effort. Its primary disadvantage is that it is not on-line, so that the user loses that critical sense of immediacy which he, as a composer, may deem valuable.
</description>
<pubDate>Mon, 01 Feb 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6187</guid>
<dc:date>1971-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Equivalence Problems in a Model of Computation</title>
<link>https://hdl.handle.net/1721.1/6186</link>
<description>Equivalence Problems in a Model of Computation
Paterson, Michael Stewart
A central problem in the mathematical theory of computers and computation is to find a suitable framework for expressing the execution of a computer program by a computer. Within the framework we want to be able to provide answers to such questions as: (1) Does a certain program perform a certain task? (2) Are two programs equivalent, i.e., do they perform the same task? (3) Under what conditions, if at all, will a program fail to halt? (4) How can a given program be simplified, in some sense, or made more efficient? These kinds of questions are customarily answered by experienced intuition for simple programs, supplemented by trial and, often, error for more complicated ones. We should like to replace such methods by a formalizable procedure, capable of being carried out by a computer program.
Issued November 1970
</description>
<pubDate>Tue, 01 Aug 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6186</guid>
<dc:date>1967-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A User's Guide to the A.I. Group LISCOM LISP Compiler: Interim Report</title>
<link>https://hdl.handle.net/1721.1/6185</link>
<description>A User's Guide to the A.I. Group LISCOM LISP Compiler: Interim Report
Golden, Jeffrey P.
The LISCOM version of the AI group PDP-6 LISP compiler is a descendant of the original Greenblatt-Nelson compiler, and is a friendly sibling to the COMPLR version maintained by Jon L. White. The compiler operates in two passes to translate LISP code into LAP code. The first pass performs a general study of the S-expression function definition which is to be compiled, producing as output a modified S-expression and various tables attached to free variables. The second pass does the actual compilation (generation of assembly code), making use of the transformations performed and the information gathered by the first pass.  The LISCOM version of the compiler is being used as a vehicle for the implementation of "fast arithmetic" in LISP. This work is being done under the auspices of the MATHLAB project of the AI Laboratory. The early stages of the compiler implementation were handled by W. Diffie, and the work has been continued by the present author.
</description>
<pubDate>Tue, 01 Dec 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6185</guid>
<dc:date>1970-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Micro-Planner Reference Manual (Update)</title>
<link>https://hdl.handle.net/1721.1/6184</link>
<description>Micro-Planner Reference Manual (Update)
Sussman, Gerald Jay; Winograd, Terry; Charniak, Eugene
This is a manual for the use of the Micro Planner interpreter, which implements a subset of Carl Hewitt's language PLANNER, and is now available for use by the Artificial Intelligence Group.
</description>
<pubDate>Wed, 01 Dec 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6184</guid>
<dc:date>1971-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Display Functions in LISP</title>
<link>https://hdl.handle.net/1721.1/6183</link>
<description>Display Functions in LISP
Binford, Thomas O.
This note describes a system which compiles various forms of LISP lists and arrays into display commands for the DEC 340 display, and provides supporting functions for scaling, for moving elements in a display, for pot control of certain displays, and for adding elements to and removing elements from the display.
</description>
<pubDate>Sun, 01 Mar 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6183</guid>
<dc:date>1970-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>PROGRAMMER: A Language for Writing Grammars</title>
<link>https://hdl.handle.net/1721.1/6182</link>
<description>PROGRAMMER: A Language for Writing Grammars
Winograd, Terry
This memo describes PROGRAMMER, a parser for natural language. It consists of a language for writing grammars in the form of programs, and an interpreter which can use these grammars to parse sentences. PROGRAMMER is one part of an integrated system being written for the computer comprehension of natural language. The system will carry on a discourse in English, accepting data statements, answering questions, and carrying out commands. It has a verbally integrated structure, to perform parsing, semantic analysis, and deduction concurrently, and to use the results of each to guide the course of the entire process. This interaction is possible because all three aspects are written in the form of programs. This will allow the system to make full use of its "intelligence" (including non-linguistic knowledge about the subject being discussed) in interpreting the meaning of sentences.
</description>
<pubDate>Sat, 01 Nov 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6182</guid>
<dc:date>1969-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Integration of a Class of Special Functions with the Risch Algorithm</title>
<link>https://hdl.handle.net/1721.1/6181</link>
<description>The Integration of a Class of Special Functions with the Risch Algorithm
Moses, Joel
We indicate how to extend the Risch algorithm to handle a class of special functions defined in terms of integrals. Most of the integration machinery for this class of functions is similar to the machinery in the algorithm which handles logarithms. A program embodying much of the extended integration algorithm has been written. It was used to check a table of integrals and it succeeded in finding some misprints in it.
</description>
<pubDate>Mon, 01 Sep 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6181</guid>
<dc:date>1969-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Arithmetic-Statement Pseudo-Ops: .I and .F</title>
<link>https://hdl.handle.net/1721.1/6180</link>
<description>The Arithmetic-Statement Pseudo-Ops: .I and .F
Horn, B.K.P.
This is a feature of MIDAS which facilitates the rapid writing and debugging of programs involving much numerical calculation. The statements used are ALGOL-like and easy to interpret.
</description>
<pubDate>Fri, 01 Aug 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6180</guid>
<dc:date>1969-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preprocessor for Programs which Recognize Scenes</title>
<link>https://hdl.handle.net/1721.1/6179</link>
<description>Preprocessor for Programs which Recognize Scenes
Mahabala, H.N.
A visual scene is transformed from a very simple and convenient format to an internal format which describes the same scene, but is more amenable to complex manipulations. This format is compatible with programs like "SEE". The entire analysis is done using a basic primitive which gives the orientation of a point with respect to a directed line. A novel handling of inaccuracies in the scene is achieved by considering the lines to be stripes of small but negligible width. The criterion is very general and easy to modify.
</description>
<pubDate>Fri, 01 Aug 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6179</guid>
<dc:date>1969-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering Good Regions for Teitelman's Character Recognition Scheme</title>
<link>https://hdl.handle.net/1721.1/6178</link>
<description>Discovering Good Regions for Teitelman's Character Recognition Scheme
Winston, Patrick
Warren Teitelman presented a novel scheme for real-time character recognition in his master's thesis submitted in June of 1963. A rectangle, in which a character is to be drawn, is divided into two parts, one shaded and the other unshaded. Using this division a computer converts characters into ternary vectors in the following way. If a pen enters the shaded region, a 1 is added to the vector. When the unshaded region is entered, a 0 is appended. Figure 1 illustrates the basic idea he used. Thus, with the shading shown, the character V is converted to 1 0 x 1 0.* A V drawn without lifting the pen would yield a 1 0 1. A t gives 1 0 w 1, and so on. Notice that each character may yield several vectors, depending upon the style of the user as well as the division of the rectangle into shaded and unshaded regions. In order to conserve storage space and reduce search time, the character vectors of Teitelman's scheme are stored in a tree-like structure like that shown in figure 2.
</description>
<pubDate>Thu, 01 May 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6178</guid>
<dc:date>1969-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Optimum Recognition Error and Reject Tradeoff</title>
<link>https://hdl.handle.net/1721.1/6177</link>
<description>On Optimum Recognition Error and Reject Tradeoff
Chow, C.K.
The performance of a pattern recognition system is characterized by its error and reject tradeoff. This paper describes an optimum rejection rule and presents a general relation between the error and reject probabilities and some simple properties of the tradeoff in the optimum recognition system. The error rate can be directly evaluated from the reject function. Some practical implications of the results are discussed. Examples in normal distributions and uniform distributions are given.
</description>
<pubDate>Tue, 01 Apr 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6177</guid>
<dc:date>1969-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Greenblatt Chess Program</title>
<link>https://hdl.handle.net/1721.1/6176</link>
<description>The Greenblatt Chess Program
Greenblatt, Richard D.; Eastlake, Donald E., III; Crocker, Stephen D.
Since mid-November 1966 a chess program has been under development at the Artificial Intelligence Laboratory of Project MAC at M.I.T. This paper describes the state of the program as of August 1967 and gives some of the details of the heuristics and algorithms employed.
</description>
<pubDate>Tue, 01 Apr 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6176</guid>
<dc:date>1969-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Heuristic Program that Constructs Decision Trees</title>
<link>https://hdl.handle.net/1721.1/6175</link>
<description>A Heuristic Program that Constructs Decision Trees
Winston, Patrick
Suppose there is a set of objects, {A, B,...,E}, and a set of tests, {T1, T2,...,TN}. When a test is applied to an object, the result is either T or F. Assume the tests may vary in cost and the objects may vary in probability of occurrence. One then hopes that an unknown object may be identified by applying a sequence of tests. The appropriate test at any point in the sequence in general should depend on the results of previous tests. The problem is to construct a good test scheme using the test costs, the probabilities of occurrence, and a table of test outcomes.
</description>
<pubDate>Sat, 01 Mar 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6175</guid>
<dc:date>1969-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Utility Functions</title>
<link>https://hdl.handle.net/1721.1/6174</link>
<description>Robot Utility Functions
Nelson, Stewart; Levitt, Michael
This document describes a set of routines which have been provided at both the monitor and user level to facilitate the following operations: 1) Vidissector input; 2) Pot Box input; 3) Arm motion; and 4) Display list generation. This program was developed under contract with Systems Concepts, Incorporated.
</description>
<pubDate>Sat, 01 Feb 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6174</guid>
<dc:date>1969-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decomposition of a Visual Scene into Three-Dimensional Bodies</title>
<link>https://hdl.handle.net/1721.1/6173</link>
<description>Decomposition of a Visual Scene into Three-Dimensional Bodies
Guzman, Adolfo
The program described here takes as its input a collection of lines, vertices and surfaces describing a scene, and analyzes the scene into a composition of three-dimensional objects. The program does not need to know the form (model, or pattern) of the objects which are likely to appear: the scene is not searched for cubes, wedges, or houses, with an a-priori knowledge of the form of these objects; rather, the program pays attention to configurations of surfaces and lines which would make plausible three-dimensional solids, and in this way "bodies" are identified. Partially occluded bodies are handled correctly. The program is restricted to scenes formed by straight lines, where no shadows or noise are present. It has been tested on rather complicated scenes composed of rather simple objects. Examples are given.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6173</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>WIRElist</title>
<link>https://hdl.handle.net/1721.1/6172</link>
<description>WIRElist
Holloway, John
This memo describes a design aid used for the automatic production of wirelists for machine or hand wiring of wire-cards.
</description>
<pubDate>Wed, 01 Jan 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6172</guid>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>PLANNER: A Language for Manipulating Models and Proving Theorems in a Robot</title>
<link>https://hdl.handle.net/1721.1/6171</link>
<description>PLANNER: A Language for Manipulating Models and Proving Theorems in a Robot
Hewitt, Carl
PLANNER is a language for proving theorems and manipulating models in a robot. The language is built out of a number of problem-solving primitives together with a hierarchical control structure. Statements can be asserted and perhaps later withdrawn as the state of the world changes. Conclusions can be drawn from these various changes in state. Goals can be established and dismissed when they are satisfied. The deductive system of PLANNER is subordinate to the hierarchical control structure in order to make the language efficient. The use of a general-purpose matching language makes the deductive system more powerful. The language is being applied to solve problems faced by a robot and as a semantic base for English.
Revised
</description>
<pubDate>Sat, 01 Aug 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6171</guid>
<dc:date>1970-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linear Separation and Learning</title>
<link>https://hdl.handle.net/1721.1/6170</link>
<description>Linear Separation and Learning
Minsky, Marvin; Papert, Seymour A.
This is a reprint of page proofs of Chapter 12 of Perceptrons, M. Minsky and S. Papert, MIT Press 1968 (we hope). It replaces A.I. Memo No. 156, dated March 1968.  The perceptron and convergence theorems of Chapter 11 are related to many other procedures that are studied in an extensive and disorderly literature under such titles as LEARNING MACHINES, MODELS OF LEARNING, INFORMATION RETRIEVAL, STATISTICAL DECISION THEORY, PATTERN RECOGNITION and many more. In this chapter we will study a few of these to indicate points of contact with the perceptron and to reveal deep differences. We can give neither a fully rigorous account nor a unifying theory of these topics: this would go as far beyond our knowledge as beyond the scope of this book. The chapter is written more in the spirit of inciting students to research than of offering solutions to problems.
</description>
<pubDate>Tue, 01 Oct 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6170</guid>
<dc:date>1968-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition of Topological Invariants by Modular Arrays</title>
<link>https://hdl.handle.net/1721.1/6169</link>
<description>Recognition of Topological Invariants by Modular Arrays
Beyer, Terry
In this paper we study recognition of topological invariant properties of patterns by use of finite, rectangular, 2-dimensional, iterative arrays of finite state automata (hereafter called modular arrays). The use of modular arrays as pattern recognition devices has been studied by Atrubin [1] and by Unger [2]. Our aim is to show that modular arrays can not only recognize a large variety of topological invariants, but can do so in times that are almost minimal for a certain class of machines. We begin by describing our model of the modular array as a pattern recognition device. Next, we introduce a fundamental transformation of patterns and prove several interesting properties of the transformation. Finally, we apply the transformation to modular arrays to obtain fast methods of recognizing a wide variety of topological invariants.
</description>
<pubDate>Sun, 01 Sep 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6169</guid>
<dc:date>1968-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Description and Control of Manipulation by Computer-Controlled Arm</title>
<link>https://hdl.handle.net/1721.1/6168</link>
<description>Description and Control of Manipulation by Computer-Controlled Arm
Gresser, Jean-Yves
The immediate purpose of the research on Intelligent Automata is to have an autonomous machine able to understand uncomplicated commands and to manipulate simple objects without human intervention. This thesis is concerned with the programming of a special output device of the present machine existing at Project MAC: an arm with eight degrees of freedom, made of four identical segments. Classical approaches through hill-climbing and optimal control techniques are discussed. However, a new method is proposed to decompose the problem, in an eight-dimensional space, into a sequence of subproblems in spaces with fewer dimensions. Each subproblem can then be solved with simple analytical geometry. A simulation program, which applies this method, is able to propose several configurations for a given goal (expressed as a point in a five-dimensional space).
</description>
<pubDate>Sun, 01 Sep 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6168</guid>
<dc:date>1968-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Text-Justifier TJ6</title>
<link>https://hdl.handle.net/1721.1/6167</link>
<description>The Text-Justifier TJ6
Greenblatt, R.; Horn, B.K.P.; Krakauer, Lawrence J.
This memo describes the TJ6 type justifying program, which can be used in the production of memos, such as this one. In addition, Appendices 1, 2, and 3 of this memo contain related information about TECO, the "Selectric" and the type 37 teletype, thus gathering most of the information needed for producing write-ups into one location. A sample of input to TJ6 is given in section IV and is in fact the very input used to produce this page of output.  The output from TJ6 may be either justified text, with the right margin exactly aligned, as in this introduction, or it may be "filled" text, with the right margin only approximately aligned. The remainder of this memo will be justified.  The sections of this memo are: Introduction, Using TJ6, Console operation of TJ6 and Sample TJ6 input. Appendix 1 relates to inserting lower case letters into the TECO buffer, Appendix 2 relates to the "Selectric" output device, and Appendix 3 is how to use a type 37 Teletype.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6167</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Producing Memos, Using TJ6, TECO and the Type 37 Teletype</title>
<link>https://hdl.handle.net/1721.1/6166</link>
<description>Producing Memos, Using TJ6, TECO and the Type 37 Teletype
Krakauer, Lawrence J.
This memo describes the TJ6 type justifying program, which can be used in the production of memos, such as this one. In addition, sections III and IV of this memo contain related information about TECO and the type 37 teletype, thus gathering most of the information needed for producing write-ups into one location. A sample of input to TJ6 is given in section V, and is in fact the very input used to produce this page of output.  The output from TJ6 may be either justified text, with the right margin exactly aligned, as in this introduction, or it may be "filled" text, with the right margin only approximately aligned.  Since I do not personally like the appearance of justified text, the remainder of this memo will not be justified, but this decision, of course, rests with each particular user. The sections of this report are: Introduction, Using TJ6, Inserting lower case letters into the TECO buffer, How to use a type 37 teletype, and Sample TJ6 input.
</description>
<pubDate>Sun, 01 Sep 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6166</guid>
<dc:date>1968-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>ITS 1.5 Reference Manual</title>
<link>https://hdl.handle.net/1721.1/6165</link>
<description>ITS 1.5 Reference Manual
Eastlake, D.; Greenblatt, R.; Holloway, J.; Knight, T.; Nelson, S.
This reference manual consists of two parts. The first (sections 1 through 6) is intended for those who are either interested in the ITS 1.5 time sharing monitor for its own sake or who wish to write machine language programs to run under it. Some knowledge of PDP-6 (or PDP-10) machine language is useful in reading this part. The second part (sections 7, 8, and 9) describes three programs that run under ITS. The first program (DDT) is a modified machine language debugging program that also replaces the "monitor command" level (where the user is typing directly at the monitor) present in most time-sharing systems. The remaining two (PEEK and LOCK) are a status display and a miscellaneous utility program. It should be remembered that the McCulloch Laboratory PDP-6 and PDP-10 installation is undergoing continuous software and hardware development which may rapidly outdate this manual.
</description>
<pubDate>Tue, 01 Jul 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6165</guid>
<dc:date>1969-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical Solution of Elliptic Boundary Value Problems by Spline Functions</title>
<link>https://hdl.handle.net/1721.1/6164</link>
<description>Numerical Solution of Elliptic Boundary Value Problems by Spline Functions
Shah, Jayant M.
A numerical method for solving linear, two-dimensional elliptic boundary value problems is presented. The method is essentially the Ritz procedure, which uses polynomial spline functions to approximate the exact solution. The spline functions are constructed by defining a polynomial function over each of a set of disjoint subdomains and imposing certain compatibility conditions along common boundaries between subdomains. The main advantage of the method is that it does not even require the continuity of the spline functions across the boundaries between subdomains. Therefore it is easy to construct classes of spline functions which will produce any specified rate of convergence.
</description>
<pubDate>Mon, 01 Apr 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6164</guid>
<dc:date>1968-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>SARGE: A Program for Drilling Students in Freshman Calculus Integration Problems</title>
<link>https://hdl.handle.net/1721.1/6163</link>
<description>SARGE: A Program for Drilling Students in Freshman Calculus Integration Problems
Moses, Joel
The SARGE program is a prototype of a program which is intended to be used as an adjunct to regular classroom work in freshman calculus. Using SARGE, students can type their step-by-step solution to an indefinite integration problem, and can have the correctness of their solution determined by the system. The syntax for these steps comes quite close to normal mathematical notation, given the limitations of typewriter input. The method of solution is pretty much unrestricted as long as no mistakes are made along the way. If a mistake is made, SARGE will catch it and yield an error message. The student may modify the incorrect step, or he may ask the program for advice on how the mistake arose by typing "help". At present the program is weak in generating explanations for mistakes. Sometimes the "help" mechanism will just yield a response which will indicate the way in which the erroneous step can be corrected. In order to improve the explanation mechanism one would need a sophisticated analysis of students' solutions to homework or quiz problems. Experience with the behavior of students with SARGE, which is nil at present, should also help in accomplishing this goal. SARGE is available as SARGE SAVED in T302 2517.
</description>
<pubDate>Fri, 01 Mar 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6163</guid>
<dc:date>1968-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Sharing LISP for the PDP-6</title>
<link>https://hdl.handle.net/1721.1/6162</link>
<description>Time-Sharing LISP for the PDP-6
White, John
This memo, written in the style and convention of A.I. memo No. 116A, may be considered an addendum thereto. It should prove to be a welcome updating on the LISP system.
</description>
<pubDate>Fri, 01 Mar 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6162</guid>
<dc:date>1968-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linear Decision and Learning Models</title>
<link>https://hdl.handle.net/1721.1/6161</link>
<description>Linear Decision and Learning Models
Minsky, Marvin L.
This memorandum is a first draft of an essay on the simplest "learning" process. Comments are invited. Subsequent sections will treat, among other things: the "stimulus-sampling" model of Estes, relations between Perceptron-type error reinforcement and Bayesian-type correlation reinforcement, and some other statistical methods viewed in the same way.
</description>
<pubDate>Fri, 01 Mar 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6161</guid>
<dc:date>1968-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Left to Right then Right to Left Parsing Algorithm</title>
<link>https://hdl.handle.net/1721.1/6160</link>
<description>A Left to Right then Right to Left Parsing Algorithm
Martin, William A.
Determination of the minimum resources required to parse a language generated by a given context free grammar is an intriguing and as yet unsolved problem. It seems plausible that any unambiguous context free grammar could be parsed in time proportional to the length, n, of each input string. Earley (2) has presented an algorithm which parses "many" grammars in time proportional to n, but requires time proportional to n² on some. His work is an extension of Knuth's method. Knuth's method fails when more than one alternative must be examined by a push-down automaton making a left to right scan of the input string. Earley's extension takes all possible alternatives simultaneously, without duplication of effort, at any given step. The method presented here continues left to right through the string in order to gain information for a right to left pass, which is made on the symbols accumulated on the stack of the automaton. The algorithm is probably more efficient than Earley's on certain grammars; it will fail completely on others. The essential idea may be interesting to those attacking the general problem.
</description>
<pubDate>Thu, 01 Feb 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6160</guid>
<dc:date>1968-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>REEX: A CONVERT Program to Realize the McNaughton-Yamada Analysis Algorithm</title>
<link>https://hdl.handle.net/1721.1/6159</link>
<description>REEX: A CONVERT Program to Realize the McNaughton-Yamada Analysis Algorithm
McIntosh, Harold V.
REEX is a CONVERT program, realized in the CTSS-LISP of Project MAC, for carrying out the McNaughton-Yamada analysis algorithm, whereby a regular expression is found describing the words accepted by a finite state machine whose transition table is given. Unmodified, the algorithm will produce 4^n terms representing an n-state machine. This number could be reduced by eliminating duplicate calculations and by rejecting on a high level expressions corresponding to no possible path in the state diagram. The remaining expressions present a serious simplification problem, since empty expressions and null words are generated liberally by the algorithm. REEX treats only the third of these problems, and at that makes simplifications mainly oriented toward removing null words, empty expressions, and expressions of the form XuX*, AuB*A, and others closely similar. REEX is primarily useful for understanding the algorithm, but hardly usable for machines with six or more states.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6159</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>PDP-6 IAP</title>
<link>https://hdl.handle.net/1721.1/6158</link>
<description>PDP-6 IAP
White, John L.
LAP is a LISP FEXPR (or FSUBR when compiled) which is executed primarily for its side effect, namely assembling a symbolic listing into core as a machine language subroutine. As such, it is about the most convenient and rapid way for a LISP user to add machine language primitives to the LISP system, especially if the functions in question are in a developmental stage and are reasonably small (e.g. 1-500 instructions). Also, the LISP compiler currently gives its results as a file of LAP code, which may then be loaded into core by IAP. Virtually any function definition, whether by DEFPROP, LABEL, or LAP, is an extension of LISP's primitives; and as in any actual programming language, the side-effects and global interactions are often of primary importance. Because of this, and because of the inherently broader range of machine instructions and data formats, a function quite easily described and written in PDP-6 machine language may accomplish what is only most painfully and artificially written in LISP. One must, then, consider the total amount of code in each language to accomplish a given task, the amount of commentary necessary to clarify the intent of the task given the program (in this sense, LISP code rates very high; a major benefit of the confines of LISP is that a good program serves as its own comment, and usually needs no further elucidation), and other considerations of programming convenience.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6158</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional Abstraction in LISP and PLANNER</title>
<link>https://hdl.handle.net/1721.1/6157</link>
<description>Functional Abstraction in LISP and PLANNER
Hewitt, Carl
Presented here is part of the graduate work that I am doing in the much broader area of protocol analysis (see A.I. memo 137). The goal of functional abstraction is to find a procedure that satisfies a given set of fragmentary protocols. Thus functional abstraction is the inverse operation to taking a set of protocols of a routine. The basic technique in functional abstraction (which we shall call IMAGE) is to find a minimal homomorphic image of a set of fragmentary protocols. It is interesting to note that the technique of finding a minimal homomorphic image is the same one used to compute the schematized goal tree in A.I. memo 137. We define (a less than b) to mean that a is erased and b is written in its place. We shall use (a:b) to mean that the value of b is a.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6157</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CGRU and CONG: CONVERT and LISP Programs to Find the Congruence Relations of a Finite State Machine</title>
<link>https://hdl.handle.net/1721.1/6156</link>
<description>CGRU and CONG: CONVERT and LISP Programs to Find the Congruence Relations of a Finite State Machine
McIntosh, Harold V.
CGRU is a CONVERT program, CONG its literal transcription into LISP, realized in the CTSS LISP of Project MAC, for finding all the congruence relations of a finite state machine whose transition table is given as an argument. Central to both programs is the hull construction, which forms the smallest congruence relation containing a given relation. This is done by examining all pairs of equivalent elements to see if their images are equivalent. Otherwise the image classes are joined and the calculation repeated. With the hull program, one starts with the identity relation and proceeds by joining pairs of congruence classes in previously found partitions, forming the hull in order to see if he may produce a new partition. The process terminates when all such extensions have been tried without producing any new relations.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6156</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>REC/8: A CONVERT Compiler of REC for the PDP-8</title>
<link>https://hdl.handle.net/1721.1/6155</link>
<description>REC/8: A CONVERT Compiler of REC for the PDP-8
McIntosh, Harold V.
REC/8 is a CONVERT program, realized in the CTSS LISP of Project MAC, for compiling REC expressions into the machine language of the PDP-8 computer. Since the compilation consists in its majority of subroutine calls (to be compiled, after removal of LISP parentheses, by MACRO-8), the technique is applicable with trivial modification to any other computer having the subroutine jump and indirect transfer instructions. The purpose of the program is both to compile REC expressions and to illustrate the workings of the REC language, and accordingly a description of this language is given. It contains operators and predicates; flow of control is achieved by parentheses, which define subexpressions, colon, which implies iteration, and semicolon, which terminates the execution of an expression. Predicates pass control to the position following the next colon or semicolon, allowing the execution of alternative expression strings.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6155</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>SUBM: A CONVERT Program for Constructing the Subset Machine Defined by a Transition System</title>
<link>https://hdl.handle.net/1721.1/6154</link>
<description>SUBM: A CONVERT Program for Constructing the Subset Machine Defined by a Transition System
McIntosh, Harold V.
SUBM is a CONVERT program, realized in the CTSS LISP of Project MAC, for constructing the subset machine with the same behaviour as a given transition system. The program interactively collects the six items defining a transition system: its state set, alphabet, transition function, initial states, accepting states and spontaneous transitions. It then computes the subset machine, producing its state set, transition function, initial state and accepting states.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6154</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>DDT Reference Manual</title>
<link>https://hdl.handle.net/1721.1/6153</link>
<description>DDT Reference Manual
Osman, Eric
This memo describes the version of DDT used as the command level of the A.I. Laboratory Time Sharing System (ITS). Besides the usual program control, examination, and modification features, this DDT provides many special utility commands. It also has the capability to control several programs for a user, a single instruction continue mode, and an interrupt on read or write reference to a given memory location. This memo was prepared with the assistance of Donald E. Eastlake and many others.
</description>
<pubDate>Wed, 01 Sep 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6153</guid>
<dc:date>1971-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>PICPAC: A PDP-6 Picture Package</title>
<link>https://hdl.handle.net/1721.1/6152</link>
<description>PICPAC: A PDP-6 Picture Package
Silver, Roland
PICPAC is a program to be used for manipulating pictures of real-world scenes. It operates under ITS (the Incompatible Time-Sharing System) under control of a simple on-line command language. It includes facilities for reading pictures from either vidisector, for reading and writing them on disk or microtape, and for displaying or plotting them. It also includes focusing and control functions.
</description>
<pubDate>Sun, 01 Oct 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6152</guid>
<dc:date>1967-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>I/O Test</title>
<link>https://hdl.handle.net/1721.1/6151</link>
<description>I/O Test
Beeler, Michael
IO TEST is intended as a hardware testing and debugging aid for use with the PDP-6 and its associated input multiplexer (analog to digital converter) and output multiplexer (digital to analog converter). While all characters typed are echoed, only the following have any effect on the program's operation: F, Y, W, V, B, E, D, S, nT, P, A.
</description>
<pubDate>Sun, 01 Oct 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6151</guid>
<dc:date>1967-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stereo and Perspective Calculations</title>
<link>https://hdl.handle.net/1721.1/6150</link>
<description>Stereo and Perspective Calculations
Minsky, Marvin
A brief introduction to the use of projective coordinates for hand-eye position computations. Some standard theorems. Appendix A reproduces parts of Roberts' thesis concerning homogeneous coordinates and matching of perspectively transformed objects. Appendix B, by Arnold Griffith, derives the stereo calibration formulae using just the invariance of cross-ratios on projections of lines, and describes a program that uses this.
</description>
<pubDate>Fri, 01 Sep 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6150</guid>
<dc:date>1967-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>STRING</title>
<link>https://hdl.handle.net/1721.1/6149</link>
<description>STRING
Samson, Peter
This document describes the STRING programming language which has been implemented on the MAC Artificial Intelligence Group's PDP-6 computer. In the STRING system, all objects--constants, variables, functions and programs--are stored and processed in the form of strings of characters. The STRING language is unusually concise, yet at the same time unusually rich in commands, including a strong arithmetic facility.
</description>
<pubDate>Fri, 01 Sep 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6149</guid>
<dc:date>1967-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>EUTERPE-LISP: A LISP System with Music Output</title>
<link>https://hdl.handle.net/1721.1/6148</link>
<description>EUTERPE-LISP: A LISP System with Music Output
Smoliar, Stephen
EUTERPE (AI memo no. 129) was designed as a "real-time music program" which would interpret music described as "voice-programs" in DDT. These voice-programs consisted of note words, descriptions of tones to be sounded, and control words which determined the parameters of pitch, tempo, articulation and wave form and allowed for a subroutine feature and transfer within the voice-program. It had been hoped that complex musical forms could be described in terms of a few collections of note words and sequences of control words. However, musical variation and development is more subtle than the developmental power of these control words. Any transformation of musical material may be expressed as a LISP function; therefore, the control words were abandoned and EUTERPE was linked to LISP. The voice-programs would be written and loaded by LISP and played by EUTERPE. The principal function in the system is LOAD, which takes two arguments: 1) an absolute location in core and 2) a list of note words. The note words are translated into EUTERPE-readable code and loaded into the proper voice program. The addresses of the first location of each of the six voice programs are SETQed by the system with the names VOICE1, ..., VOICE6. The value of LOAD is the next free word in core, so a series of lists may be loaded by bootstrapping.
</description>
<pubDate>Fri, 01 Sep 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6148</guid>
<dc:date>1967-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Linearly Unrecognizable Patterns</title>
<link>https://hdl.handle.net/1721.1/6147</link>
<description>Linearly Unrecognizable Patterns
Minsky, Marvin; Papert, Seymour A.
The central theme of this study is the classification of certain geometrical properties according to the type of computation necessary to determine whether a given figure has them.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6147</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decomposition of a Visual Scene into Bodies</title>
<link>https://hdl.handle.net/1721.1/6146</link>
<description>Decomposition of a Visual Scene into Bodies
Guzman, Adolfo
This memorandum describes a program which finds bodies in a scene, presumably formed by 3-dimensional objects, with some of them perhaps not completely visible.
</description>
<pubDate>Fri, 01 Sep 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6146</guid>
<dc:date>1967-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Calcomp Plotter as an Output Device</title>
<link>https://hdl.handle.net/1721.1/6145</link>
<description>The Calcomp Plotter as an Output Device
Speciner, Michael
(1) CHAR PLOT (see AI Memo 125) has been modified for TS. [It may be found on MS4 with the non-TS version]. The following changes should be noted: CRKBRK (now called PLTBRK in the non-TS CHAR PLOT), SUBPLT (which is not needed since PLOTC can be called recursively), PP (ditto), LBUFF and LWBUFF (as the TS system does the buffering) do not exist in the TS version. CRKCHN, now called PLTCHN (in both TS and non-TS versions), does exist. The command 1110 ... (go to the effective address at process time) still exists, but in TS return is with "POPJ P", rather than "JRST 12, @ PLTBRK". The character codes 0 and 200 (lower case 0) respectively OPEN and CLOSE the plotter. (2) CHARPL SCOPE may soon be also so modified for TS. (3) SCOPE PLOT is unchanged. (4) None of the above TS routines can be used easily at present due to the lack of TS STINK.
</description>
<pubDate>Sat, 01 Jul 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6145</guid>
<dc:date>1967-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>PLANNER: A Language for Proving Theorems</title>
<link>https://hdl.handle.net/1721.1/6144</link>
<description>PLANNER: A Language for Proving Theorems
Hewitt, Carl
The following is a description of SCHEMATISE, a proposal for a program that proves very elementary theorems through the use of planning. The method is most easily explained through an example due to Black.
</description>
<pubDate>Sat, 01 Jul 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6144</guid>
<dc:date>1967-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Matrix Inversion in LISP</title>
<link>https://hdl.handle.net/1721.1/6143</link>
<description>Matrix Inversion in LISP
White, John L.
Very shortly there will appear on the vision library tape a file named @IAS which is a collection of compiled SUBRs for performing general matrix row reduction and inversion. For an array A a call (IAS A NEW N M) performs gaussian row reduction on the first N rows of the array A (and in fact operates on only the first M columns); so that if M&gt;N then the N+1st through the Mth columns of the output array contain the solutions to the implicit M-N+1 systems of NxN simultaneous linear equations, while the first N columns contain the inverse matrix of A11 ... ANN. If NEW is "T" then a new array of size NxM is declared for the answers; otherwise the answers are stored directly over the input array and no new array declarations are done. Currently, maximization of pivotal elements is not done; thus IAS will give wrong answers on certain numerically ill-conditioned matrices even though they be non-singular. It is possible to remedy this problem, at some expense, if necessary. IAS also uses a portion of binary program space for temporary storage and may give an error message if not enough space is available.
</description>
<pubDate>Sat, 01 Jul 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6143</guid>
<dc:date>1967-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automata On a 2-Dimensional Tape</title>
<link>https://hdl.handle.net/1721.1/6142</link>
<description>Automata On a 2-Dimensional Tape
Blum, M.; Hewitt, C.
This paper explains our approach to the problem of pattern recognition by serial computer. The rudimentary theory of vision presented here lies within the framework of automata theory. Our goal is to classify the types of patterns that can be recognized by an automaton that scans a finite 2-dimensional tape. For example, we would like to know if an automaton can decide whether or not a given pattern on a tape forms a connected region. This paper should be viewed as a Progress Report on work done to date. Our goal now is to generalize the theory presented here and make it applicable to a wide variety of pattern-recognizing machines.
</description>
<pubDate>Thu, 01 Jun 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6142</guid>
<dc:date>1967-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>PSEG: Standardization of Data</title>
<link>https://hdl.handle.net/1721.1/6141</link>
<description>PSEG: Standardization of Data
Bowring, Jim
PSEG is a function of one argument--a region name which comes from REGIONLIST, as created by TOPOLOGIST. When it is done, the following data structure exists. * indicates that the data was already stored correctly when PSEG got it. REGIONLIST is a list of region names created by TOPOLOGIST. On the property list of each region are the following indicators: TYPE, OUTERBOUNDARY, NUCLEUS, HOLES, NEIGHBORS, SHAPE, VERTIS, and SEGS. VERTEXLIST and SEGMENTLISTs are also discussed.
</description>
<pubDate>Thu, 01 Jun 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6141</guid>
<dc:date>1967-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Glossary of Vision Terms</title>
<link>https://hdl.handle.net/1721.1/6140</link>
<description>A Glossary of Vision Terms
Abbott, Russ
Underlined terms are included in the glossary.
</description>
<pubDate>Thu, 01 Jun 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6140</guid>
<dc:date>1967-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Additions to LAP</title>
<link>https://hdl.handle.net/1721.1/6139</link>
<description>Additions to LAP
White, John L.
In addition to the description on page 13 of AI Memo 116A, LAP has the following features: Current Assembly Location Reference, Assembly Time Arithmetic, Constants, Multiple Entry Routines, and Defined Machine Operations in LAP. The atom "*" has as SYM value during assembly an integer which is the current cell address being assembled into. Thus (JRST O *) is a well known infinite loop equivalent to A (JRST O A). When LAP encounters a non-atomic argument in the position normally occupied by the address part of an instruction, and it is not one of the recognizable forms (QUOTE atom), (E function), or (C constant), then the assembly time values of the members of the list are summed and this is the quantity assigned as address. Thus (JRST O (* 1)) is a do-little instruction roughly equivalent to TRA *+1 in FAP.
</description>
<pubDate>Sat, 01 Jul 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6139</guid>
<dc:date>1967-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>POLYSEG</title>
<link>https://hdl.handle.net/1721.1/6138</link>
<description>POLYSEG
Griffith, Arnold K.
POLYSEG takes as input a list of dotted pairs of numbers. These pairs are assumed to be the co-ordinates of adjacent points along a single closed line. It is further assumed that the x and y co-ordinates of successive points differ by 1, 0, or -1. The output of POLYSEG is a list of dotted pairs of numbers, representing vertices of a polygonal approximation to the figure whose boundary was input. The scale is increased by a factor of four over that of the input; and the output is in fixed or floating point mode, according to the input.
</description>
<pubDate>Sat, 01 Apr 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6138</guid>
<dc:date>1967-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Miscellany of Convert Programming</title>
<link>https://hdl.handle.net/1721.1/6137</link>
<description>A Miscellany of Convert Programming
McIntosh, Harold V.; Guzman, Adolfo
CONVERT shares with other programming languages the circumstance that it is easier to evaluate the language and to learn its uses if it is possible to scrutinize a representative sample of programs which effect typical but simple and easily understood calculations. Consequently we have gathered together several examples of varying degrees of difficulty in order to show CONVERT in action. In each case the CONVERT program, written as a LISP function ready for execution in CTSS, is shown, together with the results of its application to a small variety of arguments, and a general explanation of the program, its intent, the form of its arguments, and its method of operation. When the notation CLOCK (()) ... CLOCK (T) appears, the time of execution has been determined, and is shown, in tenths of seconds, immediately after the result has been printed. Since there is no particular organization to the selection of examples, we here give a brief catalogue of them.
</description>
<pubDate>Sat, 01 Apr 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6137</guid>
<dc:date>1967-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>LISP Linkage Feature: Incorporating MIDAS into PDP-6 LISP</title>
<link>https://hdl.handle.net/1721.1/6136</link>
<description>LISP Linkage Feature: Incorporating MIDAS into PDP-6 LISP
Silver, Roland
Some PDP6 LISP users have felt a need for a way to incorporate MIDAS subroutines into LISP. LISP has been changed to let you do this, using files found on the LISP SYSTEM microtape. You write a routine for LISP in much the same way that you write any other MIDAS relocatable subroutine. You must, however, observe the constraints imposed by LISP's allocation and use of accumulators, and its methods of handling input, output, and interrupts. In addition, you require linkage to LISP before your routine can operate properly: the entry point(s) of the subroutine must be put on the property list(s) of the appropriate atom(s), and the address fields of the instructions pointing to other routines, to list structure, or to other LISP data structures must be set properly. This is done when LISP begins operation, after allocation but before going into its listen loop. We provide eight macros to ease the job of creating such linkages: SUBR, FSUBR, LSUBR, MACRO, QUOTE, E, SPECIAL, and SYM. If you write "SUBR name" at a location a in your routine, LISP will subsequently ascribe the property SUBR to the atom name, with entry location a. Similar remarks apply to the use of FSUBR, LSUBR, and MACRO. The significance and use of the other four macros is perhaps best communicated through examples: 1. An instruction like "MOVEI A,QUOTE(X Y Z)" will be assembled as "MOVEI A,O". At link time, however, LISP will insert the location of the list (X Y Z) into the address field of the instruction. 2. Suppose that the atom FOO has the properties shown in Figure 1. Then the instructions "MOVEI A QUOTE FOO", "MOVEM B, SPECIAL FOO", "PUSHJ P, SYM FOO", and "CALL E FOO" will each be assembled with a zero address field, which will be modified at link time to be b, c, 106, and 101, respectively.
Revised
</description>
<pubDate>Sun, 01 Oct 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6136</guid>
<dc:date>1967-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>CNTOUR</title>
<link>https://hdl.handle.net/1721.1/6135</link>
<description>CNTOUR
Krakauer, Lawrence J.
The CNTOUR program plots an intensity relief map of an image which is read from tape, disc, or from either vidisector camera. It is used to examine vidisector images. It may also be used as a general purpose aiming, monitoring and focusing program, especially for high-contrast images, for which it produces something like a line drawing. The program is available both in a time sharing and a non time sharing version.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6135</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>SCPLOT BIN</title>
<link>https://hdl.handle.net/1721.1/6134</link>
<description>SCPLOT BIN
Sordillo, Donald
This program will take a list of display instructions and cause it to be plotted. For further or more detailed information consult with Michael Speciner.
</description>
<pubDate>Sat, 01 Oct 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6134</guid>
<dc:date>1966-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Primitive Control P Feature</title>
<link>https://hdl.handle.net/1721.1/6133</link>
<description>A Primitive Control P Feature
Eastlake, Donald E., III
A program, some TECO macros, and some small modifications to existing systems software, collectively called PRO, have been written whose purpose is to reduce the large number of control languages and system programs it has been necessary to know about, and the large amount of redundant typing it has been necessary to do, to effectively use the MAC PDP-6 system. PRO allows a user knowing the command languages of only TECO, DDT, and PRO to effectively edit and debug small absolute programs with a minimum of command typing overhead (systems of this sort are called control P features for historic reasons). The remainder of this memo, which describes PRO and its use in detail, assumes some knowledge of TECO, DDT, and the MAC PDP-6 system. (In this memo the symbol $ always stands for the character ALT MOD.)
</description>
<pubDate>Sat, 01 Oct 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6133</guid>
<dc:date>1966-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Music Playing on the PDP-6</title>
<link>https://hdl.handle.net/1721.1/6132</link>
<description>Music Playing on the PDP-6
Sordillo, Donald
This memo describes a process of converting coded music into auditory stimuli on the PDP-6. Attached is a copy of the original specifications for the coding (a PDP-1 memo by Peter Samson).
</description>
<pubDate>Mon, 01 Aug 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6132</guid>
<dc:date>1966-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Input Macro for TECO</title>
<link>https://hdl.handle.net/1721.1/6131</link>
<description>An Input Macro for TECO
Eastlake, D.
A macro has been written for TECO that enables one to insert characters into the buffer as they are typed, with the entire current page (if not greater than the display screen's height in length) always being displayed. This macro now exists on the MACDMP system tape as a file entitled "CTLP INP".
</description>
<pubDate>Thu, 01 Sep 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6131</guid>
<dc:date>1966-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modifications to PDP-6 Teletype Logic</title>
<link>https://hdl.handle.net/1721.1/6130</link>
<description>Modifications to PDP-6 Teletype Logic
Knight, Tom
The existing teletype logic for the PDP-6 has been modified to accommodate up to four additional teletypes. These were added with a minimum of change to the existing logic, and are easily removable by taking out the cable in 4M2 and replacing the cable in 4M1 with the jumper module.
</description>
<pubDate>Mon, 01 Aug 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6130</guid>
<dc:date>1966-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Output to the PDP-6 Calcomp Plotter</title>
<link>https://hdl.handle.net/1721.1/6129</link>
<description>Output to the PDP-6 Calcomp Plotter
Holloway, Jack
The plotter on the console of the PDP-6 is currently attached to device number 774, and accepts stepping pulses given under control of a CONO to that device. Its normal mode of operation is to CONO the desired bits on, wait an instruction, and CONO a zero.
</description>
<pubDate>Mon, 01 Aug 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6129</guid>
<dc:date>1966-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Additions to Vision Library</title>
<link>https://hdl.handle.net/1721.1/6128</link>
<description>Additions to Vision Library
White, John
Modified LAP: Additions have been made to  LAP as described in the PDP-6 write-up.
</description>
<pubDate>Mon, 01 Aug 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6128</guid>
<dc:date>1966-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Summer Vision Group: A Quick Look at Some of Our Programs</title>
<link>https://hdl.handle.net/1721.1/6127</link>
<description>Summer Vision Group: A Quick Look at Some of Our Programs
Sussman, Gerald Jay; Guzman, Adolfo
no abstract
</description>
<pubDate>Fri, 01 Jul 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6127</guid>
<dc:date>1966-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sides 21</title>
<link>https://hdl.handle.net/1721.1/6126</link>
<description>Sides 21
Greenblatt, Richard; Sordillo, Donald A.
SIDES 21 produces a graph consisting of the locations of lines which comprise the sides of either a geometric solid or a plane figure. The representation is in floating point mode, suitable for subsequent processing. The input is a picture intensity-function.
</description>
<pubDate>Mon, 01 Aug 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6126</guid>
<dc:date>1966-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Summer Vision Project</title>
<link>https://hdl.handle.net/1721.1/6125</link>
<description>The Summer Vision Project
Papert, Seymour A.
The summer vision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system. The particular task was chosen partly because it can be segmented into sub-problems which allow individuals to work independently and yet participate in the construction of a system complex enough to be a real landmark in the development of "pattern recognition". The basic structure is fixed for the first phase of work extending to some point in July. Everyone is invited to contribute to the discussion of the second phase. Sussman is coordinator of "Vision Project" meetings and should be consulted by anyone who wishes to participate. The primary goal of the project is to construct a system of programs which will divide a vidisector picture into regions such as likely objects, likely background areas and chaos. We shall call this part of its operation FIGURE-GROUND analysis. It will be impossible to do this without considerable analysis of shape and surface properties, so FIGURE-GROUND analysis is really inseparable in practice from the second goal which is REGION DESCRIPTION. The final goal is OBJECT IDENTIFICATION which will actually name objects by matching them with a vocabulary of known objects.
</description>
<pubDate>Fri, 01 Jul 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6125</guid>
<dc:date>1966-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Integration II</title>
<link>https://hdl.handle.net/1721.1/6124</link>
<description>Symbolic Integration II
Moses, Joel
In this memo we describe the current state of the integration program originally described in AI Memo 97 (MAC-M-310). Familiarity with Memo 97 is assumed. Some of the algorithms described in that memo have been extended. Certain new algorithms and a simple integration-by-parts routine have been added. The current program can integrate all the problems which were solved by SAINT and also the two problems which were not solved. Due to the addition of a decision procedure the program is capable of identifying certain integrands (such as e^(x^2) or e^x/x) as not integrable in closed form.
</description>
<pubDate>Sat, 01 Oct 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6124</guid>
<dc:date>1966-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Experiments in Finite Algebra-II</title>
<link>https://hdl.handle.net/1721.1/6123</link>
<description>Computer Experiments in Finite Algebra-II
Maurer, W.D.
In a previous memo (Computer Experiments in Finite Algebra, MAC-M-245) we described a computer system for the handling of finite groups, semigroups, subsets, finite maps, and constants. This system has been extended to read and write disk files; a mechanical procedure has been developed for extending the system; and a program (the inferential Compiler) has been written which accepts a source language consisting of mathematical statements in a standard format and compiles code which verifies these statements over a file or files of special cases (including possible counterexamples). Three limitations of the system were mentioned in the previous memo. Of these, (1) and (3) have been effectively eliminated in the current system. Limitation (2) still exists and will be overcome only in ALGEBRA III, which is briefly described in section 4.
</description>
<pubDate>Wed, 01 Dec 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6123</guid>
<dc:date>1965-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Experiments in Finite Algebra</title>
<link>https://hdl.handle.net/1721.1/6122</link>
<description>Computer Experiments in Finite Algebra
Maurer, W.D.
The experiments described here concern an initial design for a computer system specifically for the handling of finite groups, rings, fields, semigroups, and vector spaces. The usefulness of such a system was discussed in (1). The system has been coded in MAD, with certain subroutines in FAP, for the IBM 7094, and is designed to operate in a time-sharing environment.
</description>
<pubDate>Tue, 01 Jun 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6122</guid>
<dc:date>1965-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>PDP-6 LISP Input-Output for the Dataphone</title>
<link>https://hdl.handle.net/1721.1/6121</link>
<description>PDP-6 LISP Input-Output for the Dataphone
Martin, William A.
A version of LISP 1.5 for the PDP-6 Computer has been extended to include IO through the dataphone. This makes possible communication between programs running in Project MAC time sharing and LISP programs running on the PDP-6. The method of handling input-output for the dataphone is similar to that for the typewriter, paper tape punch, and paper tape reader. Three useful LISP functions are presented as examples of dataphone programming.
</description>
<pubDate>Tue, 01 Jun 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6121</guid>
<dc:date>1965-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in Model Theory</title>
<link>https://hdl.handle.net/1721.1/6120</link>
<description>Topics in Model Theory
Levin, Michael
The concept of "free" as in free group and free semi-group is extended to arbitrary first order theories. Every consistent theory has free models. Some problems of obtaining a categorical theory of models are discussed.
</description>
<pubDate>Sat, 01 May 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6120</guid>
<dc:date>1965-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Matter, Mind and Models</title>
<link>https://hdl.handle.net/1721.1/6119</link>
<description>Matter, Mind and Models
Minsky, Marvin
This paper attempts to explain why people become confused by questions about the relation between mental and physical events. When a question leads to confused, inconsistent answers, this may be (1) because the question is ultimately meaningless or at least unanswerable, but it may also be (2) because an adequate answer requires a powerful analytical apparatus. My view is that many important questions about the relation between mind and brain are of this latter kind, and that some of the necessary technical and conceptual tools are becoming available as a result of work on the problems of making computer programs behave intelligently. In this paper we suggest a theory of why introspection does not give clear answers to these questions. The paper does not go very far toward finding technical solutions to the questions, but there is probably some value in finding at least a clear explanation of why we are confused.
</description>
<pubDate>Mon, 01 Mar 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6119</guid>
<dc:date>1965-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The COMIT Feature in LISP II</title>
<link>https://hdl.handle.net/1721.1/6118</link>
<description>The COMIT Feature in LISP II
Bobrow, Daniel G.
The purpose of the COMIT feature is to facilitate certain types of list manipulations in LISP II. This feature is a syntactic convenience, rather than an extension of the semantics of LISP. It permits the programmer to test directly whether a piece of list structure matches a certain pattern, and if so, to construct another structure utilizing subsegments of the original structure which matched parts of the given pattern.
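As an illustrative sketch (not from the memo), a much-simplified version of this match-then-construct style can be written in Python; the "$" wildcard, the function name, and the list encoding here are invented for illustration and do not reproduce the LISP II syntax:

```python
# Minimal pattern matching in the COMIT style: "$" matches any single
# element; literals must match exactly. Returns the matched subsegments
# (so they can be reused in constructing a new structure), or None.
def match(pattern, data):
    """Return the elements bound to each "$" slot, or None on failure."""
    if len(pattern) != len(data):
        return None
    slots = []
    for p, d in zip(pattern, data):
        if p == "$":
            slots.append(d)
        elif p != d:
            return None
    return slots  # note: may be []; test with `is not None`

# Match a structure, then reuse the matched subsegments in a new one.
slots = match(["the", "$", "is", "$"], ["the", "cat", "is", "black"])
rebuilt = ["a", slots[1], slots[0]] if slots is not None else None
```

A fuller version would also support variable-length segment wildcards, which is where most of COMIT's expressive power lies.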
</description>
<pubDate>Mon, 01 Feb 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6118</guid>
<dc:date>1965-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Television Camera-To-Computer Adapter: PDP-6 Device 770</title>
<link>https://hdl.handle.net/1721.1/6117</link>
<description>Television Camera-To-Computer Adapter: PDP-6 Device 770
Minsky, Marvin
The TVA (Television Adaptor) is a data-input device just completed. Any standard Closed-Circuit Television Camera can be connected to the PDP-6, without modification, by a single BNC connector. Then a simple program can make a digitized image of selected size and position appear in core memory. Operation is automatically controlled by the PDP-6 priority-interrupt system so that, to the programmer, the core-image is automatically read-in and maintained. This is an open invitation to come in and discuss applications. We are particularly interested in (i) projects leading to a working page-reader system, first for teletype character sets and later to include recognition of larger alphabets and hand-written corrections, and (ii) projects leading to recognition functions that will be useful in coordination with the mechanical hand system.
</description>
<pubDate>Fri, 01 Jan 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6117</guid>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CTSS LISP Notice-Supplement to A.I. Memo No. 67</title>
<link>https://hdl.handle.net/1721.1/6116</link>
<description>CTSS LISP Notice-Supplement to A.I. Memo No. 67
Hart, T.
The LISP system (command version) has been updated. Bugs corrected include: 1. out of pushdown list in a compiled function will not transfer to 77777. 2. with compiler printing turned off by comprint, it is truly off. 3. "ERROR54A/" when running a compiled program no longer occurs. 5. CSET and CSETQ have their proper values. 6. the public versions of PRINT DATA and EDIT DATA have been improved. In particular, the function DEFINELIST has been removed from PRINT; EDIT has had a minor bug in filelistadd corrected, and the functions filelistdelete [1; x; y] and extract [1; n; m] added. The former deletes the function on the list 1 from file n m and writes a new file n EDIT with these changes made. The latter extracts the function 1 from the file n DATA and adds them to the file m DATA, updating the disc by writing appropriate EDIT class files.
</description>
<pubDate>Tue, 01 Dec 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6116</guid>
<dc:date>1964-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unrecognizable Sets of Numbers</title>
<link>https://hdl.handle.net/1721.1/6115</link>
<description>Unrecognizable Sets of Numbers
Minsky, Marvin; Papert, Seymour A.
When is a set A of positive integers, represented as binary numbers, "regular" in the sense that it is a set of sequences that can be recognized by a finite-state machine? Let π_A(n) be the number of members of A less than the integer n. It is shown that the asymptotic behavior of π_A(n) is subject to severe restraints if A is regular. These constraints are violated by many important natural numerical sets whose distribution functions can be calculated, at least asymptotically. These include the set P of prime numbers, for which π_P(n) ~ n/log n for large n; the set A(k) of integers of the form n^k, for which π_A(k)(n) ~ n^(1/k); and many others. The technique cannot, however, yield a decision procedure for regularity, since for every infinite regular set A there is a nonregular set A′ for which |π_A′(n) − π_A(n)| ≤ 1, so that the asymptotic behaviors of the two distribution functions are essentially identical.
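As a numerical aside (not from the memo), the prime-counting asymptotic π_P(n) ~ n/log n cited above is easy to check with a short sieve; the function name below is invented:

```python
import math

def prime_count(n):
    """Count the primes below n with a simple sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

# The ratio π_P(n) / (n / log n) tends toward 1, though very slowly.
n = 100_000
ratio = prime_count(n) / (n / math.log(n))
```

At n = 100,000 the ratio is still around 1.1, illustrating how slowly the asymptotic is approached.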
</description>
<pubDate>Sun, 01 Nov 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6115</guid>
<dc:date>1964-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposed Instructions on the GE 635 for List Processing and Push Down Stacks</title>
<link>https://hdl.handle.net/1721.1/6114</link>
<description>Proposed Instructions on the GE 635 for List Processing and Push Down Stacks
Levin, Michael
The instructions that transmit data between the index registers and the memory work only on the left half (address) portion of memory. These instructions are LDXn (load index n from address of storage word) and STXn (store the contents of index n in address of storage word). The effective address of both of these instructions includes modification by index registers. A corresponding set of instructions for transmitting data to or from the right half of memory would facilitate list structure operations. The present order code makes it impossible to do list-chaining operations (car or cdr) without disturbing the A or Q registers.
</description>
<pubDate>Tue, 01 Sep 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6114</guid>
<dc:date>1964-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>String Manipulation in the New Language</title>
<link>https://hdl.handle.net/1721.1/6113</link>
<description>String Manipulation in the New Language
Bobrow, Daniel G.
String manipulation can be made convenient within the *** language by implementing two functions: 1) match [workspace; pattern] and 2) construct [format; pmatch]. In this memo I describe how I think these two functions can be implemented, and how they might be used to express operations now conveniently denoted in string manipulation languages such as COMIT, SNOBOL, and METEOR.
</description>
<pubDate>Wed, 01 Jul 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6113</guid>
<dc:date>1964-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Operation of a Semantic Question-Answering System</title>
<link>https://hdl.handle.net/1721.1/6112</link>
<description>Operation of a Semantic Question-Answering System
Raphael, Bertram
A computer program has been written in the LISP programming language which accepts information and answers questions presented to it in a restricted form of natural English language. The program achieves its effects by automatically creating, adding to, and searching a relational model for factual information. The purpose of this memo is to describe and explain the behavior of the program.  The remainder of this section briefly describes the structure of the model. Section II presents sample conversations illustrating various features of the program, and describes the implementation of those features. Section III is a brief survey of conclusions drawn from this research. It is assumed throughout that the reader is at least somewhat familiar with the LISP programming system (and its meta-language notation), the concept of property (description) lists, and the usual notations of Mathematical Logic.
</description>
<pubDate>Fri, 01 Nov 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6112</guid>
<dc:date>1963-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>MACRO Definitions for LISP</title>
<link>https://hdl.handle.net/1721.1/6111</link>
<description>MACRO Definitions for LISP
Hart, Timothy P.
In LISP 1.5 special forms are used for three logically separate purposes: a) to reach the alist, b) to allow functions to have an indefinite number of arguments, and c) to keep arguments from being evaluated.  New LISP interpreters can easily satisfy need (a) by making the alist a SPECIAL-type or APVAL-type entity. Uses (b) and (c) can be replaced by incorporating a MACRO instruction expander in define. I am proposing such an expander.
</description>
<pubDate>Tue, 01 Oct 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6111</guid>
<dc:date>1963-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Proposal for a Geometry Theorem Proving Program</title>
<link>https://hdl.handle.net/1721.1/6110</link>
<description>A Proposal for a Geometry Theorem Proving Program
Hart, Timothy P.
During the last half of the nineteenth century the need for formal methods of proof became evident to mathematicians who were making such confidence-shaking discoveries as non-Euclidean geometry. "The demand is not to be denied; every jump must be barred from our deductions. That it is hard to satisfy must be set down to the tediousness of proceeding step by step. Every proof which is even a little complicated threatens to become inordinately long." [M1] G. Frege, 1884. This general desire for rigor has persisted since that time, and a great deal has been learned about formal methods. But, for the reason noted by Frege, very little of real mathematics has been done with full formal treatment. Our present hope is to use computers to take the drudgery out of formal demonstrations, just as they are taking it out of accounting. Toward this end, several programs are under way. They vary in purpose; the Proofchecker [H8, H9] is to be capable of filling the gaps of a proof; the work of Mott et al. [H10] aims to achieve the equivalent of a desk calculator ability as an aid to a mathematician doing formal proofs. The most intriguing prospect, however, is that computers can eventually be made to both devise and prove interesting non-trivial theorems wholly on their own. The first of these desires, the devising of interesting conjectures, has not even been attempted. I believe, however, that we are on the verge of achieving the second of these ends, the mechanical proof of non-trivial theorems, a belief which I hope I can justify in the sequel.
</description>
<pubDate>Sun, 01 Sep 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6110</guid>
<dc:date>1963-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Primitive Recursion</title>
<link>https://hdl.handle.net/1721.1/6109</link>
<description>Primitive Recursion
Levin, Michael
This is one of a series of memos concerning a logical system for proof-checking. It is not self-contained, but belongs with future memos which will describe a complete formal system with its intended interpretation and application. This memo also assumes familiarity with LISP and with "A Basis for a Mathematical Theory of Computation" by John McCarthy.
</description>
<pubDate>Mon, 01 Jul 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6109</guid>
<dc:date>1963-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal for a FAP Language Debugging Program</title>
<link>https://hdl.handle.net/1721.1/6108</link>
<description>Proposal for a FAP Language Debugging Program
Winett, Joel
A time-sharing system for the 7090 computer is being developed at the M.I.T. Computation Center whereby many users can communicate simultaneously with the computer through individual consoles. In the time-sharing system a time-sharing supervisor (TSS) program directs the running of each user's program in such a manner that each user's program is run in short bursts of computation. The effect is that the user sitting at his console has complete control over his program with unrestricted use of a large computing machine. Through the use of commands in the time-sharing system a user who writes a program in the FAP language can assemble his program, load it into core, and start the program. In order to make the most use of the time-sharing facility the user during the debugging stages of his program will want to dynamically monitor his running program and make changes as necessary. The proposed FAP language debugging program gives the user the facility to communicate with his program using the symbols defined within his program.
</description>
<pubDate>Sat, 01 Jun 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6108</guid>
<dc:date>1963-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Universality of TAG Systems with P=2</title>
<link>https://hdl.handle.net/1721.1/6107</link>
<description>Universality of TAG Systems with P=2
Cocke, John; Minsky, Marvin
In the following sections we show, by a simple direct construction, that computations done by Turing machines can be duplicated by a very simple symbol manipulation process. The process is described by a simple form of Post canonical system with some very strong restrictions. First, the system is monogenic: each formula (string of symbols) of the system can be affected by one and only one production (rule of inference) to yield a unique result. Accordingly, if we begin with a single axiom (initial string) the system generates a simply ordered sequence of formulas, and this operation of a monogenic system brings to mind the idea of a machine. The Post canonical system is further restricted to be of the "Tag" variety, described briefly below. It was shown in [1] that Tag systems are equivalent to Turing machines. The proof in [1] is very complicated and uses lemmas concerned with a variety of two-tape non-writing Turing machines. Our proof here avoids these otherwise interesting machines and strengthens the main result, obtaining the theorem with a best possible "deletion number" P = 2. Also, the representation of the Turing machine in the present system has a lower degree of exponentiation, which may be of significance in applications. These systems seem to be of value in establishing unsolvability of combinatorial problems.
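As an illustrative sketch (not from the memo), a tag system with deletion number P = 2 can be simulated in a few lines: at each step the first symbol selects a production, that production is appended to the end of the word, and the first two symbols are deleted. The production rules below are a standard small textbook example, not the Turing-machine construction of the paper:

```python
def run_tag_system(word, rules, deletion_number=2, max_steps=100):
    """Run a monogenic tag system, returning the sequence of words produced.

    At each step: look at the first symbol, append its production to the
    end, then delete the first `deletion_number` symbols. Halts when the
    word is too short or the head symbol has no production.
    """
    history = [word]
    for _ in range(max_steps):
        if len(word) < deletion_number:
            break
        head = word[0]
        if head not in rules:  # a symbol without a production halts the system
            break
        word = word[deletion_number:] + rules[head]
        history.append(word)
    return history

# A well-known small 2-tag example (illustrative only).
rules = {"a": "bc", "b": "a", "c": "aaa"}
trace = run_tag_system("aaa", rules, max_steps=10)
```

Because the system is monogenic, each word has exactly one successor, so the trace is a simply ordered sequence of formulas, as described above.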
</description>
<pubDate>Mon, 01 Apr 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6107</guid>
<dc:date>1963-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>METEOR: A LISP Interpreter for String Transformations</title>
<link>https://hdl.handle.net/1721.1/6106</link>
<description>METEOR: A LISP Interpreter for String Transformations
Bobrow, Daniel G.
Conditional expressions, composition and recursion are the basic operations used in LISP to define functions on list structures. Any computable function of arbitrarily complex list structures may be described using these operations, but certain simple transformations of linear lists (strings) are awkward to define in this notation. Such transformations may be characterized (and caricaturized) by the following instructions for a transformation: "Take that substring there, and that other one starting with "Black", which has the substring mentioned third as its first; then insert the second substring mentioned; omit the first and leave the unmentioned parts of the original string unchanged."
</description>
<pubDate>Mon, 01 Apr 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6106</guid>
<dc:date>1963-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Suggested Conventions for LISP Time-Sharing System</title>
<link>https://hdl.handle.net/1721.1/6105</link>
<description>Suggested Conventions for LISP Time-Sharing System
Robnett, Richard A.
Below is a list of suggested Conventions and De-bugging aids for LISP time-sharing. Any and all suggestions are encouraged and should be submitted in writing to R. A. Robnett in a hurry.
</description>
<pubDate>Mon, 01 Apr 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6105</guid>
<dc:date>1963-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Representation of Semantic Information</title>
<link>https://hdl.handle.net/1721.1/6104</link>
<description>Computer Representation of Semantic Information
Raphael, Bertram
A major obstacle in the development of learning machines, mechanical translation, advanced information retrieval systems, and other areas of artificial intelligence, has been the problem of defining, encoding, and representing within a computer the "meaning" of the text data being processed. Various devices have been used to avoid this problem, but very little work has been done toward solving it. The purpose of this memo (and the thesis research with which it is associated) is to describe one possible solution, and report on a computer program which demonstrates its feasibility.
</description>
<pubDate>Mon, 01 Apr 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6104</guid>
<dc:date>1963-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Nets and Theories of Memory</title>
<link>https://hdl.handle.net/1721.1/6103</link>
<description>Neural Nets and Theories of Memory
Minsky, Marvin
A number of models developed in work often called "neural-net" research may be of interest to physiologists working on the problem of memory. From this work comes a variety of ideas on how networks of neuron-like elements can be made to act as learning machines. Some of these may suggest ways in which memory may be stored in nervous systems. It is important, perhaps, to recognize that these models were not founded at all on physiological ideas; they really stem from psychological and introspective notions. They all involve some form of alteration of synaptic transmission properties contingent on the pre- and post-synaptic activity during and after the relevant behavior. This notion is suggested not so much by actual observation of synapses as by the introspective simile of wearing down a path -- the "ingraining" of a frequently-traveled route. Below we shall argue that this idea is useful and suggestive, but not sufficient. These models can be made to account for learning connections between stimuli and responses on a low level, but do not seem to account for higher, symbolic behavior. We will argue that the latter suggests a return to the search for localization of memory, a topic that has been unpopular for many years.
</description>
<pubDate>Fri, 01 Mar 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6103</guid>
<dc:date>1963-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Proposal to Investigate the Application of a Heuristic Theory of Tree Searching to a Chess Playing Program</title>
<link>https://hdl.handle.net/1721.1/6102</link>
<description>A Proposal to Investigate the Application of a Heuristic Theory of Tree Searching to a Chess Playing Program
Bloom, Burton H.
The problem of devising a mechanical procedure for playing chess is fundamentally the problem of searching the very large move-tree associated with a chess position. This tree-searching problem is representative of a large class of problems. Consequently, we will first present briefly a general theory of tree-searching problems. This theory will be useful in clarifying the intention of our proposed research.
</description>
<pubDate>Fri, 01 Feb 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6102</guid>
<dc:date>1963-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Heuristic Program to Solve Geometric Analogy Problems</title>
<link>https://hdl.handle.net/1721.1/6101</link>
<description>A Heuristic Program to Solve Geometric Analogy Problems
Evans, T.G.
A program to solve a wide class of intelligence-test problems of the "geometric-analogy" type ("figure A is to figure B as figure C is to which of the following figures?") is being constructed. The program, which is written in LISP, uses heuristic methods to (a) calculate, from relatively primitive input descriptions, "articular" (cf. Minsky, Steps Toward Artificial Intelligence) descriptions of the figures, then (b) utilize these descriptions in finding an appropriate transformation rule and applying it, modifying it as necessary, to arrive at an answer. The current version has solved a number of geometric-analogy problems and is now being modified in several ways and run on further test cases.
</description>
<pubDate>Mon, 01 Oct 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6101</guid>
<dc:date>1962-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Identities Concerning the Function Subst [x; y; z]</title>
<link>https://hdl.handle.net/1721.1/6100</link>
<description>Some Identities Concerning the Function Subst [x; y; z]
Norton, Lewis M.
The purpose of this paper is two-fold: 1) to explore the use of recursion induction in proving theorems about functions of symbolic expressions, and in particular 2) to investigate thoroughly the algebraic properties of the LISP function subst [x; y; z] by this method. The main result is embodied in Theorem 8.
Revised March 1962
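As an illustrative sketch (not from the memo), subst [x; y; z] can be modelled in Python with S-expressions encoded as nested tuples and atoms as strings: substitute x for every occurrence of the atom y in the expression z. One simple identity of the kind provable by recursion induction is checked at the end:

```python
def subst(x, y, z):
    """Substitute x for every occurrence of the atom y in the S-expression z."""
    if isinstance(z, tuple):  # z is a list structure: recurse into each element
        return tuple(subst(x, y, e) for e in z)
    return x if z == y else z  # z is an atom

expr = ("plus", "a", ("times", "a", "b"))

# An easy identity: substituting y for itself leaves any expression unchanged,
# i.e. subst[y; y; z] = z. (The memo's Theorem 8 concerns deeper identities.)
assert subst("a", "a", expr) == expr
```

The recursive definition mirrors the LISP original, which recurs on car and cdr; here the tuple comprehension plays both roles at once.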
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6100</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Basis for a Mathematical Theory of Computation</title>
<link>https://hdl.handle.net/1721.1/6099</link>
<description>A Basis for a Mathematical Theory of Computation
McCarthy, John
This paper is a corrected version of the paper of the same title given at the Western Joint Computer Conference, May 1961. A tenth section discussing the relations between mathematical logic and computation has been added. Programs that learn to modify their own behaviors require a way of representing algorithms so that interesting properties and interesting transformations of algorithms are simply represented. Theories of computability have been based on Turing machines, recursive functions of integers, and computer programs. Each of these has artificialities which make it difficult to manipulate algorithms or to prove things about them. The present paper presents a formalism based on conditional forms and recursive functions whereby the functions computable in terms of certain base functions can be simply expressed. We also describe some of the formal properties of conditional forms and a method called recursion induction for proving facts about algorithms. A final section on the relations between computation and mathematical logic is included.
</description>
<pubDate>Mon, 01 Jan 1962 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6099</guid>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Alpha-Beta Heuristic</title>
<link>https://hdl.handle.net/1721.1/6098</link>
<description>The Alpha-Beta Heuristic
Edwards, D.J.; Hart, T.P.
The Alpha-Beta heuristic is a method for pruning unneeded branches from the move tree of a game. The algorithm makes use of information gained about part of the tree to reject those branches which will not affect the principal variation.
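The pruning idea can be sketched in modern terms; the following Python uses a hypothetical Node structure of our own devising, not the memo's original formulation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    value: float = 0.0                           # static evaluation (used at leaves)
    children: list = field(default_factory=list)

def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax search that prunes branches unable to affect the principal variation."""
    if depth == 0 or not node.children:
        return node.value
    if maximizing:
        for child in node.children:
            alpha = max(alpha, alphabeta(child, depth - 1, alpha, beta, False))
            if alpha >= beta:
                break   # cutoff: the minimizer already has a better option elsewhere
        return alpha
    for child in node.children:
        beta = min(beta, alphabeta(child, depth - 1, alpha, beta, True))
        if alpha >= beta:
            break       # cutoff for the maximizer
    return beta

# Tiny two-ply game: the second subtree is abandoned after its first leaf (2 < 3).
tree = Node(children=[Node(children=[Node(3), Node(5)]),
                      Node(children=[Node(2), Node(9)])])
best = alphabeta(tree, 2, float('-inf'), float('inf'), True)
```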
</description>
<pubDate>Fri, 01 Dec 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6098</guid>
<dc:date>1961-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Character-Handling Facilities in the LISP System</title>
<link>https://hdl.handle.net/1721.1/6097</link>
<description>Character-Handling Facilities in the LISP System
Abrahams, Paul
Because of the new read program, a number of facilities are being added to the LISP system to permit manipulation of single characters and print names. Machine-language functions have been provided for breaking print names down into a list of their characters, for forming a list of characters into a print name, for creating a numerical object from a list of its characters, for reading in characters one by one from an input medium, and for testing characters to see whether they are letters, numbers, operation characters, etc. A number of auxiliary objects and sub-routines are also described in this memo.
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6097</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recursive Functions of Symbolic Expressions and Their Computation by Machine</title>
<link>https://hdl.handle.net/1721.1/6096</link>
<description>Recursive Functions of Symbolic Expressions and Their Computation by Machine
McCarthy, J.
The attached paper is a description of the LISP system starting with the machine-independent system of recursive functions of symbolic expressions. This seems to be a better point of view for looking at the system than the original programming approach. After revision, the paper will be submitted for publication in a logic or computing journal. This memorandum contains only the machine independent parts of the system. The representation of S-expressions in the computer and the system for representing S-functions by computer subroutines will be added.
</description>
<pubDate>Fri, 13 Mar 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6096</guid>
<dc:date>1959-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>The Incremental Garbage Collection of Processes</title>
<link>https://hdl.handle.net/1721.1/6095</link>
<description>The Incremental Garbage Collection of Processes
Baker, Henry G.; Hewitt, Carl
This paper investigates some problems associated with an expression evaluation order that we call "future" order, which is different from call-by-name, call-by-value, and call-by-need. In future order evaluation, an object called a "future" is created to serve as the value of each expression that is to be evaluated, and a separate process is dedicated to its evaluation. This mechanism allows the fully parallel evaluation of the expressions in a programming language. We discuss an approach to a problem that arises in this context: futures which were thought to be relevant when they were created become irrelevant through not being needed later in the computation. The problem of irrelevant processes also appears in multiprocessing problem-solving systems which start several processors working on the same problem but with different methods, and return with the solution which finishes first. This parallel method strategy has the drawback that the processes which are investigating the losing methods must be identified, cleanly stopped, and the processors they are using reassigned to more useful tasks. The solution we propose is that of incremental garbage collection. The goal structure of the solution plan should be explicitly represented in memory as part of the graph memory (like Lisp's heap) so that a garbage collection algorithm can discover which processes are performing useful work, and which can be recycled for a new task. An incremental algorithm for the unified garbage collection of storage and processes is described.
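The "race several methods, keep the winner" strategy described above can be sketched with present-day futures. This Python sketch uses concurrent.futures, with explicit cancellation standing in for the memo's proposed garbage collection of irrelevant processes; the two methods are invented for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def slow_method(x):
    time.sleep(0.2)          # a losing method: correct but slower
    return x * x

def fast_method(x):
    return x * x             # the winning method

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(slow_method, 6), pool.submit(fast_method, 6)]
    done, losing = wait(futures, return_when=FIRST_COMPLETED)
    result = done.pop().result()
    for f in losing:
        f.cancel()           # reclaim the now-irrelevant work (GC in the memo's scheme)
```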
</description>
<pubDate>Thu, 01 Dec 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6095</guid>
<dc:date>1977-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Art of the Interpreter or, The Modularity Complex (Parts Zero, One, and Two)</title>
<link>https://hdl.handle.net/1721.1/6094</link>
<description>The Art of the Interpreter or, The Modularity Complex (Parts Zero, One, and Two)
Steele, Guy Lewis, Jr.; Sussman, Gerald Jay
We examine the effects of various language design decisions on the programming styles available to a user of the language, with particular emphasis on the ability to incrementally construct modular systems. At each step we exhibit an interactive meta-circular interpreter for the language under consideration. Each new interpreter is the result of an incremental change to a previous interpreter. We explore the consequences of various variable binding disciplines and the introduction of side effects. We find that dynamic scoping is unsuitable for constructing procedural abstractions, but has another role as an agent of modularity, being a structured form of side effect. More general side effects are also found to be necessary to promote modular style. We find that the notion of side effect and the notion of equality (object identity) are mutually constraining; to define one is to define the other. The interpreters we exhibit are all written in a simple dialect of LISP, and all implement LISP-like languages. A subset of these interpreters constitute a partial historical reconstruction of the actual evolution of LISP.
</description>
<pubDate>Mon, 01 May 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6094</guid>
<dc:date>1978-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory of Human Stereo Vision</title>
<link>https://hdl.handle.net/1721.1/6093</link>
<description>A Theory of Human Stereo Vision
Marr, D.; Poggio, Tomaso A
An algorithm is proposed for solving the  stereoscopic matching problem. The  algorithm consists of five steps: 1.) Each  image is filtered with bar masks of four sizes  that vary with eccentricity; the equivalent filters  are about one octave wide. 2.) Zero-crossings  of the mask values are localized, and  positions that correspond to terminations are  found. 3.) For each mask size, matching takes  place between pairs of zero crossings or  terminations of the same sign in the two  images, for a range of disparities up to about  the width of the mask's central region. 4.)  Wide masks can control vergence  movements, thus causing small masks to  come into correspondence. 5.) When a  correspondence is achieved, it is written into a dynamic buffer, called the 2-1/2-D  sketch. It is shown that this proposal provides  a theoretical framework for most existing  psychophysical and neurophysiological data  about stereopsis. Several critical experimental  predictions are also made, for instance about  the size of Panum's area under various  conditions. The results of such experiments  would tell us whether, for example,  cooperativity is necessary for the fusion  process.
</description>
<pubDate>Tue, 01 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6093</guid>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Levels of Complexity in Discourse for Reference Disambiguation and Speech Act Interpretation</title>
<link>https://hdl.handle.net/1721.1/6092</link>
<description>Levels of Complexity in Discourse for Reference Disambiguation and Speech Act Interpretation
Bullwinkle, Candace
This paper presents a discussion of means  of describing the discourse and its  components which makes speech act  interpretation and reference disambiguation  possible with minimal search of the  knowledge in the database. A portion of this  paper will consider how a frames  representation of sentences and common  sense knowledge provides a mechanism for  representing the postulated discourse  components. Finally some discussion of the  use of the discourse model and of frames in a  discourse understanding program for a  personal assistant will be presented.
</description>
<pubDate>Sun, 01 May 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6092</guid>
<dc:date>1977-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>LAMBDA: The Ultimate Declarative</title>
<link>https://hdl.handle.net/1721.1/6091</link>
<description>LAMBDA: The Ultimate Declarative
Steele, Guy Lewis, Jr.
In this paper, a sequel to "LAMBDA: The Ultimate Imperative", a new view of LAMBDA as a renaming operator is presented and contrasted with the usual functional view taken by LISP. This view, combined with the view of function invocation as a kind of generalized GOTO, leads to several new insights into the nature of the LISP evaluation mechanism and the symmetry between form and function, evaluation and application, and control and environment. It also complements Hewitt's actors theory nicely, explaining the intent of environment manipulation as cleanly, generally, and intuitively as the actors theory explains control structures. The relationship between functional and continuation-passing styles of programming is also clarified. This view of LAMBDA leads directly to a number of specific techniques for use by an optimizing compiler: 1.) Temporary locations and user-declared variables may be allocated in a uniform manner. 2.) Procedurally defined data structures may compile into code as good as would be expected for data defined by the more usual declarative means. 3.) Lambda-calculus-theoretic models of such constructs as GOTO, DO loops, call-by-name, etc. may be used directly as macros, the expansion of which may then compile into code as good as that produced by compilers which are designed especially to handle GOTO, DO, etc. The necessary characteristics of such a compiler designed according to this philosophy are discussed. Such a compiler is to be built in the near future as a testing ground for these ideas.
</description>
<pubDate>Mon, 01 Nov 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6091</guid>
<dc:date>1976-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Arithmetic Shifting Considered Harmful</title>
<link>https://hdl.handle.net/1721.1/6090</link>
<description>Arithmetic Shifting Considered Harmful
Steele, Guy Lewis, Jr.
For more than a decade there has been great confusion over the semantics of the standard "arithmetic right shift" instruction. This confusion particularly afflicts authors of computer reference handbooks and of optimizing compilers. The fact that shifting is not always equivalent to division has been rediscovered over and over again over the years, but has never been publicized. This paper quotes a large number of sources to prove the widespread extent of this confusion, and then proceeds to a short discussion of the problem itself and what to do about it.
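The core confusion can be demonstrated in a few lines. In this Python sketch, >> is an arithmetic right shift (it rounds toward minus infinity), while trunc_div is a hypothetical helper of ours mimicking hardware divide instructions that truncate toward zero:

```python
def trunc_div(a, b):
    """Division truncating toward zero, as most machine divide instructions do."""
    q = abs(a) // abs(b)
    return -q if (a < 0) != (b < 0) else q

# For negative operands the two disagree: shifting is not division.
assert (-7 >> 1) == -4                    # arithmetic shift floors toward -infinity
assert trunc_div(-7, 2) == -3             # hardware-style divide truncates toward zero
assert (6 >> 1) == trunc_div(6, 2) == 3   # nonnegative case: they agree
```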
</description>
<pubDate>Wed, 01 Sep 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6090</guid>
<dc:date>1976-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Framework for Representing Knowledge</title>
<link>https://hdl.handle.net/1721.1/6089</link>
<description>A Framework for Representing Knowledge
Minsky, Marvin
This is a partial theory of thinking, combining a number of classical and modern concepts from psychology, linguistics, and AI. Whenever one encounters a new situation (or makes a substantial change in one's viewpoint) he selects from memory a structure called a frame, a remembered framework to be adapted to fit reality by changing details as necessary. A frame is a data-structure for representing a stereotyped situation, like being in a certain kind of living room, or going to a child's birthday party. Attached to each frame are several kinds of information. Some of this information is about how to use the frame. Some is about what one can expect to happen next. Some is about what to do if these expectations are not confirmed. The "top levels" of a frame are fixed, and represent things that are always true about the supposed situation. The lower levels have many "slots" that must be filled by specific instances or data. Collections of related frames are linked together into frame-systems. The effects of important actions are mirrored by transformations between the frames of a system. These are used to make certain kinds of calculations economical, to represent changes of emphasis and attention, and to account for the effectiveness of "imagery". In vision, the different frames of a system describe the scene from different viewpoints, and the transformations between one frame and another represent the effects of moving from place to place. Other kinds of frame-systems can represent actions, cause-effect relations, or changes in conceptual viewpoint. The paper applies the frame-system idea also to problems of linguistic understanding: memory, acquisition and retrieval of knowledge, and a variety of ways to reason by analogy and jump to conclusions based on partial similarity matching.
</description>
<pubDate>Sat, 01 Jun 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6089</guid>
<dc:date>1974-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Acceleration of Series</title>
<link>https://hdl.handle.net/1721.1/6088</link>
<description>Acceleration of Series
Gosper, R.W.
The rate of convergence of infinite series can be accelerated by a suitable splitting of each term into two parts and then combining the second part of the n-th term with the first part of the (n+1)-th term to get a new series, leaving the first part of the first term as an "orphan". Repeating this process an infinite number of times, the series will often approach zero, and we obtain the series of orphans, which may converge faster than the original series. Heuristics for determining the splits are given. Various mathematical constants, originally defined as series having a term ratio which approaches 1, are accelerated into series having a term ratio less than 1. This is done with the constants of Euler and Catalan. The series for pi/4 = arctan 1 is transformed into a variety of series, among which is one having a term ratio of 1/27 and another having a term ratio of 54/3125. A series for 1/pi is found which has a term ratio of 1/64 and each term of which is an integer divided by a power of 2, thus making it easy to evaluate the sum in binary arithmetic. We express zeta(3) in terms of pi^3 and a series having a term ratio of 1/16. Various hypergeometric function identities are found, as well as a series for (arcsin y)^2 curiously related to a series for y arcsin y. Convergence can also be accelerated for finite sums, as is shown for the harmonic numbers. The sum of the reciprocals of the Fibonacci numbers has been expressed as a series having the convergence rate of a theta function. Finally, it is shown that a series whose n-th term ratio is (n+p)(n+q)/(n+r)(n+s), where p, q, r, s are integers, is equal to c + d pi^2, where c and d are rational.
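A classical acceleration in the same spirit, shown here only because it is compact, is Euler's transformation of an alternating series; this is not the memo's own term-splitting technique. A Python sketch:

```python
from math import comb, pi

def euler_transform(a, terms):
    """Accelerate the alternating series sum((-1)^n * a(n)) via Euler's
    transformation: sum_k (Delta^k a_0) / 2^(k+1), where
    Delta^k a_0 = sum_m (-1)^m * C(k, m) * a(m)."""
    total = 0.0
    for k in range(terms):
        delta = sum((-1) ** m * comb(k, m) * a(m) for m in range(k + 1))
        total += delta / 2 ** (k + 1)
    return total

# Leibniz's pi/4 = 1 - 1/3 + 1/5 - ... has term ratio approaching 1;
# the transformed series converges geometrically (ratio about 1/2).
approx = euler_transform(lambda n: 1 / (2 * n + 1), 40)
```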
</description>
<pubDate>Fri, 01 Mar 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6088</guid>
<dc:date>1974-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence Progress Report</title>
<link>https://hdl.handle.net/1721.1/6087</link>
<description>Artificial Intelligence Progress Report
Minsky, Marvin; Papert, Seymour A.
Research at the Laboratory in vision,  language, and other problems of intelligence.  This report is an attempt to combine a  technical progress report with an exposition of our point of view about certain  problems in the Theory of Intelligence.
</description>
<pubDate>Sat, 01 Jan 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6087</guid>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>HAKMEM</title>
<link>https://hdl.handle.net/1721.1/6086</link>
<description>HAKMEM
Beeler, M.; Gosper, R.W.; Schroeppel, R.
Here is some little-known data which may be of interest to computer hackers. The items and examples are so sketchy that to decipher them may require more sincerity and curiosity than a non-hacker can muster. Doubtless, little of this is new, but nowadays it's hard to tell. So we must be content to give you an insight, or save you some cycles, and to welcome further contributions of items, new or used.
The original source, drafts, and related files from 1972-1985 that were used in the creation of this memo are available as a zip package accessible via the “source and related files” link under Additional downloads. See the README file within the package for more details.
</description>
<pubDate>Tue, 01 Feb 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6086</guid>
<dc:date>1972-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>ITS 1.4 Reference Manual</title>
<link>https://hdl.handle.net/1721.1/6085</link>
<description>ITS 1.4 Reference Manual
This reference manual is intended for those who have some knowledge of PDP-6 machine language and are either interested in the ITS monitor for its own sake, or who wish to write machine language programs to run under it. It should be remembered that the Project MAC, AI Group, PDP-6 installation is undergoing continuous software and hardware developments. Please direct all corrections, additions, or comments to the author.
</description>
<pubDate>Sat, 01 Jun 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6085</guid>
<dc:date>1968-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies</title>
<link>https://hdl.handle.net/1721.1/6084</link>
<description>The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies
Papert, Seymour A.
In December 1965 a paper by Hubert Dreyfus  revived the old game of generating curious  arguments for and against Artificial  Intelligence. Dreyfus hit top form in September  1967 with an explanation in the Review of  Metaphysics of the philosophically interesting  difficulties encountered in constructing robots.  The best of these is that a mechanical arm  controlled by a digital computer could not  reasonably be expected to move fast enough  to play ping-pong.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6084</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Fast Parsing Scheme for Hand-Printed Mathematical Expressions</title>
<link>https://hdl.handle.net/1721.1/6083</link>
<description>A Fast Parsing Scheme for Hand-Printed Mathematical Expressions
Martin, William A.
A set of one-line text-book-style mathematical expressions is defined by a context free grammar. This grammar generates strings which describe the expressions in terms of mathematical symbols and some simple positional operators, such as vertical concatenation. The grammar rules are processed to abstract information used to drive the parsing scheme. This has been called syntax-controlled as opposed to syntax-directed analysis. The parsing scheme consists of two operations. First, the X-Y plane is searched in such a way that the mathematical characters are picked up in a unique order. Then, the resulting character string is parsed using a precedence algorithm with certain modifications for special cases. The search of the X-Y plane is directed by the particular characters encountered.
</description>
<pubDate>Thu, 19 Oct 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6083</guid>
<dc:date>1967-10-19T00:00:00Z</dc:date>
</item>
<item>
<title>An Algorithm for Bootstrapping Communications</title>
<link>https://hdl.handle.net/1721.1/6082</link>
<description>An Algorithm for Bootstrapping Communications
Beal, Jacob
I present an algorithm which allows two agents to generate a simple language based only on observations of a shared environment. Vocabulary and roles for the language are learned in linear time. Communication is robust and degrades gradually as complexity increases. Dissimilar modes of experience will lead to a shared kernel vocabulary.
</description>
<pubDate>Mon, 13 Aug 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6082</guid>
<dc:date>2001-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Hash-Coding Functions of a Complex Variable</title>
<link>https://hdl.handle.net/1721.1/6081</link>
<description>Hash-Coding Functions of a Complex Variable
Martin, William A.
A common operation in non-numerical analysis is the comparison of symbolic mathematical expressions. Often equivalence under the algebraic and trigonometric relations can be determined with high probability by hash-coding the expressions using finite field arithmetic and then comparing the resulting hash-code numbers. The use of this scheme in a program for algebraic simplification is discussed.
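The flavor of the scheme can be sketched as follows; the expression encoding and the choice of modulus are ours for illustration, not Martin's actual program:

```python
import random

P = 2_147_483_647  # a Mersenne prime serving as the finite-field modulus

def hash_expr(expr, env):
    """Hash a symbolic expression by evaluating it at random values mod P.
    expr is an int, a variable name, or ('+' | '*', left, right)."""
    if isinstance(expr, int):
        return expr % P
    if isinstance(expr, str):
        return env[expr]
    op, left, right = expr
    a, b = hash_expr(left, env), hash_expr(right, env)
    return (a + b) % P if op == '+' else (a * b) % P

random.seed(0)
env = {'x': random.randrange(P), 'y': random.randrange(P)}
# (x + y)^2 and x^2 + 2xy + y^2 are algebraically equal, so they hash equal.
e1 = ('*', ('+', 'x', 'y'), ('+', 'x', 'y'))
e2 = ('+', ('*', 'x', 'x'), ('+', ('*', 2, ('*', 'x', 'y')), ('*', 'y', 'y')))
assert hash_expr(e1, env) == hash_expr(e2, env)
```

Distinct expressions collide only with probability on the order of 1/P per random evaluation, which is the sense in which equivalence is determined "with high probability".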
</description>
<pubDate>Thu, 25 Jun 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6081</guid>
<dc:date>1964-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>A LISP Garbage Collector Algorithm Using Serial Secondary Storage</title>
<link>https://hdl.handle.net/1721.1/6080</link>
<description>A LISP Garbage Collector Algorithm Using Serial Secondary Storage
Minsky, M.L.
This paper presents an algorithm for reclaiming unused free storage memory cells in LISP. It depends on availability of a fast secondary storage device, or a large block of available temporary storage. For this price, we get: 1.) Packing of free-storage into a solidly packed block. 2.) Smooth packing of arbitrary linear blocks and arrays. 3.) The collector will handle arbitrarily complex re-entrant list structure with no introduction of spurious copies. 4.) The algorithm is quite efficient; the marking pass visits words at most twice and usually once, and the loading pass is linear. 5.) The system is easily modified to allow for increase in size of already fixed consecutive blocks, provided one can afford to initiate a collection pass or use a modified array while waiting for such a pass to occur.
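A compacting copy of the kind described, with sharing and cycles preserved by a forwarding table, can be sketched in Python; the dict-based heap encoding (string atoms, integer cell addresses) is an illustrative stand-in for the memo's secondary-storage scheme:

```python
def compact(roots, heap):
    """Copy all cells reachable from roots into a freshly packed heap.
    heap maps integer addresses to (car, cdr) pairs; atoms are strings.
    A forwarding table preserves sharing and re-entrant (cyclic) structure
    without spurious copies."""
    forward = {}                              # old address -> new address
    new_heap = []

    def copy(ref):
        if not isinstance(ref, int):          # an atom, copied as-is
            return ref
        if ref in forward:                    # already moved: reuse, don't duplicate
            return forward[ref]
        new = len(new_heap)
        forward[ref] = new
        new_heap.append(None)                 # reserve the slot before recursing (cycles)
        car, cdr = heap[ref]
        new_heap[new] = (copy(car), copy(cdr))
        return new

    new_roots = [copy(r) for r in roots]
    return new_roots, new_heap

# A two-cell cycle plus one garbage cell: the copy is solidly packed, cycle intact.
heap = {0: ('A', 2), 2: ('B', 0), 7: ('junk', 'junk')}
roots, packed = compact([0], heap)
```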
</description>
<pubDate>Fri, 27 Dec 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6080</guid>
<dc:date>1963-12-27T00:00:00Z</dc:date>
</item>
<item>
<title>Motion Estimation from Disparity Images</title>
<link>https://hdl.handle.net/1721.1/6079</link>
<description>Motion Estimation from Disparity Images
Demirdjian, D.; Darrell, T.
A new method for 3D rigid motion estimation from stereo is proposed in this paper. The appealing feature of this method is that it directly uses the disparity images obtained from stereo matching. We assume that the stereo rig has parallel cameras and show, in that case, the geometric and topological properties of the disparity images. Then we introduce a rigid transformation (called d-motion) that maps two disparity images of a rigidly moving object. We show how it is related to the Euclidean rigid motion and a motion estimation algorithm is derived. We show with experiments that our approach is simple and more accurate than standard approaches.
</description>
<pubDate>Mon, 07 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6079</guid>
<dc:date>2001-05-07T00:00:00Z</dc:date>
</item>
<item>
<title>Reducing Drift in Parametric Motion Tracking</title>
<link>https://hdl.handle.net/1721.1/6078</link>
<description>Reducing Drift in Parametric Motion Tracking
Rahimi, A.; Morency, L.-P.; Darrell, T.
We develop a class of differential motion trackers that automatically stabilize when in finite domains. Most differential trackers compute motion only relative to one previous frame, accumulating errors indefinitely. We estimate pose changes between a set of past frames, and develop a probabilistic framework for integrating those estimates. We use an approximation to the posterior distribution of pose changes as an uncertainty model for parametric motion in order to help arbitrate the use of multiple base frames. We demonstrate this framework on a simple 2D translational tracker and a 3D, 6-degree-of-freedom tracker.
</description>
<pubDate>Mon, 07 May 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6078</guid>
<dc:date>2001-05-07T00:00:00Z</dc:date>
</item>
<item>
<title>Certified Computation</title>
<link>https://hdl.handle.net/1721.1/6077</link>
<description>Certified Computation
Arkoudas, Konstantine
This paper introduces the notion of certified computation. A certified computation not only produces a result r, but also a correctness certificate, which is a formal proof that r is correct. This can greatly enhance the credibility of the result: if we trust the axioms and inference rules that are used in the certificate, then we can be assured that r is correct. In effect, we obtain a trust reduction: we no longer have to trust the entire computation; we only have to trust the certificate. Typically, the reasoning used in the certificate is much simpler and easier to trust than the entire computation. Certified computation has two main applications: as a software engineering discipline, it can be used to increase the reliability of our code; and as a framework for cooperative computation, it can be used whenever a code consumer executes an algorithm obtained from an untrusted agent and needs to be convinced that the generated results are correct. We propose DPLs (Denotational Proof Languages) as a uniform platform for certified computation. DPLs enforce a sharp separation between logic and control and offer versatile mechanisms for constructing certificates. We use Athena as a concrete DPL to illustrate our ideas, and we present two examples of certified computation, giving full working code in both cases.
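The trust-reduction idea can be illustrated with a small Python example: extended Euclid emitting a Bezout certificate. This is an illustration of the concept only, not Athena or DPL code:

```python
def certified_gcd(a, b):
    """Return (g, certificate): g = gcd(a, b) and certificate = (x, y)
    with a*x + b*y == g. The checker need only trust the certificate,
    not the loop that produced it."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r        # standard extended-Euclid updates
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, (old_x, old_y)

def check(a, b, g, cert):
    """The simple, easily trusted verification step."""
    x, y = cert
    return g == a * x + b * y and a % g == 0 and b % g == 0

g, cert = certified_gcd(252, 105)
assert g == 21 and check(252, 105, g, cert)
```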
</description>
<pubDate>Mon, 30 Apr 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6077</guid>
<dc:date>2001-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>Exploration in Gradient-Based Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/6076</link>
<description>Exploration in Gradient-Based Reinforcement Learning
Meuleau, Nicolas; Peshkin, Leonid; Kim, Kee-Eung
Gradient-based policy search is an alternative to value-function-based methods for reinforcement learning in non-Markovian domains. One apparent drawback of policy search is its requirement that all actions be 'on-policy'; that is, that there be no explicit exploration. In this paper, we provide a method for using importance sampling to allow any well-behaved directed exploration policy during learning. We show both theoretically and experimentally that using this method can achieve dramatic performance improvements.
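A minimal numeric illustration of the importance-sampling correction, using a toy two-armed bandit of our own invention rather than the paper's experiments:

```python
import random

random.seed(1)
reward = [1.0, 0.0]      # payoffs of the two arms (toy setup)
target = [0.9, 0.1]      # policy whose expected reward we want to estimate
explore = [0.5, 0.5]     # directed-exploration policy we actually sample from

n = 100_000
est = 0.0
for _ in range(n):
    arm = 0 if random.random() < explore[0] else 1
    w = target[arm] / explore[arm]   # importance weight pi(a)/q(a)
    est += w * reward[arm] / n

# est approximates the on-policy value 0.9 despite purely off-policy sampling
```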
</description>
<pubDate>Tue, 03 Apr 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6076</guid>
<dc:date>2001-04-03T00:00:00Z</dc:date>
</item>
<item>
<title>Plan-view Trajectory Estimation with Dense Stereo Background Models</title>
<link>https://hdl.handle.net/1721.1/6075</link>
<description>Plan-view Trajectory Estimation with Dense Stereo Background Models
Darrell, T.; Demirdjian, D.; Checka, N.; Felzenszwalb, P.
In a known environment, objects may be tracked in multiple views using a set of background models. Stereo-based models can be illumination-invariant, but often have undefined values which inevitably lead to foreground classification errors. We derive dense stereo models for object tracking using long-term, extended dynamic-range imagery, and by detecting and interpolating uniform but unoccluded planar regions. Foreground points are detected quickly in new images using pruned disparity search. We adopt a 'late-segmentation' strategy, using an integrated plan-view density representation. Foreground points are segmented into object regions only when a trajectory is finally estimated, using a dynamic programming-based method. Object entry and exit are optimally determined and are not restricted to special spatial zones.
</description>
<pubDate>Thu, 01 Feb 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6075</guid>
<dc:date>2001-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recursive Functions of Symbolic Expressions and Their Computation</title>
<link>https://hdl.handle.net/1721.1/6074</link>
<description>Recursive Functions of Symbolic Expressions and Their Computation
McCarthy, J.
This memorandum is a continuation of Memo 8.
</description>
<pubDate>Mon, 30 Mar 1959 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6074</guid>
<dc:date>1959-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>MATHSCOPE Part I: A Proposal for a Mathematical Manipulation-Display System</title>
<link>https://hdl.handle.net/1721.1/6073</link>
<description>MATHSCOPE Part I: A Proposal for a Mathematical Manipulation-Display System
Minsky, Marvin
Mathscope: A compiler for two-dimensional mathematical picture syntax. Mathscope is a proposed program for displaying publication-quality mathematical expressions given symbolic (list-structure) representations of the expressions. The goal is to produce 'portraits' of expressions that are sufficiently close to conventional typographic conventions that mathematicians will be able to work with them without much effort, so that they do not have to learn much in the way of a new language, so far as the representation of mathematical formulae is concerned.
</description>
<pubDate>Fri, 01 Nov 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6073</guid>
<dc:date>1963-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recent Improvements in DDT</title>
<link>https://hdl.handle.net/1721.1/6072</link>
<description>Recent Improvements in DDT
Edwards, D.J.; Minsky, M.L.
This paper will report new developments and recent improvements to DDT. "Window DDT" now will remember undefined symbols and define them on a later command. Using sequence breaks, it can change the contents of memory while a program is running, and the contents of memory can be displayed in symbolic form on the scope.
</description>
<pubDate>Fri, 01 Nov 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6072</guid>
<dc:date>1963-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Introduction to the Calculus of Knowledge</title>
<link>https://hdl.handle.net/1721.1/6071</link>
<description>Introduction to the Calculus of Knowledge
Raphael, Bertram
This paper deals with the "Calculus of Knowledge", an extension of the propositional calculus in which one may reason about what other people know. Semantic and Syntactic systems are developed, certain theorems are proven, and a formal solution in the system of a well-known reasoning problem is presented.
</description>
<pubDate>Wed, 01 Nov 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6071</guid>
<dc:date>1961-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>LISP Error Stops as of May 10, 1961</title>
<link>https://hdl.handle.net/1721.1/6070</link>
<description>LISP Error Stops as of May 10, 1961
author, No
no abstract
</description>
<pubDate>Mon, 01 May 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6070</guid>
<dc:date>1961-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Arithmetic in LISP 1.5</title>
<link>https://hdl.handle.net/1721.1/6069</link>
<description>Arithmetic in LISP 1.5
Levin, Michael
As of the present, the following parts of LISP 1.5 are working. This is an excerpt from the forthcoming LISP 1.5 Programmer's Manual.
</description>
<pubDate>Sat, 01 Apr 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6069</guid>
<dc:date>1961-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Proofchecker</title>
<link>https://hdl.handle.net/1721.1/6068</link>
<description>The Proofchecker
Abrahams, Paul
The Proofchecker is a heuristically oriented computer program for checking mathematical proofs, with the checking of textbook proofs as its ultimate goal. It constructs, from each proof step given to it, a corresponding sequence of formal steps, if possible. It records the current state of the proof in the form of what it is sufficient to prove. There are two logical rules of inference: modus ponens and insertion (if it is sufficient to prove B, and A is the theorem, then it is sufficient to prove A implies B). The permissible formal steps include these rules of inference as well as provision for handling definitions, lemmas, calculations, and reversion to previous states. As of now, most of the formalisms are programmed and partially debugged, but the heuristic aspects have yet to be programmed.
</description>
<pubDate>Sun, 01 Jan 1961 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6068</guid>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>DERIVATOR I: A Program for Visual Inspection of Solutions to First-Order Non-Linear Differential Equations</title>
<link>https://hdl.handle.net/1721.1/6067</link>
<description>DERIVATOR I: A Program for Visual Inspection of Solutions to First-Order Non-Linear Differential Equations
Minsky, Marvin
Derivator is a PDP-1 program for examining the solutions to differential equations by inspection of a visual display of trajectories. Because fixed-point arithmetic is used (in order to maintain visual display speeds), Derivator must be regarded as a qualitative tool. It is subject to truncation error in the trajectory-following program, and round-off error due to 'underflow' in the function-definition programs for dy and dx. Still, it appears to be very suitable for studying the topology of solutions around singularities, etc. The display shows the solution curves ('characteristics') in the x-y plane. They are generated parametrically.
</description>
<pubDate>Sun, 01 Dec 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6067</guid>
<dc:date>1963-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Recognition of Curved Objects</title>
<link>https://hdl.handle.net/1721.1/6066</link>
<description>On the Recognition of Curved Objects
Grimson, W. Eric L.
Determining the identity and pose of occluded objects from noisy data is a critical part of a system's intelligent interaction with an unstructured environment. Previous work has shown that local measurements of the position and surface orientation of small patches of an object's surface may be used in a constrained search process to solve this problem for the case of rigid polygonal objects using two-dimensional sensory data, or rigid polyhedral objects using three-dimensional data. This note extends the recognition system to deal with the problem of recognizing and locating curved objects. The extension is done in two dimensions, and applies to the recognition of two-dimensional objects from two-dimensional data, or to the recognition of three-dimensional objects in stable positions from two-dimensional data.
</description>
<pubDate>Wed, 01 Jul 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6066</guid>
<dc:date>1987-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Recognition of Parameterized Objects</title>
<link>https://hdl.handle.net/1721.1/6065</link>
<description>On the Recognition of Parameterized Objects
Grimson, W. Eric L.
Determining the identity and pose of occluded  objects from noisy data is a critical step in  interacting intelligently with an unstructured  environment. Previous work has shown that  local measurements of position and surface  orientation may be used in a constrained  search process to solve this problem, for the  case of rigid objects, either two-dimensional  or three-dimensional. This paper considers  the more general problem of recognizing and  locating objects that can vary in parameterized  ways. We consider objects with rotational,  translational, or scaling degrees of freedom,  and objects that undergo stretching  transformations. We show that the  constrained search method can be extended  to handle the recognition and localization of  such generalized classes of object families.
</description>
<pubDate>Thu, 01 Oct 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6065</guid>
<dc:date>1987-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lisp: A Language for Stratified Design</title>
<link>https://hdl.handle.net/1721.1/6064</link>
<description>Lisp: A Language for Stratified Design
Abelson, Harold; Sussman, Gerald Jay
We exhibit programs that illustrate the power  of Lisp as a language for expressing the  design and organization of computational  systems. The examples are chosen to  highlight the importance of abstraction in  program design and to draw attention to the  use of procedures to express abstractions.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6064</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy Functions for Early Vision and Analog Networks</title>
<link>https://hdl.handle.net/1721.1/6063</link>
<description>Energy Functions for Early Vision and Analog Networks
Yuille, Alan
This paper describes attempts to model the modules of early vision in terms of minimizing energy functions, in particular energy functions allowing discontinuities in the solution. It examines the success of using Hopfield-style analog networks for solving such problems. Finally it discusses the limitations of the energy function approach.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6063</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rigidity and Smoothness of Motion</title>
<link>https://hdl.handle.net/1721.1/6062</link>
<description>Rigidity and Smoothness of Motion
Yuille, Alan; Ullman, Shimon
Many theories of structure from motion divide the process into two parts which are solved using different assumptions. Smoothness of the velocity field is often assumed to solve the motion correspondence problem, and then rigidity is used to recover the 3D structure. We prove results showing that, in a statistical sense, smoothness of the velocity field follows from rigidity of the motion.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6062</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Relative Orientation</title>
<link>https://hdl.handle.net/1721.1/6061</link>
<description>Relative Orientation
Horn, Berthold K.P.
Before corresponding points in images taken  with two cameras can be used to recover  distances to objects in a scene, one has to  determine the position and orientation of one  camera relative to the other. This is the  classic photogrammetric problem of relative  orientation, central to the  interpretation of binocular stereo  information. Described here is a particularly  simple iterative scheme for recovering  relative orientation that, unlike existing  methods, does not require a good initial  guess for the baseline and the rotation.
</description>
<pubDate>Tue, 01 Sep 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6061</guid>
<dc:date>1987-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Abstraction in Numerical Methods</title>
<link>https://hdl.handle.net/1721.1/6060</link>
<description>Abstraction in Numerical Methods
Halfant, Matthew; Sussman, Gerald Jay
We illustrate how the liberal use of high-order  procedural abstractions and infinite streams  helps us to express some of the vocabulary  and methods of numerical analysis. We  develop a software toolbox encapsulating the  technique of Richardson extrapolation, and  we apply these tools to the problems of  numerical integration and differentiation. By  separating the idea of Richardson  extrapolation from its use in particular  circumstances, we indicate how numerical  programs can be written that exhibit the  structure of the ideas from which they are  formed.
</description>
<pubDate>Thu, 01 Oct 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6060</guid>
<dc:date>1987-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>UNITRAN: An Interlingual Machine Translation System</title>
<link>https://hdl.handle.net/1721.1/6059</link>
<description>UNITRAN: An Interlingual Machine Translation System
Dorr, Bonnie Jean
This report describes the UNITRAN (UNIversal TRANslator) system, an implementation of a principle-based approach to natural language translation. The system is "interlingual", i.e., the model is based on universal principles that hold across all languages; the distinctions among languages are handled by settings of parameters associated with the universal principles. Interaction effects of linguistic principles are handled by the system so that the programmer does not need to specifically spell out the details of rule applications. Only a small set of principles covers all languages; thus, the unmanageable grammar size of alternative approaches is no longer a problem.
</description>
<pubDate>Tue, 01 Dec 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6059</guid>
<dc:date>1987-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expressing Mathematical Subroutines Constructively</title>
<link>https://hdl.handle.net/1721.1/6058</link>
<description>Expressing Mathematical Subroutines Constructively
Roylance, Gerald
The typical subroutines that compute $\\sin(x)$  and $\\exp(x)$ bear little resemblance to our  mathematical knowledge of these functions:  they are composed of concrete arithmetic  expressions that include many mysterious  numerical constants. Instead of programming  these subroutines conventionally, we can  express their construction using symbolic  ideas such as periodicity and Taylor series.  Such an approach has many advantages: the  code is closer to the mathematical basis of  the function, less vulnerable to errors, and is  trivially adaptable to various precisions.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6058</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-Rigid Motion and Regge Calculus</title>
<link>https://hdl.handle.net/1721.1/6057</link>
<description>Non-Rigid Motion and Regge Calculus
Jasinschi, Rado; Yuille, Alan
We study the problem of recovering the structure from motion of figures which are allowed to perform a controlled non-rigid motion. We use Regge Calculus to approximate a general surface by a net of triangles. The non-rigid flexing motion we deal with corresponds to keeping the triangles rigid and allowing bending only at the joins between triangles. We show that depth information can be obtained by using a modified version of the Incremental Rigidity Scheme devised by Ullman (1984). We modify this scheme to allow for flexing motion and call our version the Incremental Semirigidity Scheme.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6057</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inspection Methods in Programming: Cliches and Plans</title>
<link>https://hdl.handle.net/1721.1/6056</link>
<description>Inspection Methods in Programming: Cliches and Plans
Rich, Charles
Inspection methods are a kind of engineering problem solving based on the recognition and use of standard forms or cliches. Examples are given of program analysis, program synthesis and program validation by inspection. A formalism, called the Plan Calculus, is defined and used to represent programming cliches in a convenient, canonical, and programming-language independent fashion.
</description>
<pubDate>Tue, 01 Dec 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6056</guid>
<dc:date>1987-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task-Level Robot Learning: Ball Throwing</title>
<link>https://hdl.handle.net/1721.1/6055</link>
<description>Task-Level Robot Learning: Ball Throwing
Aboaf, Eric W.; Atkeson, Christopher G.; Reinkensmeyer, David J.
We are investigating how to program robots  so that they learn tasks from practice. One  method, task-level learning,  provides advantages over simply perfecting  models of the robot's lower level systems.  Task-level learning can compensate for the  structural modeling errors of the robot's lower  level control systems and can speed up the  learning process by reducing the degrees of  freedom of the models to be learned. We  demonstrate two general  learning procedures---fixed-model learning  and refined-model learning---on a ball-throwing robot system.
</description>
<pubDate>Tue, 01 Dec 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6055</guid>
<dc:date>1987-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Programmer's Apprentice Project: A Research Overview</title>
<link>https://hdl.handle.net/1721.1/6054</link>
<description>The Programmer's Apprentice Project: A Research Overview
Rich, Charles; Waters, Richard C.
The goal of the Programmer's Apprentice project is to develop a theory of how expert programmers analyze, synthesize, modify, explain, specify, verify, and document programs. This research goal overlaps both artificial intelligence and software engineering. From the viewpoint of artificial intelligence, we have chosen programming as a domain in which to study fundamental issues of knowledge representation and reasoning. From the viewpoint of software engineering, we seek to automate the programming process by applying techniques from artificial intelligence.
</description>
<pubDate>Sun, 01 Nov 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6054</guid>
<dc:date>1987-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Depth and Shape from Stereo, in Agreement with Psychophysical Evidence</title>
<link>https://hdl.handle.net/1721.1/6053</link>
<description>Qualitative Depth and Shape from Stereo, in Agreement with Psychophysical Evidence
Weinshall, Daphna
Obtaining exact depth from binocular  disparities is hard if camera calibration is  needed. We will show that qualitative depth  information can be obtained from stereo  disparities with almost no computations and  with no prior knowledge (or computation) of  camera parameters. We derive two  expressions that order all matched points in  the images in two distinct depth-consistent  ways from image coordinates only. One is a  tilt-related order $\\lambda$, the other is a  depth-related order $\\chi$. Using $\\lambda$  demonstrates some anomalies and unusual  characteristics that have been observed in  psychophysical experiments. The same  approach is applied to qualitatively estimate  changes in the curvature of a contour on the  surface of an object, with either $x$- or $y$-coordinate fixed.
</description>
<pubDate>Tue, 01 Dec 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6053</guid>
<dc:date>1987-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Software Structuring Principles for VLSI CAD</title>
<link>https://hdl.handle.net/1721.1/6052</link>
<description>Software Structuring Principles for VLSI CAD
Katzenelson, Jacob; Zippel, Richard
A frustrating aspect of the frequent changes to  large VLSI CAD systems is that so little of the  old available programs can be reused. It  takes too much time and effort to find the  reusable pieces and recast them for the new  use. Our thesis is that such systems can be  designed for reusability by designing the  software as layers of problem oriented  languages, which are implemented by  suitably extending a "base" language. We  illustrate this methodology with respect to  VLSI CAD programs and a particular  language layer: a language for handling  networks. We present two different  implementations. The first uses UNIX and  Enhanced C. The second approach uses  Common Lisp on a Lisp machine.
</description>
<pubDate>Tue, 01 Dec 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6052</guid>
<dc:date>1987-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>System Validation via Constraint Modeling</title>
<link>https://hdl.handle.net/1721.1/6051</link>
<description>System Validation via Constraint Modeling
Waters, Richard C.
Constraint modeling could be a very important  system validation method, because its  abilities are complementary to both testing  and code inspection. In particular, even  though the ability of constraint modeling to  find errors is limited by the simplifications  which are introduced when making a  constraint model, constraint modeling can  locate important classes of errors which are  caused by non-local faults (i.e., are hard to  find with code inspection) and manifest  themselves as failures only in unusual  situations (i.e., are hard to find with testing).
</description>
<pubDate>Mon, 01 Feb 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6051</guid>
<dc:date>1988-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-Based Robot Learning</title>
<link>https://hdl.handle.net/1721.1/6050</link>
<description>Model-Based Robot Learning
Atkeson, Christopher G.; Aboaf, Eric W.; McIntyre, Joseph; Reinkensmeyer, David J.
Models play an important role in learning from  practice. Models of a controlled system can be  used as learning operators to refine  commands on the basis of performance  errors. The examples used to demonstrate  this include positioning a limb at a visual  target and following a defined trajectory. Better  models lead to faster correction of command  errors, requiring less practice to attain a given  level of performance. The benefits of accurate  modeling are improved performance in all  aspects of control, while the risks of  inadequate modeling are poor learning  performance, or even degradation of  performance with practice.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6050</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Dexterity Measure for the Kinematic Control of Robot Manipulator with Redundancy</title>
<link>https://hdl.handle.net/1721.1/6049</link>
<description>A Dexterity Measure for the Kinematic Control of Robot Manipulator with Redundancy
Chang, Pyung H.
We have derived a new performance measure, the product of minors of the Jacobian matrix, that tells how far kinematically redundant manipulators are from singularity. It was demonstrated that previously used performance measures, namely the condition number and the manipulability measure, allowed the arm to change configurations, causing repeatability problems and discontinuity effects. The new measure, on the other hand, assures that the arm solution remains in the same configuration, thus effectively preventing these problems.
</description>
<pubDate>Mon, 01 Feb 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6049</guid>
<dc:date>1988-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preshaping Command Inputs to Reduce System Vibration</title>
<link>https://hdl.handle.net/1721.1/6048</link>
<description>Preshaping Command Inputs to Reduce System Vibration
Singer, Neil; Seering, Warren
A method is presented for generating shaped  command inputs which significantly reduce or  eliminate endpoint vibration. Desired system  inputs are altered so that the system  completes the requested move without  residual vibration. A short move time penalty  is incurred (on the order of one period of the  first mode of vibration). The preshaping  technique is robust under system parameter  uncertainty and may be applied to both open  and closed loop systems. The Draper  Laboratory's Space Shuttle Remote  Manipulator System simulator (DRS) is used  to evaluate the method. Results show a factor  of 25 reduction in endpoint residual vibration  for typical moves of the DRS.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6048</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Behavior-Based Arm Controller</title>
<link>https://hdl.handle.net/1721.1/6047</link>
<description>A Behavior-Based Arm Controller
Connell, Jonathan H.
In this paper we describe a working, implemented controller for a real, physical mobile robot arm. The controller is composed of a collection of 15 independent behaviors which run, in real time, on a set of 8 loosely coupled on-board 8-bit microprocessors. We describe how these behaviors cooperate to actually seek out and retrieve objects using local sensory data. We also discuss the methodology used to decompose this collection task and the types of spatial representation and reasoning used by the system.
</description>
<pubDate>Wed, 01 Jun 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6047</guid>
<dc:date>1988-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Construction of a 2D Scale-Space Image</title>
<link>https://hdl.handle.net/1721.1/6046</link>
<description>Symbolic Construction of a 2D Scale-Space Image
Saund, Eric
The shapes of naturally occurring objects characteristically involve spatial events occurring at many scales. This paper offers a symbolic approach to constructing a primitive shape description across scales for 2D binary (silhouette) shape images: grouping operations are performed over collections of tokens residing on a Scale-Space Blackboard. Two types of grouping operations are identified that, respectively: (1) aggregate edge primitives at one scale into edge primitives at a coarser scale and (2) group edge primitives into partial-region assertions, including curved-contours, primitive-corners, and bars. This approach avoids several drawbacks of numerical smoothing methods.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6046</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Real-Time Part Position Sensing</title>
<link>https://hdl.handle.net/1721.1/6045</link>
<description>Real-Time Part Position Sensing
Gordon, Steven J.; Seering, Warren P.
A light stripe vision system is used to  measure the location of polyhedral features of  parts from a single frame of video camera  output. Issues such as accuracy in locating  the line segments of intersection in the image  and combining redundant information from  multiple measurements and multiple sources  are addressed. In 2.5 seconds, a prototype  sensor was capable of locating a two inch  cube to an accuracy (one standard deviation)  of .002 inches (.055 mm) in translation and .1  degrees (.0015 radians) in rotation. When  integrated with a manipulator, the system was  capable of performing high precision  assembly tasks.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6045</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamical Systems and Motion Vision</title>
<link>https://hdl.handle.net/1721.1/6044</link>
<description>Dynamical Systems and Motion Vision
Heel, Joachim
In this paper we show how the theory of dynamical systems can be employed to solve problems in motion vision. In particular we develop algorithms for the recovery of dense depth maps and motion parameters using state space observers or filters. Four different dynamical models of the imaging situation are investigated and corresponding filters/observers derived. The most powerful of these algorithms recovers depth and motion of general nature using a brightness change constraint assumption. No feature-matching preprocessor is required.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6044</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computational Study of Vision</title>
<link>https://hdl.handle.net/1721.1/6043</link>
<description>The Computational Study of Vision
Hildreth, Ellen C.; Ullman, Shimon
The computational approach to the study of  vision inquires directly into the sort of  information processing needed to extract  important information from the changing  visual image---information such as the three-dimensional structure and movement of  objects in the scene, or the color and texture  of object surfaces. An important contribution  that computational studies have made is to  show how difficult vision is to perform, and  how complex are the processes needed to  perform visual tasks successfully. This article  reviews some computational studies of  vision, focusing on edge detection, binocular  stereo, motion analysis, intermediate vision,  and object recognition.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6043</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scheme86: A System for Interpreting Scheme</title>
<link>https://hdl.handle.net/1721.1/6042</link>
<description>Scheme86: A System for Interpreting Scheme
Berlin, Andrew A.; Wu, Henry M.
Scheme86 is a computer system designed to  interpret programs written in the Scheme  dialect of Lisp. A specialized architecture,  coupled with new techniques for optimizing  register management in the interpreter,  allows Scheme86 to execute interpreted  Scheme at a speed comparable to that of  compiled Lisp on conventional workstations.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6042</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Lexical Regularities in Designing Natural Language Systems</title>
<link>https://hdl.handle.net/1721.1/6041</link>
<description>Exploiting Lexical Regularities in Designing Natural Language Systems
Katz, Boris; Levin, Beth
This paper presents the lexical component of the START Question Answering system developed at the MIT Artificial Intelligence Laboratory. START is able to interpret correctly a wide range of semantic relationships associated with alternate expressions of the arguments of verbs. The design of the system takes advantage of the results of recent linguistic research into the structure of the lexicon, allowing START to attain a broader range of coverage than many existing systems.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6041</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Structure of the N-body Problem</title>
<link>https://hdl.handle.net/1721.1/6040</link>
<description>Computational Structure of the N-body Problem
Katzenelson, Jacob
This work considers the organization and performance of computations on parallel computers of tree algorithms for the N-body problem where the number of particles is on the order of a million. The N-body problem is formulated as a set of recursive equations based on a few elementary functions, which leads to a computational structure in the form of a pyramid-like graph, where each vertex is a process, and each arc a communication link. The pyramid is mapped to three different processor configurations: (1) a pyramid of processors corresponding to the process pyramid graph; (2) a hypercube of processors, e.g., a connection-machine like architecture; (3) a rather small array, e.g., $2 \\times 2 \\times 2$, of processors faster than the ones considered in (1) and (2) above. Simulations of this size can be performed on any of the three architectures in reasonable time.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6040</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Sensitivity of the Hough Transform for Object Recognition</title>
<link>https://hdl.handle.net/1721.1/6039</link>
<description>On the Sensitivity of the Hough Transform for Object Recognition
Grimson, W. Eric L.; Huttenlocher, David
A common method for finding an object's  pose is the generalized Hough transform,  which accumulates evidence for possible  coordinate transformations in a parameter  space and takes large clusters of similar  transformations as evidence of a correct  solution. We analyze this approach by deriving  theoretical bounds on the set of  transformations consistent with each data-model feature pairing, and by deriving  bounds on the likelihood of false peaks in the  parameter space, as a function of noise,  occlusion, and tessellation effects. We argue  that blithely applying such methods to  complex recognition tasks is a risky  proposition, as the probability of false  positives can be very high.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6039</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical Evidence that the Motion of Pluto is Chaotic</title>
<link>https://hdl.handle.net/1721.1/6038</link>
<description>Numerical Evidence that the Motion of Pluto is Chaotic
Sussman, Gerald Jay; Wisdom, Jack
The Digital Orrery has been used to perform  an integration of the motion of the outer  planets for 845 million years. This integration  indicates that the long-term motion of the  planet Pluto is chaotic. Nearby trajectories  diverge exponentially with an e-folding time of  only about 20 million years.
</description>
<pubDate>Fri, 01 Apr 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6038</guid>
<dc:date>1988-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Robot Flexibility for Endpoint Force Control</title>
<link>https://hdl.handle.net/1721.1/6037</link>
<description>Modeling Robot Flexibility for Endpoint Force Control
Eppinger, Steven D.; Seering, Warren P.
Dynamic models have been developed in an  attempt to match the response of a robot  arm. The experimental data show rigid-body  and five resonant modes. The frequency  response and pole-zero arrays for various  models of structural flexibility are compared  with the data to evaluate the characteristics of  the models, and to provide insight into the  nature of the flexibility in the robot. Certain  models are better able to depict  transmission flexibility while others  describe types of structural flexibility.
</description>
<pubDate>Sun, 01 May 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6037</guid>
<dc:date>1988-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Syntactic Closures</title>
<link>https://hdl.handle.net/1721.1/6036</link>
<description>Syntactic Closures
Bawden, Alan; Rees, Jonathan
In this paper we describe syntactic closures. Syntactic closures address the scoping problems that arise when writing macros. We discuss some issues raised by introducing syntactic closures into the macro expansion interface, and we compare syntactic closures with other approaches. Included is a complete implementation.
</description>
<pubDate>Wed, 01 Jun 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6036</guid>
<dc:date>1988-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of Series Expressions: Part I: User's Manual for the Series Macro Package</title>
<link>https://hdl.handle.net/1721.1/6035</link>
<description>Optimization of Series Expressions: Part I: User's Manual for the Series Macro Package
Waters, Richard C.
The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently. A Common Lisp macro package (OSS) has been implemented which supports a restricted class of series expressions, obviously synchronizable series expressions, which can be evaluated very efficiently by automatically converting them into loops. Using this macro package, programmers can obtain the advantages of expressing computations as series expressions without incurring any run-time overhead.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6035</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Performance of a Mechanical Design 'Compiler'</title>
<link>https://hdl.handle.net/1721.1/6034</link>
<description>The Performance of a Mechanical Design 'Compiler'
Ward, Allen C.; Seering, Warren
A mechanical design "compiler" has been developed which, given an appropriate schematic, specifications, and utility function for a mechanical design, returns catalog numbers for an optimal implementation. The compiler has been successfully tested on a variety of mechanical and hydraulic power transmission designs and a few temperature sensing designs. Times required have been at worst proportional to the logarithm of the number of possible combinations of catalog numbers.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6034</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligent Assistance for Program Recognition, Design, Optimization, and Debugging</title>
<link>https://hdl.handle.net/1721.1/6033</link>
<description>Intelligent Assistance for Program Recognition, Design, Optimization, and Debugging
Rich, Charles; Waters, Richard C.
A recognition assistant will help reconstruct the design of a program, given only its source code. A design assistant will assist a programmer by detecting errors and inconsistencies in his design choices and by automatically making many straightforward implementation decisions. An optimization assistant will help improve the performance of programs by identifying intermediate results that can be reused. A debugging assistant will aid in the detection, localization, and repair of errors in designs as well as completed programs.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6033</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Routing Statistics for Unqueued Banyan Networks</title>
<link>https://hdl.handle.net/1721.1/6032</link>
<description>Routing Statistics for Unqueued Banyan Networks
Knight, Thomas F., Jr.; Sobalvarro, Patrick G.
Banyan networks comprise a large class of networks that have been used for interconnection in large-scale multiprocessors and telephone switching systems. Regular variants of Banyan networks, such as delta and butterfly networks, have been used in multiprocessors such as the IBM RP3 and the BBN Butterfly. Analysis of the performance of Banyan networks has typically focused on these regular variants. We present a methodology for performance analysis of unbuffered Banyan multistage interconnection networks. The methodology has two novel features: it allows analysis of networks where some inputs are more likely to be active than others, and allows analysis of Banyan networks of arbitrary topology.
</description>
<pubDate>Sat, 01 Sep 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6032</guid>
<dc:date>1990-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of Series Expressions: Part II: Overview of the Theory and Implementation</title>
<link>https://hdl.handle.net/1721.1/6031</link>
<description>Optimization of Series Expressions: Part II: Overview of the Theory and Implementation
Waters, Richard C.
The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently---the prime source of inefficiency being the creation of intermediate series objects. A restricted class of series expressions, obviously synchronizable series expressions, is defined which can be evaluated very efficiently. At the cost of introducing restrictions which place modest limits on the series expressions which can be written, the restrictions guarantee that the creation of intermediate series objects is never necessary. This makes it possible to automatically convert obviously synchronizable series expressions into highly efficient loops using straightforward algorithms.
</description>
<pubDate>Sun, 01 Jan 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6031</guid>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Height and Gradient from Shading</title>
<link>https://hdl.handle.net/1721.1/6030</link>
<description>Height and Gradient from Shading
Horn, Berthold K.P.
The method described here for recovering the shape of a surface from a shaded image can deal with complex, wrinkled surfaces. Integrability can be enforced easily because both surface height and gradient are represented. The robustness of the method stems in part from linearization of the reflectance map about the current estimate of the surface orientation at each picture cell. The new scheme can find an exact solution of a given shape-from-shading problem even though a regularizing term is included. This is a reflection of the fact that shape-from-shading problems are not ill-posed when boundary conditions are available or when the image contains singular points.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6030</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Indexing for Visual Recognition from a Large Model Base</title>
<link>https://hdl.handle.net/1721.1/6029</link>
<description>Indexing for Visual Recognition from a Large Model Base
Breuel, Thomas M.
This paper describes a new approach to the model base indexing stage of visual object recognition. Fast model base indexing of 3D objects is achieved by accessing a database of encoded 2D views of the objects using a fast 2D matching algorithm. The algorithm is specifically intended as a plausible solution for the problem of indexing into very large model bases that general purpose vision systems and robots will have to deal with in the future. Other properties that make the indexing algorithm attractive are that it can take advantage of most geometric and non-geometric properties of features without modification, and that it addresses the incremental model acquisition problem for 3D objects.
</description>
<pubDate>Wed, 01 Aug 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6029</guid>
<dc:date>1990-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Verification of Hypothesized Matches in Model-Based Recognition</title>
<link>https://hdl.handle.net/1721.1/6028</link>
<description>On the Verification of Hypothesized Matches in Model-Based Recognition
Grimson, W. Eric L.; Huttenlocher, Daniel P.
In model-based recognition, ad hoc techniques are used to decide if a match of data to model is correct. Generally an empirically determined threshold is placed on the fraction of model features that must be matched. We rigorously derive conditions under which to accept a match, relating the probability of a random match to the fraction of model features accounted for, as a function of the number of model features, the number of image features, and the sensor noise. We analyze some existing recognition systems and show that our method yields results comparable with experimental data.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6028</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Combinatorics of Heuristic Search Termination for Object Recognition in Cluttered Environments</title>
<link>https://hdl.handle.net/1721.1/6027</link>
<description>The Combinatorics of Heuristic Search Termination for Object Recognition in Cluttered Environments
Grimson, W. Eric L.
Many recognition systems use constrained search to locate objects in cluttered environments. Earlier analysis showed that the expected search is quadratic in the number of model and data features, if all the data comes from one object, but is exponential when spurious data is included. To overcome this, many methods terminate search once an interpretation that is "good enough" is found. We formally examine the combinatorics of this, showing that correct termination procedures dramatically reduce search. We provide conditions on the object model and the scene clutter such that the expected search is quartic. These results are shown to agree with empirical data for cluttered object recognition.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6027</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experience with Acore: Implementing GHC with Actors</title>
<link>https://hdl.handle.net/1721.1/6026</link>
<description>Experience with Acore: Implementing GHC with Actors
Palmucci, Jeff; Waldsburger, Carl; Duis, David; Krause, Paul
This paper presents a concurrent interpreter for a general-purpose concurrent logic programming language, Guarded Horn Clauses (GHC). Unlike typical implementations of GHC in logic programming languages, the interpreter is implemented in the Actor language Acore. The primary motivation for this work was to probe the strengths and weaknesses of Acore as a platform for developing sophisticated programs. The GHC interpreter provided a rich testbed for exploring Actor programming methodology. The interpreter is a pedagogical investigation of the mapping of GHC constructs onto the Actor model. Since we opted for simplicity over optimization, the interpreter is somewhat inefficient.
</description>
<pubDate>Wed, 01 Aug 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6026</guid>
<dc:date>1990-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel and Deterministic Algorithms for MRFs: Surface Reconstruction and Integration</title>
<link>https://hdl.handle.net/1721.1/6025</link>
<description>Parallel and Deterministic Algorithms for MRFs: Surface Reconstruction and Integration
Geiger, Davi; Girosi, Federico
In recent years many researchers have investigated the use of Markov random fields (MRFs) for computer vision. The computational complexity of the implementation has been a drawback of MRFs. In this paper we derive deterministic approximations to MRF models. All the theoretical results are obtained in the framework of mean field theory from statistical mechanics. Because we use MRF models, the mean field equations lead to parallel and iterative algorithms. One of the considered models for image reconstruction is shown to give, in a natural way, the graduated non-convexity algorithm proposed by Blake and Zisserman.
</description>
<pubDate>Mon, 01 May 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6025</guid>
<dc:date>1989-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Considerations for an Earth-Based Flexible Robotic System</title>
<link>https://hdl.handle.net/1721.1/6024</link>
<description>Design Considerations for an Earth-Based Flexible Robotic System
Christian, Andrew
This paper provides insights into the problems of designing a robot with joint and link flexibility. The deflection of the robot under gravity is correlated with the fundamental frequency of vibration. We consider different types of link geometry and evaluate the flexibility potential of different materials. Some general conclusions and guidelines for constructing a flexible robot are given.
</description>
<pubDate>Wed, 01 Mar 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6024</guid>
<dc:date>1989-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal/Temporal Connectives: Syntax and Lexicon</title>
<link>https://hdl.handle.net/1721.1/6023</link>
<description>Causal/Temporal Connectives: Syntax and Lexicon
Brent, Michael R.
This report elucidates the linguistic representation of temporal relations among events. This involves examining sentences that contain two clauses connected by words like once, by the time, when, and before. Specifically, the effect of the tenses of the connected clauses on the acceptability of sentences is examined. For example, Rachel disappeared once Jon had fallen asleep is fine, but *Rachel had disappeared once Jon fell asleep is unacceptable. A theory of acceptability is developed and its implications for interpretation discussed. Factoring the linguistic knowledge into a general, syntactic component and a lexical component clarifies the interpretation problem. Finally, a computer model of the theory is demonstrated.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6023</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Object-Oriented Software Reuse Tool</title>
<link>https://hdl.handle.net/1721.1/6022</link>
<description>An Object-Oriented Software Reuse Tool
Monegan, Michael D.
The Object-oriented Reuse Tool (ORT) supports the reuse of object-oriented software by maintaining a library of reusable classes and recording information about their reusability as well as information associated with their design and verification. In the early design phases of object-oriented development, ORT facilitates reuse by providing a flexible way to navigate the library, thereby aiding in the process of refining a design to maximally reuse existing classes. A collection of extensions to ORT have also been identified. These extensions would compose the remainder of a system useful in increasing reuse in object-oriented software production.
</description>
<pubDate>Sat, 01 Apr 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6022</guid>
<dc:date>1989-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Multiprocessor Architecture Using Modular Arithmetic for Very High Precision Computation</title>
<link>https://hdl.handle.net/1721.1/6021</link>
<description>A Multiprocessor Architecture Using Modular Arithmetic for Very High Precision Computation
Wu, Henry M.
We outline a multiprocessor architecture that uses modular arithmetic to implement numerical computation with 900 bits of intermediate precision. A proposed prototype, to be implemented with off-the-shelf parts, will perform high-precision arithmetic as fast as some workstations and minicomputers can perform IEEE double-precision arithmetic. We discuss how the structure of modular arithmetic conveniently maps into a simple, pipelined multiprocessor architecture. We present techniques we developed to overcome a few classical drawbacks of modular arithmetic. Our architecture is suitable to and essential for the study of chaotic dynamical systems.
</description>
<pubDate>Sat, 01 Apr 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6021</guid>
<dc:date>1989-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>SQUIRT: The Prototypical Mobile Robot for Autonomous Graduate Students</title>
<link>https://hdl.handle.net/1721.1/6020</link>
<description>SQUIRT: The Prototypical Mobile Robot for Autonomous Graduate Students
Flynn, Anita M.; Brooks, Rodney A.; Wells, William M., III; Barrett, David S.
This paper describes an exercise in building a complete robot aimed at being as small as possible but using off-the-shelf components exclusively. The result is an autonomous mobile robot slightly larger than one cubic inch which incorporates sensing, actuation, onboard computation, and onboard power supplies. Nicknamed Squirt, this robot acts as a 'bug', hiding in dark corners and venturing out in the direction of last heard noises, only moving after the noises are long gone.
</description>
<pubDate>Sat, 01 Jul 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6020</guid>
<dc:date>1989-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Twilight Zones and Cornerstones: A Gnat Robot Double Feature</title>
<link>https://hdl.handle.net/1721.1/6019</link>
<description>Twilight Zones and Cornerstones: A Gnat Robot Double Feature
Flynn, Anita M.; Brooks, Rodney A.; Tavrow, Lee S.
We want to build tiny gnat-sized robots, a millimeter or two in diameter. They will be cheap, disposable, totally self-contained autonomous agents able to do useful things in the world. This paper consists of two parts. The first describes why we want to build them. The second is a technical outline of how to go about it. Gnat robots are going to change the world.
</description>
<pubDate>Sat, 01 Jul 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6019</guid>
<dc:date>1989-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lexical Conceptual Structure and Generation in Machine Translation</title>
<link>https://hdl.handle.net/1721.1/6018</link>
<description>Lexical Conceptual Structure and Generation in Machine Translation
Dorr, Bonnie J.
This report introduces an implemented scheme for generating target-language sentences using a compositional representation of meaning called lexical conceptual structure. Lexical conceptual structure facilitates two crucial operations associated with generation: lexical selection and syntactic realization. The compositional nature of the representation is particularly valuable for these two operations when semantically equivalent source- and target-language words and phrases are structurally or thematically divergent. To determine the correct lexical items and syntactic realization associated with the surface form in such cases, the underlying lexical-semantic forms are systematically mapped to the target-language syntactic structures. The model described constitutes a lexical-semantic extension to UNITRAN.
</description>
<pubDate>Thu, 01 Jun 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6018</guid>
<dc:date>1989-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Networks and the Best Approximation Property</title>
<link>https://hdl.handle.net/1721.1/6017</link>
<description>Networks and the Best Approximation Property
Girosi, Federico; Poggio, Tomaso
Networks can be considered as approximation schemes. Multilayer networks of the backpropagation type can approximate arbitrarily well continuous functions (Cybenko, 1989; Funahashi, 1989; Stinchcombe and White, 1989). We prove that networks derived from regularization theory, including Radial Basis Functions (Poggio and Girosi, 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property of best approximation. The main result of this paper is that multilayer networks, of the type used in backpropagation, do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of the best approximation.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6017</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conceptual Basis of the Lexicon in Machine Translation</title>
<link>https://hdl.handle.net/1721.1/6016</link>
<description>Conceptual Basis of the Lexicon in Machine Translation
Dorr, Bonnie J.
This report describes the organization and content of lexical information required for the task of machine translation. In particular, the lexical-conceptual basis for UNITRAN, an implemented machine translation system, will be described. UNITRAN uses an underlying form called lexical conceptual structure to perform lexical selection and syntactic realization. Lexical word entries have two levels of description: the first is an underlying lexical-semantic representation that is derived from hierarchically organized primitives, and the second is a mapping from this representation to a corresponding syntactic structure. The interaction of these two levels will be discussed and the lexical selection and syntactic realization processes will be described.
</description>
<pubDate>Tue, 01 Aug 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6016</guid>
<dc:date>1989-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Standard Map Machine</title>
<link>https://hdl.handle.net/1721.1/6015</link>
<description>The Standard Map Machine
LaMacchia, Brian; Nieh, Jason
We have designed the Standard Map Machine (SMM) as an answer to the intensive computational requirements involved in the study of chaotic behavior in nonlinear systems. The high-speed and high-precision performance of this computer is due to its simple architecture specialized to the numerical computations required of nonlinear systems. In this report, we discuss the design and implementation of this special-purpose machine.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6015</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering</title>
<link>https://hdl.handle.net/1721.1/6014</link>
<description>Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering
Poggio, Tomaso; Girosi, Federico
The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients $c_\alpha$: moving centers and adjustable norm-weights.
</description>
<pubDate>Sun, 01 Apr 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6014</guid>
<dc:date>1990-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Do the Right Thing</title>
<link>https://hdl.handle.net/1721.1/6013</link>
<description>How to Do the Right Thing
Maes, Pattie
This paper presents a novel approach to the problem of action selection for an autonomous agent. An agent is viewed as a collection of competence modules. Action selection is modeled as an emergent property of an activation/inhibition dynamics among these modules. A concrete action selection algorithm is presented and a detailed account of the results is given. This algorithm combines characteristics of both traditional planners and reactive systems: it produces fast and robust activity in a tight interaction loop with the environment, while at the same time allowing for some prediction and planning to take place.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6013</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous Stochastic Cellular Automata that Have a Stationary Distribution and No Detailed Balance</title>
<link>https://hdl.handle.net/1721.1/6012</link>
<description>Continuous Stochastic Cellular Automata that Have a Stationary Distribution and No Detailed Balance
Poggio, Tomaso; Girosi, Federico
Marroquin and Ramirez (1990) have recently discovered a class of discrete stochastic cellular automata with Gibbsian invariant measures that have a non-reversible dynamic behavior. Practical applications include more powerful algorithms than the Metropolis algorithm to compute MRF models. In this paper we describe a large class of stochastic dynamical systems that has a Gibbs asymptotic distribution but does not satisfy reversibility. We characterize sufficient properties of a sub-class of stochastic differential equations in terms of the associated Fokker-Planck equation for the existence of an asymptotic probability distribution in the system of coordinates which is given. Practical implications include VLSI analog circuits to compute coupled MRF models.
</description>
<pubDate>Sat, 01 Dec 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6012</guid>
<dc:date>1990-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bringing the Grandmother Back into the Picture: A Memory-Based View of Object Recognition</title>
<link>https://hdl.handle.net/1721.1/6011</link>
<description>Bringing the Grandmother Back into the Picture: A Memory-Based View of Object Recognition
Edelman, Shimon; Poggio, Tomaso
We describe experiments with a versatile pictorial prototype-based learning scheme for 3D object recognition. The GRBF scheme seems to be amenable to realization in biophysical hardware because the only kind of computation it involves can be effectively carried out by combining receptive fields. Furthermore, the scheme is computationally attractive because it brings together the old notion of a "grandmother" cell and the rigorous approximation methods of regularization and splines.
</description>
<pubDate>Sun, 01 Apr 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6011</guid>
<dc:date>1990-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast, Cheap and Out of Control</title>
<link>https://hdl.handle.net/1721.1/6010</link>
<description>Fast, Cheap and Out of Control
Brooks, Rodney A.; Flynn, Anita M.
Spur-of-the-moment planetary exploration missions are within our reach. Complex systems and complex missions usually take years of planning and force launches to become incredibly expensive. We argue here for cheap, fast missions using large numbers of mass-produced simple autonomous robots that are small by today's standards, perhaps 1 to 2 kg. We suggest that within a few years it will be possible, at modest cost, to invade a planet with millions of tiny robots.
</description>
<pubDate>Fri, 01 Dec 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6010</guid>
<dc:date>1989-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Consensus Knowledge Acquisition</title>
<link>https://hdl.handle.net/1721.1/6009</link>
<description>Consensus Knowledge Acquisition
Trice, Andrew; Davis, Randall
We have developed a method and prototype program for assisting two experts in their attempts to construct a single, consensus knowledge base. We show that consensus building can be effectively facilitated by a debugging approach that identifies, explains, and resolves discrepancies in their knowledge. To implement this approach we identify and use recognition and repair procedures for a variety of discrepancies. Examples of this knowledge are illustrated with sample transcripts from CARTER, a system for reconciling two rule-based systems. Implications for resolving other kinds of knowledge representations are also examined.
</description>
<pubDate>Fri, 01 Dec 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6009</guid>
<dc:date>1989-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stereo Feature Matching in Disparity Space</title>
<link>https://hdl.handle.net/1721.1/6008</link>
<description>Stereo Feature Matching in Disparity Space
Braunegg, David J.
This paper describes a new method for matching, validating, and disambiguating features for stereo vision. It is based on the Marr-Poggio-Grimson stereo matching algorithm, which uses zero-crossing contours in difference-of-Gaussian filtered images as features. The matched contours are represented in disparity space, which makes the information needed for matched contour validation and disambiguation easily accessible. The use of disparity space also makes the algorithm conceptually cleaner than previous implementations of the Marr-Poggio-Grimson algorithm and yields a more efficient matching process.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6008</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Alternative to Using the 3D Delaunay Tessellation for Representing Freespace</title>
<link>https://hdl.handle.net/1721.1/6007</link>
<description>An Alternative to Using the 3D Delaunay Tessellation for Representing Freespace
Braunegg, David J.
Representing the world in terms of visible surfaces and the freespace existing between these surfaces and the viewer is an important problem in robotics. Recently, researchers have proposed using the 3D Delaunay Tessellation for representing 3D stereo vision data and the freespace determined therefrom. We discuss problems with using the 3D Delaunay Tessellation as the basis of the representation and propose an alternative representation that we are currently investigating. This new representation is appropriate for planning mobile robot navigation and promises to be robust when using stereo data that has errors and uncertainty.
</description>
<pubDate>Fri, 01 Sep 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6007</guid>
<dc:date>1989-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Location Recognition Using Stereo Vision</title>
<link>https://hdl.handle.net/1721.1/6006</link>
<description>Location Recognition Using Stereo Vision
Braunegg, David J.
A mobile robot must be able to determine its own position in the world. To support truly autonomous navigation, we present a system that builds and maintains its own models of world locations and uses these models to recognize its world position from stereo vision input. The system is designed to be robust with respect to input errors and to respond to a gradually changing world by updating the world location models. We present results from tests of the system that demonstrate its reliability. The model builder and recognition system fit into a planned world modeling system that we describe.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6006</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Recovery of Motion and Shape in the General Case by Fixation</title>
<link>https://hdl.handle.net/1721.1/6005</link>
<description>Direct Recovery of Motion and Shape in the General Case by Fixation
Taalebinezhaad, M. Ali
This work introduces a direct method called FIXATION for solving the general motion vision problem. This Fixation method results in a constraint equation between translational and rotational velocities that in combination with the Brightness-Change Constraint Equation (BCCE) solves the general motion vision problem: arbitrary motion with respect to an arbitrary rigid environment. Neither Correspondence nor Optical Flow has been used here. Recently, Direct Motion Vision methods have used the BCCE for solving the motion vision problem of special motions or environments. In contrast to those solutions, the Fixation method does not put such severe restrictions on the motion or the environment.
</description>
<pubDate>Thu, 01 Mar 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6005</guid>
<dc:date>1990-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Estimation of Structure and Motion from Multiple Frames</title>
<link>https://hdl.handle.net/1721.1/6004</link>
<description>Direct Estimation of Structure and Motion from Multiple Frames
Heel, Joachim
This paper presents a method for the estimation of scene structure and camera motion from a sequence of images. This approach is fundamentally new. No computation of optical flow or feature correspondences is required. The method processes image sequences of arbitrary length and exploits the redundancy for a significant reduction in error over time. No assumptions are made about camera motion or surface structure. Both quantities are fully recovered. Our method combines the "direct" motion vision approach with the theory of recursive estimation. Each step is illustrated and evaluated with results from real images.
</description>
<pubDate>Thu, 01 Mar 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6004</guid>
<dc:date>1990-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Recognition as Representation and Search</title>
<link>https://hdl.handle.net/1721.1/6003</link>
<description>Machine Recognition as Representation and Search
Zhao, Feng
Generality, representation, and control have been the central issues in machine recognition. Model-based recognition is the search for consistent matches of the model and image features. We present a comparative framework for the evaluation of different approaches, particularly those of ACRONYM, RAF, and Ikeuchi et al. The strengths and weaknesses of these approaches are discussed and compared and remedies are suggested. Various tradeoffs made in the implementations are analyzed with respect to the systems' intended task-domains. The requirements for a versatile recognition system are motivated. Several directions for future research are pointed out.
</description>
<pubDate>Fri, 01 Dec 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6003</guid>
<dc:date>1989-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computation of Texture and Stereoscopic Depth in Humans</title>
<link>https://hdl.handle.net/1721.1/6002</link>
<description>Computation of Texture and Stereoscopic Depth in Humans
Fahle, Manfred; Troscianko, Tom
The computation of texture and of stereoscopic depth is limited by a number of factors in the design of the optical front-end and subsequent processing stages in humans and machines. A number of limiting factors in the human visual system, such as resolution of the optics and opto-electronic interface, contrast, luminance, temporal resolution and eccentricity, are reviewed and evaluated concerning their relevance for the recognition of texture and stereoscopic depth. The algorithms used by the human brain to discriminate between textures and to compute stereoscopic depth are very fast and efficient. Their study might be beneficial for the development of better algorithms in machine vision.
</description>
<pubDate>Sun, 01 Oct 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6002</guid>
<dc:date>1989-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Limits of Precision for Human Eye Motor Control</title>
<link>https://hdl.handle.net/1721.1/6001</link>
<description>Limits of Precision for Human Eye Motor Control
Fahle, Manfred
Dichoptic presentation of vernier stimuli, i.e., one segment to each eye, yielded three times higher thresholds than binocular presentation, mainly due to uncorrelated movements of both eyes. Thresholds allow one to calculate an upper estimate for the amplitudes of uncorrelated eye movements during fixation. This estimate matches the best results from direct eye position recording, with the calculated mean amplitude of eye tremor corresponding to roughly one photoreceptor diameter. The combined amplitude of both correlated and uncorrelated eye movements was also measured by delaying one segment of the vernier relative to its partner under monocular or dichoptic conditions.
</description>
<pubDate>Wed, 01 Nov 1989 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6001</guid>
<dc:date>1989-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Model for Rivalry Between Cognitive Contours</title>
<link>https://hdl.handle.net/1721.1/6000</link>
<description>A Model for Rivalry Between Cognitive Contours
Fahle, Manfred; Palm, Gunther
The interactions between illusory and real contours have been investigated under monocular, binocular and dichoptic conditions. Results show that under all three presentation conditions, periodic alternations, generally called rivalry, occur during the perception of cognitive (or illusory) triangles, while earlier research had failed to find such rivalry (Bradley &amp; Dumais, 1975). With line triangles, rivalry is experienced only under dichoptic conditions. A model is proposed to account for the observed phenomena, and the results of simulations are presented.
</description>
<pubDate>Fri, 01 Jun 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/6000</guid>
<dc:date>1990-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Shifter Hypothesis for the Elimination of Motion Blur</title>
<link>https://hdl.handle.net/1721.1/5999</link>
<description>On the Shifter Hypothesis for the Elimination of Motion Blur
Fahle, Manfred
Moving objects may stimulate many retinal photoreceptors within the integration time of the receptors without motion blur being experienced. Anderson and van Essen (1987) suggested that the neuronal representation of retinal images is shifted on its way to the cortex, in an opposite direction to the motion. Thus, the cortical representation of objects would be stationary. I have measured thresholds for two vernier stimuli, moving simultaneously in opposite directions over identical positions. Motion blur for these stimuli is not stronger than with a single moving stimulus, and thresholds can be below a photoreceptor diameter. This result cannot be easily reconciled with the hypothesis of "shifter circuits".
</description>
<pubDate>Wed, 01 Aug 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5999</guid>
<dc:date>1990-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting the Redundancy of a Hand-Arm Robotic System</title>
<link>https://hdl.handle.net/1721.1/5998</link>
<description>Exploiting the Redundancy of a Hand-Arm Robotic System
Melchiorri, Claudio; Salisbury, J.K.
In this report, a method for exploiting the redundancy of a hand-arm mechanical system for manipulation tasks is illustrated. The basic idea is to try to exploit the different intrinsic capabilities of the arm and hand subsystems. The Jacobian transpose technique is at the core of the method: different behaviors of the two subsystems are obtained by means of constraints in Null(J) generated by non-orthogonal projectors. Comments about the computation of the constraints are reported in the memo, as well as a description of some preliminary experiments on a robotic system at the A.I. Lab., M.I.T.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5998</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Supercomputer Toolkit and Its Applications</title>
<link>https://hdl.handle.net/1721.1/5997</link>
<description>The Supercomputer Toolkit and Its Applications
Abelson, Harold; Berlin, Andrew A.; Katzenelson, Jacob; McAllister, William H.; Rozas, Guillermo J.; Sussman, Gerald Jay
The Supercomputer Toolkit is a proposed family of standard hardware and software components from which special-purpose machines can be easily configured. Using the Toolkit, a scientist or an engineer, starting with a suitable computational problem, will be able to readily configure a special-purpose multiprocessor that attains supercomputer-class performance on that problem, at a fraction of the cost of a general-purpose supercomputer. The Toolkit is currently being built as a joint project between Hewlett-Packard and MIT. The software and the applications are in various stages of development and research.
</description>
<pubDate>Sun, 01 Jul 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5997</guid>
<dc:date>1990-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contact Sensing from Force Measurements</title>
<link>https://hdl.handle.net/1721.1/5996</link>
<description>Contact Sensing from Force Measurements
Bicchi, Antonio; Salisbury, J. Kenneth; Brock, David L.
This paper addresses contact sensing, i.e., the problem of resolving the location of a contact, the force at the interface and the moment about the contact normals. Called "intrinsic" contact sensing for its use of internal force and torque measurements, this method allows for practical devices which provide simple, relevant contact information in practical robotic applications. Such sensors have been used in conjunction with robot hands to identify objects, determine surface friction, detect slip, augment grasp stability, measure object mass, probe surfaces, control collision and a variety of other useful tasks. This paper describes the theoretical basis for their operation and provides a framework for future device design.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5996</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Criterion for the Optimal Design of Multiaxis Force Sensors</title>
<link>https://hdl.handle.net/1721.1/5995</link>
<description>A Criterion for the Optimal Design of Multiaxis Force Sensors
Bicchi, Antonio
This paper deals with the design of multi-axis force (also known as force/torque) sensors, as considered within the framework of optimal design theory. The principal goal of this paper is to identify a mathematical objective function, whose minimization corresponds to the optimization of sensor accuracy. The methodology employed is derived from linear algebra and analysis of numerical stability. The problem of optimizing the number of basic transducers employed in a multi-component sensor is also addressed. Finally, applications of the proposed method to the design of a simple sensor as well as to the optimization of a novel, 6-axis miniaturized sensor are discussed.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5995</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data and Model-Driven Selection Using Color Regions</title>
<link>https://hdl.handle.net/1721.1/5994</link>
<description>Data and Model-Driven Selection Using Color Regions
Syeda-Mahmood, Tanveer Fathima
A key problem in model-based object recognition is selection, namely, the problem of determining which regions in the image are likely to come from a single object. In this paper we present an approach that extracts and uses color region information to perform selection either based solely on image data (data-driven), or based on the knowledge of the color description of the model (model-driven). The paper presents a method of perceptual color specification by color categories to extract perceptual color regions. It also discusses the utility of color-based selection in reducing the search involved in recognition.
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5994</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimation of Discontinuous Displacement Vector Fields with the Minimum Description Length Criterion</title>
<link>https://hdl.handle.net/1721.1/5993</link>
<description>Estimation of Discontinuous Displacement Vector Fields with the Minimum Description Length Criterion
Dengler, Joachim
A new noniterative approach to determine displacement vector fields with discontinuities is described. In order to overcome the limitations of current methods, the problem is regarded as a general modelling problem. Starting from a family of regularized estimates, the compatibility between different levels of regularization is determined by measuring the difference in description length. This gives local but noisy evidence of possible model boundaries at multiple scales. With the two constraints of continuous lines of discontinuities and the spatial coincidence assumption, consistent boundary evidence is found. Based on this combined evidence the model is updated, now describing homogeneous regions with sharp discontinuities.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5993</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Shape of Shading</title>
<link>https://hdl.handle.net/1721.1/5992</link>
<description>The Shape of Shading
Weinshall, Daphna
This paper discusses the relationship between the shape of the shading, the surface whose depth at each point equals the brightness in the image, and the shape of the original surface. I suggest the shading as an initial local approximation to shape, and discuss the scope of this approximation and what it may be good for. In particular, qualitative surface features, such as the sign of the Gaussian curvature, can be computed in some cases directly from the shading. Finally, a method to compute the direction of the illuminant (assuming a single point light source) from shading on occluding contours is shown.
</description>
<pubDate>Mon, 01 Oct 1990 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5992</guid>
<dc:date>1990-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding Junctions Using the Image Gradient</title>
<link>https://hdl.handle.net/1721.1/5991</link>
<description>Finding Junctions Using the Image Gradient
Beymer, David J.
Junctions are the intersection points of three or more intensity surfaces in an image. An analysis of zero crossings and the gradient near junctions demonstrates that gradient-based edge detection schemes fragment edges at junctions. This fragmentation is caused by the intrinsic pairing of zero crossings and a destructive interference of edge gradients at junctions. Using the previous gradient analysis, we propose a junction detector that finds junctions in edge maps by following gradient ridges and using the minimum direction of saddle points in the gradient. The junction detector is demonstrated on real imagery and previous approaches to junction detection are discussed.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5991</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of Visual Modules from Examples: Learning Hyperacuity</title>
<link>https://hdl.handle.net/1721.1/5990</link>
<description>Synthesis of Visual Modules from Examples: Learning Hyperacuity
Poggio, Tomaso; Fahle, Manfred; Edelman, Shimon
Networks that solve specific visual tasks, such as the evaluation of spatial relations with hyperacuity precision, can be easily synthesized from a small set of examples. This may have significant implications for the interpretation of many psychophysical results in terms of neuronal models.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5990</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Piezoelectric Micromotors for Microrobots</title>
<link>https://hdl.handle.net/1721.1/5989</link>
<description>Piezoelectric Micromotors for Microrobots
Flynn, Anita M.; Tavrow, Lee S.; Bart, Stephen F.; Brooks, Rodney A.
By combining new robot control systems with piezoelectric motors and micromechanics, we propose creating micromechanical systems which are small, cheap and completely autonomous. We have fabricated small - a few millimeters in diameter - piezoelectric motors using ferroelectric thin films and consisting of two pieces: a stator and a rotor. The stationary stator includes a piezoelectric film in which we induce bending in the form of a traveling wave. Anything which sits atop the stator is propelled by the wave. A small glass lens placed upon the stator becomes the spinning rotor. Using thin films of PZT on silicon nitride membranes, various types of actuator structures have been fabricated.
</description>
<pubDate>Fri, 01 Feb 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5989</guid>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Imagination and Situated Cognition</title>
<link>https://hdl.handle.net/1721.1/5988</link>
<description>Imagination and Situated Cognition
Stein, Lynn Andrea
A subsumption-based mobile robot is extended to perform cognitive tasks. Following directions, the robot navigates directly to previously unexplored goals. This robot exploits a novel architecture based on the idea that cognition uses the underlying machinery of interaction, imagining sensations and actions.
</description>
<pubDate>Fri, 01 Feb 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5988</guid>
<dc:date>1991-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting and Representing Qualitative Behaviors of Complex Systems in Phase Spaces</title>
<link>https://hdl.handle.net/1721.1/5987</link>
<description>Extracting and Representing Qualitative Behaviors of Complex Systems in Phase Spaces
Zhao, Feng
We develop a qualitative method for understanding and representing phase space structures of complex systems and demonstrate the method with a program, MAPS --- Modeler and Analyzer for Phase Spaces, using deep domain knowledge of dynamical system theory. Given a dynamical system, the program generates a complete, high level symbolic description of the phase space structure sensible to human beings and manipulable by other programs. Using the phase space descriptions, we are developing a novel control synthesis strategy to automatically synthesize a controller for a nonlinear system in the phase space to achieve desired properties.
</description>
<pubDate>Fri, 01 Mar 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5987</guid>
<dc:date>1991-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Method for Skew-free Distribution of Digital Signals Using Matched Variable Delay Lines</title>
<link>https://hdl.handle.net/1721.1/5986</link>
<description>A Method for Skew-free Distribution of Digital Signals Using Matched Variable Delay Lines
Knight, Thomas; Wu, Henry M.
The ability to distribute signals everywhere in a circuit with controlled and known delays is essential in large, high-speed digital systems. We present a technique by which a signal driver can adjust the arrival time of the signal at the end of the wire using a pair of matched variable delay lines. We show an implementation of this idea requiring no extra wiring, and how it can be extended to distribute signals skew-free to receivers along the signal run. We demonstrate how this scheme fits into the boundary scan logic of a VLSI chip.
</description>
<pubDate>Sun, 01 Mar 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5986</guid>
<dc:date>1992-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Control Algorithms for Chaotic Systems</title>
<link>https://hdl.handle.net/1721.1/5985</link>
<description>Control Algorithms for Chaotic Systems
Bradley, Elizabeth
This paper presents techniques that actively exploit chaotic behavior to accomplish otherwise-impossible control tasks. The state space is mapped by numerical integration at different system parameter values and trajectory segments from several of these maps are automatically combined into a path between the desired system states. A fine-grained search and high computational accuracy are required to locate appropriate trajectory segments, piece them together and cause the system to follow this composite path. The sensitivity of a chaotic system's state-space topology to the parameters of its equations and of its trajectories to the initial conditions makes this approach rewarding in spite of its computational demands.
</description>
<pubDate>Fri, 01 Mar 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5985</guid>
<dc:date>1991-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Apparent Opacity Affects Perception of Structure from Motion</title>
<link>https://hdl.handle.net/1721.1/5984</link>
<description>Apparent Opacity Affects Perception of Structure from Motion
Kersten, Daniel; Bulthoff, Heinrich
The judgment of surface attributes such as transparency or opacity is often considered to be a higher-level visual process that would make use of low-level stereo or motion information to tease apart the transparent from the opaque parts. In this study, we describe a new illusion and some results that question the above view by showing that depth from transparency and opacity can override the rigidity bias in perceiving depth from motion. This provides support for the idea that the brain's computation of the surface material attribute of transparency may have to be done either before, or in parallel with, the computation of structure from motion.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5984</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear Analog Networks for Image Smoothing and Segmentation</title>
<link>https://hdl.handle.net/1721.1/5983</link>
<description>Nonlinear Analog Networks for Image Smoothing and Segmentation
Lumsdaine, A.; Wyatt, J.L., Jr.; Elfadel, I.M.
Image smoothing and segmentation algorithms are frequently formulated as optimization problems. Linear and nonlinear (reciprocal) resistive networks have solutions characterized by an extremum principle. Thus, appropriately designed networks can automatically solve certain smoothing and segmentation problems in robot vision. This paper considers switched linear resistive networks and nonlinear resistive networks for such tasks. The latter network type is derived from the former via an intermediate stochastic formulation, and a new result relating the solution sets of the two is given for the "zero temperature" limit. We then present simulation studies of several continuation methods that can be gracefully implemented in analog VLSI and that seem to give "good" results for these non-convex optimization problems.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5983</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phase Space Navigator: Towards Automating Control Synthesis in Phase Spaces for Nonlinear Control Systems</title>
<link>https://hdl.handle.net/1721.1/5982</link>
<description>Phase Space Navigator: Towards Automating Control Synthesis in Phase Spaces for Nonlinear Control Systems
Zhao, Feng
We develop a novel autonomous control synthesis strategy called Phase Space Navigator for the automatic synthesis of nonlinear control systems. The Phase Space Navigator generates global control laws by synthesizing flow shapes of dynamical systems and planning and navigating system trajectories in the phase spaces. Parsing phase spaces into trajectory flow pipes provides a way to efficiently reason about the phase space structures and search for global control paths. The strategy is particularly suitable for synthesizing high-performance control systems that do not lend themselves to traditional design and analysis techniques.
</description>
<pubDate>Mon, 01 Apr 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5982</guid>
<dc:date>1991-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Multi-Scale Veto Model: A Two-Stage Analog Network for Edge Detection and Image Reconstruction</title>
<link>https://hdl.handle.net/1721.1/5981</link>
<description>The Multi-Scale Veto Model: A Two-Stage Analog Network for Edge Detection and Image Reconstruction
Dron, Lisa
This paper presents the theory behind a model for a two-stage analog network for edge detection and image reconstruction to be implemented in VLSI. Edges are detected in the first stage using the multi-scale veto rule, which eliminates candidates that do not pass a threshold test at each of a set of different spatial scales. The image is reconstructed in the second stage from the brightness values adjacent to edge locations. The MSV rule allows good localization and efficient noise removal. Since the reconstructed images are visually similar to the originals, the possibility exists of achieving significant bandwidth compression.
</description>
<pubDate>Sun, 01 Mar 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5981</guid>
<dc:date>1992-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Applications: Interpreter Meets Interface</title>
<link>https://hdl.handle.net/1721.1/5980</link>
<description>Programmable Applications: Interpreter Meets Interface
Eisenberg, Michael
Current fashion in "user-friendly" software design tends to place an overreliance on direct manipulation interfaces. To be truly expressive (and thus truly user-friendly), applications need both learnable interfaces and domain-enriched languages that are accessible to the user. This paper discusses some of the design issues that arise in the creation of such programmable applications. As an example, we present "SchemePaint", a graphics application that combines a MacPaint-like interface with an interpreter for (a "graphics-enriched") Scheme.
</description>
<pubDate>Tue, 01 Oct 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5980</guid>
<dc:date>1991-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contour Matching Using Local Affine Transformations</title>
<link>https://hdl.handle.net/1721.1/5979</link>
<description>Contour Matching Using Local Affine Transformations
Bachelder, Ivan A.
Partial constraints are often available in visual processing tasks requiring the matching of contours in two images. We propose a non-iterative scheme to determine contour matches using locally affine transformations. The method assumes that contours are approximated by the orthographic projection of planar patches within oriented neighborhoods of varying size. For degenerate cases, a minimal matching solution is chosen closest to the minimal pure translation. Performance on noisy synthetic and natural contour imagery is reported.
</description>
<pubDate>Wed, 01 Apr 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5979</guid>
<dc:date>1992-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Comparative Analysis of Reinforcement Learning Methods</title>
<link>https://hdl.handle.net/1721.1/5978</link>
<description>A Comparative Analysis of Reinforcement Learning Methods
Mataric, Maja
This paper analyzes the suitability of reinforcement learning (RL) for both programming and adapting situated agents. We discuss two RL algorithms: Q-learning and the Bucket Brigade. We introduce a special case of the Bucket Brigade, and analyze and compare its performance to Q-learning in a number of experiments. Next we discuss the key problems of RL: time and space complexity, input generalization, sensitivity to parameter values, and selection of the reinforcement function. We address the tradeoffs between built-in and learned knowledge and the number of training examples required by a learning algorithm. Finally, we suggest directions for future research.
</description>
<pubDate>Tue, 01 Oct 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5978</guid>
<dc:date>1991-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Correspondence and Affine Shape from Two Orthographic Views: Motion and Recognition</title>
<link>https://hdl.handle.net/1721.1/5977</link>
<description>Correspondence and Affine Shape from Two Orthographic Views: Motion and Recognition
Shashua, Amnon
The paper presents a simple model for recovering affine shape and correspondence from two orthographic views of a 3D object. It is shown that four corresponding points along two orthographic views, taken under similar illumination conditions, determine affine shape and correspondence for all other points. The scheme is useful for purposes of visual recognition by generating novel views of an object given two model views. It is also shown that the scheme can handle objects with smooth boundaries, to a good approximation, without introducing any modifications or additional model views.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5977</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Control Algorithm for Chaotic Physical Systems</title>
<link>https://hdl.handle.net/1721.1/5976</link>
<description>A Control Algorithm for Chaotic Physical Systems
Bradley, Elizabeth
Control algorithms which exploit the unique properties of chaos can vastly improve the design and performance of many practical and useful systems. The program Perfect Moment is built around such an algorithm. Given two points in the system's state space, it autonomously maps the space, chooses a set of trajectory segments from the maps, uses them to construct a composite path between the points, then causes the system to follow that path. This program is illustrated with two practical examples: the driven single pendulum and its electronic analog, the phase-locked loop. Strange attractor bridges, which alter the reachability of different state space points, can be used to increase the capture range of the circuit.
</description>
<pubDate>Tue, 01 Oct 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5976</guid>
<dc:date>1991-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intellectual Property and Software: The Assumptions are Broken</title>
<link>https://hdl.handle.net/1721.1/5975</link>
<description>Intellectual Property and Software: The Assumptions are Broken
Davis, Randall
In March 1991 the World Intellectual Property  Organization held an international symposium  attended primarily by lawyers, to discuss the  questions that artificial intelligence poses for  intellectual property law (i.e., copyright and  patents). This is an edited version of a talk  presented there, which argues that AI poses  few problems in the near term and that almost  all the truly challenging issues arise instead  from software in general. The talk was an  attempt to bridge the gap between the legal  community and the software community, to  explain why existing concepts and categories  in intellectual property law present such  difficult problems for software, and why  software as a technology breaks several  important assumptions underlying intellectual  property law.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5975</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Supercomputer Toolkit: A General Framework for Special-purpose Computing</title>
<link>https://hdl.handle.net/1721.1/5974</link>
<description>The Supercomputer Toolkit: A General Framework for Special-purpose Computing
Abelson, Harold; Berlin, Andrew A.; Katzenelson, Jacob; McAllister, William H.; Rozas, Guillermo J.; Sussman, Gerald Jay; Wisdom, Jack
The Toolkit is a family of hardware modules (processors, memory, interconnect, and input-output devices) and a collection of software modules (compilers, simulators, scientific libraries, and high-level front ends) from which high-performance special-purpose computers can be easily configured and programmed. The hardware modules are intended to be standard, reusable parts. These are combined by means of a user-reconfigurable, static interconnect technology. The Toolkit's software support, based on novel compilation techniques, produces extremely high-performance numerical code from high-level language input, and will eventually automatically configure hardware modules for particular applications.
</description>
<pubDate>Fri, 01 Nov 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5974</guid>
<dc:date>1991-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grammar Rewriting</title>
<link>https://hdl.handle.net/1721.1/5973</link>
<description>Grammar Rewriting
McAllester, David
We present a term rewriting procedure based  on congruence closure that can be used with  arbitrary equational theories. This procedure  is motivated by the pragmatic need to prove  equations in equational theories where  confluence can not be achieved. The  procedure uses context free grammars to  represent equivalence classes of terms. The  procedure rewrites grammars rather than  terms and uses congruence closure to  maintain certain congruence properties of the  grammar. Grammars provide concise  representations of large term sets. Infinite  term sets can be represented with finite  grammars and exponentially large term sets  can be represented with linear sized  grammars.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5973</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observations on Cognitive Judgments</title>
<link>https://hdl.handle.net/1721.1/5972</link>
<description>Observations on Cognitive Judgments
McAllester, David
It is obvious to anyone familiar with the rules  of the game of chess that a king on an empty  board can reach every square. It is true, but  not obvious, that a knight can reach every  square. Why is the first fact obvious but the  second fact not? This paper presents an  analytic theory of a class of obviousness  judgments of this type. Whether or not the  specifics of this analysis are correct, it seems  that the study of obviousness judgments can  be used to construct integrated theories of  linguistics, knowledge representation, and  inference.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5972</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Based Inference Procedures Applied to Schubert's Steamroller</title>
<link>https://hdl.handle.net/1721.1/5971</link>
<description>Natural Language Based Inference Procedures Applied to Schubert's Steamroller
Givan, Robert; McAllester, David; Shalaby, Sameer
We have previously argued that the syntactic  structure of natural language can be exploited  to construct powerful polynomial time  inference procedures. This paper supports  the earlier arguments by demonstrating that a  natural language based polynomial time  procedure can solve Schubert's steamroller in  a single step.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5971</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lifting Transformations</title>
<link>https://hdl.handle.net/1721.1/5970</link>
<description>Lifting Transformations
McAllester, David; Siskind, Jeffrey
Lifting is a well known technique in resolution  theorem proving, logic programming, and  term rewriting. In this paper we formulate  lifting as an efficiency-motivated program  transformation applicable to a wide variety of  nondeterministic procedures. This  formulation allows the immediate lifting of  complex procedures, such as the Davis-Putnam algorithm, which are otherwise  difficult to lift. We treat both classical lifting,  which is based on unification, and various  closely related program transformations  which we also call lifting transformations.  These nonclassical lifting transformations are  closely related to constraint techniques in  logic programming, resolution, and term  rewriting.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5970</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tractable Inference Relations</title>
<link>https://hdl.handle.net/1721.1/5969</link>
<description>Tractable Inference Relations
Givan, Robert; McAllester, David
We consider the concept of local sets of inference rules. Locality is a syntactic condition on rule sets which guarantees that the inference relation defined by those rules is polynomial time decidable. Unfortunately, determining whether a given rule set is local can be difficult. In this paper we define inductive locality, a strengthening of locality. We also give a procedure which can automatically recognize the locality of any inductively local rule set. Inductive locality seems to be more useful than the earlier concept of strong locality. We show that locality, as a property of rule sets, is undecidable in general.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5969</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition and Structure from One 2D Model View: Observations on Prototypes, Object Classes and Symmetries</title>
<link>https://hdl.handle.net/1721.1/5968</link>
<description>Recognition and Structure from One 2D Model View: Observations on Prototypes, Object Classes and Symmetries
Poggio, Tomaso; Vetter, Thomas
In this note we discuss how recognition can  be achieved from a single 2D model view  exploiting prior knowledge of an object's  structure (e.g. symmetry). We prove that for  any bilaterally symmetric 3D object one non- accidental 2D model view is sufficient for  recognition. Symmetries of higher order allow  the recovery of structure from one 2D view.  Linear transformations can be learned exactly  from a small set of examples in the case of  "linear object classes" and used to  produce new views of an object from a single  view.
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5968</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boltzmann Weighted Selection Improves Performance of Genetic Algorithms</title>
<link>https://hdl.handle.net/1721.1/5967</link>
<description>Boltzmann Weighted Selection Improves Performance of Genetic Algorithms
de la Maza, Michael; Tidor, Bruce
Modifiable Boltzmann selective pressure is  investigated as a tool to control variability in  optimizations using genetic algorithms. An  implementation of variable selective pressure,  modeled after the use of temperature as a  parameter in simulated annealing  approaches, is described. The convergence  behavior of optimization runs is illustrated as  a function of selective pressure; the method is  compared to a genetic algorithm lacking this  control feature and is shown to exhibit  superior convergence properties on a small  set of test problems. An analysis is presented  that compares the selective pressure of this  algorithm to a standard selection procedure.
</description>
<pubDate>Sun, 01 Dec 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5967</guid>
<dc:date>1991-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Reversible Maxwell's Demon</title>
<link>https://hdl.handle.net/1721.1/5966</link>
<description>Time-Reversible Maxwell's Demon
Skordos, P. A.
A time-reversible Maxwell's demon is  demonstrated which creates a density  difference between two chambers initialized to  have equal density. The density difference is  estimated theoretically and confirmed by  computer simulations. It is found that the  reversible Maxwell's demon compresses  phase space volume even though its  dynamics are time reversible. The  significance of phase space  volume compression in operating a  microscopic heat engine is also discussed.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5966</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Task and Object Learning in Visual Recognition</title>
<link>https://hdl.handle.net/1721.1/5965</link>
<description>Task and Object Learning in Visual Recognition
Edelman, Shimon; Bulthoff, Heinrich; Sklar, Erik
Human performance in object recognition  changes with practice, even in the absence of  feedback to the subject. The nature of the  change can reveal important properties of the  process of recognition. We report an  experiment designed to distinguish between  non-specific task learning and object- specific  practice effects. The results of the experiment  support the notion that learning through  modification of object representations can be  separated from less interesting effects of  practice, if appropriate response measures  (specifically, the coefficient of variation of  response time over views of an object)  are used. Furthermore, the results, obtained  with computer-generated amoeba-like  objects, corroborate previous findings  regarding the development of canonical  views and related phenomena with practice.
</description>
<pubDate>Tue, 01 Jan 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5965</guid>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Novel Approach to Graphics</title>
<link>https://hdl.handle.net/1721.1/5964</link>
<description>A Novel Approach to Graphics
Poggio, Tomaso; Brunelli, Roberto
We show that we can optimally represent the set of 2D images produced by the point features of a rigid 3D model as two lines in two high-dimensional spaces. We then describe a working recognition system in which we represent these spaces discretely in a hash table. We can access this table at run time to find all the groups of model features that could match a group of image features, accounting for the effects of sensing error. We also use this representation of a model's images to demonstrate significant new limitations of two other approaches to recognition: invariants, and non-accidental properties.
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5964</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>What Makes a Good Feature?</title>
<link>https://hdl.handle.net/1721.1/5963</link>
<description>What Makes a Good Feature?
Richards, W.; Jepson, A.
Using a Bayesian framework, we place bounds on just what features are worth computing if inferences about the world properties are to be made from image data. Previously others have proposed that useful features reflect "non-accidental'' or "suspicious'' configurations (such as parallel or collinear lines). We make these notions more precise and show them to be context sensitive.
</description>
<pubDate>Wed, 01 Apr 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5963</guid>
<dc:date>1992-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local Versus Global Control Laws for Cooperative Agent Teams</title>
<link>https://hdl.handle.net/1721.1/5962</link>
<description>Local Versus Global Control Laws for Cooperative Agent Teams
Parker, Lynne E.
The design of the control laws governing the  behavior of individual agents is crucial for the  successful development of cooperative agent  teams. These control laws may utilize a  combination of local and/or global knowledge  to achieve the resulting group behavior. A key  difficulty in this development is deciding the  proper balance between local and global  control required to achieve the desired  emergent group behavior. This paper  addresses this issue by presenting some  general guidelines and principles for  determining the appropriate level of global  versus local control. These principles are  illustrated and implemented in a "keep  formation'' cooperative task case study.
</description>
<pubDate>Sun, 01 Mar 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5962</guid>
<dc:date>1992-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chaotic Evolution of the Solar System</title>
<link>https://hdl.handle.net/1721.1/5961</link>
<description>Chaotic Evolution of the Solar System
Sussman, Gerald J.; Wisdom, Jack
The evolution of the entire planetary system  has been numerically integrated for a time  span of nearly 100 million years. This  calculation confirms that the evolution of the  solar system as a whole is chaotic, with a  time scale of exponential divergence of about  4 million years. Additional numerical  experiments indicate that the Jovian planet  subsystem is chaotic, although some small  variations in the model can yield  quasiperiodic motion. The motion of Pluto is  independently and robustly chaotic.
</description>
<pubDate>Sun, 01 Mar 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5961</guid>
<dc:date>1992-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Space Efficient 3D Model Indexing</title>
<link>https://hdl.handle.net/1721.1/5960</link>
<description>Space Efficient 3D Model Indexing
Jacobs, David W.
We show that we can optimally represent the set of 2D images produced by the point features of a rigid 3D model as two lines in two high-dimensional spaces. We then describe a working recognition system in which we represent these spaces discretely in a hash table. We can access this table at run time to find all the groups of model features that could match a group of image features, accounting for the effects of sensing error. We also use this representation of a model's images to demonstrate significant new limitations of two other approaches to recognition: invariants, and non-accidental properties.
</description>
<pubDate>Sat, 01 Feb 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5960</guid>
<dc:date>1992-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognizing 3D Objects from 2D Images: An Error Analysis</title>
<link>https://hdl.handle.net/1721.1/5959</link>
<description>Recognizing 3D Objects from 2D Images: An Error Analysis
Grimson, W. Eric; Huttenlocher, Daniel P.; Alter, T. D.
Many object recognition systems use a small number of pairings of data and model features to compute the 3D transformation from a model coordinate frame into the sensor coordinate system. With perfect image data, these systems work well. With uncertain image data, however, their performance is less clear. We examine the effects of 2D sensor uncertainty on the computation of 3D model transformations. We use this analysis to bound the uncertainty in the transformation parameters, and the uncertainty associated with transforming other model features into the image. We also examine the impact of such transformation uncertainty on recognition methods.
</description>
<pubDate>Wed, 01 Jul 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5959</guid>
<dc:date>1992-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition</title>
<link>https://hdl.handle.net/1721.1/5958</link>
<description>Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition
Shashua, Amnon
This paper addresses the problem of  recovering relative structure, in the form of an  invariant, referred to as projective structure,  from two views of a 3D scene. The invariant  structure is computed without any prior  knowledge of camera geometry, or internal  calibration, and with the property that  perspective and orthographic projections  are treated alike, namely, the system makes  no assumption regarding the existence of  perspective distortions in the input images.
</description>
<pubDate>Tue, 01 Sep 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5958</guid>
<dc:date>1992-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Why Do We See Three-dimensional Objects?</title>
<link>https://hdl.handle.net/1721.1/5957</link>
<description>Why Do We See Three-dimensional Objects?
Marill, Thomas
When we look at certain line-drawings, we  see three-dimensional objects. The question  is why; why not just see two-dimensional  images? We theorize that we see objects  rather than images because the objects we  see are, in a certain mathematical sense,  less complex than the images; and that  furthermore the particular objects we see will  be the least complex of the available  alternatives. Experimental data supporting the  theory is reported. The work is based on  ideas of Solomonoff, Kolmogorov, and the  "minimum description length'' concepts of  Rissanen.
</description>
<pubDate>Mon, 01 Jun 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5957</guid>
<dc:date>1992-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Limitations of Geometric Hashing in the Presence of Gaussian Noise</title>
<link>https://hdl.handle.net/1721.1/5956</link>
<description>Limitations of Geometric Hashing in the Presence of Gaussian Noise
Sarachik, Karen B.
This paper presents a detailed error analysis of geometric hashing for 2D object recognition. We analytically derive the probability of false positives and negatives as a function of the number of model and image features and of occlusion, using a 2D Gaussian noise model. The results are presented in the form of ROC (receiver operating characteristic) curves, which demonstrate that the 2D Gaussian error model always has better performance than that of the bounded uniform model. They also directly indicate the optimal performance that can be achieved for a given clutter and occlusion rate, and how to choose the thresholds to achieve these rates.
</description>
<pubDate>Thu, 01 Oct 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5956</guid>
<dc:date>1992-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Causal Reconstruction</title>
<link>https://hdl.handle.net/1721.1/5955</link>
<description>Causal Reconstruction
Borchardt, Gary C.
Causal reconstruction is the task of reading a written causal description of a physical behavior, forming an internal model of the described activity, and demonstrating comprehension through question answering. This task is difficult because written descriptions often do not specify exactly how referenced events fit together. This article (1) characterizes the causal reconstruction problem, (2) presents a representation called transition space, which portrays events in terms of "transitions,'' or collections of changes expressible in everyday language, and (3) describes a program called PATHFINDER, which uses the transition space representation to perform causal reconstruction on simplified English descriptions of physical activity.
</description>
<pubDate>Mon, 01 Feb 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5955</guid>
<dc:date>1993-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complexity as a Scale-Space for the Medial Axis Transform</title>
<link>https://hdl.handle.net/1721.1/5954</link>
<description>Complexity as a Scale-Space for the Medial Axis Transform
Chaney, Ronald
The medial axis skeleton is a thin line graph  that preserves the topology of a region. The  skeleton has often been cited as a useful  representation for shape description, region  interpretation, and object recognition.  Unfortunately, the computation of the skeleton  is extremely sensitive to variations in the  bounding contour. In this paper, we describe  a robust method for computing the medial  axis skeleton across a variety of scales. The  resulting scale-space is parametric with the  complexity of the skeleton, where the  complexity is defined as the number of  branches in the skeleton.
</description>
<pubDate>Fri, 01 Jan 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5954</guid>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Global Approach to Parameter Estimation of Chaotic Dynamical Systems</title>
<link>https://hdl.handle.net/1721.1/5953</link>
<description>A Global Approach to Parameter Estimation of Chaotic Dynamical Systems
Siapas, Athanassios G.
We present a novel approach to parameter  estimation of systems with complicated  dynamics, as well as evidence for the  existence of a universal power law that  enables us to quantify the dependence  of global geometry on small changes in the  parameters of the system. This power law  gives rise to what seems to be a new  dynamical system invariant.
</description>
<pubDate>Tue, 01 Dec 1992 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5953</guid>
<dc:date>1992-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data and Model-Driven Selection Using Parallel-Line Groups</title>
<link>https://hdl.handle.net/1721.1/5952</link>
<description>Data and Model-Driven Selection Using Parallel-Line Groups
Syeda-Mahmood, Tanveer F.
A key problem in model-based object recognition is selection, namely, the problem of isolating regions in an image that are likely to come from a single object. This isolation can be either based solely on image data (data-driven) or can incorporate the knowledge of the model object (model-driven). In this paper we present an approach that exploits the property of closely-spaced parallelism between lines on objects to achieve data and model-driven selection. Specifically, we present a method of identifying groups of closely-spaced parallel lines in images that generates a linear number of small-sized and reliable groups, thus meeting several of the desirable requirements of a grouping scheme for recognition. The line groups generated form the basis for data and model-driven selection. Data-driven selection is achieved by selecting salient line groups as judged by a saliency measure that emphasizes the likelihood of the groups coming from single objects. The approach to model-driven selection, on the other hand, uses the description of closely-spaced parallel line groups on the model object to selectively generate line groups in the image that are likely to be the projections of the model groups under a set of allowable transformations, taking into account the effect of occlusions, illumination changes, and imaging errors. We then discuss the utility of line groups-based selection in the context of reducing the search involved in recognition, both as an independent selection mechanism, and when used in combination with other cues such as color. Finally, we present results that indicate a vast improvement in the performance of a recognition system that is integrated with parallel line groups-based selection.
</description>
<pubDate>Sat, 01 May 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5952</guid>
<dc:date>1993-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simplified Method for Deriving Equations of Motion For Continuous Systems with Flexible Members</title>
<link>https://hdl.handle.net/1721.1/5951</link>
<description>A Simplified Method for Deriving Equations of Motion For Continuous Systems with Flexible Members
Singer, Neil C.; Seering, Warren P.
A method is proposed for deriving dynamical  equations for systems with both rigid and  flexible components. During the derivation,  each flexible component of the system is  represented by a "surrogate element" which  captures the response characteristics of that  component and is easy to mathematically  manipulate. The derivation proceeds  essentially as if each surrogate element were  a rigid body. Application of an extended form  of Lagrange's equation yields a set of  simultaneous differential equations which can  then be transformed to be the exact, partial  differential equations for the original flexible  system. This method's use facilitates  equation generation either by an analyst or  through application of software-based  symbolic manipulation.
</description>
<pubDate>Sat, 01 May 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5951</guid>
<dc:date>1993-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Application of Charge Detection to Dynamic Contact Sensing</title>
<link>https://hdl.handle.net/1721.1/5950</link>
<description>Application of Charge Detection to Dynamic Contact Sensing
Eberman, Brian; Salisbury, J. Kenneth
The manipulation contact forces convey substantial information about the manipulation state. This paper addresses the fundamental problem of interpreting the force signals without any additional manipulation context. Techniques based on forms of the generalized sequential likelihood ratio test are used to segment individual strain signals into statistically equivalent pieces. We report on our experimental development of the segmentation algorithm and on its results for contact states. The sequential likelihood ratio test is reviewed and some of its special cases and optimal properties are discussed. Finally, we conclude by discussing extensions to the techniques and a contact interpretation framework.
</description>
<pubDate>Mon, 01 Mar 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5950</guid>
<dc:date>1993-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Method for Eliminating Skew Introduced by Non-Uniform Buffer Delay and Wire Lengths in Clock Distribution Trees</title>
<link>https://hdl.handle.net/1721.1/5949</link>
<description>A Method for Eliminating Skew Introduced by Non-Uniform Buffer Delay and Wire Lengths in Clock Distribution Trees
Wu, Henry M.
The computation of a piecewise smooth function that approximates a finite set of data points is decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence, the segmentation of the data into classes that consist of the sets of points best approximated by each model, and second, the computation of the normalized discriminant functions for each induced class. The approximating function is then computed as the optimal estimator with respect to this measure field. Applications to image processing and time series prediction are presented as well.
</description>
<pubDate>Thu, 01 Apr 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5949</guid>
<dc:date>1993-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building Brains for Bodies</title>
<link>https://hdl.handle.net/1721.1/5948</link>
<description>Building Brains for Bodies
Brooks, Rodney; Stein, Lynn A.
We describe a project to capitalize on newly  available levels of computational resources in  order to understand human cognition. We will  build an integrated physical system including  vision, sound input and output, and dextrous  manipulation, all controlled by a continuously  operating large scale parallel MIMD computer.  The resulting system will learn to "think'' by  building on its bodily experiences to  accomplish progressively more abstract  tasks. Past experience suggests that in  attempting to build such an integrated system  we will have to fundamentally change the way  artificial intelligence, cognitive science,  linguistics, and philosophy think about the  organization of intelligence. We expect to be  able to better reconcile the theories that will  be developed with current work in  neuroscience.
</description>
<pubDate>Sun, 01 Aug 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5948</guid>
<dc:date>1993-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Why Stereo Vision is Not Always About 3D Reconstruction</title>
<link>https://hdl.handle.net/1721.1/5947</link>
<description>Why Stereo Vision is Not Always About 3D Reconstruction
Grimson, W. Eric L.
It is commonly assumed that the goal of stereo vision is computing explicit 3D scene reconstructions. We show that very accurate camera calibration is needed to support this, and that such accurate calibration is difficult to achieve and maintain. We argue that for tasks like recognition, figure/ground separation is more important than 3D depth reconstruction, and demonstrate a stereo algorithm that supports figure/ground separation without 3D reconstruction.
</description>
<pubDate>Thu, 01 Jul 1993 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5947</guid>
<dc:date>1993-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Object Recognition Using No Higher Than Second or Third Order Statistics of the Image</title>
<link>https://hdl.handle.net/1721.1/5946</link>
<description>Direct Object Recognition Using No Higher Than Second or Third Order Statistics of the Image
Nagao, Kenji; Horn, Berthold
Novel algorithms for object recognition are described that directly recover the transformations relating the image to its model. Unlike methods fitting the typical conventional framework, these new methods do not require exhaustive search for each feature correspondence in order to solve for the transformation. Yet they allow simultaneous object identification and recovery of the transformation. Given hypothesized potentially corresponding regions in the model and data (2D views) --- which are from planar surfaces of the 3D objects --- these methods allow direct computation of the parameters of the transformation by which the data may be generated from the model. We propose two algorithms: one based on invariants derived from no higher than second and third order moments of the image, the other via a combination of the affine properties of geometrical transformations and the differential attributes of the image. Empirical results on natural images demonstrate the effectiveness of the proposed algorithms. A sensitivity analysis of the algorithm is presented. We demonstrate in particular that the differential method is quite stable against perturbations --- although not without some error --- when compared with conventional methods. We also demonstrate mathematically that even a single point correspondence suffices, theoretically at least, to recover affine parameters via the differential method.
</description>
<pubDate>Fri, 01 Dec 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5946</guid>
<dc:date>1995-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognizing 3D Object Using Photometric Invariant</title>
<link>https://hdl.handle.net/1721.1/5945</link>
<description>Recognizing 3D Object Using Photometric Invariant
Nagao, Kenji; Grimson, Eric
In this paper we describe a new efficient algorithm for recognizing 3D objects by combining photometric and geometric invariants. Some photometric properties are derived that are invariant to changes of illumination and to relative object motion with respect to the camera and/or the lighting source in 3D space. We argue that conventional color constancy algorithms cannot be used in the recognition of 3D objects. Further, we show that recognition does not require full constancy of colors; rather, it only needs something that remains unchanged under the varying light conditions and poses of the objects. Combining the derived color invariants and the spatial constraints on the object surfaces, we identify corresponding positions in the model and the data space coordinates, using centroid invariance of corresponding groups of feature positions. Tests are given to show the stability and efficiency of our approach to 3D object recognition.
</description>
<pubDate>Sat, 22 Apr 1995 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5945</guid>
<dc:date>1995-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>A Security Kernel Based on the Lambda-Calculus</title>
<link>https://hdl.handle.net/1721.1/5944</link>
<description>A Security Kernel Based on the Lambda-Calculus
Rees, Jonathan A.
Cooperation between independent agents depends upon establishing a degree of security. Each of the cooperating agents needs assurance that the cooperation will not endanger resources of value to that agent. In a computer system, a computational mechanism can assure safe cooperation among the system's users by mediating resource access according to the desired security policy. Such a mechanism, which is called a security kernel, lies at the heart of many operating systems and programming environments. The report describes Scheme 48, a programming environment whose design is guided by established principles of operating system security. Scheme 48's security kernel is small, consisting of the call-by-value lambda-calculus with a few simple extensions to support abstract data types, object mutation, and access to hardware resources. Each agent (user or subsystem) has a separate evaluation environment that holds objects representing privileges granted to that agent. Because environments ultimately determine availability of object references, protection and sharing can be controlled largely by the way in which environments are constructed. I will describe experience with Scheme 48 that shows how it serves as a robust and flexible experimental platform. Two successful applications of Scheme 48 are the programming environment for the Cornell mobile robots, where Scheme 48 runs with no (other) operating system support, and a secure multi-user environment that runs on workstations.
</description>
<pubDate>Wed, 13 Mar 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5944</guid>
<dc:date>1996-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>Edge and Mean Based Image Compression</title>
<link>https://hdl.handle.net/1721.1/5943</link>
<description>Edge and Mean Based Image Compression
Desai, Ujjaval Y.; Mizuki, Marcelo M.; Masaki, Ichiro; Horn, Berthold K.P.
In this paper, we present a static image compression algorithm for very low bit rate applications. The algorithm reduces spatial redundancy present in images by extracting and encoding edge and mean information. Since the human visual system is highly sensitive to edges, an edge-based compression scheme can produce intelligible images at high compression ratios. We present good quality results for facial as well as textured, 256 x 256 color images at 0.1 to 0.3 bpp. The algorithm described in this paper was designed for high performance, keeping hardware implementation issues in mind. In the next phase of the project, which is currently underway, this algorithm will be implemented in hardware, and new edge-based color image sequence compression algorithms will be developed to achieve compression ratios of over 100, i.e., less than 0.12 bpp from 12 bpp. Potential applications include low power, portable video telephones.
</description>
<pubDate>Fri, 01 Nov 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5943</guid>
<dc:date>1996-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Function Application on a DNA Substrate</title>
<link>https://hdl.handle.net/1721.1/5942</link>
<description>Parallel Function Application on a DNA Substrate
Blumberg, Andrew Justin
In this paper I present a new model that employs a biological (specifically DNA-based) substrate for performing computation. Specifically, I describe strategies for performing parallel function application in the DNA-computing models described by Adleman, Cai et al., and Liu et al. Employing only DNA operations which can presently be performed, I discuss some direct algorithms for computing a variety of useful mathematical functions on DNA, culminating in an algorithm for minimizing an arbitrary continuous function. In addition, computing genetic algorithms on a DNA substrate is briefly discussed.
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5942</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>General Purpose Parallel Computation on a DNA Substrate</title>
<link>https://hdl.handle.net/1721.1/5941</link>
<description>General Purpose Parallel Computation on a DNA Substrate
Blumberg, Andrew Justin
In this paper I describe and extend a new DNA computing paradigm introduced in Blumberg for building massively parallel machines in the DNA-computing models described by Adleman, Cai et al., and Liu et al. Employing only DNA operations which have been reported as successfully performed, I present an implementation of a Connection Machine, a SIMD (single-instruction multiple-data) parallel computer, as an illustration of how to apply this approach to building computers in this domain (and as an implicit demonstration of PRAM equivalence). This is followed by a description of how to implement a MIMD (multiple-instruction multiple-data) parallel machine. The implementations described herein differ most from existing models in that they employ explicit communication between processing elements (and hence strands of DNA).
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5941</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complex Feature Recognition: A Bayesian Approach for Learning to Recognize Objects</title>
<link>https://hdl.handle.net/1721.1/5940</link>
<description>Complex Feature Recognition: A Bayesian Approach for Learning to Recognize Objects
Viola, Paul
We have developed a new Bayesian framework for visual object recognition which is based on the insight that images of objects can be modeled as a conjunction of local features. This framework can be used to derive both an object recognition algorithm and an algorithm for learning the features themselves. The overall approach, called complex feature recognition or CFR, is unique for several reasons: it is broadly applicable to a wide range of object types, it makes constructing object models easy, it is capable of identifying either the class or the identity of an object, and it is computationally efficient, requiring time proportional to the size of the image. Instead of a single simple feature such as an edge, CFR uses a large set of complex features that are learned from experience with model objects. The response of a single complex feature contains much more class information than does a single edge. This significantly reduces the number of possible correspondences between the model and the image. In addition, CFR takes advantage of a type of image processing called 'oriented energy'. Oriented energy is used to efficiently pre-process the image to eliminate some of the difficulties associated with changes in lighting and pose.
</description>
<pubDate>Fri, 01 Nov 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5940</guid>
<dc:date>1996-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dense Depth Maps from Epipolar Images</title>
<link>https://hdl.handle.net/1721.1/5939</link>
<description>Dense Depth Maps from Epipolar Images
Mellor, J.P.; Teller, Seth; Lozano-Perez, Tomas
Recovering three-dimensional information from two-dimensional images is the fundamental goal of stereo techniques. The problem of recovering depth (three-dimensional information) from a set of images is essentially the correspondence problem: given a point in one image, find the corresponding point in each of the other images. Finding potential correspondences usually involves matching some image property. If the images are from nearby positions, they will vary only slightly, simplifying the matching process. Once a correspondence is known, solving for the depth is simply a matter of geometry. Real images are composed of noisy, discrete samples; therefore the calculated depth will contain error. This error is a function of the baseline, or distance between the images. Longer baselines result in more precise depths. This leads to a conflict: short baselines simplify the matching process but produce imprecise results; long baselines produce precise results but complicate the matching process. In this paper, we present a method for generating dense depth maps from large sets (thousands) of images taken from arbitrary positions. Long baseline images improve the accuracy. Short baseline images and the large number of images greatly simplify the correspondence problem, removing nearly all ambiguity. The algorithm presented is completely local and for each pixel generates an evidence versus depth and surface normal distribution. In many cases, the distribution contains a clear and distinct global maximum. The location of this peak determines the depth, and its shape can be used to estimate the error. The distribution can also be used to perform a maximum likelihood fit of models directly to the images. We anticipate that the ability to perform maximum likelihood estimation from purely local calculations will prove extremely useful in constructing three-dimensional models from large sets of images.
</description>
<pubDate>Fri, 01 Nov 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5939</guid>
<dc:date>1996-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lens Distortion Calibration Using Point Correspondences</title>
<link>https://hdl.handle.net/1721.1/5938</link>
<description>Lens Distortion Calibration Using Point Correspondences
Stein, Gideon P.
This paper describes a new method for lens distortion calibration using only point correspondences in multiple views, without the need to know either the 3D location of the points or the camera locations. The standard lens distortion model is a model of the deviations of a real camera from the ideal pinhole or projective camera model. Given multiple views of a set of corresponding points taken by ideal pinhole cameras, there exist epipolar and trilinear constraints among pairs and triplets of these views. In practice, due to noise in the feature detection and due to lens distortion, these constraints do not hold exactly and we get some error. The calibration is a search for the lens distortion parameters that minimize this error. Using simulation and experimental results with real images, we explore the properties of this method. We describe the use of this method with the standard lens distortion model, radial and decentering, but it could also be used with any other parametric distortion models. Finally we demonstrate that lens distortion calibration improves the accuracy of 3D reconstruction.
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5938</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Methods for Estimation of Structure and Motion from Three Views</title>
<link>https://hdl.handle.net/1721.1/5937</link>
<description>Direct Methods for Estimation of Structure and Motion from Three Views
Stein, Gideon P.; Shashua, Amnon
We describe a new direct method for estimating structure and motion from image intensities of multiple views. We extend the direct methods of Horn and Weldon to three views. Adding the third view enables us to solve for motion, and compute a dense depth map of the scene, directly from image spatio-temporal derivatives in a linear manner without first having to find point correspondences or compute optical flow. We describe the advantages and limitations of this method, which are then verified through simulation and experiments with real images.
</description>
<pubDate>Sun, 01 Dec 1996 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5937</guid>
<dc:date>1996-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Delta Tree: An Object-Centered Approach to Image-Based Rendering</title>
<link>https://hdl.handle.net/1721.1/5936</link>
<description>The Delta Tree: An Object-Centered Approach to Image-Based Rendering
Dally, William J.; McMillan, Leonard; Bishop, Gary; Fuchs, Henry
This paper introduces the delta tree, a data structure that represents an object using a set of reference images. It also describes an algorithm for generating arbitrary re-projections of an object by traversing its delta tree. Delta trees are an efficient representation in terms of both storage and rendering performance. Each node of a delta tree stores an image taken from a point on a sampling sphere that encloses the object. Each image is compressed by discarding pixels that can be reconstructed by warping its ancestors' images to the node's viewpoint. The partial image stored at each node is divided into blocks and represented in the frequency domain. The rendering process generates an image at an arbitrary viewpoint by traversing the delta tree from a root node to one or more of its leaves. A subdivision algorithm selects only the required blocks from the nodes along the path. For each block, only the frequency components necessary to reconstruct the final image at an appropriate sampling density are used. This frequency selection mechanism handles both antialiasing and level-of-detail within a single framework. A complex scene is initially rendered by compositing images generated by traversing the delta trees of its components. Once the reference views of a scene have been rendered in this manner, the entire scene can be reprojected to an arbitrary viewpoint by traversing its own delta tree. Our approach is limited to generating views of an object from outside the object's convex hull. In practice we work around this problem by subdividing objects to render views from within the convex hull.
</description>
<pubDate>Fri, 02 May 1997 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5936</guid>
<dc:date>1997-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Visible Decomposition: Real-Time Path Planning in Large Planar Environments</title>
<link>https://hdl.handle.net/1721.1/5935</link>
<description>Visible Decomposition: Real-Time Path Planning in Large Planar Environments
Maron, Oded; Lozano-Perez, Tomas
We describe a method called Visible Decomposition for computing collision-free paths in real time through a planar environment with a large number of obstacles. This method divides space into local visibility graphs, ensuring that all operations are local. The search time is kept low since the number of regions is proved to be small. We analyze the computational demands of the algorithm and the quality of the paths it produces. In addition, we show test results on a large simulation testbed.
</description>
<pubDate>Mon, 01 Jun 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5935</guid>
<dc:date>1998-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Corpus-Based Techniques for Word Sense Disambiguation</title>
<link>https://hdl.handle.net/1721.1/5934</link>
<description>Corpus-Based Techniques for Word Sense Disambiguation
Levow, Gina-Anne
The need for robust and easily extensible  systems for word sense disambiguation  coupled with successes in training systems  for a variety of tasks using large on-line  corpora has led to extensive research into  corpus-based statistical approaches to this  problem. Promising results have been  achieved by vector space representations of  context, clustering combined with a semantic  knowledge base, and decision lists based on  collocational relations.  We evaluate these  techniques with respect to three important  criteria: how their definition of context affects  their ability to incorporate different types of  disambiguating information, how they define  similarity among senses, and how easily they  can generalize to new senses. The strengths  and weaknesses of these systems provide  guidance for future systems which must  capture and model a variety of disambiguating  information, both syntactic and semantic.
</description>
<pubDate>Wed, 27 May 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5934</guid>
<dc:date>1998-05-27T00:00:00Z</dc:date>
</item>
<item>
<title>Leaderless Distributed Hierarchy Formation</title>
<link>https://hdl.handle.net/1721.1/5933</link>
<description>Leaderless Distributed Hierarchy Formation
Beal, Jacob
I present a system for robust leaderless  organization of an amorphous network into hierarchical clusters. This  system, which assumes that nodes are spatially embedded and can only  talk to neighbors within a given radius, scales to networks of arbitrary  size and converges rapidly. The amount of data stored at each  node is logarithmic in the diameter of the network, and the hierarchical  structure produces an addressing scheme such that there is an  invertible relation between distance and address for any pair of nodes.  The system adapts automatically to stopping failures, network  partition, and reorganization.
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5933</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nonparametric Belief Propagation and Facial Appearance Estimation</title>
<link>https://hdl.handle.net/1721.1/5932</link>
<description>Nonparametric Belief Propagation and Facial Appearance Estimation
Sudderth, Erik B.; Ihler, Alexander T.; Freeman, William T.; Willsky, Alan S.
In many applications of graphical models arising in computer vision, the hidden variables of interest are most naturally specified by continuous, non-Gaussian distributions. There exist inference algorithms for discrete approximations to these continuous distributions, but for the high-dimensional variables typically of interest, discrete inference becomes infeasible. Stochastic methods such as particle filters provide an appealing alternative. However, existing techniques fail to exploit the rich structure of the graphical models describing many vision problems. Drawing on ideas from regularized particle filters and belief propagation (BP), this paper develops a nonparametric belief propagation (NBP) algorithm applicable to general graphs. Each NBP iteration uses an efficient sampling procedure to update kernel-based approximations to the true, continuous likelihoods. The algorithm can accommodate an extremely broad class of potential functions, including nonparametric representations. Thus, NBP extends particle filtering methods to the more general vision problems that graphical models can describe. We apply the NBP algorithm to infer component interrelationships in a parts-based face model, allowing location and reconstruction of occluded features.
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5932</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Supercomputing with MIT Matlab</title>
<link>https://hdl.handle.net/1721.1/5931</link>
<description>Interactive Supercomputing with MIT Matlab
Husbands, Parry; Isbell, Charles Lee, Jr.; Edelman, Alan
This paper describes MITMatlab, a system that enables users of supercomputers or networked PCs to work on large data sets within Matlab transparently. MITMatlab is based on the Parallel Problems Server (PPServer), a standalone 'linear algebra server' that provides a mechanism for running distributed memory algorithms on large data sets. The PPServer and MITMatlab enable high-performance interactive supercomputing. With such a tool, researchers can now use Matlab as more than a prototyping tool for experimenting with small problems. Instead, MITMatlab makes it possible to visualize and operate interactively on large data sets. This has implications not only in supercomputing, but for Artificial Intelligence applications such as Machine Learning, Information Retrieval and Image Processing.
</description>
<pubDate>Tue, 28 Jul 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5931</guid>
<dc:date>1998-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Multiple Scales in Small-World Networks</title>
<link>https://hdl.handle.net/1721.1/5930</link>
<description>Multiple Scales in Small-World Networks
Kasturirangan, Rajesh
Small-world architectures may be implicated in a range of phenomena, from networks of neurons in the cerebral cortex to social networks and propagation of viruses. Small-world networks are interpolations of regular and random networks that retain the advantages of both regular and random networks by being highly clustered like regular networks and having small average path length between nodes, like random networks. While most of the recent attention on small-world networks has focused on the effect of introducing disorder/randomness into a regular network, we show that the fundamental mechanism behind the small-world phenomenon is not disorder/randomness, but the presence of connections of many different length scales. Consequently, in order to explain the small-world phenomenon, we introduce the concept of multiple scale networks and then state the multiple length scale hypothesis. We show that small-world behavior in randomly rewired networks is a consequence of features common to all multiple scale networks. To support the multiple length scale hypothesis, novel network architectures are introduced that need not be a result of random rewiring of a regular network. In each case it is shown that whenever the network exhibits small-world behavior, it also has connections of diverse length scales. We also show that the distribution of the length scales of the new connections is significantly more important than whether the new connections are long range, medium range or short range.
</description>
<pubDate>Wed, 11 Aug 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5930</guid>
<dc:date>1999-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Amorphous Computing</title>
<link>https://hdl.handle.net/1721.1/5929</link>
<description>Amorphous Computing
Abelson, Harold; Allen, Don; Coore, Daniel; Hanson, Chris; Homsy, George; Knight, Thomas F., Jr.; Nagpal, Radhika; Rauch, Erik; Sussman, Gerald Jay; Weiss, Ron
Amorphous computing is the development of organizational principles and programming languages for obtaining coherent behaviors from the cooperation of myriads of unreliable parts that are interconnected in unknown, irregular, and time-varying ways. The impetus for amorphous computing comes from developments in microfabrication and fundamental biology, each of which is the basis of a kernel technology that makes it possible to build or grow huge numbers of almost-identical information-processing units at almost no cost. This paper sets out a research agenda for realizing the potential of amorphous computing and surveys some initial progress, both in programming and in fabrication. We describe some approaches to programming amorphous systems, which are inspired by metaphors from biology and physics. We also present the basic ideas of cellular computing, an approach to constructing digital-logic circuits within living cells by representing logic levels by concentrations of DNA-binding proteins.
</description>
<pubDate>Sun, 29 Aug 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5929</guid>
<dc:date>1999-08-29T00:00:00Z</dc:date>
</item>
<item>
<title>Co-dimension 2 Geodesic Active Contours for MRA Segmentation</title>
<link>https://hdl.handle.net/1721.1/5928</link>
<description>Co-dimension 2 Geodesic Active Contours for MRA Segmentation
Lorigo, Liana M.; Faugeras, Olivier; Grimson, W.E.L.; Keriven, Renaud; Kikinis, Ron; Westin, Carl-Fredrik
Automatic and semi-automatic magnetic resonance angiography (MRA) segmentation techniques can potentially save radiologists large amounts of time required for manual segmentation and can facilitate further data analysis. The proposed MRA segmentation method uses a mathematical modeling technique which is well-suited to the complicated curve-like structure of blood vessels. We define the segmentation task as an energy minimization over all 3D curves and use a level set method to search for a solution. Our approach is an extension of previous level set segmentation techniques to higher co-dimension.
</description>
<pubDate>Wed, 11 Aug 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5928</guid>
<dc:date>1999-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Boosting Image Database Retrieval</title>
<link>https://hdl.handle.net/1721.1/5927</link>
<description>Boosting Image Database Retrieval
Tieu, Kinh; Viola, Paul
We present an approach for image database  retrieval using a very large number of highly-selective features and simple on-line  learning. Our approach is predicated on the  assumption that each image is generated by  a sparse set of visual "causes" and that  images which are visually similar share  causes. We propose a mechanism for  generating a large number of complex  features which capture some aspects of this  causal structure. Boosting is used to learn  simple and efficient classifiers in this complex  feature space. Finally we will describe a  practical implementation of our retrieval  system on a database of 3000 images.
</description>
<pubDate>Fri, 10 Sep 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5927</guid>
<dc:date>1999-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Organizing a Global Coordinate System from Local Information on an Amorphous Computer</title>
<link>https://hdl.handle.net/1721.1/5926</link>
<description>Organizing a Global Coordinate System from Local Information on an Amorphous Computer
Nagpal, Radhika
This paper demonstrates that it is possible to generate a reasonably accurate coordinate system on randomly distributed processors, using only local information and local communication. By a coordinate system we mean that each element assigns itself a logical coordinate that maps to its global physical location, starting with no a priori knowledge of position or orientation. The algorithm presented is inspired by biological systems that use chemical gradients to determine the position of cells. Extensive analysis and simulation results are presented. Two key results are: there is a critical minimum average neighborhood size of 15 for good accuracy, and there is a fundamental limit on the resolution of any coordinate system determined strictly from local communication. We also demonstrate that using this algorithm, random distributions of processors produce significantly better accuracy than regular processor grids, such as those used by cellular automata. This has implications for discrete models of biology as well as for building smart sensor arrays.
</description>
<pubDate>Sun, 29 Aug 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5926</guid>
<dc:date>1999-08-29T00:00:00Z</dc:date>
</item>
<item>
<title>Trans-membrane Signal Transduction and Biochemical Turing Pattern Formation</title>
<link>https://hdl.handle.net/1721.1/5925</link>
<description>Trans-membrane Signal Transduction and Biochemical Turing Pattern Formation
Millonas, Mark M.; Rauch, Erik M.
The Turing mechanism for the production of a broken spatial symmetry in an initially homogeneous system of reacting and diffusing substances has attracted much interest as a potential model for certain aspects of morphogenesis, such as pre-patterning in the embryo, and has also served as a model for self-organization in more generic systems. The two features necessary for the formation of Turing patterns are short-range autocatalysis and long-range inhibition, which usually only occur when the diffusion rate of the inhibitor is significantly greater than that of the activator. This observation has sometimes been used to cast doubt on the applicability of the Turing mechanism to cellular patterning, since many messenger molecules that diffuse between cells do so at more-or-less similar rates. Here we show that stationary, symmetry-breaking Turing patterns can form in physiologically realistic systems even when the extracellular diffusion coefficients are equal; the kinetic properties of the 'receiver' and 'transmitter' proteins responsible for signal transduction will be primary factors governing this process.
</description>
<pubDate>Tue, 28 Sep 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5925</guid>
<dc:date>1999-09-28T00:00:00Z</dc:date>
</item>
<item>
<title>LISP Exercises</title>
<link>https://hdl.handle.net/1721.1/5924</link>
<description>LISP Exercises
Hart, Timothy P.; Levin, Michael
The following exercises are carefully graded to mesh with the sections in Chapter I, "The LISP Language", in the LISP 1.5 Programmer's Manual. Each exercise should be worked immediately after reading the manual section indicated.
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5924</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Graphical Typewriter: A Versatile Remote Console Idea</title>
<link>https://hdl.handle.net/1721.1/5923</link>
<description>The Graphical Typewriter: A Versatile Remote Console Idea
Minsky, Marvin
It would be useful to develop a combination  typewriter-plotter along the lines described  below. The device could be coupled to a  telephone line with a reasonably small  amount of electronics -- mostly relays.
</description>
<pubDate>Wed, 01 Jan 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5923</guid>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Language Input for a Computer Problem Solving System</title>
<link>https://hdl.handle.net/1721.1/5922</link>
<description>Natural Language Input for a Computer Problem Solving System
Bobrow, Daniel G.
This paper describes a computer program which accepts and "understands" a comfortable, but restricted, subset of one natural language, English. Certain difficulties are inherent in this problem of making a machine "understand" English. Within the limited framework of the subject matter understood by the program, many of these problems are solved or circumvented. I shall describe these problems and my solutions, and point out those solutions which I feel have general applicability. I will also indicate which must be replaced by more general methods to be really useful, and give my ideas about what general solutions to these particular problems might entail.
</description>
<pubDate>Sun, 01 Mar 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5922</guid>
<dc:date>1964-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>REVISED USER'S VERSION - Time Sharing LISP</title>
<link>https://hdl.handle.net/1721.1/5921</link>
<description>REVISED USER'S VERSION - Time Sharing LISP
Martin, William; Hart, Timothy
This memo describes changes to the LISP system by several people. The changes reduce printout and give the user more control over it. They also make it possible for LISP to communicate with the teletype and the disk. The last sections describe programs available in the public files which are useful for printing, editing, and debugging LISP functions.
</description>
<pubDate>Wed, 01 Apr 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5921</guid>
<dc:date>1964-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Secondary Storage in LISP</title>
<link>https://hdl.handle.net/1721.1/5920</link>
<description>Secondary Storage in LISP
Edwards, Daniel J.
A principal limitation of LISP processors in many computations is that of inadequate primary random-access storage. This paper explores several methods of using a secondary storage medium (such as drums, disk files or magnetic tape) to augment primary storage capacity and points out some limitations of these methods.
</description>
<pubDate>Sun, 01 Dec 1963 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5920</guid>
<dc:date>1963-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Syntax of the New Language</title>
<link>https://hdl.handle.net/1721.1/5919</link>
<description>Syntax of the New Language
Levin, Michael
This is a definition of the syntax of the *** language. It consists of modifications and extensions of the "Revised Report on the Algorithmic Language ALGOL 60" which is printed in the "Communications of the ACM", January 1963. The paragraph numbering of that report is used in this paper. The corrections and additions are made partially in Backus normal form, and partially in English, and the choice has been made on the basis of convenience. For example, the use of the weak separator is described readily in a few sentences, whereas the modification to incorporate this into the syntax as described in Backus normal form would have been extensive.
</description>
<pubDate>Fri, 01 May 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5919</guid>
<dc:date>1964-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>New Language Storage Conventions</title>
<link>https://hdl.handle.net/1721.1/5918</link>
<description>New Language Storage Conventions
Levin, Michael
These conventions are for the implementation of the new language on a large computer on which time-sharing is the standard mode of operation. Each user is at any time assigned a certain amount of primary storage. This can be the entire memory of the machine for non-time-shared operation. When this quota is filled, then it is necessary either to extend it, or to have the reclaimer routine compact the user's storage. This decision can be made at run time and may be based on the user's storage requirements, and on the cost of primary memory at that particular instant. This may in turn depend on the degree of saturation of the system.
</description>
<pubDate>Fri, 01 May 1964 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5918</guid>
<dc:date>1964-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>PDP-6 TECO</title>
<link>https://hdl.handle.net/1721.1/5917</link>
<description>PDP-6 TECO
Samson, Peter
TECO is a scope-keyboard text editor. It uses an on-line command language (which permits macro-definitions, conditionals, etc.) as well as text operations. The macro language permits the most sophisticated search, match, and substitution operations as well as simple typographical corrections to text.
</description>
<pubDate>Thu, 01 Jul 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5917</guid>
<dc:date>1965-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>PDP-6 LISP Input-Output for the Display</title>
<link>https://hdl.handle.net/1721.1/5914</link>
<description>PDP-6 LISP Input-Output for the Display
Martin, William
An intermediate level language for display programming has been embedded in LISP 1.5. The language is intended as a basis for higher analysis of display information. Through the construction of a hierarchy of LISP functions it will be possible to assign a complicated meaning to a series of simple light pen motions, or to construct a complex picture. The intermediate level of language should abstract from the light pen trajectory the information which these LISP functions require, saving both time and programming effort. The first section of this memo discusses the system and gives programming examples. The details of the examples can be understood by reading the second section, which discusses the implementation and the LISP functions available.
</description>
<pubDate>Tue, 01 Jun 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5914</guid>
<dc:date>1965-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>MACTAP: A PDP-6 DECtape Handling Package</title>
<link>https://hdl.handle.net/1721.1/5913</link>
<description>MACTAP: A PDP-6 DECtape Handling Package
Samson, Peter
MACTAP is a set of PDP-6 subroutines to read and write DECtape in the MAC file format (see MAC-M-249). Programmers can call these subroutines for input or output of ASCII data, which will be compatible with TECO files; or for binary (36-bit word) data. They were extracted mainly from PDP-6 TECO and arranged and checked out in their present form by Jack Holloway.
</description>
<pubDate>Wed, 01 Sep 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5913</guid>
<dc:date>1965-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>FLIP - A Format List Processor</title>
<link>https://hdl.handle.net/1721.1/5912</link>
<description>FLIP - A Format List Processor
Teitelman, Warren
This memo describes a notation, usable from within a LISP system, for expressing string manipulations such as those performed in COMIT. The COMIT formalism has been extended in several ways: the patterns (the left-half constituents of COMIT terminology) can be variable names or the results of computation; predicates can be associated with these elementary patterns, allowing more precise specification of the segments they match; the names of elementary patterns themselves may be variable or the results of computation; it is no longer necessary to restrict the operations to a linear string of characters (or words), since elementary patterns can themselves match structures; etc. Similar generalizations exist for formats, i.e. what corresponds to the right-half of the COMIT rule.
</description>
<pubDate>Sat, 01 Jul 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5912</guid>
<dc:date>1967-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of the Hand</title>
<link>https://hdl.handle.net/1721.1/5911</link>
<description>Design of the Hand
Minsky, Marvin
The following scheme for designing a general-purpose manipulator organ has many theoretical attractions. The basic idea is perhaps best conceived as a theoretical, or mathematical, idea. While it is unlikely that the actual system will be very much like it, it may have value as a sort of ideal against whose elegance we can match engineering and practical compromise.
</description>
<pubDate>Sun, 01 Aug 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5911</guid>
<dc:date>1965-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory of Computer Instructions</title>
<link>https://hdl.handle.net/1721.1/5910</link>
<description>A Theory of Computer Instructions
Maurer, Ward Douglas
This paper has arisen from an attempt to determine the nature of computer instructions from a viewpoint of general function and set theory. Mathematical machines, however the term is understood, are not adequate models for the computers of today; this is true whether we are talking about Turing machines, sequential machines, push-down automata, generalized sequential machines, or any of the other numerous machine models that have been formulated in the last fifteen years. Most of these models are either not general enough, as the sequential or Turing machines with their single input and output devices; or capable of accurately reproducing only one important programming feature; or in a sense too general (see discussion of sequential machines in Chapter 10 below). On the other hand, modern computers, whether they are binary, decimal, or mixed, whether they have one or two instructions per word, or one instruction covering several words, have several important common features. All of their instructions have input, output, and affected regions (in the sense of Definitions B and K below). The study of the input and output regions and the structure of affected regions of all the instructions on a given computer can provide a key to its logical efficiency.
</description>
<pubDate>Wed, 01 Sep 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5910</guid>
<dc:date>1965-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>MIDAS</title>
<link>https://hdl.handle.net/1721.1/5909</link>
<description>MIDAS
Samson, Peter
The MIDAS linking loader is a PDP-6 program to load relocatable-format output from the MIDAS assemblers, with facilities to handle symbolic cross-reference between independently assembled programs. Although it is arranged primarily to load from DECtape, the loader is able also to load paper-tape relocatable programs. To use the loader, load it off the MACDMP SYSTEM tape as the file STINK. (A file STINK NEW may exist, repairing old bugs or introducing new features.) Then the loader expects commands to be typed in on the on-line Teletype; two successive ALT MODE characters terminate the string of commands. The commands in a string are not performed until the string is thus terminated. While a command in a string has not been terminated, RUBOUT will erase the last typed-in character (and type it out again as a reminder). A command string may contain any number of commands, and the effect is the same whether the commands are together in one string or are in successively typed-in strings each delimited by two ALT MODES.
</description>
<pubDate>Tue, 01 Oct 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5909</guid>
<dc:date>1968-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Syntax and Display of Mathematical Expressions</title>
<link>https://hdl.handle.net/1721.1/5908</link>
<description>Syntax and Display of Mathematical Expressions
Martin, William
A LISP program converts a mathematical expression stored in list structure form into a text-book style visual display. To do this requires the selection and positioning of the individual symbols which make up the expression, using a combination of global and local information. This memo describes a table-driven picture-compiler which makes the necessary information available. Syntax rules have been written for a large class of mathematical expressions. These rules are simplified by the introduction of concepts concerning the relative position of symbols. In addition to the symbols and their coordinates the program sends a parsing of the symbols to the display. This program is a refinement of the system proposed by M.L. Minsky in Artificial Intelligence Memo 61, 'Mathscope: Part I'.
</description>
<pubDate>Thu, 01 Jul 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5908</guid>
<dc:date>1965-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Use of MACDMP</title>
<link>https://hdl.handle.net/1721.1/5907</link>
<description>Use of MACDMP
Samson, Peter
MACDMP is a PDP-6 program which can load from DECtape to core memory, dump core onto DECtape, or verify a previously dumped file against memory. Normally, just before it loads, it clears all of memory to 0 (except itself and locations 0 through 37); and, in general, it does not dump locations containing 0. (It also does not dump itself, or locations 0 through 37.) In this way, a short program uses only a few blocks on tape. MACDMP uses the MAC PDP-6 file structure and directory scheme, and writes files in mode 1.
</description>
<pubDate>Thu, 01 Jul 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5907</guid>
<dc:date>1965-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Useful Algebraic Property of Robinson's Unification Algorithm</title>
<link>https://hdl.handle.net/1721.1/5906</link>
<description>A Useful Algebraic Property of Robinson's Unification Algorithm
Hart, Timothy
This memo presupposes some acquaintance with "A Machine Oriented Logic Based on the Resolution Principle", J.A. Robinson, JACM Jan65. The reader unfamiliar with this paper should be able to get a general idea of the theorem if he knows that σA is a post operator indicating a minimal set of substitutions (most general substitution) necessary to transform all elements of the set of formulae, A, into the same element (to "unify" A), so that when σA exists, AσA is a set with one element (a "unit"). Example: A = {f(x), y, f(g(u)), f(g(z))}; σA = {g(u)/x, f(g(u))/y, u/z}; AσA = {f(g(u))}. Another most general unifier of A is {g(z)/x, f(g(z))/y, z/u}.
</description>
<pubDate>Mon, 01 Nov 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5906</guid>
<dc:date>1965-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Topics in Model Theory</title>
<link>https://hdl.handle.net/1721.1/5905</link>
<description>Topics in Model Theory
Levin, Michael
The concept of free as in "free group" is generalized to any first order theory. An interesting class of homomorphisms between models is discussed. Relations between model theory and abelian categories are discussed speculatively.  This paper represents an incomplete study and may contain serious errors. A knowledge of model theory, and of MIT course 18.892 in particular is assumed.
</description>
<pubDate>Sat, 01 Jan 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5905</guid>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A New Version of CTSS LISP</title>
<link>https://hdl.handle.net/1721.1/5904</link>
<description>A New Version of CTSS LISP
Fenichel, Robert R.; Moses, Joel
A new version of the CTSS LISP is now available. The new system provides additional data storage and several new functions and constants. The I/O capabilities, EXCISE, the error comments, and several routines have been improved. Much irrelevant code and many bugs have been removed. FAP source decks and BOD listings are available. The decks are organized so as to ease the job of assembling private LISP systems in which unneeded features are absent. Without reassembling, the user can create a private LISP system in which the data storage space has been arbitrarily allocated among binary program space, the push-down list, full word space, and free storage.
</description>
<pubDate>Tue, 01 Feb 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5904</guid>
<dc:date>1966-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Program Feature for CONVERT</title>
<link>https://hdl.handle.net/1721.1/5903</link>
<description>A Program Feature for CONVERT
Guzman, Adolfo; McIntosh, Harold
A program feature has been constructed for CONVERT, closely modeled after the similar facility found in many versions of LISP. Since it is functional or operational in nature, it has been included as a skeleton form, together with a number of related operator skeletons. This Memo describes them, and also the RUL mode, which allows the user to specify arbitrary components of a pattern as the result of a computation performed while the matching process is taking place.
</description>
<pubDate>Fri, 01 Apr 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5903</guid>
<dc:date>1966-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>POLYBRICK: Adventures in the Domain of Parallelepipeds</title>
<link>https://hdl.handle.net/1721.1/5902</link>
<description>POLYBRICK: Adventures in the Domain of Parallelepipeds
Guzman, Adolfo
A collection of programs tries to recognize, each one more successfully than its predecessor, 3-dimensional parallelepipeds (solids limited by 6 planes, parallel two-by-two), using as data 2-dimensional idealized projections. Special attention is given to the last of those programs; the method used is discussed in some detail and, in the light of its successes and failures, a more general one is proposed.
</description>
<pubDate>Sun, 01 May 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5902</guid>
<dc:date>1966-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>MAC PDP-6 DECtape File Structure</title>
<link>https://hdl.handle.net/1721.1/5901</link>
<description>MAC PDP-6 DECtape File Structure
Samson, Peter
The MAC system programs, MACDMP, TECO, and MIDAS, assume a certain data structure on DECtapes which they handle. Each DECtape has 1100 blocks of 200 words, numbered 0 through 1077. Block 0 and blocks 1070 through 1077 are not used by the MAC system. Block 100 of each tape contains the File Directory: a 200-word table describing the current contents of blocks 1 through 1067. The data on the tape is organized into files, each file consisting of one or more blocks. Each file has a name and a mode: the name is composed of 2 six-character subnames, and the mode is a two-bit number. The File Directory has space for 27 files.
</description>
<pubDate>Thu, 01 Jul 1965 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5901</guid>
<dc:date>1965-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Integration</title>
<link>https://hdl.handle.net/1721.1/5900</link>
<description>Symbolic Integration
Moses, Joel
A program has been written which is capable of integrating all but two of the problems solved by Slagle's symbolic integration program SAINT. In contrast to SAINT, it is a purely algorithmic program and it has achieved running times two to three orders of magnitude faster than SAINT. This program and some of the basic routines which it uses are described. A heuristic for integration, called the Edge heuristic, is presented. It is claimed that this heuristic, with the aid of a few algorithms, is capable of solving all the problems solved by the algorithmic program and many others as well.
</description>
<pubDate>Wed, 01 Jun 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5900</guid>
<dc:date>1966-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>PDP-6 LISP</title>
<link>https://hdl.handle.net/1721.1/5899</link>
<description>PDP-6 LISP
Samson, Peter
This is a mosaic description of PDP-6 LISP, intended for readers familiar with the LISP 1.5 Programmer's Manual or who have used LISP on some other computer. Some of the newer features (e.g. the display) are experimental and subject to change; in such respects this should not be regarded as a final document. Some distinctive characteristics: Top-level type-in is to EVAL. There is no EVALQUOTE. EQUAL will not correctly compare fixed-point numbers to floating-point. Also (ZEROP 0.0) is NIL. T and NIL evaluate to T and NIL. There are no *T* and F. Interpreted variables, and variables used free in compiled functions, are automatically SPECIAL and may be used without restriction to communicate values. Also any PROG and LAMBDA variables in a compiled function may be declared SPECIAL, and will be bound and restored correctly. COMMON does not exist. Flags are not allowed; elements on a property list of an atom are expected to be paired. MAP, MAPCAR, etc. assume the first argument is the function, and the second is the list. Defining of functions is usually done with DEFPROP.
</description>
<pubDate>Wed, 01 Jun 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5899</guid>
<dc:date>1966-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Figure Boundary Description Routines for the PDP-6 Vision Project</title>
<link>https://hdl.handle.net/1721.1/5898</link>
<description>Figure Boundary Description Routines for the PDP-6 Vision Project
White, John
As a step in the direction of "computer vision," several programs have been written which transform the output of a vidisector into some mathematical descriptions of the boundaries enclosing the objects in the field of view. Most of the discussion concerns the techniques used to transform a sequence of points, presumably representing a curve in the two-dimensional plane of view, into the best-fit conic-curve segment, or best-fit straight line. The resultant output of this stage is a list of such segments, one list for each boundary found.
</description>
<pubDate>Thu, 01 Sep 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5898</guid>
<dc:date>1966-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Summer Vision Programs</title>
<link>https://hdl.handle.net/1721.1/5897</link>
<description>Summer Vision Programs
Lamport, Leslie
We assume that we are given a square array that describes a scene. The name of the array will be "array." The number of points representing the side length of the array will be called "pts." (I.e., (pts)² is the total number of entries in the array.)
</description>
<pubDate>Sat, 01 Oct 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5897</guid>
<dc:date>1966-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A New Machine-Learning Technique Applied to the Game of Checkers</title>
<link>https://hdl.handle.net/1721.1/5896</link>
<description>A New Machine-Learning Technique Applied to the Game of Checkers
Griffith, Arnold
This paper describes a recent refinement of the machine-learning process employed by Samuel (1) in connection with his development of a checker playing program. Samuel's checker player operates in much the same way a human player does: by looking ahead, and by making a qualitative judgment of the strength of the board positions it encounters. A machine learning process is applied to the development of an accurate procedure for making this strength evaluation of board positions. Before discussing my modifications to Samuel's learning process, I should like to describe briefly Samuel's strength evaluation procedure, and the associated learning process.
</description>
<pubDate>Tue, 01 Mar 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5896</guid>
<dc:date>1966-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>CHAR PLOT</title>
<link>https://hdl.handle.net/1721.1/5883</link>
<description>CHAR PLOT
Sordillo, Donald
CHAR PLOT is a routine which enables one to use the Calcomp plotter as an output typewriter. This program is stored as CHPLOT BIN [English CHAR PLOT]. In use, a code, representing a character or command as defined in Appendix I, is placed into accumulator C. Upon calling the routine the plotter will either print a character, or set itself into one of several modes. The input to the routine is a word whose 8 low order bits contain a code and whose sign bit must be 0. The routine is entered by MOVE C, [WORD], PUSHJ P, PLOTC. A word = 0 stops everything and initializes the system. Note: The program starts off in lower case mode. While it is in this mode any attempt to issue a lower-case code causes the computer to hang up. It is suggested that the first call be used to set the routine to upper case, and the 8th bit in the code used to shift between upper and lower cases. The symbols P, C and CRKCHN are global and user-defined. Other symbols are PLOTC (normal entry point), UCTAB (beginning of upper case table), LCTAB (beginning of lower case table), and CLNGTH (a routine which returns the length of the character whose code was its argument in Acc. C).
</description>
<pubDate>Sat, 01 Oct 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5883</guid>
<dc:date>1966-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Description of the CNTOUR Program</title>
<link>https://hdl.handle.net/1721.1/5882</link>
<description>A Description of the CNTOUR Program
Krakauer, Lawrence J.
The CNTOUR program plots an intensity relief map of an image which is read from the vidisector camera (TV-B). It may be used as a general purpose aiming, monitoring and focusing program, especially for high-contrast images, for which it produces something like a line drawing.
</description>
<pubDate>Tue, 01 Nov 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5882</guid>
<dc:date>1966-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Step by Step Computer Solution of Three Problems in Non-Numerical Analysis</title>
<link>https://hdl.handle.net/1721.1/5881</link>
<description>A Step by Step Computer Solution of Three Problems in Non-Numerical Analysis
Martin, William
This memo describes the step by step solution of three problems from different fields of applied mathematics. These problems are solved by typing a series of computer commands for the manipulation of symbolic mathematical expressions. These commands are best typed at the PDP-6 console, so that the Type 30 display and the wider range of keyboard symbols can be used. The syntax of commands typed at the PDP-6 will be described. These commands are translated into a string of symbols which are sent to CTSS, where they are parsed into a LISP expression, which is then evaluated. The mathematical operators which are available in the system will be described and then the step by step solution of each of the problems will be given.
</description>
<pubDate>Fri, 01 Jul 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5881</guid>
<dc:date>1966-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Program Memo about EYE</title>
<link>https://hdl.handle.net/1721.1/5880</link>
<description>Program Memo about EYE
Samson, Peter
EYE is a program (on the Vision System tape with the name EYE BALL) which displays on the 340 field of view of the vidisector. The program is controlled by the light pen, which selects various modes and options; and by the box with four pots, to locate the exact area examined.
</description>
<pubDate>Thu, 01 Dec 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5880</guid>
<dc:date>1966-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardware Memo - Input Multiplexer Status</title>
<link>https://hdl.handle.net/1721.1/5879</link>
<description>Hardware Memo - Input Multiplexer Status
Noftsker, Russell
Note: Computer control of Input Multiplexer and Output Sample and Hold is available when clock and test switches on the I/O box are in "Computer Input" and "Computer Output" positions, respectively. Manual operation of the Input Multiplexer and Output Sample and Hold is available when the same switches are in "Clock Mode" and "Test Mode" respectively. In "Test Mode," output commands are derived from input channels 154 through 177 as noted in the current INPUT MULTIPLEXER STATUS. These channels are potentiometer readings from either Joy Stick Console, where Pot No. 1 is at the top and No. 10 is consecutively at bottom. See OUTPUT SAMPLE AND HOLD for Output Channel numbers.
</description>
<pubDate>Sat, 01 Oct 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5879</guid>
<dc:date>1966-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>PDP-6 Software Update</title>
<link>https://hdl.handle.net/1721.1/5878</link>
<description>PDP-6 Software Update
Eastlake, Donald E., III
Conventions of this memo: Most numbers written in Arabic numerals are octal, while all those written out in English are decimal. Underlining a character and immediately preceding it with a vertical bar indicates the character produced by holding down the control key while striking that character, except in the case of 1$ which represents an ALT MODE. Characters not indicatable with the character set used in this memo, or control of such a character, are described between angle brackets. The string from the open to the close angle bracket should be considered as one character, which may be controlled by underlining and preceding with a vertical bar. Lower case letters in a command string usually indicate a possibly optional variable, while capital letters or special characters are constant. Note the special conventions involving [cents] in the MACDMP section. Organization of PDP-6 Software: MACDMP is normally used to load system and user machine language programs. If, when one approaches the PDP-6, it is not in MACDMP (which is usually displaying a file directory), one should first try starting at location 177400, which is MACDMP's starting address. If this fails, be sure a system tape is mounted on drive number one and try reading in at location 0 (see appendix). If that loses, try locations 1 and 2. If still unsuccessful, try placing a paper tape of MACDMP in the paper tape reader, turning it on, and starting at location 20 (appendix). If all else fails you can conclude that most of memory is clobbered, and load a paper tape of MACDMP according to the instructions on the inside of the left door of the first bay of the PDP-6 to the left of the console.
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5878</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CONVERT</title>
<link>https://hdl.handle.net/1721.1/5877</link>
<description>CONVERT
Guzman, Adolfo; McIntosh, Harold
A programming language is described which is applicable to problems conveniently described by transformation rules. By this we mean that patterns may be prescribed, each being associated with a skeleton, so that a series of such pairs may be searched until a pattern is found which matches an expression to be transformed. The conditions for a match are governed by a code which allows sub-expressions to be identified and eventually substituted into the corresponding skeleton. The primitive patterns and primitive skeletons are described, as well as the principles which allow their elaboration into more complicated patterns and skeletons. The advantages of the language are that it allows one to apply transformation rules to lists and arrays as easily as strings, that both patterns and skeletons may be defined recursively, and that as a consequence programs may be stated quite concisely.
</description>
<pubDate>Wed, 01 Jun 1966 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5877</guid>
<dc:date>1966-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Primitive Recognizer of Figures in a Scene</title>
<link>https://hdl.handle.net/1721.1/5876</link>
<description>A Primitive Recognizer of Figures in a Scene
Guzman, Adolfo
Given a scene, as seen for instance from a T.V. camera or a picture, it is desired to analyze it to organize, differentiate and identify desired objects or classes of objects (i.e., patterns) in it. The present report describes a program, written in CONVERT, which partially achieves this goal. Two inputs to the program determine its behavior and response: 1. The scene to be analyzed, which is entered in a symbolic format (it may contain 3-dimensional and curved objects). 2. A symbolic description -- called the model -- of the class of objects we want to identify in the scene. Given a set of models for the objects we want to locate, and a scene or picture, the program will identify in it all those objects or figures which are similar to one of the models, provided they appear complete in the picture (i.e., no partial occlusion or hidden parts). Recognition is independent of position, orientation, size, etc.; it strongly depends on the topology of the model. Important restrictions and suppositions are: (a) the input is assumed perfect --noiseless-- and highly organized; (b) more than one model is, in general, required for the description of one object; and (c) only objects which appear unobstructed are recognized. Work is continuing in order to drop restriction (c) and to improve (a).
</description>
<pubDate>Sun, 01 Jan 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5876</guid>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vision Memo</title>
<link>https://hdl.handle.net/1721.1/5875</link>
<description>Vision Memo
Minsky, Marvin
This memo proposes a set of systems programs for vision work. Please comment immediately as we should start on it at once. Values stored outside an array range should have no effect, but set an overflow flag; values read outside a range are zero and also should set a flag. Coordinates normally occur as a dotted pair (x . y) in half words. For display purposes, normally the 10 most significant bits are used, but higher resolution options will be available. To specify a sub-array we have to state its size, location and mesh. All sub-arrays will be square. (Generalizing to rectangles is unwise because the natural generalization for later systems will be projective.)
</description>
<pubDate>Wed, 01 Feb 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5875</guid>
<dc:date>1967-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating Stereo Disparities</title>
<link>https://hdl.handle.net/1721.1/5874</link>
<description>Estimating Stereo Disparities
Minsky, Marvin
An interesting practical and theoretical problem is putting bounds on how much computation one needs to find the stereo-disparity between two narrow-angle stereo scenes. By narrow angle I mean situations wherein the angle subtended by the eyes is a very few degrees: the kind of correlation-disparity method discussed here probably isn't applicable to the wide-angle stereo we'll usually use for scene-analysis in the Project. The method we consider is to find the local maximum of local correlation between the left and right scenes, over a range of displacements along the eye-eye axis. Obviously this is a simple-minded method that will fail in certain situations: here we are not interested in bad cases so much as in getting estimates of the minimal computation in the favorable situations. A correlation can be considered as a properly-normalized sum of pairwise products of intensities (or other surface functions). The correlation, for each disparity d, is obtained by using pairs that are d units apart in visual angle, referred to a standard azimuth scale in each eye. One can imagine a scheme in which the pairs are all different in the retinas.
</description>
<pubDate>Wed, 01 Feb 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5874</guid>
<dc:date>1967-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remarks on Correlation Tracking</title>
<link>https://hdl.handle.net/1721.1/5873</link>
<description>Remarks on Correlation Tracking
Minsky, Marvin
The problem is to track the motion of part of a field of view. Let us assume that the scene is a two-dimensional picture in a plane perpendicular to the roll axis. (These simplifying assumptions, of course, are a main problem in estimating how the system works in real life.) So we can think of the picture as a function f(x,y) in some plane. Now suppose that at time t0 the scene is f0(x,y) and at some time later it has moved, and is ft(x,y). Suppose also that the scene has not changed, but has only been moved rigidly in the plane. Then an elegant mathematical way to estimate this motion is to compute the cross-correlation of the original and current picture. First let us review a basic simple mathematical fact. Given any function f(x) and any displacement Δ, it is true that ∫f(x)f(x)dx ≥ ∫f(x)f(x+Δ)dx.
</description>
<pubDate>Wed, 01 Mar 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5873</guid>
<dc:date>1967-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Tracking of Eye Motions</title>
<link>https://hdl.handle.net/1721.1/5872</link>
<description>Computer Tracking of Eye Motions
Minsky, Marvin; Papert, Seymour A.
This memo is to explain why the Artificial Intelligence group of Project MAC is developing methods for on-line tracking of human eye movements. It also gives a brief resume of results to date and the next steps.
</description>
<pubDate>Wed, 01 Mar 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5872</guid>
<dc:date>1967-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Quick Fail-Safe Procedure for Determining Whether the GCD of 2 Polynomials is 1</title>
<link>https://hdl.handle.net/1721.1/5871</link>
<description>A Quick Fail-Safe Procedure for Determining Whether the GCD of 2 Polynomials is 1
Moses, Joel
One of the most widely used routines in an algebraic manipulation system is a polynomial manipulation package (1,2,3). The crucial operation in such routines is the extraction of the Greatest Common Divisor (GCD) of two polynomials. This operation is crucial because of its frequent use and because it is an expensive operation in regard to time and space. Experiments by Collins (1) have shown that given two polynomials chosen at random, the GCD has a high probability of being 1. Taking into account this probability and the cost of obtaining a GCD (some GCDs of polynomials of degree 5 in two or three variables can take on the order of a minute on the 7094 (1)), it appears that a quick method of determining whether the GCD is exactly 1 would be profitable. While no such complete method is known to exist, a fail-safe procedure has been found and is described here. A fail-safe procedure is characterized by the fact that when it comes to a decision (in this case that the GCD is 1), the decision is correct. However, the conclusion (i.e., that the GCD is 1) may be true, and the procedure need not arrive at a decision regarding it. It is believed that the fail-safe procedure presented here (and its extension to the linear case) will arrive at a decision quite frequently when the GCD is actually 1.
</description>
<pubDate>Wed, 01 Mar 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5871</guid>
<dc:date>1967-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>CHAR PLOT</title>
<link>https://hdl.handle.net/1721.1/5870</link>
<description>CHAR PLOT
Sordillo, Donald
CHAR PLOT is a routine which enables one to use the CalComp plotter as a versatile output device. It is presently available as CHPLOT BIN (English CHAR PLOT) on tape MS 3. The program CHAR PLOT is normally called by a PUSHJ P, PLOTC with a code representing a command or character (as defined in Appendix I) in accumulator C. Upon calling, the routine will either plot a character or line, or perform an internal control function. A 0 code initializes the routine, erasing any unexecuted (buffered) commands.
</description>
<pubDate>Wed, 01 Mar 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5870</guid>
<dc:date>1967-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hardware and Program Memo About SERVO</title>
<link>https://hdl.handle.net/1721.1/5869</link>
<description>Hardware and Program Memo About SERVO
Beeler, Michael
SERVO is intended as an engineering and programming analyzing and debugging aid for use with devices connected through the input and output multiplexers to the PDP-6. Channel numbers and values to output, as well as some other numeric arguments, are in octal. Only the frequency of K, N, Q &amp; W, the duration of I &amp; U, and the argument of Z are decimal. Commands are single letters, as follows.
</description>
<pubDate>Wed, 01 Mar 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5869</guid>
<dc:date>1967-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Incorporating MIDAS Routines into PDP-6 LISP</title>
<link>https://hdl.handle.net/1721.1/5868</link>
<description>Incorporating MIDAS Routines into PDP-6 LISP
Silver, Roland
Some PDP-6 LISP users have felt a need for a way to incorporate MIDAS subroutines into LISP. LISP has been changed to let you do this, using files found on the LISP SYSTEM microtape. You write a routine for LISP in much the same way that you write any other MIDAS relocatable subroutine. You must, however, observe the constraints imposed by LISP's allocation and use of accumulators, and its methods of handling input, output, and interrupts. In addition, you require linkage to LISP before your routine can operate properly: The entry point(s) of the subroutine must be put on the property list(s) of the appropriate atom(s), and the address fields of the instructions pointing to other routines, to list structure, or to other LISP data structures must be set properly. This is done when LISP begins operation, after allocation but before going into its listen loop. We provide eight macros to ease the job of creating such linkages: SUBR, FSUBR, LSUBR, MACRO, QUOTE, E, SPECIAL, and SYM. If you write "SUBR name" at a location a in your routine, LISP will subsequently ascribe the property SUBR to the atom name, with entry location a. Similar remarks apply to the use of FSUBR, LSUBR, and MACRO.
</description>
<pubDate>Wed, 01 Mar 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5868</guid>
<dc:date>1967-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Boundary Detection</title>
<link>https://hdl.handle.net/1721.1/5867</link>
<description>On Boundary Detection
Herskovits, Annette; Binford, Thomas O.
A description is given of how edges of prismatic objects appear through a television camera serving as visual input to a computer. Two types of edge-finding predicates are proposed and compared, one linear in intensity, the other non-linear. A statistical analysis of both is carried out, assuming input data distorted by Gaussian noise. Both predicates have been implemented as edge-verifying procedures, i.e., procedures aiming at high sensitivity and limited to looking for edges when approximate locations and directions are given. Both procedures have been tried on actual scenes. Of the two, the non-linear procedure emerged as a satisfactory solution to line-verification because it performs well in spite of surface irregularities.
</description>
<pubDate>Wed, 01 Jul 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5867</guid>
<dc:date>1970-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal to ARPA for Research on Artificial Intelligence at MIT, 1970-1971</title>
<link>https://hdl.handle.net/1721.1/5866</link>
<description>Proposal to ARPA for Research on Artificial Intelligence at MIT, 1970-1971
Minsky, Marvin; Papert, Seymour A.
The MIT Artificial Intelligence Project has a variety of goals, all bound together by a search for principles of intelligent behavior. Among our immediate goals are to develop systems with practical applications for: visually-controlled automatic manipulation and physical-world problem-solving, machine understanding of natural language text and narrative, and advanced applied mathematics. The long-range goals are concerned with simplifying, unifying and extending the techniques of heuristic programming. We expect the results of our work to: make it easier to write and debug large heuristic programs; develop packaged collections of knowledge about many different kinds of things, leading to programs with more resourcefulness, understanding and common sense; and identify and sharpen certain principles for programming intelligence.
</description>
<pubDate>Tue, 01 Dec 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5866</guid>
<dc:date>1970-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parsing Key Word Grammars</title>
<link>https://hdl.handle.net/1721.1/5865</link>
<description>Parsing Key Word Grammars
Martin, William
Key word grammars are defined to be the same as context free grammars, except that a production may specify a string of arbitrary symbols. These grammars define languages similar to those used in the programs CARPS and ELIZA. We show a method of implementing the LR(k) parsing algorithm for context free grammars which can be modified slightly in order to parse key word grammars. When this is done, the algorithm can use many of the techniques used in the ELIZA parser. Therefore, the algorithm helps to show the relation between the classical parsers and key word parsers.
</description>
<pubDate>Sat, 01 Mar 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5865</guid>
<dc:date>1969-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Form and Content in Computer Science</title>
<link>https://hdl.handle.net/1721.1/5864</link>
<description>Form and Content in Computer Science
Minsky, Marvin
The trouble with computer science today is an obsessive concern with form instead of content. This essay has three parts, suggesting form-content displacements in Theory of Computation, in Programming Languages, and in Education.
</description>
<pubDate>Mon, 01 Dec 1969 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5864</guid>
<dc:date>1969-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Focusing</title>
<link>https://hdl.handle.net/1721.1/5863</link>
<description>Focusing
Horn, B.K.P.
This memo describes a method of automatically focusing the new vidisector (TVC). The same method can be used for distance measuring. Included are instructions describing the use of a special LISP and the required LISP functions. The use of the vidisectors, as well as estimates of their physical characteristics, is also included, since a collection of such data has not previously been available.
</description>
<pubDate>Wed, 01 May 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5863</guid>
<dc:date>1968-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Multiple Procedure DDT</title>
<link>https://hdl.handle.net/1721.1/5862</link>
<description>A Multiple Procedure DDT
Knight, Thomas
This memo describes a version of DDT used as the command level of the A.I. Group PDP-6 Time Sharing System (ITS). Special features include the capability to handle multiple jobs, the ability to stop on read or write references to a given location, and the ability of system programs to return command strings to be executed by the DDT.
</description>
<pubDate>Mon, 01 Jan 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5862</guid>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Stability Test for Configurations of Blocks</title>
<link>https://hdl.handle.net/1721.1/5861</link>
<description>A Stability Test for Configurations of Blocks
Blum, Manuel; Griffith, Arnold; Neumann, Bernard
This work is based on notes provided by Manuel Blum, which are paraphrased in Section I, and which contain the examples used in the appendix. The main portion of this report was written by Bernard Neumann, who generalized Blum's results to situations involving friction. The program performing the relevant computation, which appears in the appendix, was written by Arnold Griffith, who compiled this memo.
</description>
<pubDate>Sun, 01 Feb 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5861</guid>
<dc:date>1970-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Interim LISP User's Guide</title>
<link>https://hdl.handle.net/1721.1/5860</link>
<description>An Interim LISP User's Guide
White, John L.
The substance of this memo is to initiate the naïve LISP user into the intricacies of the system at the Project MAC A.I. Lab. It is composed, at this time, of a Progress Report on the development of the LISP system and a few appendices, but as such should be nearly adequate to start out a person who understands the basic ideas of LISP and has understood a minimal part of the LISP 1.5 Primer.
</description>
<pubDate>Sun, 01 Mar 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5860</guid>
<dc:date>1970-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>PEEK and LOCK</title>
<link>https://hdl.handle.net/1721.1/5859</link>
<description>PEEK and LOCK
Eastlake, Donald, III
This memo describes two small utility programs that are of assistance in using the ITS 1.4 (see A.I. 161, MAC-M-377) time sharing system. LOCK performs miscellaneous utility functions while PEEK displays, with periodic updates, various aspects of the time sharing system's status.
</description>
<pubDate>Fri, 01 Nov 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5859</guid>
<dc:date>1968-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Holes</title>
<link>https://hdl.handle.net/1721.1/5858</link>
<description>Holes
Winston, Patrick H.
This memo originally had two parts. The first dealt with certain deficiencies in an early version of Guzman's program, SEE. The problems have been fixed, and the corresponding discussion has been dropped from this memo. The part remaining deals with line drawings of objects with holes.
revised April 1970
</description>
<pubDate>Thu, 01 Aug 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5858</guid>
<dc:date>1968-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remarks on Visual Display and Console Systems</title>
<link>https://hdl.handle.net/1721.1/5857</link>
<description>Remarks on Visual Display and Console Systems
Minsky, Marvin
This serves as a preliminary draft of Deluxe Picture Maintenance System, June, 1963. It is Technical Memorandum No. 1.
</description>
<pubDate>Sat, 01 Jun 1968 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5857</guid>
<dc:date>1968-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Removing Shadows in a Scene</title>
<link>https://hdl.handle.net/1721.1/5856</link>
<description>Removing Shadows in a Scene
Orban, Richard
This paper describes a LISP function, ERASER, to be used in the process of recognizing objects by a computer. It is a pre-processor to a program called SEE which finds whole bodies in a scene. A short description of SEE and the required data-form for a scene is given. SEE is simulated for five different scenes to demonstrate the effects of shadows on its operation. The function ERASER is explained through a sequence of operations, the heuristics used, and detailed results for test cases. Finally, a "how to use it" section describes the data required to be on the property lists of the vertices in the scene, and the cruft that ERASER puts on these p-lists as it operates.
</description>
<pubDate>Sat, 01 Aug 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5856</guid>
<dc:date>1970-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Movie Memo</title>
<link>https://hdl.handle.net/1721.1/5855</link>
<description>Movie Memo
Beeler, Michael
This is intended as a brief explanation of how to use the Kodak movie camera in sync with a display.
</description>
<pubDate>Wed, 01 Apr 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5855</guid>
<dc:date>1970-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Function of FUNCTION in LISP, or Why the FUNARG Problem Should be Called the Environment Problem</title>
<link>https://hdl.handle.net/1721.1/5854</link>
<description>The Function of FUNCTION in LISP, or Why the FUNARG Problem Should be Called the Environment Problem
Moses, Joel
A problem common to many powerful programming languages arises when one has to determine what values to assign to free variables in functions. Different implementational approaches which attempt to solve the problem are considered. The discussion concentrates on LISP implementations and points out why most current LISP systems are not as general as the original LISP 1.5 system. Readers not familiar with LISP should be able to read this paper without difficulty since we have tried to couch the argument in ALGOL-like terms as much as possible.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5854</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cellular Automata</title>
<link>https://hdl.handle.net/1721.1/5853</link>
<description>Cellular Automata
Banks, Edwin Roger
This paper presents, in order: 1) a brief description of the results, 2) a definition of cellular automata, 3) a discussion of previous work in this area by von Neumann and Codd, and 4) details of how the prescribed behaviors are achieved (with computer simulations included in the appendices). The results include showing that a two-state cell with five neighbors is sufficient for universality.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5853</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Peter Samson's Music Processor, BIG</title>
<link>https://hdl.handle.net/1721.1/5852</link>
<description>Peter Samson's Music Processor, BIG
Beeler, Michael
The contents of this memo are: commands which create a name, commands which create music, playing commands, plotting commands, general utility commands, debugging commands (in relation to relics of the past, features you might hope to live to see), error comments and a final appendix--MUSCOM.
</description>
<pubDate>Wed, 01 Jul 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5852</guid>
<dc:date>1970-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparative Schematology</title>
<link>https://hdl.handle.net/1721.1/5851</link>
<description>Comparative Schematology
Paterson, Michael S.; Hewitt, Carl E.
While we may have the intuitive idea of one programming language having greater power than another, or of some subset of a language being an adequate 'core' for that language, we find when we try to formalize this notion that there is a serious theoretical difficulty. This lies in the fact that even quite rudimentary languages are nevertheless 'universal' in the following sense. If the language allows us to program with simple arithmetic or list-processing functions then any effective control structure can be simulated, traditionally by encoding a Turing machine computation in some way. In particular, a simple language with some basic arithmetic can express programs for any partial recursive function. Such an encoding is usually quite unnatural and impossibly inefficient. Thus, in order to carry on a practical study of the comparative power of different languages we are led to banish explicit functions and deal instead with abstract, uninterpreted programs or schemas. What follows is a brief report on some preliminary exploration in this area.
</description>
<pubDate>Sun, 01 Nov 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5851</guid>
<dc:date>1970-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>EUTERPE A Computer Language for the Expression of Musical Ideas</title>
<link>https://hdl.handle.net/1721.1/5850</link>
<description>EUTERPE A Computer Language for the Expression of Musical Ideas
Smoliar, Stephen
The electronic medium has vastly increased the amount of material available to the contemporary composer. The various pieces of electronic equipment available today allow one to produce any conceivable sound; yet because of the complex nature of their output, these devices are generally difficult to control, and the composer of electronic music may take several hours to prepare but a few minutes of his creation. EUTERPE was designed during the summer of 1966 by Marvin Minsky as a "real-time" music program to be used at a teletype which was a direct link with a digital computer. The program is an interpreter and compiler, basically a translation device to convert symbolic input into the internal machine language of a computer. The symbolic input consists of up to six "voice-programs" which are strings of words.
</description>
<pubDate>Sat, 01 Apr 1967 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5850</guid>
<dc:date>1967-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>More Comparative Schematology</title>
<link>https://hdl.handle.net/1721.1/5849</link>
<description>More Comparative Schematology
Hewitt, Carl E.
Schemas are programs in which some of the function symbols are uninterpreted. In this paper we compare classes of schemas in which various kinds of constraints are imposed on some of the function symbols. Among the classes of schemas compared are program, recursive, hierarchical, and parallel schemas.
</description>
<pubDate>Sat, 01 Aug 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5849</guid>
<dc:date>1970-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Vision Laboratory: Part One</title>
<link>https://hdl.handle.net/1721.1/5848</link>
<description>The Vision Laboratory: Part One
Binford, Thomas O.
Some of the facilities for vision programming are discussed in the format of a user's manual.
</description>
<pubDate>Wed, 01 Jul 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5848</guid>
<dc:date>1970-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Look-Ahead Strategies in One Person Games with Randomly Generated Game Trees</title>
<link>https://hdl.handle.net/1721.1/5847</link>
<description>Look-Ahead Strategies in One Person Games with Randomly Generated Game Trees
Johnson, David S.
A random method for generating binary trees is presented, and two forms of a class of one-person games called "Tree Solitaire," which have such trees as their game trees, are defined. After what "look-ahead strategy" means in terms of such games is discussed, a theorem on the most efficient use of unlimited look-ahead is proved, and a collection of strategies involving 0, 1, or 2 look-aheads per move is introduced. A method involving diagrams is presented for calculating the probability of winning under the various strategies over a restricted class of games. The superiority of one of the 1-look-ahead strategies over the other is proved for games of the first form on this restricted class. For games of the second form of this class, all the introduced strategies have their chances of winning calculated, and these results are compared among themselves, with the result for the first form of the game, and with the results of Monte Carlo estimation of the chance of winning in a particular case. An approximate method for evaluating strategies from any given position is introduced, used to explain some of the previous results, and to suggest modifications of strategies already defined, which are then evaluated by Monte Carlo methods. Finally, variants on Tree Solitaire are suggested, their general implications are discussed, and, using the methods already developed, one of the most suggestive variants is studied; the results show a significant reversal from those of the original game, which is explained by the difference between the games in one particular.
</description>
<pubDate>Wed, 01 Jul 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5847</guid>
<dc:date>1970-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extending Guzman's SEE Program</title>
<link>https://hdl.handle.net/1721.1/5846</link>
<description>Extending Guzman's SEE Program
Rattner, Martin
Adolfo Guzman's SEE program groups the regions of a two-dimensional scene into bodies, using local evidence in the scene to link regions together. This paper discusses an extended version of the SEE procedure that makes extensive use of evidence in the scene which indicates that two regions should be split into separate bodies. The new procedure is better in several ways: 1) it correctly analyzes many scenes for which SEE makes mistakes; 2) it can interact with a higher-level object-recognizing program; 3) it can provide alternative solutions on demand.
</description>
<pubDate>Wed, 01 Jul 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5846</guid>
<dc:date>1970-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Construction of Decision Trees</title>
<link>https://hdl.handle.net/1721.1/5845</link>
<description>Construction of Decision Trees
Banks, Edwin Roger
The construction of optimal decision trees for the problem stated within can be accomplished by an exhaustive enumeration. This paper discusses two approaches. The section on heuristic methods gives mostly negative results (e.g., there is no merit factor that will always yield the optimal tests), but most of these methods do give good results. The section entitled "Exhaustive Enumeration Revisited" indicates some powerful shortcuts that can be applied to an exhaustive enumeration, extending the range of this method.
</description>
<pubDate>Sun, 01 Feb 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5845</guid>
<dc:date>1970-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Teaching Procedures in Humans and Robots</title>
<link>https://hdl.handle.net/1721.1/5844</link>
<description>Teaching Procedures in Humans and Robots
Hewitt, Carl
Analysis of the structure of procedures is central to the foundations of problem solving. In this paper we explore three principal means for teaching procedures: telling, canned loops, and procedural abstraction. The most straightforward way to teach a procedure is by telling how to accomplish it in a high level goal-oriented language. In the method of canned loops the control structure that is needed for the procedure is supposed and the procedure is deduced. In the method of procedural abstraction the procedure is abstracted from protocols of the procedure on examples.
</description>
<pubDate>Tue, 01 Sep 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5844</guid>
<dc:date>1970-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Simple Algorithm for Self-Replication</title>
<link>https://hdl.handle.net/1721.1/5843</link>
<description>A Simple Algorithm for Self-Replication
Winograd, Terry
A recurrent topic of interest in the theory of automata has been the possibility of self-reproducing automata, particularly those which could reproduce globally through an application of an algorithm. In such a device, the "growth" at any point would depend at any time only on the local environment, but the overall effect would be the reproduction of complex structures. This paper gives an algorithm of this type (an extension of an algorithm brought to my attention by Professor Fredkin) and examines the conditions under which such replication will occur. The system on which it operates will be defined, and the main theorem on its operation will follow several lemmas.
</description>
<pubDate>Fri, 01 May 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5843</guid>
<dc:date>1970-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hypergeometric Functions in MATHLAB</title>
<link>https://hdl.handle.net/1721.1/5842</link>
<description>Hypergeometric Functions in MATHLAB
Wilson, Lewis
This memo describes some of the important properties and manipulations of hypergeometric functions which may be useful in MATHLAB. A convention for representing the function is adopted which is readily adaptable to LISP operation. The most general type of HGF with which we will be concerned is a function of a single variable, x, and is parameterized by an "A" list, of length p, and a "B" list, of length q. The latter consists, in general, of atoms; the argument is usually x, but may also be a simple function of x.
</description>
<pubDate>Mon, 01 Jun 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5842</guid>
<dc:date>1970-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>INSIM1: A Computer Model of Simple Forms of Learning</title>
<link>https://hdl.handle.net/1721.1/5841</link>
<description>INSIM1: A Computer Model of Simple Forms of Learning
Jones, Thomas L.
INSIM1 is a computer program, written in LISP, which models simple forms of learning analogous to the learning of a human infant during the first few weeks of his life, such as learning to suck the thumb and learning to perform elementary hand-eye coordination. The program operates by discovering cause-effect relationships and arranging them in a goal tree. For example, if A causes B, and the program wants B, it will set up A as a subgoal, working backward along the chain of causation until it reaches a subgoal which can be reached directly, i.e., a muscle pull. Various stages of the simulated infant's learning are described.
</description>
<pubDate>Wed, 01 Apr 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5841</guid>
<dc:date>1970-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal to ARPA for Research on Artificial Intelligence at M.I.T., 1971-1972</title>
<link>https://hdl.handle.net/1721.1/5840</link>
<description>Proposal to ARPA for Research on Artificial Intelligence at M.I.T., 1971-1972
Minsky, Marvin; Papert, Seymour A.
The activities of the Artificial Intelligence Laboratory can be viewed under three main aspects: (1) Artificial Intelligence: understanding the principles of making intelligent machines along the lines discussed in previous proposals, and elaborated below. (2) Natural Intelligence: as we understand intelligence better, we see fewer differences between the problems of understanding human and machine intelligence. We have been increasingly able to translate our ideas about programming machines into ideas about educating children, and are currently developing systematic methods in elementary education. Conversely, we attribute to our observations and experience in the latter activities much of what we believe are important new conceptions of how to organize knowledge for programs that really understand. (3) Mathematical Theories: this aspect is relevant not because we often need to solve specific mathematical problems, but because we are firmly committed to maintaining a mathematical style in the laboratory. In many centers we have seen decline and deterioration following apparently successful "experiments" in artificial intelligence, because the principles behind the performance were not understood and hence the limitations unseen.
</description>
<pubDate>Fri, 01 Oct 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5840</guid>
<dc:date>1971-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using the EUTERPE Music System</title>
<link>https://hdl.handle.net/1721.1/5839</link>
<description>Using the EUTERPE Music System
Smoliar, Stephen W.
This memo describes the practical implementation of programs written in the language EUTERPE. Details of this language are given in the author's thesis (A Parallel Processing Model of Musical Structures) and will not be treated here. We shall only be concerned with the preparation and processing of a EUTERPE source program. Sample programs are given in their entirety in the thesis or may be read off the author's file directory (SWS;). Notational conventions are those of Dowson's guide to the AI Lab timesharing system (AI Memo No. 215).
</description>
<pubDate>Fri, 01 Oct 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5839</guid>
<dc:date>1971-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>An AI Approach to English Morphemic Analysis</title>
<link>https://hdl.handle.net/1721.1/5838</link>
<description>An AI Approach to English Morphemic Analysis
Winograd, Terry
This paper illustrates an approach toward understanding natural language through the techniques of artificial intelligence. It explores the structure of English word-endings both morpho-graphemically and semantically. It illustrates the use of procedures and semantic representations in relating the broad range of knowledge a language user brings to bear on understanding an utterance.
</description>
<pubDate>Mon, 01 Feb 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5838</guid>
<dc:date>1971-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Teaching Children to be Mathematicians vs. Teaching About Mathematics</title>
<link>https://hdl.handle.net/1721.1/5837</link>
<description>Teaching Children to be Mathematicians vs. Teaching About Mathematics
Papert, Seymour A.
Being a mathematician is no more definable as 'knowing' a set of mathematical facts than being a poet is definable as knowing a set of linguistic facts. Some modern math ed reformers will give this statement a too easy assent with the comment: 'Yes, they must understand, not merely know.' But this misses the capital point that being a mathematician, again like being a poet, or a composer or an engineer, means doing, rather than knowing or understanding. This essay is an attempt to explore some ways in which one might be able to put children in a better position to do mathematics rather than merely to learn about it.
</description>
<pubDate>Thu, 01 Jul 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5837</guid>
<dc:date>1971-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Twenty Things To Do With A Computer</title>
<link>https://hdl.handle.net/1721.1/5836</link>
<description>Twenty Things To Do With A Computer
Papert, Seymour A.; Solomon, Cynthia
When people talk about computers in education they do not all have the same image in mind. Some think of using the computer to program the kid; others think of using the kid to program the computer. But most of them have at least this in common: the transaction between the computer and the kid will be some kind of "conversation" or "questions and answers" in words or numbers.
</description>
<pubDate>Tue, 01 Jun 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5836</guid>
<dc:date>1971-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Teaching Children Thinking</title>
<link>https://hdl.handle.net/1721.1/5835</link>
<description>Teaching Children Thinking
Papert, Seymour A.
This paper is dedicated to the hope that someone with power to act will one day see that contemporary research on education is like the following experiment by a nineteenth century engineer who worked to demonstrate that engines were better than horses. This he did by hitching a 1/8 HP motor in parallel with his team of four strong stallions. After a year of statistical research he announced a significant difference. However, it was generally thought that there was a Hawthorne effect on the horses.
</description>
<pubDate>Fri, 01 Oct 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5835</guid>
<dc:date>1971-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computer Laboratory for Elementary Schools</title>
<link>https://hdl.handle.net/1721.1/5834</link>
<description>A Computer Laboratory for Elementary Schools
Papert, Seymour A.
This is a research project on elementary education whose immediate objective is the development of new methods and materials for teaching in an environment of computers and computer-controlled devices. Longer term objectives are related to theories of cognitive processes and to conjectures about the possibility of producing much larger changes than are usually thought possible in the expected intellectual achievement of children. This proposal is formulated in terms of the self-sufficient immediate objectives.
</description>
<pubDate>Fri, 01 Oct 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5834</guid>
<dc:date>1971-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Micro-Planner Reference Manual</title>
<link>https://hdl.handle.net/1721.1/5833</link>
<description>Micro-Planner Reference Manual
Sussman, Gerald; Winograd, Terry
Micro-Planner is an implementation of a subset of Carl Hewitt's language PLANNER, written in LISP by Gerald Jay Sussman, Terry Winograd, and Eugene Charniak on the AI group computer. Micro-Planner is now a publicly accessible systems program in the AI group's ITS system. The current version of Micro-Planner, embedded in an allocated LISP, may be obtained by incanting ':PLNR' or 'PLNR' to DDT. Micro-Planner is also available as EXPR code or LAP code. All questions, suggestions, or comments about Micro-Planner should be directed to Gerald Jay Sussman (login name GJS), who will maintain the program.
</description>
<pubDate>Wed, 01 Jul 1970 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5833</guid>
<dc:date>1970-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing a Musical Ear: A New Experiment</title>
<link>https://hdl.handle.net/1721.1/5832</link>
<description>Developing a Musical Ear: A New Experiment
Bamberger, Jeanne
I would like to report on some ideas we have been developing at M.I.T. for self-paced, independent music study. The aim of our approach is to nurture in students that enigmatic quality called "musical", be it a "musical ear" or an individual's capacity to give a "musical performance". While all of us cherish these qualities, rarely do we come to grips with them directly in teaching. More often we rely on our magical or mystical faith in the inspiration of music itself, and its great artists, to do the teaching. And for some (maybe ultimately all) this is the best course. But what about the others, to whom we teach only the techniques of playing instruments or some "facts" about music: its forms, its history, and its apparent elements? How often do we have or take the time to examine the assumptions underlying these "facts" we teach, or to question the relation between what we teach and what we do as musicians?
</description>
<pubDate>Sat, 01 Jul 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5832</guid>
<dc:date>1972-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lock</title>
<link>https://hdl.handle.net/1721.1/5831</link>
<description>Lock
Eastlake, Donald E.
LOCK is a miscellaneous utility program operating under the ITS system. It allows the user to easily and conveniently perform a variety of infrequently required tasks. Most of these relate to console input-output or the operation of the ITS system.
</description>
<pubDate>Thu, 01 Jun 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5831</guid>
<dc:date>1972-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Get onto the System</title>
<link>https://hdl.handle.net/1721.1/5830</link>
<description>How to Get onto the System
Dowson, Mark
This memo is intended to get very new users started on the MAC AI system. It presents some simple rituals for making and editing files, getting printouts, making microtapes, and so on. Most of the rituals given are not the only ways of doing something, or even necessarily the simplest, but they do work. Some sources of more detailed documentation are referenced at the end of this memo; read them when you want to know more. If you don't understand something or need any kind of help, ask. No one minds; they all know how you feel.
</description>
<pubDate>Thu, 01 Apr 1971 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5830</guid>
<dc:date>1971-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>How the GAS Program Works with a Note on Simulating Turtles with Touch Sensors</title>
<link>https://hdl.handle.net/1721.1/5829</link>
<description>How the GAS Program Works with a Note on Simulating Turtles with Touch Sensors
Speciner, Michael
The GAS program is a display simulation of a 2-dimensional ideal gas. Barriers, or walls, are line segments, and molecules, alias particles or balls, are circles. Collisions occur between balls and other balls as well as between balls and walls. All collisions are elastic. Global gravitational, electric, and magnetic fields can be imposed to act on the particles. The following is a description of some of the inner workings of the program.
</description>
<pubDate>Fri, 01 Dec 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5829</guid>
<dc:date>1972-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Teaching of Procedures-Progress Report</title>
<link>https://hdl.handle.net/1721.1/5828</link>
<description>Teaching of Procedures-Progress Report
Sussman, Gerald Jay
The idea of building a programmer is very seductive in that it holds the promise of massive bootstrapping and thus ties in with many ideas about learning and teaching. I will avoid going into those issues here. It is necessary, however, to explain what I am not working on. I am not interested in developing new and better languages for expressing algorithms. When FORTRAN was invented, it was touted as an automatic programmer, and indeed it was, as it relieved the user of complete specification of the details of implementation. Newer programming languages are just elaborations (usually better) of that basic idea. I am, however, interested in the problem of implementation of a partially specified algorithm rather than a complete algorithm and a partially specified implementation. This problem is truly in the domain of Artificial Intelligence, because the system which "solves" this problem needs a great deal of knowledge about the problem domain for which the algorithm is being constructed in order to "reasonably" complete the specification. Indeed, a programmer is not told exactly the algorithm to be implemented; he is told the problem which his program is expected to solve.
</description>
<pubDate>Sun, 01 Oct 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5828</guid>
<dc:date>1972-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulator Design Vignettes</title>
<link>https://hdl.handle.net/1721.1/5827</link>
<description>Manipulator Design Vignettes
Minsky, Marvin
This memo is about mechanical arms. The literature on robotics seems to be deficient in such discussions, perhaps because not enough sharp theoretical problems have been formulated to attract interest. I'm sure many of these matters have been discussed in other literatures (prosthetics, orthopedics, mechanical engineering, etc.), and references to such discussions would be welcome. We raise these issues in the context of designing the "mini-robot" system in the A.I. Laboratory in 1972-1973. But we would like to attract the interest of the general heuristic programming community to such questions.
</description>
<pubDate>Sun, 01 Oct 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5827</guid>
<dc:date>1972-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Infants in Children Stories - Toward a Model of Natural Language Comprehension</title>
<link>https://hdl.handle.net/1721.1/5826</link>
<description>Infants in Children Stories - Toward a Model of Natural Language Comprehension
Meyer, Garry S.
How can we construct a program that will understand stories that children would understand? By understand we mean the ability to answer questions about the story. We are interested here in understanding natural language in a very broad area. In particular, how does one understand stories about infants? We propose a system which answers such questions by relating the story to background real-world knowledge. We make use of the general model proposed by Eugene Charniak in his Ph.D. thesis (Charniak 72). The model sets up expectations which can be used to help answer questions about the story. There is a set of routines called BASE-routines, which correspond to our "real world knowledge", and "put-in" routines called DEMONs, which correspond to contextual information. Context can help to assign a particular meaning to an ambiguous word or pronoun.
</description>
<pubDate>Tue, 01 Aug 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5826</guid>
<dc:date>1972-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Making of the Film, SOLAR CORONA</title>
<link>https://hdl.handle.net/1721.1/5825</link>
<description>The Making of the Film, SOLAR CORONA
Beeler, Michael
The film SOLAR CORONA was made from data taken from August 14, 1969 through May 7, 1970, by OSO-VI, one of the Orbiting Solar Observatories. One of the experiments on board scanned across and up and down the image of the sun, as we read a printed page. Each line of the scan was broken up into several distinct measurement points, similar to our eyes fixating as we read a line of text.
</description>
<pubDate>Thu, 01 Feb 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5825</guid>
<dc:date>1973-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal to ARPA for Continuation of Micro-Automation Development</title>
<link>https://hdl.handle.net/1721.1/5824</link>
<description>Proposal to ARPA for Continuation of Micro-Automation Development
Minsky, Marvin; Papert, Seymour A.
This proposal discusses practical aspects of our project to produce a replicable research tool for development of real-world computer-controlled hand-eye systems. If this proposal is read out of context, it will not seem very sophisticated, because it is concerned mainly with the practical aspects of putting together an engineering system. The theoretical and conceptual context is described more thoroughly in the memo, supplementary to our main ARPA contract proposal, that describes in detail robotics research at the MIT A.I. Laboratory.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5824</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Little Robot System</title>
<link>https://hdl.handle.net/1721.1/5814</link>
<description>The Little Robot System
Silver, David
The Little Robot System provides for the I.T.S. user a medium-size, four degree of freedom, six-axis robot which is controlled by the PDP-6 computer through the programming language Lisp. The robot includes eight force feedback channels which, when interpreted by the PDP-6, are read by Lisp as the signed force applied to the end of the fingers. The first six forces are the X, Y, and Z forces and the torques around X, Y, and Z. The other two forces are the grip and the vice grip. The three X, Y, and Z forces and three torques are computed from six numbers read in from six L.V.D.T.s (Linear Variable Differential Transformers) arranged three in the vertical and three in the horizontal plane within a stress-strain spring-loaded wrist. The grip is read in from a strain gauge mounted on the stationary reference finger. The relative position between the motor shaft and the vice shaft is determined by means of two potentiometers to measure the vice force. The two shafts are coupled by a spring.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5814</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Why Conniving is Better than Planning</title>
<link>https://hdl.handle.net/1721.1/5813</link>
<description>Why Conniving is Better than Planning
Sussman, Gerald Jay
A higher level language derives its great power from the fact that it tends to impose structure on the problem solving behavior of the user. Besides providing a library of useful subroutines with a uniform calling sequence, the author of a higher level language imposes his theory of problem solving on the user. By choosing what primitive data structures, control structures, and operators he presents to the user, he makes the implementation of some algorithms more difficult than others, thus discouraging some techniques and encouraging others. So, to be "good", a higher level language must not only simplify the job of programming, by providing features which package programming structures commonly found in the domain for which the language was designed; it must also do its best to discourage the use of structures which lead to "bad" algorithms.
</description>
<pubDate>Tue, 01 Feb 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5813</guid>
<dc:date>1972-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finding the Skeleton of a Brick</title>
<link>https://hdl.handle.net/1721.1/5812</link>
<description>Finding the Skeleton of a Brick
Finin, Tim
TC-SKELETON's duty is to help find the dimensions of brick-shaped objects by searching for sets of three complete edges, one for each dimension. The program was originally written by Patrick Winston, and then was refined and improved by Tim Finin.
</description>
<pubDate>Thu, 01 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5812</guid>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The FINDSPACE Problem</title>
<link>https://hdl.handle.net/1721.1/5811</link>
<description>The FINDSPACE Problem
Sussman, Gerald J.
The FINDSPACE problem is that of establishing a volume in space where an object of specified dimensions will fit. The problem seems to have two subproblems: the hypothesis generation problem of finding a likely spot to try, and the verification problem of testing that spot for occupancy by other objects. This paper treats primarily the verification problem.
</description>
<pubDate>Thu, 01 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5811</guid>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>D-SCRIPT: A Computational Theory of Descriptions</title>
<link>https://hdl.handle.net/1721.1/5810</link>
<description>D-SCRIPT: A Computational Theory of Descriptions
Moore, Robert C.
This paper describes D-SCRIPT, a language for representing knowledge in artificial intelligence programs. D-SCRIPT contains a powerful formalism for descriptions, which permits the representation of statements that are problematical for other systems. Particular attention is paid to problems of opaque contexts, time contexts, and knowledge about knowledge. The design of a theorem prover for this language is also considered.
</description>
<pubDate>Thu, 01 Feb 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5810</guid>
<dc:date>1973-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Linguistics Oriented Programming Language</title>
<link>https://hdl.handle.net/1721.1/5809</link>
<description>A Linguistics Oriented Programming Language
Pratt, Vaughan R.
A programming language for natural language processing programs is described. Examples of the output of programs written using it are given. The reasons for various design decisions are discussed. An actual session with the system is presented, in which a small fragment of an English-to-French translator is developed. Some of the limitations of the system are discussed, along with plans for further development.
</description>
<pubDate>Thu, 01 Feb 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5809</guid>
<dc:date>1973-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal to ARPA for Continued Research on A.I.</title>
<link>https://hdl.handle.net/1721.1/5808</link>
<description>Proposal to ARPA for Continued Research on A.I.
Minsky, Marvin
The Artificial Intelligence Laboratory proposes to continue its work on a group of closely interconnected projects, all bearing on questions about how to make computers able to use more sophisticated kinds of knowledge to solve difficult problems. This proposal explains what we expect to come of this work, and why it seems to us the most profitable direction for research at this time. The core of this proposal is about well-defined specific tasks, such as extending the computer's ability to understand information presented as visual scenes, or in natural, human language. Although these specific goals are important enough in themselves, we see their pursuit also as tightly bound to the development of a general theory of the computations needed to produce intelligent processes. Obviously, a certain amount of theory is needed to achieve progress in this, and we maintain that the steps toward a deep theory in this domain must include thorough analysis of very specific phenomena. Our confidence in this strategy is based both on past successes and on our current theory of knowledge structure. These bases are discussed both below and in the appendices.
</description>
<pubDate>Sun, 01 Oct 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5808</guid>
<dc:date>1972-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Human Oriented Logic for Automatic Theorem Proving</title>
<link>https://hdl.handle.net/1721.1/5807</link>
<description>A Human Oriented Logic for Automatic Theorem Proving
Nevins, Arthur J.
The automation of first order logic has received comparatively little attention from researchers intent upon synthesizing the theorem proving mechanism used by humans. The dominant point of view [15], [18] has been that theorem proving on the computer should be oriented to the capabilities of the computer rather than to the human mind, and therefore one should not be afraid to provide the computer with a logic that humans might find strange and uncomfortable. The preeminence of this point of view is not hard to explain, since until now the most successful theorem proving programs have been machine oriented. Nevertheless, there are at least two reasons for being dissatisfied with the machine oriented approach. First, a mathematician often is interested more in understanding the proof of a proposition than in being told that the proposition is true, for the insight gained from an understanding of the proof can lead to the proof of additional propositions and the development of new mathematical concepts. However, machine oriented proofs can appear very unnatural to a human mathematician, thereby providing him with little if any insight. Second, the machine oriented approach has failed to produce a computer program which even comes close to equaling a good human mathematician in theorem proving ability; this leads one to suspect that perhaps the logic being supplied to the machine is not as efficient as the logic used by humans. The approach taken in this paper has been to develop a theorem proving program as a vehicle for gaining a better understanding of how humans actually prove theorems. The computer program which has emerged from this study is based upon a logic which appears more "natural" to a human (i.e., more human oriented). While the program is not yet the equal of a top flight human mathematician, it already has given indication (evidence of which is presented in section 9) that it can outperform the best machine oriented theorem provers.
</description>
<pubDate>Sun, 01 Oct 1972 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5807</guid>
<dc:date>1972-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Low-level Symbolic Representation of Intensity Changes in an Image</title>
<link>https://hdl.handle.net/1721.1/5806</link>
<description>The Low-level Symbolic Representation of Intensity Changes in an Image
Marr, David
A family of symbols is defined by which much of the useful information in an image may be represented, and its choice is justified. The family includes symbols for the various commonly occurring intensity profiles that are associated with the edges of objects, and symbols for the gradual luminance changes that provide clues about a surface's shape. It is shown that these descriptors may readily be computed from measurements similar to those made by simple cells in the visual cortex of the cat. The methods that are described have been implemented, and examples are shown of their application to natural images.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5806</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Position Extraction using Stereo Eye Systems with a Relative Rotational Motion Capability</title>
<link>https://hdl.handle.net/1721.1/5805</link>
<description>Visual Position Extraction using Stereo Eye Systems with a Relative Rotational Motion Capability
Corwin, Daniel W.
This paper discusses the problem of context-free position estimation using a stereo vision system with moveable eyes. Exact and approximate equations are developed linking position to measurable quantities of the image-space, and an algorithm for finding these quantities is suggested in rough form. An estimate of errors and resolution limits is provided.
</description>
<pubDate>Thu, 01 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5805</guid>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Differential Perceptrons</title>
<link>https://hdl.handle.net/1721.1/5804</link>
<description>Differential Perceptrons
Brooks, Martin; Ginsparg, Jerrold
As originally proposed, perceptrons were machines that scanned a discrete retina and combined the data gathered in a linear fashion to make decisions about the figure presented on the retina. This paper considers differential perceptrons, which view a continuous retina. Thus, instead of summing the results of predicates, we must now integrate. This involves setting up a predicate space which transforms the typical perceptron sum, Σ_p α(p)p(f), into ∫ α(p)f(p) dp, where f is the figure on the retina; i.e., in the differential case, the figure is viewed as a function on the predicate space. We show that differential perceptrons are equivalent to perceptrons on the class of figures that fit exactly onto a sufficiently small square grid. By investigating predicates of various geometric transformations, we discover that translation and symmetry can be computed in finite order using finite coefficients in both the continuous and discrete cases. We also note that in the perceptron scheme, combining data linearly implies the ability to combine data in a polynomial fashion.
</description>
<pubDate>Mon, 01 Jan 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5804</guid>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Heuristic Techniques in Computer Aided Circuit Analysis</title>
<link>https://hdl.handle.net/1721.1/5803</link>
<description>Heuristic Techniques in Computer Aided Circuit Analysis
Sussman, Gerald Jay; Stallman, Richard Matthew
We present EL, a new kind of circuit analysis  program. Whereas other circuit analysis  systems rely on classical, formal analysis  techniques, EL employs heuristic "inspection"  methods to solve rather complex DC bias  circuits. These techniques also give EL the  ability to explain any result in terms of its own  qualitative reasoning processes. EL's  reasoning is based on the concept of a "local  one-step deduction" augmented by various  "teleological" principles and by the concept of  a "macro-element". We present several  annotated examples of EL in operation and an  explanation of how it works. We also show  how EL can be extended in several directions,  including sinusoidal steady state analysis.  Finally, we touch on possible implications for  engineering education. We feel that EL is  significant not only as a novel approach to  circuit analysis but also as an application of  Artificial Intelligence techniques to a new and  interesting domain.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5803</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Note on the Computation of Binocular Disparity in a Symbolic, Low-Level Visual Processor</title>
<link>https://hdl.handle.net/1721.1/5802</link>
<description>A Note on the Computation of Binocular Disparity in a Symbolic, Low-Level Visual Processor
Marr, David
The goals of the computation that extracts disparity from pairs of pictures of a scene are defined, and the constraints imposed upon that computation by the three-dimensional structure of the world are determined. Expressing the computation as a grey-level correlation is shown to be inadequate. A precise expression of the goals of the computation is possible in a low-level symbolic visual processor: the constraints translate in this environment to pre-requisites on the binding of disparity values to low-level symbols. The outline of a method based on this is given.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5802</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Recognition of Sharp, Closely Spaced Edges</title>
<link>https://hdl.handle.net/1721.1/5801</link>
<description>The Recognition of Sharp, Closely Spaced Edges
Marr, David
The recognition of sharp edges from edge- and bar-mask convolutions with an image is studied for the special case where the separation of the edges is of the order of the masks' panel-widths. Desmearing techniques are employed to separate the items in the image. Attention is also given to parsing desmeared mask convolutions into edges and bars; to detecting edge and bar terminations; and to the detection of small blobs.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5801</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Purpose of Low-level Vision</title>
<link>https://hdl.handle.net/1721.1/5800</link>
<description>On the Purpose of Low-level Vision
Marr, David
This article advances the thesis that the purpose of low-level vision is to encode symbolically all of the useful information contained in an intensity array, using a vocabulary of very low-level symbols; subsequent processes should have access only to this symbolic description. The reason is one of computational expediency: it allows the low-level processes to run almost autonomously, and it greatly simplifies the application of criteria to an image, whose representation in terms of conditions on the initial intensities, or on simple measurements made from them, is very cumbersome. The implications of this thesis for physiological and for computational approaches to vision are discussed. A list is given of several computational problems in low-level vision; some of these are dealt with in the accompanying articles.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5800</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Orienting Silicon Integrated Circuit Chips for Lead Bonding</title>
<link>https://hdl.handle.net/1721.1/5799</link>
<description>Orienting Silicon Integrated Circuit Chips for Lead Bonding
Horn, Berthold K.P.
Will computers that see and understand what they see revolutionize industry by automating the part orientation and part inspection processes? There are two obstacles: the expense of computing and our feeble understanding of images. We believe these obstacles are fast ending. To illustrate what can be done we describe a working program that visually determines the position and orientation of silicon chips used in integrated circuits.
</description>
<pubDate>Wed, 01 Jan 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5799</guid>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Elementary Geometry Theorem Proving</title>
<link>https://hdl.handle.net/1721.1/5798</link>
<description>Elementary Geometry Theorem Proving
Goldstein, Ira
An elementary theorem prover for a small part of plane Euclidean geometry is presented. The purpose is to illustrate important problem solving concepts that naturally arise in building procedural models for mathematics.
</description>
<pubDate>Sun, 01 Apr 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5798</guid>
<dc:date>1973-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pretty-Printing, Converting List to Linear Structure</title>
<link>https://hdl.handle.net/1721.1/5797</link>
<description>Pretty-Printing, Converting List to Linear Structure
Goldstein, Ira
Pretty-printing is the conversion of the list structure to a readable format. This paper outlines the computational problems encountered in such a task and documents the current algorithm in use.
</description>
<pubDate>Thu, 01 Feb 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5797</guid>
<dc:date>1973-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parsing Intensity Profiles</title>
<link>https://hdl.handle.net/1721.1/5796</link>
<description>Parsing Intensity Profiles
Lozano-Perez, Tomas
Much low-level vision work in AI deals with one-dimensional intensity profiles. This paper describes PROPAR, a system that provides a convenient and uniform mechanism for recognizing such profiles. PROPAR is a modified Augmented Transition Networks parser. The grammar used by the parser serves to describe and label the set of acceptable profiles. The inputs to the parser are descriptions of segments of a piecewise linear approximation to an intensity profile. A sample grammar is presented and the results discussed.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5796</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Binford-Horn LINE-FINDER</title>
<link>https://hdl.handle.net/1721.1/5795</link>
<description>The Binford-Horn LINE-FINDER
Horn, Berthold K.P.
This paper briefly describes the processing performed in the course of producing a line drawing from an image obtained through an image dissector camera. The edge-marking phase uses a non-linear parallel line-follower. Complicated statistical measures are not used. The line and vertex generating phases use a number of heuristics to guide the transition from edge-fragments to cleaned-up line-drawing. Higher-level understanding of the blocks-world is not used. Sample line-drawings produced by the program are included.
</description>
<pubDate>Sat, 01 Dec 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5795</guid>
<dc:date>1973-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>SCHEME: An Interpreter for Extended Lambda Calculus</title>
<link>https://hdl.handle.net/1721.1/5794</link>
<description>SCHEME: An Interpreter for Extended Lambda Calculus
Sussman, Gerald J.; Steele, Guy L., Jr.
Inspired by ACTORS [Greif and Hewitt] [Smith and Hewitt], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [Church], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: (1) alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc. by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. (2) explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. (3) have a simple concrete experimental domain for certain issues of programming semantics and style.
</description>
<pubDate>Mon, 01 Dec 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5794</guid>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Turtle Escapes the Plane: Some Advanced Turtle Geometry</title>
<link>https://hdl.handle.net/1721.1/5793</link>
<description>Turtle Escapes the Plane: Some Advanced Turtle Geometry
diSessa, Andy
Since the LOGO Turtle took his first step he has been mathematically confined to running around on flat surfaces. Fortunately the physically intuitive, procedurally oriented nature of the Turtle, which makes him a powerful explorer in the plane, is equally, if not more, apparent when he is liberated to tread curved surfaces. This paper is aimed roughly at the high school level. Yet because it is built on intuition and physical action rather than formalism, it can reach such "graduate school" mathematical ideas as geodesics, Gaussian curvature, and topological invariants as expressed in the Gauss-Bonnet Theorem.
</description>
<pubDate>Mon, 01 Dec 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5793</guid>
<dc:date>1975-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal to ARPA for Continued Research on A.I. for 1973</title>
<link>https://hdl.handle.net/1721.1/5792</link>
<description>Proposal to ARPA for Continued Research on A.I. for 1973
Minsky, Marvin; Papert, Seymour A.
The Artificial Intelligence Laboratory proposes to continue its work on a group of closely interconnected projects, all bearing on questions about how to make computers able to use more sophisticated kinds of knowledge to solve difficult problems. This proposal explains what we expect to come of this work, and why it seems to us the most profitable direction for research at this time. The core of this proposal is about well-defined specific tasks such as extending the computer's ability to understand information presented as visual scenes, or in natural, human language. Although these specific goals are important enough in themselves, we see their pursuit also as tightly bound to the development of a general theory of the computations needed to produce intelligent processes. Obviously, a certain amount of theory is needed to achieve progress in this, and we maintain that the steps toward a comprehensive theory in this domain must include thorough analysis of very specific phenomena. Our confidence in this strategy is based both on past successes and on our current theory of knowledge structure. Our proposed solutions are still evolving, but they all seem to revolve around new methods of programming and new ways to represent knowledge about programming.
</description>
<pubDate>Fri, 01 Jun 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5792</guid>
<dc:date>1973-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Grammar for the People: Flowcharts of SHRDLU's Grammar</title>
<link>https://hdl.handle.net/1721.1/5791</link>
<description>Grammar for the People: Flowcharts of SHRDLU's Grammar
Rubin, Andee
The grammar which SHRDLU uses to parse sentences is outlined in a series of flowcharts which attempt to modularize and illuminate its structure. In addition, a short discussion of systemic grammar is included.
</description>
<pubDate>Thu, 01 Mar 1973 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5791</guid>
<dc:date>1973-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lambda: The Ultimate Imperative</title>
<link>https://hdl.handle.net/1721.1/5790</link>
<description>Lambda: The Ultimate Imperative
Steele, Guy Lewis, Jr.; Sussman, Gerald Jay
We demonstrate how to model the following common programming constructs in terms of an applicative order language similar to LISP: Simple Recursion, Iteration, Compound Statements and Expressions, GO TO and Assignment, Continuation-Passing, Escape Expressions, Fluid Variables, Call by Name, Call by Need, and Call by Reference. The models require only (possibly self-referent) lambda application, conditionals, and (rarely) assignment. No complex data structures such as stacks are used. The models are transparent, involving only local syntactic transformations. This paper is partly tutorial in intent, gathering all the models together for purposes of context.
</description>
<pubDate>Mon, 01 Mar 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5790</guid>
<dc:date>1976-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A State Space Model for Sensorimotor Control and Learning</title>
<link>https://hdl.handle.net/1721.1/5789</link>
<description>A State Space Model for Sensorimotor Control and Learning
Raibert, Marc
This is the first of a two-part presentation which deals with certain computer controlled manipulator problems. This first part discusses a model which is designed to address problems of motor control, motor learning, adaptation, and sensorimotor integration. In this section the problems are outlined and a solution is given which makes use of a state space memory and a piecewise linearization of the equations of motion. A forthcoming companion article will present the results of tests performed on an implementation of the model.
</description>
<pubDate>Thu, 01 Jan 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5789</guid>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Velocity Space and the Geometry of Planetary Orbits</title>
<link>https://hdl.handle.net/1721.1/5788</link>
<description>Velocity Space and the Geometry of Planetary Orbits
Abelson, Harold; diSessa, Andrea; Rudolph, Lee
We develop a theory of orbits for the inverse-square central force law which differs considerably from the usual deductive approach. In particular, we make no explicit use of calculus. By beginning with qualitative aspects of solutions, we are led to a number of geometrically realizable physical invariants of the orbits. Consequently most of our theorems rely only on simple geometrical relationships. Despite its simplicity, our planetary geometry is powerful enough to treat a wide range of perturbations with relative ease. Furthermore, without introducing any more machinery, we obtain full quantitative results. The paper concludes with suggestions for further research into the geometry of planetary orbits.
</description>
<pubDate>Sun, 01 Dec 1974 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5788</guid>
<dc:date>1974-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Logo Progress Report 1973-1975</title>
<link>https://hdl.handle.net/1721.1/5787</link>
<description>Logo Progress Report 1973-1975
Abelson, Harold; Bamberger, J.; Goldstein, I.; Papert, Seymour A.
Over the past two years, the Logo Project has grown along many dimensions. This document provides an overview in outline form of the main activities and accomplishments of the past as well as the major goals guiding our current research. Research on the design of learning environments, the corresponding development of a theory of learning, and the exploration of teaching activities in these environments are presented.
Revised March 1976
</description>
<pubDate>Mon, 01 Sep 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5787</guid>
<dc:date>1975-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Frame for Frames: Representing Knowledge for Recognition</title>
<link>https://hdl.handle.net/1721.1/5786</link>
<description>A Frame for Frames: Representing Knowledge for Recognition
Kuipers, Benjamin J.
This paper presents a version of frames suitable for representing knowledge for a class of recognition problems. An initial section gives an intuitive model of frames, and illustrates a number of desirable features of such a representation. A more technical example describes a small recognition program for the Blocks World which implements some of these features. The final section discusses the more general significance of the representation and the recognition process used in the example.
</description>
<pubDate>Sat, 01 Mar 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5786</guid>
<dc:date>1975-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model-Driven Geometry Theorem Prover</title>
<link>https://hdl.handle.net/1721.1/5785</link>
<description>Model-Driven Geometry Theorem Prover
Ullman, Shimon
This paper describes a new Geometry Theorem Prover, which was implemented to illuminate some issues related to the use of models in theorem proving. The paper is divided into three parts: Part 1 describes G.T.P. and presents the ideas embedded in it. It concentrates on the forward search method, and gives two examples of proofs produced that way. Part 2 describes the backward search mechanism and presents proofs of a sequence of successively harder problems. The last section of the work addresses the notion of similarity in a problem, defines a notion of semantic symmetry, and compares it to Gelernter's concept of syntactic symmetry.
</description>
<pubDate>Thu, 01 May 1975 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5785</guid>
<dc:date>1975-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Computer Technology to Provide a Creative Learning Environment for Preschool Children</title>
<link>https://hdl.handle.net/1721.1/5784</link>
<description>Using Computer Technology to Provide a Creative Learning Environment for Preschool Children
Perlman, Radia
TORTIS is a system of special terminals together with software which is designed to provide programming capability and be accessible for use by very young children. The system is designed to add capabilities in small increments so that the child is never overwhelmed by too much to learn at one time, and maintains a feeling of control over the environment. This system facilitates learning of various concepts such as relative size of numbers, frames of reference, procedures, conditionals, and recursion, but more importantly it teaches good problem solving techniques and a healthy approach to learning.
</description>
<pubDate>Sat, 01 May 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5784</guid>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Knowledge</title>
<link>https://hdl.handle.net/1721.1/5783</link>
<description>Spatial Knowledge
Kuipers, Benjamin
This paper introduces a model of spatial cognition to describe the states of partial knowledge that people have about the spatial structure of a large-scale environment. Spatial knowledge has several different representations, each of which captures one aspect of the geography. With knowledge stored in multiple representations, we must examine the procedures for assimilating new information, for solving problems, and for communicating information between representations. The model centers on an abstract machine called the TOUR machine, which executes a description of the route to drive the "You Are Here" pointer (a small working memory) through a map that describes the geography. Representations for local and global spatial knowledge are discussed in detail. The model is compared with a survey of the psychological literature. Finally, the directions of necessary and desirable future research are outlined.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5783</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Understanding Computation to Understanding Neural Circuitry</title>
<link>https://hdl.handle.net/1721.1/5782</link>
<description>From Understanding Computation to Understanding Neural Circuitry
Marr, D.; Poggio, Tomaso A.
The CNS needs to be understood at four nearly independent levels of description: (1) that at which the nature of computation is expressed; (2) that at which the algorithms that implement a computation are characterized; (3) that at which an algorithm is committed to particular mechanisms; and (4) that at which the mechanisms are realized in hardware. In general, the nature of a computation is determined by the problem to be solved, the mechanisms that are used depend upon the available hardware, and the particular algorithms chosen depend on the problem and on the available mechanisms. Examples are given of theories at each level.
</description>
<pubDate>Sat, 01 May 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5782</guid>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Laboratory Environment for Applications Oriented Vision and Manipulation</title>
<link>https://hdl.handle.net/1721.1/5781</link>
<description>A Laboratory Environment for Applications Oriented Vision and Manipulation
Horn, Berthold K.P.; Winston, Patrick H.
This report is a brief summary guide to work  done in the M.I.T. Artificial Intelligence  Laboratory directed at the production of tools  for productivity technology research. For  detailed coverage of the work, readers should  use this summary as an introduction to the  reports and papers listed in the bibliography.
</description>
<pubDate>Sat, 01 May 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5781</guid>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cooperative Computation of Stereo Disparity</title>
<link>https://hdl.handle.net/1721.1/5780</link>
<description>Cooperative Computation of Stereo Disparity
Marr, D.; Poggio, Tomaso A.
The extraction of stereo disparity information from two images depends upon establishing a correspondence between them. This article analyzes the nature of the correspondence computation, and derives a cooperative algorithm that implements it. We show that this algorithm successfully extracts information from random-dot stereograms, and its implications for the psychophysics and neurophysiology of the visual system are briefly discussed.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5780</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overview of a Linguistic Theory of Design</title>
<link>https://hdl.handle.net/1721.1/5779</link>
<description>Overview of a Linguistic Theory of Design
Miller, Mark L.; Goldstein, Ira P.
SPADE is a theory of the design of computer programs in terms of complementary planning and debugging processes. An overview of the authors' recent research on this theory is provided. SPADE borrows tools from computational linguistics, such as grammars, augmented transition networks (ATN's), and chart-based parsers, to formalize planning and debugging. The theory has been applied to parsing protocols of programming episodes, constructing a grammar-based editor in which programs are written in a structured fashion, and designing an automatic programming system based on the ATN formalism.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5779</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physiology and Psychology of Color Vision -- A Review</title>
<link>https://hdl.handle.net/1721.1/5778</link>
<description>Physiology and Psychology of Color Vision -- A Review
Taenzer, David
This paper is a review of the anatomy,  physiology, and psychology of human color  vision.
</description>
<pubDate>Sun, 01 Aug 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5778</guid>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A System for Understanding Mathematical FORTRAN Programs</title>
<link>https://hdl.handle.net/1721.1/5777</link>
<description>A System for Understanding Mathematical FORTRAN Programs
Waters, Richard C.
This paper proposes a system which, when  implemented, will be able to understand  mathematical FORTRAN programs such as  those in the IBM Scientific Subroutine  Package. The system takes, as input, a  program and annotation of the program. In  order to understand the program, the system  develops a "plan" for it. The "plan" specifies  the purpose of each feature of the program,  and how these features cooperate in order to  create the behavior exhibited by the program.  The system can use its understanding of the  program to answer questions about it  including questions about the ramifications of  a proposed modification. It is also able to aid  in debugging the program by detecting errors  in it, and by locating the features of the  program which are responsible for an error.  The system should be of significant  assistance to a person who is writing a  program.
</description>
<pubDate>Sun, 01 Aug 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5777</guid>
<dc:date>1976-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence -- A Personal View</title>
<link>https://hdl.handle.net/1721.1/5776</link>
<description>Artificial Intelligence -- A Personal View
Marr, David
The goal of A.I. is to identify and solve useful information processing problems. In so doing, two types of theory arise. Here, they are labelled Types 1 and 2, and their characteristics are outlined. This discussion creates a more than usually rigorous perspective of the subject, from which past work and future prospects are briefly reviewed.
</description>
<pubDate>Mon, 01 Mar 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5776</guid>
<dc:date>1976-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Competence/Performance Dichotomy in Programming</title>
<link>https://hdl.handle.net/1721.1/5775</link>
<description>The Competence/Performance Dichotomy in Programming
Pratt, Vaughan R.
We consider the problem of automating some  of the duties of programmers. We take as our  point of departure the claim that data  management has been automated to the  point where the programmer concerned only  about the correctness (as opposed to the  efficiency) of his program need not involve  himself in any aspect of the storage allocation  problem. We focus on what we feel is a  sensible next step, the problem of automating  aspects of control. To accomplish this we  propose a definition of control based on a fact/ heuristic dichotomy, a variation of Chomsky's  competence/performance dichotomy. The  dichotomy formalizes an idea originating with  McCarthy and developed by Green, Hewitt,  McDermott, Sussman, Hayes, Kowalski and  others. It allows one to operate arbitrarily on  the control component of a program without  affecting the program's correctness, which is  entirely the responsibility of the fact  component. The immediate objectives of our  research are to learn how to program keeping  fact and control separate, and to identify those  aspects of control amenable to automation.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5775</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Occlusion Clues and Subjective Contours</title>
<link>https://hdl.handle.net/1721.1/5774</link>
<description>Occlusion Clues and Subjective Contours
Stevens, Kent A.
The paper describes some experiments with a visual agnosia patient who has lost the ability to perceive subjective contours. The patient's interpretations of simple examples of occlusion indicate that he fails to notice monocular occlusion clues, as well. The findings support the hypothesis that subjective contours are constructions that account for occluded figures, in the absence of objective edges. The patient's ability to perceive contours by stereopsis demonstrates that stereopsis independently gives rise to disparity contours. Furthermore, the overall results strongly suggest that the detection of occlusion is modularized, and that the module for detecting
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5774</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The TV Turtle: A Logo Graphics System for Raster Displays</title>
<link>https://hdl.handle.net/1721.1/5773</link>
<description>The TV Turtle: A Logo Graphics System for Raster Displays
Lieberman, Henry
Until recently, most computer graphics systems have been oriented toward the display of line drawings, continually refreshing the screen from a display list of vectors. Developments such as plasma panel displays and rapidly declining memory prices have now made feasible raster graphics systems, which instead associate some memory with each point on the screen, and display points according to the contents of the memory. This paper discusses the advantages and limitations of such systems. Raster systems permit operations which are not feasible on vector displays, such as reading directly from the screen as well as writing to it, and manipulating two dimensional areas as well as vectors. Conceptual differences between programming for raster and vector systems are illustrated with a description of the author's TV Turtle, a graphics system for raster scan video display terminals. This system is embedded in Logo, a Lisp-like interactive programming language designed for use by kids, and is based on Logo's turtle geometry approach to graphics. Logo provides powerful ideas for using graphics which are easy for kids to learn, yet generalize naturally when advanced capabilities such as primitives for animation and color are added to the system.
</description>
<pubDate>Tue, 01 Jun 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5773</guid>
<dc:date>1976-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overlays: A Theory of Modelling for Computer Aided Instruction</title>
<link>https://hdl.handle.net/1721.1/5772</link>
<description>Overlays: A Theory of Modelling for Computer Aided Instruction
Carr, Brian P.; Goldstein, Ira P.
Overlay modelling is a technique for describing a student's problem solving skills in terms of a modular program designed to be an expert for the given domain. The model is an overlay on the expert program in that it consists of a set of hypotheses regarding the student's familiarity with the skills employed by the expert. The modelling is performed by a set of P rules that are triggered by different sources of evidence, and whose effect is to modify these hypotheses. A P critic monitors these rules to detect discontinuities and inconsistencies in their predictions. A first implementation of overlay modelling exists as a component of WUSOR-II, a CAI program based on artificial intelligence techniques. WUSOR-II coaches a student in the logical and probability skills required to play the computer game WUMPUS. Preliminary evidence indicates that overlay modelling significantly improves the appropriateness of the tutoring program's explanations.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5772</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>NUDGE, A Knowledge-Based Scheduling Program</title>
<link>https://hdl.handle.net/1721.1/5771</link>
<description>NUDGE, A Knowledge-Based Scheduling Program
Goldstein, Ira P.; Roberts, R. Bruce
Traditional scheduling algorithms (using the techniques of PERT charts, decision analysis or operations research) require well-defined quantitative, complete sets of constraints. They are insufficient for scheduling situations where the problem description is ill-defined, involving incomplete, possibly inconsistent and generally qualitative constraints. The NUDGE program uses an extensive knowledge base to debug scheduling requests by supplying missing details and resolving minor inconsistencies. The result is that an informal request is converted to a complete description suitable for a traditional scheduler.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5771</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Musical Intelligence II: Children's Representation of Pitch Relations</title>
<link>https://hdl.handle.net/1721.1/5770</link>
<description>Development of Musical Intelligence II: Children's Representation of Pitch Relations
Bamberger, Jeanne
The work reported here is an outgrowth of studies in the development of musical intelligence and learning that have been underway for about four years. Beginning as one of the activities in the LOGO Lab (a part of the MIT Artificial Intelligence Laboratory), the research has expanded to include more theoretical work in the MIT Division for Study and Research in Education.
</description>
<pubDate>Wed, 01 Dec 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5770</guid>
<dc:date>1976-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal to the Advanced Research Projects Agency</title>
<link>https://hdl.handle.net/1721.1/5769</link>
<description>Proposal to the Advanced Research Projects Agency
Winston, Patrick H.
This is the substance of a proposal submitted in June, 1975, for research in the areas of large data bases and intelligent terminals, applications of machine vision and manipulation, basic studies in Artificial Intelligence, and LISP machine development.
</description>
<pubDate>Sat, 01 May 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5769</guid>
<dc:date>1976-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The FRL Manual</title>
<link>https://hdl.handle.net/1721.1/5768</link>
<description>The FRL Manual
Roberts, R. Bruce; Goldstein, Ira P.
The Frame Representation Language (FRL) is described. FRL is an adjunct to LISP which implements several representation techniques suggested by Minsky's [75] concept of a frame: defaults, constraints, inheritance, procedural attachment and annotation.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5768</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The FRL Primer</title>
<link>https://hdl.handle.net/1721.1/5767</link>
<description>The FRL Primer
Roberts, R. Bruce; Goldstein, Ira P.
The Frame Representation Language (FRL) is an experimental language written to explore the use of frames as a knowledge representation technique. The term 'frame' as used in FRL was inspired by Minsky's [75] development of frame theory. FRL extends the traditional Property List representation scheme by allowing properties to have comments, defaults and constraints, to inherit information from abstract forms of the same type, and to have attached procedures triggered by adding or deleting values, or if a value is needed. We introduce FRL with the aid of a simple example: WHOSIS, a database of AI persons' names, addresses, interests and publications. A second section contains an abridged manual describing FRL's most-used commands and conventions.
</description>
<pubDate>Fri, 01 Jul 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5767</guid>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Annotated Production Systems: A Model for Skill Acquisition</title>
<link>https://hdl.handle.net/1721.1/5766</link>
<description>Annotated Production Systems: A Model for Skill Acquisition
Goldstein, Ira P.; Grimson, Eric
Annotated Production Systems provide a procedural model for skill acquisition by augmenting a production model of the skill with formal commentary describing plans, bugs, and interrelationships between various productions. This commentary supports processes of efficient interpretation, self-debugging and self-improvement. The theory of annotated productions is developed by analyzing the skill of attitude instrument flying. An annotated production interpreter has been written that executes skill models which control a flight simulator. Preliminary evidence indicates that annotated productions effectively model certain bugs and certain learning behaviors characteristic of student pilots.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5766</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Filling in the Gaps: The Shape of Subjective Contours and a Model for Their Generation</title>
<link>https://hdl.handle.net/1721.1/5765</link>
<description>Filling in the Gaps: The Shape of Subjective Contours and a Model for Their Generation
Ullman, Shimon
The properties of isotropy, smoothness, minimum curvature and locality suggest the shape of filled-in contours between two boundary edges. The contours are composed of the arcs of two circles tangent to the given edges, meeting smoothly, and minimizing the total curvature. It is shown that shapes meeting all the above requirements can be generated by a network which performs simple, local computations. It is suggested that the filling-in process plays an important role in the early processing of visual information.
</description>
<pubDate>Fri, 01 Oct 1976 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5765</guid>
<dc:date>1976-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modelling Distributed Systems</title>
<link>https://hdl.handle.net/1721.1/5764</link>
<description>Modelling Distributed Systems
Yonezawa, Akinori; Hewitt, Carl
Distributed systems are multi-processor information processing systems which do not rely on a central shared memory for communication. This paper presents ideas and techniques in modelling distributed systems and their application to Artificial Intelligence. In Sections 2 and 3, we discuss a model of distributed systems and its specification and verification techniques. We introduce a simple example of airline reservation systems in Section 4 and illustrate our specification and verification techniques for this example in the subsequent sections. Then we discuss our further work.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5764</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Plain Talk About Neurodevelopmental Epistemology</title>
<link>https://hdl.handle.net/1721.1/5763</link>
<description>Plain Talk About Neurodevelopmental Epistemology
Minsky, Marvin
This paper is based on a theory being developed in collaboration with Seymour Papert in which we view the mind as an organized society of intercommunicating "agents". Each such agent is, by itself, very simple. The subject of this paper is how that simplicity affects communication between different parts of a single mind and, indirectly, how it may affect inter-personal communications.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5763</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Proof-Checker for Dynamic Logic</title>
<link>https://hdl.handle.net/1721.1/5762</link>
<description>A Proof-Checker for Dynamic Logic
Litvintchouk, S.D.; Pratt, V.R.
We consider the problem of getting a  computer to follow reasoning conducted in  dynamic logic. This is a recently developed  logic of programs that subsumes most  existing first-order logics of programs that  manipulate their environment, including  Floyd's and Hoare's logics of partial  correctness and Manna and Waldinger's logic  of total correctness. Dynamic logic is more  closely related to classical first-order logic  than any other proposed logic of programs.  This simplifies the design of a proof-checker  for dynamic logic. Work in progress on the  implementation of such a program is reported  on, and an example machine-checked proof  is exhibited.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5762</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Synthetic Images to Register Real Images with Surface Models</title>
<link>https://hdl.handle.net/1721.1/5761</link>
<description>Using Synthetic Images to Register Real Images with Surface Models
Horn, Berthold K.P.; Bachman, Brett L.
A number of image analysis tasks can benefit  from registration of the image with a model of  the surface being imaged. Automatic  navigation using visible light or radar images  requires exact alignment of such images with  digital terrain models. In addition, automatic  classification of terrain, using satellite  imagery, requires such alignment to deal  correctly with the effects of varying sun angle  and surface slope. Even inspection  techniques for certain industrial parts may be  improved by this means.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5761</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>SLICES: At the Boundary Between Analysis and Synthesis</title>
<link>https://hdl.handle.net/1721.1/5760</link>
<description>SLICES: At the Boundary Between Analysis and Synthesis
Sussman, Gerald Jay
The algebraic difficulty of determining the  component values in a circuit of known  topology and specifications is large. Expert  circuit designers use terminal equivalence  and power arguments to reduce the apparent  synergy in a circuit so that their computational  power can be focussed. A new descriptive  mechanism, called slices, is introduced.  Slices combine the notion of equivalence with  identification of parameters. Armed with  appropriate slices, an automatic analysis  procedure, Analysis by Propagation of  Constraints can be used to assign the  component values in a circuit. Techniques of  formation, notation, and use of slices are  described. The origin of slices in the  topological design process is indicated.  Slices are shown to be of wider interest in  scientific thought than just in circuit analysis.
</description>
<pubDate>Fri, 01 Jul 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5760</guid>
<dc:date>1977-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Teacher's Guide for Computational Models of Animal Behavior</title>
<link>https://hdl.handle.net/1721.1/5759</link>
<description>Teacher's Guide for Computational Models of Animal Behavior
Abelson, Harold; Goldenberg, Paul
This is an experimental curriculum unit which  suggests how the computational perspective  can be integrated into a subject such as  elementary school biology. In order to  illustrate the interplay of computer and non-computer activities, we have prepared the unit  as a companion to the Elementary School  Science Study "Teacher's Guide to Behavior of  Mealworms." This material is based on use of  the Logo computer language.
</description>
<pubDate>Fri, 01 Apr 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5759</guid>
<dc:date>1977-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Frame-based Text Processing</title>
<link>https://hdl.handle.net/1721.1/5758</link>
<description>Frame-based Text Processing
Rosenberg, Steven T.
This paper presents an overview of a theory of discourse structure, and discusses a model for assimilating text into a frame-based data structure. The model has been applied to the analysis of news articles. The theory assumes sentences contain links to the database which are relatively easy to compute. These links point to prior themes which contain expectations and procedural knowledge. This knowledge is used to assimilate new sentences to these themes. At any given time, only procedural knowledge from the indicated theme is active in processing new sentences.
</description>
<pubDate>Tue, 01 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5758</guid>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Density Reconstruction Using Arbitrary Ray Sampling Schemes</title>
<link>https://hdl.handle.net/1721.1/5757</link>
<description>Density Reconstruction Using Arbitrary Ray Sampling Schemes
Horn, Berthold K.P.
Methods for calculating the distribution of absorption densities in a cross section through an object from density integrals along rays in the plane of the cross section are well known, but are restricted to particular geometries of data collection. So-called convolutional-backprojection-summation methods, used now for parallel ray data, have recently been extended to special cases of the fan-beam reconstruction problem by the addition of pre- and post-multiplication steps. In this paper, I present a technique for deriving reconstruction algorithms for arbitrary ray-sampling schemes: the resulting algorithms entail the use of a general linear operator, but require little more computation than the convolutional methods, which represent special cases.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5757</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Specification and Proof Techniques for Serializers</title>
<link>https://hdl.handle.net/1721.1/5756</link>
<description>Specification and Proof Techniques for Serializers
Atkinson, Russell; Hewitt, Carl
This paper presents an implementation  mechanism, specification language, and  proof techniques for problems involving the  arbitration of concurrent requests to shared  resources. This mechanism is the serializer  which may be described as a kind of  protection mechanism, in that it prevents  improper orders of access to a protected  resource. Serializers are a generalization and  improvement of the monitor mechanism of  Brinch-Hansen and Hoare.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5756</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computation of Immediate Texture Discrimination</title>
<link>https://hdl.handle.net/1721.1/5755</link>
<description>The Computation of Immediate Texture Discrimination
Schatz, Bruce R.
The computation of immediate texture  discrimination involves finding boundaries  between regions of differing texture. Various  textures are examined to investigate the  factors determining discrimination in the  limited domain of line-and-point images. Two  operators embodying necessary properties  are proposed: length and orientation of actual  lines and of local virtual lines between  terminators. It is conjectured that these are  sufficient as well. Relations between this  theory and those of Julesz and of Marr are  discussed. Supporting psychological  evidence is introduced and an  implementation strategy outlined.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5755</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning by Creating and Justifying Transfer Frames</title>
<link>https://hdl.handle.net/1721.1/5754</link>
<description>Learning by Creating and Justifying Transfer Frames
Winston, Patrick H.
Learning is defined to be the computation  done by a student when there is a transfer of  information to him from a teacher. In the  particular kind of learning discussed, the  teacher names a source and destination. In  the sentence, "Robbie is like a fox," fox is the  source and Robbie is the destination. The  student, on analyzing the teacher's instruction,  computes a kind of filter called a transfer  frame. It stands between the source and the  destination and determines what information  is allowed to pass from one to the other.
</description>
<pubDate>Sat, 01 Jan 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5754</guid>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO</title>
<link>https://hdl.handle.net/1721.1/5753</link>
<description>Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO
Steele, Guy Lewis, Jr.
Folklore states that GOTO statements are 'cheap', while procedure calls are 'expensive'. This myth is largely a result of poorly designed language implementations. The historical growth of this myth is considered. Both theoretical ideas and an existing implementation are discussed which debunk this myth. It is shown that the unrestricted use of procedure calls permits great stylistic freedom. In particular, any flowchart can be written as a 'structured' program without introducing extra variables. The difficulty with the GOTO statement and the procedure call is characterized as a conflict between abstract programming concepts and concrete language constructs.
</description>
<pubDate>Sat, 01 Oct 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5753</guid>
<dc:date>1977-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Hand-Printed Algebra for Computer Tutoring</title>
<link>https://hdl.handle.net/1721.1/5752</link>
<description>Understanding Hand-Printed Algebra for Computer Tutoring
Purcell, Stephen C.
This thesis demonstrates how the use of a global context can improve the power of a local character recognizer. The global context considered is a computer tutor of high school algebra that observes a student working algebra problems on a graphics tablet. An algebra tutoring system, integrated with a character recognizer to understand the student's pen strokes, is designed and implemented. This thesis joins together two uses of a computer, intelligent tutoring and tablet communication. Natural communication with computers has been pursued through speech understanding, English text understanding, special purpose languages, hand printing and graphics. This work extends the power of hand-printing understanders by using more varied and higher level sources of knowledge than have been used previously.
</description>
<pubDate>Tue, 01 Feb 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5752</guid>
<dc:date>1977-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>LISP Machine Progress Report</title>
<link>https://hdl.handle.net/1721.1/5751</link>
<description>LISP Machine Progress Report
Bawden, Alan; Greenblatt, Richard; Holloway, Jack; Knight, Thomas; Moon, David; Weinreb, Daniel
This informal paper introduces the LISP  Machine, describes the goals and current  status of the project, and explicates some of  the key ideas. It covers the LISP machine  implementation, LISP as a system language,  input/output, representation of data,  representation of programs, control  structures, storage organization, garbage  collection, the editor, and the current status of  the work.
</description>
<pubDate>Mon, 01 Aug 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5751</guid>
<dc:date>1977-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explicit Control of Reasoning</title>
<link>https://hdl.handle.net/1721.1/5750</link>
<description>Explicit Control of Reasoning
Kleer, Johan de; Doyle, Jon; Steele, Guy L., Jr.; Sussman, Gerald Jay
The construction of expert problem-solving  systems requires the development of  techniques for using modular representations  of knowledge without encountering  combinatorial explosions in the solution effort.  This report describes an approach to dealing  with this problem based on making some  knowledge which is usually implicitly part of  an expert problem solver explicit, thus  allowing this knowledge about control to be  manipulated and reasoned about. The basic  components of this approach involve using  explicit representations of the control structure  of the problem solver, and linking this and  other knowledge manipulated by the expert by  means of explicit data dependencies.
</description>
<pubDate>Wed, 01 Jun 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5750</guid>
<dc:date>1977-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fan-beam Reconstruction Methods</title>
<link>https://hdl.handle.net/1721.1/5749</link>
<description>Fan-beam Reconstruction Methods
Horn, Berthold K.P.
In a previous paper a technique was  developed for finding reconstruction  algorithms for arbitrary ray-sampling  schemes. The resulting algorithms use a  general linear operator, the kernel of which  depends on the details of the scanning  geometry. Here this method is applied to the  problem of reconstructing density  distributions from arbitrary fan-beam data.  The general fan-beam method is then  specialized to a number of scanning  geometries of practical importance. Included  are two cases where the kernel of the general  linear operator can be factored and rewritten  as a function of the difference of coordinates  only and the superposition integral  consequently simplifies into a convolution  integral. Algorithms for these special cases of  the fan-beam problem have been developed  previously by others. In the general case,  however, Fourier transforms and convolutions  do not apply, and linear space-variant  operators must be used. As a demonstration,  details of a fan-beam method for data  obtained with uniform ray-sampling density  are developed.
</description>
<pubDate>Tue, 01 Nov 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5749</guid>
<dc:date>1977-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Introduction to the EMACS Editor</title>
<link>https://hdl.handle.net/1721.1/5748</link>
<description>An Introduction to the EMACS Editor
Ciccarelli, Eugene
EMACS is a real-time editor primarily intended  for display terminals. The intent of this memo  is to describe EMACS in enough detail to  allow a user to edit comfortably in most  circumstances, knowing how to get more  information if needed. Basic commands  described cover buffer editing, file handling,  and getting help. Two sections cover  commands especially useful for editing LISP  code, and text (word- and paragraph-commands). A brief "cultural interest" section  describes the environment that supports  EMACS commands.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5748</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of a Cooperative Stereo Algorithm</title>
<link>https://hdl.handle.net/1721.1/5747</link>
<description>Analysis of a Cooperative Stereo Algorithm
Marr, D.; Palm, G.; Poggio, Tomaso A.
Marr &amp; Poggio (1976) recently described a  cooperative algorithm that solves the  correspondence problem for stereopsis. This  article uses a probabilistic technique to  analyze the convergence of that algorithm, and  derives the conditions governing the stability  of the solution state. The actual results of  applying the algorithm to random-dot  stereograms are compared with the  probabilistic analysis. A satisfactory  mathematical analysis of the asymptotic  behaviour of the algorithm is possible for a  suitable choice of the parameter values and  loading rules, and again the actual  performance of the algorithm under these  conditions is compared with the theoretical  predictions. Finally, some problems raised by  the analysis of this type of "cooperative"  algorithm are briefly discussed.
</description>
<pubDate>Sat, 01 Oct 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5747</guid>
<dc:date>1977-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>AMORD: A Deductive Procedure System</title>
<link>https://hdl.handle.net/1721.1/5746</link>
<description>AMORD: A Deductive Procedure System
Kleer, Johan de; Doyle, Jon; Rich, Charles; Steele, Guy L., Jr.; Sussman, Gerald Jay
We have implemented an interpreter for a  rule-based system, AMORD, based on a non-chronological control structure and a system  of automatically maintained data-dependencies. The purpose of this paper is to  serve as a reference manual and as an  implementation tutorial. We wish to illustrate:  (1) The discipline of explicit control and  dependencies, (2) How to use AMORD, and  (3) One way to implement the mechanisms  provided by AMORD. This paper is organized  into sections. The first section is a short  "reference manual" describing the major  features of AMORD. Next, we present some  examples which illustrate the style of  expression encouraged by AMORD. This style  makes control information explicit in a rule-manipulable form, and depends on an  understanding of the use of non-chronological  justifications for program beliefs as a means  for determining the current set of beliefs. The  third section is a brief description of the Truth  Maintenance System employed by AMORD for  maintaining these justifications and program  beliefs. The fourth section presents a  complete annotated interpreter for AMORD,  written in MacLISP.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5746</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Propagation of Constraints Applied to Circuit Synthesis</title>
<link>https://hdl.handle.net/1721.1/5745</link>
<description>Propagation of Constraints Applied to Circuit Synthesis
Kleer, Johan de; Sussman, Gerald Jay
A major component in the process of design  is synthesis, the determination of the  parameters of the parts of a network given  desiderata for the behavior of the network as a  whole. Traditional automated synthesis  techniques are either restricted to small,  precisely defined classes of circuit functions  for which exact mathematical methods exist or  they depend upon numerical optimization  methods in which it is difficult to determine the  basis for any of the answers generated and  their relations to the design desiderata and  constraints. We are developing a symbolic  computer-aided design tool, SYN, which can  be of assistance to an engineer in the  synthesis of a large class of circuits. The  symbolic methods produce solutions which  are clear and insightful. The dependence of  each parameter on the individual design  desiderata and circuit constraints can be  easily traced.
</description>
<pubDate>Fri, 01 Sep 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5745</guid>
<dc:date>1978-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interim Report of the LOGO Project in the Brookline Public Schools</title>
<link>https://hdl.handle.net/1721.1/5744</link>
<description>Interim Report of the LOGO Project in the Brookline Public Schools
Members of the LOGO Project
The LOGO activities of a group of 16 sixth-grade students, representing a full spectrum of ability, are being documented with a view to developing ways of capturing the learning possibilities of such an environment. The first group of eight subjects have completed 25 closely observed hours, extending over 7 weeks, in a LOGO classroom situated in a Brookline school. This is an interim report on these observations, designed to exhibit the content of what has been learned, and to offer insights into both the variety of cognitive styles of the pupils and the variety of learning situations available to a teacher with which to respond to different pupil styles and abilities. We have a large amount of data available for analysis, and we are interested in looking at this material from several points of view. The current state of our various analyses is presented here, without any effort to prune the considerable redundancy which has been generated in the process of doing this multiple-cut exercise.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5744</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Theory of Local and Global in Computation</title>
<link>https://hdl.handle.net/1721.1/5743</link>
<description>Towards a Theory of Local and Global in Computation
Abelson, Harold
We formulate the rudiments of a method for assessing the difficulty of dividing a computational problem into "independent simpler parts." This work illustrates measures of complexity which attempt to capture the distinction between "local" and "global" computational problems. One such measure is the covering multiplicity, or average number of partial computations which take account of a given piece of data. Another measure reflects the intuitive notion of a "highly interconnected" computational problem, for which subsets of the data cannot be processed "in isolation." These ideas are applied in the setting of computational geometry to show that the connectivity predicate has unbounded covering multiplicity and is highly interconnected; and in the setting of numerical computations to measure the complexity of evaluating polynomials and solving systems of linear equations.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5743</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On "Learnable" Representations of Knowledge: A Meaning for the Computational Metaphor</title>
<link>https://hdl.handle.net/1721.1/5742</link>
<description>On "Learnable" Representations of Knowledge: A Meaning for the Computational Metaphor
diSessa, Andrea A.
The computational metaphor, which proposes the comparison of processes of mind to realizable or imaginable computer activities, suggests a number of educational concerns. This paper discusses some of those concerns, including procedural modes of knowledge representation and control knowledge (knowing what to do). I develop a collection of heuristics for education researchers and curriculum developers which are intended to address the issues raised. Finally, an extensive section of examples is given to concretize those heuristics.
</description>
<pubDate>Thu, 01 Sep 1977 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5742</guid>
<dc:date>1977-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Hypothetical Monologue Illustrating the Knowledge Underlying Program Analysis</title>
<link>https://hdl.handle.net/1721.1/5741</link>
<description>A Hypothetical Monologue Illustrating the Knowledge Underlying Program Analysis
Shrobe, Howard E.; Waters, Richard C.; Sussman, Gerald J.
Automated Program Analysis is the process  of discovering decompositions of a system  into sub-units such that the behavior of the  whole program can be inferred from the  behavior of its parts. Analysis can be  employed to increase the explanatory power  of a program understanding system. We  identify several techniques which are useful  for automated program analysis. Chief among  these is the identification and classification of  the macro-scale units of programming  knowledge which are characteristic of the  problem domain. We call these plans. This  paper presents a summary of how plans can  be used in program analysis in the form of a  hypothetical monologue. We also show a  small catalogue of plans which are  characteristic of AI programming. Finally, we  present some techniques which facilitate plan  recognition.
</description>
<pubDate>Mon, 01 Jan 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5741</guid>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Genetic Epistemology of Rule Systems</title>
<link>https://hdl.handle.net/1721.1/5740</link>
<description>The Genetic Epistemology of Rule Systems
Goldstein, Ira P.
I shall describe a model of the evolution of the  rule-structured knowledge that serves as a  cornerstone of our development of computer-based coaches. The key idea is a graph  structure whose nodes represent rules, and  whose links represent various evolutionary  relationships such as generalization,  correction, and refinement. This graph guides  both student modelling and tutoring as  follows: the coach models the student in  terms of nodes in this graph, and selects  tutoring strategies for a given rule on the  basis of its genetic links. It also suggests a  framework for a theory of learning in which the  graph serves as a memory structure  constructed by the student by means of  processes corresponding to the various links.  Given this framework, a learning complexity  measure can be defined in terms of the  topology of the graph.
</description>
<pubDate>Sun, 01 Jan 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5740</guid>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>K-Lines: A Theory of Memory</title>
<link>https://hdl.handle.net/1721.1/5739</link>
<description>K-Lines: A Theory of Memory
Minsky, Marvin
Most theories of memory suggest that when we learn or memorize something, some "representation" of that something is constructed, stored and later retrieved. This raises questions like: How is information represented? How is it stored? How is it retrieved? Then, how is it used? This paper tries to deal with all these at once. When you get an idea and want to "remember" it, you create a "K-line" for it. When later activated, the K-line induces a partial mental state resembling the one that created it. A "partial mental state" is a subset of those mental agencies operating at one moment. This view leads to many ideas about the development, structure and physiology of memory, and about how to implement frame-like representations in a distributed processor.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5739</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Director Guide</title>
<link>https://hdl.handle.net/1721.1/5738</link>
<description>Director Guide
Kahn, Kenneth M.
Director is a programming language designed for dynamic graphics, artificial intelligence, and naïve users. It is based upon the actor or object-oriented approach to programming and resembles Act 1 and SmallTalk. Director extends MacLisp by adding a small set of primitive actors and the ability to create new ones. Its graphical features include an interface to the TV turtle, pseudo-parallelism, many animation primitives, and a primitive actor for making and recording "movies". For artificial intelligence programming Director provides a pattern-directed data base associated with each actor, an inheritance hierarchy, pseudo-parallelism, and a means of conveniently creating non-standard control structures. For use by relatively naïve programmers Director is appropriate because of its stress upon very powerful, yet conceptually simple primitives and its verbose, simple syntax based upon pattern matching. Director code can be turned into optimized Lisp which in turn can be compiled into machine code.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5738</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Graphics Using Quasi Parallelism</title>
<link>https://hdl.handle.net/1721.1/5737</link>
<description>Dynamic Graphics Using Quasi Parallelism
Kahn, Kenneth M.; Hewitt, Carl
Dynamic computer graphics is best represented as several processes operating in parallel. Full parallel processing, however, entails much complex mechanism, making it difficult to write simple, intuitive programs for generating computer animation. This paper presents a simple means of attaining the appearance of parallelism and the ability to program the graphics in a conceptually parallel fashion without the complexity of a more general parallel mechanism. Each entity on the display screen can be independently programmed to move, turn, change size, color or shape, and to interact with other entities.
</description>
<pubDate>Thu, 01 Jun 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5737</guid>
<dc:date>1978-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>EMACS: The Extensible, Customizable, Self-Documenting Display Editor</title>
<link>https://hdl.handle.net/1721.1/5736</link>
<description>EMACS: The Extensible, Customizable, Self-Documenting Display Editor
Stallman, Richard M.
EMACS is a display editor which is  implemented in an interpreted high level  language. This allows users to extend the  editor by replacing parts of it, to experiment  with alternative command languages, and to  share extensions which are generally useful.  The ease of extension has contributed to the  growth of a large set of useful features. This  paper describes the organization of the  EMACS system, emphasizing the way in  which extensibility is achieved and used.
</description>
<pubDate>Sun, 01 Mar 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5736</guid>
<dc:date>1981-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Understanding Mathematics</title>
<link>https://hdl.handle.net/1721.1/5735</link>
<description>Understanding Understanding Mathematics
Michener, Edwina Rissland
In this paper we look at some of the  ingredients and processes involved in the  understanding of mathematics. We analyze  elements of mathematical knowledge,  organize them in a coherent way and take  note of certain classes of items that share  noteworthy roles in understanding. We thus  build a conceptual framework in which to talk  about mathematical knowledge. We then use  this representation to describe the acquisition  of understanding. We also report on  classroom experience with these ideas.
</description>
<pubDate>Tue, 01 Aug 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5735</guid>
<dc:date>1978-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-Monotonic Logic I</title>
<link>https://hdl.handle.net/1721.1/5734</link>
<description>Non-Monotonic Logic I
McDermott, Drew; Doyle, Jon
"Non-monotonic" logical systems are logics in  which the introduction of new axioms can  invalidate old theorems. Such logics are very  important in modeling the beliefs of active  processes which, acting in the presence of  incomplete information, must make and  subsequently revise predictions in light of new  observations. We present the motivation and  history of such logics. We develop model and  proof theories, a proof procedure, and  applications for one important non-monotonic  logic. In particular, we prove the  completeness of the non-monotonic predicate  calculus and the decidability of the non-monotonic sentential calculus. We also  discuss characteristic properties of this logic  and its relationship to stronger logics, logics  of incomplete information, and truth  maintenance systems.
</description>
<pubDate>Tue, 01 Aug 1978 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5734</guid>
<dc:date>1978-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Truth Maintenance System</title>
<link>https://hdl.handle.net/1721.1/5733</link>
<description>A Truth Maintenance System
Doyle, Jon
To choose their actions, reasoning programs must be able to make assumptions and subsequently revise their beliefs when discoveries contradict these assumptions. The Truth Maintenance System (TMS) is a problem solver subsystem for performing these functions by recording and maintaining the reasons for program beliefs. Such recorded reasons are useful in constructing explanations of program actions and in guiding the course of action of a problem solver. This paper describes (1) the representations and structure of the TMS, (2) the mechanisms used to revise the current set of beliefs, (3) how dependency-directed backtracking changes the current set of assumptions, (4) techniques for summarizing explanations of beliefs, (5) how to organize problem solvers into "dialectically arguing" modules, (6) how to revise models of the belief systems of others, and (7) methods for embedding control structures in patterns of assumptions. We stress the need of problem solvers to choose between alternative systems of beliefs, and outline a mechanism by which a problem solver can employ rules guiding choices of what to believe, what to want, and what to do.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5733</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning and Reasoning by Analogy: The Details</title>
<link>https://hdl.handle.net/1721.1/5732</link>
<description>Learning and Reasoning by Analogy: The Details
Winston, Patrick H.
We use analogy when we say something is a  Cinderella story and when we learn about  resistors by thinking about water pipes. We  also use analogy when we learn subjects like  Economics, Medicine and Law. This paper  presents a theory of analogy and describes  an implemented system that embodies the  theory. The specific competence to be  understood is that of using analogies to do  certain kinds of learning and reasoning.  Learning takes place when analogy is used to  generate a constraint description in one  domain, given a constraint description in  another, as when we learn Ohm's law by way  of knowledge about water pipes. Reasoning  takes place when analogy is used to answer  questions about one situation, given another  situation that is supposed to be a precedent,  as when we answer questions about Hamlet  by way of knowledge about Macbeth. The input  language used and the treatment of words  implying CAUSE have been improved. AIM  632, "Learning New Principles from  Precedents and Exercises," describes these  improvements and subsequent work. It is, at  this writing, in publication in the Artificial  Intelligence Journal.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5732</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of LISP-based Processors, or SCHEME: A Dielectric LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode</title>
<link>https://hdl.handle.net/1721.1/5731</link>
<description>Design of LISP-based Processors, or SCHEME: A Dielectric LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode
Steele, Guy Lewis, Jr.; Sussman, Gerald Jay
We present a design for a class of computers whose 'instruction sets' are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data. LISP is therefore a suitable language around which to design a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. The record structures can be organized into trees or graphs. An instruction set can be designed for programs expressed as such trees. A processor can interpret these trees in a recursive fashion, and provide automatic storage management for the record structures. We describe here the basic ideas behind the architecture, and for concreteness give a specific instruction set (on which variations are certainly possible). We also discuss the similarities and differences between these ideas and those of traditional architectures. A prototype VLSI microprocessor has been designed and fabricated for testing. It is a small-scale version of the ideas presented here, containing a sufficiently complete instruction interpreter to execute small programs, and a rudimentary storage allocator. We intend to design and fabricate a full-scale VLSI version of this architecture in 1979.
</description>
<pubDate>Thu, 01 Mar 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5731</guid>
<dc:date>1979-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Making Aesthetic Choices</title>
<link>https://hdl.handle.net/1721.1/5730</link>
<description>Making Aesthetic Choices
Kahn, Kenneth M.
A framework is presented for making choices that are primarily constrained by aesthetic, as opposed to pragmatic, considerations. An example of the application of this framework is a computer system called "Ani", capable of making simple computer animation in response to high-level incomplete story descriptions. Aesthetic choice is presented as a parallel computation in which each choice point gathers together and evaluates suggestions. When faced with difficulties these choices can be postponed. The order in which inter-dependent choices are made is strongly influenced by the focus of the problem.
</description>
<pubDate>Thu, 01 Mar 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5730</guid>
<dc:date>1979-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Differential Geometry, Surface Patches and Convergence Methods</title>
<link>https://hdl.handle.net/1721.1/5729</link>
<description>Differential Geometry, Surface Patches and Convergence Methods
Grimson, W.E.L.
The problem of constructing a surface from  the information provided by the Marr-Poggio  theory of human stereo vision is investigated.  It is argued that not only does this theory  provide explicit boundary conditions at certain  points in the image, but that the imaging  process also provides implicit conditions on  all other points in the image. This argument is  used to derive conditions on possible  algorithms for computing the surface.  Additional constraining principles are applied  to the problem; specifically that the process  be performable by a local-support parallel  network. Some mathematical tools,  differential geometry, Coons surface patches  and iterative methods of convergence,  relevant to the problem of constructing the  surface are outlined. Specific methods for  actually computing the surface are examined.
</description>
<pubDate>Thu, 01 Feb 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5729</guid>
<dc:date>1979-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Aided Evolutionary Design for Digital Integrated Systems</title>
<link>https://hdl.handle.net/1721.1/5728</link>
<description>Computer Aided Evolutionary Design for Digital Integrated Systems
Sussman, Gerald Jay; Holloway, Jack; Knight, Thomas F., Jr.
We propose to develop a computer aided  design tool which can help an engineer deal  with system evolution from the initial phases  of design right through the testing and  maintenance phases. We imagine a design  system which can function as a junior  assistant. It provides a total conversational  and graphical environment. It remembers the  reasons for design choices and can retrieve  and do simple deductions with them. Such a  system can provide a designer with  information relevant to a proposed  modification and can help him understand the  consequences of simple modifications by  pointing out the structures and functions  which will be affected by modifications. The  designer's assistant will maintain a vast  amount of such annotation on the structure  and function of the system being evolved and  will be able to retrieve the appropriate  annotation and remind the designer about the  features which he installed too long ago to  remember, or which were installed by other  designers who work with him. We will develop  the fundamental principles behind such a  designer's assistant and we will construct a  prototype system which meets many of these  desiderata.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5728</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Directional Selectivity and Its Use in Early Visual Processing</title>
<link>https://hdl.handle.net/1721.1/5727</link>
<description>Directional Selectivity and Its Use in Early Visual Processing
Marr, D.; Ullman, S.
The construction of directionally selective  units and their use in the processing of visual  motion are considered.
</description>
<pubDate>Fri, 01 Jun 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5727</guid>
<dc:date>1979-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Logo Music Projects: Experiments in Musical Perception and Design</title>
<link>https://hdl.handle.net/1721.1/5726</link>
<description>Logo Music Projects: Experiments in Musical Perception and Design
Bamberger, Jeanne
This memo gives a series of experiments  which one can use to get a better  understanding of how music works and how  music is apprehended by an active and  knowing listener. It does so by using the  children's computer language, LOGO, and  capitalizes on the use of procedural thinking  and other programming concepts (for  example, the use of variables) in the  designing and analysis of melody and rhythm.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5726</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Dream of a Lifetime: A Lazy Scoping Mechanism</title>
<link>https://hdl.handle.net/1721.1/5725</link>
<description>The Dream of a Lifetime: A Lazy Scoping Mechanism
Steele, Guy Lewis, Jr.; Sussman, Gerald Jay
We define a "rack", a data abstraction hybrid of  a register and a stack. It is used for  encapsulating the behavior of the kind of  register whose contents may have an extent  which requires that it be saved during the  execution of an unknown piece of code. A rack  can be implemented cleverly to achieve  performance benefits over the usual  implementation of a stack discipline. The  basic idea is that we interpose a state  machine controller between the rack  abstraction and its stack/registers. This  controller can act as an on-the-fly run-time  peephole optimizer, eliding unnecessary  stack operations. We demonstrate the sorts of  savings one might expect by using cleverly  implemented racks in the context of a  particular caller-saves implementation of an  interpreter for the SCHEME dialect of LISP. For  sample problems we can expect that only one  out of every four pushes that would be done by  a conventional machine will be done by a  clever version.
</description>
<pubDate>Thu, 01 Nov 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5725</guid>
<dc:date>1979-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Edge Detection</title>
<link>https://hdl.handle.net/1721.1/5724</link>
<description>Theory of Edge Detection
Marr, D.; Hildreth, E.
A theory of edge detection is presented.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5724</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some Properties of Discontinuities in the Image Irradiance Equation</title>
<link>https://hdl.handle.net/1721.1/5723</link>
<description>Some Properties of Discontinuities in the Image Irradiance Equation
Bruss, Anna R.
The image irradiance equation is a first order  partial differential equation. Part of this paper  is a "comprehensive" guide to solving this  kind of equation. The special structure of the  image irradiance equation is explored in order  to understand the relation of discontinuities in  the surface properties and in the image  intensities.
</description>
<pubDate>Sun, 01 Apr 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5723</guid>
<dc:date>1979-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Selected Descriptor-Indexed Bibliography to the Literature on Belief Revision</title>
<link>https://hdl.handle.net/1721.1/5722</link>
<description>A Selected Descriptor-Indexed Bibliography to the Literature on Belief Revision
Doyle, Jon; London, Philip
This article presents an overview of research  in an area loosely called belief revision. Belief  revision concentrates on the issue of revising  systems of beliefs to reflect perceived  changes in the environment or acquisition of  new information. The paper includes both an  essay surveying the literature and a  descriptor-indexed bibliography of over 200  papers and books.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5722</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraints on the Visual Interpretation of Surface Contours</title>
<link>https://hdl.handle.net/1721.1/5721</link>
<description>Constraints on the Visual Interpretation of Surface Contours
Stevens, Kent A.
This article examines the computational  problems underlying the 3-D interpretation of  surface contours. A surface contour is the  image of a curve across a physical surface,  such as the edge of a shadow cast across a  surface, a gloss contour, wrinkle, seam, or  pigmentation marking. Surface contours by  and large are not as restricted as occluding  contours and therefore pose a more difficult  interpretation problem. Nonetheless, we are  adept at perceiving a definite 3-D surface from  even simple line drawings (e.g. graphical  depictions of continuous functions of two  variables). The solution of a specific surface  shape comes by assuming that the physical  curves are particularly restricted in their  geometric relationship to the underlying  surface. These geometric restrictions are  examined.
</description>
<pubDate>Thu, 01 Mar 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5721</guid>
<dc:date>1979-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Computational Theory of Semantic Memory</title>
<link>https://hdl.handle.net/1721.1/5720</link>
<description>Towards a Computational Theory of Semantic Memory
Vaina, Lucia M.
Research in memory has been a frustrating task, not least because of our intimate familiarity with what we are trying to understand, and partly also because the human cognitive system has developed as an interactive whole; it is difficult to isolate its component modules, a necessary prerequisite for their thorough elucidation. Memory cannot be studied in isolation since it is essentially only an adjunct to the proper execution of our ordinary information processing tasks. In order to try to formulate specifically some of the basic requirements of memory we must therefore examine the structure of the processing tasks for which it is used.
</description>
<pubDate>Fri, 01 Feb 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5720</guid>
<dc:date>1980-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Parallel Processing for Problem Solving</title>
<link>https://hdl.handle.net/1721.1/5719</link>
<description>Using Parallel Processing for Problem Solving
Kornfeld, William A.
Parallel processing as a conceptual aid in the design of programs for problem solving applications is developed. A pattern-directed invocation language known as Ether is introduced. Ether embodies two notions in language design: activities and viewpoints. Activities are the basic parallel processing primitive. Different goals of the system can be pursued in parallel by placing them in separate activities. Language primitives are provided for manipulating running activities. Viewpoints are a generalization of context mechanisms and serve as a device for representing multiple world models. A number of problem solving schemes are developed making use of viewpoints and activities. It is demonstrated that many kinds of heuristic search that are commonly implemented using backtracking can be reformulated to use parallel processing, with advantage in control over the problem solving behavior. The semantics of Ether are such that deadlock and race conditions, which plague many languages for parallel processing, cannot occur. The programs presented are quite simple to understand.
</description>
<pubDate>Sat, 01 Dec 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5719</guid>
<dc:date>1979-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>CADR</title>
<link>https://hdl.handle.net/1721.1/5718</link>
<description>CADR
Knight, Thomas F., Jr.; Moon, David A.; Holloway, Jack; Steele, Guy L., Jr.
The CADR machine, a revised version of the  CONS machine, is a general-purpose, 32-bit  microprogrammable processor which is the  basis of the Lisp-machine system, a new  computer system being developed by the  Laboratory as a high-performance,  economical implementation of Lisp. This  paper describes the CADR processor and  some of the associated hardware and low-level software.
</description>
<pubDate>Tue, 01 May 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5718</guid>
<dc:date>1979-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extra-Retinal Signals Influence Induced Motion: A New Kinetic Illusion</title>
<link>https://hdl.handle.net/1721.1/5717</link>
<description>Extra-Retinal Signals Influence Induced Motion: A New Kinetic Illusion
Prazdny, K.F.; Brady, Mike
When a moving dot, which is tracked by the  eyes and enclosed in a moving framework,  suddenly stops while the enclosing  framework continues its motion, the dot is  seen to describe a curved path. This illusion  can be explained only by assuming that extra-retinal signals are taken into account in  interpreting retinal information. The form of the  illusion, and the fact that the phenomenal path  cannot be explained on the basis of positional  information alone, suggests that the  perceived path is computed by integrating  (instantaneous) velocity information over time.  A vector addition model embodying a number  of simplifying assumptions is found to  qualitatively fit the experimental data. A  number of follow-up studies are suggested.
</description>
<pubDate>Thu, 01 May 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5717</guid>
<dc:date>1980-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Computer Implementation of a Theory of Human Stereo Vision</title>
<link>https://hdl.handle.net/1721.1/5716</link>
<description>A Computer Implementation of a Theory of Human Stereo Vision
Grimson, W.E.L.
Recently, Marr and Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented and consists of five steps: (1) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by $\nabla^{2}G$, the Laplacian of a Gaussian function. (2) Zero-crossings in the filtered images are found along horizontal scan lines. (3) For each mask size, matching takes place between zero-crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, Marr and Poggio showed that false targets pose only a simple problem. (4) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence, so that small disparities can be dealt with at high resolution. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2 dimensional sketch. To support the sufficiency of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. As well, statistical assumptions made by Marr and Poggio are supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.
</description>
<pubDate>Tue, 01 Jan 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5716</guid>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring Shape from Motion Fields</title>
<link>https://hdl.handle.net/1721.1/5715</link>
<description>Inferring Shape from Motion Fields
Hoffman, D.D.
The human visual system has the ability to utilize motion information to infer the shapes of surfaces. More specifically, we are able to derive descriptions of rigidly rotating smooth surfaces entirely from the orthographic projection of the motions of their surface markings. A computational analysis of this ability is proposed based on the "shape from motion" proposition. This proposition states that given the first spatial derivatives of the orthographically projected velocity and acceleration fields of a rigidly rotating regular surface, the angular velocity and the surface normal at each visible point on that surface are uniquely determined up to a reflection.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5715</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape from Regular Patterns: An Example of Constraint Propagation in Vision</title>
<link>https://hdl.handle.net/1721.1/5714</link>
<description>Shape from Regular Patterns: An Example of Constraint Propagation in Vision
Ikeuchi, Katsushi
An algorithm is proposed for obtaining local surface orientation from the apparent distortion of surface patterns in an image. A spherical projection is used for imaging. A mapping is defined from points on this image sphere to a locus of points on the Gaussian sphere which corresponds to possible surface orientations. This mapping is based on the measurement of the local distortions of a repeated known texture pattern due to the imaging projection. This locus of possible surface orientations can be reduced to a unique orientation at each point on the image sphere by using three vantage points and taking the intersection of the loci of possible orientations derived from each vantage point. It is also possible to derive a unique surface orientation at each image point through the use of an iterative constraint propagation technique along with the orientation information available at occluding boundaries. Both methods are demonstrated for real images.
</description>
<pubDate>Sat, 01 Mar 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5714</guid>
<dc:date>1980-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical Shape from Shading and Occluding Contours in a Single View</title>
<link>https://hdl.handle.net/1721.1/5713</link>
<description>Numerical Shape from Shading and Occluding Contours in a Single View
Ikeuchi, Katsushi
An iterative method of using occluding  boundary information is proposed to compute  surface slope from shading. We use a  stereographic space rather than the more  commonly used gradient space in order to  express occluding boundary information.  Further, we use "average" smoothness  constraints rather than the more obvious  "closed loop" smoothness constraints. We  develop alternate constraints from the  definition of surface smoothness, since the  closed loop constraints do not work in  stereographic space. We solve the image  irradiance equation iteratively using a Gauss-Seidel method applied to the constraints and  boundary information. Numerical experiments  show that the method is effective. Finally, we  analyze SEM (Scanning Electron Microscope)  pictures using this method. Other applications  are also proposed.
</description>
<pubDate>Thu, 01 Nov 1979 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5713</guid>
<dc:date>1979-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Design Procedure Language Manual</title>
<link>https://hdl.handle.net/1721.1/5712</link>
<description>The Design Procedure Language Manual
Batali, John; Hartheimer, Anne
This manual describes the Design Procedure  Language (DPL) for LSI design. DPL creates  and maintains a representation of a design in  a hierarchically organized, object-oriented  LISP data-base. Designing in DPL involves  writing programs (Design Procedures) which  construct and manipulate descriptions of a  project. The programs use a call-by-keyword  syntax and may be entered interactively or  written by other programs. DPL is the layout  language for the LISP-based Integrated  Circuit design system (LISPIC) being  developed at the Artificial Intelligence  Laboratory at MIT. The LISPIC design  environment will combine a large set of  design tools that interact through a common  data-base. This manual is for prospective  users of the DPL and covers the information  necessary to design a project with the  language. The philosophy and goals of the  LISPIC system as well as some details of the  DPL data-base are also discussed.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5712</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representation and Recognition of the Movement of Shapes</title>
<link>https://hdl.handle.net/1721.1/5711</link>
<description>Representation and Recognition of the Movement of Shapes
Marr, David; Vaina, Lucia
The problems posed by the representation and recognition of the movements of 3-D shapes are analyzed. A representation is proposed for the movements of shapes that lie within the scope of Marr &amp; Nishihara's (1978) 3-D model representation of static shapes. The basic problem is how to segment a stream of movement into pieces, each of which can be described separately. The representation proposed here is based upon segmenting a movement at moments when a component axis, e.g. an arm, starts to move relative to its local coordinate frame (here, the torso). Thus, for example, walking is divided into a sequence of the stationary states between each swing of the arms and legs, and the actual motions between the stationary points (relative to the torso, not the ground). This representation is called the state-motion-state (SMS) moving shape representation, and several examples of its application are given.
</description>
<pubDate>Wed, 01 Oct 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5711</guid>
<dc:date>1980-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fundamental Scheme for Train Scheduling</title>
<link>https://hdl.handle.net/1721.1/5710</link>
<description>Fundamental Scheme for Train Scheduling
Fukumori, Koji
Traditionally, the compilation of long-term  timetables for high-density rail service with  multiple classes of trains on the same track is  a job for expert people, not computers. We  propose an algorithm that uses the range-constriction search technique to schedule the  timing and pass-through relations of trains  smoothly and efficiently. The program  determines how the timing of certain trains  constrains the timing of others, finds possible  time regions and pass-through relations and  then evaluates the efficiency of train  movement for each pass-through relation.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5710</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Computational Theory of Early Visual Processing In Reading</title>
<link>https://hdl.handle.net/1721.1/5709</link>
<description>Toward a Computational Theory of Early Visual Processing In Reading
Brady, Mike
This paper is the first of a series aimed at developing a theory of early visual processing in reading. We suggest that there has been a close parallel in the development of theories of reading and theories of vision in Artificial Intelligence. We propose to exploit and extend recent results in Computer Vision to develop an improved model of early processing in reading. This first paper considers the problem of isolating words in text based on the information which Marr and Hildreth's (1980) theory asserts is available in the parafovea. We show in particular that the findings of Fisher (1975) on reading transformed texts can be accounted for without postulating the need for complex interactions between early processing and downloading information as he suggests. The paper concludes with a brief discussion of the problem of integrating information over successive saccades and relates the earlier analysis to the empirical findings of Rayner.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5709</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Progressive Construction of Mind</title>
<link>https://hdl.handle.net/1721.1/5708</link>
<description>The Progressive Construction of Mind
Lawler, Robert W.
We propose a vision of the structure of  knowledge and processes of learning based  upon the particularity of experience. Highly  specific cognitive structures are constructed  through activities in limited domains of  experience. For new domains, new cognitive  structures develop from and call upon the  knowledge of prior structures. Applying this  vision of disparate cognitive structures to a  detailed case study, we present an  interpretation of addition-related matter from  the corpus and trace the interplay of specific  experiences with the interactions of ascribed,  disparate structures. The interpretive focus is  on learning processes through which a  broadly applicable skill emerges from the  interaction and integration of knowledge  based on specific, particular experiences.
</description>
<pubDate>Sun, 01 Jun 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5708</guid>
<dc:date>1980-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Primer for R users</title>
<link>https://hdl.handle.net/1721.1/5707</link>
<description>Primer for R users
Jones, Judi
R is a text formatter. The information in this primer is meant to explain, in simple English, the basic commands needed to use R. Input for R is prepared on computer systems using a text editor. Which editor is employed depends on which computer system you use and your personal preference. Almost every characteristic of a document can be controlled or changed if necessary.
</description>
<pubDate>Mon, 01 Sep 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5707</guid>
<dc:date>1980-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Three-Step Procedure for Language Generation</title>
<link>https://hdl.handle.net/1721.1/5706</link>
<description>A Three-Step Procedure for Language Generation
Katz, Boris
This paper outlines a three-step plan for generating English text from any semantic representation by applying a set of syntactic transformations to a collection of kernel sentences. The paper focuses on describing a program which realizes the third step of this plan. Step One separates the given representation into groups and generates from each group a set of kernel sentences. Step Two must decide, based upon both syntactic and thematic considerations, the set of transformations that should be performed upon each set of kernels. The output of the first two steps provides the "TASK" for Step Three. Each element of the TASK corresponds to the generation of one English sentence, and in turn may be defined as a triple consisting of: (a) a list of kernel phrase markers; (b) a list of transformations to be performed upon the list of kernels; (c) a "syntactic separator" to separate or connect generated sentences. Step Three takes as input the results of Step One and Step Two. The program which implements Step Three "reads" the TASK, executes the transformations indicated there, combines the altered kernels of each set into a sentence, performs a pronominalization process, and finally produces the appropriate English word string. This approach subdivides a hard problem into three more manageable and relatively independent pieces. It uses linguistically motivated theories at Step Two and Step Three. As implemented so far, Step Three is small and highly efficient. The system is flexible; all the transformations can be applied in any order. The system is general; it can be adapted easily to many domains.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5706</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interfacing the One-Dimensional Scanning of an Image with the Applications of Two-Dimensional Operators</title>
<link>https://hdl.handle.net/1721.1/5705</link>
<description>Interfacing the One-Dimensional Scanning of an Image with the Applications of Two-Dimensional Operators
Ullman, Shimon
To interface between the one-dimensional scanning of an image and the application of a two-dimensional operator, an intermediate storage is required. For a square image of size n^2 and a square operator of size m^2, the minimum intermediate storage is shown to be n·(m-1). An interface of this size can be conveniently realized by using a serpentine delay line. New kinds of imagers would be required to reduce the size of the intermediate storage below n·(m-1).
</description>
<pubDate>Tue, 01 Apr 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5705</guid>
<dc:date>1980-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extending a Powerful Idea</title>
<link>https://hdl.handle.net/1721.1/5704</link>
<description>Extending a Powerful Idea
Lawler, Robert W.
Mathematics is much more than the  manipulation of numbers. At its best, it  involves simple, clear examples of thought so  apt to the world we live in that those examples  provide guidance for our thinking about  problems we meet subsequently. We call  such examples, capable of heuristic use,  POWERFUL IDEAS, after Papert (1980). This  article documents a child's introduction to a  specific powerful idea in a computer  environment. We trace his extensions of that  idea to other problem areas, the first similar to  his initial experience and the second more  remote from it.
</description>
<pubDate>Tue, 01 Jul 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5704</guid>
<dc:date>1980-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Destructive Reordering of CDR-Coded Lists</title>
<link>https://hdl.handle.net/1721.1/5703</link>
<description>Destructive Reordering of CDR-Coded Lists
Steele, Guy L., Jr.
Linked list structures can be compactly  represented by encoding the CDR ("next")  pointer in a two-bit field and linearizing list  structures as much as possible. This "CDR-coding" technique can save up to 50% on  storage for linked lists. The RPLACD (alter  CDR pointer) operation can be  accommodated under such a scheme by  using indirect pointers. Standard destructive  reordering algorithms, such as REVERSE  and SORT, use RPLACD quite heavily. If these  algorithms are used on CDR-coded lists, the  result is a proliferation of indirect pointers. We  present here algorithms for destructive  reversal and sorting of CDR-coded lists which  avoid creation of indirect pointers. The  essential idea is to note that a general list can  be viewed as a linked list of array-like  "chunks". The algorithm applied to such  "chunky lists" is a fusion of separate array- and list-specific algorithms; intuitively, the  array-specific algorithm is applied to each  chunk, and the list algorithm to the list with  each chunk considered as a single element.
</description>
<pubDate>Fri, 01 Aug 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5703</guid>
<dc:date>1980-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Planning of Manipulator Transfer Movements</title>
<link>https://hdl.handle.net/1721.1/5702</link>
<description>Automatic Planning of Manipulator Transfer Movements
Lozano-Perez, Tomas
This paper deals with the class of problems  that involve finding where to place or how to  move a solid object in the presence of  obstacles. The solution to this class of  problems is essential to the automatic  planning of manipulator transfer movements,  i.e. the motions to grasp a part and place it at  some destination. This paper presents  algorithms for planning manipulator paths  that avoid collisions with objects in the  workspace and for choosing safe grasp  points on objects. These algorithms allow  planning transfer movements for Cartesian  manipulators. The approach is based on a  method of computing an explicit  representation of the manipulator  configurations that would bring about a  collision.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5702</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Jokes and the Logic of the Cognitive Unconscious</title>
<link>https://hdl.handle.net/1721.1/5701</link>
<description>Jokes and the Logic of the Cognitive Unconscious
Minsky, Marvin
Freud's theory of jokes explains how they overcome the mental "censors" that make it hard for us to think "forbidden" thoughts. But his theory did not work so well for humorous nonsense as for other comical subjects. In this essay I argue that the different forms of humor can be seen as much more similar, once we recognize the importance of knowledge about knowledge and, particularly, aspects of thinking concerned with recognizing and suppressing bugs: ineffective or destructive thought processes. When seen in this light, much humor that at first seems pointless, or mysterious, becomes more understandable.
</description>
<pubDate>Sat, 01 Nov 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5701</guid>
<dc:date>1980-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Flavors: Message Passing in the Lisp Machine</title>
<link>https://hdl.handle.net/1721.1/5700</link>
<description>Flavors: Message Passing in the Lisp Machine
Weinreb, Daniel; Moon, David
The object oriented programming style used  in the Smalltalk and Actor languages is  available in Lisp Machine Lisp, and used by  the Lisp Machine software system. It is used  to perform generic operations on objects. Part  of its implementation is simply a convention in  procedure calling style; part is a powerful  language feature, called Flavors, for defining  abstract objects. This chapter attempts to  explain what programming with objects and  with message passing means, the various  means of implementing these in Lisp  Machine Lisp, and when you should use  them. It assumes no prior knowledge of any  other languages.
</description>
<pubDate>Sat, 01 Nov 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5700</guid>
<dc:date>1980-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conclusions from the Commodity Expert Project</title>
<link>https://hdl.handle.net/1721.1/5699</link>
<description>Conclusions from the Commodity Expert Project
Stansfield, James L.
The goal of the commodity expert project was to develop a prototype program that would act as an intelligent assistant to a commodity market analyst. Since expert analysis must deal with very large, yet incomplete, data bases of unreliable facts about a complex world, the project would stringently test the applicability of Artificial Intelligence techniques. After a significant effort, however, I am forced to the conclusion that an intelligent, real-world system of the kind envisioned is currently out of reach. Some of the difficulties were due to the size and complexity of the domain. As its true scale became evident, the available resources progressively appeared less adequate. The representation and reasoning problems that arose were persistently difficult, and fundamental work is needed before the tools will be sufficient to engineer truly intelligent assistants. Despite these difficulties, perhaps even because of them, much can be learned from the project. To assist future applications projects, I explain in this report some of the reasons for the negative result, and also describe some positive ideas that were gained along the way. In doing so, I hope to convey the respect I have developed for the complexity of real-world domains, and the difficulty of describing the ways experts deal with them.
</description>
<pubDate>Sat, 01 Nov 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5699</guid>
<dc:date>1980-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Interpretation of Biological Motion</title>
<link>https://hdl.handle.net/1721.1/5698</link>
<description>The Interpretation of Biological Motion
Hoffman, D.D.; Flinchbaugh, B.E.
The term biological motion has been coined by G. Johansson (1973) to refer to the ambulatory patterns of terrestrial bipeds and quadrupeds. In this paper a computational theory of the visual perception of biological motion is proposed. The specific problem addressed is how the three-dimensional structure and motions of animal limbs may be computed from the two-dimensional motions of their projected images. It is noted that the limbs of animals typically do not move arbitrarily during ambulation. Rather, for anatomical reasons, they typically move in single planes for extended periods of time. This simple anatomical constraint is exploited as the basis for utilizing a "planarity assumption" in the interpretation of biological motion. The analysis proposed is: (1) divide the image into groups of two or three elements each; (2) test each group for pairwise-rigid planar motion; (3) combine the results from (2). Fundamental to the analysis are two 'structure from planar motion' propositions. The first states that the structure and motion of two points rigidly linked and rotating in a plane is recoverable from three orthographic projections. The second states that the structure and motion of three points forming two hinged rods constrained to move in a plane is recoverable from two orthographic projections. The psychological relevance of the analysis and possible interactions with top-down recognition processes are discussed.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5698</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Curve of Least Energy</title>
<link>https://hdl.handle.net/1721.1/5697</link>
<description>The Curve of Least Energy
Horn, B.K.P.
Here we search for the curve which has the  smallest integral of the square of curvature,  while passing through two given points with  given orientation. This is the true shape of a  spline used in lofting. In computer-aided  design, curves have been sought which  maximize "smoothness". The curve  discussed here is the one arising in this way  from a commonly used measure of  smoothness. The human visual system may  use such a curve when it constructs a  subjective contour.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5697</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>GPRINT - A LISP Pretty Printer Providing Extensive User Format-Control Mechanism</title>
<link>https://hdl.handle.net/1721.1/5696</link>
<description>GPRINT - A LISP Pretty Printer Providing Extensive User Format-Control Mechanism
Waters, Richard C.
A pretty printer is presented which makes it  easy for a user to control the format of the  output produced. The printer can be used as a  general mechanism for printing data  structures as well as programs. It is divided  into two parts: a set of formatting functions,  and an output routine. Each formatting  function creates a sequence of directions  which specify how an object is to be formatted  if it can fit on one line and how it is to be  formatted if it must be broken up across  multiple lines. Based on the line length  available, the output routine decides what  structures have to be broken up across  multiple lines and produces the actual output  following the directions created by the  formatting functions. The directions passed  from the formatting functions to the output  routine form a well defined interface: a  language for specifying formatting options.  Three levels of user format-control are  provided. A simple template mechanism  makes it easy for a user to control certain  aspects of the format produced. A user can  exercise much more complete control over  how a particular type of object is formatted by  writing a special formatting function for it. He  can make global changes in format by  modifying the formatting process as a whole.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5696</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards a Better Definition of Transactions</title>
<link>https://hdl.handle.net/1721.1/5695</link>
<description>Towards a Better Definition of Transactions
Kerns, Barbara S.
This paper builds on a technical report written  by Carl Hewitt and Henry Baker called "Actors  and Continuous Functionals". What is called a  "goal-oriented activity" in that paper will be  referred to in this paper as a "transaction".  The word "transaction" brings to mind an  object closer in function to what we wish to  present than does the word "activity". This  memo, therefore, presents the definitions of a  reply and a transaction as given in Hewitt and  Baker's paper and points out some  discrepancies in their definitions. That is, that  the properties of transactions and replies as  they were defined did not correspond with our  intuitions, and thus the definitions should be  changed. The issues of what should  constitute a transaction are discussed, and a  new definition is presented which eliminates  the discrepancies caused by the original  definitions. Some properties of the newly  defined transactions are discussed, and it is  shown that the results of Hewitt and Baker's  paper still hold given the new definitions.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5695</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The SUPDUP Protocol</title>
<link>https://hdl.handle.net/1721.1/5694</link>
<description>The SUPDUP Protocol
Stallman, Richard M.
The SUPDUP protocol provides for login to a  remote system over a network with terminal-independent output, so that only the local  system need know how to handle the user's  terminal. It offers facilities for graphics and for  local assistance to remote text editors. This  memo contains a complete description of the  SUPDUP protocol in fullest possible detail.
</description>
<pubDate>Fri, 01 Jul 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5694</guid>
<dc:date>1983-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Scientific Community Metaphor</title>
<link>https://hdl.handle.net/1721.1/5693</link>
<description>The Scientific Community Metaphor
Kornfeld, William A.; Hewitt, Carl
Scientific communities have proven to be extremely successful at solving problems. They are inherently parallel systems and their macroscopic nature makes them amenable to careful study. In this paper the character of scientific research is examined drawing on sources in the philosophy and history of science. We maintain that the success of scientific research depends critically on its concurrency and pluralism. A variant of the language Ether is developed that embodies notions of concurrency necessary to emulate some of the problem solving behavior of scientific communities. Capabilities of scientific communities are discussed in parallel with simplified models of these capabilities in this language.
</description>
<pubDate>Thu, 01 Jan 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5693</guid>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Learning</title>
<link>https://hdl.handle.net/1721.1/5692</link>
<description>Natural Learning
Miller, Laurence
This memo reports the results of a case study  into how children learn in the absence of  explicit teaching. The three subjects, an eight  year old, a ten year old and a thirteen year old  were observed in both of two experimental  micro-worlds. The first of these micro-worlds,  called the Chemicals World, included a large  table, a collection of laboratory and household  chemicals, and apparatus for conducting  experiments with chemicals; the second,  called the Mork and Mindy World included a  collection of video taped episodes of the  television series Mork and Mindy, a video-tape  machine and experimenter with whom the  subjects could discuss the episodes. The  main result of the study is a theory of how  children's interests interact with knowledge  embodied in their environment causing them  to learn new powerful ideas. An early version  of this theory is presented in chapter five.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5692</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Marr's Approach to Vision</title>
<link>https://hdl.handle.net/1721.1/5691</link>
<description>Marr's Approach to Vision
Poggio, Tomaso
In the last seven years a new computational  approach has led to promising advances in  the understanding of biological visual  perception. The foundations of the approach  are largely due to the work of a single man,  David Marr at M.I.T. Now, after his death in  Boston on November 17th 1980, research in  vision will not be the same for the growing  number of those who are following his lead.
</description>
<pubDate>Sat, 01 Aug 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5691</guid>
<dc:date>1981-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Implicit Constraints of the Primal Sketch</title>
<link>https://hdl.handle.net/1721.1/5690</link>
<description>The Implicit Constraints of the Primal Sketch
Grimson, W.E.L.
Computational theories of structure-from-motion and stereo vision only specify the  computation of three-dimensional surface  information at points in the image at which the  irradiance changes. Yet, the visual perception  is clearly of complete surfaces, and this  perception is consistent for different  observers. Since mathematically the class of  surfaces which could pass through the known  boundary points provided by the stereo  system is infinite and contains widely varying  surfaces, the visual system must incorporate  some additional constraints besides the  known points in order to compute the  complete surface. Using the image irradiance  equation, we derive the surface consistency  constraint, informally referred to as no news is  good news. The constraint implies that the  surface must agree with the information from  stereo or motion correspondence, and not  vary radically between these points. An explicit  form of this surface consistency constraint is  derived, by relating the probability of a zero-crossing in a region of the image to the  variation in the local surface orientation of the  surface, provided that the surface albedo and  the illumination are roughly constant. The  surface consistency constraint can be used to  derive an algorithm for reconstructing the  surface that "best" fits the surface information  provided by stereo or motion correspondence.
</description>
<pubDate>Thu, 01 Oct 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5690</guid>
<dc:date>1981-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Passive Navigation</title>
<link>https://hdl.handle.net/1721.1/5689</link>
<description>Passive Navigation
Bruss, Anna R.; Horn, Berthold K.P.
A method is proposed for determining the  motion of a body relative to a fixed  environment using the changing image seen  by a camera attached to the body. The optical  flow in the image plane is the input, while the  instantaneous rotation and translation of the  body are the output. If optical flow could be  determined precisely, it would only have to be  known at a few places to compute the  parameters of the motion. In practice,  however, the measured optical flow will be  somewhat inaccurate. It is therefore  advantageous to consider methods which  use as much of the available information as  possible. We employ a least-squares  approach which minimizes some measure of  the discrepancy between the measured flow  and that predicted from the computed motion  parameters. Several different error norms are  investigated. In general, our algorithm leads  to a system of nonlinear equations from which  the motion parameters may be computed  numerically. However, in the special cases  where the motion of the camera is purely  translational or purely rotational, use of the  appropriate norm leads to a system of  equations from which these parameters can  be determined in closed form.
</description>
<pubDate>Sun, 01 Nov 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5689</guid>
<dc:date>1981-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Workshop on the Design and Control of Dextrous Hands</title>
<link>https://hdl.handle.net/1721.1/5688</link>
<description>Workshop on the Design and Control of Dextrous Hands
Hollerbach, John M.
The Workshop for the Design and Control of Dexterous Hands was held at the MIT Artificial Intelligence Laboratory on November 5-6, 1981. Outside experts were brought together to discuss four topics: kinematics of hands, actuation and materials, touch sensing and control. This report summarizes the discussions of the participants and attempts to identify a consensus on applications, mechanical design, and control.
</description>
<pubDate>Thu, 01 Apr 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5688</guid>
<dc:date>1982-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>How to Play Twenty Questions with Nature and Win</title>
<link>https://hdl.handle.net/1721.1/5687</link>
<description>How to Play Twenty Questions with Nature and Win
Richards, Whitman
The 20 Questions Game played by children has an impressive record of rapidly guessing an arbitrarily selected object with rather few, well-chosen questions. This same strategy can be used to drive the perceptual process, likewise beginning the search with the intent of deciding whether the object is Animal-Vegetable-or-Mineral. For a perceptual system, however, several simple questions are required even to make this first judgment as to the Kingdom to which the object belongs. Nevertheless, the answers to these first simple questions, or their modular outputs, provide a rich data base which can serve to classify objects or events in much more detail than one might expect, thanks to constraints and laws imposed upon natural processes and things. The questions, then, suggest a useful set of primitive modules for initializing perception.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5687</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Lightness Scale from Image Intensity Distributions</title>
<link>https://hdl.handle.net/1721.1/5686</link>
<description>A Lightness Scale from Image Intensity Distributions
Richards, W.A.
A lightness scale is derived from a theoretical  estimate of the probability distribution of  image intensities for natural scenes. The  derived image intensity distribution considers  three factors: reflectance, surface orientation  and illumination, and surface texture (or  roughness). The convolution of the effects of  these three factors yields the theoretical  probability distribution of image intensities. A  useful lightness scale should be the integral  of this probability density function for then  equal intervals along the scale are equally  probable and carry equal information. The  result is a scale similar to that used in  photography, or by the nervous system as its  transfer function.
</description>
<pubDate>Sat, 01 Aug 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5686</guid>
<dc:date>1981-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Semantics of Inheritance and Attributions in the Description System Omega</title>
<link>https://hdl.handle.net/1721.1/5685</link>
<description>Semantics of Inheritance and Attributions in the Description System Omega
Attardi, Giuseppe; Simi, Maria
Omega is a description system for knowledge  embedding which incorporates some of the  attractive modes of expression in common  sense reasoning such as descriptions,  inheritance, quantification, negation,  attributions and multiple viewpoints. A  formalization of Omega is developed as a  framework for investigations on the  foundations of knowledge representation. As  a logic, Omega achieves the goal of an  intuitively sound and consistent theory of  classes which permits unrestricted  abstraction within a powerful logic system.  Description abstraction is the construct  provided in Omega corresponding to set  abstraction. Attributions and inheritance are  the basic mechanisms for knowledge  structuring. To achieve flexibility and  incrementality, the language allows  descriptions with an arbitrary number of  attributions, rather than predicates with a fixed  number of arguments as in predicate logic.  This requires a peculiar interpretation for  instance descriptions, which in turn provides  insights into the use and meaning of several  kinds of attributions. The formal treatment  consists in presenting semantic models for  Omega, deriving an axiomatization and  establishing the consistency and  completeness of the logic.
</description>
<pubDate>Sat, 01 Aug 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5685</guid>
<dc:date>1981-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Planning: A Configuration Space Approach</title>
<link>https://hdl.handle.net/1721.1/5684</link>
<description>Spatial Planning: A Configuration Space Approach
Lozano-Perez, Tomas
This paper presents algorithms for computing constraints on the position of an object due to the presence of obstacles. This problem arises in applications which require choosing how to arrange or move objects among other objects. The basis of the approach presented here is to characterize the position and orientation of the object of interest as a single point in a Configuration Space, in which each coordinate represents a degree of freedom in the position and/or orientation of the object. The configurations forbidden to this object, due to the presence of obstacles, can then be characterized as regions in the Configuration Space. The paper presents algorithms for computing these Configuration Space obstacles when the objects and obstacles are polygons or polyhedra. An approximation technique for high-dimensional Configuration Space obstacles, based on projections of obstacle slices, is described.
</description>
<pubDate>Mon, 01 Dec 1980 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5684</guid>
<dc:date>1980-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reasoning Utility Package User's Manual, Version One</title>
<link>https://hdl.handle.net/1721.1/5683</link>
<description>Reasoning Utility Package User's Manual, Version One
McAllester, David Allen
RUP (Reasoning Utility Package) is a collection of procedures for performing various computations relevant to automated reasoning. RUP contains a truth maintenance system (TMS) which can be used to perform simple propositional deduction (unit clause resolution), to record justifications, to track down underlying assumptions and to perform incremental modifications when premises are changed. This TMS can be used with an automatic premise controller which automatically retracts "assumptions" before "solid facts" when contradictions arise and searches for the most solid proof of an assertion. RUP also contains a procedure for efficiently computing all the relevant consequences of any set of equalities between ground terms. A related utility computes "substitution simplifications" of terms under an arbitrary set of unquantified equalities and a user-defined simplicity order. RUP also contains demon-writing macros which allow one to write PLANNER-like demons that trigger on various types of events in the data base. Finally there is a utility for reasoning about partial orders and arbitrary transitive relations. In writing all of these utilities an attempt has been made to provide a maximally flexible environment for automated reasoning.
</description>
<pubDate>Thu, 01 Apr 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5683</guid>
<dc:date>1982-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Local Front End for Remote Editing</title>
<link>https://hdl.handle.net/1721.1/5682</link>
<description>A Local Front End for Remote Editing
Stallman, Richard M.
The Local Editing Protocol allows a local  programmable terminal to execute the most  common editing commands on behalf of an  extensible text editor on a remote system,  thus greatly improving speed of response  without reducing flexibility. The Line Saving  Protocol allows the local system to save text  which is not displayed, and display it again  later when it is needed, under the control of  the remote editor. Both protocols are  substantially system and editor independent.
</description>
<pubDate>Mon, 01 Feb 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5682</guid>
<dc:date>1982-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>CARTOON: A Biologically Motivated Edge Detection Algorithm</title>
<link>https://hdl.handle.net/1721.1/5681</link>
<description>CARTOON: A Biologically Motivated Edge Detection Algorithm
Richards, W.; Nishihara, H.K.; Dawson, B.
Caricatures demonstrate that only a few  significant "edges" need to be captured to  convey the meaning of a complex pattern of  image intensities. The most important of  these "edges" are image intensity changes  arising from surface discontinuities or  occluding boundaries. The CARTOON  algorithm is an attempt to locate these special  intensity changes using a modification of the  zero-crossing coincidence scheme  suggested by Marr and Hildreth (1980).
</description>
<pubDate>Tue, 01 Jun 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5681</guid>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nature Abhors an Empty Vacuum</title>
<link>https://hdl.handle.net/1721.1/5680</link>
<description>Nature Abhors an Empty Vacuum
Minsky, Marvin
Imagine a crystalline world of tiny, discrete "cells", each knowing only what its nearest neighbors do. Each volume of space contains only a finite amount of information, because space and time come in discrete units. In such a universe, we'll construct analogs of particles and fields, and ask what it would mean for these to satisfy constraints like conservation of momentum. In each case classical mechanics will break down, at scales both small and large, and strange phenomena emerge: a maximal velocity, a slowing of internal clocks, a bound on simultaneous measurement, and quantum-like effects in very weak or intense fields. This fantasy about conservation in cellular arrays was inspired by this first conference on computation and physics, a subject destined to produce profound and powerful theories. I wish this essay could include one such; alas, it only portrays images of what such theories might be like. The "cellular array" idea is popular already in such forms as Ising models, renormalization theories, the "Game of Life" and Von Neumann's work on self-reproducing machines. This essay exploits many unpublished ideas I got from Edward Fredkin. The ideas about field and particle are original; Richard Feynman persuaded me to consider fields instead of forces, but is not responsible for my compromise on potential surfaces. I also thank Danny Hillis and Richard Stallman for other ideas.
</description>
<pubDate>Sat, 01 Aug 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5680</guid>
<dc:date>1981-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Connection Machine</title>
<link>https://hdl.handle.net/1721.1/5679</link>
<description>The Connection Machine
Hillis, W. Daniel
This paper describes the connection memory, a machine for concurrently manipulating knowledge stored in semantic networks. We need the connection memory because conventional serial computers cannot move through such networks fast enough. The connection memory sidesteps the problem by providing processing power proportional to the size of the network. Each node and link in the network has its own simple processor. These connect to form a uniform locally-connected network of perhaps a million processor/memory cells.
</description>
<pubDate>Tue, 01 Sep 1981 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5679</guid>
<dc:date>1981-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-Level Reconstruction of Visual Surfaces: Variational Principles and Finite Element Representations</title>
<link>https://hdl.handle.net/1721.1/5678</link>
<description>Multi-Level Reconstruction of Visual Surfaces: Variational Principles and Finite Element Representations
Terzopoulos, Demetri
Computational modules early in the human vision system typically generate sparse information about the shapes of visible surfaces in the scene. Moreover, visual processes such as stereopsis can provide such information at a number of levels spanning a range of resolutions. In this paper, we extend this multi-level structure to encompass the subsequent task of reconstructing full surface descriptions from the sparse information. The mathematical development proceeds in three steps. First, the surface most consistent with the sparse constraints is characterized as the equilibrium state of a thin flexible plate. Second, local, finite element representations of surfaces are introduced and, by applying the finite element method, the continuous variational principle is transformed into a discrete problem in the form of a large system of linear algebraic equations whose solution is computable by local-support, cooperative mechanisms. Third, to exploit the information available at each level of resolution, a hierarchy of discrete problems is formulated and a highly efficient multi-level algorithm, involving both intra-level relaxation processes and bidirectional inter-level local interpolation processes, is applied to their simultaneous solution. Examples of the generation of hierarchies of surface representations from stereo constraints are given. Finally, the basic surface approximation problem is revisited in a broader mathematical context whose implications are of relevance to vision.
</description>
<pubDate>Thu, 01 Apr 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5678</guid>
<dc:date>1982-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expert Systems: Where Are We? And Where Do We Go from Here?</title>
<link>https://hdl.handle.net/1721.1/5677</link>
<description>Expert Systems: Where Are We? And Where Do We Go from Here?
Davis, Randall
Work on Expert Systems has received  extensive attention recently, prompting  growing interest in a range of environments.  Much has been made of the basic concept  and the rule-based system approach typically  used to construct the programs. Perhaps this  is a good time then to review what we know,  assess the current prospects, and suggest  directions appropriate for the next steps of  basic research. I'd like to do that today and  propose to do it by taking you on a journey of  sorts, a metaphorical trip through the State of  the Art of Expert Systems. We'll wander about  the landscape, ranging from the familiar  territory of the Land of Accepted Wisdom, to  the vast unknowns at the Frontiers of  Knowledge. I guarantee we'll all return safely,  so come along...
</description>
<pubDate>Tue, 01 Jun 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5677</guid>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>CAULDRONS: An Abstraction for Concurrent Problem Solving</title>
<link>https://hdl.handle.net/1721.1/5676</link>
<description>CAULDRONS: An Abstraction for Concurrent Problem Solving
Haase, Ken
This research extends a tradition of distributed theories of mind into the implementation of a distributed problem solver. In this problem solver a number of ideas from Minsky's Society of Mind are implemented and are found to provide powerful abstractions for the programming of distributed systems. These abstractions are the cauldron, a mechanism for instantiating reasoning contexts; the frame, a way of modularly describing those contexts; and the goal-node, a mechanism for bringing a particular context to bear on a specific task. The implementation of these abstractions and the distributed problem solver in which they run is described, accompanied by examples of their application to various domains.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5676</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Primer for the Act-1 Language</title>
<link>https://hdl.handle.net/1721.1/5675</link>
<description>A Primer for the Act-1 Language
Theriault, Daniel G.
This paper describes the current design for  the Act-1 computer programming language  and describes the Actor computational model,  which the language was designed to support.  It provides a perspective from which to view  the language, with respect to existing  computer language systems and to the  computer system and environment under  development for support of the language. The  language is informally introduced in a tutorial  fashion and demonstrated through examples.  A programming strategy for the language is  described, further illustrating its use.
</description>
<pubDate>Thu, 01 Apr 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5675</guid>
<dc:date>1982-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Relation Between Proximity and Brightness Similarity in Dot Patterns</title>
<link>https://hdl.handle.net/1721.1/5672</link>
<description>The Relation Between Proximity and Brightness Similarity in Dot Patterns
Zucker, Steven W.; Stevens, Kent A.; Sander, Peter T.
The Gestalt studies demonstrated the  tendency to visually organize dots on the  basis of similarity, proximity, and global  properties such as closure, good  continuation, and symmetry. The particular  organization imposed on a collection of dots  is thus determined by many factors, some  local, some global. We discuss  computational reasons for expecting the initial  stages of grouping to be achieved by  processes with purely local support. In the  case of dot patterns, the expectation is that  neighboring dots are grouped on the basis of  proximity and similarity of contrast, by  processes that are independent of the overall  organization and the various global factors.  We describe experiments that suggest a  purely local relationship between proximity  and brightness similarity in perceptual  grouping.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5672</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Zero-Crossings and Spatiotemporal Interpretation in Vision</title>
<link>https://hdl.handle.net/1721.1/5671</link>
<description>Zero-Crossings and Spatiotemporal Interpretation in Vision
Poggio, Tomaso; Nielsen, Kenneth; Nishihara, Keith
We will briefly outline a computational theory of the first stages of human vision according to which (a) the retinal image is filtered by a set of centre-surround receptive fields (of about 5 different spatial sizes) which are approximately bandpass in spatial frequency and (b) zero-crossings are detected independently in the output of each of these channels. Zero-crossings in each channel are then a set of discrete symbols which may be used for later processing such as contour extraction and stereopsis. A formulation of Logan's zero-crossing result is proved for the case of Fourier polynomials, and an extension of Logan's theorem to 2-dimensional functions is also proved. Within this framework, we shall describe an experimental and theoretical approach (developed by one of us with M. Fahle) to the problem of visual acuity and hyperacuity of human vision. The positional accuracy achieved, for instance, in reading a vernier is astonishingly high, corresponding to a fraction of the spacing between adjacent photoreceptors in the fovea. Stroboscopic presentation of a moving object can be interpolated by our visual system into the perception of continuous motion; and this "spatio-temporal" interpolation also can be very accurate. It is suggested that the known spatiotemporal properties of the channels envisaged by the theory of visual processing outlined above implement an interpolation scheme which can explain human vernier acuity for moving targets.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5671</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solving the Find-Path Problem by Representing Free Space as Generalized Cones</title>
<link>https://hdl.handle.net/1721.1/5670</link>
<description>Solving the Find-Path Problem by Representing Free Space as Generalized Cones
Brooks, Rodney A.
Free space is represented as a union of (possibly overlapping) generalized cones. An algorithm is presented which efficiently finds good collision-free paths for convex polygonal bodies through space littered with obstacle polygons. The paths are good in the sense that the distance of closest approach to an obstacle over the path is usually far from minimal over the class of topologically equivalent collision-free paths. The algorithm is based on characterizing the volume swept by a body as it is translated and rotated as a generalized cone, and on determining under what conditions one generalized cone is a subset of another.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5670</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Physical Descriptions from Functional Definitions, Examples, and Precedents</title>
<link>https://hdl.handle.net/1721.1/5669</link>
<description>Learning Physical Descriptions from Functional Definitions, Examples, and Precedents
Winston, Patrick H.; Binford, Thomas O.; Katz, Boris; Lowry, Michael
It is too hard to tell vision systems what things  look like. It is easier to talk about purpose and  what things are for. Consequently, we want  vision systems to use functional descriptions  to identify things when necessary, and we  want them to learn physical descriptions for  themselves, when possible. This paper  describes a theory that explains how to make  such systems work. The theory is a synthesis  of two sets of ideas: ideas about learning  from precedents and exercises developed at  MIT and ideas about physical description  developed at Stanford. The strength of the  synthesis is illustrated by way of  representative experiments. All of these  experiments have been performed with an  implemented system.
</description>
<pubDate>Mon, 01 Nov 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5669</guid>
<dc:date>1982-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning by Augmenting Rules and Accumulating Censors</title>
<link>https://hdl.handle.net/1721.1/5668</link>
<description>Learning by Augmenting Rules and Accumulating Censors
Winston, Patrick H.
This paper is a synthesis of several sets of ideas: ideas about learning from precedents and exercises, ideas about learning using near misses, ideas about generalizing if-then rules, and ideas about using censors to prevent procedure misapplication. The synthesis enables two extensions to an implemented system that solves problems involving precedents and exercises and that generates if-then rules as a byproduct. These extensions are as follows: If-then rules are augmented by unless conditions, creating augmented if-then rules. An augmented if-then rule is blocked whenever facts in hand directly demonstrate the truth of an unless condition; when an augmented if-then rule is used this way to block another rule, it is called a censor. Like ordinary augmented if-then rules, censors can be learned. Definition rules are introduced that facilitate graceful refinement. The definition rules are also augmented if-then rules. They work by virtue of unless entries that capture certain nuances of meaning different from those expressible by necessary conditions. Like ordinary augmented if-then rules, definition rules can be learned. The strength of the ideas is illustrated by way of representative experiments. All of these experiments have been performed with an implemented system.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5668</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Algorithms</title>
<link>https://hdl.handle.net/1721.1/5667</link>
<description>Visual Algorithms
Poggio, Tomaso
Nonlinear, local and highly parallel algorithms can perform several simple but important visual computations. Specific classes of algorithms can be considered in an abstract way. I study here the class of polynomial algorithms to exemplify some of the important issues for visual processing, like linear vs. nonlinear and local vs. global. Polynomial algorithms are a natural extension of Perceptrons to time-dependent grey-level images. Although they share most of the limitations of Perceptrons, they are powerful parallel computational devices. Several of their properties are characterized, especially (a) their equivalence with Perceptrons for geometrical figures and (b) the synthesis of non-linear algorithms (mappings) via associative learning. Finally, the paper considers how algorithms of this type could be implemented in nervous hardware, in terms of synaptic interactions strategically located in a dendritic tree.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5667</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Supporting Organizational Problem Solving with a Workstation</title>
<link>https://hdl.handle.net/1721.1/5666</link>
<description>Supporting Organizational Problem Solving with a Workstation
Barber, Gerald
This paper describes an approach to supporting work in the office. Using and extending ideas from the field of Artificial Intelligence (AI) we describe office work as a problem solving activity. A knowledge embedding language called Omega is used to embed knowledge of the organization into an office worker's workstation in order to support the office worker in his or her problem solving. A particular approach to reasoning about change and contradiction is discussed. This approach uses Omega's viewpoint mechanism, a general contradiction handling facility. Unlike other knowledge representation systems, when a contradiction is reached the reasons for the contradiction can be analyzed by the deduction mechanism without having to resort to a backtracking mechanism. The viewpoint mechanism is the heart of the Problem Solving Support Paradigm, which supplements the classical AI view of problem solving. Office workers are supported using the Problem Solving Support Paradigm. An example is presented where Omega's facilities are used to support an office worker's problem solving activities. The example illustrates the use of viewpoints and of Omega's capabilities to reason about its own reasoning process.
</description>
<pubDate>Thu, 01 Jul 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5666</guid>
<dc:date>1982-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Subdivision Algorithm in Configuration Space for Findpath with Rotation</title>
<link>https://hdl.handle.net/1721.1/5665</link>
<description>A Subdivision Algorithm in Configuration Space for Findpath with Rotation
Brooks, Rodney A.; Lozano-Perez, Tomas
A hierarchical representation for configuration space is presented, along with an algorithm for searching that space for collision-free paths. The details of the algorithm are presented for polygonal obstacles and a moving object with two translational and one rotational degree of freedom.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5665</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computational Problem of Motor Control</title>
<link>https://hdl.handle.net/1721.1/5664</link>
<description>The Computational Problem of Motor Control
Poggio, Tomaso; Rosser, B.L.
We review some computational aspects of motor control. The problem of trajectory control is phrased in terms of an efficient representation of the operator connecting joint angles to joint torques. Efficient look-up table solutions of the inverse dynamics are related to some results on the decomposition of functions of many variables. In a biological perspective, we emphasize the importance of the constraints coming from the properties of the biological hardware for determining the solution to the inverse dynamics problem.
</description>
<pubDate>Sun, 01 May 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5664</guid>
<dc:date>1983-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computers, Brains, and the Control of Movement</title>
<link>https://hdl.handle.net/1721.1/5663</link>
<description>Computers, Brains, and the Control of Movement
Hollerbach, John M.
Many of the problems associated with the  planning and execution of human arm  trajectories are illuminated by planning and  control strategies which have been developed  for robotic manipulators. This comparison  may provide explanations for the  predominance of straight line trajectories in  human reaching and pointing movements, the  role of feedback during arm movement, as  well as plausible compensatory mechanisms  for arm dynamics.
</description>
<pubDate>Tue, 01 Jun 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5663</guid>
<dc:date>1982-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Maximizing Rigidity: The Incremental Recovery of 3-D Structure from Rigid and Rubbery Motion</title>
<link>https://hdl.handle.net/1721.1/5662</link>
<description>Maximizing Rigidity: The Incremental Recovery of 3-D Structure from Rigid and Rubbery Motion
Ullman, Shimon
The human visual system can extract 3-D shape information of unfamiliar moving objects from their projected transformations. Computational studies of this capacity have established that 3-D shape can be extracted correctly from a brief presentation, provided that the moving objects are rigid. The human visual system requires a longer temporal extension, but it can cope with considerable deviations from rigidity. It is shown how the 3-D structure of rigid and non-rigid objects can be recovered by maintaining an internal model of the viewed object and modifying it at each instant by the minimal non-rigid change that is sufficient to account for the observed transformation. The results of applying this incremental rigidity scheme to rigid and non-rigid objects in motion are described and compared with human perceptions.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5662</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robot Programming</title>
<link>https://hdl.handle.net/1721.1/5661</link>
<description>Robot Programming
Lozano-Perez, Tomas
The industrial robot's principal advantage over  traditional automation is programmability.  Robots can perform arbitrary sequences of  pre-stored motions or of motions computed  as functions of sensory input. This paper  reviews requirements for and developments  in robot programming systems. The key  requirements for robot programming systems  examined in the paper are in the areas of  sensing, world modeling, motion  specification, flow of control, and  programming support. Existing and proposed  robot programming systems fall into three  broad categories: guiding systems in which  the user leads a robot through the motions to  be performed, robot-level programming  systems in which the user writes a computer  program specifying motion and sensing, and  task-level programming systems in which the  user specifies operations by their desired  effect on objects. A representative sample of  systems in each of these categories is  surveyed in the paper.
</description>
<pubDate>Wed, 01 Dec 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5661</guid>
<dc:date>1982-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parsing and Generating English Using Commutative Transformations</title>
<link>https://hdl.handle.net/1721.1/5660</link>
<description>Parsing and Generating English Using Commutative Transformations
Katz, Boris; Winston, Patrick H.
This paper is about an implemented natural language interface that translates from English into semantic net relations and from semantic net relations back into English. The parser and companion generator were implemented for two reasons: (a) to enable experimental work in support of a theory of learning by analogy; (b) to demonstrate the viability of a theory of parsing and generation built on commutative transformations. The learning theory was shaped to a great degree by experiments that would have been extraordinarily tedious to perform without the English interface with which the experimental data base was prepared, revised, and revised again. Inasmuch as current work on the learning theory is moving toward a tenfold increase in data-base size, the English interface is moving from a facilitating role to an enabling one. The parsing and generation theory has two particularly important features: (a) the same grammar is used for both parsing and generation; (b) the transformations of the grammar are commutative. The language generation procedure converts a semantic network fragment into kernel frames, chooses the set of transformations that should be performed upon each frame, executes the specified transformations, combines the altered kernels into a sentence, performs a pronominalization process, and finally produces the appropriate English word string. Parsing is essentially the reverse of generation. The first step in the parsing process is splitting a given sentence into a set of kernel clauses along with a description of how those clauses are hierarchically related to each other. The clauses are used to produce matrix and embedded kernel frames, which in turn supply arguments to relation-creating functions. The evaluation of the relation-creating functions results in the construction of the semantic net fragments.
</description>
<pubDate>Sat, 01 May 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5660</guid>
<dc:date>1982-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementation of a Theory for Inferring Surface Shape from Contours</title>
<link>https://hdl.handle.net/1721.1/5659</link>
<description>Implementation of a Theory for Inferring Surface Shape from Contours
Stevens, Kent A.
Human vision is adept at inferring the shape of a surface from the image of curves lying across the surface. The strongest impression of 3-D shape derives from parallel (but not necessarily equally spaced) contours. In [Stevens 1981a] the computational problem of inferring 3-D shape from image configurations is examined, and a theory is given for how the visual system constrains the problem by certain assumptions. The assumptions are three: that neither the viewpoint nor the placement of the physical curves on the surface is misleading, and that the physical curves are lines of curvature across the surface. These assumptions imply that parallel image contours correspond to parallel curves lying across an approximately cylindrical surface. Moreover, lines of curvature on a cylinder are geodesic and planar. These properties provide strong constraint on the local surface orientation. We describe a computational method embodying these geometric constraints that is able to determine the surface orientation even in places where locally it is very weakly constrained, by extrapolating from places where it is strongly constrained. This computation has been implemented, and predicts local surface orientation that closely matches the apparent orientation. Experiments with the implementation support the theory that our visual interpretation of surface shape from contour assumes the image contours correspond to lines of curvature.
</description>
<pubDate>Sun, 01 Aug 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5659</guid>
<dc:date>1982-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symbolic Error Analysis and Robot Planning</title>
<link>https://hdl.handle.net/1721.1/5658</link>
<description>Symbolic Error Analysis and Robot Planning
Brooks, Rodney A.
A program to control a robot manipulator for industrial assembly operations must take into account possible errors in parts placement and tolerances of the parts themselves. Previous approaches to this problem have been to (1) engineer the situation so that the errors are small or (2) let the programmer analyze the errors and take explicit account of them. This paper gives the mathematical underpinnings for building programs (plan checkers) to carry out approach (2) automatically. The plan checker uses a geometric CAD-type database to infer the effects of actions and the propagation of errors. It does this symbolically rather than numerically, so that computations can be reversed and desired resultant tolerances can be used to infer required initial tolerances or the necessity for sensing. The checker modifies plans to include sensing and adds constraints to the plan which ensure that it will succeed. An implemented system is described and results of its execution are presented. The plan checker could be used as part of an automatic planning system or as an aid to a human robot programmer.
</description>
<pubDate>Wed, 01 Sep 1982 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5658</guid>
<dc:date>1982-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Structural Approach to Analogy</title>
<link>https://hdl.handle.net/1721.1/5657</link>
<description>A Structural Approach to Analogy
Mansour, Hormoz
There are multiple sorts of reasoning by analogy between two domains; the one with which we are concerned is a type of contextual analogy. The purpose of this paper is to see whether two domains that look analogous would be analogous in all aspects and contexts. To examine this, we analyse the domain according to different particularities. For each particularity or context we continue the analysis and search for another one within the same domain. In this way we create a kind of structure for the different domains. This sort of analysis is represented by frames that are nested within each other. This paper describes this concept and an implemented system, "MULTI_ANALOG", a limited example of knowledge acquisition, problem solving, and automatic acquisition based on this particular form of analogy, namely structural analogy.
</description>
<pubDate>Tue, 01 Nov 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5657</guid>
<dc:date>1983-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Routines</title>
<link>https://hdl.handle.net/1721.1/5656</link>
<description>Visual Routines
Ullman, Shimon
This paper examines the processing of visual  information beyond the creation of the early  representations. A fundamental requirement  at this level is the capacity to establish visually  abstract shape properties and spatial  relations. This capacity plays a major role in  object recognition, visually guided  manipulation, and more abstract visual  thinking. For the human visual system, the  perception of spatial properties and relations  that are complex from a computational  standpoint, nevertheless often appears  immediate and effortless. This apparent  immediateness and ease of perceiving  spatial relations is, however, deceiving. It  conceals in fact a complex array of processes  highly specialized for the task. The proficiency  of the human system in analyzing spatial  information far surpasses the capacities of  current artificial systems. The study of the  computations that underlie this competence  may therefore lead to the development of new  more efficient processors for the spatial  analysis of visual information. It is suggested  that the perception of spatial relations is  achieved by the application to the base  representations of visual routines that are  composed of sequences of elemental  operations. Routines for different properties  and relations share elemental operations.  Using a fixed set of basic operations, the  visual system can assemble different routines  to extract an unbounded variety of shape  properties and spatial relations. At a more  detailed level, a number of plausible basic  operations are suggested, based primarily  on their potential usefulness, and supported  in part by empirical evidence. The operations  discussed include shifting of the processing  focus, indexing to an odd-man-out location,  bounded activation, boundary tracing, and  marking. The problem of assembling such  elemental operations into meaningful visual  routines is discussed briefly.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5656</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Theorems for Zero-Crossings</title>
<link>https://hdl.handle.net/1721.1/5655</link>
<description>Scaling Theorems for Zero-Crossings
Yuille, A.L.; Poggio, Tomaso A.
We characterize some properties of the zero-crossings of the Laplacian of signals - in particular images - filtered with linear filters, as a function of the scale of the filter (following recent work by A. Witkin, 1983). We prove that in any dimension the only filter that does not create zero-crossings as the scale increases is the Gaussian. This result can be generalized to apply to level-crossings of any linear differential operator: it applies in particular to ridges and ravines in the image density. In the case of the second derivative along the gradient we prove that there is no filter that avoids creation of zero-crossings.
</description>
<pubDate>Wed, 01 Jun 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5655</guid>
<dc:date>1983-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analog "Neuronal" Networks in Early Vision</title>
<link>https://hdl.handle.net/1721.1/5654</link>
<description>Analog "Neuronal" Networks in Early Vision
Koch, Christof; Marroquin, Jose; Yuille, Alan
Many problems in early vision can be formulated in terms of minimizing an energy or cost function. Examples are shape-from-shading, edge detection, motion analysis, structure from motion and surface interpolation (Poggio, Torre and Koch, 1985). It has been shown that all quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical or chemical networks (Poggio and Koch, 1985). In a variety of situations the cost function is non-quadratic, however, for instance in the presence of discontinuities. The use of non-quadratic cost functions raises the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank (1985) have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. In this paper, we show how these networks can be generalized to solve the non-convex energy functionals of early vision. We illustrate this approach by implementing a specific network solving the problem of reconstructing a smooth surface, while preserving its discontinuities, from sparsely sampled data (Geman and Geman, 1984; Marroquin 1984; Terzopoulos 1984). These results suggest a novel computational strategy for solving such problems for both biological and artificial vision systems.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5654</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Issues in Parallel Architecture for Artificial Intelligence</title>
<link>https://hdl.handle.net/1721.1/5653</link>
<description>Design Issues in Parallel Architecture for Artificial Intelligence
Hewitt, Carl; Lieberman, Henry
Development of highly intelligent computers  requires a conceptual foundation that will  overcome the limitations of the von Neumann  architecture. Architectures for such a  foundation should meet the following design  goals: * Address the fundamental  organizational issues of large-scale  parallelism and sharing in a fully integrated  way. This means attention to organizational  principles, as well as hardware and software.  * Serve as an experimental apparatus for  testing large-scale artificial intelligence  systems. * Explore the feasibility of an  architecture based on abstractions, which  serve as natural computational primitives for  parallel processing. Such abstractions should  be logically independent of their software and  hardware host implementations. In this paper  we lay out some of the fundamental design  issues in parallel architectures for Artificial  Intelligence, delineate limitations of previous  parallel architectures, and outline a new  approach that we are pursuing.
</description>
<pubDate>Tue, 01 Nov 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5653</guid>
<dc:date>1983-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Smoothest Velocity Field and Token Matching</title>
<link>https://hdl.handle.net/1721.1/5652</link>
<description>The Smoothest Velocity Field and Token Matching
Yuille, A.L.
This paper presents some mathematical results concerning the measurement of motion of contours. A fundamental problem of motion measurement in general is that the velocity field is not determined uniquely from the changing intensity patterns. Recently Hildreth &amp; Ullman have studied a solution to this problem based on an Extremum Principle [Hildreth (1983), Ullman &amp; Hildreth (1983)]. That is, they formulate the measurement of motion as the computation of the smoothest velocity field consistent with the changing contour. We analyse this Extremum Principle and prove that it is closely related to a matching scheme for motion measurement which matches points on the moving contour that have similar tangent vectors. We then derive necessary and sufficient conditions for the principle to yield the correct velocity field. These results have possible implications for the design of computer vision systems, and for the study of human vision.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5652</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Picking Up an Object from a Pile of Objects</title>
<link>https://hdl.handle.net/1721.1/5651</link>
<description>Picking Up an Object from a Pile of Objects
Ikeuchi, Katsushi; Horn, Berthold K.P.; Nagata, Shigemi; Callahan, Tom; Fein, Oded
This paper describes a hand-eye system we developed to perform the bin-picking task. Two basic tools are employed: the photometric stereo method and the extended Gaussian image. The photometric stereo method generates the surface normal distribution of a scene. The extended Gaussian image allows us to determine the attitude of the object based on the normal distribution. Visual analysis of an image consists of two stages. The first stage segments the image into regions and determines the target region. The photometric stereo system provides the surface normal distribution of the scene. The system segments the scene into isolated regions using the surface normal distribution rather than the brightness distribution. The second stage determines object attitude and position by comparing the surface normal distribution with the extended Gaussian image. Fingers with an LED sensor, mounted on the PUMA arm, can successfully pick an object from a pile based on the information from the vision part.
</description>
<pubDate>Sun, 01 May 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5651</guid>
<dc:date>1983-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning Collision Free Motions for Pick and Place Operations</title>
<link>https://hdl.handle.net/1721.1/5650</link>
<description>Planning Collision Free Motions for Pick and Place Operations
Brooks, Rodney A.
An efficient algorithm which finds collision free  paths for a manipulator with 5 or 6 revolute  joints is described. It solves the problem for  four degree of freedom pick and place  operations. Examples are given of paths  found by the algorithm in tightly cluttered  workspaces. The algorithm first describes  free space in two ways: as freeways for the  hand and payload ensemble and as freeways  for the upperarm. Freeways match volumes  swept out by manipulator motions and can be  "inverted" to find a class of topologically  equivalent path segments. The two freeway  spaces are searched concurrently under  projection of constraints determined by  motion of the forearm.
</description>
<pubDate>Sun, 01 May 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5650</guid>
<dc:date>1983-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing the Roles of Descriptions and Actions in Open Systems</title>
<link>https://hdl.handle.net/1721.1/5649</link>
<description>Analyzing the Roles of Descriptions and Actions in Open Systems
Hewitt, Carl; Jong, Peter de
This paper analyzes relationships between the roles of descriptions and actions in large scale, open ended, geographically distributed, concurrent systems. Rather than attempt to deal with the complexities and ambiguities of currently implemented descriptive languages, we concentrate our analysis on what can be expressed in the underlying frameworks such as the lambda calculus and first order logic. By this means we conclude that descriptions and actions complement one another: neither being sufficient unto itself. This paper provides a basis to begin the analysis of the very subtle relationships that hold between descriptions and actions in Open Systems.
</description>
<pubDate>Fri, 01 Apr 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5649</guid>
<dc:date>1983-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Copycat Project: An Experiment in Nondeterminism and Creative Analogies</title>
<link>https://hdl.handle.net/1721.1/5648</link>
<description>The Copycat Project: An Experiment in Nondeterminism and Creative Analogies
Hofstadter, Douglas
A micro-world is described, in which many analogies involving strikingly different concepts and levels of subtlety can be made. The question "What differentiates the good ones from the bad ones?" is discussed, and then the problem of how to implement a computational model of the human ability to come up with such analogies (and to have a sense for their quality) is considered. A key part of the proposed system, now under development, is its dependence on statistically emergent properties of stochastically interacting "codelets" (small pieces of ready-to-run code created by the system, and selected at random to run with probability proportional to heuristically assigned "urgencies"). Another key element is a network of linked concepts of varying levels of "semanticity", in which activation spreads and indirectly controls the urgencies of new codelets. There is pressure in the system toward maximizing the degree of "semanticity" or "intensionality" of descriptions of structures, but many such pressures, often conflicting, must interact with one another, and compromises must be made. The shifting of (1) perceived boundaries inside structures, (2) descriptive concepts chosen to apply to structures, and (3) features perceived as "salient" or not, is called "slippage". What can slip, and how, are emergent consequences of the interaction of (1) the temporary ("cytoplasmic") structures involved in the analogy with (2) the permanent ("Platonic") concepts and links in the conceptual proximity network, or "slippability network". The architecture of this system is postulated as a general architecture suitable for dealing not only with fluid analogies, but also with other types of abstract perception and categorization tasks, such as musical perception, scientific theorizing, Bongard problems and others.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5648</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Method for Computing Spectral Reflectance</title>
<link>https://hdl.handle.net/1721.1/5647</link>
<description>A Method for Computing Spectral Reflectance
Yuille, A.
Psychophysical experiments show that the perceived colour of an object is relatively independent of the spectrum of the incident illumination and depends only on the surface reflectance. We demonstrate a possible solution to this underdetermined problem by expanding the illumination and surface reflectance in terms of a finite number of basis functions. This yields a number of nonlinear equations for each colour patch. We show that given a sufficient number of surface patches with the same illumination it is possible to solve these equations up to an overall scaling factor. Generalizations to the spatially dependent situation are discussed. We define a method for detecting material changes and illustrate a way of detecting the colour of a material at its boundaries and propagating it inwards.
</description>
<pubDate>Sat, 01 Dec 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5647</guid>
<dc:date>1984-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constructing a Depth Map from Images</title>
<link>https://hdl.handle.net/1721.1/5646</link>
<description>Constructing a Depth Map from Images
Ikeuchi, Katsushi
This paper describes two methods for  constructing a depth map from images. Each  method has two stages. First, one or more  needle maps are determined using a pair of  images. This process employs either the  Marr-Poggio-Grimson stereo and shape-from-shading, or, instead, photometric stereo.  Secondly, a depth map is constructed from  the needle map or needle maps computed by  the first stage. Both methods make use of an  iterative relaxation method to obtain the final  depth map.
</description>
<pubDate>Mon, 01 Aug 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5646</guid>
<dc:date>1983-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Vertical Image Registration in Stereopsis</title>
<link>https://hdl.handle.net/1721.1/5645</link>
<description>Vertical Image Registration in Stereopsis
Nielsen, K.R.K.; Poggio, Tomaso A.
Most computational theories of stereopsis require a registration stage prior to stereo matching to reduce the matching to a one-dimensional search. Even after registration, it is critical that the stereo matching process tolerate some degree of residual misalignment. In this paper, we study with psychophysical techniques the tolerance to vertical disparity in situations in which false targets abound (in random dot stereograms) and eye movements are eliminated. Our results show that small amounts of vertical disparity significantly impair depth discrimination in a forced-choice task. Our main results are: a) vertical disparity of only the central "figure" part of a random dot stereogram can be tolerated up to about 3.5', b) vertical disparity of the "figure + ground" is tolerated up to about 6.5', and c) the performance of the Grimson implementation of the Marr-Poggio stereo matching algorithm for the stereograms of experiment (a) is consistent with the psychophysical results. The algorithm's tolerance to vertical disparity is due exclusively to the spatial averaging of the underlying filters. The algorithm cannot account by itself for the results of experiment (b). Eye movements, which are the principal registration mechanism for human stereopsis, are accurate to within about 7'. Our data suggest that tolerance to this residual vertical disparity is attained by two non-motor mechanisms: 1) the spatial average performed by the receptive fields that filter the two images prior to stereo matching, and 2) a non-motor shift mechanism that may be driven at least in part by monocular cues.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5645</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Smoothed Local Symmetries and Their Implementation</title>
<link>https://hdl.handle.net/1721.1/5644</link>
<description>Smoothed Local Symmetries and Their Implementation
Brady, Michael; Asada, Haruo
We introduce a novel representation of two-dimensional shape that we call smoothed local symmetries (SLS). Smoothed local symmetries represent both the bounding contour of a shape fragment and the region that it occupies. In this paper we develop the main features of the SLS representation and describe an implemented algorithm that computes it. The performance of the algorithm is illustrated for a set of tools. We conclude by sketching a method for determining the articulation of a shape into subshapes.
</description>
<pubDate>Wed, 01 Feb 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5644</guid>
<dc:date>1984-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence and Robotics</title>
<link>https://hdl.handle.net/1721.1/5643</link>
<description>Artificial Intelligence and Robotics
Brady, Michael
Since Robotics is the field concerned with the  connection of perception to action, Artificial  Intelligence must have a central role in  Robotics if the connection is to be intelligent.  Artificial Intelligence addresses the crucial  questions of: what knowledge is required in  any aspect of thinking; how that knowledge  should be represented; and how that  knowledge should be used. Robotics  challenges AI by forcing it to deal with real  objects in the real world. Techniques and  representations developed for purely cognitive  problems, often in toy domains, do not  necessarily extend to meet the challenge.  Robots combine mechanical effectors,  sensors, and computers. AI has made  significant contributions to each component.  We review AI contributions to perception and  object oriented reasoning. Object-oriented  reasoning includes reasoning about space,  path-planning, uncertainty, and compliance.  We conclude with three examples that  illustrate the kinds of reasoning or problem  solving abilities we would like to endow  robots with and that we believe are worthy  goals of both Robotics and Artificial  Intelligence, being within reach of both.
</description>
<pubDate>Wed, 01 Feb 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5643</guid>
<dc:date>1984-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Picking Parts out of a Bin</title>
<link>https://hdl.handle.net/1721.1/5642</link>
<description>Picking Parts out of a Bin
Horn, Berthold K.P.; Ikeuchi, Katsushi
One of the remaining obstacles to the  widespread application of industrial robots is  their inability to deal with parts that are not  precisely positioned. In the case of manual  assembly, components are often presented in  bins. Current automated systems, on the  other hand, require separate feeders which  present the parts with carefully controlled  position and attitude. Here we show how  results in machine vision provide techniques  for automatically directing a mechanical  manipulator to pick one object at a time out of  a pile. The attitude of the object to be picked  up is determined using a histogram of the  orientations of visible surface patches.  Surface orientation, in turn, is determined  using photometric stereo applied to multiple  images. These images are taken with the  same camera but differing lighting. The  resulting needle map, giving the orientations  of surface patches, is used to create an  orientation histogram which is a discrete  approximation to the extended Gaussian  image. This can be matched against a  synthetic orientation histogram obtained from  prototypical models of the objects to be  manipulated. Such models may be obtained  from computer aided design (CAD)  databases. The method thus requires that the  shape of the objects be described, but it is not  restricted to particular types of objects.
</description>
<pubDate>Sat, 01 Oct 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5642</guid>
<dc:date>1983-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computations Underlying the Measurement of Visual Motion</title>
<link>https://hdl.handle.net/1721.1/5641</link>
<description>Computations Underlying the Measurement of Visual Motion
Hildreth, Ellen C.
The organization of movement in a changing image provides a valuable source of information for analyzing the environment in terms of objects, their motion in space, and their three-dimensional structure. This movement may be represented by a two-dimensional velocity field that assigns a direction and magnitude of velocity to elements in the image. This paper presents a method for computing the velocity field, with three main components. First, initial measurements of motion in the image take place at the location of significant changes, which give rise to zero-crossings in the output of the convolution of the image with a ∇²G operator. The initial motion measurements provide the component of velocity in the direction perpendicular to the local orientation of the zero-crossing contours. Second, these initial measurements are integrated along contours to compute the two-dimensional velocity field. Third, an additional constraint of smoothness of the velocity field, based on the physical constraint that surfaces are generally smooth, allows the computation of a unique velocity field. The details of an algorithm are presented, with results of the algorithm applied to artificial and natural image sequences.
</description>
<pubDate>Thu, 01 Mar 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5641</guid>
<dc:date>1984-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Synthesis of Fine-Motion Strategies for Robots</title>
<link>https://hdl.handle.net/1721.1/5640</link>
<description>Automatic Synthesis of Fine-Motion Strategies for Robots
Lozano-Perez, Tomas; Mason, Matthew T.; Taylor, Russell H.
The use of active compliance enables robots  to carry out tasks in the presence of significant  sensing and control errors. Compliant  motions are quite difficult for humans to  specify, however. Furthermore, robot  programs are quite sensitive to details of  geometry and to error characteristics and  must, therefore, be constructed anew for each  task. These factors motivate the need for  automatic synthesis tools for robot  programming, especially for compliant  motion. This paper describes a formal  approach to the synthesis of compliant motion  strategies from geometric descriptions of  assembly operations and explicit estimates of  errors in sensing and control. A key aspect of  the approach is that it provides correctness  criteria for compliant motion strategies.
</description>
<pubDate>Thu, 01 Dec 1983 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5640</guid>
<dc:date>1983-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Curvature Primal Sketch</title>
<link>https://hdl.handle.net/1721.1/5639</link>
<description>The Curvature Primal Sketch
Asada, Haruo; Brady, Michael
In this paper we introduce a novel representation of the significant changes in curvature along the bounding contour of a planar shape. We call the representation the curvature primal sketch. We describe an implemented algorithm that computes the curvature primal sketch and illustrate its performance on a set of tool shapes. The curvature primal sketch derives its name from the close analogy to the primal sketch representation advocated by Marr for describing significant intensity changes. We define a set of primitive parameterized curvature discontinuities, and derive expressions for their convolutions with the first and second derivatives of a Gaussian. The convolved primitives, sorted according to the scale at which they are detected, provide us with a multi-scaled interpretation of the contour of a shape.
</description>
<pubDate>Wed, 01 Feb 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5639</guid>
<dc:date>1984-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Color Vision: Representing Material Categories</title>
<link>https://hdl.handle.net/1721.1/5638</link>
<description>Color Vision: Representing Material Categories
Rubin, John M.; Richards, W.A.
We argue that one of the early goals of color vision is to distinguish one kind of material from another. Accordingly, we show that when a pair of image regions is such that one region has greater intensity at one wavelength than at another wavelength, and the second region has the opposite property, then the two regions are likely to have arisen from distinct materials in the scene. We call this material change circumstance the 'opposite slope sign condition.' With this criterion as a foundation, we construct a representation of spectral information that facilitates the recognition of material changes. Our theory has implications for both psychology and neurophysiology. In particular, Hering's notion of opponent colors and psychologically unique primaries, and Land's results in two-color projection can be interpreted as different aspects of the visual system's goal of categorizing materials. Also, the theory provides two basic interpretations of the function of double-opponent color cells described by neurophysiologists.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5638</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Experiments with a Feature Based Stereo Algorithm</title>
<link>https://hdl.handle.net/1721.1/5637</link>
<description>Computational Experiments with a Feature Based Stereo Algorithm
Grimson, W. Eric L.
Computational models of the human stereo system can provide insight into general information processing constraints that apply to any stereo system, either artificial or biological. In 1977, Marr and Poggio proposed one such computational model, characterized as matching certain feature points in difference-of-Gaussian filtered images, and using the information obtained by matching coarser resolution representations to restrict the search space for matching finer resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then, a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this article, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications and illustrate its performance on a series of natural images.
</description>
<pubDate>Sun, 01 Jan 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5637</guid>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Edge Detection</title>
<link>https://hdl.handle.net/1721.1/5636</link>
<description>On Edge Detection
Torre, V.; Poggio, Tomaso A
Edge detection is the process that attempts to characterize the intensity changes in the image in terms of the physical processes that have originated them. A critical, intermediate goal of edge detection is the detection and characterization of significant intensity changes. This paper discusses this part of the edge detection problem. To characterize the types of intensity changes, derivatives of different types, and possibly different scales, are needed. Thus we consider this part of edge detection as a problem in numerical differentiation. We show that numerical differentiation of images is an ill-posed problem in the sense of Hadamard. Differentiation needs to be regularized by a filtering operation performed before differentiation. This shows that this part of edge detection consists of two steps, a filtering step and a differentiation step.
</description>
<pubDate>Wed, 01 Aug 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5636</guid>
<dc:date>1984-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Description of Large Systems</title>
<link>https://hdl.handle.net/1721.1/5635</link>
<description>The Description of Large Systems
Pitman, Kent
In this paper we discuss the problems associated with the description and manipulation of large systems when their sources are not maintained as single files. We show why and how tools that address these issues, such as Unix MAKE and Lisp Machine DEFSYSTEM, have evolved. Existing formalisms suffer from the problem that their syntax is not easily separable from their functionality. In programming languages, standard "calling conventions" exist to insulate the caller of a function from the syntactic details of how that function was defined, but until now no such conventions have existed to insulate consumers of program systems from the details of how those systems were specified. We propose a low-level data abstraction which can support notations such as those used by MAKE and DEFSYSTEM without requiring that the introduction of a new notation be accompanied by a completely different set of tools for instantiating or otherwise manipulating the resulting system. Lisp is used for presentation, but the issues are not idiosyncratic to Lisp.
</description>
<pubDate>Sat, 01 Sep 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5635</guid>
<dc:date>1984-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Planning of Minimum-Time Trajectories for Robot Arms</title>
<link>https://hdl.handle.net/1721.1/5634</link>
<description>Planning of Minimum-Time Trajectories for Robot Arms
Sahar, Gideon; Hollerbach, John M.
The planning of minimum-time trajectories for a robot arm has been a longstanding and unsolved problem of considerable interest. We present a general solution to this problem that involves joint-space tesselation, a dynamic time-scaling algorithm, and graph search. The solution incorporates the full dynamics of movement and actuator constraints, and can be easily extended for joint limits and workspace obstacles, but is subject to the particular tesselation scheme used. The results presented show that, in general, the optimal paths are not straight lines, but rather curves in joint-space that utilize the dynamics of the arm and gravity to help in moving the arm faster to its destination. Implementation difficulties due to the tesselation and to combinatorial proliferation of paths are discussed.
</description>
<pubDate>Thu, 01 Nov 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5634</guid>
<dc:date>1984-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multigrid Relaxation Methods and the Analysis of Lightness, Shading and Flow</title>
<link>https://hdl.handle.net/1721.1/5633</link>
<description>Multigrid Relaxation Methods and the Analysis of Lightness, Shading and Flow
Terzopoulos, Demetri
Image analysis problems, posed mathematically as variational principles or as partial differential equations, are amenable to numerical solution by relaxation algorithms that are local, iterative, and often parallel. Although they are well suited structurally for implementation on massively parallel, locally-interconnected computational architectures, such distributed algorithms are seriously handicapped by an inherent inefficiency at propagating constraints between widely separated processing elements. Hence, they converge extremely slowly when confronted by the large representations necessary for low-level vision. Application of multigrid methods can overcome this drawback, as we established in previous work on 3-D surface reconstruction. In this paper, we develop efficient multiresolution iterative algorithms for computing lightness, shape-from-shading, and optical flow, and we evaluate the performance of these algorithms on synthetic images. The multigrid methodology that we describe is broadly applicable in low-level vision. Notably, it is an appealing strategy to use in conjunction with regularization analysis for the efficient solution of a wide range of ill-posed visual reconstruction problems.
</description>
<pubDate>Mon, 01 Oct 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5633</guid>
<dc:date>1984-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Find-Path Problem in the Plane</title>
<link>https://hdl.handle.net/1721.1/5632</link>
<description>The Find-Path Problem in the Plane
Nguyen, Van-Duc
This paper presents a fast heuristic algorithm  for planning collision-free paths of a moving  robot in a cluttered planar workspace. The  algorithm is based on describing the free  space between the obstacles as a network of  linked cones. Cones capture the freeways  and the bottle-necks between the obstacles.  Links capture the connectivity of the free  space. Paths are computed by intersecting  the valid configuration volumes of the moving  robot inside these cones and inside the  regions described by the links.
</description>
<pubDate>Wed, 01 Feb 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5632</guid>
<dc:date>1984-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Routines</title>
<link>https://hdl.handle.net/1721.1/5631</link>
<description>Routines
Agre, Philip E.
Regularities in the world give rise to regularities in the ways in which we deal with the world. That is to say, we fall into routines. I have been studying the phenomenon of routinization, the process by which institutionalized patterns of interaction with the world arise and evolve in everyday life. Underlying this evolution is a dialectical process of internalization. First you build a model of some previously unarticulated emergent aspect of an existing routine. Armed with an incrementally more global view of interaction, you can often formulate an incrementally better informed plan of attack. A routine is not a plan in the sense of the classical planning literature, except in the theoretical limit of this process. I am implementing this theory using running arguments, a technique for writing rule-based programs for intelligent agents. Because a running argument is compiled into TMS networks as it proceeds, incremental changes in the world require only incremental recomputation of the reasoning about what actions to take next. The system supports a style of programming, dialectical argumentation, that has many important properties that recommend it as a substrate for large AI systems. One of these might be called additivity: an agent can modify its reasoning in a class of situations by adducing arguments as to why its previous arguments were incorrect in those cases. Because no side-effects are ever required, reflexive systems based on dialectical argumentation ought to be less fragile than intuition and experience suggest. I outline the remaining implementation problems.
</description>
<pubDate>Wed, 01 May 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5631</guid>
<dc:date>1985-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Surface Primal Sketch</title>
<link>https://hdl.handle.net/1721.1/5630</link>
<description>Toward a Surface Primal Sketch
Ponce, Jean; Brady, Michael
This paper reports progress toward the development of a representation of significant surface changes in dense depth maps. We call the representation the Surface Primal Sketch, by analogy with representations of intensity changes, image structure, and changes in curvature of planar curves. We describe an implemented program that detects, localizes, and symbolically describes: steps, where the surface height function is discontinuous; roofs, where the surface is continuous but the surface normal is discontinuous; smooth joins, where the surface normal is continuous but a principal curvature is discontinuous and changes sign; and shoulders, which consist of two roofs and correspond to a step viewed obliquely. We illustrate the performance of the program on range maps of objects of varying complexity.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5630</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating and Generalizing Models of Visual Objects</title>
<link>https://hdl.handle.net/1721.1/5629</link>
<description>Generating and Generalizing Models of Visual Objects
Connell, Jonathan H.; Brady, Michael
We report on initial experiments with an implemented learning system whose inputs are images of two-dimensional shapes. The system first builds semantic network descriptions of shapes based on Brady's smoothed local symmetry representation. It learns shape models from them using a substantially modified version of Winston's ANALOGY program. A generalization of Gray coding enables the representation to be extended and also allows a single operation, called ablation, to achieve the effects of many standard induction heuristics. The program can learn disjunctions, and can learn concepts using only positive examples. We discuss learnability and the pervasive importance of representational hierarchies.
</description>
<pubDate>Mon, 01 Jul 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5629</guid>
<dc:date>1985-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computing Visible-Surface Representations</title>
<link>https://hdl.handle.net/1721.1/5628</link>
<description>Computing Visible-Surface Representations
Terzopoulos, Demetri
The low-level interpretation of images provides constraints on 3D surface shape at multiple resolutions, but typically only at scattered locations over the visual field. Subsequent visual processing can be facilitated substantially if the scattered shape constraints are immediately transformed into visible-surface representations that unambiguously specify surface shape at every image point. The required transformation is shown to lead to an ill-posed surface reconstruction problem. A well-posed variational principle formulation is obtained by invoking 'controlled continuity,' a physically nonrestrictive (generic) assumption about surfaces which is nonetheless strong enough to guarantee unique solutions. The variational principle, which admits an appealing physical interpretation, is locally discretized by applying the finite element method to a piecewise, finite element representation of surfaces. This forms the mathematical basis of a unified and general framework for computing visible-surface representations. The computational framework unifies formal solutions to the key problems of (i) integrating multiscale constraints on surface depth and orientation from multiple visual sources, (ii) interpolating these scattered constraints into dense, piecewise smooth surfaces, (iii) discovering surface depth and orientation discontinuities and allowing them to restrict interpolation appropriately, and (iv) overcoming the immense computational burden of fine resolution surface reconstruction. An efficient surface reconstruction algorithm is developed. It exploits multiresolution hierarchies of cooperative relaxation processes and is suitable for implementation on massively parallel networks of simple, locally interconnected processors. The algorithm is evaluated empirically in a diversity of applications.
</description>
<pubDate>Fri, 01 Mar 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5628</guid>
<dc:date>1985-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Codon Constraints on Closed 2D Shapes</title>
<link>https://hdl.handle.net/1721.1/5627</link>
<description>Codon Constraints on Closed 2D Shapes
Richards, Whitman; Hoffman, Donald D.
Codons are simple primitives for describing plane curves. They thus are primarily image-based descriptors. Yet they have the power to capture important information about the 3-D world, such as making part boundaries explicit. The codon description is highly redundant (useful for error-correction). This redundancy can be viewed as a constraint on the number of possible codon strings. For smooth closed strings that represent the bounding contour (silhouette) of many smooth 3D objects, the constraints are so strong that sequences containing 6 elements yield only 33 generic shapes, as compared with a possible number of 15,625 combinations.
</description>
<pubDate>Tue, 01 May 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5627</guid>
<dc:date>1984-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Any Dimensional Reconstruction from Hyperplanar Projections</title>
<link>https://hdl.handle.net/1721.1/5626</link>
<description>Any Dimensional Reconstruction from Hyperplanar Projections
Gennert, Michael A.
In this paper we examine the reconstruction of functions of any dimension from hyperplanar projections. This is a generalization of a problem that has generated much interest recently, especially in the field of medical imaging. Computed Axial Tomography (CAT) and Nuclear Magnetic Resonance (NMR) are two medical techniques that fall in this framework. CAT scans measure X-ray absorption along lines through the body, while NMR can measure hydrogen density along planes. Here we will examine reconstruction methods that involve backprojecting the projection data and summing this over the entire region of interest. There are two methods for doing this. One method is to filter the projection data first, and then backproject this filtered data and sum over all projection directions. The other method is to backproject and sum the projection data first, and then filter. The two methods are mathematically equivalent, producing very similar equations. We will derive the reconstruction formulas for both methods for any number of dimensions. We will examine the cases of two and three dimensions, since these are the only ones encountered in practice. The equations are very different for these cases. In general, the equations are very different for even and odd dimensionality. We will discuss why this is so, and show that the equations for even and odd dimensionality are related by the Hilbert Transform.
</description>
<pubDate>Mon, 01 Oct 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5626</guid>
<dc:date>1984-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mobile Robot Localization Using Sonar</title>
<link>https://hdl.handle.net/1721.1/5625</link>
<description>Mobile Robot Localization Using Sonar
Drumheller, Michael
This paper describes a method by which range data from a sonar or other type of rangefinder can be used to determine the two-dimensional position and orientation of a mobile robot inside a room. The plan of the room is modeled as a list of segments indicating the positions of walls. The method works by extracting straight segments from the range data and examining all hypotheses about pairings between the segments and walls in the model of the room. Inconsistent pairings are discarded efficiently by using local constraints based on distances between walls, angles between walls, and ranges between walls along their normal vectors. These constraints are used to obtain a small set of possible positions, which is further pruned using a test for physical consistency. The approach is extremely tolerant of noise and clutter. Transient objects such as furniture and people need not be included in the room model, and very noisy, low-resolution sensors can be used. The algorithm's performance is demonstrated using the Polaroid Ultrasonic Rangefinder, which is a low-resolution, high-noise sensor.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5625</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Anatomy and Physiology of Gating Retinal Signals in the Mammalian Lateral Geniculate Nucleus</title>
<link>https://hdl.handle.net/1721.1/5624</link>
<description>The Anatomy and Physiology of Gating Retinal Signals in the Mammalian Lateral Geniculate Nucleus
Sherman, S. Murray; Koch, Christof
In the mammalian visual system, the lateral geniculate nucleus is commonly thought to act merely as a relay for the transmission of visual information from the retina to the visual cortex, a relay without significant elaboration in receptive field properties or signal strength. However, many morphological and electrophysiological observations are at odds with this view. In this paper, we will review the different anatomical pathways and biophysical mechanisms possibly implementing a selective gating of visual information flow from the retina to the visual cortex. We will argue that the lateral geniculate nucleus in mammals is one of the earliest sites where selective visual attention operates and where general changes in neuronal excitability as a function of the behavioral states of the animal (for instance, sleep, paradoxical sleep, arousal, etc.) occur.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5624</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Describing Surfaces</title>
<link>https://hdl.handle.net/1721.1/5623</link>
<description>Describing Surfaces
Brady, Michael; Ponce, Jean; Yuille, Alan; Asada, Haruo
This paper continues our work on visual representations of three-dimensional surfaces [Brady and Yuille 1984b]. The theoretical component of our work is a study of classes of surface curves as a source of constraint on the surface on which they lie, and as a basis for describing it. We analyze bounding contours, surface intersections, lines of curvature, and asymptotes. Our experimental work investigates whether the information suggested by our theoretical study can be computed reliably and efficiently. We demonstrate algorithms that compute lines of curvature of a (Gaussian smoothed) surface; determine planar patches and umbilic regions; and extract axes of surfaces of revolution and tube surfaces. We report preliminary results on adapting the curvature primal sketch algorithms of Asada and Brady [1984] to detect and describe surface intersections.
</description>
<pubDate>Tue, 01 Jan 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5623</guid>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simplified Grasping and Manipulation with Dextrous Robot Hands</title>
<link>https://hdl.handle.net/1721.1/5622</link>
<description>Simplified Grasping and Manipulation with Dextrous Robot Hands
Fearing, Ronald S.
A method is presented for stably grasping two-dimensional polygonal objects with a dextrous hand when object models are not available. Basic constraints on object vertex angles are found for feasible grasping with two fingers. Local tactile information can be used to determine the finger motion that will reach feasible grasping locations. With an appropriate choice of finger stiffness, a hand can automatically grasp these objects with two fingers. The bounded slip of a part in a hand is shown to be valuable for adapting the fingers and object to a stable situation. Examples are given to show the ability of this grasping method to accommodate disturbance forces and to perform simple part reorientations and regrasping operations.
</description>
<pubDate>Thu, 01 Nov 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5622</guid>
<dc:date>1984-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collision Detection for Moving Polyhedra</title>
<link>https://hdl.handle.net/1721.1/5621</link>
<description>Collision Detection for Moving Polyhedra
Canny, John
We consider the problem of moving a three dimensional solid object among polyhedral obstacles. The traditional formulation of configuration space for this problem uses three translational parameters and three angles (typically Euler angles), and the constraints between the object and obstacles involve transcendental functions. We show that a quaternion representation of rotation yields constraints which are purely algebraic in a higher-dimensional space. By simple manipulation, the constraints may be projected down into a six dimensional space with no increase in complexity. Using this formulation, we derive an efficient exact intersection test for an object which is translating and rotating among obstacles.
</description>
<pubDate>Mon, 01 Oct 1984 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5621</guid>
<dc:date>1984-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Perspective Projection Invariants</title>
<link>https://hdl.handle.net/1721.1/5620</link>
<description>Perspective Projection Invariants
Verri, Alessandro; Yuille, Alan
An important part of stereo vision consists of finding and matching points in two images which correspond to the same physical element in the scene. We show that zeros of curvature of curves are perspective projection invariants and can therefore be used to find corresponding points. They can be used to help solve the registration problem (Longuet-Higgins, 1982) and to obtain the correct depth when a curve enters the forbidden zone (Krol and van de Grind, 1982). They are also relevant to theories for representing image curves. We consider the stability of these zeros of curvature.
</description>
<pubDate>Sat, 01 Feb 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5620</guid>
<dc:date>1986-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>CREF: An Editing Facility for Managing Structured Text</title>
<link>https://hdl.handle.net/1721.1/5619</link>
<description>CREF: An Editing Facility for Managing Structured Text
Pitman, Kent M.
This paper reports work in progress on an experimental text editor called CREF, the Cross Referenced Editing Facility. CREF deals with chunks of text, called segments, which may have associated features such as keywords or various kinds of links to other segments. Text in CREF is organized into linear collections for normal browsing. The use of summary and cross-reference links in CREF allows the imposition of an auxiliary network structure upon the text which can be useful for "zooming in and out" or "non-local transitions." Although it was designed as a tool for use in complex protocol analysis by a "Knowledge Engineer's Assistant," CREF has many interesting features which should make it suitable for a wide variety of applications, including browsing, program editing, document preparation, and mail reading.
</description>
<pubDate>Fri, 01 Feb 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5619</guid>
<dc:date>1985-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Regularized Solution to Edge Detection</title>
<link>https://hdl.handle.net/1721.1/5618</link>
<description>A Regularized Solution to Edge Detection
Poggio, Tomaso; Voorhees, Harry; Yuille, Alan
We consider edge detection as the problem of measuring and localizing changes of light intensity in the image. As discussed by Torre and Poggio (1984), edge detection, when defined in this way, is an ill-posed problem in the sense of Hadamard. The regularized solution that arises is then the solution to a variational principle. In the case of exact data, one of the standard regularization methods (see Poggio and Torre, 1984) leads to cubic spline interpolation before differentiation. We show that in the case of regularly-spaced data this solution corresponds to a convolution filter, to be applied to the signal before differentiation, which is a cubic spline. In the case of non-exact data, we use another regularization method that leads to a different variational principle. We prove (1) that this variational principle leads to a convolution filter for the problem of one-dimensional edge detection, (2) that the form of this filter is very similar to the Gaussian filter, and (3) that the regularizing parameter λ in the variational principle effectively controls the scale of the filter.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5618</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parsing and Linguistic Explanation</title>
<link>https://hdl.handle.net/1721.1/5617</link>
<description>Parsing and Linguistic Explanation
Berwick, Robert C.; Weinberg, Amy S.
This article summarizes and extends recent  results linking deterministic parsing to  observed "locality principles" in syntax. It also  argues that grammatical theories based on  explicit phrase structure rules are unlikely to  provide comparable explanations of why  natural languages are built the way they are.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5617</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boundaries of Visual Motion</title>
<link>https://hdl.handle.net/1721.1/5616</link>
<description>Boundaries of Visual Motion
Rubin, John M.; Richards, W.A.
A representation of visual motion convenient for recognition should make prominent the qualitative differences among simple motions. We argue that the first stage in such a motion representation is to make explicit boundaries that we define as starts, stops, and force discontinuities. When one of these boundaries occurs in motion, human observers have the subjective impression that some fleeting, significant event has occurred. We go further and hypothesize that one of the subjective motion boundaries is seen if and only if one of our defined boundaries occurs. We enumerate all possible motion boundaries and provide evidence that they are psychologically real.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5616</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>GPSG-Recognition is NP-Hard</title>
<link>https://hdl.handle.net/1721.1/5615</link>
<description>GPSG-Recognition is NP-Hard
Ristad, Eric Sven
Proponents of generalized phrase structure grammar (GPSG) cite its weak context-free generative power as proof of the computational tractability of GPSG-Recognition. Since context-free languages (CFLs) can be parsed in time proportional to the cube of the sentence length, and GPSGs only generate CFLs, it seems plausible that GPSGs can also be parsed in cubic time. This longstanding, widely assumed GPSG "efficient parsability" result is misleading: parsing the sentences of an arbitrary GPSG is likely to be intractable, because a reduction from 3SAT proves that the universal recognition problem for the GPSGs of Gazdar (1981) is NP-hard. Crucially, the time to parse a sentence of a CFL can be the product of sentence length cubed and context-free grammar size squared, and the GPSG grammar can result in an exponentially large set of derived context-free rules. A central object in the 1981 GPSG theory, the metarule, inherently results in an intractable parsing problem, even when severely constrained. The implications for linguistics and natural language parsing are discussed.
</description>
<pubDate>Fri, 01 Mar 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5615</guid>
<dc:date>1985-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Bayesian Estimators for Image Segmentation and Surface Reconstruction</title>
<link>https://hdl.handle.net/1721.1/5614</link>
<description>Optimal Bayesian Estimators for Image Segmentation and Surface Reconstruction
Marroquin, Jose L.
A very fruitful approach to the solution of image segmentation and surface reconstruction tasks is their formulation as estimation problems via the use of Markov random field models and Bayes theory. However, the Maximum a Posteriori (MAP) estimate, which is the one most frequently used, is suboptimal in these cases. We show that for segmentation problems the optimal Bayesian estimator is the maximizer of the posterior marginals, while for reconstruction tasks, the threshold posterior mean has the best possible performance. We present efficient distributed algorithms for approximating these estimates in the general case. Based on these results, we develop a maximum likelihood method that leads to a parameter-free distributed algorithm for restoring piecewise constant images. To illustrate these ideas, the reconstruction of binary patterns is discussed in detail.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5614</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferring 3D Shapes from 2D Codons</title>
<link>https://hdl.handle.net/1721.1/5613</link>
<description>Inferring 3D Shapes from 2D Codons
Richards, Whitman; Koenderink, Jan J.; Hoffman, D.D.
All plane curves can be described at an abstract level by a sequence of five primitive elemental shapes, called "codons", which capture the sequential relations between the singular points of curvature. The codon description provides a basis for enumerating all smooth 2D curves. Let each of these smooth plane curves be considered as the silhouette of an opaque 3D object. Clearly an infinity of 3D objects can generate any one of our "codon" silhouettes. How then can we predict which 3D object corresponds to a given 2D silhouette? To restrict the infinity of choices, we impose three mathematical properties of smooth surfaces plus one simple viewing constraint. The constraint is an extension of the notion of general position, and seems to drive our preferred inferences of 3D shapes, given only the 2D contour.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5613</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recognition and Localization of Overlapping Parts from Sparse Data</title>
<link>https://hdl.handle.net/1721.1/5611</link>
<description>Recognition and Localization of Overlapping Parts from Sparse Data
Grimson, W. Eric L.; Lozano-Perez, Tomas
This paper discusses how sparse local measurements of positions and surface normals may be used to identify and locate overlapping objects. The objects are modeled as polyhedra (or polygons) having up to six degrees of positional freedom relative to the sensors. The approach operates by examining all hypotheses about pairings between sensed data and object surfaces and efficiently discarding inconsistent ones by using local constraints on: distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. The method described here is an extension of a method for recognition and localization of non-overlapping parts previously described in [Grimson and Lozano-Perez 84] and [Gaston and Lozano-Perez 84].
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5611</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Incremental Rigidity Scheme for Recovering Structure from Motion: Position vs. Velocity Based Formulations</title>
<link>https://hdl.handle.net/1721.1/5610</link>
<description>The Incremental Rigidity Scheme for Recovering Structure from Motion: Position vs. Velocity Based Formulations
Grzywacz, Norberto M.; Hildreth, Ellen C.
Perceptual studies suggest that the visual system uses the "rigidity" assumption to recover three-dimensional structure from motion. Ullman (1984) recently proposed a computational scheme, the incremental rigidity scheme, which uses the rigidity assumption to recover the structure of rigid and non-rigid objects in motion. The scheme assumes the input to be discrete positions of elements in motion, under orthographic projection. We present formulations of Ullman's method that use velocity information and perspective projection in the recovery of structure. Theoretical and computer analyses show that the velocity-based formulations provide a rough estimate of structure quickly, but are not robust over an extended time period. The stable long-term recovery of structure requires disparate views of moving objects. Our analysis raises interesting questions regarding the recovery of structure from motion in the human visual system.
</description>
<pubDate>Tue, 01 Oct 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5610</guid>
<dc:date>1985-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prism Trees: An Efficient Representation for Manipulating and Displaying Polyhedra with Many Faces</title>
<link>https://hdl.handle.net/1721.1/5609</link>
<description>Prism Trees: An Efficient Representation for Manipulating and Displaying Polyhedra with Many Faces
Ponce, Jean
Computing surface and/or object intersections is a cornerstone of many algorithms in Geometric Modeling and Computer Graphics, for example Set Operations between solids, or surface Ray Casting display. We present an object-centered, information-preserving, hierarchical representation for polyhedra called the Prism Tree. We use the representation to decompose the intersection algorithms into two steps: the localization of intersections, and their processing. When dealing with polyhedra with many faces (typically more than one thousand), the first step is by far the most expensive. The Prism Tree structure is used to compute this localization step efficiently. A preliminary implementation of the Set Operations and Ray Casting algorithms has been constructed.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5609</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Approach to Automatic Robot Programming</title>
<link>https://hdl.handle.net/1721.1/5608</link>
<description>An Approach to Automatic Robot Programming
Lozano-Perez, Tomas; Brooks, Rodney A.
In this paper we propose an architecture for a new task-level system, which we call TWAIN. Task-level programming attempts to simplify the robot programming process by requiring that the user specify only goals for the physical relationships among objects, rather than the motions needed to achieve those goals. A task-level specification is meant to be completely robot independent; no positions or paths that depend on the robot geometry or kinematics are specified by the user. We have two goals for this paper. The first is to present a more unified treatment of some individual pieces of research in task planning, whose relationship has not previously been described. The second is to provide a new framework for further research in task planning. This is a slightly modified version of a paper that appeared in Proceedings of Solid Modeling by Computers: from Theory to Applications, Research Laboratories Symposium Series, sponsored by General Motors, Warren, Michigan, September 1983.
</description>
<pubDate>Mon, 01 Apr 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5608</guid>
<dc:date>1985-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Redundancy Resolution of Manipulators through Torque Optimization</title>
<link>https://hdl.handle.net/1721.1/5607</link>
<description>Redundancy Resolution of Manipulators through Torque Optimization
Hollerbach, John M.; Suh, Ki C.
Methods for resolving kinematic redundancies  of manipulators by the effect on joint torque  are examined. When the generalized inverse  is formulated in terms of accelerations and  incorporated into the dynamics, the effect of  redundancy resolution on joint torque can be  directly reflected. One method chooses the  joint acceleration null-space vector to  minimize joint torque in a least squares  sense; when the least squares is weighted by  allowable torque range, the joint torques tend  to be kept within their limits. Contrasting  methods employing only the pseudoinverse  with and without weighting by the inertia matrix  are presented. The results show an  unexpected stability problem during long  trajectories for the null-space methods and for  the inertia-weighted pseudoinverse method,  but rarely for the unweighted pseudoinverse  method. Evidently a whiplash action develops  over time that thrusts the endpoint off the  intended path, and extremely high torques are  required to overcome these natural movement  dynamics.
</description>
<pubDate>Wed, 01 Jan 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5607</guid>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Computational Approach to Vision and Motor Control</title>
<link>https://hdl.handle.net/1721.1/5606</link>
<description>The Computational Approach to Vision and Motor Control
Hildreth, Ellen C.; Hollerbach, John M.
Over the past decade it has become increasingly clear that to understand the brain, we must study not only its biochemical and biophysical mechanisms and its outward perceptual and physical behavior, but also the brain at a theoretical level that investigates the computations necessary to perform its functions. The control of movements such as reaching, grasping and manipulating objects requires complex mechanisms that elaborate information from many sensors and control the forces generated by a large number of muscles. The act of seeing, which intuitively seems so simple and effortless, requires information processing whose complexity we are just beginning to grasp. This paper discusses a particular view of the computational approach to the study of vision and motor tasks, and its relevance to experimental neuroscience.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5606</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of Joint-Interpolated Arm Movements</title>
<link>https://hdl.handle.net/1721.1/5605</link>
<description>Characterization of Joint-Interpolated Arm Movements
Hollerbach, John M.; Atkeson, Christopher G.
Two possible sets of planning variables for human arm movement are joint angles and hand position. Although one might expect these possibilities to be mutually exclusive, recently an apparently contradictory set of data has appeared that indicates straight-line trajectories in both hand space and joint space at the same time. To assist in distinguishing between these viewpoints applied to the same data, we have theoretically characterized the set of trajectories derivable from a joint-based planning strategy and have compared them to experimental measurements. We conclude that the apparent straight lines in joint space happen to be artifacts of movement kinematics near the workspace boundary.
</description>
<pubDate>Sat, 01 Jun 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5605</guid>
<dc:date>1985-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimation of Inertial Parameters of Rigid Body Links of Manipulators</title>
<link>https://hdl.handle.net/1721.1/5604</link>
<description>Estimation of Inertial Parameters of Rigid Body Links of Manipulators
An, Chae H.; Atkeson, Christopher G.; Hollerbach, John M.
A method of estimating the mass, the location of the center of mass, and the moments of inertia of each rigid body link of a robot during general manipulator movement is presented. The algorithm is derived from the Newton-Euler equations, and uses measurements of the joint torques as well as the measurement and calculation of the kinematics of the manipulator while it is moving. The identification equations are linear in the desired unknown parameters, and a modified least squares algorithm is used to obtain estimates of these parameters. Some of the parameters, however, are not identifiable due to restricted motion of proximal links and the lack of full force/torque sensing. The algorithm was implemented on the MIT Serial Link Direct Drive Arm. A good match was obtained between joint torques predicted from the estimated parameters and the joint torques computed from motor currents.
</description>
<pubDate>Sat, 01 Feb 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5604</guid>
<dc:date>1986-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shape from Shading, Occlusion and Texture</title>
<link>https://hdl.handle.net/1721.1/5603</link>
<description>Shape from Shading, Occlusion and Texture
Yuille, A.L.
Shape from Shading, Occlusion and Texture are three important sources of depth information. We review and summarize work done on these modules.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5603</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Multiple Moving Objects</title>
<link>https://hdl.handle.net/1721.1/5602</link>
<description>On Multiple Moving Objects
Erdmann, Michael; Lozano-Perez, Tomas
This paper explores the motion planning  problem for multiple moving objects. The  approach taken consists of assigning  priorities to the objects, then planning  motions one object at a time. For each moving  object, the planner constructs a configuration  space-time that represents the time-varying  constraints imposed on the moving object by  the other moving and stationary objects. The  planner represents this space-time  approximately, using two-dimensional slices.  The space-time is then searched for a  collision-free path. The paper demonstrates  this approach in two domains. One domain  consists of translating planar objects; the  other domain consists of two-link planar  articulated arms.
</description>
<pubDate>Thu, 01 May 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5602</guid>
<dc:date>1986-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning a Color Algorithm from Examples</title>
<link>https://hdl.handle.net/1721.1/5601</link>
<description>Learning a Color Algorithm from Examples
Hurlbert, Anya; Poggio, Tomaso
We show that a color algorithm capable of separating illumination from reflectance in a Mondrian world can be learned from a set of examples. The learned algorithm is equivalent to filtering the image data---in which reflectance and illumination are mixed---through a center-surround receptive field in individual chromatic channels. The operation resembles the "retinex" algorithm recently proposed by Edwin Land. This result is a specific instance of our earlier results that a standard regularization algorithm can be learned from examples. It illustrates that the natural constraints needed to solve a problem in inverse optics can be extracted directly from a sufficient set of input data and the corresponding solutions. The learning procedure has been implemented as a parallel algorithm on the Connection Machine System.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5601</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Revised Revised Report on Scheme or An Uncommon Lisp</title>
<link>https://hdl.handle.net/1721.1/5600</link>
<description>The Revised Revised Report on Scheme or An Uncommon Lisp
Clinger, William
Data and procedures and the values they amass, Higher-order functions to combine and mix and match, Objects with their local state, the messages they pass, A property, a package, the control point for a catch--- In the Lambda Order they are all first-class. One thing to name them all, one thing to define them, one thing to place them in environments and bind them, in the Lambda Order they are all first-class. Keywords: SCHEME, LISP, functional programming, computer languages.
</description>
<pubDate>Thu, 01 Aug 1985 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5600</guid>
<dc:date>1985-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On Dynamic Models of Robot Force Control</title>
<link>https://hdl.handle.net/1721.1/5599</link>
<description>On Dynamic Models of Robot Force Control
Eppinger, Steven D.; Seering, Warren P.
For precise robot control, endpoint compliance strategies utilize feedback from a force sensor located near the tool/workpiece interface. Such endpoint force control systems have been observed in the laboratory to achieve only unsatisfactory closed-loop performance. This paper discusses the particular dynamic properties of robot systems which can lead to instability and limit performance. A series of lumped-parameter models is developed in an effort to predict the closed-loop dynamics of a force-controlled single-axis arm. The models include some effects of robot structural dynamics, sensor compliance, and workpiece dynamics. The qualitative analysis shows that the robot dynamics contribute to force-controlled instability. Recommendations are made for models to be used in control system design.
</description>
<pubDate>Tue, 01 Jul 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5599</guid>
<dc:date>1986-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional Abstraction From Structure in VLSI Simulation Models</title>
<link>https://hdl.handle.net/1721.1/5598</link>
<description>Functional Abstraction From Structure in VLSI Simulation Models
Lathrop, Richard H.; Hall, Robert J.; Kirk, Robert S.
High-level functional (or behavioral)  simulation models are difficult, time-consuming, and expensive to develop. We  report on a method for automatically  generating the program code for a high-level  functional simulation model. The high-level  model is produced directly from the program  code for the circuit components' functional  models and a netlist description of their  connectivity. A prototype has been  implemented in LISP for the SIMMER  functional simulator.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5598</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Algorithms for Computer Vision on the Connection Machine</title>
<link>https://hdl.handle.net/1721.1/5597</link>
<description>Parallel Algorithms for Computer Vision on the Connection Machine
Little, James J.
The Connection Machine is a fine-grained  parallel computer having up to 64K  processors. It supports both local  communication among the processors, which  are situated in a two-dimensional mesh, and  high-bandwidth communication among  processors at arbitrary locations, using a  message-passing network. We present  solutions to a set of Image Understanding  problems for the Connection Machine. These  problems were proposed by DARPA to  evaluate architectures for Image  Understanding systems, and are intended to  comprise a representative sample of  fundamental procedures to be used in Image  Understanding. The solutions on the  Connection Machine embody general  methods for filtering images, determining  connectivity among image elements,  determining spatial relations of image  elements, and computing graph properties,  such as matchings and shortest paths.
</description>
<pubDate>Sat, 01 Nov 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5597</guid>
<dc:date>1986-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ill-Posed Problems in Early Vision</title>
<link>https://hdl.handle.net/1721.1/5596</link>
<description>Ill-Posed Problems in Early Vision
Bertero, Mario; Poggio, Tomaso; Torre, Vincent
The first processing stage in computational  vision, also called early vision, consists in  decoding 2D images in terms of properties of  3D surfaces. Early vision includes problems  such as the recovery of motion and optical  flow, shape from shading, surface  interpolation, and edge detection. These are  inverse problems, which are often ill-posed or  ill-conditioned. We review here the relevant  mathematical results on ill-posed and ill-conditioned problems and introduce the  formal aspects of regularization theory in the  linear and non-linear case. More general  stochastic regularization methods are also  introduced. Specific topics in early vision and  their regularization are then analyzed  rigorously, characterizing existence,  uniqueness, and stability of solutions.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5596</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interaction of Different Modules in Depth Perception: Stereo and Shading</title>
<link>https://hdl.handle.net/1721.1/5595</link>
<description>Interaction of Different Modules in Depth Perception: Stereo and Shading
Bulthoff, Heinrich H.; Mallot, Hanspeter A.
A method has been developed to measure the perceived depth of computer-generated images of simple solid objects. Computer graphic techniques allow for independent control of different depth cues (stereo, shading, and texture) and thereby enable the investigator to study psychophysically the interaction of modules for depth perception. Accumulation of information from shading and stereo and vetoing of depth from shading by edge information have been found. Cooperativity and other types of interactions are discussed. If intensity edges are missing, as in a smooth-shaded surface, the image intensities themselves could be used for stereo matching. The results are compared with computer vision algorithms for both single modules and their integration for 3D vision.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5595</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Multiple Representation Approach to Understanding the Time Behavior of Digital Circuits</title>
<link>https://hdl.handle.net/1721.1/5594</link>
<description>A Multiple Representation Approach to Understanding the Time Behavior of Digital Circuits
Hall, Robert J.; Lathrop, Richard H.; Kirk, Robert S.
We put forth a multiple representation  approach to deriving the behavioral model of a  digital circuit automatically from its structure  and the behavioral simulation models of its  components. One representation supports  temporal reasoning for composition and  amplification, another supports simulation  and a third helps to partition the translation  problem. A working prototype, FUNSTRUX, is  described.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5594</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using Program Transformation to Improve Program Translation</title>
<link>https://hdl.handle.net/1721.1/5593</link>
<description>Using Program Transformation to Improve Program Translation
Kennedy, Thomas R., III
Direct, construct by construct translation from  one high level language to another often  produces convoluted, unnatural,  and unreadable results, particularly when the  source and target languages support  different models of programming. A more  readable and natural translation can be  obtained by augmenting the translator with  a program transformation system.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5593</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Fully Abstract Semantics for Event-Based Simulation</title>
<link>https://hdl.handle.net/1721.1/5592</link>
<description>A Fully Abstract Semantics for Event-Based Simulation
Hall, Robert J.
This paper shows that, provided circuits  contain no zero-delay loops, a tight  relationship, full abstraction, exists between a  natural event-based operational semantics  for circuits and a natural  denotational semantics for circuits based on  causal functions on value timelines. The  paper also discusses what goes wrong if  zero-delay loops are allowed, and illustrates  the application of this semantic relationship  to modeling questions.
</description>
<pubDate>Fri, 01 May 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5592</guid>
<dc:date>1987-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Face Representation in Cortex: Studies Using a Simple and Not So Special Model</title>
<link>https://hdl.handle.net/1721.1/5572</link>
<description>Face Representation in Cortex: Studies Using a Simple and Not So Special Model
Rosen, Ezra
The face inversion effect has been widely documented  as an effect of the uniqueness of face processing. Using a computational  model, we show that the face inversion effect is a byproduct of expertise  with respect to the face object class. In simulations using HMAX, a  hierarchical, shape based model, we show that the magnitude of the  inversion effect is a function of the specificity of the representation. Using  many, sharply tuned units, an ``expert'' has a large inversion effect. On the other hand, if fewer, broadly  tuned units are used, the expertise is lost, and this ``novice'' has a small inversion effect. As the size of the inversion effect  is a product of the representation, not the object class, given the right  training we can create experts and novices in any object class. Using the same representations as with  faces, we create experts and novices for cars. We also measure the  feasibility of a view-based model for recognition of rotated objects  using HMAX. Using faces, we show that transfer of learning to novel views is possible.  Given only one training view, the view-based model  can recognize a face at a new orientation via interpolation from the views to  which it had been tuned. Although the model can generalize well to upright faces, inverted  faces yield poor performance because the features change differently  under rotation.
</description>
<pubDate>Thu, 05 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5572</guid>
<dc:date>2003-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>A Biological Model of Object Recognition with Feature Learning</title>
<link>https://hdl.handle.net/1721.1/5571</link>
<description>A Biological Model of Object Recognition with Feature Learning
Louie, Jennifer
Previous biological models of object recognition in cortex have been evaluated using idealized scenes and have hard-coded features, such as the HMAX model by Riesenhuber and Poggio [10]. Because HMAX uses the same set of features for all object classes, it does not perform well in the task of detecting a target object in clutter. This thesis presents a new model that integrates learning of object-specific features with the HMAX model. The new model performs better than the standard HMAX and comparably to a computer vision system on face detection. Results from experimenting with unsupervised learning of features and the use of a biologically plausible classifier are presented.
</description>
<pubDate>Sun, 01 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5571</guid>
<dc:date>2003-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligent Market-Making in Artificial Financial Markets</title>
<link>https://hdl.handle.net/1721.1/5570</link>
<description>Intelligent Market-Making in Artificial Financial Markets
Das, Sanmay
This thesis describes and evaluates a market-making  algorithm for setting prices in financial markets with asymmetric  information, and analyzes the properties of artificial markets in which the  algorithm is used. The core of our algorithm is a technique for  maintaining an online probability density estimate of the underlying  value of a stock. Previous theoretical work on market-making has  led to price-setting equations for which solutions cannot be  achieved in practice, whereas empirical work on algorithms for  market-making has focused on sets of heuristics and rules that lack  theoretical justification. The algorithm presented in this thesis is  theoretically justified by results in finance, and at the same time  flexible enough to be easily extended by incorporating modules for  dealing with considerations like portfolio risk and competition from  other market-makers. We analyze the performance of our  algorithm experimentally in artificial markets with different  parameter settings and find that many reasonable real-world properties  emerge. For example, the spread increases in response to  uncertainty about the true value of a stock, average spreads tend to be higher  in more volatile markets, and market-makers with lower  average spreads perform better in environments with multiple competitive market-makers. In addition, the time series data generated by simple  markets populated with market-makers using our algorithm replicate  properties of real-world financial time series, such as volatility  clustering and the fat-tailed nature of return distributions, without the  need to specify explicit models for opinion propagation and  herd behavior in the trading crowd.
</description>
<pubDate>Sun, 01 Jun 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5570</guid>
<dc:date>2003-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Man-Machine Interfaces: Combining Top-down Constraints with Bottom-up Learning in Facial Analysis</title>
<link>https://hdl.handle.net/1721.1/5569</link>
<description>Towards Man-Machine Interfaces: Combining Top-down Constraints with Bottom-up Learning in Facial Analysis
Kumar, Vinay P.
This thesis proposes a methodology for the  design of man-machine interfaces by combining top-down and  bottom-up processes in vision. From a computational perspective, we  propose that the scientific-cognitive question of combining top-down and bottom-up knowledge is similar to the engineering  question of labeling a training set in a supervised learning problem.  We investigate these questions in the realm  of facial analysis. We propose the use of a linear morphable model  (LMM) for representing top-down structure and use it to model  various facial variations such as mouth shapes and expression, the pose of  faces and visual speech (visemes). We apply a supervised learning  method based on support vector machine (SVM) regression for  estimating the parameters of LMMs directly from pixel-based representations of  faces. We combine these methods for designing new, more self-contained systems for recognizing facial expressions, estimating facial pose and  for recognizing visemes.
</description>
<pubDate>Sun, 01 Sep 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5569</guid>
<dc:date>2002-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Importance Sampling for Reinforcement Learning with Multiple Objectives</title>
<link>https://hdl.handle.net/1721.1/5568</link>
<description>Importance Sampling for Reinforcement Learning with Multiple Objectives
Shelton, Christian Robert
This thesis considers three complications that arise from applying reinforcement learning to a real-world application. In the process of using reinforcement learning to build an adaptive electronic market-maker, we find the sparsity of data, the partial observability of the domain, and the multiple objectives of the agent to cause serious problems for existing reinforcement learning algorithms.  We employ importance sampling (likelihood ratios) to achieve good performance in partially observable Markov decision processes with few data. Our importance sampling estimator requires no knowledge about the environment and places few restrictions on the method of collecting data. It can be used efficiently with reactive controllers, finite-state controllers, or policies with function approximation. We present theoretical analyses of the estimator and incorporate it into a reinforcement learning algorithm.  Additionally, this method provides a complete return surface which can be used to balance multiple objectives dynamically. We demonstrate the need for multiple goals in a variety of applications and natural solutions based on our sampling method. The thesis concludes with example results from employing our algorithm to the domain of automated electronic market-making.
</description>
<pubDate>Wed, 01 Aug 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5568</guid>
<dc:date>2001-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three-Dimensional Correspondence</title>
<link>https://hdl.handle.net/1721.1/5567</link>
<description>Three-Dimensional Correspondence
Shelton, Christian R.
This paper describes the problem of three-dimensional object correspondence and presents an algorithm for matching two three-dimensional colored surfaces using polygon reduction and the minimization of an energy function. At the core of this algorithm is a novel data-dependent multi-resolution pyramid for polygonal surfaces. The algorithm is general to correspondence between any two manifolds of the same dimension embedded in a higher dimensional space. Results demonstrating correspondences between various objects are presented and a method for incorporating user input is also detailed.
</description>
<pubDate>Tue, 01 Dec 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5567</guid>
<dc:date>1998-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Trainable System for Object Detection in Images and Video Sequences</title>
<link>https://hdl.handle.net/1721.1/5566</link>
<description>A Trainable System for Object Detection in Images and Video Sequences
Papageorgiou, Constantine P.
This thesis presents a general, trainable system for object detection in static images and video sequences. The core system finds a certain class of objects in static images of completely unconstrained, cluttered scenes without using motion, tracking, or handcrafted models and without making any assumptions about the scene structure or the number of objects in the scene. The system uses a set of positive and negative example images as training data, transforms the pixel images to a Haar wavelet representation, and uses a support vector machine classifier to learn the difference between in-class and out-of-class patterns. To detect objects in out-of-sample images, we do a brute-force search over all the subwindows in the image. This system is applied to face, people, and car detection with excellent results. For our extensions to video sequences, we augment the core static detection system in several ways: 1) extending the representation to five frames, 2) implementing an approximation to a Kalman filter, and 3) modeling detections in an image as a density and propagating this density through time according to measured features. In addition, we present a real-time version of the system that is currently running in a DaimlerChrysler experimental vehicle. As part of this thesis, we also present a system that, instead of detecting full patterns, uses a component-based approach. We find it to be more robust to occlusions, rotations in depth, and severe lighting conditions for people detection than the full-body version. We also experiment with various other representations, including pixels and principal components, and show results that quantify how the number of features, color, and gray level affect performance.
</description>
<pubDate>Mon, 01 May 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5566</guid>
<dc:date>2000-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Massively Parallel Implementations of Theories for Apparent Motion</title>
<link>https://hdl.handle.net/1721.1/5518</link>
<description>Massively Parallel Implementations of Theories for Apparent Motion
Grzywacz, Norberto; Yuille, Alan
We investigate two ways of solving the correspondence problem for motion using the assumptions of minimal mapping and rigidity. Massively parallel analog networks are designed to implement these theories. Their effectiveness is demonstrated with mathematical proofs and computer simulations. We discuss relevant psychophysical experiments.
</description>
<pubDate>Mon, 01 Jun 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5518</guid>
<dc:date>1987-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boolean Classes</title>
<link>https://hdl.handle.net/1721.1/5517</link>
<description>Boolean Classes
McAllester, David; Zabih, Ramin
Object-oriented programming languages all involve the notions of class and object. We extend the notion of class so that any Boolean combination of classes is also a class. Boolean classes allow greater precision and conciseness in naming the class of objects governed by a particular method. A class can be viewed as a predicate which is either true or false of any given object. Unlike predicates, however, classes have an inheritance hierarchy which is known at compile time. Boolean classes extend the notion of class, making classes more like predicates, while preserving the compile-time computable inheritance hierarchy.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5517</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Requirements Apprentice: On the Boundary Between Informal and Formal Specifications</title>
<link>https://hdl.handle.net/1721.1/5516</link>
<description>Toward a Requirements Apprentice: On the Boundary Between Informal and Formal Specifications
Rich, Charles; Waters, Richard C.
Requirements acquisition is one of the most important and least well supported parts of the software development process. The Requirements Apprentice (RA) will assist a human analyst in the creation and modification of software requirements. Unlike current requirements analysis tools, which assume a formal description language, the focus of the RA is on the boundary between informal and formal specifications. The RA is intended to support the earliest phases of creating a requirement, in which incompleteness, ambiguity, and contradiction are inevitable features. From an artificial intelligence perspective, the central problem the RA faces is one of knowledge acquisition. It has to develop a coherent internal representation from an initial set of disorganized statements. To do so, the RA will rely on a variety of techniques, including dependency-directed reasoning, hybrid knowledge representation, and the reuse of common forms (clichés). The Requirements Apprentice is being developed in the context of the Programmer's Apprentice project, whose overall goal is the creation of an intelligent assistant for all aspects of software development.
</description>
<pubDate>Tue, 01 Jul 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5516</guid>
<dc:date>1986-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computations in the Vertebrate Retina: Gain Enhancement, Differentiation and Motion Discrimination</title>
<link>https://hdl.handle.net/1721.1/5515</link>
<description>Computations in the Vertebrate Retina: Gain Enhancement, Differentiation and Motion Discrimination
Koch, Christof; Poggio, Tomaso; Torre, Vincent
The vertebrate retina, which provides the visual input to the brain and its main interface with the outside world, is a very attractive model system for approaching the question of the information processing role of biological mechanisms of nerve cells. It is as yet impossible to provide a complete circuit diagram of the retina, but it is now possible to identify a few simple computations that the retina performs and to relate them to specific biophysical mechanisms and circuit elements. In this paper we consider three operations carried out by most retinae: amplification, temporal differentiation, and computation of the direction of motion of visual patterns.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5515</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Attention in Brains and Computers</title>
<link>https://hdl.handle.net/1721.1/5514</link>
<description>Visual Attention in Brains and Computers
Hurlbert, Anya; Poggio, Tomaso
Existing computer programs designed to perform visual recognition of objects suffer from a basic weakness: the inability to spotlight regions in the image that potentially correspond to objects of interest. The brain's mechanisms of visual attention, elucidated by psychophysicists and neurophysiologists, may suggest a solution to the computer's problem of object recognition.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5514</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Regularization Theory and Shape Constraints</title>
<link>https://hdl.handle.net/1721.1/5513</link>
<description>Regularization Theory and Shape Constraints
Verri, Alessandro; Poggio, Tomaso
Many problems of early vision are ill-posed; to recover unique, stable solutions, regularization techniques can be used. These techniques lead to meaningful results, provided that solutions belong to suitable compact sets. Often some additional constraints on the shape or the behavior of the possible solutions are available. This note discusses which of these constraints can be embedded in the classic theory of regularization, and how, in order to improve the quality of the recovered solution. Connections with mathematical programming techniques are also discussed. In conclusion, regularization of early vision problems may be improved by the use of some constraints on the shape of the solution (such as monotonicity and upper and lower bounds), when available.
</description>
<pubDate>Mon, 01 Sep 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5513</guid>
<dc:date>1986-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Analysis of Visual Motion: From Computational Theory to Neuronal Mechanisms</title>
<link>https://hdl.handle.net/1721.1/5512</link>
<description>The Analysis of Visual Motion: From Computational Theory to Neuronal Mechanisms
Hildreth, Ellen C.; Koch, Christof
This paper reviews a number of aspects of visual motion analysis in biological systems from a computational perspective. We illustrate the kinds of insights that have been gained through computational studies and how these observations can be integrated with experimental studies from psychology and the neurosciences to understand the particular computations used by biological systems to analyze motion. The particular areas of motion analysis that we discuss include early motion detection and measurement, the optical flow computation, motion correspondence, the detection of motion discontinuities, and the recovery of three-dimensional structure from motion.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5512</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stereo and Eye Movement</title>
<link>https://hdl.handle.net/1721.1/5511</link>
<description>Stereo and Eye Movement
Geiger, Davi; Yuille, Alan
We describe a method to solve the stereo correspondence problem using controlled eye (or camera) movements. These eye movements essentially supply additional image frames which can be used to constrain the stereo matching. Because the eye movements are small, traditional methods of stereo with multiple frames will not work. We develop an alternative approach using a systematic analysis to define a probability distribution for the errors. Our matching strategy then matches the most probable points first, thereby reducing the ambiguity for the remaining matches. We demonstrate this algorithm with several examples.
</description>
<pubDate>Fri, 01 Jan 1988 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5511</guid>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Motion Field and Optical Flow: Qualitative Properties</title>
<link>https://hdl.handle.net/1721.1/5510</link>
<description>Motion Field and Optical Flow: Qualitative Properties
Verri, Alessandro; Poggio, Tomaso
In this paper we show that the optical flow, a 2D field that can be associated with the variation of the image brightness pattern, and the 2D motion field, the projection on the image plane of the 3D velocity field of a moving scene, are in general different, unless very special conditions are satisfied. The optical flow, therefore, is ill-suited for computing structure from motion and for reconstructing the 3D velocity field, problems that require an accurate estimate of the 2D motion field. We then suggest a different use of the optical flow. We argue that stable qualitative properties of the 2D motion field give useful information about the 3D velocity field and the 3D structure of the scene, and that they can usually be obtained from the optical flow. To support this approach we show how the (smoothed) optical flow and 2D motion field, interpreted as vector fields tangent to flows of planar dynamical systems, may have the same qualitative properties from the point of view of the theory of structural stability of dynamical systems.
</description>
<pubDate>Mon, 01 Dec 1986 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5510</guid>
<dc:date>1986-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Obviously Synchronizable Series Expressions: Part I: User's Manual for the OSS Macro Package</title>
<link>https://hdl.handle.net/1721.1/5509</link>
<description>Obviously Synchronizable Series Expressions: Part I: User's Manual for the OSS Macro Package
Waters, Richard C.
The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently. A Common Lisp macro package (OSS) has been implemented which supports a restricted class of series expressions, obviously synchronizable series expressions, which can be evaluated very efficiently by automatically converting them into loops. Using this macro package, programmers can obtain the advantages of expressing computations as series expressions without incurring any run-time overhead.
</description>
<pubDate>Thu, 01 Oct 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5509</guid>
<dc:date>1987-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self Calibration of Motion and Stereo Vision for Mobile Robot Navigation</title>
<link>https://hdl.handle.net/1721.1/5508</link>
<description>Self Calibration of Motion and Stereo Vision for Mobile Robot Navigation
Brooks, Rodney A.; Flynn, Anita M.; Marill, Thomas
We report on experiments with a mobile robot using one vision process (forward motion vision) to calibrate another (stereo vision) without resorting to any external units of measurement. Both are calibrated to a velocity dependent coordinate system which is natural to the task of obstacle avoidance. The foundations of these algorithms, in a world of perfect measurement, are quite elementary. The contribution of this work is to make them noise tolerant while remaining simple computationally. Both the algorithms and the calibration procedure are easy to implement and have shallow computational depth, making them (1) run at reasonable speed on moderate uni-processors, (2) appear practical to run continuously, maintaining an up-to-the-second calibration on a mobile robot, and (3) appear to be good candidates for massively parallel implementations.
</description>
<pubDate>Sat, 01 Aug 1987 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5508</guid>
<dc:date>1987-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical Learning: Stability is Sufficient for Generalization and Necessary and Sufficient for Consistency of Empirical Risk Minimization</title>
<link>https://hdl.handle.net/1721.1/5507</link>
<description>Statistical Learning: Stability is Sufficient for Generalization and Necessary and Sufficient for Consistency of Empirical Risk Minimization
Mukherjee, Sayan; Niyogi, Partha; Poggio, Tomaso; Rifkin, Ryan
Solutions of learning problems by Empirical Risk Minimization (ERM) need to be consistent, so that they may be predictive. They also need to be well-posed, so that they can be used robustly. We show that a statistical form of well-posedness, defined in terms of the key property of L-stability, is necessary and sufficient for consistency of ERM.
revised July 2003
</description>
<pubDate>Sun, 01 Dec 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/5507</guid>
<dc:date>2002-12-01T00:00:00Z</dc:date>
</item>
</channel>
</rss>
